psycholinguistics | Linguistics

Telegram channel psycholinguistics - Psycholinguistics


Here we share reviews and articles about Psycholinguistics and other relevant fields.



Psycholinguistics

#sound #language #common #research #researcher #combining_different_sounds #human_says #human #humanity #cognitive_elements #learned #chimpanzee #monkey #team #new #tool #element #university #combination #ran #away #year #signal #ago
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2020/10/201021180740.htm



Psycholinguistics

👉🏽 Monkeys have the vocal tools but not the brain power for language

"Even if this finding only applies to macaque monkeys, it would still debunk the idea that it's the anatomy that limits speech in nonhumans," said researcher Asif Ghazanfar.
@psycholinguistics
PRINCETON, N.J., Dec. 9 (UPI) -- The vocal tracts of macaques, a group of Old World monkeys, are ready for speech. Their brains are not. That's the takeaway from a new paper published in the journal Science Advances.
The research suggests that cognitive differences between humans and other animals, not vocal adaptations, explain the emergence of language.
"Now nobody can say that it's something about the vocal anatomy that keeps monkeys from being able to speak -- it has to be something in the brain," Asif Ghazanfar, a professor of psychology at Princeton University, said in a news release. "Even if this finding only applies to macaque monkeys, it would still debunk the idea that it's the anatomy that limits speech in nonhumans."
"Now, the interesting question is, what is it in the human brain that makes it special?" Ghazanfar asked.
Scientists arrived at their conclusion after an in-depth study of the macaque vocal tract. Researchers used X-rays to measure the movement of the tongue, lips and larynx as macaque specimens vocalized. Scientists designed a model to simulate the range of vocalizations made possible by the monkey's vocal tools.
Simulations showed macaques are physically capable of making vowel sounds, and could vocalize full sentences if they possessed the necessary brain circuitry.
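The study's own model was built from X-ray-derived vocal tract configurations, but the underlying acoustics can be illustrated with the textbook uniform-tube approximation: a tract of length L, closed at the glottis and open at the lips, resonates at F_n = (2n - 1) * c / (4L). A minimal Python sketch, with illustrative round-number tract lengths rather than measurements from the paper:

```python
# Quarter-wavelength resonator ("uniform tube") approximation of a
# vocal tract closed at the glottis and open at the lips:
# F_n = (2n - 1) * c / (4 * L).
# A textbook illustration, NOT the articulatory model used in the paper.

C = 35000.0  # speed of sound in warm, humid air, cm/s

def formants(tract_length_cm, n_formants=3):
    """First few resonances (formants) of a uniform tube."""
    return [(2 * n - 1) * C / (4.0 * tract_length_cm)
            for n in range(1, n_formants + 1)]

# Illustrative lengths: roughly 17 cm for an adult human, shorter for a macaque.
for species, length_cm in [("human", 17.0), ("macaque", 9.0)]:
    f1, f2, f3 = formants(length_cm)
    print(f"{species:8s} F1={f1:5.0f} Hz  F2={f2:5.0f} Hz  F3={f3:5.0f} Hz")
```

The published simulations addressed the richer question of how far such resonances can be shifted as the tract changes shape, i.e. the vowel space a macaque could in principle reach.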
Human and macaque lineages diverged more than 40 million years ago. Chimpanzee and human lineages separated more recently, between 7 and 13 million years ago. Comparing the brains of Old World monkeys to chimps may help scientists understand how the cognition necessary for language first emerged.
"The paper opens whole new doors for finding the key to the uniqueness of humans' unparalleled language ability," said Laurie Santos, a psychology professor at Yale University who did not participate in the study. "If a species as old as a macaque has a vocal tract capable of speech, then we really need to find the reason that this didn't translate for later primates into the kind of speech sounds that humans produce. I think that means we're in for some exciting new answers soon."
#language_evolution #monkey #vocal_tract #cognition
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.upi.com/Science_News/2016/12/09/Monkeys-have-the-vocal-tools-but-not-the-brain-power-for-language/4091481315567/?ur3=1


Psycholinguistics

📚 Mari Tervaniemi, Vesa Putkinen, Peixin Nie, Cuicui Wang, Bin Du, Jing Lu, Shuting Li, Benjamin Ultan Cowley, Tuisku Tammi, Sha Tao. Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference? Cerebral Cortex, 2021; DOI: 10.1093/cercor/bhab194
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽



Psycholinguistics

The work was supported by the Science of Learning Institute at Johns Hopkins University, and the Dingwall Foundation Dissertation Fellowship in the Cognitive, Clinical, and Neural Foundations of Language.
#group #wiley #writing #write #science #motor #adult #foundation #handwriting_says #author #university #writer #hand #alphabet #video #lesson #rapp #new #true #learned #learn #learning #greensboro #carolina #dissertation #hopkins
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/07/210708111508.htm



Psycholinguistics

👉🏽 Vocal music boosts the recovery of language functions after stroke

Listening to vocal music is a simple and cost-efficient way of promoting recovery and brain health after a stroke.
@psycholinguistics
A study conducted at the University of Helsinki and the Turku University Hospital Neurocenter compared the effect of listening to vocal music, instrumental music and audiobooks on the structural and functional recovery of the language network of patients who had suffered an acute stroke. In addition, the study investigated the links between such changes and language recovery during a three-month follow-up period. The study was published in the eNeuro journal.
Based on the findings, listening to vocal music improved the recovery of the structural connectivity of the language network in the left frontal lobe compared to listening to audiobooks. These structural changes correlated with the recovery of language skills.
"For the first time, we were able to demonstrate that the positive effects of vocal music are related to the structural and functional plasticity of the language network. This expands our understanding of the mechanisms of action of music-based neurological rehabilitation methods," says Postdoctoral Researcher Aleksi Sihvonen.
Listening to music supports other rehabilitation
Aphasia, a language impairment resulting from a stroke, causes considerable suffering to patients and their families. Current therapies help in the rehabilitation of language impairments, but the results vary and the necessary rehabilitation is often not available to a sufficient degree and early enough.
"Listening to vocal music can be considered a measure that enhances conventional forms of rehabilitation in healthcare. Such activity can be easily, safely and efficiently arranged even in the early stages of rehabilitation," Sihvonen says.
According to Sihvonen, listening to music could be used as a cost-efficient boost to normal rehabilitation, or for rehabilitating patients with mild speech disorders when other rehabilitation options are scarce.
After a disturbance of the cerebral circulation, the brain needs stimulation to recover as well as possible. This is the goal of conventional rehabilitation methods as well.
"Unfortunately, a lot of the time spent in hospital is not stimulating. At these times, listening to music could serve as an additional and sensible rehabilitation measure that can have a positive effect on recovery, improving the prognosis," Sihvonen adds.
#rehabilitation #rehabilitating #sihvonen #music #language #efficiently #say #conventional #study #speech #need #listening #recovery #based #current_therapies #change #structural
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/07/210709104224.htm


Psycholinguistics

📚 Manuel Bohn, Michael Henry Tessler, Megan Merrick, Michael C. Frank. How young children integrate information sources to infer the meaning of words. Nature Human Behaviour, 2021; DOI: 10.1038/s41562-021-01145-1
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 How children integrate information

Researchers use a computer model to explain how children integrate information during word learning.
@psycholinguistics
"We know that children use a lot of different information sources in their social environment, including their own knowledge, to learn new words. But the picture that emerges from the existing research is that children have a bag of tricks that they can use," says Manuel Bohn, a researcher at the Max Planck Institute for Evolutionary Anthropology.
For example, if you show a child an object they already know -- say a cup -- as well as an object they have never seen before, the child will usually think that a word they never heard before belongs with the new object. Why? Children use information in the form of their existing knowledge of words (the thing you drink out of is called a "cup") to infer that the object that doesn't have a name goes with the name that doesn't have an object. Other information comes from the social context: children remember past interactions with a speaker to find out what they are likely to talk about next.
"But in the real world, children learn words in complex social settings in which more than just one type of information is available. They have to use their knowledge of words while interacting with a speaker. Word learning always requires integrating multiple, different information sources," Bohn continues. An open question is how children combine different, sometimes even conflicting, sources of information.
Predictions by a computer program
In a new study, a team of researchers from the Max Planck Institute for Evolutionary Anthropology, MIT, and Stanford University takes on this issue. In a first step, they conducted a series of experiments to measure children's sensitivity to different information sources. Next, they formulated a computational cognitive model which details the way that this information is integrated.
"You can think of this model as a little computer program. We input children's sensitivity to different information, which we measure in separate experiments, and then the program simulates what should happen if those information sources are combined in a rational way. The model spits out predictions for what should happen in hypothetical new situations in which these information sources are all available," explains Michael Henry Tessler, one of the lead-authors of the study.
In a final step, the researchers turned these hypothetical situations into real experiments. They collected data with two- to five-year-old children to test how well the predictions from the model line up with real-world data. Bohn sums up the results: "It is remarkable how well the rational model of information integration predicted children's actual behavior in these new situations. It tells us we are on the right track in understanding from a mathematical perspective how children learn language."
Language learning as a social inference problem
How does the model work? The algorithm that processes the different information sources and integrates them is inspired by decades of research in philosophy, developmental psychology, and linguistics. At its heart, the model looks at language learning as a social inference problem, in which the child tries to find out what the speaker means -- what their intention is. The different information sources are all systematically related to this underlying intention, which provides a natural way of integrating them.
The model also specifies what changes as children get older: over development, children become more sensitive to the individual information sources, yet the social reasoning process that integrates those sources remains the same.
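The published model is a probabilistic, "rational" integration model; the sketch below is only a toy illustration of the core idea, with hypothetical cue names and numbers of my own choosing: each information source contributes a likelihood over candidate word meanings, and Bayes' rule multiplies them, so conflicting cues trade off instead of one simply winning.

```python
# Toy Bayesian integration of two information sources in word learning.
# A child hears a novel word with a familiar and a novel object present.
# All numbers are hypothetical; this is NOT the published model.

def normalize(scores):
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

hypotheses = ["familiar_object", "novel_object"]
prior = {"familiar_object": 0.5, "novel_object": 0.5}

# Cue 1: mutual exclusivity -- the familiar object already has a name,
# so a novel word probably labels the novel object.
mutual_exclusivity = {"familiar_object": 0.2, "novel_object": 0.8}

# Cue 2: speaker history -- this speaker talked about the familiar
# object before, which pulls (more weakly) the other way.
speaker_history = {"familiar_object": 0.6, "novel_object": 0.4}

posterior = normalize({
    h: prior[h] * mutual_exclusivity[h] * speaker_history[h]
    for h in hypotheses
})
print(posterior)  # novel_object wins (~0.73), but the conflicting cue is not ignored
```

Under a scheme like this, development can be modeled by sharpening the individual cue likelihoods while the combination rule itself stays fixed, which mirrors the finding described above.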


Psycholinguistics

📚 Tsunehiko Kohashi, Adalee J. Lube, Jenny H. Yang, Prema S. Roberts-Gaddipati, Bruce A. Carlson. Pauses during communication release behavioral habituation through recovery from synaptic depression. Current Biology, 2021; DOI: 10.1016/j.cub.2021.04.056
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Electric fish -- and humans -- pause before communicating key points

Electric fish pause before sharing something particularly meaningful. Pauses also prime the sensory systems to receive new and important information. The study reveals an underlying mechanism for how pauses allow neurons in the midbrain to recover from stimulation.
@psycholinguistics
Electric fish and today's TED talk speakers take a page from Mark Twain's playbook. They pause before sharing something particularly meaningful. Pauses also prime the sensory systems to receive new and important information, according to research from Washington University in St. Louis.
"There is an increased response in listeners to words -- or in this case, electrical pulses -- that happens right after a pause," said Bruce Carlson, professor of biology in Arts & Sciences and corresponding author of the study published May 26 in Current Biology. "Fish are basically doing the same thing we do to communicate effectively."
Beyond discovering interesting parallels between human language and electric communication in fish, the research reveals an underlying mechanism for how pauses allow neurons in the midbrain to recover from stimulation.
Carlson and collaborators, including first author Tsunehiko Kohashi, formerly a postdoctoral research associate at Washington University, conducted their study with electric fish called mormyrids. These fish use weak electric discharges, or pulses, to locate prey and to communicate with one another.
The scientists tracked the banter between fish housed under different conditions. They observed that electric fish that were alone in their tanks tend to hum along without stopping very much, producing fewer and shorter pauses in electric output than fish housed in pairs. What's more, fish tended to produce high frequency bursts of pulses right after they paused.
The scientists then tried an experiment where they inserted artificial pauses into ongoing communication between two fish. They found that the fish receiving a pause -- the listeners -- increased their own rates of electric signaling just after the artificially inserted pauses. This result indicates that pauses were meaningful to the listeners.
Other researchers have studied the behavioral significance of pauses in human speech. Human listeners tend to recognize words better after pauses, and effective speakers tend to insert pauses right before something that they want to have a significant impact.
"Human auditory systems respond more strongly to words that come right after a pause, and during normal, everyday conversations, we tend to pause just before speaking words with especially high-information content," Carlson said. "We see parallels in our fish where they respond more strongly to electrosensory stimuli that come after a pause. We also find that fish tend to pause right before they produce a high-frequency burst of electric pulses, which carries a large amount of information."
The scientists wanted to understand the underlying neural mechanism that causes these effects. They applied stimulation to electrosensory neurons in the midbrain of the electric fish and observed that continually stimulated neurons produced weaker and weaker responses. This progressive weakness is referred to as short-term synaptic depression.
Cue Mark Twain and his well-timed pauses.
The scientists inserted pauses into the continuous stimulation. They found that pauses as short as about one second allowed the synapses to recover from short-term depression and increased the response of the postsynaptic neurons to stimuli following the pause.
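A minimal sketch of that depression-and-recovery dynamic (illustrative time constant and depletion fraction, not the study's biophysical model): a single synaptic "resource" variable is depleted by each pulse and recovers exponentially during silence, so a one-second pause restores most of the response.

```python
import math

# Minimal short-term synaptic depression model. A resource r in [0, 1]
# is depleted by a fraction U on each presynaptic pulse and recovers
# toward 1 with time constant TAU. Parameters are illustrative.

TAU = 1.0  # recovery time constant, seconds
U = 0.4    # fraction of available resources used per pulse

def responses(pulse_times):
    r, t_prev, out = 1.0, pulse_times[0], []
    for t in pulse_times:
        r = 1.0 - (1.0 - r) * math.exp(-(t - t_prev) / TAU)  # recovery
        out.append(U * r)  # postsynaptic response to this pulse
        r -= U * r         # depletion caused by this pulse
        t_prev = t
    return out

steady = [i * 0.1 for i in range(20)]                  # 10 Hz pulse train
paused = steady[:10] + [t + 1.0 for t in steady[10:]]  # same train, 1 s pause

print("depressed response, steady train:", round(responses(steady)[-1], 3))
print("response right after the pause  :", round(responses(paused)[10], 3))
```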
"Pauses inserted in electric speech reset the sensitivity of the listener's brain, which was depressed during the continuous part of the speech," Kohashi said. "Pauses seem to make the following message as clear as possible for the listener."
Similar to humans.
Synaptic depression and recovery are universal in the nervous system, the researchers noted.


Psycholinguistics

📚 Sonia Arenillas-Alcón, Jordi Costa-Faidella, Teresa Ribas-Prats, María Dolores Gómez-Roig, Carles Escera. Neural encoding of voice pitch and formant structure at birth as revealed by frequency-following responses. Scientific Reports, 2021; 11 (1) DOI: 10.1038/s41598-021-85799-x
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Can a newborn's brain discriminate speech sounds?

People's ability to perceive speech sounds has been studied in depth, especially during the first year of life, but what happens during the first hours after birth? Are babies born with innate abilities to perceive speech sounds, or do neural encoding processes need time to mature?
@psycholinguistics
Researchers from the Institute of Neurosciences of the University of Barcelona (UBNeuro) and the Sant Joan de Déu Research Institute (IRSJD) have created a new methodology to try to answer this basic question on human development.
The results, published in Nature's open-access journal Scientific Reports, confirm that newborns' neural encoding of voice pitch is comparable to the abilities of adults who have been exposed to language for three years. However, there are differences in the perception of the spectral and temporal fine structure of sounds, that is, the ability to distinguish between vowel sounds such as /o/ and /a/. According to the authors, the neural encoding of this aspect of sound, recorded for the first time in this study, is therefore not yet mature at birth; it needs a certain exposure to language, as well as stimulation and time, to develop.
According to the researchers, knowing the typical level of development of these neural encoding processes from birth will enable an "early detection of language impairments, which would provide an early intervention or stimulus to reduce future negative consequences."
The study is led by Carles Escera, professor of Cognitive Neuroscience at the Department of Clinical Psychology and Psychobiology of the UB, and has been carried out at the IRSJD in collaboration with Maria Dolores Gómez Roig, head of the Department of Obstetrics and Gynecology of Hospital Sant Joan de Déu. The article is co-authored by Sonia Arenillas Alcón, first author, Jordi Costa Faidella and Teresa Ribas Prats, all members of the Cognitive Neuroscience Research Group (Brainlab) of the UB.
Decoding the spectral and temporal fine structure of sound
In order to distinguish the neural response to speech stimuli in newborns, one of the main challenges was to record, using the baby's electroencephalogram, a specific brain response: the frequency-following response (FFR). The FFR provides information on the neural encoding of two specific features of sound: fundamental frequency, responsible for the perception of voice pitch (high or low), and the spectral and temporal fine structure. The precise encoding of both features is, according to the study, "fundamental for the proper perception of speech, a requirement in future language acquisition."
To date, the available tools for studying this neural encoding enabled researchers to determine whether a newborn baby was able to encode inflections in voice pitch, but not whether it could encode the spectral and temporal fine structure. "Inflections in the voice pitch contour are very important, especially in tonal languages like Mandarin, as well as for perceiving the prosody of speech, which transmits the emotional content of what is said. However, the spectral and temporal fine structure of sound is the most relevant aspect of language acquisition for non-tonal languages like ours, and the few existing studies on the issue say nothing about the precision with which a newborn's brain encodes it," the authors note.
The main cause of this lack of studies is a technical limitation in the type of sounds used to conduct these tests. The authors therefore developed a new stimulus (/oa/) whose internal structure (a rising voice pitch and two different vowels) allows them to evaluate the precision of the neural encoding of both features of the sound simultaneously using FFR analysis.
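To make the logic of such a stimulus concrete, here is a rough synthesis sketch, assuming generic pitch and formant values (the parameters of the published /oa/ stimulus may well differ): a pulse-like glottal source with a continuously rising fundamental is passed through /o/-like formant resonances in the first half and /a/-like resonances in the second.

```python
import numpy as np
from scipy.signal import lfilter, sawtooth

FS = 16000          # sample rate, Hz
N = int(FS * 0.25)  # 250 ms token (illustrative duration)

# Glottal-like source with a rising fundamental, 100 -> 130 Hz (a guess).
f0 = np.linspace(100.0, 130.0, N)
source = sawtooth(2 * np.pi * np.cumsum(f0) / FS)

def resonator(x, freq_hz, bw_hz=80.0, fs=FS):
    """Two-pole resonance at freq_hz: a crude formant filter."""
    r = np.exp(-np.pi * bw_hz / fs)
    theta = 2 * np.pi * freq_hz / fs
    return lfilter([1.0 - r], [1.0, -2 * r * np.cos(theta), r * r], x)

# Illustrative formants: /o/ ~ (500, 900) Hz, /a/ ~ (700, 1200) Hz.
half = N // 2
o_half = resonator(resonator(source[:half], 500.0), 900.0)
a_half = resonator(resonator(source[half:], 700.0), 1200.0)

# Crude splice; a real stimulus would transition smoothly.
stimulus = np.concatenate([o_half, a_half])
stimulus /= np.abs(stimulus).max()
```

The point is that a single short token carries both features at once: the rising f0 probes pitch encoding, while the vowel change probes the encoding of spectral fine structure.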
A test adapted to the limitations of the hospital environment


Psycholinguistics

📚 Stuart K. Watson, Judith M. Burkart, Steven J. Schapiro, Susan P. Lambeth, Jutta L. Mueller, Simon W. Townsend. Nonadjacent dependency processing in monkeys, apes, and humans. Science Advances, 2020; 6 (43): eabb0725 DOI: 10.1126/sciadv.abb0725
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Cognitive elements of language have existed for 40 million years

Humans are not the only beings that can identify rules in complex language-like constructions -- monkeys and great apes can do so, too, a new study has shown. Researchers used a series of experiments based on an 'artificial grammar' to conclude that this ability can be traced back to our ancient primate ancestors.
@psycholinguistics
Language is one of the most powerful tools available to humankind, as it enables us to share information, culture, views and technology. "Research into language evolution is thus crucial if we want to understand what it means to be human," says Stuart Watson, postdoctoral researcher at the Department of Comparative Language Science of the University of Zurich. Until now, however, little research has been conducted about how this unique communication system came to be.
Identifying connections between words
An international team led by Professor Simon Townsend at the Department of Comparative Language Science of the University of Zurich has now shed new light on the evolutionary origins of language. Their study examines one of the most important cognitive elements needed for language processing -- that is, the ability to understand the relationship between the words in a phrase, even if they are separated by other parts of the phrase, known as a "non-adjacent dependency." For example, we know that in the sentence "the dog that bit the cat ran away," it is the dog who ran away, not the cat, even though there are several other words in between the two phrases. A comparison between apes, monkeys and humans has now shown that the ability to identify such non-adjacent dependencies is likely to have developed as far back as 40 million years ago.
Acoustic signals instead of words
The researchers used a novel approach in their experiments: They invented an artificial grammar, where sequences are formed by combining different sounds rather than words. This enabled the researchers to compare the ability of three different species of primates to process non-adjacent dependencies, even though they do not share the same communication system. The experiments were carried out with common marmosets -- a monkey native to Brazil -- at the University of Zurich, chimpanzees (University of Texas) and humans (Osnabrück University).
Mistakes followed by telltale looks
First, the researchers taught their test subjects to understand the artificial grammar in several practice sessions. The subjects learned that certain sounds were always followed by other specific sounds (e.g. sound 'B' always follows sound 'A'), even if they were sometimes separated by other acoustic signals (e.g. 'A' and 'B' are separated by 'X'). This simulates a pattern in human language, where, for example, we expect a noun (e.g. "dog") to be followed by a verb (e.g. "ran away"), regardless of any other phrasal parts in between (e.g. "that bit the cat").
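As a concrete toy version of such an artificial grammar (token names, filler inventory and sequence lengths are hypothetical stand-ins for the study's acoustic signals):

```python
import random

# Non-adjacent dependency grammar: an 'A' onset predicts a 'B' ending,
# whatever filler material intervenes.

FILLERS = ["X1", "X2", "X3"]

def grammatical(n_fillers):
    """A ... B with a variable number of intervening fillers."""
    return ["A"] + random.choices(FILLERS, k=n_fillers) + ["B"]

def violation(n_fillers):
    """Same surface shape, but the dependency is broken."""
    return ["A"] + random.choices(FILLERS, k=n_fillers) + ["C"]

def follows_rule(seq):
    """The rule a learner must induce from familiarization."""
    return seq[0] == "A" and seq[-1] == "B"

random.seed(0)
familiarization = [grammatical(random.randint(1, 3)) for _ in range(5)]
test_items = [grammatical(2), violation(2)]
for seq in test_items:
    print(seq, "->", "ok" if follows_rule(seq) else "violation")
```

In the animal experiments, of course, the dependent measure was not an explicit judgment but looking time at the loudspeaker, as described next.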
In the actual experiments that followed, the researchers played sound combinations that violated the previously learned rules. In these cases, the common marmosets and chimpanzees responded with an observable change of behavior; they looked at the loudspeaker emitting the sounds for about twice as long as they did towards familiar combinations of sounds. For the researchers, this was an indication of surprise in the animals caused by noticing a 'grammatical error'. The human test subjects were asked directly whether they believed the sound sequences were correct or wrong.
Common origin of language
"The results show that all three species share the ability to process non-adjacent dependencies. It is therefore likely that this ability is widespread among primates," says Townsend. "This suggests that this crucial element of language already existed in our most recent common ancestors with these species." Since marmosets branched off from humanity's ancestors around 40 million years ago, this crucial cognitive skill thus developed many million years before human language evolved.


Psycholinguistics

✍🏽 W. Tecumseh Fitch, Bart de Boer, Neil Mathur and Asif A. Ghazanfar. Monkey vocal tracts are speech-ready. Science Advances, 2016; DOI: 10.1126/sciadv.1600723
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽



Psycholinguistics

👉🏽 Learning foreign languages can affect the processing of music in the brain

Research has shown that a music-related hobby boosts language skills and affects the processing of speech in the brain. According to a new study, the reverse also happens -- learning foreign languages can affect the processing of music in the brain.
@psycholinguistics
Research Director Mari Tervaniemi from the University of Helsinki's Faculty of Educational Sciences, in cooperation with researchers from Beijing Normal University (BNU) and the University of Turku, investigated the link in the brain between language acquisition and music processing in Chinese elementary school pupils aged 8-11. For one school year, they monitored children who attended either a music training programme or a similar programme for the English language. Brain responses associated with auditory processing were measured in the children before and after the programmes, and Tervaniemi compared the results to those of children who attended other training programmes.
"The results demonstrated that both the music and the language programme had an impact on the neural processing of auditory signals," Tervaniemi says.
Learning achievements extend from language acquisition to music
Surprisingly, attendance in the English training programme enhanced the processing of musically relevant sounds, particularly in terms of pitch processing.
"A possible explanation for the finding is the language background of the children, as understanding Chinese, which is a tonal language, is largely based on the perception of pitch, which potentially equipped the study subjects with the ability to utilise precisely that trait when learning new things. That's why attending the language training programme facilitated the early neural auditory processes more than the musical training."
Tervaniemi says that the results support the notion that musical and linguistic brain functions are closely linked in the developing brain. Both music and language acquisition modulate auditory perception. However, whether they produce similar or different results in the developing brain of school-age children has not been systematically investigated in prior studies.
At the beginning of the training programmes, the number of children studied using electroencephalogram (EEG) recordings was 120, of whom more than 80 also took part in EEG recordings a year later, after the programme.
In the music training, the children had the opportunity to sing a lot: they were taught to sing from both hand signs and sheet music. The language training programme emphasised the combination of spoken and written English, that is, simultaneous learning. At the same time, the English language employs an orthography that is different from Chinese. The one-hour programme sessions were held twice a week after school on school premises throughout the school year, with roughly 20 children and two teachers attending at a time.
"In both programmes the children liked the content of the lessons which was very interactive and had many means to support communication between the children and the teacher" says Professor Sha Tao who led the study in Beijing.
#programme #language #training_programme #brain #tervaniemi #school #musically #musical #auditory #learning #result #music_processing #year_children #say #process #study #studied #eeg #science #investigated #normal
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/08/210803105546.htm


Psycholinguistics

📚 Robert W. Wiley, Brenda Rapp. The Effects of Handwriting Experience on Literacy Learning. Psychological Science, 2021; 095679762199311 DOI: 10.1177/0956797621993111
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽


Psycholinguistics

👉🏽 Handwriting beats typing and watching videos for learning to read

Though writing by hand is increasingly being eclipsed by the ease of computers, a new study finds we shouldn't be so quick to throw away the pencils and paper: handwriting helps people learn certain skills surprisingly faster and significantly better than learning the same material through typing or watching videos.
@psycholinguistics
"The question out there for parents and educators is why should our kids spend any time doing handwriting," says senior author Brenda Rapp, a Johns Hopkins University professor of cognitive science. "Obviously, you're going to be a better hand-writer if you practice it. But since people are handwriting less then maybe who cares? The real question is: Are there other benefits to handwriting that have to do with reading and spelling and understanding? We find there most definitely are."
The work appears in the journal Psychological Science.
Rapp and lead author Robert Wiley, a former Johns Hopkins University Ph.D. student who is now a professor at the University of North Carolina, Greensboro, conducted an experiment in which 42 people were taught the Arabic alphabet, split into three groups of learners: writers, typers and video watchers.
Everyone learned the letters one at a time by watching videos of them being written along with hearing names and sounds. After being introduced to each letter, the three groups would attempt to learn what they just saw and heard in different ways. The video group got an on-screen flash of a letter and had to say if it was the same letter they'd just seen. The typers would have to find the letter on the keyboard. The writers had to copy the letter with pen and paper.
At the end, after as many as six sessions, everyone could recognize the letters and made few mistakes when tested. But the writing group reached this level of proficiency faster than the other groups -- a few of them in just two sessions.
Next the researchers wanted to determine to what extent, if at all, the groups could generalize this new knowledge. In other words, they could all recognize the letters, but could anyone really use them like a pro, by writing with them, using them to spell new words and using them to read unfamiliar words?
The writing group was better -- decisively -- in all of those things.
"The main lesson is that even though they were all good at recognizing letters, the writing training was the best at every other measure. And they required less time to get there," Wiley said.
The writing group ended up with more of the skills needed for expert adult-level reading and spelling. Wiley and Rapp say it's because handwriting reinforces the visual and aural lessons. The advantage has nothing to do with penmanship -- it's that the simple act of writing by hand provides a perceptual-motor experience that unifies what is being learned about the letters (their shapes, their sounds, and their motor plans), which in turn creates richer knowledge and fuller, true learning, the team says.
"With writing, you're getting a stronger representation in your mind that lets you scaffold toward these other types of tasks that don't in any way involve handwriting," Wiley said.
Although the participants in the study were adults, Wiley and Rapp expect they'd see the same results in children. The findings have implications for classrooms, where pencils and notebooks have taken a backseat in recent years to tablets and laptops, and teaching cursive handwriting is all but extinct.
The findings also suggest that adults trying to learn a language with a different alphabet should supplement what they're learning through apps or tapes with good old-fashioned paperwork.
Wiley, for one, is making sure the kids in his life are stocked up on writing supplies.
"I have three nieces and a nephew right now and my siblings ask me should we get them crayons and pens? I say yes, let them just play with the letters and start writing them and write them all the time. I bought them all finger paint for Christmas and told them let's do letters."


Psycholinguistics

📚 Aleksi J. Sihvonen, Pablo Ripollés, Vera Leo, Jani Saunavaara, Riitta Parkkola, Antoni Rodríguez-Fornells, Seppo Soinila, Teppo Särkämö. Vocal Music Listening Enhances Poststroke Language Network Reorganization. eneuro, 2021; 8 (4): ENEURO.0158-21.2021 DOI: 10.1523/ENEURO.0158-21.2021
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽



Psycholinguistics

"The virtue of computational modeling is that you can articulate a range of alternative hypotheses -- alternative models -- with different internal wiring to test if other theories would make equally good or better predictions. In some of these alternatives, we assumed that children ignore some of the information sources. In others, we assumed that the way in which children integrate the different information sources changes with age. None of these alternative models provided a better explanation of children's behavior than the rational integration model," explains Tessler.
The study offers several exciting and thought-provoking results that inform our understanding of how children learn language. Beyond that, it opens up a new, interdisciplinary way of doing research. "Our goal was to put formal models in a direct dialogue with experimental data. These two approaches have been largely separated in child development research," says Manuel Bohn. The next steps in this research program will be to test the robustness of this theoretical model. To do so, the team is currently working on experiments that involve a new set of information sources to be integrated.
#model #modeling #child #social #inform #integrating #integrated #integrates #integrate #learn_new #research #researcher #bohn #different_information_sources #integration_predicted #data #explains #university #planck #anthropology #developmental #prediction #learning #way #development #world #word #say #rational
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/07/210701112710.htm



Psycholinguistics

"We expect the same mechanism, more or less, plays a role in pauses during communication in other animals, including humans," Carlson said.
The research was supported by grants from the National Science Foundation (IOS-1050701 and IOS-1755071).
#electrical #pause #paused #electric_fish #human #research #researcher #carlson #neuron #word #science #information #produce_high #right #producing #produced #stimulation #stimulated #tend #tended #listener #speech #system #interesting #everyday #university #universal #scientist #said #including #communicate #communication
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/05/210526132131.htm



Psycholinguistics

One of the most notable aspects of the study is that the stimulus and the methodology are compatible with the typical limitations of the hospital environment in which the tests are carried out. "Time is essential in FFR research with newborns: on the one hand, because limits on recording time determine which stimuli can be used; on the other, because of the actual conditions of newborns in hospitals, where the baby and the mother need frequent and continuous access to care, evaluations and routine tests to rule out health problems," the authors add. Given these restrictions, the responses of the 34 newborns in the study were recorded in sessions lasting between twenty and thirty minutes, almost half the time of typical sessions in studies on speech sound discrimination.
A potential biomarker of learning problems
After this study, the researchers' objective is to characterize the development of the neural encoding of the spectral and temporal fine structure of speech sounds over time. To do so, they are currently recording the frequency-following response of the babies who took part in the present study, who are now 21 months old. "Given that the first two years of life are a critical period of stimulation for language acquisition, this longitudinal evaluation of development will enable us to have a global view of how these encoding skills mature over the first months of life," the researchers note.
The aim is to determine whether alterations observed at birth in the neural encoding of sounds are followed by observable deficits in infant language development. If so, "that neural response could certainly be considered a useful biomarker for the early detection of future literacy difficulties, since alterations detected in newborns could predict the appearance of delays in language development. This is the objective of the ONA project, funded by the Spanish Ministry of Science and Innovation," they conclude.
#researcher #research #language #encode #encodes #sound #limitation #neural_encoding #structure #pitch #future #response #responsible #aspect #development #develop #developed #ffr #access #aspect_recorded #problem #vocal #study #ribas #new #escera #old #biomarker #test #emotional #evaluate #evaluation #project #record #recording #detection #detected #author #tonal #sant #hospital #sonia #arenillas #jordi
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/04/210426111601.htm

