📚 Timotheus A. Bodt, Johann-Mattis List. Reflex prediction. Diachronica, 2021; DOI: 10.1075/dia.20009.bod
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
👉🏽 Linguists predict unknown words using language comparison
A new article describes an experiment that illustrates how the classical method for the reconstruction of unattested languages can also be used to predict hitherto undocumented words in poorly described and endangered languages of India.
@psycholinguistics
Two researchers from SOAS University of London and the Max Planck Institute for the Science of Human History have recently published a paper in Diachronica, an international journal for historical linguistics. In the article, they describe the results of an experiment in which they applied the traditional comparative method to explicitly predict the pronunciation of words in eight Western Kho-Bwa linguistic varieties spoken in India. Belonging to the Trans-Himalayan family (also known as the Sino-Tibetan or Tibeto-Burman family), these varieties have not yet been described in much detail, and many words had never been documented in fieldwork. The scholars started their experiment with an existing etymological dataset of Western Kho-Bwa varieties that was collected during fieldwork in the Indian state of Arunachal Pradesh between 2012 and 2017. Within the dataset, the authors observed multiple gaps where the word forms for certain concepts were missing.
"When conducting fieldwork, it is inevitable that you miss out on some words. It's kind of annoying when you observe that afterwards, but in this case, we realized that this was the perfect opportunity to test how well the methods for linguistic reconstruction actually work," says Tim Bodt, first author of the study.
The researchers set up a computer-assisted workflow to predict the missing word forms. The classical methods are traditionally applied manually, but the new computational solutions helped the scholars to increase the efficiency and reliability of the process, and all results were later manually checked and refined. To increase the transparency and validity of the experiment, they then registered their predictions online.
"Registration is incredibly important in many scientific fields because it ensures that researchers adhere to good scientific practice, but as far as we know it has never been done in historical linguistics," says Johann-Mattis List, who carried out the computational analyses of the study.
"By registering our predictions online, we made sure we could no longer modify our predictions in light of the results we obtained during our subsequent verification process," adds Bodt.
With predictions in hand, Bodt then traveled to India to verify the predicted words with native speakers of the Western Kho-Bwa languages. After asking the participating local language consultants to provide their words for the concepts under investigation, the authors compared these attested words with their earlier predictions. Based on the proportion of correctly predicted sounds per word form, the predictions were correct in 76% of all cases, which is remarkable given the limited amount of information that was used to predict the word forms. Moreover, the scholars were able to identify several reasons why certain predictions did not match the actual attested forms in the languages.
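The per-sound scoring described above can be sketched in a few lines of Python. This is a hypothetical illustration with invented word forms, not the authors' actual workflow, which aligns predicted and attested forms more carefully before scoring:

```python
# Score a predicted word form against the attested form, sound by sound.
# Word forms are lists of sounds (segments); the examples are invented.

def sound_accuracy(predicted, attested):
    """Proportion of positions where the predicted sound matches."""
    n = max(len(predicted), len(attested))
    if n == 0:
        return 1.0
    hits = sum(p == a for p, a in zip(predicted, attested))
    return hits / n

predicted = ["kh", "a", "m"]
attested = ["kh", "o", "m"]
print(round(sound_accuracy(predicted, attested), 2))  # 2 of 3 sounds match
```

Averaging such per-word scores over all predicted forms yields an overall accuracy figure like the 76% reported in the study.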
"The more we know about a language family in general, the better we can predict unknown word forms. This is all possible, because languages change their sound systems in a surprisingly regular manner," says List. "Despite the fact that so little was known about the Western Kho-Bwa languages and their linguistic history, we were able to show through our experiment that regular sound changes result in predictable word forms. In turn, our experiment has greatly improved our understanding of the Western Kho-Bwa languages and their linguistic history."
Apart from giving a concrete example for the power of the methodology of historical linguistics and the value of their experiment for linguistic studies, the authors identify certain additional benefits of predicting words in linguistic research.
📚 Angela John Thurman, Jamie O. Edgin, Stephanie L. Sherman, Audra Sterling, Andrea McDuffie, Elizabeth Berry-Kravis, Debra Hamilton, Leonard Abbeduto. Spoken language outcome measures for treatment studies in Down syndrome: feasibility, practice effects, test-retest reliability, and construct validity of variables generated from expressive language sampling. Journal of Neurodevelopmental Disorders, 2021; 13 (1) DOI: 10.1186/s11689-021-09361-6
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
👉🏽 New test to study language development in youth with Down syndrome
A team tested and validated elaborated language sampling (ELS) as a reliable set of procedures for collecting, measuring and analyzing the spoken language of youth with Down syndrome in a naturalistic setting. They found that ELS can be used to detect meaningful changes in communication skills of individuals with Down syndrome.
@psycholinguistics
The study, co-led by Angela Thurman and Leonard Abbeduto from the UC Davis MIND Institute and the Department of Psychiatry and Behavioral Sciences, focused on language as an outcome measure to detect meaningful changes in communication skills of individuals with Down syndrome. It successfully tested and validated ELS as a reliable set of procedures for collecting, measuring and analyzing the spoken language of participants interacting in a naturalistic setting.
Down syndrome and language delays
Down syndrome is the leading genetic cause of intellectual disability. Approximately one in every 700 babies in the United States is born with Down syndrome. Individuals with Down syndrome frequently have speech and language delays that might severely affect their independence and successful community inclusion.
"Interventions leading to improvements in language would have great impacts on the quality of life of individuals with Down syndrome," said Abbeduto, director of the UC Davis MIND Institute, professor of psychiatry and behavioral sciences and senior author of the study. "To develop and evaluate such interventions, we need a validated measurement tool and ELS provides that."
The ELS procedure
During the ELS procedure, researchers collect samples of participants' speech during two types of natural interactions: conversation and narration.
In conversation, trained examiners engage participants on a variety of topics in a sequenced and standardized manner. They start the conversation with a subject the participants find interesting then introduce a topic from predetermined age-appropriate lists. In their interactions, they follow a script to minimize their participation and maximize the participants' contribution. On average, the conversation lasts around 12 minutes.
In narration, the participants independently construct and tell the story in a wordless picture book. This process usually takes 10 to 15 minutes.
The researchers analyze the collected conversation and narration samples. In a previous ELS application involving participants with fragile X syndrome, the researchers derived five language outcome measures: talkativeness, lexical diversity (vocabulary), syntax, dysfluency (utterance planning) and unintelligibility (speech articulation).
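Two of these measures can be illustrated with a toy computation. The utterances and the exact operationalizations below are invented for illustration and are not the ELS definitions themselves:

```python
# Toy versions of two ELS-style measures from a transcript:
# talkativeness (total word tokens) and lexical diversity (types / tokens).

utterances = [
    "the dog runs",
    "the dog jumps over the fence",
]

words = [w for utt in utterances for w in utt.split()]
talkativeness = len(words)                        # total tokens
lexical_diversity = len(set(words)) / len(words)  # distinct words / tokens

print(talkativeness)
print(round(lexical_diversity, 2))
```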
Validity and reliability of the ELS measures in Down syndrome studies
For this study, four university testing sites recruited 107 participants with Down syndrome (55 males, 52 females). Participants were between the ages of 6 and 23 (mean age of 15.13 years) and with IQ levels under 70 (mean IQ of 48.73).
The participants came for a first visit to complete the ELS procedures and to take assessment tests of their IQ, autism symptom severity and other measures. Four weeks later, they revisited for a retest of the ELS procedures. This retest was to assess practice effects over repeated administrations and to check the reliability of ELS measures.
The study found that the ELS measures were generally valid and reliable across ages and IQ levels. The vocabulary, syntax and speech intelligibility variables demonstrated strong validity as outcome measures. The ELS procedures were also feasible: the majority of participants successfully completed the tasks. Youth who were under 12, had phrase-level speech or less, and were at or below a 4-year-old developmental level found the tasks more difficult to complete.
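Test-retest reliability of a measure like this is typically assessed by correlating scores from the two visits. A minimal sketch with invented scores (the study itself reports reliability statistics per ELS measure):

```python
import statistics

# Correlate visit-1 and visit-2 scores: high correlation suggests the
# measure is stable over repeated administrations. Scores are invented.

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

visit1 = [10, 12, 9, 15, 11]
visit2 = [11, 13, 9, 14, 12]
print(round(pearson(visit1, visit2), 2))
```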
📚 Kadi L. Saar, Alexey S. Morgunov, Runzhang Qi, William E. Arter, Georg Krainer, Alpha A. Lee, Tuomas P. J. Knowles. Learning the molecular grammar of protein condensates from sequence determinants and embeddings. Proceedings of the National Academy of Sciences, 2021; 118 (15): e2019053118 DOI: 10.1073/pnas.2019053118
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
👉🏽 Artificial Intelligence could 'crack the language of cancer and Alzheimer's'
Powerful algorithms can 'predict' the biological language of cancer and neurodegenerative diseases like Alzheimer's, scientists have found. Big data produced during decades of research was fed into a computer language model to see if artificial intelligence can make more advanced discoveries than humans.
@psycholinguistics
Academics based at St John's College, University of Cambridge, found the machine-learning technology could decipher the 'biological language' of cancer, Alzheimer's, and other neurodegenerative diseases.
Their ground-breaking study has been published in the scientific journal PNAS today (April 8 2021) and could be used in the future to 'correct the grammatical mistakes inside cells that cause disease'.
Professor Tuomas Knowles, lead author of the paper and a Fellow at St John's College, said: "Bringing machine-learning technology into research into neurodegenerative diseases and cancer is an absolute game-changer. Ultimately, the aim will be to use artificial intelligence to develop targeted drugs to dramatically ease symptoms or to prevent dementia happening at all."
Every time Netflix recommends a series to watch or Facebook suggests someone to befriend, the platforms are using powerful machine-learning algorithms to make highly educated guesses about what people will do next. Voice assistants like Alexa and Siri can even recognise individual people and instantly 'talk' back to you.
Dr Kadi Liis Saar, first author of the paper and a Research Fellow at St John's College, used similar machine-learning technology to train a large-scale language model to look at what happens when something goes wrong with proteins inside the body to cause disease.
She said: "The human body is home to thousands and thousands of proteins and scientists don't yet know the function of many of them. We asked a neural network based language model to learn the language of proteins.
"We specifically asked the program to learn the language of shapeshifting biomolecular condensates -- droplets of proteins found in cells -- that scientists really need to understand to crack the language of biological function and malfunction that cause cancer and neurodegenerative diseases like Alzheimer's. We found it could learn, without being explicitly told, what scientists have already discovered about the language of proteins over decades of research."
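The idea of treating protein sequences as a language can be illustrated with a toy bigram model over amino-acid letters. The model in the paper uses learned sequence embeddings, not bigram counts, and the sequences below are invented:

```python
from collections import Counter, defaultdict

# Count which amino-acid letter tends to follow which, like a tiny
# language model learning word order. Sequences are invented examples.

sequences = ["MKVLAA", "MKVLGG", "MKAAGG"]

bigrams = defaultdict(Counter)
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        bigrams[a][b] += 1

# Most likely residue to follow 'K' under this toy model:
print(bigrams["K"].most_common(1)[0][0])
```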
Proteins are large, complex molecules that play many critical roles in the body. They do most of the work in cells and are required for the structure, function and regulation of the body's tissues and organs -- antibodies, for example, are proteins that protect the body.
Alzheimer's, Parkinson's and Huntington's diseases are three of the most common neurodegenerative diseases, but scientists believe there are several hundred.
In Alzheimer's disease, which affects 50 million people worldwide, proteins go rogue, form clumps and kill healthy nerve cells. A healthy brain has a quality control system that effectively disposes of these potentially dangerous masses of proteins, known as aggregates.
Scientists now think that some disordered proteins also form liquid-like droplets of proteins called condensates that don't have a membrane and merge freely with each other. Unlike protein aggregates which are irreversible, protein condensates can form and reform and are often compared to blobs of shapeshifting wax in lava lamps.
Professor Knowles said: "Protein condensates have recently attracted a lot of attention in the scientific world because they control key events in the cell such as gene expression -- how our DNA is converted into proteins -- and protein synthesis -- how the cells make proteins.
👉🏽 Sign-language exposure impacts infants as young as 5 months old
While it isn't surprising that infants and children love to look at people's movements and faces, recent research studies exactly where they look when they see someone using sign language. The research uses eye-tracking technology that offers a non-invasive and powerful tool to study cognition and language learning in pre-verbal infants.
@psycholinguistics
NTID researcher and Assistant Professor Rain Bosworth and alumnus Adam Stone studied early-language knowledge in young infants and children by recording their gaze patterns as they watched a signer. The goal was to learn, just from gaze patterns alone, whether the child was from a family that used spoken language or signed language at home.
They tested two groups of hearing infants and children that differ in their home language. One "control" group had hearing parents who spoke English and never used sign language or baby signs. The other group had deaf parents who only used American Sign Language at home. Both sets of children had normal hearing in this study. The control group saw sign language for the first time in the lab, while the native signing group was familiar with sign language.
The study, published in Developmental Science, showed that the non-signing infants and children looked at areas on the signer called "signing space," in front of the torso. The hands predominantly fall in this area about 80 percent of the time when signing. However, the signing infants and children looked primarily at the face, barely looking at the hands.
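Gaze analyses like this typically classify each eye-tracking sample into areas of interest (AOIs) and compute the proportion of time spent in each. A minimal sketch with invented coordinates and AOI boxes, not the study's actual regions:

```python
# Classify gaze samples into rectangular areas of interest and compute
# the proportion of samples falling on the face. All values are invented.

def in_box(point, box):
    (x, y), (x0, y0, x1, y1) = point, box
    return x0 <= x <= x1 and y0 <= y <= y1

FACE = (40, 0, 60, 20)            # (x0, y0, x1, y1)
SIGNING_SPACE = (30, 20, 70, 60)  # region in front of the torso

samples = [(50, 10), (52, 12), (45, 30), (51, 9)]  # gaze (x, y) samples

face_prop = sum(in_box(s, FACE) for s in samples) / len(samples)
print(face_prop)  # fraction of samples on the face
```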
According to the findings, the expert sign-watching behavior is already present by about 5 months of age.
"This is the earliest evidence, that we know of, for effects of sign-language exposure," said Bosworth. "At first, it does seem counter-intuitive that the non-signers are looking at the hands and signers are not. We think signers keep their gaze on the face because they are relying on highly developed and efficient peripheral vision. Infants who are not familiar with sign language look at the hands in signing space perhaps because that is what is perceptually salient to them."
Another possible reason why signing babies keep their gaze on the face could be because they already understand that the face is very important for social interactions, added Bosworth.
"We think the reason perceptual gaze control matures so rapidly is because it supports later language learning, which is more gradual," Bosworth said. "In other words, you have to be able to know where to look before you learn the language signal."
Bosworth says more research is needed to understand the gaze behaviors of deaf babies who are or are not exposed to sign language.
The research was supported by grants awarded to Bosworth from the National Science Foundation and the National Eye Institute.
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/04/210408152244.htm
📚 Ole Numssen, Danilo Bzdok, Gesa Hartwigsen. Functional specialization within the inferior parietal lobes across cognitive domains. eLife, 2021; 10 DOI: 10.7554/eLife.63591
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
👉🏽 The brain area with which we interpret the world
Language, empathy, attention - as different as these abilities may be, one brain region is involved in all these processes: The inferior parietal lobe (IPL). Yet until now it was unclear exactly what role it plays in these profoundly human abilities. Scientists have now shown that the IPL comes into play when we need to interpret our environment.
@psycholinguistics
It was already known that the inferior parietal lobe (IPL) is involved in several very different cognitive functions. Nevertheless, it was unclear how this one area is able to support them all. In a large study, scientists from the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and McGill University in Montreal have helped to answer this question. According to their findings, the different parts of the IPL specialize in different cognitive functions -- such as attention, language, and social cognition, with the latter reflecting the ability for perspective taking. At the same time, these areas work together with many other brain regions in a process-specific way.
When it comes to language comprehension, the anterior IPL in the left hemisphere of the brain becomes active. For attention, it is the anterior IPL in the right side of the brain. If, on the other hand, social skills are required, the posterior parts of the IPL in both hemispheres of the brain spring into action simultaneously. "Social cognition requires the most complex interpretation," explains Ole Numssen, first author of the underlying study, which has now been published in the journal eLife. "Therefore, the IPLs on both sides of the brain probably work together here."
Moreover, these individual sub-areas then cooperate with different regions of the rest of the brain. In the case of attention and language, each IPL subregion links primarily to areas on one side of the brain. With social skills, it's areas on both sides. Again, this shows that the more complex the task, the more intensive the interaction with other areas.
"Our results provide insight into the basic functioning of the human brain. We show how our brains dynamically adapt to changing requirements. To do this, it links specialized individual areas, such as the IPL, with other more general regions. The more demanding the tasks, the more intensively the individual areas interact with each other. This makes highly complex functions such as language or social skills possible," says Ole Numssen. "The IPL may ultimately be considered as one of the areas with which we interpret the world."
Even in great apes, Numssen says, brain regions that correspond to the IPL do not only process purely physical stimuli, but also more complex information. Throughout evolution, they seem to have always been responsible for processing increasingly complex content. However, parts of the IPL are unique to the human brain and are not found in great apes -- a hint that this region has evolved in the course of evolution to enable key functions of human cognition.
The researchers from Leipzig and Montreal investigated such brain-behavior correlations with the help of three tasks that the study participants had to solve while lying in the MRI scanner. In the first task, they had to demonstrate their understanding of language. To do this, they saw meaningful words such as "pigeon" and "house," but also words without meaning (known as pseudowords) such as "pulre," and had to decide whether it was a real word or not. A second task tested visual-spatial attention. For this task, they had to react to stimuli on one side of the screen, although they expected something to happen on the other side. The third task probed their ability for perspective taking using the so-called Sally Anne test. This is a comic strip consisting of four pictures in which two people interact with each other. A question at the end could only be answered correctly if the study participants were able to put themselves in the shoes of the people depicted.
📚 Kuniyoshi L. Sakai, Tatsuro Kuwamoto, Satoma Yagi, Kyohei Matsuya. Modality-Dependent Brain Activation Changes Induced by Acquiring a Second Language Abroad. Frontiers in Behavioral Neuroscience, 2021; 15 DOI: 10.3389/fnbeh.2021.631957
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
👉🏽 Measurable changes in brain activity during first few months of studying a new language
A study with first-time learners of Japanese has measured how brain activity changes after just a few months of studying a new language. The results show that acquiring a new language initially boosts brain activity, which then reduces as language skills improve.
@psycholinguistics
"In the first few months, you can quantitatively measure language-skill improvement by tracking brain activations," said Professor Kuniyoshi L. Sakai, a neuroscientist at the University of Tokyo and first author of the research recently published in Frontiers in Behavioral Neuroscience.
Researchers followed 15 volunteers as they moved to Tokyo and completed introductory Japanese classes for at least three hours each day. All volunteers were native speakers of European languages in their 20s who had previously studied English as children or teenagers, but had no prior experience studying Japanese or traveling to Japan.
Volunteers took multiple choice reading and listening tests after at least eight weeks of lessons and again six to fourteen weeks later. Researchers chose to assess only the "passive" language skills of reading and listening because those can be more objectively scored than the "active" skills of writing and speaking. Volunteers were inside a magnetic resonance imaging (MRI) scanner while taking the tests so that researchers could measure local blood flow around their brain regions, an indicator of neuronal activity.
"In simple terms, there are four brain regions specialized for language. The same regions are responsible whether it is a native, second or third language," said Sakai.
Those four regions are the grammar center and comprehension area in the left frontal lobe as well as the auditory processing and vocabulary areas in the temporo-parietal lobe. Additionally, the memory areas of the hippocampus and the vision areas of the brain, the occipital lobes, also become active to support the four language-related regions while taking the tests.
During the initial reading and listening tests, those areas of volunteers' brains showed significant increases in blood flow, revealing that the volunteers were thinking hard to recognize the characters and sounds of the unfamiliar language. Volunteers scored about 45% accuracy on the reading tests and 75% accuracy on the listening tests (random guessing on the multiple choice tests would produce 25% accuracy).
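To see why 45% on a four-choice test is well above the 25% chance level, one can compute the probability of reaching such a score by guessing alone. The number of test items (40) is an assumption for illustration; the study's actual test lengths are not given here:

```python
from math import comb

# Probability of getting at least k of n items correct by random guessing
# on a 4-choice test (p = 0.25 per item), via the binomial tail sum.

def p_at_least(k, n, p=0.25):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 40, 18            # 18/40 = 45% correct
print(p_at_least(k, n) < 0.01)  # very unlikely by guessing alone
```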
Researchers were able to distinguish between two subregions of the hippocampus during the listening tests. The observed activation pattern fits previously described roles for the anterior hippocampus in encoding new memories and for the posterior hippocampus in recalling stored information.
At the second test several weeks later, volunteers' reading test scores improved to an average of 55%. Their accuracy on the listening tests was unchanged, but they were faster to choose an answer, which researchers interpret as improved comprehension.
Comparing results from the first tests to the second tests, after additional weeks of study, researchers found decreased brain activation in the grammar center and comprehension area during listening tests, as well as in the visual areas of the occipital lobes during the reading tests.
"We expect that brain activation goes down after successfully learning a language because it doesn't require so much energy to understand," said Sakai.
Notably during the second listening test, volunteers had slightly increased activation of the auditory processing area of their temporal lobes, likely due to an improved "mind's voice" while hearing.
"Beginners have not mastered the sound patterns of the new language, so they cannot hold them in memory and imagine them well. They are still expending a lot of energy to recognize the speech, in contrast to letters or grammar rules," said Sakai.
📚 Kali Woodruff Carr, Danielle R. Perszyk, Sandra R. Waxman. Birdsong fails to support object categorization in human infants. PLOS ONE, 2021; 16 (3): e0247430 DOI: 10.1371/journal.pone.0247430
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
👉🏽 New study identifies a limit on the range of vocalizations that support infant cognition
A new study finds that although human and non-human primate vocalizations facilitate core cognitive processes in very young human infants, birdsong does not.
@psycholinguistics
Northwestern scientists in the departments of psychology at Weinberg College of Arts and Sciences and communication sciences and disorders at the School of Communication, have new evidence documenting that not all naturally produced vocalizations support cognition in infants.
The new study, "Birdsong fails to support object categorization in human infants," was published in PLOS ONE.
Ample evidence documents that infants as young as three and four months of age have begun to link the language they hear to the objects that surround them. Listening to their native language boosts their success in forming categories of objects (e.g., dog). Object categorization, the ability to identify commonalities among objects (e.g., Fido, Spot), is a fundamental building block of cognition.
In prior studies, Northwestern researchers found that infants' success in object categorization was boosted, not only in the context of listening to their native language, but also while listening to vocalizations of non-human primates. This indicated that the link between human language and cognition emerges very early and derives from an initially broad template that also includes vocalizations of other primates.
The researchers wondered if listening to birdsong, another naturally produced vocalization, would also support object categorization. Their decision to focus on infants' response to birdsong was strategic: selecting a phylogenetically distant species, whose vocal apparatus differs from our own, offered an opportunity to identify a boundary on the range of naturally produced non-linguistic signals, if any, that support early infant cognition.
"There are several reasons to predict that birdsong might, in fact, support infant categorization," said first author Kali Woodruff Carr, a Ph.D. candidate in psychology at Northwestern. "Birdsong is the most studied model system for human speech learning, because of behavioral, neural and genetic similarities between the acquisition of birdsong and human speech."
In the new study, 23 three- to four-month-old infants participated in the same categorization task as did infants in prior studies testing the effect of listening to language and other sounds. First, during a familiarization phase, they viewed colorful images depicting eight different members of a category (either dinosaurs or fish). In the current study, each such image was presented in conjunction with the song of a zebra finch. Next, during the test phase, infants viewed two new images, one from the same category they had just seen and one from a new category. By carefully analyzing infants' eye gaze, the researchers found that infants listening to the zebra finch song failed to form an object category. Unlike non-human primate vocalizations, birdsong failed to confer a cognitive advantage on infants' object categorization.
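Test phases like this one are usually scored with a novelty preference measure derived from looking times. A minimal sketch with invented looking times, not the study's actual data or analysis:

```python
# Novelty preference: proportion of total looking time spent on the novel
# category image. Looking times (in seconds) are invented for illustration.

def novelty_preference(novel_s, familiar_s):
    """Values reliably above 0.5 suggest the infant formed a category;
    values near 0.5 (as with birdsong here) suggest no categorization."""
    return novel_s / (novel_s + familiar_s)

print(round(novelty_preference(6.0, 4.0), 2))  # looked longer at the novel image
```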
"This new evidence brings us closer to identifying which vocalizations initially support infant cognition," said senior author Sandra Waxman, professor of cognitive psychology at Weinberg College of Arts and Sciences, director of the Infant and Child Development Center at Northwestern and a faculty fellow in the University's Institute for Policy Research.
"We now know that infants' earliest link, which is sufficiently broad to include non-human primate calls, does not include zebra finch song. This will shed light on the ontogenetic and phylogenetic antecedents to human language acquisition and its quintessential link to cognition," Waxman said.
The researchers say future studies will identify whether infants' earliest link to cognition is sufficiently broad to include the vocalizations beyond those of primates (e.g., non-primate mammals), or whether only the vocalizations of primates are included in this privileged set.
📚 Sabina J. Sloman, Daniel M. Oppenheimer, Simon DeDeo. Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation. PLOS ONE, 2021; 16 (2): e0246689 DOI: 10.1371/journal.pone.0246689
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
"Predicting words increases the transparency and efficiency of our research and our fieldwork. This is crucial in light of rapid language loss and limited funding for descriptive linguistic work. Moreover, it also has an educational aspect since it encourages speakers to reflect on their own linguistic heritage," says Bodt.
The researchers hope that the results of their ground-breaking experiment will encourage other linguistic field workers, descriptive linguists, and historical linguists to follow suit, and make more explicit and conscious use of the regularity of sound change and predictions of word forms.
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/04/210427094808.htm
"Spoken language is the primary way we interact with the people around us, making language a frequent target of treatment. However, we have not had tools sensitive and accurate enough to confidently measure change in language treatment studies," said Thurman, associate professor of psychiatry and first author on the study. "The data from this study provide a critical first step indicating these procedures can be used to effectively measure language for people with Down syndrome."
The study, published April 8 in the Journal of Neurodevelopmental Disorders, is available online. The researchers provided online manuals to help other investigators with the administration, training and assessment of fidelity of ELS procedures.
This study was co-authored by Jamie O. Edgin of University of Arizona, Stephanie L. Sherman and Debra Hamilton of Emory University, Audra Sterling of University of Wisconsin-Madison, Elizabeth Berry-Kravis of Rush University Medical Center, and Andrea McDuffie of UC Davis Health.
It was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (R01HD074346, P50HD103526) and National Center for Advancing Translational Sciences (UL1 TR000002).
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/04/210408112341.htm
"Any defects connected with these protein droplets can lead to diseases such as cancer. This is why bringing natural language processing technology into research into the molecular origins of protein malfunction is vital if we want to be able to correct the grammatical mistakes inside cells that cause disease."
Dr Saar said: "We fed the algorithm all of the data held on known proteins so it could learn and predict the language of proteins, in the same way these models learn about human language and how WhatsApp knows how to suggest words for you to use.
"Then we were able to ask it about the specific grammar that leads only some proteins to form condensates inside cells. It is a very challenging problem, and unlocking it will help us learn the rules of the language of disease."
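The "suggest the next word" analogy can be made concrete with a toy model. The sketch below is purely illustrative (a bigram counter over invented amino-acid strings, not the researchers' neural network): it learns which residue most often follows which, the simplest analogue of a language model predicting the next token.

```python
from collections import defaultdict, Counter

# Toy "protein language model": count which amino-acid letter
# follows which, then suggest the most frequent successor.
def train_bigram(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(model, residue):
    # Analogous to a phone keyboard suggesting the next word.
    return model[residue].most_common(1)[0][0]

proteins = ["MKVLA", "MKVLG", "MKALA"]   # made-up sequences
model = train_bigram(proteins)
print(most_likely_next(model, "K"))      # -> 'V' (follows K twice, A once)
```

Real protein language models replace these counts with deep networks trained on millions of sequences, but the underlying idea, learning statistical regularities of a "grammar," is the same.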
The machine-learning technology is developing at a rapid pace due to the growing availability of data, increased computing power, and technical advances which have created more powerful algorithms.
Further use of machine-learning could transform future cancer and neurodegenerative disease research. Discoveries could be made beyond what scientists currently know and speculate about these diseases, and potentially even beyond what the human brain can understand without the help of machine-learning.
Dr Saar explained: "Machine-learning can be free of the limitations of what researchers think are the targets for scientific exploration and it will mean new connections will be found that we have not even conceived of yet. It is really very exciting indeed."
The network they developed has now been made freely available to researchers around the world, enabling more scientists to work on further advances.
#protein #language #like #condensate #data #research #researcher #disease #machine #said #scientific #form #advanced #advance #professor #inside #knowles #scientist #human #learn #educated #netflix #suggests #suggest #dementia #dangerous #artificial #ease #people #network #healthy #game #new_connections #connected #availability #available #asked #ask
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/04/210408131441.htm
📚 Rain G. Bosworth, Adam Stone. Rapid development of perceptual gaze control in hearing native signing infants and children. Developmental Science, 2021; DOI: 10.1111/desc.13086
@psycholinguistics
#brain #complex #study #function #functioning #cognitive #cognition #ipl #ipls #social #area #task #different #say #tested #test #planck #strip #provide #process #processing #link #word #meaningful_words #region #specialize #specialized #numssen
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/03/210326104714.htm
This pattern of brain activation changes -- a dramatic initial rise during the learning phase and a decline as the new language is successfully acquired and consolidated -- can give experts in the neurobiology of language a biometric tool to assess curricula for language learners or potentially for people regaining language skills lost after a stroke or other brain injury.
"In the future, we can measure brain activations to objectively compare different methods to learn a language and select a more effective technique," said Sakai.
Until an ideal method can be identified, researchers at UTokyo recommend acquiring a language in an immersion-style natural environment like studying abroad, or any way that simultaneously activates the brain's four language regions.
This pattern of brain activation over time in individual volunteers' brains mirrors results from previous research where Sakai and his collaborators worked with native Japanese-speaking 13- and 19-year-olds who learned English in standard Tokyo public school lessons. Six years of study seemed to allow the 19-year-olds to understand the second language well enough that brain activation levels reduced to levels similar to those of their native language.
The recent study confirmed this same pattern of brain activation changes over just a few months, not years, potentially providing encouragement for anyone looking to learn a new language as an adult.
"We all have the same human brain, so it is possible for us to learn any natural language. We should try to exchange ideas in multiple languages to build better communication skills, but also to understand the world better -- to widen views about other people and about the future society," said Sakai.
#language #active #activity #activates #activations_said #researcher #brain #volunteer #studying #study #language_improvement #activation_pattern #lobe #area #research_recently #previously_studied #test #japanese #skill #measure #comparing #compare #better #region #hippocampus #stored #level #native #week #learning #learn #learned #scores_improved #reading #initial #scored #second #pattern #increase #increased #new #recent #tokyo #school #year #providing
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/03/210326104719.htm
#infant #study #studied #cognitive #vocalizations_support_cognition #vocalization #vocal #non #human #language #primate #object #new_evidence #northwestern #object_categorization #waxman #category #birdsong #science #building #development #researcher #research #said #broad #phylogenetically #phylogenetic
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/03/210311142203.htm
👉🏽 The politics of synonyms
Researchers found that people are more successful at identifying language associated with Republican speech than with Democratic speech patterns.
@psycholinguistics
"While other studies have shown that people can detect social categories like the race and gender of a speaker based on word choice, there hasn't been work on whether that's true for ideology," said study contributor Danny Oppenheimer, professor of social and decision sciences in the Dietrich College of Humanities and Social Sciences. "[Political] ideology is a hidden variable: you can't tell by looking what party somebody identifies with, but many of these invisible categories are still detectable based on linguistic cues."
The team examined whether people can connect a political party to specific linguistic cues. They did not examine politically tinged speech, like "inheritance tax" versus "death tax," but how synonyms are used by each party. Examples include "financial versus monetary," "colleague versus friend" or "folks versus people." To explore this, the research team conducted four experiments evaluating whether participants could complete the task at a rate greater than chance.
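"A rate greater than chance" can be checked with an exact one-sided binomial test. The numbers below are invented for illustration only; the article does not report the researchers' actual trial counts or statistical procedure.

```python
from math import comb

# Exact one-sided binomial test: probability of getting at least k
# correct out of n two-option trials if the guesser is at chance (p = 0.5).
def above_chance_p(k, n, p=0.5):
    # P(X >= k) under Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 62 correct out of 100 "Republican vs. Democratic" word judgments
p_value = above_chance_p(62, 100)
print(round(p_value, 4))
```

A p-value below the usual 0.05 threshold would indicate that participants' accuracy is unlikely to be mere guessing.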
"Democrats and Republicans select different words to discuss a topic," said Oppenheimer. "We wanted to see if people can pick up on this subtle speech pattern."
In the study, the researchers used machine learning to scan the Congressional Record (2012 to 2017) and presidential debate corpora to isolate linguistic variation between the two political parties. They identified 8,345 words associated with the Republican corpus and 7,873 with the Democratic corpus.
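The article does not specify the statistic used to assign words to a party. One common, minimal approach is a smoothed log-odds ratio over the two corpora; the function name and toy counts below are assumptions for illustration, not the authors' pipeline.

```python
import math
from collections import Counter

# Hedged sketch: score how strongly a word leans toward one of two
# corpora via a smoothed log-odds ratio (positive = first corpus).
def party_lean(word, rep_counts, dem_counts, alpha=1.0):
    r = rep_counts[word] + alpha
    d = dem_counts[word] + alpha
    r_rest = sum(rep_counts.values()) - rep_counts[word] + alpha
    d_rest = sum(dem_counts.values()) - dem_counts[word] + alpha
    return math.log((r / r_rest) / (d / d_rest))

# Made-up word counts for two tiny corpora
rep = Counter({"folks": 8, "monetary": 6, "people": 2})
dem = Counter({"people": 9, "financial": 5, "folks": 2})
print(party_lean("folks", rep, dem))   # positive: Republican-leaning
print(party_lean("people", rep, dem))  # negative: Democratic-leaning
```

Ranking the full vocabulary by such a score, and keeping the extremes, is one standard way to produce party-distinctive word lists of the kind described above.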
The results of the four studies showed that, even when controlling for the dictionary definitions of the words, participants were more likely to associate "Republican language" with Republicans.
Oppenheimer believes the results of the study may skew more Republican because the five-year period of the study coincided with Republican control of the White House and Congress. He also noted that the majority of participants in the four studies self-identified as liberal, and the verbal cues may be stronger and more easily identifiable to those outside the party. In addition, the Congressional Record may not be representative of the variety of political speech people hear on a daily basis, which is more complex and adds context to the language used.
"The language we use is predictive, and humans are amazing at picking up on the subtle social cues of language," said Oppenheimer. "In a world where we are trying to create inclusion, if there are simple linguistic cues that we can [use] to make people feel less ostracized then that could be generally helpful to move toward these social goals."
Oppenheimer was joined by Sabina Sloman and Simon DeDeo, also at Carnegie Mellon University, on the project titled "Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation."
#versus #republican #oppenheimer #linguistic #speech #political #politically #study #said_study #word #identifies #identified #identifiable #people #detect_social_categories #team #based_word #debate #mellon #cue #detectable #science #party
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽
https://www.sciencedaily.com/releases/2021/02/210211144352.htm
📚 Francesca Rocchi, Hiroyuki Oya, Fabien Balezeau, Alexander J. Billig, Zsuzsanna Kocsis, Rick L. Jenison, Kirill V. Nourski, Christopher K. Kovach, Mitchell Steinschneider, Yukiko Kikuchi, Ariane E. Rhone, Brian J. Dlouhy, Hiroto Kawasaki, Ralph Adolphs, Jeremy D.W. Greenlee, Timothy D. Griffiths, Matthew A. Howard, Christopher I. Petkov. Common fronto-temporal effective connectivity in humans and monkeys. Neuron, 2021; DOI: 10.1016/j.neuron.2020.12.026
@psycholinguistics
👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽👇🏽