Summary: People with grapheme-color synesthesia, associating colors with numbers and letters, have a significant advantage when it comes to statistical learning.
Source: University of Toronto
“When I see equations, I see the letters in colors. I don’t know why,” wrote Nobel Prize-winning physicist Richard Feynman. “I see vague pictures of Bessel functions with light-tan j’s, slightly violet-bluish n’s, and dark brown x’s flying around.”
Feynman was describing his grapheme-colour (GC) synesthesia – a condition in which individuals sense colours associated with letters and numbers.
Synesthesia is a family of conditions where individuals perceive stimulation through more than one sense. GC synesthesia is just one form. In others, musical notes evoke colours; words have associated tastes; sequences of numbers are sensed as points in space; numbers suggest people, like an elderly man or a baby girl.
Many synesthetes have expressed how their condition has enhanced their lives. When designing a set for a ballet or opera, the American artist David Hockney uses the colours he “sees” in the musical score. Grammy Award-winner Pharrell Williams has said that without his music-to-colour synesthesia, “I’d be lost.”
Now, an experiment led by University of Toronto psychologists has shown for the first time that grapheme-colour synesthesia provides a clear advantage in statistical learning – an ability to discern patterns – which is a critical aspect of learning language. The result provides insight into how we learn, and how children and adults may learn differently.
According to psychology graduate student Tess Forest, “Our result shows that when people experience the same patterns with more than one sense – for example, aurally and visually – they are better able to learn the patterns.”
Forest is the lead author of the paper describing the research, published online recently in the journal Cognition. She is a member of Assistant Professor Amy Finn’s Learning and Neural Development Lab in the department of psychology in the Faculty of Arts & Science. Finn is a senior author on the paper and co-authors include researchers from the department of psychology at the University of California, Berkeley.
The team obtained its result with an experiment in which subjects listened to an artificial “language” comprising nonsense words – for example, “muh-keh” and “beh-od.”
They were then asked to listen to a second set of words that included the artificial words, plus new artificial words they had not previously heard. The new artificial words contained combinations of syllables not included in the original artificial language; in other words, the new artificial words represented a “foreign” language to the participants. Participants were then asked to guess whether a word was part of the original artificial language or not.
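The statistical cue participants can exploit in this kind of task is the transitional probability between syllables: within a word, one syllable reliably predicts the next, while across word boundaries the next syllable is much less predictable. The sketch below illustrates that logic with a toy artificial language (the syllables and words here are invented examples, not the study's actual stimuli):

```python
# Illustrative sketch of the statistical cue behind the task:
# transitional probabilities between syllables in a continuous stream.
# The "words" below are invented examples, not the study's stimuli.
from collections import Counter
import random

random.seed(0)

# A toy artificial language: each word is a fixed syllable pair,
# so within-word transitions are perfectly predictable.
words = [("muh", "keh"), ("beh", "od"), ("tu", "pi")]

# Build a continuous familiarization stream by concatenating random words.
stream = [syl for _ in range(500) for syl in random.choice(words)]

# Count adjacent syllable pairs to estimate P(next | current).
pair_counts = Counter(zip(stream, stream[1:]))
syl_counts = Counter(stream[:-1])

def transition_prob(a, b):
    """Estimated probability that syllable b follows syllable a."""
    return pair_counts[(a, b)] / syl_counts[a] if syl_counts[a] else 0.0

# Within-word transitions approach 1.0; across-word transitions are lower.
# This contrast is the regularity that statistical learning tracks.
print(transition_prob("muh", "keh"))   # within-word: high
print(transition_prob("keh", "beh"))   # across a word boundary: lower
```

A "foreign" test word in this framing is simply one built from syllable pairs with low transitional probability in the familiarization stream, which is why learners who tracked the statistics can reject it.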
The result? GC synesthetes scored higher at distinguishing between the two “languages” than other participants in the experiments.
“You can think about it this way: The GC synesthetes have twice the number of senses providing the same information,” says Forest. Not only are they sensing the patterns of syllables by listening, they are also sensing those patterns using synesthesia-evoked colours.
“This result is important for thinking about how we learn,” says Forest, “because real-world learning outside the lab uses multiple senses. Our result sheds light on the learning mechanisms in the brain that might be supporting statistical learning.”
One focus of Finn’s overall research is understanding how children and adults learn differently.
“This result has clear implications for understanding how children and adults might learn differently,” says Finn.
“This study shows that you can learn more when you have redundant information, that is, information perceived through more than one sense.
“This could be truer in infants if they are synesthetes – though I don’t think the evidence is very clear on that question yet. But, it’s something that could be more true in kids because their developing attentional systems make them less able to focus on only one thing at a time. When this information is redundant, this broader focus could be quite useful for learning.”
[divider]About this neuroscience research article[/divider]
Source: University of Toronto
Media Contacts: Chris Sasaki – University of Toronto
Image Source: The image is adapted from the University of Toronto news release.
Original Research: Closed access. “Superior learning in synesthetes: Consistent grapheme-color associations facilitate statistical learning” by Tess Allegra Forest, Alessandra Lichtenfeld, Bryan Alvarez, and Amy S. Finn in Cognition. doi:10.1016/j.cognition.2019.02.003
Superior learning in synesthetes: Consistent grapheme-color associations facilitate statistical learning
In synesthesia, activation in one sensory domain, such as smell or sound, triggers an involuntary and unusual secondary sensory or cognitive experience. In the present study, we ask whether the added sensory experience of synesthesia can aid statistical learning—the ability to track environmental regularities in order to segment continuous information. To investigate this, we measured statistical learning outcomes, using an aurally presented artificial language, in two groups of synesthetes alongside controls, and simulated the multimodal experience of synesthesia in non-synesthetes. One group of synesthetes exclusively had grapheme-color (GC) synesthesia, in which the experience of color is automatically triggered by exposure to written or spoken graphemes. The other group had both grapheme-color and sound-color (SC+) synesthesia, in which the experience of color is also triggered by the waveform properties of a voice, such as pitch, timbre, and/or musical chords. Unlike GC-only synesthetes, the experience of color in the SC+ group is not perfectly consistent with the statistics that signal word boundaries. We showed that GC-only synesthetes outperformed both non-synesthetes and SC+ synesthetes, likely because the visual concurrents for GC-only synesthetes are highly consistent with the artificial language. We further observed that our simulations of GC synesthesia, but not SC+ synesthesia, produced superior statistical learning, showing that synesthesia likely boosts learning outcomes by providing a consistent secondary cue. Findings are discussed with regard to how multimodal experience can improve learning, with the present data indicating that this boost is more likely to occur through explicit, as opposed to implicit, learning systems.
[divider]Feel free to share this Neuroscience News.[/divider]