Summary: Once a word is known, sounding it out is unnecessary, a new study reports.
Source: Georgetown University Medical Center.
Georgetown neuroscientists say once a word is known, sounding it out is not necessary.
Skilled readers can quickly recognize words as they read because each word has been placed in a visual dictionary of sorts, which functions separately from an area that processes the sounds of written words, say Georgetown University Medical Center (GUMC) neuroscientists. The visual dictionary idea rebuts a common theory that our brain needs to “sound out” words each time we see them.
This finding, published online today in Neuroimage, matters because unraveling how the brain solves the complex task of reading can help in uncovering the brain basis of reading disorders, such as dyslexia, say the scientists.
“Beginning readers have to sound out words as they read, which makes reading a very long and laborious process,” says the study’s lead investigator, Laurie Glezer, PhD, a postdoctoral research fellow. The research was conducted in the Laboratory for Computational Cognitive Neuroscience at GUMC, led by Maximilian Riesenhuber, PhD.
“Even skilled readers occasionally have to sound out words they do not know. But once you become a fluent, skilled reader, you no longer have to sound out words you are familiar with; you can read them instantly,” Glezer explains. “We show that the brain has regions that specialize in each of the components of reading. The area that processes the visual piece is different from the area that does the sounding-out piece.”
Glezer and her co-authors tested word recognition in 27 volunteers in two different fMRI experiments. They were able to see that words that are spelled differently but sound the same, such as “hare” and “hair,” activate different neurons, akin to accessing different entries in a dictionary’s catalogue.
“If the sounds of the word had influence in this part of the brain we would expect to see that they activate the same or similar neurons, but this was not the case — ‘hair’ and ‘hare’ looked just as different as ‘hair’ and ‘soup.’”
Glezer says this suggests that this region of the brain uses only the visual information of a word, not its sounds. In addition, the researchers found a distinct second region that was sensitive to the sounds, where “hair” and “hare” did look the same.
“This suggests that one region is doing the visual piece and the other is doing the sound piece,” explains Riesenhuber.
“One camp of neuroscientists believes that we access both the phonology and the visual perception of a word as we read it, and that the area or areas of the brain that do one also do the other, but our study suggests this isn’t the case,” says Glezer.
Riesenhuber says that these findings might help explain why people with dyslexia have slower, more labored reading. “Because of phonological processing problems in dyslexia, establishing a finely tuned system that can quickly and efficiently learn and recognize words might be difficult or impossible,” he says.
About this neurology research article
Other Georgetown authors include Guinevere Eden, DPhil, director of Georgetown’s Center for the Study of Learning, and Xiong Jiang, PhD, director of the Cognitive Neuroimaging Laboratory, and Judy Kim. Additional authors include Megan Luetje and Eileen Napoliello of San Diego State University.
Funding: The authors report no personal financial interests related to the study. This study was funded by the National Science Foundation and the Eunice Kennedy Shriver National Institute of Child Health and Human Development.
Source: Karen Teber – Georgetown University Medical Center. Adapted from the Georgetown University press release.
Image Source: This NeuroscienceNews.com image is Woman Reading a Novel, by Vincent van Gogh, 1888.
Original Research: Abstract for “Uncovering phonological and orthographic selectivity across the reading network using fMRI-RA” by Laurie S. Glezer, Guinevere Eden, Xiong Jiang, Megan Luetje, Eileen Napoliello, Judy Kim, and Maximilian Riesenhuber in NeuroImage. Published online May 29, 2016. doi:10.1016/j.neuroimage.2016.05.072
Uncovering phonological and orthographic selectivity across the reading network using fMRI-RA
Reading has been shown to rely on a dorsal brain circuit involving the temporoparietal cortex (TPC) for grapheme-to-phoneme conversion of novel words (Pugh et al., 2001), and a ventral stream involving left occipitotemporal cortex (OTC) (in particular in the so-called “visual word form area”, VWFA) for visual identification of familiar words. In addition, portions of the inferior frontal cortex (IFC) have been posited to be an output of the dorsal reading pathway involved in phonology. While this dorsal versus ventral dichotomy for phonological and orthographic processing of words is widely accepted, it is not known if these brain areas are actually strictly sensitive to orthographic or phonological information. Using an fMRI rapid adaptation technique we probed the selectivity of the TPC, OTC, and IFC to orthographic and phonological features during single word reading. We found in two independent experiments using different task conditions in adult normal readers, that the TPC is exclusively sensitive to phonology and the VWFA in the OTC is exclusively sensitive to orthography. The dorsal IFC (BA 44), however, showed orthographic but not phonological selectivity. These results support the theory that reading involves a specific phonological-based temporoparietal region and a specific orthographic-based ventral occipitotemporal region. The dorsal IFC, however, was not sensitive to phonological processing, suggesting a more complex role for this region.