Findings could be helpful for stroke patients.
A new study by researchers at the University of Texas at Dallas indicates that watching 3-D images of tongue movements can help individuals learn speech sounds.
According to Dr. William Katz, co-author of the study and professor at UT Dallas’ Callier Center for Communication Disorders, the findings could be especially helpful for stroke patients seeking to improve their speech articulation.
“These results show that individuals can be taught consonant sounds in part by watching 3-D tongue images,” said Katz, who teaches in the UT Dallas School of Behavioral and Brain Sciences. “But we also are seeking to use visual feedback to get at the underlying nature of apraxia and other related disorders.”
The study, which appears in the journal Frontiers in Human Neuroscience, was small but showed that participants produced a novel speech sound more accurately after visual feedback training.
Katz is among the first researchers to suggest that visual feedback on tongue movements could help stroke patients recover speech.
Producing speech requires the brain to translate an intended message into a plan that the muscles of the mouth and tongue can carry out. “People with apraxia of speech can have trouble with this process. They typically know what they want to say but have difficulty getting their speech plans to the muscle system, causing sounds to come out wrong,” Katz said.
“My original inspiration was to show patients their tongues, which would clearly show where sounds should and should not be articulated,” he said.
Recent advances allowed researchers to move from 2-D displays to the Opti-Speech system, which shows 3-D images of the tongue in real time. A previous UT Dallas research project determined that the Opti-Speech visual feedback system can reliably provide real-time feedback for speech learning.
Part of the new study looked at an effect called compensatory articulation: when the acoustic feedback speakers hear is rapidly shifted, they think they are making one sound with their mouths but hear feedback indicating they are making a different one.
Katz said people instantaneously shift their productions away from the direction the sound has been pushed. Then, if the shift is turned off, they overshoot.
“In our paradigm, we were able to visually shift people. Their tongues were making one sound but, little by little, we start shifting it,” Katz said. “People changed their sounds to match the tongue image.”
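To make that compensation-and-overshoot pattern concrete, below is a minimal toy simulation of shifted feedback. It is a sketch, not the study's method: the error-correction rule, learning rate, and formant values are illustrative assumptions. A speaker aims for a target sound, the sound they hear is shifted, they adjust away from the shift trial by trial, and when the shift is removed their productions briefly overshoot before drifting back.

```python
# Toy model of compensatory articulation (illustrative assumptions only;
# the update rule and parameters are not taken from the Katz & Mehta study).

TARGET_HZ = 700.0    # formant frequency the speaker intends to produce
LEARN_RATE = 0.3     # fraction of the perceived error corrected each trial

def simulate(shift_hz=100.0, shifted_trials=20, washout_trials=10):
    """Return (trial, produced, heard) tuples across shift and washout phases."""
    adjustment = 0.0                 # speaker's cumulative compensation, in Hz
    history = []
    for trial in range(shifted_trials + washout_trials):
        perturbation = shift_hz if trial < shifted_trials else 0.0
        produced = TARGET_HZ + adjustment   # what the mouth actually does
        heard = produced + perturbation     # what the shifted feedback reports
        error = heard - TARGET_HZ           # perceived mismatch from the target
        adjustment -= LEARN_RATE * error    # compensate away from the shift
        history.append((trial, produced, heard))
    return history

for trial, produced, heard in simulate():
    phase = "shifted" if trial < 20 else "washout"
    print(f"trial {trial:2d} ({phase}): produced {produced:6.1f} Hz, heard {heard:6.1f} Hz")
```

In the shifted phase, productions drift downward until what the speaker hears matches the target again; in the washout phase, productions start about 100 Hz below the target before drifting back, which is the overshoot, or aftereffect, Katz describes.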
Katz said the research results highlight the importance of body visualization as part of rehabilitation therapy, though he noted there is much more work to be done.
“We want to determine why visual feedback affects speech,” Katz said. “How much is due to compensating, versus mirroring (or entrainment)? Do some of the results come from people visually guiding their tongue to the right place, then having their sense of ‘mouth feel’ take over? What parts of the brain are likely involved?
“3-D imaging is opening an entirely new path for speech rehabilitation. Hopefully this work can be translated soon to help patients who desperately want to speak better.”
Funding: The Opti-Speech study was co-authored by Sonya Mehta, a doctoral student in Communication Sciences and Disorders, and was funded by the UT Dallas Office of Sponsored Projects, the Callier Center Excellence in Education Fund, and a grant awarded by the National Institute on Deafness and Other Communication Disorders.
Source: Phil Roth – UT Dallas
Image Source: The image is credited to UT Dallas.
Original Research: Abstract for “Visual feedback of tongue movement for novel speech sound learning” by William F. Katz and Sonya Mehta in Frontiers in Human Neuroscience. Published online November 17, 2015. doi:10.3389/fnhum.2015.00612
Abstract
Visual feedback of tongue movement for novel speech sound learning
Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.