Is There a Musical Method For Interpreting Speech?

Summary: A new study evaluates whether musicians have an advantage in understanding speech compared to those without musical training.

Source: Acoustical Society of America.

Cochlear implants are a common treatment for sensorineural hearing loss, which results from damage to the inner ear or the auditory nerve. The implanted device uses an electrode array, inserted into the cochlea, to stimulate auditory nerve fibers directly. However, the speech heard through a cochlear implant is spectrally degraded and can be difficult to understand. Vocoded speech, distorted speech that imitates the voice transduction performed by a cochlear implant, is used throughout acoustic and auditory research to explore speech comprehension under such conditions.
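For readers unfamiliar with the technique, the sketch below shows one common way to noise-vocode a recording: split the signal into a few frequency bands, extract each band's amplitude envelope, and use those envelopes to modulate band-limited noise. This is only an illustrative simulation of cochlear-implant-style degradation in general; the band count, frequency edges, and envelope cutoff are arbitrary assumptions, and this is not the stimulus-generation code used in the study.

```python
# Minimal noise-vocoder sketch (illustrative only, not the researchers' stimuli or code).
# Band edges and envelope cutoff are assumed values chosen for demonstration.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, band_edges=(100, 400, 1000, 2400, 6000), env_cutoff=30.0):
    """Return a noise-vocoded version of `speech` (1-D array sampled at `fs` Hz)."""
    speech = np.asarray(speech, dtype=float)
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(speech))            # broadband noise carrier
    env_lp = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(speech)

    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        speech_band = sosfiltfilt(band, speech)            # analysis band of the speech
        noise_band = sosfiltfilt(band, carrier)            # matching band of noise
        # Amplitude envelope: rectify, then low-pass filter.
        envelope = np.clip(sosfiltfilt(env_lp, np.abs(speech_band)), 0.0, None)
        out += envelope * noise_band                       # envelope-modulated noise band

    # Roughly match the overall level of the original signal.
    out *= np.linalg.norm(speech) / (np.linalg.norm(out) + 1e-12)
    return out
```

With only a handful of bands, the output preserves the slow rhythmic and amplitude cues of the original sentence while discarding most of its fine spectral detail, which is the kind of degradation the study's listeners had to cope with.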

Researchers Kieran E. Laursen, Sara L. Protko and Terry L. Gottfried from Lawrence University, along with collaborators Iain C. Williams and Tahnee Marquardt from the University of North Carolina at Wilmington and the University of Oxford, respectively, will present their work on the effect of musical experience on the ability to understand vocoded speech at the 174th Meeting of the Acoustical Society of America, being held Dec. 4-8, 2017, in New Orleans, Louisiana.

Musical ability, characterized by a person’s aptitude for playing an instrument, interpreting sound patterns, or recognizing different tones, has long been linked to higher cognitive capacity and better communication skills.

“We are testing to see if someone’s musicality or levels of musical experience affects their perceptions of vocoded speech,” Laursen said in an email. “So, the question lies in how does music affect one’s abilities to hear different pitches, intonations, and rhythms within distorted speech.”

“The acoustic information in vocoded speech is quite different from that of natural speech in the presence of noise,” said Gottfried. The rhythmic patterns of natural speech are often maintained in vocoded speech, so musicians may have the upper hand at interpretation due to their experience with rhythm production. However, musicians may also fare similarly to nonmusicians because of the information lost in vocoding.

Gottfried has been researching speech perception and its relation to music since he was in graduate school. “Over the years, I’ve continued my studies of this relation between speech and music perception, and there’s been considerable recent research that suggests musical experience is related not only to improved second language speech perception, but also to improved phonetic perception in one’s first language and in better recognition of speech in noise,” he said, referring to a study on nonnative listeners’ perception of Mandarin tones.

Using a commercially available program called SuperLab, the researchers asked participants (both musicians and nonmusicians) to transcribe vocoded sentences and words. Participants were then trained on either vocoded or natural speech and asked to transcribe vocoded sentences again. The initial results showed that musicians had no significant advantage over nonmusicians in interpreting vocoded speech, although this may reflect limited variation in the sample.
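As a rough illustration of how such transcriptions might be scored, the hypothetical snippet below computes the fraction of target words a listener reproduced. The article does not describe the researchers’ actual scoring procedure, so the function and the example sentence are assumptions for illustration only.

```python
# Hypothetical word-level scoring of a transcription (not the study's actual method).
import re

def word_score(target: str, response: str) -> float:
    """Fraction of target words that appear in the listener's transcription."""
    norm = lambda s: re.findall(r"[a-z']+", s.lower())
    target_words = norm(target)
    response_words = set(norm(response))
    if not target_words:
        return 0.0
    hits = sum(1 for w in target_words if w in response_words)
    return hits / len(target_words)

# Example: a partially correct transcription of a vocoded sentence.
print(word_score("the boy ran down the hill", "a boy ran down a hill"))  # ~0.67
```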

Image: street musicians. NeuroscienceNews.com image is in the public domain.

“Both groups scored well above chance on the Musical Ear Test, so it’s possible that, if we tested listeners with very poor musical ears, they would also not do so well on the vocoded speech,” Gottfried said. He also noted that the results are still useful in assessing the extent to which musical experience may relate to the perception of degraded speech.

The applications of this research extend beyond vocoded speech to other kinds of degraded listening. Understanding natural speech in a noisy environment depends on interpreting rhythmic patterns and is acoustically similar to understanding vocoded speech. If musical experience improves comprehension of vocoded speech, it may also help day-to-day speech interpretation in noisy environments.

About this neuroscience research article

Source: Julia Majors – Acoustical Society of America
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is in the public domain.
Original Research: The study was presented at the 174th meeting of the Acoustical Society of America.

Cite This NeuroscienceNews.com Article

MLA: Acoustical Society of America. “Is There a Musical Method For Interpreting Speech?” NeuroscienceNews, 9 December 2017. <https://neurosciencenews.com/music-speech-8145/>.
APA: Acoustical Society of America. (2017, December 9). Is There a Musical Method For Interpreting Speech? NeuroscienceNews. Retrieved December 9, 2017 from https://neurosciencenews.com/music-speech-8145/
Chicago: Acoustical Society of America. “Is There a Musical Method For Interpreting Speech?” https://neurosciencenews.com/music-speech-8145/ (accessed December 9, 2017).
