Cracking the Code of Sound Recognition: Machine Learning Model Reveals How Our Brains Understand Communication Sounds

Summary: Researchers developed a machine learning model that mimics how the brains of social animals distinguish between categories of sounds, such as calls signaling mating, food or danger, and react accordingly.

The algorithm helps explain how our brains recognize the meaning of communication sounds, such as spoken words or animal calls, providing crucial insight into the intricacies of neuronal processing.

Insights from the research pave the way for treating disorders that affect speech recognition and improving hearing aids.

Key Facts:

  1. Researchers at the University of Pittsburgh have developed a machine-learning model that helps explain how the brain recognizes the meaning of communication sounds, such as spoken words or animal calls.
  2. The algorithm sheds light on the intricacies of the neuronal processing that underlies sound recognition and offers crucial insight into the process of vocal communication.
  3. The insights from this work pave the way for treating disorders that affect speech recognition, improving hearing aids, and understanding how our brains take ideas and convey them to one another through sound.

Source: University of Pittsburgh

In a paper published today in Communications Biology, auditory neuroscientists at the University of Pittsburgh describe a machine-learning model that helps explain how the brain recognizes the meaning of communication sounds, such as animal calls or spoken words.  

The algorithm described in the study models how social animals, including marmoset monkeys and guinea pigs, use sound-processing networks in their brain to distinguish between sound categories – such as calls for mating, food or danger — and act on them.  

The study is an important step toward understanding the intricacies and complexities of neuronal processing that underlies sound recognition. The insights from this work pave the way for understanding, and eventually treating, disorders that affect speech recognition, and improving hearing aids.  

“More or less everyone we know will lose some of their hearing at some point in their lives, either as a result of aging or exposure to noise. Understanding the biology of sound recognition and finding ways to improve it is important,” said senior author and Pitt assistant professor of neurobiology Srivatsun Sadagopan, Ph.D.

“But the process of vocal communication is fascinating in and of itself. The ways our brains interact with one another and can take ideas and convey them through sound is nothing short of magical.”  

Humans and animals encounter an astounding diversity of sounds every day, from the cacophony of the jungle to the hum inside a busy restaurant.

No matter how noisy the world around us, humans and other animals can still communicate and understand one another, regardless of differences such as the pitch of a speaker's voice or their accent.

When we hear the word “hello,” for example, we recognize its meaning regardless of whether it was said with an American or British accent, whether the speaker is a woman or a man, or if we’re in a quiet room or busy intersection. 

The team started with the intuition that the way the human brain recognizes and captures the meaning of communication sounds may be similar to the way it recognizes faces among other objects. Faces are highly diverse but have some common characteristics. 

Instead of matching every face that we encounter to some perfect “template” face, our brain picks up on useful features, such as the eyes, nose and mouth, and their relative positions, and creates a mental map of these small characteristics that define a face. 

In a series of studies, the team showed that communication sounds may also be made up of such small characteristics.

The researchers first built a machine learning model of sound processing to recognize the different sounds made by social animals. To test if brain responses corresponded with the model, they recorded brain activity from guinea pigs listening to their kin’s communication sounds.
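The published model is hierarchical: it learns which small, intermediate-complexity sound features are most informative about each call category and detects them in a spectrotemporal representation of the input. As a loose illustration of that general idea only (the spectrogram settings, feature templates and threshold below are placeholders, not the authors' implementation), a call can be scored by how many category-specific feature templates it contains:

```python
# Illustrative sketch of feature-based call categorization (not the published model).
# A call is assigned to the category whose small spectrotemporal "feature" templates
# match its spectrogram most often, rather than being compared to a whole-call template.
import numpy as np
from scipy.signal import spectrogram, correlate2d

def feature_votes(audio, sr, feature_bank, threshold=0.7):
    """Count, per category, how many of its feature templates are detected in the call."""
    _, _, spec = spectrogram(audio, fs=sr, nperseg=256, noverlap=192)
    spec = np.log(spec + 1e-10)                                 # compress dynamic range
    spec = (spec - spec.mean()) / (spec.std() + 1e-10)          # normalize

    votes = {}
    for category, templates in feature_bank.items():            # e.g. "mating", "danger"
        hits = 0
        for template in templates:                               # small spectrotemporal patch
            t = (template - template.mean()) / (template.std() + 1e-10)
            response = correlate2d(spec, t, mode='valid')        # slide template over the call
            if response.max() / t.size > threshold:              # strong local match = detection
                hits += 1
        votes[category] = hits
    return votes

def categorize(audio, sr, feature_bank):
    votes = feature_votes(audio, sr, feature_bank)
    return max(votes, key=votes.get)                              # most feature detections wins
```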


Neurons in sound-processing regions of the brain lit up with a flurry of electrical activity when the animals heard a sound containing features characteristic of specific call types, mirroring the responses of the machine learning model. 

They then wanted to check the performance of the model against the real-life behavior of the animals.  

Guinea pigs were put in an enclosure and exposed to different categories of sounds — squeaks and grunts that are categorized as distinct sound signals. Researchers then trained the guinea pigs to walk over to different corners of the enclosure and receive fruit rewards depending on which category of sound was played.  

Then, they made the tasks harder: To mimic the way humans recognize the meaning of words spoken by people with different accents, the researchers ran guinea pig calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, or adding noise and echoes. 
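The article does not name the sound-altering software; as one way to reproduce the same kinds of manipulations, the sketch below uses librosa and NumPy, with arbitrary example parameters rather than the values used in the experiments:

```python
# Rough sketch of the call manipulations described above (example parameters only).
import numpy as np
import librosa

def alter_call(y, sr):
    """Return several altered versions of a recorded call `y` sampled at `sr` Hz."""
    versions = {}
    versions['faster'] = librosa.effects.time_stretch(y, rate=1.25)          # speed up
    versions['slower'] = librosa.effects.time_stretch(y, rate=0.8)           # slow down
    versions['higher'] = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)    # raise pitch
    versions['lower']  = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)   # lower pitch

    # additive background noise at roughly 10 dB signal-to-noise ratio
    noise = np.random.randn(len(y))
    noise *= np.sqrt(np.mean(y**2) / (10**(10 / 10) * np.mean(noise**2) + 1e-12))
    versions['noisy'] = y + noise

    # a crude echo: add a delayed, attenuated copy of the call
    delay = int(0.1 * sr)                                                     # 100 ms
    echoed = np.zeros(len(y) + delay)
    echoed[:len(y)] += y
    echoed[delay:] += 0.5 * y
    versions['echo'] = echoed
    return versions

# y, sr = librosa.load('call.wav', sr=None)   # hypothetical input recording
# altered = alter_call(y, sr)
```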

Not only were the animals able to perform the task as consistently as if the calls they heard were unaltered, they continued to perform well despite artificial echoes or noise. Better yet, the machine learning model described their behavior (and the underlying activation of sound-processing neurons in the brain) perfectly.  

As a next step, the researchers are working to translate the model's accuracy from animal calls to human speech. 

“From an engineering viewpoint, there are much better speech recognition models out there. What’s unique about our model is that we have a close correspondence with behavior and brain activity, giving us more insight into the biology.

“In the future, these insights can be used to help people with neurodevelopmental conditions or to help engineer better hearing aids,” said lead author Satyabrata Parida, Ph.D., postdoctoral fellow at Pitt’s department of neurobiology. 

“A lot of people struggle with conditions that make it hard for them to recognize speech,” said Manaswini Kar, a student in the Sadagopan lab.

“Understanding how a neurotypical brain recognizes words and makes sense of the auditory world around it will make it possible to understand and help those who struggle.”  

About this machine learning and AI research news

Author: Anastasia Gorelova
Source: University of Pittsburgh
Contact: Anastasia Gorelova – University of Pittsburgh
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model” by Srivatsun Sadagopan et al. Communications Biology


Abstract

Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model

For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation).

We previously demonstrated using guinea pig and marmoset vocalizations that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation.

Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage and (3) sensitivity adjustment at the feature detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type.

One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task.

These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieve robust auditory categorization.
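As a rough illustration only (these functions and parameter choices are assumptions, not the paper's code), the three adaptive mechanisms described in the abstract could slot into a feature-based categorizer along these lines:

```python
# Illustrative placements of the three adaptive mechanisms from the abstract.
import numpy as np

def augment_with_degradations(clean_calls, snr_db=10.0):
    """Mechanism 1: train on degraded inputs (sketched here as simple additive noise)."""
    degraded = []
    for y in clean_calls:
        noise = np.random.randn(len(y))
        noise *= np.sqrt(np.mean(y**2) / (10**(snr_db / 10) * np.mean(noise**2) + 1e-12))
        degraded.append(y + noise)
    return clean_calls + degraded

def adapt_to_sound_statistics(spec):
    """Mechanism 2: adaptation at the spectrotemporal stage, sketched as per-channel
    gain normalization to the ongoing stimulus statistics."""
    mean = spec.mean(axis=1, keepdims=True)
    std = spec.std(axis=1, keepdims=True) + 1e-10
    return (spec - mean) / std

def adjust_feature_sensitivity(feature_responses, baseline_threshold=0.7):
    """Mechanism 3: sensitivity adjustment at the feature-detection stage, sketched as
    relaxing the detection threshold when overall feature drive is weak."""
    drive = float(np.mean(feature_responses))
    threshold = baseline_threshold * min(1.0, drive / 0.5)   # illustrative adjustment rule
    return feature_responses > threshold
```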
