Summary: When we imagine music in our heads, our auditory cortex and other brain regions process auditory information in the same way as when we really listen to sounds, a new study reports.
Researchers at EPFL can now see what happens in our brains when we hear music in our heads. The researchers hope that in time their findings will be used to help people who have lost the ability to speak.
When we listen to music, different parts of our brain process different information – such as high and low frequencies – so that our auditory perception of the sounds matches what we hear. It’s easy to study the brain activity of someone who is listening to a song, for instance, as we have the technology to record and analyze the neural responses that each sound produces as it is heard. It’s much more complicated, however, to try to understand what happens in our brain when we hear music in our heads without any auditory stimulation. As with analyzing real music, the brain’s responses have to be linked to a given sound. But when the music is in our heads, that sound doesn’t actually exist – or at least our ears don’t hear it. Using a novel approach, researchers with EPFL’s Defitech Foundation Chair in Human-Machine Interface (CNBI) were able to analyze what happens in our brains when we hum in our heads.
Recording an imaginary sound
EPFL researchers, in cooperation with a team from the University of California, Berkeley, worked with an epileptic patient who is also an experienced pianist. Initially, the patient was asked to play a piece of music on an electric piano with the sound turned on. The music and the corresponding brain activity were recorded. The patient then replayed the same piece, but this time the researchers asked him to imagine hearing the music in his head with the sound on the piano turned off. Once again, the brain activity and the music were recorded. The difference this second time around was that the music came from the mental representation made by the patient – the notes themselves were inaudible. By gathering information in these two different ways, the researchers were able to determine the brain activity produced for each sound, and then compare the data.
A totally new experiment
The experiment may seem simple, but in fact it’s truly one of a kind. “The technique used – electrocorticography – is extremely invasive. It involves implanting electrodes quite deep inside the patient’s brain,” explains Stéphanie Martin, lead author of the study and a doctoral student with the CNBI. “The technique is normally used to treat people with epilepsy who cannot take medication.” That’s why the researchers worked with this patient in particular. The electrodes, in addition to being used for treatment purposes, can measure brain activity with a very high spatial and temporal resolution – a necessity given just how rapid neuron responses are.
Possible future language-related applications
This is the first time a study has demonstrated that when we imagine music in our heads, the auditory cortex and other parts of the brain process auditory information, such as high and low frequencies, in the same way as they do when stimulated by real sound. The findings have been published in the journal Cerebral Cortex. The researchers mapped out the parts of the brain covered by the electrodes based on their function in this process and their reactions to both audible and imaginary sounds. The scientists’ aim is to one day apply these findings to language, such as for people who have lost their ability to speak. “We are at the very early stages of this research. Language is a much more complicated system than music: linguistic information is non-universal, which means it is processed by the brain in a number of stages,” explains Martin. “This recording technique is invasive, and the technology needs to be more advanced for us to be able to measure brain activity with greater accuracy.” While more research needs to be done, a first step for researchers will be to replicate these results with aphasia patients – people who have lost the ability to speak – and determine whether the sounds they imagine can be recreated. The researchers hope their findings will eventually help such individuals speak again by ‘reading’ their internal speech and reproducing it vocally.
About this neuroscience research article
This study was carried out in cooperation with the following universities: University of California, Berkeley; University Hospital of Psychiatry, Bern; Inselspital, Bern; University of California, San Francisco; Freie Universität, Berlin; École normale supérieure, Paris; and the University of Maryland in College Park, USA.
Source: EPFL. Publisher: Organized by NeuroscienceNews.com. Image Source: NeuroscienceNews.com image is credited to the researchers. Original Research: Abstract for “Neural Encoding of Auditory Features during Music Perception and Imagery” by Stephanie Martin, Christian Mikutta, Matthew K Leonard, Dylan Hungate, Stefan Koelsch, Shihab Shamma, Edward F Chang, José del R Millán, Robert T Knight, and Brian N Pasley in Cerebral Cortex. Published online October 27, 2017. doi:10.1093/cercor/bhx277
Neural Encoding of Auditory Features during Music Perception and Imagery
Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and the participant was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70–150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
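The encoding-model approach the abstract describes – predicting high-gamma neural activity from a spectrogram representation of the sound – can be sketched as a ridge regression on time-lagged spectrogram features, which is one standard way spectrotemporal receptive fields are estimated. The sketch below uses synthetic data; the function names, lag count, and regularization value are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def lagged_design(spec, n_lags):
    """Stack the spectrogram at delays 0..n_lags-1 so each time point
    sees a short window of preceding spectral frames."""
    n_t, n_f = spec.shape
    X = np.zeros((n_t, n_f * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_f:(lag + 1) * n_f] = spec[:n_t - lag]
    return X

def fit_strf(spec, hg, n_lags=5, alpha=1.0):
    """Ridge-regression encoding model: high-gamma ~ lagged spectrogram."""
    X = lagged_design(spec, n_lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ hg)

# --- toy demonstration with synthetic data (not the study's data) ---
rng = np.random.default_rng(0)
n_t, n_f, n_lags = 500, 8, 5
spec = rng.standard_normal((n_t, n_f))       # mock auditory spectrogram
true_w = rng.standard_normal(n_f * n_lags)   # hidden "receptive field"
hg = lagged_design(spec, n_lags) @ true_w + 0.1 * rng.standard_normal(n_t)

w_hat = fit_strf(spec, hg, n_lags=n_lags)
pred = lagged_design(spec, n_lags) @ w_hat
r = np.corrcoef(pred, hg)[0, 1]              # prediction accuracy
print(round(r, 2))
```

In the study itself, models like this were fit separately for the perception and imagery conditions and the recovered frequency tuning and electrode locations compared; the sketch only illustrates the regression machinery.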