That’s Music To My Brain

Summary: A new study reveals machine learning technology is able to analyze brain activity and determine which piece of music a person is listening to.

Source: D’Or Institute for Research and Education.

It may sound like science fiction, but mind-reading equipment is much closer to becoming a reality than most people imagine. A new study carried out at the D’Or Institute for Research and Education used a magnetic resonance (MR) scanner to read participants’ minds and find out which song they were listening to. The study, published in Scientific Reports, contributes to the improvement of the technique and paves the way for new research on the reconstruction of auditory imagination and inner speech. In the clinical domain, it could enhance brain-computer interfaces used to establish communication with patients with locked-in syndrome.

In the experiment, six volunteers listened to 40 pieces of classical music, rock, pop, jazz, and other genres. The neural fingerprint of each song on the participants’ brains was captured by the MR machine while a computer learned to identify the brain patterns elicited by each musical piece. Musical features such as tonality, dynamics, rhythm, and timbre were taken into account by the computer.
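To make the learning step more concrete, here is a minimal, hypothetical sketch (in Python, with synthetic data) of a voxel-wise encoding model of the kind the study describes: a regularized linear regression that maps musical feature values to fMRI responses. The feature set, data dimensions, and use of scikit-learn's Ridge are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch of the "learning" stage: a voxel-wise encoding model
# that maps musical features (e.g. tonality, dynamics, rhythm, timbre
# descriptors) to fMRI responses. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints, n_features, n_voxels = 200, 8, 500
features = rng.standard_normal((n_timepoints, n_features))   # musical descriptors per fMRI volume
true_weights = rng.standard_normal((n_features, n_voxels))
bold = features @ true_weights + 0.5 * rng.standard_normal((n_timepoints, n_voxels))  # simulated BOLD signal

# Fit one linear model per voxel (Ridge handles all voxels at once).
encoding_model = Ridge(alpha=1.0).fit(features, bold)

# The fitted weights act as a "neural fingerprint" of how each musical
# feature modulates each voxel's activity.
predicted_bold = encoding_model.predict(features)
print("training correlation (voxel 0):",
      np.corrcoef(bold[:, 0], predicted_bold[:, 0])[0, 1])
```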

Next, the researchers tested whether the computer could do the opposite: identify which song participants were listening to based only on their brain activity, a technique known as brain decoding. When confronted with two options, the computer identified the correct song with up to 85% accuracy, a strong performance compared with previous studies.
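The two-alternative decoding test can be illustrated with a short, hypothetical sketch: the encoding model predicts the brain response expected for each candidate song, and the candidate whose prediction correlates better with the measured response is chosen. The correlation-based matching rule below is an assumption made for illustration, not necessarily the study's exact decision rule.

```python
# Hypothetical sketch of the two-alternative "brain decoding" test.
import numpy as np

def identify(measured, predicted_a, predicted_b):
    """Return 'A' or 'B' depending on which predicted response pattern
    correlates better with the measured brain response."""
    r_a = np.corrcoef(measured.ravel(), predicted_a.ravel())[0, 1]
    r_b = np.corrcoef(measured.ravel(), predicted_b.ravel())[0, 1]
    return "A" if r_a >= r_b else "B"
```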

The researchers then made the test even harder by giving the computer not two but ten options (one correct and nine wrong). In this scenario, the computer correctly identified the song in 74% of the decisions.
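The same matching idea extends naturally to the harder ten-alternative test. The sketch below (again hypothetical, with placeholder data shapes) simply ranks all candidate predictions by correlation and picks the best match; accuracy is then the fraction of trials in which the best match is the song actually being heard.

```python
# Hypothetical extension of the matching rule to N candidate songs.
import numpy as np

def identify_among(measured, candidate_predictions):
    """candidate_predictions: list of predicted response patterns, one per song.
    Returns the index of the best-matching candidate."""
    scores = [np.corrcoef(measured.ravel(), pred.ravel())[0, 1]
              for pred in candidate_predictions]
    return int(np.argmax(scores))
```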


In the future, studies on brain decoding and machine learning will create possibilities for communication independent of any written or spoken language. “Machines will be able to translate our musical thoughts into songs,” says Sebastian Hoefle, a researcher at the D’Or Institute and a PhD student at the Federal University of Rio de Janeiro, Brazil. The study is the result of a collaboration between Brazilian researchers and colleagues from Germany, Finland, and India.

According to Hoefle, brain decoding research provides alternative ways to understand neural functioning and to interact with it using artificial intelligence. In the future, he expects to find answers to questions such as “What musical features make some people love a song while others don’t? Is our brain adapted to prefer a specific kind of music?”

About this neuroscience research article

Funding: The study was funded by the D’Or Institute for Research and Education and the Rio de Janeiro Research Foundation.

Source: Cinthia Fonseca – D’Or Institute for Research and Education
Publisher: Organized by NeuroscienceNews.com.
Image Source: NeuroscienceNews.com image is in the public domain.
Original Research: Open access research in Scientific Reports.
doi:10.1038/s41598-018-20732-3

Cite This NeuroscienceNews.com Article

MLA: D’Or Institute for Research and Education. “That’s Music To My Brain.” NeuroscienceNews, 5 February 2018. <https://neurosciencenews.com/music-brain-ai-8423/>.
APA: D’Or Institute for Research and Education. (2018, February 5). That’s Music To My Brain. NeuroscienceNews. Retrieved February 5, 2018 from https://neurosciencenews.com/music-brain-ai-8423/
Chicago: D’Or Institute for Research and Education. “That’s Music To My Brain.” https://neurosciencenews.com/music-brain-ai-8423/ (accessed February 5, 2018).


Abstract

Identifying musical pieces from fMRI data using encoding and decoding models

Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimuli duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
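The abstract’s point about Shannon entropy can be made concrete with a small sketch: entropy quantifies how unpredictable (information-rich) a discretized feature time series is, and the study reports that pieces with higher information content were decoded more accurately. The histogram-binning scheme below is an assumption for illustration, not the paper’s exact procedure.

```python
# Hypothetical sketch: Shannon entropy (in bits) of a discretized 1-D signal,
# e.g. a musical feature time series. Binning choices are illustrative.
import numpy as np

def shannon_entropy(signal, n_bins=16):
    """Entropy of a 1-D signal after histogram discretization."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
print(shannon_entropy(rng.standard_normal(1000)))   # higher values ≈ richer information content
```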
