
All in the Mind: Decoding Brainwaves to Identify the Music We Are Listening To

Summary: Combining fMRI and EEG data, researchers recorded the neural activity of people while they listened to a piece of music. Using machine learning, they translated the data to reconstruct and identify the specific piece of music the test subjects were listening to.

Source: University of Essex

A new technique for monitoring brain waves can identify the music someone is listening to.

Researchers at the University of Essex hope the project could eventually help people with severe communication disabilities, such as those with locked-in syndrome or stroke survivors, by decoding the language signals within their brains through non-invasive techniques.

Dr Ian Daly, from Essex’s School of Computer Science and Electronic Engineering who led the research, said: “This method has many potential applications. We have shown we can decode music, which suggests that we may, one day, be able to decode language from the brain.”

Essex scientists wanted to find a less invasive way of decoding acoustic information from signals in the brain to identify and reconstruct a piece of music someone was listening to.

Whilst there have been successful previous studies monitoring and reconstructing acoustic information from brain waves, many have used more invasive methods such as electrocorticography (ECoG), which involves placing electrodes inside the skull, directly on the surface of the brain.

The research, published in the journal Scientific Reports, used a combination of two non-invasive methods – fMRI, which measures blood flow through the entire brain, and electroencephalogram (EEG), which measures what is happening in the brain in real time – to monitor a person’s brain activity whilst listening to a piece of music.

Using a deep learning neural network model, the researchers translated the data to reconstruct and identify the piece of music.

Music is a complex acoustic signal, sharing many similarities with natural language, so the model could potentially be adapted to translate speech. The eventual goal of this strand of research would be to translate thought, which could offer an important aid in the future for people who struggle to communicate, such as those with locked-in syndrome.


Dr Daly added: “One application is brain-computer interfacing (BCI), which provides a communication channel directly between the brain and a computer. Obviously, this is a long way off but eventually we hope that if we can successfully decode language, we can use this to build communication aids, which is another important step towards the ultimate aim of BCI research and could, one day, provide a lifeline for people with severe communication disabilities.”

The research involved the re-use of fMRI and EEG data originally collected as part of a previous project at the University of Reading, from participants listening to a series of 40-second pieces of simple piano music drawn from a set of 36 pieces that differed in tempo, pitch, harmony and rhythm. Using these combined data sets, the model was able to identify the piece of music with a success rate of 71.8%.

About this music and neuroscience research news

Author: Ben Hall
Source: University of Essex
Contact: Ben Hall – University of Essex
Image: The image is in the public domain

Original Research: Open access.
“Neural decoding of music from the EEG” by Ian Daly et al. Scientific Reports


Abstract

Neural decoding of music from the EEG

Neural decoding models can be used to decode neural representations of visual, acoustic, or semantic information. Recent studies have demonstrated neural decoders that are able to decode acoustic information from a variety of neural signal types including electrocorticography (ECoG) and the electroencephalogram (EEG).

In this study we explore how functional magnetic resonance imaging (fMRI) can be combined with EEG to develop an acoustic decoder. Specifically, we first used a joint EEG-fMRI paradigm to record brain activity while participants listened to music.

We then used fMRI-informed EEG source localisation and a bi-directional long short-term memory (biLSTM) deep learning network to first extract neural information from the EEG related to music listening and then to decode and reconstruct the individual pieces of music a participant was listening to. We further validated our decoding model by evaluating its performance on a separate dataset of EEG-only recordings.
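For readers who want a concrete picture of the decoder stage, the sketch below shows how a bidirectional LSTM could map a window of EEG source time-series to an audio amplitude envelope. It is a minimal illustration under stated assumptions, not the authors' published code: the channel count, window length, hidden size, training loss, and use of PyTorch are all choices made for the example.

```python
# Minimal sketch (not the published code) of a bidirectional LSTM decoder that
# maps a window of EEG source time-series to an audio amplitude envelope.
# Layer sizes, window length, and channel count are illustrative assumptions.
import torch
import torch.nn as nn

class EEGToAudioDecoder(nn.Module):
    def __init__(self, n_sources=20, hidden_size=64):
        super().__init__()
        # Bidirectional LSTM reads the EEG source activity in both time directions.
        self.lstm = nn.LSTM(input_size=n_sources, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        # Linear readout maps each time step to one sample of the audio envelope.
        self.readout = nn.Linear(2 * hidden_size, 1)

    def forward(self, eeg):                  # eeg: (batch, time, n_sources)
        features, _ = self.lstm(eeg)         # (batch, time, 2 * hidden_size)
        return self.readout(features).squeeze(-1)  # (batch, time) envelope estimate

# Toy usage: one training step on random tensors standing in for real recordings.
model = EEGToAudioDecoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
eeg = torch.randn(8, 250, 20)        # 8 windows, 250 time steps, 20 source signals
envelope = torch.randn(8, 250)       # target audio envelope for each window
loss = nn.functional.mse_loss(model(eeg), envelope)
loss.backward()
optimiser.step()
```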

We were able to reconstruct music, via our fMRI-informed EEG source analysis approach, with a mean rank accuracy of 71.8% (n = 18, p < 0.05). Using only EEG data, without participant-specific fMRI-informed source analysis, we were able to identify the music a participant was listening to with a mean rank accuracy of 59.2% (n = 19, p < 0.05).
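As a rough illustration of how a "mean rank accuracy" of this kind can be scored, the sketch below correlates each reconstructed envelope with every candidate piece and measures how highly the true piece ranks, with chance level at roughly 50%. The correlation-based ranking and the toy data are assumptions for illustration; the exact scoring in the paper may differ.

```python
# Minimal sketch (assumed scoring scheme, not the published evaluation code) of
# mean rank accuracy: correlate each reconstructed envelope with every candidate
# piece and score what fraction of candidates the true piece outranks.
import numpy as np

def mean_rank_accuracy(reconstructed, candidates, true_idx):
    """reconstructed: (n_trials, T); candidates: (n_pieces, T); true_idx: (n_trials,)"""
    scores = []
    for recon, truth in zip(reconstructed, true_idx):
        # Pearson correlation between the reconstruction and every candidate piece.
        r = np.array([np.corrcoef(recon, c)[0, 1] for c in candidates])
        # Rank of the true piece: 1.0 if it correlates best, 0.0 if worst.
        rank = (r[truth] > r[np.arange(len(r)) != truth]).mean()
        scores.append(rank)
    return float(np.mean(scores))

# Toy example with random data: 36 candidate pieces, 18 trials.
rng = np.random.default_rng(0)
candidates = rng.standard_normal((36, 1000))
true_idx = rng.integers(0, 36, size=18)
reconstructed = candidates[true_idx] + rng.standard_normal((18, 1000))
print(mean_rank_accuracy(reconstructed, candidates, true_idx))
```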

This demonstrates that our decoding model may use fMRI-informed source analysis to aid EEG-based decoding and reconstruction of acoustic information from brain activity, and takes a step towards building EEG-based neural decoders for other complex information domains, such as other acoustic, visual, or semantic information.
