Summary: Researchers have developed a new technique that uses EEG data to reconstruct images based on how we perceive faces.
Source: University of Toronto.
A new technique developed by neuroscientists at the University of Toronto Scarborough can, for the first time, reconstruct images of what people perceive based on their brain activity gathered by EEG.
The technique developed by Dan Nemrodov, a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough, is able to digitally reconstruct images seen by test subjects based on electroencephalography (EEG) data.
“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process,” says Nemrodov.
For the study, test subjects hooked up to EEG equipment were shown images of faces. Their brain activity was recorded and then used to digitally recreate the image in the subject’s mind using a technique based on machine learning algorithms.
It’s not the first time researchers have been able to reconstruct images based on visual stimuli using neuroimaging techniques. The method was pioneered by Nestor, who previously reconstructed facial images from functional magnetic resonance imaging (fMRI) data, but this is the first time EEG has been used.
And while techniques like fMRI – which measures brain activity by detecting changes in blood flow – can grab finer details of what’s going on in specific areas of the brain, EEG has greater practical potential given that it’s more common, portable, and inexpensive by comparison. EEG also has greater temporal resolution, meaning it can measure with detail how a percept develops in time right down to milliseconds, explains Nemrodov.
“fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale. So we can see with very fine detail how the percept of a face develops in our brain using EEG,” he says. In fact, the researchers were able to estimate that it takes our brain about 170 milliseconds (0.17 seconds) to form a good representation of a face we see.
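The millisecond-scale resolution Nemrodov describes can be illustrated with a small sketch (the data, sampling rate, and decoding curve below are synthetic stand-ins, not from the study): with EEG sampled at 1000 Hz, each sample corresponds to one millisecond, so the latency at which face decoding peaks — around 170 ms, near the N170 component — can be read directly off the epoch.

```python
import numpy as np

# Hypothetical EEG epoch: 1000 Hz sampling -> one sample per millisecond.
sampling_rate_hz = 1000
times_ms = np.arange(-100, 600)  # epoch from -100 ms to 600 ms around stimulus onset

# Simulated decoding-accuracy time course: chance level (0.5) plus a bump
# near 170 ms, standing in for the face discrimination the study measured.
rng = np.random.default_rng(0)
accuracy = 0.5 + 0.2 * np.exp(-((times_ms - 170) ** 2) / (2 * 30.0 ** 2))
accuracy += rng.normal(0, 0.005, size=times_ms.size)

# Because each sample spans 1 ms, the peak index converts directly to a latency.
peak_ms = times_ms[np.argmax(accuracy)]
print(f"decoding peaks at ~{peak_ms} ms after stimulus onset")
```

With fMRI's seconds-scale sampling the same epoch would contain only a handful of points, which is why this kind of fine-grained latency estimate is specific to EEG.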
This study provides validation that EEG has potential for this type of image reconstruction, notes Nemrodov, something many researchers doubted was possible given its apparent limitations. Using EEG data for image reconstruction has great theoretical and practical potential from a neurotechnological standpoint, especially since it’s relatively inexpensive and portable.
In terms of next steps, work is currently underway in Nestor’s lab to test whether images could be reconstructed from EEG data based on memory, and to extend the approach to a wider range of objects beyond faces. The technique could eventually have wide-ranging clinical applications as well.
“It could provide a means of communication for people who are unable to verbally communicate. Not only could it produce a neural-based reconstruction of what a person is perceiving, but also of what they remember and imagine, of what they want to express,” says Nestor.
“It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects rather than relying on verbal descriptions provided to a sketch artist.”
The research, which will be published in the journal eNeuro, was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by a Connaught New Researcher Award.
“What’s really exciting is that we’re not reconstructing squares and triangles but actual images of a person’s face, and that involves a lot of fine-grained visual detail,” adds Nestor.
“The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities. It unveils the subjective content of our mind and it provides a way to access, explore and share the content of our perception, memory and imagination.”
Citation: University of Toronto. “‘Mind Reading’ Algorithm Uses EEG Data to Reconstruct Images Based on What We Perceive.” NeuroscienceNews, 22 February 2018. https://neurosciencenews.com/ai-eeg-images-8546/
The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction
Uncovering the neural dynamics of facial identity processing, along with its representational basis, represents a major endeavor in the study of visual processing. To this end, here we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50–650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.
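The reconstruction logic the abstract describes can be sketched roughly as follows (a minimal illustration under assumed, synthetic data — the images, EEG patterns, and similarity measure here are all stand-ins, not the authors' actual pipeline, which used classification over spatiotemporal EEG features and an estimated face space): a stimulus is approximated as a weighted sum of known face images, with weights derived from how similar its EEG pattern is to the patterns those faces evoked.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 8 "known" face images (flattened 16x16 pixels) and the
# EEG pattern each one evoked (e.g. channels x time points, flattened).
n_faces, n_pixels, n_features = 8, 16 * 16, 64
face_images = rng.random((n_faces, n_pixels))
eeg_patterns = rng.normal(size=(n_faces, n_features))

# A new stimulus evokes its own EEG pattern; here we make it resemble face 3.
target_pattern = eeg_patterns[3] + rng.normal(0, 0.1, size=n_features)

# Weight each known face by the similarity (correlation) of its EEG pattern
# to the target's pattern; keep only positive weights and normalize.
weights = np.array([np.corrcoef(target_pattern, p)[0, 1] for p in eeg_patterns])
weights = np.clip(weights, 0, None)
weights /= weights.sum()

# Reconstruction: weighted sum of the known face images.
reconstruction = weights @ face_images

# The reconstruction should correlate most with the face whose EEG pattern
# the target resembled.
best = np.argmax([np.corrcoef(reconstruction, img)[0, 1] for img in face_images])
print(best)
```

The same linear-combination idea underlies face-space approaches more generally: once neural data places a stimulus among known exemplars, its appearance can be synthesized from those exemplars' images.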
Significance Statement: Identifying a face is achieved through fast and efficient processing of visual information. Here, we investigate the nature of this information, its specific content and its availability at a fine-grained temporal scale. Notably, we provide a way to extract, assess and visualize such information from neural data associated with individual face processing. Thus, the present work accounts for the time course of face individuation through appeal to its underlying visual representations, and it also provides a first demonstration of the ability to reconstruct the appearance of stimulus images from electroencephalography data.