Summary: Technological advances allow researchers to observe how the brain processes semantic information.
Source: Johns Hopkins Applied Physics Laboratory.
When people see objects in the world, they probably are not thinking explicitly about those objects’ semantic characteristics: Is it alive? Is it edible? Is it bigger than a bread box? But activation of these kinds of semantic attributes in the human brain is now directly observable, according to recently published findings from Johns Hopkins University, its Applied Physics Laboratory, and its School of Medicine.
“Most research into how the human brain processes semantic information uses noninvasive neuroimaging approaches like functional magnetic resonance imaging, which indirectly measures neural activity via changes in blood flow,” says Nathan Crone, a neurologist at Johns Hopkins Medicine and a contributing author on the research. “Invasive alternatives like electrocorticography, or ECoG, can provide more direct observations of neural processing but can only be used in the rare clinical setting when implanting electrodes directly on the surface of the cortex is a clinical necessity, as in some cases of intractable epilepsy,” he says.
Using ECoG recordings in epilepsy surgery patients at the Johns Hopkins Hospital, the team found that semantic information could be inferred from brain responses with very high fidelity while patients named pictures of objects. The findings were published in the article “Semantic attributes are encoded in human electrocorticographic signals during visual object recognition,” which appears in the March issue of NeuroImage and is now available online.
Researchers recorded ECoG while patients named objects from 12 different semantic categories, such as animals, foods and vehicles. “By learning the relationship between the semantic attributes associated with objects and the neural activity recorded when patients named these objects, we found that new objects could be decoded with very high accuracies,” said Michael Wolmetz, a cognitive neuroscientist at the Johns Hopkins Applied Physics Laboratory and one of the paper’s authors. “Using these methods, we observed how different semantic dimensions — whether an object is manmade or natural, how large it typically is, whether it’s edible, for example — were organized in each person’s brain.”
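The study’s actual models, features, and data are described in the paper itself; purely to make the approach Wolmetz describes concrete, here is a minimal sketch of an attribute-based encoding model with zero-shot decoding, run on simulated data. Every name, shape, and number below is an illustrative assumption, not something taken from the study:

```python
# Minimal sketch of a semantic-attribute encoding model plus zero-shot decoding.
# Simulated stand-in data; the study's real inputs were spectral-temporal ECoG
# features and a high-dimensional semantic attribute space.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_attr, n_neural = 60, 20, 128           # objects, attributes, neural features

S_train = rng.standard_normal((n_train, n_attr))  # attribute vectors (animate? edible? large?)
W_true = rng.standard_normal((n_attr, n_neural))  # hidden attribute-to-neural mapping
X_train = S_train @ W_true + 0.5 * rng.standard_normal((n_train, n_neural))

# Encoding model: learn to predict neural responses from semantic attributes.
enc = Ridge(alpha=1.0).fit(S_train, X_train)

# Zero-shot decoding: for a response to an object never seen in training, pick
# the candidate whose *predicted* response best matches the observed one.
candidates = rng.standard_normal((12, n_attr))    # attribute vectors of unseen objects
true_idx = 7
x_observed = candidates[true_idx] @ W_true + 0.5 * rng.standard_normal(n_neural)

predicted = enc.predict(candidates)
scores = [np.corrcoef(p, x_observed)[0, 1] for p in predicted]
print("decoded index:", int(np.argmax(scores)), "| true index:", true_idx)
```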
Building on previous brain–computer interface research at Johns Hopkins showing that individual finger movements could be inferred from ECoG to control a prosthetic hand, this work demonstrates that individual concepts can also be inferred from similar brain signals. “This paradigm provides a framework for testing theories about what specific semantic features are represented in the human brain, how they are encoded in neural activity, and how cognitive processes modulate neurosemantic representations,” said Kyle Rupp, a doctoral student at Johns Hopkins and an author on the paper. “Likewise, from a decoding perspective, models that decompose items into semantic features are very powerful in that they can interpret neural activity from concept classes they have not been trained on.”
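The generalization Rupp describes can also be illustrated from the decoding direction: mapping neural activity onto attribute dimensions rather than onto a fixed set of category labels. Here is a standalone sketch on simulated data; again, all names, shapes, and values are hypothetical, not from the study:

```python
# Sketch of attribute decoding: neural features mapped onto interpretable
# semantic dimensions. Because the output is an attribute vector rather than
# a closed-set label, it can describe concepts from untrained classes.
# All data are simulated; nothing here is from the study itself.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_attr, n_neural = 60, 20, 128
S = rng.standard_normal((n_train, n_attr))        # training attribute vectors
W = rng.standard_normal((n_attr, n_neural))       # hidden attribute-to-neural mapping
X = S @ W + 0.5 * rng.standard_normal((n_train, n_neural))

dec = Ridge(alpha=10.0).fit(X, S)                 # decoder: neural -> attributes

s_unseen = rng.standard_normal(n_attr)            # object from an untrained class
x_unseen = s_unseen @ W + 0.5 * rng.standard_normal(n_neural)
s_hat = dec.predict(x_unseen.reshape(1, -1))[0]

# Each recovered dimension is interpretable on its own (manmade vs. natural,
# large vs. small, edible vs. not) rather than being one of K trained labels.
print("correlation with true attributes:",
      round(float(np.corrcoef(s_hat, s_unseen)[0, 1]), 3))
```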
While today’s brain–computer interfaces for communication remain extremely limited, these results, which show that semantic information can be studied and recovered using ECoG, suggest that improvements may be on the way.
NeuroscienceNews would like to thank Michael Wolmetz for submitting this research article directly to us.
Source: Paulette Campbell – Johns Hopkins Applied Physics Laboratory
Image Source: NeuroscienceNews.com image is credited to Johns Hopkins Applied Physics Laboratory.
Original Research: Abstract for “Semantic attributes are encoded in human electrocorticographic signals during visual object recognition” by Kyle Rupp, Matthew Roos, Griffin Milsap, Carlos Caceres, Christopher Ratto, Mark Chevillet, Nathan E. Crone, and Michael Wolmetz in NeuroImage. Published online January 11, 2017. doi:10.1016/j.neuroimage.2016.12.074
MLA: Johns Hopkins Applied Physics Laboratory. “Researchers Directly Observe Concepts in Human Brain.” NeuroscienceNews. NeuroscienceNews, 9 March 2017. <https://neurosciencenews.com/concepts-brain-neuroscience-6225/>.
APA: Johns Hopkins Applied Physics Laboratory. (2017, March 9). Researchers Directly Observe Concepts in Human Brain. NeuroscienceNews. Retrieved March 9, 2017, from https://neurosciencenews.com/concepts-brain-neuroscience-6225/
Chicago: Johns Hopkins Applied Physics Laboratory. “Researchers Directly Observe Concepts in Human Brain.” https://neurosciencenews.com/concepts-brain-neuroscience-6225/ (accessed March 9, 2017).
Abstract
Semantic attributes are encoded in human electrocorticographic signals during visual object recognition
Non-invasive neuroimaging studies have shown that semantic category and attribute information are encoded in neural population activity. Electrocorticography (ECoG) offers several advantages over non-invasive approaches, but the degree to which semantic attribute information is encoded in ECoG responses is not known. We recorded ECoG while patients named objects from 12 semantic categories and then trained high-dimensional encoding models to map semantic attributes to spectral-temporal features of the task-related neural responses. Using these semantic attribute encoding models, untrained objects were decoded with accuracies comparable to whole-brain functional Magnetic Resonance Imaging (fMRI), and we observed that high-gamma activity (70–110 Hz) at basal occipitotemporal electrodes was associated with specific semantic dimensions (manmade-animate, canonically large-small, and places-tools). Individual patient results were in close agreement with reports from other imaging modalities on the time course and functional organization of semantic processing along the ventral visual pathway during object recognition. The semantic attribute encoding model approach is critical for decoding objects absent from a training set, as well as for studying complex semantic encodings without artificially restricting stimuli to a small number of semantic categories.
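To make the spectral features mentioned in the abstract concrete for non-specialists, the sketch below extracts a high-gamma (70–110 Hz) amplitude envelope from a single simulated channel. The sampling rate, filter design, and signal are illustrative assumptions, not the study’s actual preprocessing pipeline:

```python
# Generic high-gamma (70-110 Hz) envelope extraction for one ECoG channel.
# Sampling rate, filter order, and signal are illustrative assumptions only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                       # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
ecog = np.random.default_rng(1).standard_normal(t.size)  # stand-in recording

nyq = fs / 2.0
b, a = butter(4, [70.0 / nyq, 110.0 / nyq], btype="band")
high_gamma = filtfilt(b, a, ecog)                 # band-limit to 70-110 Hz
envelope = np.abs(hilbert(high_gamma))            # instantaneous amplitude
print("mean high-gamma amplitude:", round(float(envelope.mean()), 4))
```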
“Semantic attributes are encoded in human electrocorticographic signals during visual object recognition” by Kyle Rupp, Matthew Roos, Griffin Milsap, Carlos Caceres, Christopher Ratto, Mark Chevillet, Nathan E. Crone, and Michael Wolmetz in NeuroImage. Published online January 11, 2017. doi:10.1016/j.neuroimage.2016.12.074