
A Brain System That Builds Confidence in What We See, Hear and Touch

Summary: A new study sheds light on the mechanisms behind metacognition and how we make sense of the world around us.

Source: EPFL.

A series of experiments at EPFL provides strong evidence that the brain uses a single mechanism (supramodality) to estimate confidence across different senses such as audition, touch, and vision. The study is published in the Journal of Neuroscience.

Behavioral scientists and psychologists use the term “metacognition” to describe our ability to access, report, and regulate our own mental states: “thinking about thinking”, “knowing about knowing”, and “being aware of being aware” are all higher-order cognitive skills that fit this category.

Specifically, metacognition enables the brain to compute a degree of confidence when we perceive events from the external world, such as a sound, light, or touch. The accuracy of these confidence estimates is crucial in daily life, for instance when hearing a baby cry or smelling a gas leak. Confidence estimates also need to combine input from multiple senses simultaneously, for instance when buying a violin based on how it sounds, feels, and looks.
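As a purely illustrative aside (this is the textbook ideal-observer rule for sensory cue combination, not a result of this study), the sketch below shows one way two noisy sensory estimates can be fused: weight each sense by its reliability, i.e. its inverse variance. All numbers are invented.

```python
# Purely illustrative, NOT from the study: textbook ideal-observer
# rule for combining two noisy sensory estimates, weighting each
# by its reliability (inverse variance). All numbers are invented.
sigma_sound, sigma_touch = 2.0, 1.0   # assumed per-sense noise (std dev)
est_sound, est_touch = 7.0, 5.0       # assumed per-sense estimates of quality

w_sound = sigma_sound**-2 / (sigma_sound**-2 + sigma_touch**-2)
w_touch = 1.0 - w_sound
fused = w_sound * est_sound + w_touch * est_touch
fused_var = 1.0 / (sigma_sound**-2 + sigma_touch**-2)  # lower than either sense alone

print(f"fused estimate = {fused:.2f}, fused variance = {fused_var:.2f}")
# -> fused estimate = 5.40, fused variance = 0.80
```

Note that the fused variance (0.80) is lower than that of either sense alone (4.0 and 1.0), which is why combining senses pays off.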

From a neuroscience point of view, how metacognition operates across the different senses, and across combinations of senses, is still a mystery: does metacognition use the same rules for visual, auditory, and tactile stimuli, or does it use different mechanisms for each sensory domain? The first of these two ideas – the “common rules” idea – is known as “supramodality”, and it has proven controversial among neuroscientists.

Settling the matter

A series of experiments by Olaf Blanke’s lab at EPFL now provides evidence in favor of supramodality. The study, led by researcher Nathan Faivre, tested human volunteers using three different experimental techniques: behavioral psychophysics, computational modeling, and electrophysiological recordings.

The behavioral part of the study found that participants with high metacognitive performance for one sense (e.g. vision) were likely to perform well in other senses (e.g. audition or touch). “In other words,” explains Faivre, “those of us who are good at knowing what we see are also good at knowing what we hear and what we touch.”
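The paper quantifies this with metacognitive efficiency. As a rough, hypothetical stand-in for that analysis (simulated data, not the authors’ code), the sketch below scores metacognition per modality with the simpler type-2 AUROC – how well confidence ratings separate correct from incorrect trials – and then checks whether scores correlate across participants between two modalities.

```python
# Hypothetical sketch, not the authors' analysis: score metacognitive
# performance per modality with type-2 AUROC (how well confidence
# ratings discriminate correct from incorrect trials), then test the
# supramodality prediction of an across-participant correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects, n_trials = 30, 200
auroc = {"vision": [], "audition": []}

for _ in range(n_subjects):
    # One latent metacognitive ability shared by both modalities
    # (this is what the supramodality hypothesis predicts).
    ability = rng.normal(1.0, 0.3)
    for modality in auroc:
        correct = rng.integers(0, 2, n_trials)        # 1 = correct trial
        confidence = ability * correct + rng.normal(0, 1, n_trials)
        auroc[modality].append(roc_auc_score(correct, confidence))

r, p = pearsonr(auroc["vision"], auroc["audition"])
print(f"across-participant correlation: r = {r:.2f}, p = {p:.4f}")
```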

The computational modeling indicated that the confidence estimates we build when seeing an image or hearing a sound can be efficiently compared to one another. This implies that they share the same format.
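One minimal way to picture such a shared format (a sketch under standard signal-detection assumptions, not the study’s actual model) is an observer that expresses confidence as the probability of its choice being correct: evidence from any modality then lands on the same 0–1 scale and the resulting confidence estimates can be compared directly.

```python
# Minimal sketch, NOT the study's model: a signal-detection observer
# whose confidence is the posterior probability of being correct.
# Mapping evidence from any modality onto this common 0-1 scale gives
# confidence a shared format across senses. Parameters are invented.
import numpy as np

rng = np.random.default_rng(1)

def simulate(d_prime, n_trials):
    """Evidence ~ N(+-d'/2, 1); choice by sign of evidence.
    Confidence = p(choice is correct | evidence), for equal priors."""
    stim = rng.choice([-1, 1], n_trials)
    evidence = stim * d_prime / 2 + rng.normal(0, 1, n_trials)
    correct = np.sign(evidence) == stim
    confidence = 1 / (1 + np.exp(-np.abs(evidence) * d_prime))
    return correct, confidence

for name, d in [("vision", 1.5), ("audition", 0.8)]:   # assumed sensitivities
    correct, confidence = simulate(d, n_trials=5000)
    high = confidence > np.median(confidence)
    # On this common scale, higher confidence predicts higher accuracy
    # in both modalities, making the estimates directly comparable.
    print(f"{name}: accuracy at high confidence = {correct[high].mean():.2f}, "
          f"at low confidence = {correct[~high].mean():.2f}")
```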

Finally, the electrophysiological recordings revealed similar characteristics when the volunteers reported confidence in their responses to audio or audiovisual stimuli. This suggests that visual and audiovisual metacognition is based on similar neural mechanisms.

Image: a man and cogwheels. NeuroscienceNews.com image is adapted from the EPFL news release.

“These results make a strong case in favor of the supramodality hypothesis,” says Faivre. “They show that there is a common currency for confidence in different sensory domains – in other words, that confidence in a signal is encoded with the same format in the brain no matter where the signal comes from. This gives metacognition a central status, whereby the monitoring of perceptual processes occurs through a common neural mechanism.”

The study is an important step towards a mechanistic understanding of human metacognition. It tells us something about how we perceive the world and become aware of our surroundings, and can potentially lead to ways of treating several neurological and psychiatric disorders where metacognition is impaired.

About this neuroscience research article

Funding: The study was funded by the Bertarelli Foundation, the Swiss National Science Foundation, and the European Science Foundation.

Source: EPFL
Image Source: NeuroscienceNews.com image is adapted from the EPFL news release.
Original Research: Abstract for “Behavioural, modeling, and electrophysiological evidence for supramodality in human metacognition” by Nathan Faivre, Elisa Filevich, Guillermo Solovey, Simone Kühn and Olaf Blanke in Journal of Neuroscience. Published online September 15, 2017. doi:10.1523/JNEUROSCI.0322-17.2017

Cite This NeuroscienceNews.com Article

MLA: EPFL. “A Brain System That Builds Confidence in What We See, Hear and Touch.” NeuroscienceNews, 25 September 2017. <https://neurosciencenews.com/senses-confidence-neuroscience-7577/>.
APA: EPFL. (2017, September 25). A Brain System That Builds Confidence in What We See, Hear and Touch. NeuroscienceNews. Retrieved September 25, 2017 from https://neurosciencenews.com/senses-confidence-neuroscience-7577/
Chicago: EPFL. “A Brain System That Builds Confidence in What We See, Hear and Touch.” https://neurosciencenews.com/senses-confidence-neuroscience-7577/ (accessed September 25, 2017).


Abstract

Behavioural, modeling, and electrophysiological evidence for supramodality in human metacognition

Human metacognition, or the capacity to introspect on one’s own mental states, has been mostly characterized through confidence reports in visual tasks. A pressing question is to what extent results from visual studies generalize to other domains. Answering this question allows determining whether metacognition operates through shared, supramodal mechanisms, or through idiosyncratic, modality-specific mechanisms. Here, we report three new lines of evidence for decisional and post-decisional mechanisms arguing for the supramodality of metacognition. First, metacognitive efficiency correlated between auditory, tactile, visual, and audiovisual tasks. Second, confidence in an audiovisual task was best modeled using supramodal formats based on integrated representations of auditory and visual signals. Third, confidence in correct responses involved similar electrophysiological markers for visual and audiovisual tasks that are associated with motor preparation preceding the perceptual judgment. We conclude that the supramodality of metacognition relies on supramodal confidence estimates and decisional signals that are shared across sensory modalities.

SIGNIFICANCE STATEMENT

Metacognitive monitoring is the capacity to access, report and regulate one’s own mental states. In perception, this allows rating our confidence in what we have seen, heard or touched. While metacognitive monitoring can operate on different cognitive domains, it remains unknown whether it involves a single supramodal mechanism common to multiple cognitive domains, or modality-specific mechanisms idiosyncratic to each domain. Here, we bring evidence in favor of the supramodality hypothesis by showing that participants with high metacognitive performance in one modality are likely to perform well in other modalities. Based on computational modeling and electrophysiology, we propose that supramodality can be explained by the existence of supramodal confidence estimates, and by the influence of decisional cues on confidence estimates.

