By identifying decision-making signals in the primary somatosensory cortex, researchers have demonstrated that the brain relies on bidirectional feedback loops rather than a unidirectional hierarchy, offering a new model for building more efficient artificial intelligence. Credit: Neuroscience News

Discovery Redefines the Architecture of Thought

Summary: The assumption that the brain processes information like a one-way conveyor belt, moving from sensory input at the bottom to decision-making at the top, is being challenged.

A new study reveals that decision-making signals appear as early as the primary somatosensory cortex (S1). The team suggests that “natural intelligence” relies on complex, bidirectional feedback loops rather than the simple hierarchical flow used by today’s AI.

Key Research Findings

  • Challenging the Hierarchy: Traditional AI (like convolutional neural networks) is built on a “bottom-up” model where sensing happens first and deciding happens last in the frontal cortex. This study found decision signals in the very first stages of sensory perception.
  • The Power of Feedback: Decision-making in the brain is dynamically modulated by top-down regulation. Higher-level brain regions engage with “early” regions via feedback loops, allowing the brain to process information bidirectionally.
  • Evolution as an Architect: Natural intelligence, molded by a billion years of evolution, is far more computationally powerful and energy-efficient than current AI. Study leader Yurii Vlasov aims to “reverse-engineer” this architectural efficiency.
  • Virtual Reality Testing: The team recorded neural activity in mice navigating a virtual reality corridor. They found that even “perceptual” areas of the brain were actively involved in making decisions about the environment.
  • Future AI Architectures: Understanding these fast temporal dynamics and feedback loops provides a potential roadmap for the next generation of AI that is “less power hungry and more intelligent.”

Source: University of Illinois

New insight into decision-making pathways in the brain may impact the way engineers think about artificial intelligence, according to new research from The Grainger College of Engineering at the University of Illinois Urbana-Champaign.

Led by electrical and computer engineering professor Yurii Vlasov and published in Proceedings of the National Academy of Sciences (PNAS), the group’s findings highlight the involvement of early brain regions in decision-making, challenging long-held assumptions about brain hierarchy.

The human brain has long been considered the most complex structure in the universe; it remains such an enigma that reverse-engineering the brain was identified in 2008 by the National Academy of Engineering as one of 14 grand challenges for engineering in the 21st century.

For decades, assumptions about the human brain have formed the basis for convolutional neural networks and other types of artificial intelligence: namely, that decision-making occurs in a hierarchical bottom-up flow of information beginning in early brain regions and ending in the frontal cortex. But in recent years, scientists such as Vlasov have begun to question that prevailing view.

An alternative perspective hinges on natural intelligence — a process molded by evolution instead of machines. In this view of the brain, decision-making occurs not only through sequential stages but via nested feedback loops that operate bidirectionally.
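The contrast between the two views can be made concrete in code. The toy sketch below (a minimal illustration, not the study’s model; the weights, layer sizes, and loop count are arbitrary assumptions) compares a one-way feed-forward pass with a bidirectional loop in which a “higher” area feeds back to reshape the “early” representation on every pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy only: random weights stand in for "sensory" and
# "decision" stages; the dimensions are arbitrary, not from the study.
W_up = rng.normal(scale=0.1, size=(8, 16))    # early area -> higher area
W_down = rng.normal(scale=0.1, size=(16, 8))  # higher area -> early area (feedback)

def feedforward(stimulus):
    """One-way hierarchy: sense once, then decide."""
    early = np.tanh(stimulus)
    return np.tanh(W_up @ early)

def bidirectional(stimulus, n_loops=5):
    """Nested feedback loop: the higher area modulates the early
    representation on every pass before the decision is read out."""
    early = np.tanh(stimulus)
    higher = np.zeros(8)
    for _ in range(n_loops):
        higher = np.tanh(W_up @ early)
        early = np.tanh(stimulus + W_down @ higher)  # top-down modulation
    return higher

stimulus = rng.normal(size=16)
print(feedforward(stimulus))
print(bidirectional(stimulus))
```

Both functions produce a decision readout of the same shape, but in the bidirectional version the early representation is no longer a fixed function of the stimulus alone, which is the architectural difference the study highlights.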

Natural intelligence is more computationally powerful than current iterations of artificial intelligence and requires significantly less power, making it an attractive model for future AIs. To improve their understanding of this process, Vlasov and his interdisciplinary team of researchers sought to dissect and understand brain architecture from a systems-level view.

“We want to learn from a billion years of evolution,” Vlasov said. “How is that biological intelligence organized architecturally? Can we learn from the architectural side of the brain and emulate that to make AI more effective, less power hungry, and more intelligent than it currently is? In the level of decision-making, that’s where current AI is lacking.”

To contend with the complexities of studying the brain, Vlasov started by examining its earliest stages involved in sensing and perception of the world. After recording neural activity in mice navigating a virtual reality corridor and making perceptual decisions, the Illinois researchers were surprised to find decision-making signals as early in the brain hierarchy as in the primary somatosensory cortex (S1).

S1 appeared to be dynamically modulated by top-down regulation, engaged by higher-level brain regions via feedback loops, suggesting that decision-making does not rely solely on unidirectional feed-forward processes, as previously thought.

“The neural code of the brain is still mostly an unknown language,” Vlasov said. “But this systems-level understanding can be viewed as a potential impact on how more efficient artificial neural networks can be built — how the next generation of AI can be thought through. Maybe with these analogies that we learn from real brains, we can improve AI further.”

While not a direct recipe for building better AIs, Vlasov positions the results as something new that can be learned from the brain. Going forward, Vlasov and his team will further explore the complexity of their findings in the context of temporal dynamics while developing new tools to interrogate and collect signals from the brain.

“By looking at the fast temporal dynamics of neural activity, maybe we can understand better how these feedback loops are engaged in making decisions,” Vlasov said.

“Maybe that’s the approach that potentially uncovers these currently unknown mechanisms — how these feedback loops are organized dynamically and how they form and shape different levels of processing. Maybe that can be implemented in new architectures for AI.”

Key Questions Answered:

Q: If the “sensing” part of the brain is making decisions, what is the “thinking” part doing?

A: They are working together in real-time. Instead of the sensory area just sending a “picture” to the frontal cortex, the frontal cortex sends feedback back down to the sensory area to help it “decide” what it’s seeing as it sees it. It’s a constant, high-speed dialogue.

Q: Why does this matter for the future of ChatGPT or other AIs?

A: Most AI today is “feed-forward,” meaning it processes data in one direction. By adding the kind of “nested feedback loops” found in the brain, engineers could create AI that is much better at reasoning and pattern recognition while using a fraction of the electricity.

Q: Is this the “secret sauce” to reverse-engineering the human brain?

A: It’s a major piece of the puzzle. The National Academy of Engineering called reverse-engineering the brain one of the “grand challenges” of the century. Identifying that decision-making is a distributed, systems-level process rather than a top-down command is a fundamental shift in our “map” of the mind.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this decision-making and cognition research news

Author: Aaron Seidlitz
Source: University of Illinois
Contact: Aaron Seidlitz – University of Illinois
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Neural correlates of perceptual decision-making in the primary somatosensory cortex” by Alex G. Armstrong and Yurii Vlasov. PNAS
DOI: 10.1073/pnas.2514107123


Abstract

Neural correlates of perceptual decision-making in the primary somatosensory cortex

The brain is thought to produce decisions by gradual accumulation of sensory evidence through a hierarchically organized feedforward cascade of neuronal activities that transforms early stimulus representations in the primary somatosensory cortex (S1) to a perceptual decision processed in premotor areas.

Recently, this prevailing view has been challenged by observation of choice-correlated neural activity as early in the hierarchy as S1. Here, to reconcile these seemingly contradictory observations, we employ ethological whisker-guided navigation of mice in a tactile virtual reality paradigm combined with dense electrophysiological recordings in whisker-related wS1.

Leaving only a pair of C2 whiskers for mice to navigate with, we effectively designed an information bottleneck for sensory input to decision-making. We show that neural activity during sensory evidence accumulation exhibits dramatic collapse of the high-dimensional spiking activity to just a single latent variable followed by a slower and almost synchronous ramping up across the whole cortical column.

We show that this variable is consistent with models of gradual accumulation of noisy sensory evidence to a decision bound.
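The canonical model of “gradual accumulation of noisy sensory evidence to a decision bound” is the drift-diffusion model. The sketch below is a generic illustration of that model class, not the paper’s fitted model; the drift, noise, and bound values are illustrative assumptions:

```python
import numpy as np

def drift_diffusion(drift=0.5, noise=1.0, bound=1.0, dt=0.001, seed=0):
    """Accumulate noisy evidence x until it crosses +bound or -bound.
    Returns (choice, reaction_time). All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else -1), t

# With a positive drift, most simulated trials should end at the upper bound.
choices = [drift_diffusion(seed=s)[0] for s in range(200)]
print(sum(c == 1 for c in choices), "of 200 trials chose the upper bound")
```

In this picture, the single latent variable reported in the study plays the role of x: a low-dimensional accumulator that ramps toward a categorical, all-or-none outcome.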

These observations indicate that S1 may directly participate in a categorical coding of an all-or-none decision variable via cortico-cortical feedback loops through which sensory information reverberates to be transformed into perception and action.
