Our Brains Have a Basic Algorithm That Enables Our Intelligence

Summary: Researchers report relatively simple math logic underlies complex brain computations.

Source: Medical College of Georgia at Augusta University.

Our brains have a basic algorithm that enables us not only to recognize a traditional Thanksgiving meal, but also to ponder the broader implications of a bountiful harvest and of good family and friends.

“A relatively simple mathematical logic underlies our complex brain computations,” said Dr. Joe Z. Tsien, neuroscientist at the Medical College of Georgia at Augusta University, co-director of the Augusta University Brain and Behavior Discovery Institute and Georgia Research Alliance Eminent Scholar in Cognitive and Systems Neurobiology.

Tsien is talking about his Theory of Connectivity, a fundamental principle for how our billions of neurons assemble and align not just to acquire knowledge, but to generalize and draw conclusions from it.

“Intelligence is really about dealing with uncertainty and infinite possibilities,” Tsien said. It appears to be enabled when a group of similar neurons forms a variety of cliques to handle each basic, like recognizing food, shelter, friends and foes. Groups of cliques then cluster into functional connectivity motifs, or FCMs, to handle every possibility within each of these basics, like extrapolating that rice is part of an important food group that might be a good side dish at your meaningful Thanksgiving gathering. The more complex the thought, the more cliques join in.

That means, for example, we can not only recognize an office chair, but also an office when we see one, and know that the chair is where we sit in that office.

“You know an office is an office whether it’s at your house or the White House,” Tsien said of the ability to conceptualize knowledge, one of many things that distinguishes us from computers.

Tsien first published his theory in a 1,000-word essay in October 2015 in the journal Trends in Neurosciences. Now he and his colleagues have documented the algorithm at work in seven different brain regions involved with those basics, like food and fear, in mice and hamsters. Their documentation is published in the journal Frontiers in Systems Neuroscience.

“For it to be a universal principle, it needs to be operating in many neural circuits, so we selected seven different brain regions and, surprisingly, we indeed saw this principle operating in all these regions,” he said.

Intricate organization seems plausible, even essential, in a human brain, which has about 86 billion neurons and where each neuron can have tens of thousands of synapses, putting potential connections and communications between neurons into the trillions. On top of the seemingly endless connections is the reality of the infinite things each of us can presumably experience and learn.

Neuroscientists as well as computer experts have long been curious about how the brain is able to not only hold specific information, like a computer, but — unlike even the most sophisticated technology — to also categorize and generalize the information into abstract knowledge and concepts.

“Many people have long speculated that there has to be a basic design principle from which intelligence originates and the brain evolves, like how the double helix of DNA and genetic codes are universal for every organism,” Tsien said. “We present evidence that the brain may operate on an amazingly simple mathematical logic.”

“In my view, Joe Tsien proposes an interesting idea that proposes a simple organizational principle of the brain, and that is supported by intriguing and suggestive evidence,” said Dr. Thomas C. Südhof, Avram Goldstein Professor in the Stanford University School of Medicine, neuroscientist studying synapse formation and function and a winner of the 2013 Nobel Prize in Physiology or Medicine.

“This idea is very much worth testing further,” said Südhof, a sentiment echoed by Tsien and his colleagues, who say such testing is needed in additional neural circuits as well as in other animal species and artificial intelligence systems.

At the heart of Tsien’s Theory of Connectivity is the algorithm N = 2^i – 1, which defines how many cliques are needed for an FCM and which enabled the scientists to predict the number of cliques needed to recognize food options, for example, in their testing of the theory.

N is the number of neural cliques connected in different possible ways; 2 reflects the two states of each input, since the neurons in those cliques either receive it or do not; i is the number of distinct pieces of information they are receiving; and the –1 simply removes the case in which no input is received, so that every remaining possibility is accounted for, Tsien explained.
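As a rough illustration of that arithmetic (this is not code from the study, and clique_count is just a hypothetical name), a few lines of Python show how the predicted clique count grows with the number of distinct inputs:

```python
# Sketch of the Theory of Connectivity clique count, N = 2**i - 1:
# every non-empty combination of i distinct inputs gets its own clique.
def clique_count(i: int) -> int:
    return 2 ** i - 1

for i in range(1, 5):
    print(f"{i} input(s) -> {clique_count(i)} cliques")
# 1 input(s) -> 1 cliques
# 2 input(s) -> 3 cliques
# 3 input(s) -> 7 cliques
# 4 input(s) -> 15 cliques
```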

To test the theory, they placed electrodes in those brain areas so they could “listen” to the responses of neurons, or their action potentials, and examine the unique waveforms resulting from each.

They gave the animals, for example, different combinations of four foods, such as their usual rodent biscuits as well as sugar pellets, rice and milk, and, as the Theory of Connectivity would predict, the scientists could identify all 15 different cliques, or groupings of neurons (2^4 – 1 = 15), that responded to the possible variety of food combinations.
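To make that count concrete, here is a minimal sketch, illustrative only and not the authors’ analysis code, that enumerates every non-empty combination of the four foods; the 15 combinations correspond to the 15 cliques the theory predicts:

```python
from itertools import combinations

# The four foods mentioned above.
foods = ["biscuit", "sugar pellet", "rice", "milk"]

# Every non-empty combination of the four foods; the theory predicts
# one neural clique per combination, i.e. 2**4 - 1 = 15 in total.
combos = [c for r in range(1, len(foods) + 1)
          for c in combinations(foods, r)]

print(len(combos))  # 15
for c in combos:
    print(" + ".join(c))
```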

The neuronal cliques appear prewired during brain development because they showed up immediately when the food choices did. The fundamental mathematical rule even remained largely intact when the NMDA receptor, a master switch for learning and memory, was disabled after the brain matured.

The scientists also learned that size does mostly matter: while the human and animal brain both have a six-layered cerebral cortex — the lumpy outer layer of the brain that plays a key role in higher brain functions like learning and memory — the extra longitudinal length of the human cortex provides more room for cliques and FCMs, Tsien said. And while the overall girth of the elephant brain is definitely larger than that of the human brain, for example, most of its neurons reside in the cerebellum, with far fewer in its super-sized cerebral cortex. The cerebellum is more involved in muscle coordination, which may help explain the agility of the huge mammal, particularly its trunk.

Tsien noted exceptions to the brain’s mathematical rule, such as in the reward circuits where dopamine neurons reside. These cells tend to operate in a more binary fashion, judging something, for example, as either good or bad, Tsien said.

The project grew out of Tsien’s early work creating the smart mouse Doogie 17 years ago, while on the faculty at Princeton University, studying how changes in neuronal connections lay down memories in the brain.

About this neuroscience research article

Funding: The research was funded by the National Institutes of Health, a GRA equipment grant, the Yunnan Science Commission and the Chinese Natural Science Foundation. Collaborators include scientists from the University of Georgia, BanNa Biomedical Research Institute in Yunnan Province and Tsinghua University in Beijing, China.

Source: Toni Baker – Medical College of Georgia at Augusta University
Original Research: Full open access research for “Brain Computation Is Organized via Power-of-Two-Based Permutation Logic” by Kun Xie, Grace E. Fox, Jun Liu, Cheng Lyu, Jason C. Lee, Hui Kuang, Stephanie Jacobs, Meng Li, Tianming Liu, Sen Song and Joe Z. Tsien in Frontiers in Systems Neuroscience. Published online November 15 2016 doi:10.3389/fnsys.2016.00095


Abstract

Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i – 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain’s basic computational algorithm is indeed organized by the power-of-two-based permutation logic. This simple mathematical logic can account for brain computation across the entire evolutionary spectrum, ranging from the simplest neural networks to the most complex.

