A New Model of the Brain’s Real-Life Neural Networks

Summary: A new computational model predicts how information deep inside the brain could flow from one network to another, and how neuronal network clusters can self-optimize over time.

Source: USC

Researchers at the Cyber-Physical Systems Group at the USC Viterbi School of Engineering, in conjunction with the University of Illinois at Urbana-Champaign, have developed a new model of how information deep in the brain could flow from one network to another and how these neuronal network clusters self-optimize over time.

Their work, chronicled in the paper “Network Science Characteristics of Brain-Derived Neuronal Cultures Deciphered From Quantitative Phase Imaging Data,” is believed to be the first study to observe this self-optimization phenomenon in in vitro neuronal networks, and counters existing models.

Their findings could open new research directions for biologically inspired artificial intelligence and the detection and diagnosis of brain cancer, and may contribute to or inspire new Parkinson’s treatment strategies.

The team examined the structure and evolution of neuronal networks in the brains of mice and rats in order to identify their connectivity patterns. Corresponding author and Electrical and Computer Engineering associate professor Paul Bogdan puts this work in context by explaining how the brain functions in decision-making. He references the brain activity that occurs when someone is counting cards.

He says the brain might not actually memorize all the card options but rather is “conducting a type of model of uncertainty.” The brain, he says, is getting considerable information from all the connections among the neurons.

The dynamic clustering happening in this scenario enables the brain to gauge various degrees of uncertainty, form rough probabilistic descriptions, and understand which conditions are less likely.

“We observed that the brain’s networks have an extraordinary capacity to minimize latency, maximize throughput and maximize robustness while doing all of those in a distributed manner (without a central manager or coordinator),” said Bogdan, who holds the Jack Munushian Early Career Chair at the Ming Hsieh Department of Electrical Engineering. “This means that neuronal networks negotiate with each other and connect to each other in a way that rapidly enhances network performance, yet the rules of connecting are unknown.”

To Bogdan’s surprise, none of the classical mathematical models employed by neuroscience were able to accurately replicate this dynamic emergent connectivity phenomenon. Using multifractal analysis and a novel imaging technique called quantitative phase imaging (QPI), developed by study co-author Gabriel Popescu, a professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign, the research team was able to model and analyze this phenomenon with high accuracy.

HEALTH APPLICATIONS

The findings of this research could have a significant impact on the early detection of brain tumors. With a better topological map of the healthy brain and its activity to compare against, it will be easier to detect structural abnormalities early by imaging the dynamic connectivity among neurons during various cognitive tasks, without resorting to more invasive procedures.

Says co-author Chenzhong Yin, a Ph.D. student in Bogdan’s Cyber-Physical Systems Group, “Cancer spreads in small groups of cells and cannot be detected by fMRI or other scanning techniques until it’s too late.”

“But with this method we can train A.I. to detect and even predict diseases early by monitoring and discovering abnormal microscopic interactions between neurons,” added Yin.

The researchers are now seeking to perfect their algorithms and imaging tools for use in monitoring these complex neuronal networks live inside a living brain.

This could have additional applications for diseases like Parkinson’s, which involves the loss of neuronal connections between the brain’s left and right hemispheres.

“By placing an imaging device on the brain of a living animal, we can also monitor and observe things like neuronal networks growing and shrinking, how memory and cognition form, if a drug is effective and ultimately how learning happens. We can then begin to design better artificial neural networks that, like the brain, would have the ability to self-optimize.”

USE FOR ARTIFICIAL INTELLIGENCE

“Having this level of accuracy can give us a clearer picture of the inner workings of biological brains and how we can potentially replicate those in artificial brains,” Bogdan said.

As humans, we have the ability to learn new tasks without forgetting old ones. Artificial neural networks, however, suffer from what is known as catastrophic forgetting. We see this when we try to teach a robot two successive tasks, such as climbing stairs and then turning off the light.

The robot may overwrite the configuration that allowed it to climb the stairs as it shifts toward the optimal state for performing the second task, turning off the light. This happens because deep learning systems rely on massive amounts of training data to master the simplest of tasks.

If we could replicate how the biological brain enables continual learning or our cognitive ability for inductive inference, Bogdan believes, we would be able to teach A.I. multiple tasks without an increase in network capacity.
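
To make the catastrophic forgetting described above concrete, here is a minimal, hypothetical sketch in Python/NumPy (a toy illustration, not the study’s models or any actual robot controller): a single linear classifier with shared weights is trained on one task and then on a second, conflicting task, and the second round of training overwrites the weights that solved the first.

import numpy as np

rng = np.random.default_rng(0)

def make_task(rule, n=2000):
    # Two-feature inputs; the label depends on which "rule" the task uses.
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(float)

task_a = make_task(lambda X: X[:, 0] > 0)  # Task A: label follows feature 0
task_b = make_task(lambda X: X[:, 1] > 0)  # Task B: label follows feature 1

def train(w, b, X, y, steps=500, lr=0.5):
    # Plain logistic-regression gradient descent on the shared weights.
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y
        w = w - lr * (X.T @ grad) / len(y)
        b = b - lr * grad.mean()
    return w, b

def accuracy(w, b, X, y):
    return float((((X @ w + b) > 0) == (y > 0.5)).mean())

w, b = np.zeros(2), 0.0
w, b = train(w, b, *task_a)
print("Task A accuracy after training on A:", accuracy(w, b, *task_a))  # near 1.0
w, b = train(w, b, *task_b)  # now train only on Task B
print("Task B accuracy:", accuracy(w, b, *task_b))                      # near 1.0
print("Task A accuracy after training on B:", accuracy(w, b, *task_a))  # falls toward chance

The same overwriting happens, at far larger scale, in deep networks trained sequentially, which is why continual-learning research looks to biological brains for mechanisms that preserve old connections while forming new ones.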

Funding: The research was co-authored by Chenzhong Yin, Xiongye Xiao, Valeriu Balaban, Mikhail E Kandel, Young Jae Lee, Gabriel Popescu, and Paul Bogdan. It was supported by the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA).

About this neuroscience research news

Source: USC
Contact: Amy Liberson – USC
Image: The image is in the public domain.

Original Research: Open access.
“Network science characteristics of brain-derived neuronal cultures deciphered from quantitative phase imaging data” by Chenzhong Yin et al. Scientific Reports.


Abstract

Network science characteristics of brain-derived neuronal cultures deciphered from quantitative phase imaging data

Understanding the mechanisms by which neurons create or suppress connections to enable communication in brain-derived neuronal cultures can inform how learning, cognition and creative behavior emerge. While prior studies have shown that neuronal cultures possess self-organizing criticality properties, we further demonstrate that in vitro brain-derived neuronal cultures exhibit a self-optimization phenomenon. More precisely, we analyze the multiscale neural growth data obtained from label-free quantitative microscopic imaging experiments and reconstruct the in vitro neuronal culture networks (microscale) and neuronal culture cluster networks (mesoscale). We investigate the structure and evolution of neuronal culture networks and neuronal culture cluster networks by estimating the importance of each network node and their information flow. By analyzing the degree-, closeness-, and betweenness-centrality, the node-to-node degree distribution (informing on neuronal interconnection phenomena), the clustering coefficient/transitivity (assessing the “small-world” properties), and the multifractal spectrum, we demonstrate that murine neurons exhibit self-optimizing behavior over time with topological characteristics distinct from existing complex network models. The time-evolving interconnection among murine neurons optimizes the network information flow, network robustness, and self-organization degree. These findings have complex implications for modeling neuronal cultures and potentially for how to design biologically inspired artificial intelligence.
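
For readers unfamiliar with the network-science quantities named in the abstract, the following is a minimal, hypothetical sketch using the networkx library in Python; the graph here is a generic small-world stand-in (a Watts-Strogatz graph), since reconstructing the actual neuronal culture networks from QPI data is beyond this illustration.

import networkx as nx

# Stand-in graph: in the study, nodes would be neurons (microscale) or neuronal
# culture clusters (mesoscale) and edges their inferred interconnections.
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

degree = nx.degree_centrality(G)            # how many connections each node has
closeness = nx.closeness_centrality(G)      # how near each node is to all others
betweenness = nx.betweenness_centrality(G)  # how often a node sits on shortest paths
transitivity = nx.transitivity(G)           # global clustering, a "small-world" indicator

# Nodes most important for information flow, ranked by betweenness centrality.
hubs = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
print("Top hub nodes:", hubs)
print("Transitivity:", round(transitivity, 3))

Roughly speaking, tracking how such quantities change across imaging time points is how one would quantify whether a culture’s connectivity is becoming more efficient and robust over time.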
