Digital Mouse Brain Twin Offers New Window Into Neural Function

Summary: Researchers have created an AI-powered “digital twin” of the mouse visual cortex that can accurately simulate neural responses to visual input, including movies. Unlike earlier models, this digital twin generalizes beyond its training data, predicting neuron behavior and structure with remarkable accuracy.

Trained on 900 minutes of brain recordings, the model allows researchers to run limitless experiments quickly and efficiently. This advancement may revolutionize how we study intelligence, brain disorders, and eventually, the human brain.

Key Facts:

  • Visual Cortex Modeling: The AI model predicts how tens of thousands of neurons respond to novel visual stimuli.
  • Beyond Training Data: It generalizes to new inputs and even predicts anatomical features like neuron type and location.
  • Unlimited Experiments: The digital twin allows researchers to run virtual brain experiments far faster than in living subjects.

Source: Stanford

Much as a pilot might practice maneuvers in a flight simulator, scientists might soon be able to perform experiments on a realistic simulation of the mouse brain.

In a new study, Stanford Medicine researchers and collaborators used an artificial intelligence model to build a “digital twin” of the part of the mouse brain that processes visual information.

Image caption: The digital twin revealed which similarities mattered the most. Credit: Neuroscience News

The digital twin was trained on large datasets of brain activity collected from the visual cortex of real mice as they watched movie clips. It could then predict the response of tens of thousands of neurons to new videos and images.

Digital twins could make studying the inner workings of the brain easier and more efficient.

“If you build a model of the brain and it’s very accurate, that means you can do a lot more experiments,” said Andreas Tolias, PhD, Stanford Medicine professor of ophthalmology and senior author of the study published April 10 in Nature.

“The ones that are the most promising you can then test in the real brain.”

The lead author of the study is Eric Wang, PhD, a medical student at Baylor College of Medicine.

Beyond the training distribution

Unlike previous AI models of the visual cortex, which could simulate the brain’s response to only the type of stimuli they saw in the training data, the new model can predict the brain’s response to a wide range of new visual input. It can even surmise anatomical features of each neuron.

The new model is an example of a foundation model, a relatively new class of AI models capable of learning from large datasets, then applying that knowledge to new tasks and new types of data, or what researchers call “generalizing outside the training distribution.”

(ChatGPT is a familiar example of a foundation model that can learn from vast amounts of text to then understand and generate new text.)

“In many ways, the seed of intelligence is the ability to generalize robustly,” Tolias said. “The ultimate goal, the holy grail, is to generalize to scenarios outside your training distribution.”

Mouse movies

To train the new AI model, the researchers first recorded the brain activity of real mice as they watched movies, made-for-people movies. Ideally, the films would approximate what the mice might see in natural settings.

“It’s very hard to sample a realistic movie for mice, because nobody makes Hollywood movies for mice,” Tolias said. But action movies came close enough.

Mice have low-resolution vision, similar to our peripheral vision, meaning they mainly see movement rather than details or color. “Mice like movement, which strongly activates their visual system, so we showed them movies that have a lot of action,” Tolias said.

Over many short viewing sessions, the researchers recorded more than 900 minutes of brain activity from eight mice watching clips of action-packed movies, such as Mad Max. Cameras monitored their eye movements and behavior.

The researchers used the aggregated data to train a core model, which could then be customized into a digital twin of any individual mouse with a bit of additional training.
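The snippet below is a minimal, hypothetical sketch of that two-step workflow, assuming a PyTorch-style setup. The module names, architecture, and loss are illustrative stand-ins, not the model actually used in the study.

```python
# Hypothetical sketch of "pretrain a shared core, then personalize per mouse".
# Architecture, shapes, and loss are illustrative assumptions, not the study's code.
import torch
import torch.nn as nn

class CoreModel(nn.Module):
    """Shared 'core': maps visual frames to a latent feature space."""
    def __init__(self, n_features: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv2d(32, n_features, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.features(frames)               # (batch, n_features)

class MouseReadout(nn.Module):
    """Per-mouse head: latent features -> predicted firing rates."""
    def __init__(self, n_features: int, n_neurons: int):
        super().__init__()
        self.linear = nn.Linear(n_features, n_neurons)

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(latents))    # non-negative rates

# Step 1: pretrain the core on recordings pooled across many mice (omitted here).
core = CoreModel()

# Step 2: personalize -- freeze the shared core, fit a small readout on one mouse.
for p in core.parameters():
    p.requires_grad = False
readout = MouseReadout(n_features=128, n_neurons=10_000)
optimizer = torch.optim.Adam(readout.parameters(), lr=1e-3)

frames = torch.randn(8, 1, 64, 64)         # stand-in batch of video frames
responses = torch.rand(8, 10_000)          # stand-in recorded neural activity
predicted = readout(core(frames))
loss = nn.functional.poisson_nll_loss(predicted, responses, log_input=False)
loss.backward()
optimizer.step()
```

The point of the split is that the expensive shared core is trained once on the pooled recordings, while each per-mouse readout needs only a modest amount of that individual animal's data.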

Accurate predictions

These digital twins were able to closely simulate the neural activity of their biological counterparts in response to a variety of new visual stimuli, including videos and static images. The large quantity of aggregated training data was key to the digital twins’ success, Tolias said.

“They were impressively accurate because they were trained on such large datasets.”

Though trained only on neural activity, the new models could generalize to other types of data.

The digital twin of one particular mouse was able to predict the anatomical locations and cell types of thousands of neurons in the visual cortex, as well as the connections between those neurons.

The researchers verified these predictions against high-resolution electron microscope imaging of that mouse’s visual cortex, which was part of a larger project to map the structure and function of the mouse visual cortex in unprecedented detail.

The results of that project, known as MICrONS, were published simultaneously in Nature.
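A rough illustration of that kind of verification, assuming the twin provides a learned per-neuron embedding and the electron-microscopy reconstruction supplies ground-truth labels: train a simple classifier to decode cell type from the embedding. This is a hypothetical sketch, not the paper's actual analysis.

```python
# Illustrative sketch (not the study's method): do the twin's per-neuron
# embeddings carry anatomical information, e.g. cell type?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_neurons, embed_dim = 5_000, 128

# Stand-ins: in practice the embeddings would come from the trained twin and
# the labels from the EM reconstruction (e.g. excitatory vs. inhibitory).
neuron_embeddings = rng.normal(size=(n_neurons, embed_dim))
em_cell_types = rng.integers(0, 2, size=n_neurons)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, neuron_embeddings, em_cell_types, cv=5)
print(f"cross-validated cell-type decoding accuracy: {scores.mean():.2f}")
```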

Opening the black box

Because a digital twin can function long past the lifespan of a mouse, scientists could perform a virtually unlimited number of experiments on essentially the same animal.

Experiments that would take years could be completed in hours, and millions of experiments could run simultaneously, speeding up research into how the brain processes information and the principles of intelligence.

“We’re trying to open the black box, so to speak, to understand the brain at the level of individual neurons or populations of neurons and how they work together to encode information,” Tolias said.

In fact, the new models are already yielding new insights. In another related study, also simultaneously published in Nature, researchers used a digital twin to discover how neurons in the visual cortex choose other neurons with which to form connections.

Scientists had known that similar neurons tend to form connections, like people forming friendships. The digital twin revealed which similarities mattered the most. Neurons prefer to connect with neurons that respond to the same stimulus (the color blue, for example) over neurons that respond to the same area of visual space.

“It’s like someone selecting friends based on what they like and not where they are,” Tolias said. “We learned this more precise rule of how the brain is organized.”
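The toy sketch below illustrates the kind of comparison involved, under the assumption that one has predicted tuning vectors, receptive-field positions, and measured connectivity for a set of neurons. The variable names and random stand-in data are hypothetical; this is not the study's analysis.

```python
# Toy sketch: which better predicts who connects to whom, feature-tuning
# similarity or receptive-field proximity? All data here are random stand-ins.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
tuning = rng.normal(size=(n, 16))            # stand-in feature-tuning vectors
rf_xy = rng.uniform(size=(n, 2))             # stand-in receptive-field centers
connected = rng.integers(0, 2, size=(n, n))  # stand-in connectivity matrix (from EM)

# Pairwise predictors: tuning similarity and spatial proximity.
tuning_sim = np.corrcoef(tuning)                                  # (n, n)
distances = np.linalg.norm(rf_xy[:, None] - rf_xy[None, :], axis=-1)
proximity = -distances                                            # closer = larger

iu = np.triu_indices(n, k=1)                                      # unique neuron pairs
y = connected[iu]

for name, predictor in [("tuning similarity", tuning_sim[iu]),
                        ("spatial proximity", proximity[iu])]:
    auc = roc_auc_score(y, predictor)
    print(f"{name}: connection-prediction AUC = {auc:.2f}")
```

With real data, the finding described above corresponds to tuning similarity being the stronger predictor of connectivity; with the random stand-ins here, both scores hover near chance.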

The researchers plan to extend their modeling into other brain areas and to animals with more advanced cognitive capabilities, including primates.

“Eventually, I believe it will be possible to build digital twins of at least parts of the human brain,” Tolias said. “This is just the tip of the iceberg.”

Researchers from the University of Göttingen and the Allen Institute for Brain Science contributed to the work.

Funding: The study received funding from the Intelligence Advanced Research Projects Activity, a National Science Foundation NeuroNex grant, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke (grant U19MH114830), the National Eye Institute (grant R01 EY026927 and Core Grant for Vision Research T32-EY-002520-37), the European Research Council and the Deutsche Forschungsgemeinschaft.

About this AI research news

Author: Nina Bai
Source: Stanford
Contact: Nina Bai – Stanford
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Foundation model of neural activity predicts response to new stimulus types” by Andreas Tolias et al. Nature


Abstract

Foundation model of neural activity predicts response to new stimulus types

The complexity of neural circuits makes it challenging to decipher the brain’s algorithms of intelligence.

Recent breakthroughs in deep learning have produced models that accurately simulate brain activity, enhancing our understanding of the brain’s computational objectives and neural coding.

However, it is difficult for such models to generalize beyond their training distribution, limiting their utility.

The emergence of foundation models trained on vast datasets has introduced a new artificial intelligence paradigm with remarkable generalization capabilities.

Here we collected large amounts of neural activity from visual cortices of multiple mice and trained a foundation model to accurately predict neuronal responses to arbitrary natural videos.

This model generalized to new mice with minimal training and successfully predicted responses across various new stimulus domains, such as coherent motion and noise patterns.

Beyond neural response prediction, the model also accurately predicted anatomical cell types, dendritic features and neuronal connectivity within the MICrONS functional connectomics dataset.

Our work is a crucial step towards building foundation models of the brain. As neuroscience accumulates larger, multimodal datasets, foundation models will reveal statistical regularities, enable rapid adaptation to new tasks and accelerate research.
