Virtual Rat with AI Brain Mimics Real Rodent Movement

Summary: Researchers created a virtual rat with an AI brain to study how real rats control movement. Using data from real rats, they trained the AI to mimic behaviors in a physics simulator.

The virtual rat’s neural activations closely matched those of real rats, offering new insights into brain function. This innovation could revolutionize neuroscience and improve robotic control systems.

Key Facts:

  1. Harvard and Google DeepMind developed a virtual rat with an AI brain.
  2. The virtual rat’s brain activity closely mimicked real rat brain activity.
  3. This model could advance neuroscience research and robotic control systems.

Source: Harvard

The agility with which humans and animals move is an evolutionary marvel that no robot has yet been able to closely emulate.

To help probe the mystery of how brains control movement, Harvard neuroscientists have created a virtual rat with an artificial brain that can move around just like a real rodent.  

Harvard and Google DeepMind researchers created a virtual rat using movement data recorded from real rats. Credit: Google DeepMind

Bence Ölveczky, professor in the Department of Organismic and Evolutionary Biology, led a group of researchers who collaborated with scientists at Google’s DeepMind AI lab to build a biomechanically realistic digital model of a rat.

Using high-resolution data recorded from real rats, they trained an artificial neural network – the virtual rat’s “brain” – to control the virtual body in a physics simulator called MuJoCo, where gravity and other forces are present.
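The core loop of any physics simulator is the same: apply forces, integrate the equations of motion, repeat. MuJoCo handles full rigid-body dynamics with contacts and actuators; the toy sketch below only illustrates that loop shape for a single point mass under gravity, using semi-implicit Euler integration (all names and values here are illustrative, not from the paper).

```python
# Toy stand-in for one physics-simulator timestep: a 1-D point mass
# under gravity, advanced with semi-implicit Euler integration.
G = -9.81  # gravitational acceleration, m/s^2

def step(pos, vel, force, mass=1.0, dt=0.002):
    """Advance one timestep: forces (applied + gravity) update the
    velocity, then the new velocity updates the position."""
    acc = force / mass + G
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Drop a 1 kg mass from 1 m with no applied force: it falls.
pos, vel = 1.0, 0.0
for _ in range(500):          # 1 second at 2 ms timesteps
    pos, vel = step(pos, vel, force=0.0)
```

A trained controller would replace `force=0.0` with the network's output at each step, which is how the virtual rat's brain drives its body.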

In a study published in Nature, the researchers reported that activations in the virtual control network accurately predicted neural activity measured in the brains of real rats producing the same behaviors, said Ölveczky, an expert at training (real) rats to learn complex behaviors in order to study their neural circuitry.

The feat represents a new approach to studying how the brain controls movement, Ölveczky said, by leveraging advances in deep reinforcement learning and AI, as well as 3D movement-tracking in freely behaving animals.

The collaboration was “fantastic,” Ölveczky said. “DeepMind had developed a pipeline to train biomechanical agents to move around complex environments. We simply didn’t have the resources to run simulations like those, to train these networks.”

Working with the Harvard researchers was, likewise, “a really exciting opportunity for us,” said co-author and Google DeepMind Senior Director of Research Matthew Botvinick.

“We’ve learned a huge amount from the challenge of building embodied agents: AI systems that not only have to think intelligently, but also have to translate that thinking into physical action in a complex environment.

“It seemed plausible that taking this same approach in a neuroscience context might be useful for providing insights into both behavior and brain function.”

Graduate student Diego Aldarondo worked closely with DeepMind researchers to train the artificial neural network to implement what are called inverse dynamics models, which scientists believe our brains use to guide movement. When we reach for a cup of coffee, for example, our brain quickly calculates the trajectory our arm should follow and translates this into motor commands.
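An inverse dynamics model answers the question "what forces produce this desired motion?" For a single point mass the answer is Newton's second law, F = m·a, with acceleration estimated from the desired trajectory by finite differences. The sketch below is only a one-dimensional illustration of that idea; the paper's network learns the analogous mapping for a full articulated rat body.

```python
# Toy inverse dynamics for a 1-D point mass: given a desired
# trajectory x(t), recover the force that produces it via F = m * a,
# estimating acceleration with central finite differences.
m, dt = 1.0, 0.01

def inverse_dynamics(traj):
    """Return the force F[t] driving the mass through traj[t]."""
    forces = []
    for t in range(1, len(traj) - 1):
        acc = (traj[t + 1] - 2 * traj[t] + traj[t - 1]) / dt**2
        forces.append(m * acc)
    return forces

# Constant-acceleration trajectory x = 0.5 * a * t^2 with a = 2.0:
traj = [0.5 * 2.0 * (k * dt) ** 2 for k in range(100)]
forces = inverse_dynamics(traj)   # each entry ≈ m * 2.0
```

The coffee-cup example above works the same way: the brain's planned arm trajectory plays the role of `traj`, and the motor commands play the role of `forces`.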

Similarly, based on data from actual rats, the network was fed a reference trajectory of the desired movement and learned to produce the forces to generate it. This allowed the virtual rat to imitate a diverse range of behaviors, even ones it hadn’t been explicitly trained on.
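The setup described above is reference tracking: the controller sees where the body should be next and outputs forces to get it there. The actual system learns this with deep reinforcement learning over a full rat body; in the minimal sketch below, a hand-tuned PD (proportional-derivative) feedback controller stands in for the learned network, tracking a 1-D reference in a toy point-mass simulator. The gains `kp` and `kd` are hypothetical values chosen for illustration.

```python
import math

dt, mass = 0.01, 1.0
kp, kd = 400.0, 40.0            # feedback gains (hypothetical values)

def track(reference):
    """Follow a reference trajectory with PD feedback forces."""
    pos, vel, actual = 0.0, 0.0, []
    for ref in reference:
        # Push toward the reference, damp the velocity.
        force = kp * (ref - pos) + kd * (0.0 - vel)
        vel += (force / mass) * dt
        pos += vel * dt
        actual.append(pos)
    return actual

# A slow sinusoidal reference, 4 seconds at 10 ms timesteps:
reference = [0.5 * math.sin(0.5 * k * dt) for k in range(400)]
actual = track(reference)       # closely follows the reference
```

A learned policy generalizes where this hand-tuned controller cannot, which is how the virtual rat could produce behaviors it was never explicitly trained on.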

These simulations may open up an untapped area of virtual neuroscience in which AI-simulated animals, trained to behave like real ones, provide convenient and fully transparent models for studying neural circuits, and even how such circuits are compromised in disease.

While Ölveczky’s lab is interested in fundamental questions about how the brain works, the platform could be used, as one example, to engineer better robotic control systems.

A next step might be to give the virtual animal autonomy to solve tasks akin to those encountered by real rats.

“From our experiments, we have a lot of ideas about how such tasks are solved, and how the learning algorithms that underlie the acquisition of skilled behaviors are implemented,” Ölveczky continued.

“We want to start using the virtual rats to test these ideas and help advance our understanding of how real brains generate complex behavior.”

About this AI research news

Author: Anne Manning
Source: Harvard
Contact: Anne Manning – Harvard
Image: The image is credited to Google DeepMind

Original Research: Closed access.
“A virtual rodent predicts the structure of neural activity across behaviors” by Bence Ölveczky et al. Nature


Abstract

A virtual rodent predicts the structure of neural activity across behaviors

Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviors. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals.

To facilitate this, we built a ‘virtual rodent’, in which an artificial neural network actuates a biomechanically realistic model of the rat in a physics simulator.

We used deep reinforcement learning to train the virtual agent to imitate the behavior of freely moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behavior.

We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent’s network activity than by any features of the real rat’s movements, consistent with both regions implementing inverse dynamics.
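"Better predicted by" comparisons of this kind are typically made by regressing recorded neural activity onto each candidate feature set and comparing held-out variance explained (R²). The sketch below illustrates that comparison on synthetic data, assuming a plain linear regression; the data, dimensions, and seed are all made up for the example, and the paper's actual encoding analysis may differ.

```python
import numpy as np

# Synthetic stand-ins: "neural" activity is driven by the network
# activations, not by the movement features.
rng = np.random.default_rng(0)
T = 1000
net = rng.normal(size=(T, 5))            # "network activations"
move = rng.normal(size=(T, 5))           # "movement features"
neural = net @ rng.normal(size=5) + 0.1 * rng.normal(size=T)

def r_squared(X, y, split=800):
    """Fit a linear model on the first `split` samples, return
    held-out R^2 on the remainder."""
    w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    resid = y[split:] - X[split:] @ w
    return 1 - resid.var() / y[split:].var()

r2_net = r_squared(net, neural)          # high: right features
r2_move = r_squared(move, neural)        # near zero: wrong features
```

In the paper's result, the network activations play the role of `net` and the rat's kinematics play the role of `move`, and the former wins in sensorimotor striatum and motor cortex.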

Furthermore, the network’s latent variability predicted the structure of neural variability across behaviors and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control.

These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behavior and relate it to theoretical principles of motor control.
