Summary: Understanding how brain activity translates to behavior requires tracking individual neurons in real time—a task that is notoriously difficult when the subject is a “wiggly” roundworm or a deforming jellyfish. Researchers have developed three AI-infused tools to solve this “alignment and annotation” bottleneck.
These tools—BrainAlignNet, AutoCellLabeler, and CellDiscoveryNet—can automatically identify and track cells with up to 99.6% accuracy, even as animals warp and move. This breakthrough replaces months of manual labor with near-instant, automated analysis, providing a new model for decoding the nervous systems of living, behaving organisms.
Key Facts
- The “Wiggle” Problem: Tracking neurons in live animals is difficult because every part of the organism can move relative to any other part, constantly shifting the position of brain cells.
- BrainAlignNet: This tool tracks cells through long video series with 99.6% accuracy and works 600 times faster than previous methods.
- AutoCellLabeler: Capable of identifying specific cell types (e.g., the “NSM” neuron) with 98% accuracy using a color-coding system.
- CellDiscoveryNet: Uniquely identifies and clusters cell types across different animals without any human training or supervision, matching the performance of expert humans.
- Broad Application: While developed for the roundworm C. elegans, one of the tools (BrainAlignNet) has already been successfully applied to tracking the more complex, deforming nervous system of the jellyfish C. hemisphaerica.
Source: Picower Institute at MIT
Understanding the connection between behavior and brain cell activity is a major goal of neuroscience. To make progress, neuroscientists often choose simple, transparent lab animals because it’s possible to see all their neurons fluoresce to indicate their electrical activity as the animals behave. But visibility isn’t enough.
Precisely tracking each cell’s position and identity as the animals wiggle and warp during their complex movements is a huge challenge. In a new study in eLife, MIT neuroscientists debut three AI-infused tools to solve the problem.
“In a live behaving animal, we can now keep track of neurons over time and even determine the exact identities of most neurons. This is essential for our goal of relating brain activity to behavior,” said study senior author Steven Flavell, associate professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences, and an HHMI Investigator.
The three tools are “BrainAlignNet,” which can keep track of cells throughout a long time series of images, such as a video; “AutoCellLabeler,” which can identify the cell types in each image, if cued with some initial training; and “CellDiscoveryNet,” which can identify the cell types without any training or supervision.
The tools have largely ended the lab’s need to choose between speed and accuracy when labeling cells in its samples, Flavell said. They also provide a potential model for how other labs working with large series of images—whether of human tissues or samples from other organisms—can approach the problem of identifying cell types and keeping track of them across many images.
“People are swimming in microscopy data these days,” Flavell said. “Automatically identifying all of the cells in each image is a problem that a lot of people are grappling with.”
Indeed, while Flavell’s lab focuses on decoding brain and behavior in the roundworm C. elegans, the study applied BrainAlignNet to C. hemisphaerica jellyfish in the lab of Picower Institute colleague and study co-author Brady Weissbourd. Weissbourd said the tool has been a big help in enabling his lab to extract neural activity data from videos of the animals as they exhibit behaviors (albeit while gently constrained under the coverslip of a slide).
“They call it a jellyfish for a reason,” said Weissbourd, an assistant professor of Biology and Brain and Cognitive Sciences.
“Any part of it can move relative to any other part of it. We’ve collected videos, but one of our major bottlenecks was figuring out how to actually extract neural activity data from those videos because all of the neurons are moving around arbitrarily relative to each other. The tool helped us to register our videos to be able to extract neural activity from them.”
Bottlenecks begone
Flavell’s lab faced a similar bottleneck. Back in 2022, when it was working on major studies of brainwide activity and serotonin’s influence during behavior, individuals with months of training had to spend up to five hours annotating each cell’s identity in each worm’s video recording.
That was the case even though each neuron was highlighted with NeuroPAL, a comprehensive four-color-channel barcoding system originally invented at Columbia University. Lab members were despairing over how long it would take to annotate all their data, and when Flavell looked into outsourcing the task, he reported to his lab members at a meeting, the estimates ran into six figures.
The meeting was late in the week. By early the following week, study lead author Adam Atanas, a former graduate student in the lab, walked into Flavell’s office with the first version of AutoCellLabeler.
Each tool leverages existing underlying neural network architectures that Atanas and co-authors then optimized, tweaked, and refined to specifically address the alignment and annotation problems. Some of the tools require training data; CellDiscoveryNet does not.
Most importantly, Flavell said, the researchers did not need to explicitly direct the neural networks to look at specific criteria (cell colors, shapes, positions) to do their jobs. The networks themselves could learn which features in the images would lead to success at their tasks, such as aligning cells over time or annotating a cell’s identity.
Each tool attacks the “alignment and annotation” problem in different ways, but they’ve all been refined to the point where their results are highly accurate, the researchers report.
- BrainAlignNet rigorously and quickly solves only the alignment problem (“Is the cell that was here in this image now over here in that image?”). It works 600 times faster than the lab’s prior method, registering images with single-pixel accuracy and linking neurons over time with 99.6% accuracy compared to ground truth.
- AutoCellLabeler takes on the job of actually identifying each type of cell in an image (“Is this the neuron ‘NSM’?”). The tool requires training on human-annotated data but works well even without the full four colors of NeuroPAL labeling: with NeuroPAL it was 98 percent accurate, and its accuracy dropped only slightly when samples were labeled with just two colors.
- CellDiscoveryNet can align and cluster fluorescently labeled cell types across different animals (“Is this neuron in worm A the same cell type as this neuron in worm B?”) without any supervision or training. Its performance essentially matched that of well-trained human labelers.
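To make the alignment-and-linking idea concrete, frame-to-frame neuron tracking can be illustrated as an optimal assignment between detected cell positions in consecutive video frames. The sketch below is purely illustrative and is not the study’s method (which trains a deep registration network); the cell positions are simulated, and the matching uses SciPy’s Hungarian-algorithm solver on raw distances.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical neuron centroids in frame t: a 5x4 grid of 20 cells,
# spaced widely so the true correspondence is unambiguous.
xs, ys = np.meshgrid(np.arange(5) * 10.0, np.arange(4) * 10.0)
frame_t = np.stack([xs.ravel(), ys.ravel()], axis=1)  # shape (20, 2)

# Frame t+1: the animal bends, so every cell shifts slightly, and the
# detector returns the cells in an arbitrary (shuffled) order.
perm = rng.permutation(20)
frame_t1 = frame_t[perm] + rng.normal(0.0, 0.5, size=(20, 2))

# Cost matrix: Euclidean distance between every (t, t+1) pair of cells.
cost = np.linalg.norm(frame_t[:, None, :] - frame_t1[None, :, :], axis=-1)

# A one-to-one assignment minimizing total distance (Hungarian algorithm)
# links each cell in frame t to its counterpart in frame t+1.
rows, cols = linear_sum_assignment(cost)

# The recovered links should exactly undo the detector's shuffle.
print(bool(np.array_equal(cols, np.argsort(perm))))
```

In a real recording, nonrigid deformation means raw distances are not a trustworthy cost; the study’s contribution is precisely that a learned registration network first “un-warps” each image so that a simple linkage step like this becomes reliable.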
There is further to go, Flavell and Weissbourd said. Weissbourd, for instance, is working on labeling all the cells in the jellyfish (only one type, making up 10 percent of the total, was labeled in this study). He is also developing a microscope capable of imaging the jellies as they swim freely.
In addition to Atanas, Flavell, and Weissbourd, the study’s other authors are Alicia Kun-Yang Lu, Brian Goodell, Jungsoo Kim, Saba Baskoylu, Di Kang, Talya Kramer, Eric Bueno, Flossie Wan, and Karen Cunningham.
Funding: Funding from sources including the National Institutes of Health, the National Science Foundation, the McKnight Foundation, The Alfred P. Sloan Foundation, The Howard Hughes Medical Institute, and the Freedom Together Foundation supported the research.
Key Questions Answered:
Q: Why is tracking neurons in a moving animal so hard, and how much time does the AI save?
A: Imagine trying to keep your eye on a single grape in a bowl of Jell-O while someone shakes it—that’s what it’s like for neuroscientists to track a neuron in a moving worm. Before this AI, a single video could take a human expert five hours to label by hand. This AI does it in seconds.
Q: Do these tools only work on worms and jellyfish?
A: While these specific tools were built for transparent lab animals like worms and jellyfish (where we can see every neuron), the underlying AI architecture can be adapted for any large series of microscopic images, including human tissue samples.
Q: Can any of the tools work without human-labeled training data?
A: Yes. One of the tools, CellDiscoveryNet, can actually cluster and identify cell types across different animals without any human help or “training data” at all, essentially discovering the organization of the nervous system by itself.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by our staff.
About this AI and neuroscience research news
Author: David Orenstein
Source: Picower Institute at MIT
Contact: David Orenstein – Picower Institute at MIT
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Deep neural networks to register and annotate cells in moving and deforming nervous systems” by Adam A. Atanas, Alicia Kun-Yang Lu, Brian Goodell, Jungsoo Kim, Saba N. Baskoylu, Di Kang, Talya S. Kramer, Eric Bueno, Flossie K. Wan, Karen L. Cunningham, Brady Weissbourd, and Steven W. Flavell. eLife
DOI: 10.7554/eLife.108159.2
Abstract
Deep neural networks to register and annotate cells in moving and deforming nervous systems
Aligning and annotating the heterogeneous cell types that make up complex cellular tissues remains a major challenge in the analysis of biomedical imaging data.
Here, we present a series of deep neural networks that allow for automatic non-rigid registration and cell identification, developed in the context of freely moving and deforming invertebrate nervous systems.
A semi-supervised learning approach was used to train a Caenorhabditis elegans registration network (BrainAlignNet) that aligns pairs of images of the bending C. elegans head with single-pixel-level accuracy.
When incorporated into an image analysis pipeline, this network can link neurons over time with 99.6% accuracy. This network could also be readily purposed to align neurons from the jellyfish Clytia hemisphaerica, an organism with a vastly different body plan and set of movements.
A separate network (AutoCellLabeler) was trained to annotate >100 neuronal cell types in the C. elegans head based on multi-spectral fluorescence of genetic markers. This network labels >100 different cell types per animal with 98% accuracy, exceeding individual human labeler performance by aggregating knowledge across manually labeled datasets.
Finally, we trained a third network (CellDiscoveryNet) to perform unsupervised discovery of >100 cell types in the C. elegans nervous system: by comparing multi-spectral imaging data from many animals, it can automatically identify and annotate cell types without using any human labels. The performance of CellDiscoveryNet matched that of trained human labelers.
These tools should be immediately useful for a wide range of biological applications and should be straightforward to generalize to many other contexts requiring alignment and annotation of dense heterogeneous cell types in complex tissues.

