Using AI to Decode Facial Behavior and Brain Health

Summary: While humans intuitively “read” emotions on a face, science has struggled to quantify the nuanced relationship between facial muscle movements and internal brain states. Now, researchers have launched Cheese3D, an AI-powered discovery platform.

This system uses high-speed cameras and machine learning to track subtle facial expressions in mice with such precision that it can predict the depth of anesthesia as accurately as an EEG, without ever touching the animal.

Key Facts

  • The Cheese3D Rig: The system uses six synchronized tiny cameras to film a mouse’s face from multiple perspectives, overcoming the challenge of the mouse’s cone-shaped anatomy.
  • AI Synthesis: Machine learning models act as an “expert film editor,” compiling the 2D footage into a 3D dataset that quantifies minute changes in muscle tone and expression.
  • EEG-Level Accuracy: In a landmark demonstration, Cheese3D measured the “depth” of anesthesia by tracking facial muscle tone. The results matched the gold-standard accuracy of invasive EEG methods but remained entirely non-invasive.
  • Developmental Milestones: Facial movement is one of life’s first milestones; infants smile long before they crawl. This tool offers a new way to study how social communication develops and how it may be disrupted in conditions like autism.

Source: CSHL

Love, pain, joy, fear, desire: the full spectrum of emotion resides in facial expression. We grasp this almost intuitively. However, we still lack a quantifiable understanding of the nuanced relationship between the face and the brain.

We haven’t yet found a way to precisely measure and reliably interpret the full complexity of facial expressions in mice, let alone humans. Or have we?

Subtle changes in facial muscle tone can teach us how the brain learns to move socially. Credit: Neuroscience News

Cold Spring Harbor Laboratory (CSHL) Assistant Professor Helen Hou and her team have developed a new tool that should help science and medicine begin answering that question.

In a study published in Nature Neuroscience, the Hou lab introduces a discovery platform called Cheese3D.

This innovative camera and computer vision system tracks even the subtlest changes in mouse facial expression. Then, using AI, it quantifies those changes so scientists can methodically study and interpret them.

Where did the idea come from? According to Hou, it was born of necessity. “When I started my lab, we were really excited to capture the rich repertoire of facial behavior,” she says. Experienced veterinarians can often “read” an animal’s well-being from its face. However, until now, there hasn’t been a reliable, automated way to measure facial expression with a level of detail that might offer insight into brain function.

Over the past three decades, CSHL has helped establish mice as vital models for studying the brain and how it controls behavior. But there are clear differences between human and mouse faces. For one, a mouse’s face is cone-shaped.

To confront this challenge, the Hou lab worked with CSHL’s Core Facilities. Together, they rigged up a high-tech system of six tiny cameras that simultaneously film a mouse’s facial movements from multiple perspectives. Machine learning models compile the movies together like an expert film editor. Meanwhile, the rig also tracks electrical activity in the mouse’s brain.
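
The core geometric step of merging synchronized camera views into 3D positions can be illustrated with a toy example. The sketch below is hypothetical and not the authors’ code: it triangulates a single facial keypoint from two calibrated views using the standard direct linear transform (DLT), whereas Cheese3D’s real pipeline adds trained keypoint detectors, six cameras, and full calibration.

```python
import numpy as np

def triangulate_dlt(projections, points_2d):
    """Recover a 3D point from two or more calibrated views via the
    direct linear transform (DLT). `projections` is a list of 3x4
    camera projection matrices; `points_2d` holds the matching
    (x, y) pixel observations from each view."""
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the point.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # The homogeneous 3D point is the null vector of A (via SVD).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Toy setup: two cameras with identical intrinsics, 10 cm apart.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# A hypothetical "whisker pad" keypoint half a meter from the rig.
point = np.array([0.02, -0.01, 0.5, 1.0])
obs = [(P @ point)[:2] / (P @ point)[2] for P in (P1, P2)]

print(triangulate_dlt([P1, P2], obs))  # ≈ [0.02, -0.01, 0.5]
```

With noiseless observations, the DLT recovers the keypoint exactly in absolute world units, which is why calibrated multi-camera rigs can report sub-millimeter facial motion.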

Of course, it wasn’t merely a matter of having mice “say cheese.” To demonstrate the system’s accuracy, the Hou lab used Cheese3D to monitor several important behaviors, including eating. Perhaps most crucially, they ran the system on mice that had gone under anesthesia.

Impressively, Cheese3D could measure how deeply “awake” or “asleep” the mice were at a given moment. In collaboration with CSHL’s Borniger lab, the team matched the accuracy of gold-standard EEG methods. Plus, they did it without disturbing the animal.

“Very subtle changes in facial muscle tone teach us a lot,” Hou explains. “So, we can predict depth of anesthesia in a non-invasive way using the face.”
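
In spirit, this kind of prediction reduces to a supervised classifier over facial-feature summaries. The sketch below is an illustrative assumption, not the published method: it trains a minimal logistic regression on synthetic “muscle tone” features (lower, flatter tone standing in for deeper anesthesia).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features (e.g., ear angle, whisker-pad
# motion energy). Deep anesthesia is simulated as lower tone.
n = 200
light = rng.normal(loc=[1.0, 0.8], scale=0.3, size=(n, 2))
deep = rng.normal(loc=[0.2, 0.1], scale=0.3, size=(n, 2))
X = np.vstack([light, deep])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = light, 0 = deep

# Minimal logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
print(f"training accuracy: {np.mean(pred == y):.2f}")
```

The real result is stronger than this toy suggests: Cheese3D’s features were predictive enough to track anesthetic depth at EEG-level accuracy, frame by frame, from camera data alone.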

Given the potential clinical implications, Hou is also starting to look into facial expressions during specific disease states. Additionally, she points out, “facial movement is one of the first milestones of development. We can smile long before we can crawl or walk. So, how do we learn to move our faces socially?”

Any new answer would have major implications for autism research and behavioral therapy. With Cheese3D, Hou and colleagues Kyle Daruwalla and Irene Nozal Martin have built a new way to ask the question.

Key Questions Answered:

Q: Can a mouse really “express” emotion like a human?

A: While they don’t smile at jokes, mice have a rich repertoire of facial behaviors related to pain, pleasure, and well-being. Veterinarians have “read” these for years; Cheese3D simply gives us the mathematical “dictionary” to translate those movements into data about the brain.

Q: Why do we need a camera if we already have EEGs?

A: EEGs require electrodes to be attached to the scalp or implanted in the brain, which can be stressful or physically disruptive. Cheese3D allows scientists to monitor brain states (like consciousness or stress) from across the room, ensuring the animal’s behavior is natural and undisturbed.

Q: Could this technology eventually be used on humans?

A: That is the ultimate goal. If we can map exactly how facial muscles correlate to specific brain circuits in mice, we can develop similar non-invasive tools for humans to help diagnose behavioral disorders or monitor patients under anesthesia more safely.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this neurodevelopment research news

Author: Samuel Diamond
Source: CSHL
Contact: Samuel Diamond – CSHL
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Cheese3D enables sensitive detection and analysis of whole-face movement in mice” by Kyle Daruwalla, Irene Nozal Martin, Linghua Zhang, Diana Naglič, Andrew Frankel, Catherine Rasgaitis, Rubin Zhao, Xinyan Zhang, Zainab Ahmad, Jeremy C. Borniger & Xun Helen Hou. Nature Neuroscience.
DOI: 10.1038/s41593-026-02262-8


Abstract

Cheese3D enables sensitive detection and analysis of whole-face movement in mice

Facial expressions and movements, from a subtle and ephemeral grimace to vigorous and rapid chewing, offer direct insights into the moment-to-moment changes of neural and physiological processes.

Mice, with discernible facial responses and evolutionarily conserved mammalian facial movement control circuits, provide an ideal model in which to unravel the link between facial movement and underlying states.

However, existing frameworks lack the spatial or temporal resolution to sensitively track all movements of the mouse face because of its small and conical form factor.

We introduce Cheese3D, a computer vision system that captures high-speed 3D motion of the entire mouse face (including ears, eyes, whisker pad and jaw, covering both sides of the face), using a calibrated six-camera array.

The interpretable framework extracts dynamics of anatomically meaningful 3D facial features in absolute world units at sub-mm precision.

The precise face-wide motion data generated by Cheese3D provides clear insights, as shown by proof-of-principle experiments predicting anesthetic depth from changing facial patterns, inferring tooth and muscle anatomy from fast ingestion motions across the entire face, measuring minute differences in movements evoked by brainstem stimulation and relating neural activity to spontaneous facial movements, including expressive features only measurable in 3D (for example, angles of ear motion).

Cheese3D can serve as a discovery tool that renders subtle mouse facial movements as a highly interpretable readout of otherwise hidden processes.
