Love, pain, joy, fear, desire: the full spectrum of emotion resides in facial expression. We grasp this almost intuitively. However, we still lack a quantifiable understanding of the nuanced relationship between the face and the brain. We haven't yet found a way to precisely measure and reliably interpret the full complexity of facial expressions in mice, let alone humans. Or have we? Cold Spring Harbor Laboratory (CSHL) Assistant Professor Helen Hou and her team have developed a new tool that should help set science and medicine off in that direction.
In a study published in Nature Neuroscience, the Hou lab introduces a discovery platform called Cheese3D. This innovative camera and computer vision system tracks even the subtlest changes in a mouse's facial expression. Then, using AI, it quantifies those changes so scientists can study and interpret them methodically.
Where did the idea come from? According to Hou, it was born of necessity. "When I started my lab, we were really excited to capture the rich repertoire of facial behavior," she says. Experienced veterinarians can often "read" an animal's well-being from its face. However, until now, there hasn't been a reliable, automated way to measure facial expression with a level of detail that might offer insight into brain function.
Over the past three decades, CSHL has helped establish mice as vital models for studying the brain and how it controls behavior. But mouse and human faces differ in obvious ways. For one, a mouse's face is cone-shaped.
To confront this challenge, the Hou lab worked with CSHL's Core Facilities. Together, they rigged up a high-tech system of six tiny cameras that simultaneously film a mouse's facial movements from multiple perspectives. Machine learning models then stitch the views together like an expert film editor. Meanwhile, the rig also tracks electrical activity in the mouse's brain.
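The geometry behind multi-camera rigs like this can be sketched in miniature. The snippet below shows the textbook linear (DLT) triangulation of a single facial keypoint from two camera views; the camera matrices and points are toy values invented for illustration, and nothing here reflects Cheese3D's actual implementation.

```python
# Illustrative sketch: recovering a 3D keypoint from multiple 2D camera
# views via linear (DLT) triangulation. All matrices/points are toy values;
# this is NOT Cheese3D's actual pipeline, just the underlying geometry.
import numpy as np

def triangulate(proj_mats, pixels):
    """Triangulate one 3D point from >= 2 views.

    proj_mats: list of 3x4 camera projection matrices
    pixels:    list of (u, v) pixel coordinates of the same keypoint
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on the 3D point X:
        #   u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Solve A @ X = 0 via SVD: X is the right singular vector with the
    # smallest singular value, in homogeneous coordinates.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy pinhole cameras observing the point (1, 2, 10)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])            # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])  # shifted in x
point = np.array([1.0, 2.0, 10.0, 1.0])

def project(P, X):
    p = P @ X
    return (p[0] / p[2], p[1] / p[2])

est = triangulate([P1, P2], [project(P1, point), project(P2, point)])
print(est)  # close to [1, 2, 10]
```

With six cameras instead of two, the same least-squares system simply gains more rows, which is what makes the extra views improve accuracy.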
Of course, it wasn't merely a matter of having mice "say cheese." To demonstrate the system's accuracy, the Hou lab used Cheese3D to monitor several important behaviors, including eating. Perhaps most crucially, they ran the system on mice that had been placed under anesthesia. Impressively, they could use Cheese3D to measure how deeply "awake" or "asleep" the mice were at a given moment. In collaboration with CSHL's Borniger lab, they found that Cheese3D matched the accuracy of gold-standard EEG methods. Plus, it did so without disturbing the animal.
"Very subtle changes in facial muscle tone teach us a lot," Hou explains. "So, we can predict depth of anesthesia in a non-invasive way using the face."
Given the potential clinical implications, Hou is also starting to look into facial expressions during specific disease states. Additionally, she points out, "facial movement is one of the first milestones of development. We can smile long before we can crawl or walk. So, how do we learn to move our faces socially?" Any new answer would have major implications for autism and behavioral therapy. With Cheese3D, Hou and colleagues Kyle Daruwalla and Irene Nozal Martin have built a new way to ask the question.