Understanding the connection between behavior and brain cell activity is a major goal of neuroscience. To make progress, neuroscientists often choose simple, transparent lab animals because it's possible to see all their neurons fluoresce to indicate their electrical activity as the animals behave. But visibility isn't enough. Precisely tracking each cell's position and identity as the animals wiggle and warp during their complex movements is a huge challenge. In a new study in eLife, MIT neuroscientists debut three AI-infused tools to solve the problem.
"In a live behaving animal, we can now keep track of neurons over time and even determine the exact identities of most neurons. This is essential for our goal of relating brain activity to behavior," said study senior author Steven Flavell, associate professor in The Picower Institute for Learning and Memory and MIT's Department of Brain and Cognitive Sciences, and an HHMI Investigator.
The three tools are "BrainAlignNet," which can keep track of cells throughout a long time series of images, such as a video; "AutoCellLabeler," which can identify the cell types in each image, if cued with some initial training; and "CellDiscoveryNet," which can identify the cell types without any training or supervision.
The tools have largely ended the lab's need to choose between speed and accuracy when labeling cells in its samples, Flavell said. They also offer a potential model for how other labs working with large series of images, whether of human tissues or samples from other organisms, can approach the problem of identifying cell types and keeping track of them across many images.
"People are swimming in microscopy data these days," Flavell said. "Automatically identifying all of the cells in each image is a problem that a lot of people are grappling with."
Indeed, while Flavell's lab focuses on decoding brain and behavior in the roundworm C. elegans, the study applied BrainAlignNet to C. hemisphaerica jellyfish in the lab of Picower Institute colleague and study co-author Brady Weissbourd. Weissbourd said the tool has been a big help in enabling his lab to extract neural activity data from videos of the animals as they exhibit behaviors (albeit while gently constrained under the coverslip of a slide).
"They call it a jellyfish for a reason," said Weissbourd, an assistant professor of Biology and Brain and Cognitive Sciences. "Any part of it can move relative to any other part of it. We've collected videos, but one of our major bottlenecks was figuring out how to actually extract neural activity data from those videos because all of the neurons are moving around arbitrarily relative to each other. The tool helped us to register our videos to be able to extract neural activity from them."
Bottlenecks begone
Similarly, back in 2022, when Flavell's lab was working on major studies of brainwide activity and serotonin's influence during behavior, individuals with months of training had to spend up to five hours annotating cell identities in each worm's video recording. That was despite each neuron being highlighted by NeuroPAL, a comprehensive four-color-channel barcoding system originally invented at Columbia University. Lab members were despairing about how long it would take to annotate all their data, and when Flavell looked into outsourcing the task, he reported to his lab in a meeting, the estimates ran into six figures.
The meeting was late in the week. By early the following week, study lead author Adam Atanas, a former graduate student in the lab, walked into Flavell's office with the first version of AutoCellLabeler.
Each tool leverages existing neural network architectures that Atanas and co-authors optimized, tweaked, and refined to specifically address the alignment and annotation problems. Some of the tools require training data; CellDiscoveryNet does not. Most importantly, Flavell said, the researchers did not need to explicitly direct the networks to look at specific criteria (cell colors, shapes, positions) to do their jobs. The networks themselves learned which features in the images would lead to success at each task, such as aligning cells over time or annotating a cell's identity.
Each tool attacks the "alignment and annotation" problem in different ways, but they've all been refined to the point where their results are highly accurate, the researchers report.
- BrainAlignNet rigorously and quickly solves only the alignment problem ("Is the cell that was here in this image now over here in that image?"). It works 600 times faster than the lab's prior method, with 99.6 percent accuracy at single-pixel precision compared to ground truth.
- AutoCellLabeler takes on the job of actually identifying each type of cell in an image ("Is this neuron 'NSM'?"). The tool requires training on human-annotated data but works well even without the full four colors of NeuroPAL labeling. With NeuroPAL it was 98 percent accurate, and accuracy dropped only slightly when samples were labeled with just two colors.
- CellDiscoveryNet can align and cluster fluorescently labeled cell types across different animals ("Is this neuron in worm A the same cell type as this neuron in worm B?") without any supervision or training. Its performance essentially matched that of well-trained human labelers.
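To make the alignment problem concrete, here is a deliberately naive baseline, not the paper's method: matching cells between two video frames by nearest-neighbor distance on their centroids. BrainAlignNet instead learns a non-rigid registration, which is needed precisely because simple geometric matching like this breaks down when the animal bends and cells slide past one another. All cell names and coordinates below are hypothetical.

```python
# Toy sketch of the cell-alignment problem, NOT BrainAlignNet itself.
# Greedy nearest-neighbor matching works only for small, rigid shifts;
# it fails under the large deformations seen in a crawling worm or a
# swimming jellyfish, which motivates a learned registration network.
import math

def nearest_neighbor_match(frame_a, frame_b):
    """Greedily match each cell in frame_a to its closest unclaimed
    cell in frame_b by Euclidean distance. Returns {id_a: id_b}."""
    matches = {}
    taken = set()
    for id_a, (xa, ya) in frame_a.items():
        best_id, best_d = None, float("inf")
        for id_b, (xb, yb) in frame_b.items():
            if id_b in taken:
                continue
            d = math.hypot(xa - xb, ya - yb)
            if d < best_d:
                best_id, best_d = id_b, d
        matches[id_a] = best_id
        taken.add(best_id)
    return matches

# Hypothetical cell centroids in two consecutive video frames,
# with the same two cells shifted slightly between frames:
frame1 = {"cellA": (10.0, 12.0), "cellB": (40.0, 8.0)}
frame2 = {"n1": (11.0, 13.0), "n2": (39.0, 9.0)}
matches = nearest_neighbor_match(frame1, frame2)
print(matches)  # {'cellA': 'n1', 'cellB': 'n2'}
```

This baseline also assigns greedily in dictionary order, so an early bad match can cascade; a real pipeline would at minimum use an optimal assignment (e.g., the Hungarian algorithm) over all pairwise distances.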
There is further to go, Flavell and Weissbourd said. Weissbourd, for instance, is working on labeling all the cells in the jellyfish (only one type, making up 10 percent of the total, was labeled in this study). He is also developing a microscope capable of imaging the jellies as they swim freely.
In addition to Atanas, Flavell, and Weissbourd, the study's other authors are Alicia Kun-Yang Lu, Brian Goodell, Jungsoo Kim, Saba Baskoylu, Di Kang, Talya Kramer, Eric Bueno, Flossie Wan, and Karen Cunningham.
Funding from sources including the National Institutes of Health, the National Science Foundation, the McKnight Foundation, The Alfred P. Sloan Foundation, The Howard Hughes Medical Institute, and the Freedom Together Foundation supported the research.