Yongtao Liu is an R&D staff member at Oak Ridge National Laboratory's Center for Nanophase Materials Sciences (CNMS). In the Data NanoAnalytics Group, he is helping nanomaterials research move toward experiments that can run with far less handholding. The question driving his work sounds simple but is hard to answer in practice: What changes when an experiment can keep "thinking" after the scientist steps away? He is developing AI-driven "closed-loop" experiments that can plan measurements, read results as they come in and choose the next step faster than a person could.
For Liu, the point is not to take scientists out of the process. It is to remove the slow, repetitive work that keeps good questions waiting in line. His guiding principle is balance. Autonomy should speed up exploration, while expert oversight and clear, explainable reasoning keep results reliable. "Autonomy can help us explore faster," he said, "but it must stay interpretable. We need to understand its choices and whether we should trust it."
Liu came to ORNL in 2021 as a postdoctoral scientist and soon took on a leadership role. In 2024 he became project lead for a physics-informed, data-fusion approach to cross-facility autonomous experiments. The motivation for this work grew from the fact that materials development and scientific discovery rarely depend on a single experiment but instead rely on correlating multiple experiments that provide complementary insights. "This principle also applies to autonomous experimentation," he said.
Turning nanoscience into a closed-loop, self-improving experiment
Many nanoscience experiments follow a manual loop. A researcher sets a condition, measures a response, adjusts and measures again, often hundreds of times. In scanning probe microscopy, a family of microscopes that "feel" a surface with a tiny tip, that loop can become especially repetitive.
Liu's approach replaces much of that repetition with software. Automation runs the instrument and collects data. A type of AI that finds patterns in data evaluates the results in real time and chooses the next best measurement. The goal is not only to generate more data, faster, but also to create an experiment that adapts as it learns.
"The AI can analyze the results in real time and automatically decide what you can do next," Liu added. That speed matters, but so does sensitivity. Algorithms can notice small, consistent changes that are easy to miss when a person is staring at a flood of plots and images.
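The loop Liu describes can be sketched in a few lines of Python. The toy "instrument" and the variance-based acquisition rule below are illustrative stand-ins, not the group's actual software: the point is only the shape of the cycle, in which the system measures, updates what it knows and chooses its own next measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x):
    """Stand-in for the instrument: a hidden response plus noise."""
    return np.sin(3 * x) + 0.1 * rng.standard_normal()

grid = np.linspace(0, 2, 41)   # candidate measurement positions
observed = {}                  # position index -> list of readings

def choose_next():
    """Acquisition rule (invented for illustration): favor unvisited
    points first, then points whose repeated readings disagree most."""
    scores = []
    for i in range(len(grid)):
        readings = observed.get(i, [])
        var = np.var(readings) if len(readings) > 1 else 1.0
        scores.append(var / (1 + len(readings)))
    return int(np.argmax(scores))

# The closed loop: measure, update, decide -- no human in the inner loop.
for _ in range(60):
    i = choose_next()
    observed.setdefault(i, []).append(measure(grid[i]))

coverage = len(observed) / len(grid)
print(f"positions sampled: {coverage:.0%}")
```

Even this toy version shows the appeal: the decision rule visits every candidate position without anyone steering it, then returns on its own to the spots that look least settled.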
When "novelty" might mean noise rather than new science
One major thread in Liu's work is "novelty discovery." The idea is to teach an autonomous experiment to recognize when something looks truly unusual, not just statistically different. In the best case, novelty points to new physics. It can reveal behavior in materials that existing explanations do not cover.
A concrete example comes from Liu's earlier work on halide perovskites. These materials are promising for devices like next-generation solar cells and light emitters. They are also known for complex, sometimes unstable behavior. In conductive atomic force microscopy, often called conductive AFM, his team used novelty detection to flag unusual current-voltage "hysteresis" behavior. Hysteresis means the electrical response depends on the path taken, not just the final setting. It is similar to how bending a paperclip one way changes how it behaves when it is bent back.
The algorithm noticed something specific. The opening of the hysteresis loop happened at different voltages depending on the local grain structure of the thin film. Grains are small crystalline regions, and their boundaries can change how electricity flows. Because this pattern was not well understood, the team applied representation learning, a type of analysis that helps reveal hidden structure in complex datasets. The result was a "partial knowledge map" that linked microstructure and electrical behavior. Some patterns fit existing ideas, while others still do not, and they now point to what should be studied next.
That experience shaped Liu's view of autonomy. Speeding up measurement is only half the job. Autonomous labs also generate massive datasets, and scientists need better tools to interpret them without fooling themselves.
False novelty is a real risk. "The most common false novelty is measurement noise, or experimental artifacts," Liu said. These are glitches caused by the instrument, the environment, or the sample, rather than true material behavior. AI can shine a spotlight on anomalies, but people still must decide whether the spotlight is on a discovery or a mirage.
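One simple guard against that mirage, sketched below under the simplifying assumption that artifacts are transient while real material behavior repeats, is to re-measure before trusting an anomaly. The thresholds and distributions here are invented for illustration and are not Liu's actual criteria.

```python
import numpy as np

rng = np.random.default_rng(1)

baseline = rng.normal(0.0, 1.0, 500)      # typical instrument response
mean, std = baseline.mean(), baseline.std()

def flagged(value, k=4.0):
    """First-pass novelty: a reading far outside the baseline distribution."""
    return abs(value - mean) > k * std

def confirmed(remeasure, n=3):
    """Artifacts rarely repeat; real behavior should survive re-measurement."""
    return all(flagged(remeasure()) for _ in range(n))

glitch = lambda: rng.normal(0.0, 1.0)     # transient artifact: vanishes
shifted = lambda: rng.normal(8.0, 1.0)    # genuinely shifted response

glitch_real = confirmed(glitch)
shift_real = confirmed(shifted)
print("glitch confirmed:", glitch_real, "| shift confirmed:", shift_real)
```

The re-measurement step is the human instinct Liu describes, encoded as a rule: the spotlight only counts if it stays on when you look again.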
From materials training to machine-guided discovery
Liu earned his bachelor's degree at Nankai University and completed his doctorate in materials science and engineering at the University of Tennessee, Knoxville. During graduate school, he ran into a problem that many materials researchers face. The material is complicated, and the number of possible experiments can be overwhelming.
A formative moment came while studying halide perovskite thin films. Researchers believed that many nanoscale features could affect how the films absorb light and conduct charge. Those features included grains, grain boundaries, crystal facets and internal "domain walls," which are borders between regions with different internal structure. The trouble was scale. Manually checking each feature and all their combinations was practically impossible.
"I remember thinking that a better approach would be to explore these structures automatically," he said, "rather than relying on human search." That realization pushed him toward AI-driven autonomous microscopy aimed at finding new structures and behaviors that would otherwise be too slow to uncover.
Building systems that span instruments, disciplines and time scales
CNMS is a Department of Energy user facility, where visiting researchers from around the world rely on its tools. Because the same instruments support many different projects, CNMS especially values methods that "travel well" - software and workflows robust enough to work reliably across a wide range of experiments.
Liu's work sits at the intersection of materials science, instrument engineering and AI. He argues that autonomy works only when those perspectives stay connected. Materials scientists understand what signals are physically plausible and what could be an artifact. Instrument engineers know how measurements can fail or drift. AI researchers build models that can learn from messy, real data without collapsing.
Liu said interdisciplinary teamwork works best when each group brings a complementary strength. Humans define the scientific questions and constraints. AI expands the team's ability to search. In a closed-loop system, that partnership can scan a vast parameter space - testing countless experimental settings and material variations - so the system can continue exploring and refining its approach autonomously, long after the researcher has left the controls.
Linking instruments across facilities into one learning workflow
Liu also leads efforts to build cross-facility closed-loop experiments that connect different tools into one decision-making chain. Such a workflow might include synthesis tools, such as autonomous pulsed laser deposition, which grows thin films by blasting material off a target with laser pulses. It may also include combinatorial growth systems that produce many material variants in a single run. Those samples can then be studied using autonomous scanning probe microscopy.
The central challenge is timing. Microscopes can make decisions in seconds. Making a new sample can take hours or even days. "It's like trying to run a loop while some parts respond instantly and others only update once per hour or per day," Liu said. The engineering problem is to keep the fast tools efficient while still making smart use of the slow ones. He wants the whole system to keep learning, rather than waiting.
Tools for autonomy that scientists can trust
Two of Liu's contributions focus on making autonomy practical and trustworthy in the real world. They are AEcroscopy and the Gated Active Learning Framework.
AEcroscopy is a software-hardware system that controls microscopes while standardizing data acquisition, data processing and experiment logging for automated and autonomous runs. In plain terms, it helps turn a long, repetitive measurement routine into a reliable script. Instead of a person changing a setting and taking the same measurement repeatedly, the system can step through conditions automatically, process the results, and record exactly what happened. This improves both speed and reproducibility, which is the ability to repeat an experiment and get consistent results.
The Gated Active Learning Framework addresses a different risk. AI can be fast enough to multiply a mistake. If the system assumes the data should look a certain way, it can misread results that do not fit. For example, the analysis may assume a signal has one clear peak. The real material might produce two peaks under certain conditions. If the AI is not built to notice the mismatch, it can "learn" the wrong lesson and reinforce its own error.
Liu's gating idea acts like a safety filter. The model is trained only on data that match its assumptions. Strange or out-of-family cases are held back for separate review. In his opinion, this helps autonomy stay honest. "The computer model should do what it can," he added, "instead of pretending it can do everything."
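A minimal version of that gate can be written as a data filter. The single-peak check below is an invented example of a model assumption, not the framework's actual test; the idea it illustrates is that curves matching the assumption feed training, while out-of-family curves are set aside for a person to review.

```python
import numpy as np

def fits_assumption(y):
    """Gate: accept only curves with exactly one interior peak,
    the shape the downstream model assumes (illustrative check)."""
    interior = (y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])
    return int(interior.sum()) == 1

x = np.linspace(-3, 3, 61)
one_peak = np.exp(-x**2)                                  # in-family
two_peak = np.exp(-(x - 1.5)**2) + np.exp(-(x + 1.5)**2)  # out-of-family

train, review = [], []
for curve in (one_peak, two_peak):
    (train if fits_assumption(curve) else review).append(curve)

print(len(train), "curve(s) for training,", len(review), "held for review")
```

Nothing is discarded: the held-back curve may be the interesting one, but the model never "learns" from data it was not built to describe.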
What AI should never do, and what it makes possible
Liu is direct about the limits. "AI should never hide its reasoning or replace critical scientific judgment," he said. If a system cannot explain why it chose an experiment, and if humans cannot question and validate the choice, then the lab is moving fast without knowing where it is going.
At the same time, he sees a unique strength in AI. It can explore enormous experimental landscapes systematically and adaptively, learning which paths are promising while the experiment is still running. "It lets us search spaces that are too big for any one person, or even a whole team, to cover by hand," he said.
His long-term vision is not AI that only predicts - he wants AI that helps scientists reason. In that future, the system proposes tests, spots patterns and challenges assumptions. People keep the work grounded in physical reality.
Training the next kind of scientist
Liu also thinks about what autonomy means for early-career researchers. His advice starts with fundamentals. Build domain knowledge first and learn how the experiment works with your own hands.
"When new students or postdocs enter an AI-enabled lab, the most important mindset is domain-knowledge-driven critical thinking," he said. Before relying on AI, they should learn to run the measurements themselves. That hands-on experience teaches a researcher to recognize when a surprising result is real, and when it is noise, drift, or a software assumption breaking in the wild.
Outside the loop
Even in a career built around autonomous science, Liu's daily work still depends on human choices. He chooses when to focus deeply on coding, when to step back and question a "novelty" and when to bring in collaborators to interpret a confusing result. The end goal may be greatly accelerated, self-driving experiments, but the destination is not science without people. It is science where people spend less time repeating steps and more time asking better questions.
UT-Battelle manages ORNL for the DOE's Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science. - Scott Gibson