Haozhe "Harry" Wang's electrical and computer engineering lab at Duke welcomed an unusual new lab member this fall: artificial intelligence.
Using publicly available AI foundation models such as OpenAI's ChatGPT and Meta's Segment Anything Model (SAM), Wang's team built ATOMIC (short for Autonomous Technology for Optical Microscopy & Intelligent Characterization)—an AI microscope platform that can analyze materials as accurately as a trained graduate student in a fraction of the time.
"The system we've built doesn't just follow instructions; it understands them," Wang said. "ATOMIC can assess a sample, make decisions on its own and produce results as well as a human expert."
Published on October 2 in the journal ACS Nano, the findings point to a new era of autonomous research, where AI systems work alongside humans to design experiments, run instruments and interpret data.
Wang's group studies two‑dimensional (2D) materials, crystals only one or a few atoms thick that are promising candidates for next-generation semiconductors, sensors and quantum devices. Their exceptional electrical properties and flexibility make them ideal for electronics, but fabrication defects can compromise these advantages. Determining how the layers stack and whether they contain microscopic defects requires laborious work and years of training.
"To characterize these materials, you usually need someone who understands every nuance of the microscope images," Wang said. "It takes graduate students months to years of high-level science classes and experience to get to that point."
To speed up the process, Wang's team linked an off‑the‑shelf optical microscope to ChatGPT, allowing the model to handle basic operations like moving the sample, focusing the image and adjusting light levels. Layered on top was SAM, an open‑source vision model designed to identify discrete objects, which, for materials samples, include defect-containing regions and pristine areas.
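The paper's actual control code is not reproduced here, but the wiring described above can be sketched as a dispatch loop: the language model emits a named command with arguments, and a controller routes it to the matching instrument function. Everything below is hypothetical, with mock functions standing in for the real microscope API:

```python
# Hypothetical sketch of an LLM-to-microscope bridge. The real ATOMIC
# platform pairs ChatGPT with motorized hardware; these mock functions
# stand in for the instrument's stage, focus and illumination controls.

def move_stage(x, y):
    return f"stage moved to ({x}, {y})"

def autofocus():
    return "focus adjusted"

def set_light(level):
    return f"illumination set to {level}%"

# Dispatch table: the model's structured output names a command, and
# the controller looks up and runs the matching instrument function.
COMMANDS = {"move_stage": move_stage, "autofocus": autofocus, "set_light": set_light}

def dispatch(call):
    """call: dict like {"name": "move_stage", "args": {"x": 10, "y": 5}}."""
    name, args = call["name"], call.get("args", {})
    if name not in COMMANDS:
        raise ValueError(f"unknown command: {name}")
    return COMMANDS[name](**args)
```

For example, `dispatch({"name": "autofocus"})` runs the focus routine and returns its status string; the whitelist keeps the model from invoking anything outside the approved command set.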
Together, the two AIs formed a powerful tool in the lab, a kind of virtual lab mate that could see, analyze and act on its own.
Turning general-purpose AI into a reliable scientific partner, however, required significant customization from the Wang lab. SAM could recognize regions within the microscopic images, yet it struggled with overlapping layers, a common issue in materials research. To overcome that, they added a topological correction algorithm to refine those regions, isolating single-layer areas from multilayer stacks.
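The published correction algorithm is not detailed here, but the underlying idea can be illustrated under a simplifying assumption: segmentation masks arrive as pixel sets, and a multilayer stack is often nested inside a larger flake mask. Carving thicker regions out of thinner ones leaves every pixel assigned to exactly one layer count. This is an illustrative sketch, not the paper's method:

```python
# Illustrative sketch (not the published algorithm): SAM can return
# overlapping masks, e.g. a multilayer stack nested inside a larger
# monolayer flake. Subtracting thicker regions from thinner ones makes
# the regions pairwise disjoint, isolating true single-layer areas.

def separate_layers(masks):
    """masks: dict mapping layer count -> set of (row, col) pixels.
    Returns a corrected copy in which higher layer counts take
    precedence, so the resulting regions no longer overlap."""
    corrected = {}
    claimed = set()
    # Walk from thickest to thinnest so multilayer stacks win overlaps.
    for layers in sorted(masks, reverse=True):
        corrected[layers] = masks[layers] - claimed
        claimed |= corrected[layers]
    return corrected
```

With a 4×4 flake mask and a two-pixel bilayer patch inside it, the corrected monolayer region is the flake minus the patch, and the two regions share no pixels.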
Finally, the team asked the system to sort the isolated regions by their optical characteristics, which ChatGPT could do autonomously.
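One simple way such a sorting step could work, assuming each isolated region is a set of pixels, is to rank regions by a basic optical statistic like mean intensity, since thicker 2D-material stacks typically shift the reflected contrast. The functions below are an illustrative sketch, not the system's actual classifier:

```python
# Illustrative sketch: rank isolated regions by mean pixel intensity,
# a stand-in for the richer optical characteristics ATOMIC uses.

def mean_intensity(image, region):
    """image: dict (row, col) -> intensity; region: set of pixels."""
    return sum(image[p] for p in region) / len(region)

def sort_regions(image, regions):
    """regions: dict label -> pixel set.
    Returns labels ordered from brightest to darkest region."""
    return sorted(regions, key=lambda r: mean_intensity(image, regions[r]),
                  reverse=True)
```

On a toy image where bare substrate pixels are bright and flake pixels are dark, `sort_regions` places the substrate label ahead of the flake label.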
The results were remarkable: Across a range of 2D materials, the AI microscope matched or outperformed human analysis, identifying layer regions and subtle defects with up to 99.4 percent accuracy. The system maintained this performance even with images captured under imperfect conditions, such as overexposure, poor focus or low light, and in some cases spotted imperfections invisible to the human eye.
"The model could detect grain boundaries at scales that humans can't easily see," said Jingyun "Jolene" Yang, a PhD student in Wang's lab and first author on the paper. "It's not magic, however. When we zoom in, ATOMIC can see on a pixel-by-pixel level, making it a great tool for our lab."
By locating and categorizing microscopic defects, the system helps Wang's group determine the number of layers in a 2D material and pinpoint pristine regions suitable for follow‑up studies. Those high‑quality areas can then be used for other research in Wang's lab, such as soft robotics and next-generation electronics.
Even more impressive, the system required no specialized training data. Traditional deep‑learning approaches need thousands of labeled images. Wang's "zero‑shot" method leveraged the pre‑existing intelligence of foundation models, trained on broad swaths of human knowledge, to adapt instantly.