Penn Engineers have developed a new way to use AI to solve inverse partial differential equations (PDEs), a particularly challenging class of mathematical problems with broad implications for understanding the natural world.
The advance, which the researchers call "Mollifier Layers," could benefit fields as varied as genetics and weather forecasting, because inverse PDEs help scientists work backward from observable patterns to infer the hidden dynamics that produced them.
"Solving an inverse problem is like looking at ripples in a pond and working backward to figure out where the pebble fell," says Vivek Shenoy , Eduardo D. Glandt President's Distinguished Professor in Materials Science and Engineering (MSE) and senior author of a study published in Transactions on Machine Learning Research (TMLR) , which will be presented at the Conference on Neural Information Processing Systems (NeurIPS 2026). "You can see the effects clearly, but the real challenge is inferring the hidden cause."
Rather than simply throwing more computing power at the problem, the researchers looked for a better mathematical approach. "Modern AI often advances by scaling up computation," says Vinayak Vinayak, a doctoral candidate in MSE and co-first author of the study. "But some scientific challenges require better mathematics, not just more compute."
Why Inverse PDEs Matter
At their core, differential equations are mathematical tools for describing change. They help scientists model how systems evolve, such as how a population grows, how heat dissipates or how a chemical reaction unfolds over time.
Partial differential equations, or PDEs, handle more complex systems by describing how things change across both space and time. They are used to model phenomena as varied as weather systems, heat flowing through a material and, in the case of the Shenoy Lab, the organization of DNA inside cells.
Inverse PDEs ask an even harder question: Instead of using known rules to predict how a system will behave, they help scientists work backward from what they can observe to infer the hidden forces, parameters or dynamics that produced it.
"For years, we've used these equations to study how chromatin, which is the folded state of DNA inside the nucleus, organizes itself inside living cells," says Shenoy. "But we kept running into the same problem: We could see the structures and model their formation, but we could not reliably infer the epigenetic processes driving this system, namely the chemical changes that help control which genes are active. The more we tried to optimize the existing approach, the clearer it became that the mathematics itself needed to change."
Rethinking How AI Does the Math
At the center of the problem is a deceptively simple mathematical idea: differentiation, or measuring how something changes. In everyday terms, a derivative can tell scientists how quickly a quantity is rising or falling. Higher-order derivatives go further, helping describe more complex patterns of change.
For years, AI systems tackling inverse PDE problems have typically computed those derivatives through a method called recursive automatic differentiation, which repeatedly calculates how quantities change through a neural network, the computational backbone of modern AI models.
But for higher-order systems, especially when the data is noisy, that process can become unstable and require vast amounts of computing power.
As the researchers describe it, recursive automatic differentiation is like repeatedly zooming in on the slope of a line: if the line is jagged, each additional step can actually magnify noise in the data and make the final result less reliable. In other words, the team realized, they needed a way to reliably smooth out the signal before measuring change.
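The instability the researchers describe can be sketched numerically: differentiating a noisy signal amplifies the noise, and each additional derivative amplifies it further. The toy signal and noise level below are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 2001)
dx = x[1] - x[0]

noisy = np.sin(x) + 0.01 * rng.standard_normal(x.size)  # 1% measurement noise

# First and second finite-difference derivatives of the noisy signal
d1 = np.gradient(noisy, dx)
d2 = np.gradient(d1, dx)

# Mean error against the exact derivatives of sin(x): cos(x) and -sin(x)
err1 = np.abs(d1 - np.cos(x)).mean()
err2 = np.abs(d2 + np.sin(x)).mean()
print(f"1st-derivative error: {err1:.2f}, 2nd-derivative error: {err2:.2f}")
```

Even though the noise is barely visible in the signal itself, the first derivative is already badly corrupted, and the second derivative is far worse, since each differencing step divides the noise by another small grid spacing.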
The Power of Mollifiers
In the 1940s, Kurt Otto Friedrichs, a German-American mathematician who later received the National Medal of Science, described "mollifiers," mathematical tools that "mollify" particularly noisy or jagged functions by smoothing out their sharpest features.
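The classical example is Friedrichs' bump function, which is infinitely smooth yet exactly zero outside a small interval; convolving a rough function with it irons out the sharp features. A small sketch, using an arbitrary square wave as the rough test function:

```python
import numpy as np

def mollifier(x, eps):
    """Friedrichs' bump: infinitely smooth, zero outside (-eps, eps)."""
    y = x / eps
    out = np.zeros_like(y)
    inside = np.abs(y) < 1
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

x = np.linspace(-1, 1, 401)
dx = x[1] - x[0]
jagged = np.sign(np.sin(8 * np.pi * x))  # a square wave: maximally rough

phi = mollifier(x, eps=0.1)
phi /= phi.sum() * dx  # normalize to unit area so convolution preserves scale

smoothed = np.convolve(jagged, phi, mode="same") * dx

# The mollified signal changes gently between neighboring points,
# while the original jumps by 2 at every transition
print(np.abs(np.diff(jagged)).max(), np.abs(np.diff(smoothed)).max())
```

The width parameter eps controls the trade-off: a wider bump smooths more aggressively but also blurs more of the genuine signal.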
By adapting this technique, the team was able to sidestep the issues caused by recursive automatic differentiation. "We initially assumed the issue had to do with the neural network's architecture," says Ananyae Kumar Bhartari, a graduate of Penn Engineering's Scientific Computing master's program and the paper's other co-first author. "But, after carefully adjusting the network, we eventually realized the bottleneck was recursive automatic differentiation itself."
Implementing a "mollifier layer," which smoothed the signal before measuring it, radically diminished both the noisiness and the power consumption scaling. "That let us solve these equations more reliably, without the same computational burden," says Bhartari.
Unraveling Chromatin
For the Shenoy Lab, one immediate application of mollifier layers is a better understanding of how tiny domains of chromatin, the mix of proteins and DNA that packages chromosomes for storage inside cells, regulate access to genetic material in the nucleus. The work builds on prior research showing how epigenetic reactions and physical interactions organize chromatin structure.
"These domains are just 100 nanometers in size," says Shenoy, "but because accessibility determines gene expression, and gene expression governs cell identity, function, aging and disease, these domains play a critical role in biology and health."
By inferring the epigenetic reaction rates that drive these changes — in other words, how quickly those gene-regulating chemical changes happen — mollifier layers could help researchers studying chromatin move from simply observing its structure using powerful microscopes to modeling how it changes over time and how those changes influence gene expression.
"If we can track how these reaction rates evolve during aging, cancer or development," adds Vinayak, "this creates the potential for new therapies: If reaction rates control chromatin organization and cell fate, then altering those rates could redirect cells to desired states."
Future Directions
Mollifier layers could also prove useful far beyond biology. Because many problems in materials science, fluid mechanics and other areas of scientific machine learning involve higher-order equations and noisy data, the framework could offer a more stable and computationally efficient way to infer hidden parameters across a wide range of systems.
The researchers hope that the same mathematical approach that helped uncover hidden reaction rates in chromatin could help scientists tackle similarly difficult inverse problems in many other fields. "Ultimately, the goal is to move from observing complex patterns to quantitatively uncovering the rules that generate them," says Shenoy. "If you understand the rules that govern a system, you now have the possibility of changing it."
This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and supported by National Cancer Institute (NCI) Award U54CA261694 (V.B.S.); National Science Foundation (NSF) Center for Engineering Mechanobiology (CEMB) Grant CMMI-154857 (V.B.S.); NSF Grant DMS-2347834 (V.B.S.); National Institute of Biomedical Imaging and Bioengineering (NIBIB) Awards R01EB017753 (V.B.S.) and R01EB030876 (V.B.S.) and National Institute of General Medical Sciences (NIGMS) Award R01GM155943 (V.B.S.).