Foam Physics Unveils AI Training Parallels

University of Pennsylvania School of Engineering and Applied Science

Foams are everywhere: soap suds, shaving cream, whipped toppings and food emulsions like mayonnaise. For decades, scientists believed that foams behave like glass, their microscopic components trapped in static, disordered configurations.

Now, engineers at the University of Pennsylvania have found that foams actually flow ceaselessly inside while holding their external shape. Stranger still, from a mathematical perspective, this internal motion resembles the process of deep learning, the method typically used to train modern AI systems.

The discovery could hint that learning, in a broad mathematical sense, may be a common organizing principle across physical, biological and computational systems, and provide a conceptual foundation for future efforts to design adaptive materials. The insight could also shed new light on biological structures that continuously rearrange themselves, like the scaffolding in living cells.

In a paper in Proceedings of the National Academy of Sciences, the team describes using computer simulations to track the movement of bubbles in a wet foam. Rather than eventually staying put, the bubbles continued to meander through possible configurations. Mathematically speaking, the process mirrors how deep learning involves continually adjusting an AI system's parameters — the information that encodes what an AI "knows" — during training.

"Foams constantly reorganize themselves," says John C. Crocker, Professor in Chemical and Biomolecular Engineering (CBE) and the paper's co-senior author. "It's striking that foams and modern AI systems appear to follow the same mathematical principles. Understanding why that happens is still an open question, but it could reshape how we think about adaptive materials and even living systems."

The Difficulty of Characterizing Foams

In some ways, foams behave mechanically like solids: they more or less hold their shape and can rebound when pressed. At a microscopic level, however, foams are "two-phase" materials, made up of bubbles suspended in a liquid or solid. Because foams are relatively easy to create and observe yet exhibit complex mechanical behavior, they have long served as model systems for studying other crowded, dynamic materials, including living cells.

To describe foams mathematically, researchers long assumed that, like the atoms in glass, bubbles behave like boulders: in a landscape of possible positions that require more or less energy to maintain, the bubbles "fall" into certain locations, as if rolling downhill. This picture neatly explains why foams can seem solid. Once a bubble settles into a low-energy position, this theory suggests, the bubble should remain in place, like a boulder resting in a valley.

"When we actually looked at the data, the behavior of foams didn't match what the theory predicted," says Crocker. "We started seeing these discrepancies nearly 20 years ago, but we didn't yet have the mathematical tools to describe what was really happening." Resolving that mismatch required a different mathematical perspective, one capable of characterizing systems that continue to reorganize without ever settling into a single, fixed configuration.

A New Mathematical Lens

During training, modern AI systems continually adjust their parameters — the numerical values that encode what they "know." Much like bubbles in foams were once thought to descend into metaphorical valleys, searching for the positions that require the least energy to maintain, early approaches to AI training aimed to optimize systems as tightly as possible to their training data.

Deep learning accomplishes this using optimization algorithms related to the mathematical technique "gradient descent," which involves repeatedly nudging a system in the direction that most improves its performance. If an AI's internal representation of its training data were a landscape, the optimizers guide the system downhill, step by step, toward configurations that reduce error — those that best match the examples it has seen before.
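The downhill-stepping idea can be sketched in a few lines. This is a minimal, illustrative example, not code from the study: the one-dimensional "landscape" `f`, its derivative, the step size, and the step count are all assumptions chosen to keep the picture simple.

```python
def f(x):
    """A toy loss landscape: a single valley with its bottom at x = 3."""
    return (x - 3.0) ** 2

def grad_f(x):
    """Derivative of f, i.e. the local slope of the landscape."""
    return 2.0 * (x - 3.0)

def gradient_descent(x0, lr=0.1, steps=100):
    """Repeatedly nudge x in the direction that most reduces f."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)  # step downhill, against the slope
    return x

# Starting far from the valley floor, the updates walk x toward x = 3.
x_min = gradient_descent(x0=0.0)
```

Real optimizers work the same way, only over millions of parameters at once, with the gradient computed from the model's error on its training data.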

Over time, researchers realized that forcing systems into the deepest possible valleys was counterproductive. Models that optimized too precisely became brittle, unable to generalize beyond the data they had already seen. "The key insight was realizing that you don't actually want to push the system into the deepest possible valley," says Robert Riggleman, Professor in CBE and co-senior author of the new paper. "Keeping it in flatter parts of the landscape, where lots of solutions perform similarly well, turns out to be what allows these models to generalize."
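The flat-versus-sharp distinction can be illustrated with two toy valleys. Both have their bottom at the same place, but a small perturbation of the parameters, standing in for the shift between training data and unseen data, costs far more in the sharp one. The curvature values here are arbitrary assumptions for illustration, not quantities from the paper.

```python
def sharp_loss(x):
    """A deep, narrow valley: tiny parameter changes cause large errors."""
    return 100.0 * x ** 2

def flat_loss(x):
    """A broad, shallow valley: the same changes barely matter."""
    return 0.01 * x ** 2

# Both valleys bottom out at x = 0 with zero loss ...
eps = 0.1  # ... but nudge the parameters slightly,
# and the sharp valley penalizes that nudge 10,000 times more.
penalty_ratio = sharp_loss(eps) / flat_loss(eps)
```

Wandering within a broad, flat region, as the bubbles in the simulated foam do, means any nearby configuration performs about as well as the current one.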

When the Penn researchers looked again at their foam data through this lens, the parallel was hard to miss. Rather than settling into "deep" positions in this metaphorical landscape, bubbles in foams also remained in motion, much like the parameters in modern AI systems, continuously reorganizing within broad, flat regions with similar characteristics. The same mathematics that explains why deep learning works turned out to describe what foams had been doing all along.

Future Directions

The new paper raises as many questions as it answers, but in a field long thought to be settled, that may be the work's most important contribution.

By showing that, rather than being stuck in glass-like, stable configurations, the bubbles in foam constantly meander in ways that mirror how AI models learn, the work invites researchers to consider what other complex systems this mathematical lens might help clarify.

Crocker's team is now turning back to the system that initially motivated his interest in foams — the cytoskeleton, the microscopic scaffolding inside cells that plays a central role in supporting life. Much like the foams in this paper, the cytoskeleton must constantly rearrange itself while maintaining overall structure.

"Why the mathematics of deep learning accurately characterizes foams is a fascinating question," says Crocker. "It hints that these tools may be useful far outside of their original context, opening the door to entirely new lines of inquiry."

This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and supported by the National Science Foundation Division of Materials Research (1609525, 1720530).

Additional co-authors include Amruthesh Thirumalaiswamy and Clary Rodríguez-Cruz.
