As artificial intelligence (AI) takes on increasingly critical roles — from managing power grids to piloting autonomous vehicles — making sure these systems are safe has never been more important. But how can we be certain that the AI controlling them can be trusted?
A research team at the University of Waterloo is addressing this question using tools from applied mathematics and machine learning to rigorously check and verify the safety of AI-driven systems.
"Any time you're dealing with a dynamic system — something that changes over time, such as an autonomous vehicle or a power grid — you can mathematically model it using differential equations," said Dr. Jun Liu , professor of applied mathematics and Canada Research Chair in Hybrid Systems and Control.
To predict the long-term behaviour of these systems, scientists and engineers rely on a mathematical tool called a 'Lyapunov function.' Intuitively, this tool shows whether a system will naturally settle into a stable and safe state — much like a ball rolling to the bottom of a bowl and staying there. "Finding such a function, however, is often a notoriously difficult task," Liu said.
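For readers who want the textbook version, the idea fits in a few lines. The notation below is generic rather than drawn from the paper: the system is written as a differential equation, and a Lyapunov function V is any function satisfying three conditions.

```latex
% Standard textbook formulation (generic notation, not taken from the paper):
% the system evolves as a differential equation, and V certifies stability.
\[
\dot{x} = f(x), \qquad
V(0) = 0, \qquad
V(x) > 0 \;\; (x \neq 0), \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) < 0 \;\; (x \neq 0).
\]
```

If such a V exists, the "energy" it measures can only decrease along every trajectory, so the state is drawn toward the equilibrium, the mathematical counterpart of the ball settling at the bottom of the bowl.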
To tackle this challenge, Liu and his team turned to machine learning. They built a neural network that learns to satisfy the mathematical rules that determine whether a system remains stable and safe — the same rules engineers rely on to keep power grids and autonomous vehicles under control.
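To make the idea concrete, here is a minimal sketch, in PyTorch, of training a network to satisfy those rules for a toy damped pendulum. It is not the team's code: the dynamics, network size, sampling region, and loss weights are illustrative assumptions, and the paper's physics-informed, PDE-based formulation is more involved than this plain penalty loss.

```python
# Minimal illustrative sketch (not the authors' code): train a small network
# V(x) so that it decreases along trajectories of a toy damped pendulum.
import torch
import torch.nn as nn

def dynamics(x):
    # Toy system: x1' = x2, x2' = -sin(x1) - x2
    x1, x2 = x[:, 0:1], x[:, 1:2]
    return torch.cat([x2, -torch.sin(x1) - x2], dim=1)

class LyapunovNet(nn.Module):
    # Candidate V(x): nonnegative by construction, with V(0) = 0
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, x):
        # Subtracting the value at the origin pins V(0) = 0; the quadratic
        # term keeps V strictly positive away from the origin.
        zero = torch.zeros_like(x)
        return (self.net(x) - self.net(zero)) ** 2 \
            + 0.1 * (x ** 2).sum(dim=1, keepdim=True)

model = LyapunovNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Sample training states from the region of interest
    x = (torch.rand(256, 2) - 0.5) * 4.0
    x.requires_grad_(True)
    V = model(x)
    # Derivative of V along the dynamics: dV/dt = grad V(x) . f(x)
    grad_V = torch.autograd.grad(V.sum(), x, create_graph=True)[0]
    V_dot = (grad_V * dynamics(x)).sum(dim=1, keepdim=True)
    # Penalize sampled states where the decrease condition V_dot < 0 fails
    loss = torch.relu(V_dot + 0.01).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Training on sampled states only makes the candidate plausible, not proven, which is exactly why the separate verification step described next matters.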
The researchers then used a separate reasoning system based on rigorous computation and mathematical logic to verify that these neural networks indeed satisfy the conditions required for safety guarantees. Together, these tools provide a way to confirm that AI controllers can safely and reliably manage complex systems.
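The article does not spell out the verification machinery, but one standard rigorous-computation technique is interval arithmetic with branch and bound: every quantity is enclosed in an interval guaranteed to contain its true value, so a negative upper bound is a proof rather than a test. The sketch below applies the technique to a hand-written quadratic Lyapunov candidate for a simple linear system rather than to a neural network; the system, candidate function, region, and tolerance are illustrative assumptions.

```python
# Illustrative sketch of rigorous checking via interval arithmetic with
# branch and bound. Candidate V = x1^2 + x1*x2 + x2^2 for the linear system
# x1' = x2, x2' = -x1 - x2, checked on [-1, 1]^2 minus a small square ("hole")
# around the origin, where V_dot is exactly zero.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __neg__(self):
        return Interval(-self.hi, -self.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

def v_dot(x1, x2):
    # grad V . f = (2*x1 + x2)*x2 + (x1 + 2*x2)*(-x1 - x2)
    two = Interval(2.0, 2.0)
    return (two * x1 + x2) * x2 + (x1 + two * x2) * (-(x1 + x2))

def proves_decrease(box, hole, tol=1e-3):
    # Returns True only if V_dot < 0 is certified everywhere on box outside hole
    (a1, b1), (a2, b2) = box
    (h1, h2), (h3, h4) = hole
    if h1 <= a1 and b1 <= h2 and h3 <= a2 and b2 <= h4:
        return True  # sub-box lies entirely inside the excluded neighbourhood
    if v_dot(Interval(a1, b1), Interval(a2, b2)).hi < 0:
        return True  # the interval bound alone proves negativity here
    if max(b1 - a1, b2 - a2) < tol:
        return False  # inconclusive at the finest allowed resolution
    m1, m2 = (a1 + b1) / 2, (a2 + b2) / 2
    quads = [((a1, m1), (a2, m2)), ((m1, b1), (a2, m2)),
             ((a1, m1), (m2, b2)), ((m1, b1), (m2, b2))]
    return all(proves_decrease(q, hole, tol) for q in quads)

print(proves_decrease(box=((-1, 1), (-1, 1)),
                      hole=((-0.1, 0.1), (-0.1, 0.1))))  # prints True if certified
```

Because each interval bound encloses the true range of the expression, a successful check covers every state in the region, not just sampled ones, which is the difference between verification and testing.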
It may sound surprising to use one form of AI to check another, but Liu explains that AI is a broad field. In their work, neural networks — one common type of AI — learn mathematical proofs of safety and sometimes even design the controllers themselves, while a logic-based system — another form of reasoning AI — verifies that those proofs are correct. Both are tasks researchers once had to do manually.
"To be clear, no one is attempting to create factories or systems run entirely by AI without any human input," Liu said. "There are areas such as ethics that will always be guided by human judgment. What these AI controllers and proof assistants are doing is taking over computation-intensive tasks, like deciding how to deploy power in a grid or constructing tedious mathematical proofs, that will be able to free up humans for higher-level decisions."
The framework has been tested on several challenging control problems, where it matched or outperformed traditional approaches. Liu's team is now developing it into an open-source toolbox and exploring industry collaborations to advance safe and trustworthy AI for physical systems.
This work is part of Waterloo's broader commitment to advancing safe and trustworthy AI, supported by initiatives such as the TRuST Scholarly Network and recent federal efforts to promote responsible and transparent AI.
The research, "Physics-informed neural network Lyapunov functions: PDE characterization, learning, and verification," appears in Automatica.