AMES, Iowa - From driving cars to flying drones, as autonomous robots take on more responsibility, they also face more human-like dilemmas - including what to do when rules collide.
For a self-driving vehicle, this conundrum might pop up when a pedestrian suddenly steps off a curb and into its path. Swerving to avoid the pedestrian means briefly veering over the road's clearly marked center line. Is this a justifiable infraction? What if it leads to a collision with an oncoming car?
Quick look
Autonomous robots can follow rules - but what happens when the rules conflict? Iowa State researchers have developed a new "rulebooks" framework that helps robots make safer, more transparent judgment calls when perfection isn't possible.
Similarly, a drone might need to decide whether to fly through a narrow gap between two buildings or take the long way around to reach its destination. Neither option is perfect, but can the drone weigh the different risks that each path presents?
Tichakorn Wongpiromsarn, associate professor of computer science at Iowa State University, said these everyday scenarios reflect a growing reality: autonomous systems must make judgment calls, not just follow the rules.
"Robots are increasingly expected to operate without human intervention in situations where some rules may have to be bent," Wongpiromsarn said. "What's been missing is a principled way to justify these decisions."
This gap is what motivated Wongpiromsarn and fellow researchers Konstantin Slutsky, assistant professor of mathematics at Iowa State, and Emilio Frazzoli, professor of dynamic systems and control at ETH Zürich, to develop a new framework that helps autonomous systems make these decisions in a way that's transparent, predictable and defensible.
In a series of publications that culminated in a study recently published in IEEE Transactions on Robotics, Wongpiromsarn, Slutsky and Frazzoli introduce a new formal system - known as "rulebooks" - designed to help autonomous systems rank and reconcile competing goals.
Addressing common flaws
In robotics, Wongpiromsarn said, there's concern that today's autonomous systems are often optimized using a single mathematical cost function that blends all goals - such as safety, legality, efficiency and passenger comfort - into one score using weighted trade-offs.
How does this work? Simply put, engineers give each goal a weight: a number expressing how important that goal is relative to the others. A robot then calculates a total score for every possible action and picks the one with the best score.
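To make the idea concrete, here is a minimal sketch of such a weighted-sum planner. The goal names, weights and per-action costs are hypothetical illustrations, not values from the study:

```python
# Minimal sketch of single-cost-function planning. All names and
# numbers below are hypothetical illustrations, not from the study.

# Per-goal costs for each candidate action (lower is better).
candidates = {
    "brake_hard":  {"safety": 0.1, "legality": 0.0, "efficiency": 0.9, "comfort": 0.8},
    "swerve_left": {"safety": 0.3, "legality": 0.6, "efficiency": 0.2, "comfort": 0.4},
    "keep_course": {"safety": 0.9, "legality": 0.0, "efficiency": 0.0, "comfort": 0.0},
}

# Engineer-chosen weights: each goal's importance relative to the others.
weights = {"safety": 10.0, "legality": 3.0, "efficiency": 1.0, "comfort": 0.5}

def total_cost(costs):
    """Blend all goals into one score using weighted trade-offs."""
    return sum(weights[goal] * c for goal, c in costs.items())

# The planner picks the single action with the best blended score.
best = min(candidates, key=lambda a: total_cost(candidates[a]))
print(best, round(total_cost(candidates[best]), 2))  # brake_hard 2.3
```

With these made-up numbers the planner brakes; raise the efficiency weight far enough, though, and the same code will eventually prefer the unsafe "keep_course" option. That is exactly the failure mode described next.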
It's an approach that works well, Wongpiromsarn said - until it doesn't.
"The problem is that this approach treats all goals as if they can be balanced against each other, even when they shouldn't be," Wongpiromsarn said.
For example, if "efficiency" is weighted too high, the robot might drive too aggressively. And while engineers can adjust the weight to make the robot behave more cautiously, that doesn't fix the underlying problem, Wongpiromsarn explained.
"In this scenario, safety is being treated as just another factor to trade off," she said. "If safety truly comes first, you can't capture that with a single weight. Safety shouldn't be balanced against other goals; it should be a hard limit that the system never crosses."
Another problem is that this trade-off is hidden inside the system: "Because it's all blended into one number, it's difficult to see why the robot chose what it did or whether the priorities were balanced correctly," Wongpiromsarn said.
Wongpiromsarn said designers may also divide system goals into "hard" and "soft" constraints, with "hard" constraints taking priority no matter the cost. But this practice, she noted, exposes another basic flaw: what should a system do when a "hard" constraint - such as preventing harm - simply can't be satisfied?
For example, in the earlier scenario in which a pedestrian suddenly steps in front of a self-driving car, the vehicle is left with two choices: attempt to brake and potentially hit the person, or swerve to avoid the person and potentially collide with an oncoming car. Here, the safety constraint is impossible to fulfill. A hard-versus-soft framework offers no guidance - it can only declare the situation unsolvable, even though the vehicle must still act, Wongpiromsarn said.
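The dead end is easy to see in code. In this hypothetical sketch (the risk predicate and action names are made-up stand-ins for the pedestrian scenario), the hard-constraint filter simply comes back empty:

```python
# Hypothetical hard/soft-constraint planner in the pedestrian dilemma.
# Neither action can satisfy the hard "no risk of harm" constraint.

candidates = ["brake_hard", "swerve_left"]

# Stand-in predicate: in this dilemma, no action is risk-free.
RISK_FREE = {"brake_hard": False, "swerve_left": False}

feasible = [a for a in candidates if RISK_FREE[a]]
if not feasible:
    # The framework's only answer is "unsolvable"; it offers no ranking
    # of the remaining imperfect options, yet the car must still act.
    raise RuntimeError("Hard constraint unsatisfiable: no feasible action")
```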
New rulebooks framework uses rankings, not weights
Wongpiromsarn said the research team's new rulebooks framework avoids these issues by ranking goals instead of blending them together.
"In our framework, each rule represents a specific goal - avoiding collisions, following traffic laws and so on - and the system clearly defines which rules come first, which are tied and which can't be directly compared," she said.
Ultimately, this gives autonomous systems a principled way to compare unavoidable violations and choose the least harmful option, Wongpiromsarn said.
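A minimal sketch of the ranking idea: assuming each rule scores how badly an action violates it and rules are listed in strict priority order (the published framework is more general, allowing ties and incomparable rules), comparison becomes lexicographic:

```python
# Sketch of ranking rules instead of weighting them. Assumes a strict
# priority order; the actual rulebooks formalism is more general. All
# rule names and violation scores are hypothetical.

# Rules from most to least important; each maps an action to a
# violation score (0.0 means fully satisfied).
rulebook = [
    ("avoid_collision", {"brake_hard": 0.7, "swerve_left": 0.2}),
    ("stay_in_lane",    {"brake_hard": 0.0, "swerve_left": 1.0}),
    ("ride_comfort",    {"brake_hard": 0.8, "swerve_left": 0.4}),
]

def violations(action):
    """Violation vector in priority order."""
    return tuple(scores[action] for _, scores in rulebook)

# Python tuples compare lexicographically, so a lower-priority rule can
# only break ties left by every higher-priority rule. Here swerving wins
# on collision avoidance despite badly violating "stay_in_lane".
best = min(["brake_hard", "swerve_left"], key=violations)
print(best, violations(best))  # swerve_left (0.2, 1.0, 0.4)
```

No weight on comfort or lane-keeping, however large, can override the collision rule here; that is the hard-limit behavior a single blended score cannot guarantee.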
"This approach lets robots behave more like people," Wongpiromsarn said, noting the importance of creating frameworks that reflect "how people actually reason about what's right and wrong, safe and unsafe, and acceptable and unacceptable."
"People typically follow the most important rules first and only consider lower‑priority goals once the critical ones are met or proven impossible," she said.
Slutsky said the researchers' rulebooks structure also allows for gradual specification of priorities.
"This means you don't have to decide all of a robot's priorities at once," he said. "Some base priorities can be set by law, and then company building the robot can add more priorities later - as long as they stay consistent with the base priorities."
For example, if a law said "avoid harming humans or property" is the top priority for self-driving cars, that law would be a non-negotiable "must" for manufacturers. However, that same law doesn't specify whether "stay in your lane" is more or less important than "stay away from the curb," Slutsky said, which "allows manufacturers to choose how to rank those two goals - as long as both have lower priority than 'avoid harming humans or property.'"
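One way this gradual specification might look in code; the consistency check below is a deliberately simplified stand-in, and the rule names are illustrative:

```python
# Hypothetical sketch of gradually specified priorities. Each pair
# (a, b) means "rule a outranks rule b".

# Base priorities, e.g. fixed by law: avoid_harm outranks everything.
base_order = {
    ("avoid_harm", "stay_in_lane"),
    ("avoid_harm", "keep_off_curb"),
}

# A manufacturer's refinement: rank the two lower rules between themselves.
refinement = {("stay_in_lane", "keep_off_curb")}

def consistent(base, extra):
    """Simplified check: reject any added ordering that directly reverses
    one already in the base. (A complete check would also rule out cycles
    introduced through transitivity.)"""
    return all((b, a) not in base for (a, b) in extra)

assert consistent(base_order, refinement)
combined_order = base_order | refinement
```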
"Our hope is that this approach supports compliance without over-restricting," Slutsky said. "Everyone follows the same core rules, but companies still have the freedom to innovate and design their own behavior."
Why this matters right now
Autonomous robots already face situations where it's impossible to follow every rule, and regulators recognize this.
"With the rulebooks framework, we're not computing just one 'best' action," Wongpiromsarn said.
"We're identifying all actions that are optimal under a prioritized set of rules."
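In code terms, the planner returns a set rather than a single argmin. Continuing the hypothetical violation-vector sketch from above (values made up, and the strict ordering again simplifies the framework's more general structure):

```python
# Sketch: collect every action whose violation vector ties for best,
# instead of returning one arbitrary winner.

violations = {
    "brake_hard":  (0.2, 0.0, 0.8),
    "swerve_soft": (0.2, 0.0, 0.8),  # ties with brake_hard on every rule
    "keep_course": (0.9, 0.0, 0.0),
}

best_vector = min(violations.values())
optimal_set = {a for a, v in violations.items() if v == best_vector}
print(optimal_set)  # {'brake_hard', 'swerve_soft'}
```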
This difference, she said, makes it possible for engineers, regulators and even courts to ask a crucial question: Did the robot behave in line with the rules we said mattered most?
"That capability is especially important for post-incident analysis," Wongpiromsarn said. "After a crash, near-miss or regulatory review, understanding a machine's reasoning can be as important as the outcome itself."
The study also shows that rulebooks can serve as a common language for many different robot‑control methods. Logical rules ("if a pedestrian is present, always yield"), optimization goals ("minimize travel time") and constraint-based approaches can all be expressed within the same framework, eliminating the need to choose between competing mathematical philosophies or technical systems.
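Read one way, each style of requirement reduces to a violation score that can slot into the same ranked structure. These encodings are one plausible illustration, not the paper's formal definitions:

```python
# Illustrative only: three styles of requirement expressed as violation
# scores so they can coexist in one ranked rulebook. These encodings
# are a plausible reading, not the paper's formal definitions.

def yield_to_pedestrian(state):
    # Logical rule: binary violation (0.0 satisfied, 1.0 violated).
    return 0.0 if (not state["pedestrian"] or state["yielding"]) else 1.0

def travel_time(state):
    # Optimization goal: violation grows with the quantity to minimize.
    return state["eta_seconds"]

def lane_keeping(state):
    # Constraint-style rule: zero within the limit, rising beyond it.
    return max(0.0, state["lane_offset_m"] - 0.5)

state = {"pedestrian": True, "yielding": True,
         "eta_seconds": 42.0, "lane_offset_m": 0.7}
print([round(rule(state), 3)
       for rule in (yield_to_pedestrian, travel_time, lane_keeping)])
# [0.0, 42.0, 0.2]
```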
"In our tests, we showed that our algorithms can efficiently generate plans that respect complex priority structures and even outperform standard planning methods in situations where those methods break down," Wongpiromsarn said.
The implications also go beyond robots, the researchers said, noting that as artificial intelligence systems continue to take on more decision‑making in areas like transportation, coordination, health care and public safety, the need for systems that can justify their choices will only grow.
"The rulebooks concept offers a way to encode societal values, legal norms and organizational policies directly into machine decision-making," Wongpiromsarn said.
"It won't solve every ethical dilemma facing autonomous systems, but it may help ensure that when machines make hard choices, they do so according to priorities humans can understand and even hold them accountable for."