Who's At Fault When AI Is In The Dock?

Australian Catholic University

Thomas More Law School senior lecturer Dr Juan Diaz-Granados has written a book addressing how to deal with risk, liability and compensation when AI is at fault.

Key points

  • Thomas More Law School senior lecturer Dr Juan Diaz-Granados has categorised AI systems into four zones to assess their potential risks and benefits.
  • Where those systems are at fault, ones developed for the common good should not be penalised so severely that innovation itself is punished.

If a self-driving car causes a rear-ender, who should be responsible for the damages: the driver, the car manufacturer or the software developer?

Australian Catholic University innovation law expert Juan Diaz-Granados has explored that hypothetical scenario and other conundrums in a book about how laws should evolve to deal with autonomous and adaptive artificial intelligence.

AI and Tort Liability: Rethinking, Recalibrating and Reallocating Risk and Responsibility proposes a novel framework that governs how systems that simulate the decision-making process should be treated in tort law, the arm of the legal system that deals with harm to people, property and reputation.

The book uses a dynamic lens capable of considering how laws could adapt to address AI's role in complex matters such as healthcare, warfare and financial decisions.

Its arguments support AI developed for the common good and suggest guardrails for instruments geared towards profit.

"We have witnessed how quickly AI moved from an "interesting" technology with "huge potential" to a truly disruptive, transformative system reshaping multiple spheres of life," Dr Granados-Diaz said. "Yet our core tort doctrines—such as fault, foreseeability and causation—remain largely unchanged.

"They were designed for human actors and linear supply chains and, as a result, must now be re-examined with care. Otherwise, we risk encouraging the wrong behaviours, deterring the right innovations and eroding public trust."

The proposed framework maps AI systems into a matrix of four zones – green, yellow, orange and red – according to their private and public risks and benefits, and prescribes tailored liability mechanisms for each.

In the case of an AI medical tool making a mistake, for instance, the law should be balanced to allow compensation without punishing life-saving innovation too harshly.

Where the benefits are mostly private and profit-driven, the book suggests stronger liability rules.
