Understanding the influence of decision-making AI

Monash University

Monash University researchers have developed a framework to help provide greater transparency into the decision-making processes of the artificial intelligence (AI) that controls systems such as driverless cars and medical equipment.

Artificial intelligence has a considerable impact on people's everyday lives. From autonomous driving to telehealth, AI algorithms provide the core technology for autonomous decision-making platforms that influence human behaviour and decision-making.
With so much reliance on AI technology throughout our daily lives, it's essential to understand what information is powering AI-based decision systems and in turn influencing the technology we have come to rely on.
Researchers at Monash University's Faculty of Information Technology (IT) have published a research paper which explores explainability in AI and how the required information can be more transparent to humans.
Associate Professor Carsten Rudolph, from the Department of Software Systems and Cybersecurity in the Faculty of IT, explains why an in-depth account of the knowledge behind AI-based decision-support systems is vital.
"Explainable AI seeks to provide greater transparency into how algorithms make decisions. The reason this is so important is because the decisions which AI systems make could ultimately lead to an incorrect medical diagnosis or a pedestrian being struck by a driverless car. Even if the overall system might be working perfectly, it is essential to know the very root of performing the decision, especially when the decision or prediction is crucial," said Associate Professor Carsten Rudolph from the Faculty of IT.
The team of researchers propose a universal framework that can demonstrate how the information fed into an AI system is secure and authentic, thereby providing end-users with sufficient information on AI decision-making systems.
"We are ultimately trying to achieve better explainability, both from humans and security-aware aspects of AI systems. In our paper, we suggest the adoption of a human-centric perspective, which is all about looking at and understanding the data used for decision generation in AI-based systems," said PhD Researcher, Fariha Jaigirdar.
This project is a collaboration between Associate Professor Carsten Rudolph and Fariha Jaigirdar from the Department of Software Systems and Cybersecurity, Associate Professor Gillian Oliver from the Department of Human Centered Computing, Professor of Practice in Digital Health Chris Bain, and Professor David Watts from La Trobe University Law School.
Researchers are advocating for the relationship between input data, training data, and resulting classifications, as well as the origins of these various inputs, to be more obvious and transparent to the human user.
For the user, explainable AI is just one element in the complete provenance track of the data. Systems need to provide an accessible way to present this context information in a way that empowers users to make well-informed decisions in their interactions with the systems.
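The published paper does not include code, but the idea of a provenance track can be illustrated with a minimal sketch: each prediction carries a record of where its input came from, which training data and model produced it, and an integrity fingerprint so tampering with that context can be detected. The field names, example values and hashing scheme below are illustrative assumptions, not the researchers' framework.

```python
# Minimal sketch of attaching a provenance record to an AI prediction.
# All record fields and values here are hypothetical, for illustration only.
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass
class ProvenanceRecord:
    input_source: str       # where the input data came from
    training_dataset: str   # which training data produced the model
    model_version: str      # which model version made the prediction
    prediction: str         # the resulting classification shown to the user


def fingerprint(record: ProvenanceRecord) -> str:
    """Hash the record so later tampering with the provenance can be detected."""
    payload = json.dumps(asdict(record), sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


# Example: a prediction bundled with the context a user would need to judge it.
record = ProvenanceRecord(
    input_source="vehicle front camera, frame 10482",
    training_dataset="pedestrian-detection-v3",
    model_version="detector-2.1.0",
    prediction="pedestrian",
)
print(asdict(record))
print("integrity fingerprint:", fingerprint(record))
```

In a sketch like this, the fingerprint would be stored or signed alongside the prediction, so that the provenance shown to the user can later be checked against what the system actually recorded.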
Maintaining the transparency and explainability of AI-based systems is especially crucial when it comes to the security and legal aspects of the data required in critical scenarios. The research paper and suggested framework are intended to identify essential points of consideration for the future design of AI decision-making systems.