Global Team Unveils Framework to Address AI Trust

NC State

An international team of researchers has put forward a framework that it argues can be used to answer one of the biggest questions facing artificial intelligence (AI) technologies: can AI be trusted? The framework offers an organized approach for tackling a complex subject that draws on a wide variety of research disciplines.

"Companies, governments and everyday people are adopting AI tools to perform a wide variety of functions, but it's still not clear whether this is a technology that is actually trustworthy," says Roger Mayer, co-author of a paper on the work and a professor of leadership in North Carolina State University's Poole College of Management. "If we're going to use AI to make meaningful decisions, or even to inform important decisions, trust is a critical consideration. If we're going to pour money into AI applications, trust is a vital consideration. So developing an approach that will allow us to address the trustworthiness of AI in a meaningful way, both scientifically and practically, is a big step forward."

"Our global collaboration dives into the psychology, ethics and societal impact of trust in AI, proposing a transdisciplinary 'TrustNet Framework' to understand and bolster trust in AI to address grand challenges in areas as broad and urgent as misinformation, discrimination and warfare," says Frank Krueger, professor of systems social neuroscience in the School of Systems Biology at George Mason University.

The TrustNet Framework builds on ideas first developed at a TRUST workshop in Vienna, Austria, which drew an interdisciplinary team of researchers from around the world.

AI has the potential to enhance our lives in meaningful ways. For example, AI companions can offer emotional support in elder care, while AI tools can generate content and automate tasks that boost productivity. Yet risks persist. The researchers considered various scenarios: algorithms used in hiring may carry hidden biases, just as humans do. And when it comes to misinformation, how can we tell whether AI is any better than we are at distinguishing fact from fiction?

As AI increasingly mediates high-stakes decisions, trust and accountability must become central concerns. Ultimately, what's at stake is trust, not just in AI systems, but in the people and institutions designing, deploying and overseeing them. To develop the TrustNet Framework, the researchers analyzed 34,459 multi-, inter-, and transdisciplinary trust research articles. The analysis concluded that more transdisciplinary studies are needed on the subject.

The TrustNet Framework encourages research teams to consider three components:

  • Problem transformation, including connecting the "grand challenge" of whether we should trust AI with scientific knowledge;
  • Producing new knowledge, which involves clarifying the roles of researchers and other stakeholders and designing an integration concept that allows a challenge to be addressed from multiple perspectives simultaneously; and
  • Transdisciplinary integration, assessing results to generate useful outputs for society and for science, answering research questions in a way that both furthers our understanding and has practical utility.

"Future trust frameworks must consider not only how humans trust AI, but also how AI systems might evaluate and respond to human reliability, and how AI establishes forms of AI-to-AI trust in networked and automated environments," explains René Riedl, a co-author of the paper who was instrumental in convening the initial group of researchers at the TRUST workshop. Riedl is also head of the Digital Business Management master's program at the University of Applied Sciences Upper Austria & Johannes Kepler University Linz, Austria.

The paper, "A call for transdisciplinary trust research in the artificial intelligence era," is published open access in the Nature journal Humanities & Social Sciences Communications.

Members of the research team included representatives from the following organizations: George Mason University, University of Applied Sciences Upper Austria & Johannes Kepler University, McGill University, Stanford University, Drexel University, University of Central Florida, University of Texas, Vrije Universiteit Amsterdam, Veterans Administration Medical Center, NC State University, American University, Graz University of Technology, University of Oxford, and Tamagawa University.

"Those working in AI, policy, ethics, or tech design would find this [paper] a valuable read to understand and combat the emerging societal AI trust challenges utilizing a common framework," Krueger added. "Trust is the foundation of all healthy relationships - between people and technologies. AI will reshape society, but trust - between people, systems and institutions - ultimately must guide how we build and use it."

"I have studied trust for more than 30 years, and one of the key elements of this TrustNet Framework that I really like is the fact that it is transdisciplinary," Mayer says. "An interdisciplinary approach draws on researchers from multiple disciplines. But a transdisciplinary approach goes further, incorporating input from other stakeholders - such as users of AI, those affected by the use AI, and policymakers that have authority over specific AI applications, such as autonomous vehicles. Any approach that wants to provide a meaningful analysis of trust that is of high value to society needs to incorporate transdisciplinary perspectives - it's one reason I think this framework has so much potential."
