Why Relying On AI May Lead To Poor Decision Making

Lancaster

Guidance based on Artificial Intelligence (AI) may be uniquely placed to foster biases in humans, leading to less effective decision making, say researchers, who found that people with a positive view of AI may be at higher risk of being misled by AI tools.

The study, entitled "Examining Human Reliance on Artificial Intelligence in Decision Making", is published in Scientific Reports.

Lead author Dr Sophie Nightingale of Lancaster University said: "Understanding human reliance on AI is critical given controversial reports of AI inaccuracy and bias. Furthermore, the erroneous belief that using technology removes biases may lead to overreliance on AI."

The research team also included Joe Pearson, formerly of Lancaster University, Itiel Dror from Cognitive Consultants International (CCI-HQ) and Emma Jayes, Georgina Mason and Grace-Rose Whordley from the Defence Science and Technology Laboratory.

They asked 295 participants to judge the authenticity of 80 faces, half of which were real and half created by AI.

The task was accompanied by guidance text, supposedly from humans or from AI, predicting whether each face was real or fake. Examples of the guidance include "based on the predictions of 100 humans with expertise in facial recognition, this is a synthetic face" and "based on an algorithm trained to classify real and synthetic faces, the prediction is this is a real face".

However, unknown to the participants, the guidance was correct only half of the time: 20 of each type of face (real or synthetic) were presented alongside correct guidance and 20 alongside incorrect guidance. The faces appeared in a random order, and participants were unaware of the manipulation of real versus synthetic faces and correct versus incorrect guidance.

Following the task, participants completed a human trust scale and the General Attitudes towards Artificial Intelligence Scale (GAAIS), which measured their propensity to trust other people and their general attitudes towards AI, respectively.

The results revealed that participants with more positive attitudes toward AI were worse at discriminating between real and synthetic faces, but only when they received AI guidance, not human guidance.

Dr Nightingale said: "The public are increasingly being offered AI solutions to help them to navigate decision making in the real world. But our findings suggest that AI-driven support tools may be uniquely placed to engender biases in humans and may ultimately impair rather than elevate decision making.

"Specifically, more positive attitudes toward AI produced a reduced ability to discriminate between real and synthetic faces but this was only the case for those who received AI guidance, not for those who received human guidance. This highlights how people with a positive view of AI may be at higher risk of being misled by AI tools."

She added that more research is needed to understand precisely how humans use AI guidance in various contexts.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions, and conclusions expressed herein are solely those of the author(s).