A first-of-its-kind study led by researchers at the Centre for Addiction and Mental Health (CAMH) has found that artificial intelligence (AI) models used to predict aggressive incidents in acute psychiatric care can reinforce and amplify existing social and structural inequities by overestimating the likelihood of aggression among already marginalized groups. The findings, recently published in npj Mental Health Research, underscore the importance of careful evaluation to ensure AI tools don't perpetuate harm in clinical settings but instead promote more equitable care.
"While the fairness of clinical AI tools has been evaluated in other areas, this study highlights a critical gap in mental healthcare, where the assessments used to train AI models are often based on subjective observations shaped by underlying social and structural biases," says Dr. Marta Maslej, Staff Scientist at the Krembil Centre for Neuroinformatics (KCNI) and senior co-author of the study. "If fairness is not built in, the clinical use of AI models can lead to significant distress, loss of trust, and even precipitate aggressive incidents that would not otherwise have occurred. There is a clear need to develop AI applications that centre and promote equity."
Findings highlight the importance of fairness analysis
Several healthcare systems in the Netherlands, Switzerland, China, the US, and Canada have assessed or are considering the use of AI models to predict aggressive or violent behaviour to enable earlier intervention and targeted de-escalation. However, little research has examined whether these tools perform equitably across patient populations—particularly in psychiatry, where social and structural factors strongly shape care experiences.
To address this gap, the research team trained a machine learning model (a form of AI) on electronic health records from more than 17,000 CAMH inpatients and examined how prediction errors varied across intersecting social and demographic factors, including race, gender, and social context. The model showed clear bias: false positive rates were higher for Black and Middle Eastern individuals, men, patients admitted to emergency care by police, and those in unstable or supportive forms of housing. These findings suggest that the model may disproportionately flag already over-surveilled or structurally disadvantaged groups as high risk, potentially shaping clinical decisions in ways that compound inequities.
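The kind of audit described above can be illustrated with a minimal sketch: compare false positive rates (the share of patients who were never aggressive but were flagged as high risk) across demographic groups. This is not the authors' code, and the data below are synthetic; it only shows the shape of a per-group false-positive-rate comparison.

```python
# Minimal illustrative sketch of a subgroup false-positive-rate audit.
# Data are synthetic; the study's actual model and records are not reproduced.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): how often patients with no aggressive
    incident (true label 0) are incorrectly flagged as high risk."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_by_group(records):
    """Split (true, predicted, group) triples by group, compute each FPR."""
    groups = {}
    for t, p, g in records:
        groups.setdefault(g, ([], []))
        groups[g][0].append(t)
        groups[g][1].append(p)
    return {g: false_positive_rate(ts, ps) for g, (ts, ps) in groups.items()}

# Synthetic example: (true label, predicted label, group)
records = [
    (0, 1, "A"), (0, 0, "A"), (0, 0, "A"), (1, 1, "A"),
    (0, 1, "B"), (0, 1, "B"), (0, 0, "B"), (1, 1, "B"),
]
print(fpr_by_group(records))  # group B is falsely flagged twice as often as A
```

A gap like the one in this toy output (an FPR of about 0.67 for group B versus 0.33 for group A) is the signature of the disparity the study reports: the model's errors are not evenly distributed, and the excess falls on particular groups.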
Advancing equitable AI in mental healthcare
The findings highlight that fairness is not a secondary consideration, but a core requirement for the safe implementation of AI in psychiatric settings. The study is part of CAMH's broader commitment to leading the responsible and patient-centred implementation of AI in mental healthcare that is grounded in ethics, transparency, and trust.
As part of this work, the KCNI Predictive Care Lab, co-led by Drs. Laura Sikstrom and Marta Maslej, is advancing research to better understand and address the real-world impacts of AI in mental healthcare. Leveraging an award-winning computational-ethnographic approach, the lab aims to identify and address potential harms while designing AI systems that promote equity and improve outcomes for individuals and communities. Building directly on findings from this study, the team recently secured Canadian Institutes of Health Research (CIHR) funding to co-design a next-generation AI tool called FARE+, designed to identify the drivers of biased predictions and inform strategies to mitigate them—advancing more equitable and clinically meaningful risk assessment.
"There is potential to use AI to redress historical and ongoing inequities in our health system by moving away from binary risk prediction to more patient-centred tools," says Dr. Laura Sikstrom. "By shifting from individual risk prediction to systemic bias detection, this research advances a new paradigm for AI in mental healthcare—one that prioritizes fairness, health equity, and the well-being of both patients and staff."
The study was led by Yifan Wang, former research trainee at KCNI and current medical student at the University of Ottawa, in collaboration with senior researchers at KCNI, and was supported by an SSHRC Insight Development Grant and a Google Award for Inclusion Research.
About the Centre for Addiction and Mental Health (CAMH)