
Users around the world have rushed to adopt artificial intelligence, especially in safety-critical fields, but a new study has revealed that the hype has prioritised technology for technology's sake over human-centred development.
The study examined the rise of eXplainable artificial intelligence (XAI) by reviewing research published about XAI's use in biomedical sensing and imagery between 2017 and 2023, a time identified by the authors as the "first hype cycle of XAI development".
XAI is a set of processes and methods which allow humans to understand the decisions and outputs made by machine learning algorithms.
The study was conducted by academics from Torrens University, Charles Darwin University (CDU), Australian Catholic University (ACU), University of Technology Sydney, and University of New England.
The study found the technique SHapley Additive exPlanations (SHAP) to be the most popular XAI method in biomedical sensing, chosen not for its effectiveness but for its accessibility and ease of application.
SHAP appeared in 77 per cent of the studies examined, with its applications concentrated in healthcare, civil engineering and energy.
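That accessibility is easy to see in practice: attaching SHAP to an already trained model typically takes only a few lines of code. The sketch below is illustrative only, assuming the open-source shap Python library and a scikit-learn model fitted on a toy dataset (generic stand-ins, not systems drawn from the reviewed studies); it shows a typical workflow of wrapping the model in an explainer, computing per-feature attributions, and plotting a summary.

```python
# Illustrative sketch of a typical SHAP workflow (not taken from the study):
# the model and dataset below are generic stand-ins.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit any off-the-shelf model on tabular data.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Wrap the trained model in an explainer and compute per-feature attributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one attribution per sample and feature

# The familiar summary (beeswarm) plot of feature attributions.
shap.summary_plot(shap_values, X)
```

The simplicity of this bolt-on pattern is, according to the study, a large part of why SHAP became the default choice, rather than any demonstrated advantage over alternative XAI methods.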
Co-author and CDU Adjunct Professor Niusha Shafiabady, who is also ACU's Head of Discipline for IT, said that while SHAP had been crucial in shaping this research field, it had also constrained research and progress.
"Very few studies compared SHAP to alternative XAI methods, and even fewer evaluated whether clinicians could interpret SHAP explanations correctly," Professor Shafiabady said.
"It set expectations for what an 'explanation' looks like, and it encouraged researchers to treat feature‑attribution as the default form of interpretability.
"But SHAP also carries assumptions that often break down in biomedical contexts, including independence assumptions, sensitivity to preprocessing, and instability across correlated features."
Professor Shafiabady said XAI was in a transitional phase and must move from "plausibility" to "accountability".
"The field has moved beyond the initial hype, but it has not yet reached methodological maturity," she said.
"The hype cycle revealed a deeper structural issue where XAI is treated as a technical add‑on rather than a human‑centred safety mechanism. Explanations must be evaluated not only for how they look, but for how they perform in real clinical workflows.
"This means developing domain‑specific XAI methods that respect the structure of biomedical data, building standardised evaluation protocols, and involving clinicians, technicians, and end‑users in the design and testing of explanations."
eXplainable Artificial Intelligence (XAI): A Systematic Review for Unveiling the Black Box Models and Their Relevance to Biomedical Imaging and Sensing was published in Advances in Biomedical Imaging and Sensing: Technologies, Applications, and Future Directions, a special issue of the international journal Sensors.