WMG launches new project to tackle responsible AI in financial services
Experts at Warwick Manufacturing Group (WMG), University of Warwick, are addressing the safe and responsible deployment of generative AI for financial advice after securing new UKFin+ funding from the Engineering and Physical Sciences Research Council (EPSRC).
The 12-month collaborative research project, led by WMG's Dr Anita Khadka and Professor Carsten Maple, comes amid growing concern that AI-generated misinformation, hallucinated advice, and regulatory uncertainty in high-risk financial contexts have the potential to mislead consumers and disrupt markets.
Working with experts from Keele University and Coventry University, the Responsible AI Systems for Ethical Finance (RAISE-Fin) project will investigate how Large Language Models, such as GPT-4, perform when applied to real-world financial contexts (e.g. investment advice, credit scoring and customer-facing automation). It will explore how hallucination risks can be detected, managed, and governed.
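The project's own detection methods are not described in this announcement, but the idea can be illustrated with a minimal sketch: one common style of hallucination check is to verify that specific claims in a model's answer are grounded in the source material it was given. The toy Python example below, with hypothetical function names and made-up inputs, simply flags any percentage or currency figure in an answer that does not appear in the supplied source documents.

```python
# Illustrative sketch only: a toy "grounding check" for LLM answers in a
# financial context. This is NOT the RAISE-Fin framework, just an example of
# flagging numeric claims that are unsupported by the provided source text.

import re


def extract_figures(text: str) -> set[str]:
    """Pull out percentages and currency amounts, e.g. '4.5%' or '£250'."""
    pattern = r"(?:£|\$|€)\s?\d[\d,]*(?:\.\d+)?|\d+(?:\.\d+)?\s?%"
    return set(re.findall(pattern, text))


def flag_unsupported_figures(answer: str, sources: list[str]) -> set[str]:
    """Return figures stated in the answer that appear in none of the sources."""
    supported: set[str] = set()
    for doc in sources:
        supported |= extract_figures(doc)
    return extract_figures(answer) - supported


if __name__ == "__main__":
    # Hypothetical product sheet and model answer for demonstration.
    sources = ["The advertised fixed rate for this product is 4.5% with a £95 fee."]
    answer = "This mortgage has a fixed rate of 4.5% and an arrangement fee of £250."
    print("Potentially hallucinated figures:", flag_unsupported_figures(answer, sources))
    # Prints: Potentially hallucinated figures: {'£250'}
```

A production evaluation framework would go far beyond simple string matching, but even this sketch shows the basic benchmarking pattern: pair realistic prompts with trusted reference material, then score model outputs against it.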
Dr Anita Khadka, Assistant Professor in Trustworthy and Responsible AI at WMG and Principal Investigator for RAISE-Fin, said: "As generative AI becomes increasingly embedded in financial services, ensuring the reliability, auditability, and regulatory alignment of these systems is critical. This project brings together technical, legal, and business expertise to address these urgent challenges."
The project is organised into three integrated work packages:
- WMG at the University of Warwick will lead the development of an AI evaluation and hallucination detection framework, benchmarking model outputs using realistic prompts from high-risk financial use cases, to support future assurance standards and model audits.
- Coventry University will conduct a comparative legal and regulatory analysis of AI governance across the UK, EU, and US, producing a policy response toolkit for aligning LLM deployment with financial regulation.
- Keele University will focus on translating AI risks into business process compliance tools, helping financial institutions map hallucination vulnerabilities within core workflows.
The ultimate goal is to produce clear guidance, best practices and tailored auditing tools for financial institutions and regulators, supporting the safe, ethical, and innovative deployment of AI in financial services.
Professor Carsten Maple, Professor of Cyber Systems Engineering at WMG, said: "This project is part of our programme of work in trustworthy AI being undertaken by the Secure Cyber Systems Research Group here at WMG. Our aim is to deliver world-class research with direct pathways to impact, allowing the confident and responsible design and deployment of AI."
Dr Dimitrios Kafteranis, Associate Professor at Coventry University's Research Centre for Resilient Business and Society, said: "While technology evolves quickly, the law usually fails to keep pace. While we welcome AI and new technologies, we must ensure that they adhere to legal standards, human rights, and ethical principles."
Dr Geetika Jain, Assistant Professor in Digital Transformation from Keele University, said: "The project provides a crucial framework for navigating the immense opportunities and risks of advanced AI in the FinTech sector. It moves beyond theoretical discussion to offer actionable policy pathways and risk mitigation strategies.
"The project outputs will directly inform both regulators and financial institutions, helping to shape a safer and more trustworthy financial services ecosystem grounded in responsible AI deployment."
The project is supported by FinTech West, the regional representative body for fintech in the South West, which will convene stakeholders from financial institutions, regulators, and technology providers.