Imperial Leads AI Innovation Ahead of Safety Summit

Imperial College London has published a statement on AI-driven innovation to coincide with the UK AI Safety Summit, taking place on November 1 and 2.

The Summit, which is focused on the capabilities and risks of 'Frontier AI', will bring together national governments and AI industry leaders to discuss the safe development of AI models.

In the lead-up to the Summit, Imperial has run a series of fringe events bringing together policymakers, industry and academia to discuss the best ways of enabling a new wave of innovation through the deployment of AI tools and systems.

The Secretary of State for Science, Innovation and Technology, Michelle Donelan MP, visited Imperial this week to see cutting-edge technologies that utilise AI in health. It has also been announced that Imperial will host a new £28m UKRI AI Centre for Doctoral Training, which will train a new generation of researchers to develop AI systems that address healthcare challenges.

These discussions have informed the following Statement on AI-Driven Innovation:

"AI technologies are ushering in a transformative digital era that can bring enormous benefits to societies and economies. Imperial welcomes the UK government's steps this week to categorise Frontier AI risks and build a new global consensus on AI safety to prevent potential harms of AI - from our everyday lives to existential threats.

Globally, governments are rightly considering ways in which to regulate AI. Such efforts must be carried out in close cooperation with all stakeholders: academia, civil society and industry, including the growing AI start-up community.

Only by combining deep technical and academic expertise together with that of policy makers and industrial R&D can we develop regulatory frameworks that enable the safe, productive and accelerated deployment of AI, providing both societal trust and investor confidence.

We will actively participate in this dialogue to co-design a regulatory framework that enables responsible and internationally leading AI innovation and does not restrictively 'police' such innovation efforts. The UK has very strong scientific and technical expertise in universities like Imperial, a vibrant AI innovation ecosystem, and a reputation for pragmatic and agile policy solutions. We can draw from this to make the UK a global AI leader.

Universities are dynamic environments that can accelerate innovation from foundational to applied AI. They offer spaces in which to test novel approaches to regulation in areas such as healthcare, energy and transport systems through dedicated sandboxes where regulatory tools can be refined in a safe, controlled environment. Universities are also critical to the future skills pipeline for AI – providing the knowledge and practical experience of cutting-edge AI science and technology for workplaces of the future.

A responsible and innovation-first approach to the development of AI technologies will both enhance our AI and AI-driven innovation ecosystem and help build the UK's AI leadership."

In support of the statement, a number of academics from across the Imperial AI Network have said:

Professor Mary Ryan, Vice-Provost (Research and Enterprise):

"We need to start thinking about regulation as 'enabling' rather than of 'policing' AI. The UK can draw on the deep scientific and technical expertise found in universities like Imperial to bridge between cutting-edge AI research, industry adoption and pragmatic, adaptable policy solutions. Only then can we combine safety, trust and security in a framework that adds to and enables the UK's AI-driven innovation ecosystem."

Professor Aldo Faisal, UKRI Turing AI Fellow and Director of the UKRI Centre in AI for Healthcare:

"Development towards AGI has rapidly accelerated, and we are on the verge of huge disruption. It is critical that we respond pro-actively and choose sensible rules for AI. Regulation must support the whole eco-system and not leave innovation to the few biggest players. Learning from our pragmatic approach in medical regulation, where the UK is a world-leader, is a more sensible approach than more restrictive regulatory frameworks, for example in the nuclear industry."

Professor Alessandra Russo, Professor in Applied Computational Logic, Deputy Director of UKRI CDT on Safe and Trusted AI:

"It is now critical to intensify the effort to improve the interpretability, explainability and robustness of AI solutions. Without such developments, we will not be able to achieve the level of trust and accountability required to effectively realise AI-driven innovation in our society. We need to continue to invest in developing all areas of AI (e.g., "statistical", "deep", "hybrid" and "symbolic") in a multi-disciplinary way. It is through the richness and integration of these diverse AI approaches and their socio- and economic impact that we can enable a safe and fair path to AI-driven innovation."

David Shrier, Professor of Practice, AI & Innovation, Imperial College Business School, Co-Director, Trusted AI Alliance:

"AI is one of the most transformative technologies humanity has ever invented. In harnessing the potential of AI, multiple stakeholders need to be brought together to ensure AI serves the best interests of all of us. As part of this, we all collectively can help shape policies that achieve the objectives of providing protection for individuals and markets, enhancing competition, and accelerating innovation. And we need widely-available tools, like open source code libraries, and reference frameworks, for people to actually apply these ideas and principles and practice."

Dr Mark Kennedy, Director of the Data Science Institute:

"Enabling innovation in AI is about creating positive ways for people to do exciting things with AI, and show they can be fully evidence the work they do with AI. Whether it is in academic work, or in a business context – our focus should be on using AI to enable greater levels of innovation."

Professor Michael Huth, Head of the Department of Computing at Imperial & Co-founder of xayn.com:

"The UK has a fantastic opportunity in shaping an adaptive and responsible approach to AI regulation; it can draw on its experience in agile and forward-looking regulation across multiple complex sectors including finance and pharma - and tap into the UK's exceptional talent base for AI innovation in universities. Doing so will make the UK a rich environment for the considerable private investments that responsible and internationally leading AI research, development, and deployment will require. I welcome that the UK government aims to regulate AI in sector-specific ways, in contrast to the EU whose AI regulation is framed in terms of cross-sector product safety – which is likely to stifle AI innovation within the EU and especially so in the start-up space."

Professor Rafa Calvo, Professor at the Dyson School of Engineering, Co-Lead Leverhulme Centre for Future Intelligence:

"Trust and enthusiasm for AI innovation is only possible if the needs and experiences of the people impacted by the technologies are deeply understood. This means involving employees, service users and the broader public into the design process. We need to consider stakeholder expertise as equally important to technical expertise."

Rossella Arcucci, Elected Speaker for the Imperial AI Network, Director of Research at the Data Science Institute:

"Academia has a key role in driving AI innovation, through leveraging the frontiers of AI developments, developing future AI leaders and the outreach of Big Tech.

The use of AI tools in areas including climate, weather and water observations, modelling and service creation is the dawn of a new era in climate science. AI has the potential to revolutionise existing approaches to complex weather and climate-related issues by enabling the processing of vast data volumes, knowledge extraction, and model enhancement.

This step-change in our capabilities to drive innovation, bolster resilience and provide improved services will help both governments and industry aspiring to meet the climate targets set by the Paris Agreement in 2015."

Dr Saira Ghafur, Lead for Digital Health, Institute of Global Health Innovation, Honorary Consultant in Respiratory Medicine, Imperial College Healthcare Trust:

"In terms of societal good and what AI can do, healthcare is one of those areas where we can really make a difference. To ensure maximal benefit, we need to ensure that any AI tools used in healthcare practice have a robust evidence base and undergo rigorous evaluation with a heavy focus on safety and are driven by responsible innovation.

The UK holds some of the best health datasets in the world, and these can help underpin the development of data-driven technologies that can provide the public with better, more efficient care."

Professor Sophia Yaliraki, Co-Director of Imperial-X and Professor of Theoretical Chemistry:

"Ensuring that graduates have the skills and knowledge to develop and deploy AI tools safely and effectively is essential for realising its benefits. At I-X and across Imperial, our education programmes provide students with the skills that they need to understand the developments taking place in AI and machine learning and to be able to continue this responsible innovation in the workplace."

Professor Washington Yotto Ochieng EBS FREng, Head of the Department of Civil and Environmental Engineering at Imperial and Interim Director of the Institute of Security Science and Technology:

"The world faces threats that are increasingly occurring simultaneously with devastating impacts on the built and natural environments. While the hyper-complexity involved in an increasingly connected world calls for further development and use of advanced analytics such as AI, this must be done in a safe and secure manner in a 'whole-of-society' community-led regulatory framework. We in the Department of Civil and Environmental Engineering (CEE) and the Institute of Security Science and Technology (ISST) are happy to be a part of the community."

Dr Yves-Alexandre de Montjoye, Associate Professor of Applied Mathematics and Computer Science:

"A key aspect to enable responsible AI innovation will be access to datasets, in particular medical datasets. It is crucial that these datasets are used anonymously when training AI systems. This had so far been an impossible task but, recently, AI tools have been shown to be capable of detecting that data was being misused and individual re-identified.

On the other side of this, transparency of the data used to train frontier AI models such as Large Language Models is crucial. Here too, AI tools can help unpack what data was used to train these large models, helping us understand what they learn and where they learn from."

AI at Imperial

Imperial's AI Network brings the combined power of nearly 300 researchers and 800 PhD/postdocs from across all our faculties and disciplines to accelerate the safe and productive development and deployment of AI.

Imperial-X (I-X) brings together 100 researchers and over 30 new research projects in AI and innovation. I-X combines Imperial's strengths in both foundational and applied AI to address interdisciplinary challenges and support novel industrial collaborations. The global significance of I-X has been recognised by its inclusion in the Schmidt Futures Fellowship programme.

The science of data, how it is collected, curated, owned, and utilised is critical to the development of Large Language Models (LLMs). Imperial's Data Science Institute provides foundational expertise in data science and engineering.

Imperial is developing and supporting the next generation of AI skills. We host the UKRI Centre for Doctoral Training in AI for Healthcare and partner in the UKRI Centre for Doctoral Training in Safe and Trusted AI. Imperial has also been announced as the location for a new UKRI AI Centre for Doctoral Training in Digital Healthcare.

Our AI ecosystem is further strengthened by our UK and global research collaborations such as the Leverhulme Centre for Future Intelligence and with industry leaders such as GSK, Thomson Reuters and Amazon Web Services.

Imperial is at the heart of global collaborations with industry, non-profit and research institutions, and is proud to have supported AI-based start-ups across a range of sectors including healthcare and drug discovery, energy systems, autonomous vehicles, financial services and education technologies.
