The UK government hailed the recent US state visit as a landmark for the economy. A record £150 billion of inward investment was announced, including £31 billion targeted at artificial intelligence (AI) development.
Author
- Simon Thorne
Senior Lecturer in Computing and Information Systems, Cardiff Metropolitan University
That £31 billion encompasses work on large language models (LLMs), the technology behind AI chatbots such as ChatGPT and other generative AI models. It will also cover the supercomputing infrastructure needed to deliver these innovations.
Microsoft alone pledged US$30 billion (about £22 billion) over four years, half on capital expenditure such as new data centres, the rest on operations, research and sales. Tech company Nvidia has also promised £11 billion, with plans to deploy 120,000 of its Blackwell graphics processing units (chips designed to accelerate graphics and image processing that have become the workhorses of AI computation) in UK projects. The US AI cloud computing company CoreWeave is building a £1.5 billion AI data centre in Scotland.
The political narrative is that the UK is becoming a global hub for AI. Yet behind the rhetoric lies a harder question: what kind of AI future do we want? Is it one where prosperity is broadly shared among the public, or one where private firms and foreign interests hold the levers of power, while the technology itself stagnates and spreads misinformation?
LLMs, the technology powering generative AI models such as ChatGPT or Gemini, appear to be reaching their technical limits. They are built on an underlying design called the "transformer architecture", which excels at producing fluent text but has persistent problems with reasoning and factual accuracy. Since ChatGPT, based on GPT-3.5, arrived in 2022, AI developers have scaled up models with more data and computing power, but gains have slowed and costs have soared.
This progress has also failed to solve their key problem: persistent "hallucinations", which remain a significant barrier to organisations and individuals adopting LLMs. OpenAI admits that hallucinations - confident but false outputs from AI systems - are a product of how these systems predict words.
Forcing models to admit uncertainty in their output could cut hallucinations, but it reduces usable outputs by around 30%.
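To make that trade-off concrete, here is a minimal sketch of uncertainty-based filtering. It assumes a hypothetical model API that returns a self-reported confidence score alongside each answer; real systems expose uncertainty differently, so treat this as an illustration rather than a recipe.

```python
# Minimal sketch of uncertainty-based filtering (hypothetical model API).
# Answers are only passed on when the model's self-reported confidence
# clears a threshold; everything else becomes an explicit abstention.

from dataclasses import dataclass

@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed self-reported score between 0.0 and 1.0

def filter_output(output: ModelOutput, threshold: float = 0.7) -> str:
    """Return the answer only if the model is sufficiently confident."""
    if output.confidence >= threshold:
        return output.text
    return "I don't know."  # abstain rather than risk a hallucination

# Two hypothetical outputs: one confident, one likely fabricated.
answers = [
    ModelOutput("The investment package totals £150 billion.", 0.92),
    ModelOutput("The Scottish data centre opens on 3 March 2026.", 0.41),
]
for answer in answers:
    print(filter_output(answer))
```

Raising the threshold suppresses more confident errors, but it also discards more answers, which is why filtering of this kind reduces usable output rather than simply removing hallucinations.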
Hallucinations may be seen as a necessary downside by OpenAI and other providers, but research also highlights the structural weaknesses that produce them. LLMs are not socialised beings but statistical engines, incapable of distinguishing fact from fabrication.
My own work has shown that users place unwarranted trust in LLMs, assuming human-like reasoning where none exists. Simple logic tests expose these weaknesses, and the pattern is consistent: AI often underdelivers and requires human oversight to verify outputs.
This year, the Grok chatbot, developed by Elon Musk's company xAI, made antisemitic remarks and praised Adolf Hitler in responses on X. xAI apologised and removed the posts, attributing some of the behaviour to an "unauthorised code path" or system update that made the model overly responsive to extremist-tainted user inputs.
In its apology published on X, xAI said: "Our intent for Grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause was an update to a code path upstream of the Grok bot."
The company said the update made Grok "susceptible to existing X user posts; including when such posts contained extremist views".
They added: "We thank all of the X users who provided feedback to identify the abuse of grok functionality, helping us advance our mission of developing helpful and truth-seeking artificial intelligence."
Retrospective alignment
All LLMs require retrospective alignment, a process of adjusting their outputs after training, to ensure their responses do not drift into harmful, biased, or destabilising territory. This can mean filtering hate speech, blocking misinformation about vaccines, preventing the promotion of self-harm, or constraining political partisanship.
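As a simplified illustration of what adjusting outputs after training can look like in practice, the sketch below applies a rule-based policy filter to generated text. The categories, phrases and function names are illustrative placeholders, not any vendor's actual alignment pipeline; real systems rely on far more sophisticated techniques, such as reinforcement learning from human feedback.

```python
# Simplified sketch of post-generation policy filtering, one crude form of
# retrospective alignment: text is checked against rules after the model has
# produced it, and withheld if it breaches them. All categories and phrases
# below are illustrative placeholders, not a real moderation policy.

POLICY_RULES = {
    "vaccine_misinformation": ["vaccines cause autism"],
    "self_harm_promotion": ["how to harm yourself"],
    "hate_speech": ["<dehumanising phrase>"],  # placeholder only
}

def apply_policy(generated_text: str) -> str:
    """Return the model's text, or a refusal if it breaches a policy rule."""
    lowered = generated_text.lower()
    for category, phrases in POLICY_RULES.items():
        if any(phrase in lowered for phrase in phrases):
            return f"[Response withheld: flagged as {category}]"
    return generated_text

print(apply_policy("Some posts claim that vaccines cause autism."))
# -> [Response withheld: flagged as vaccine_misinformation]
```

Even in this toy form, the important point is visible: whoever writes the rules decides where the system's ethical boundaries sit, which is exactly the concern raised below.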
But unlike humans, whose ethical boundaries emerge through lived interaction and socialisation, AI models cannot self-regulate. Their alignment is imposed after the fact by those who control them.
This process is not guaranteed to be neutral, and we can never be sure who is actually calling the shots. Corporations, governments and powerful individuals may act as surrogate parents, embedding their own values and interests into the system's ethical boundaries. The danger is that instead of aligning the LLM to broadly acceptable norms, it could be aligned to promote undesirable extremist points of view.
In fact, malicious coherence, where AI systems are tuned to confidently repeat political narratives, may turn out to be a bigger risk than hallucinations. On X, Grok is already invoked as an arbiter of truth. It's very common to see: "Hey @Grok, is this true?" in the comments under posts. This ritual hands authority to a machine owned by one man.
The UK-US trade deal also gestures towards a broader range of machine intelligence applications, from autonomous vehicles and delivery drones to healthcare systems. Self-driving technology has been billed as imminent for more than a decade, yet it remains locked in extended pilot phases, with Tesla, Waymo and Cruise all facing setbacks and safety controversies.
Delivery drones remain constrained by regulation, safety and logistical barriers.
There are impressive AI breakthroughs in healthcare, such as protein structure prediction and AI-assisted diagnostic imaging. But deployment in the NHS is fraught with concerns over trust, data governance and accountability.
The lesson is the same as with LLMs: progress is real but uneven, hype outpaces evidence, and without transparent oversight these systems risk being aligned more with corporate profit than with public benefit.
Whose future?
The UK-US trade deal illustrates both the promise and peril of today's AI moment. Technically, AI systems are stagnating: hallucinations persist, reasoning remains weak, and scaling them up further offers diminishing returns. Politically, the risk is sharper: models could be aligned to private or partisan interests, amplifying disinformation in an already fractured information ecosystem.
The opportunity to change the truth in real time through alignment fits less with Silicon Valley's promises of innovation than with Orwell's Ministry of Truth.
Whether in LLMs that confidently fabricate or in driverless cars that make unfortunate manoeuvres, the pattern repeats: systems promoted as transformative struggle with basic reliability, while control over their direction rests with a handful of powerful firms.
So whose AI future do we want? A future of public benefit, built on transparency, oversight, and verifiable outcomes? Or one where private corporations define what counts as truth?
The fanfare of investment cannot answer this. Only governance, accountability, and sovereignty can. Without them, the AI future being built in the UK may not belong to its citizens at all, but to the corporations and governments who claim to speak on their behalf.
Simon Thorne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.