Controversy over the chatbot Grok escalated rapidly through the early weeks of 2026. The cause was revelations about its alleged ability to generate sexualised images of women and children in response to requests from users on the social media platform X.
This prompted the UK media regulator Ofcom and, subsequently, the European Commission to launch formal investigations. These developments come at a pivotal moment for digital regulation in the UK and the EU. Governments are moving from aspirational regulatory frameworks to a new phase of active enforcement, particularly with legislation such as the UK's Online Safety Act.
The central question here is not whether individual failures by social media companies occur, but whether voluntary safeguards - those devised by the social media companies rather than enforced by a regulator - remain sufficient where the risks are foreseeable. These safeguards can include measures such as blocking certain keywords in users' prompts to AI chatbots.
Grok is a test case because the AI is integrated directly into the X social media platform. X (formerly Twitter) has faced longstanding challenges around content moderation, political polarisation and harassment.
Unlike standalone AI tools, Grok operates inside a high-velocity social media environment. Controversial responses to user requests can be instantly amplified, stripped of context and repurposed for mass circulation.
In response to the concerns about Grok, X issued a statement saying the company would "continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content".
The statement added that image creation and the ability to edit images would now only be available to paid subscribers globally. Furthermore, X said it was "working round the clock" to apply additional safeguards and take down problematic and illegal content.
This last assurance - of building in additional safeguards - echoes earlier platform responses to extremist content, sexual abuse material and misinformation. That framing, however, is increasingly being rejected by regulators.
Under the UK's Online Safety Act (OSA), the EU's AI Act and its codes of practice, and the EU's Digital Services Act (DSA), platforms are legally required to identify, assess and mitigate foreseeable risks arising from the design and operation of their services.
These obligations extend beyond illegal content. They include harms associated with political polarisation, radicalisation, misinformation and sexualised abuse.
Step by step
Research on online radicalisation and persuasive technologies has long emphasised that harm often emerges cumulatively, through repeated validation, normalisation and adaptive engagement rather than through isolated exposure. It is possible that AI systems like Grok could intensify this dynamic.
More generally, conversational systems have the potential to legitimise false premises, reinforce grievances and adapt their responses to users' ideological or emotional cues.
The risk is not simply that misinformation exists, but that AI systems may materially increase its credibility, durability or reach. Regulators must therefore assess not only individual AI outputs, but whether the system itself enables escalation, reinforcement or the persistence of harmful interactions over time.
Safeguards applied on social media to AI-generated content can include screening user prompts, blocking certain keywords and moderating posts. Used alone, such measures may be insufficient if the wider platform continues to amplify false or polarising narratives indirectly.
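To illustrate why keyword blocking on its own is such a blunt instrument, here is a minimal, hypothetical sketch in Python of a prompt filter of this kind. The blocklist and example prompts are assumptions for illustration only, not any platform's actual system; the point is simply that a trivially reworded request can pass straight through.

```python
# Illustrative sketch of a naive keyword-based prompt filter.
# The blocklist and prompts below are hypothetical examples, not real platform data.

import re

BLOCKED_KEYWORDS = {"deepfake", "nude"}  # hypothetical blocklist


def is_blocked(prompt: str) -> bool:
    """Flag a prompt if it contains any blocked keyword as a whole word."""
    tokens = re.findall(r"[a-z]+", prompt.lower())
    return any(token in BLOCKED_KEYWORDS for token in tokens)


if __name__ == "__main__":
    print(is_blocked("generate a deepfake of this politician"))   # True: exact keyword match
    print(is_blocked("generate a deep fake of this politician"))  # False: trivial rewording slips through
```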
Generative AI alters the enforcement landscape in important ways. Unlike static feeds, conversational AI systems may engage users privately and repeatedly. This makes harm less visible, harder to find evidence for and more difficult to audit using tools designed for posts, shares or recommendations. This poses new challenges for regulators aiming to measure exposure, reinforcement or escalation over time.
These challenges are compounded by practical enforcement constraints, including limited regulator access to interaction logs.
Grok operates in an environment where AI tools can generate sexualised content and deepfakes without consent. Women are disproportionately targeted by such sexualised content, and the resulting harms are severe and enduring.
These harms frequently intersect with misogyny, extremist narratives and coordinated misinformation, illustrating the limits of siloed risk assessments that separate sexual abuse from radicalisation and information integrity.
Ofcom and the European Commission now have the authority not only to impose fines, but to mandate operational changes and restrict services under the OSA, DSA and AI Act.
Grok has become an early test of whether these powers will be used to address large-scale risks, rather than narrow content takedown failures.
Enforcement, however, cannot stop at national borders. Platforms such as Grok operate globally, while regulatory standards and oversight mechanisms remain fragmented. OECD guidance has already underscored the need for common approaches, particularly for AI systems with significant societal impact.
Some convergence is now beginning to emerge through industry-led safety frameworks, such as the one initiated by OpenAI, and Anthropic's articulated risk tiers for advanced models. It is also emerging through the EU AI Act's classification of high-risk systems and the development of voluntary codes of practice.
Grok is not merely a technical glitch, nor just another chatbot controversy. It raises a fundamental question about whether platforms can credibly self-govern where the risks are foreseeable, and whether governments can meaningfully enforce laws designed to protect users, democratic processes and the integrity of information in a fragmented, cross-border digital ecosystem.
The outcome will indicate whether generative AI will be subject to real accountability in practice, or whether it will repeat the cycle of harm, denial and delayed enforcement that we have seen from other social media platforms.
Dareen Toro does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.