New Law Targets AI Child Abuse Images Amid Surge

New legislation sees government work with AI industry and child protection organisations to ensure AI models cannot be misused to create synthetic child sexual abuse images.

  • Technology Secretary and Home Secretary will have new powers to designate AI developers and charities like the Internet Watch Foundation as authorised testers.
  • Comes as fresh Internet Watch Foundation (IWF) data shows reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025. (note)

Children will be better protected from becoming victims of horrific indecent deepfakes as the government introduces new laws to ensure Artificial Intelligence (AI) cannot be exploited to generate child sexual abuse material.

Data from the Internet Watch Foundation released today (Wednesday 12 November) shows reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025. (note)

There has also been a disturbing rise in depictions of infants, with images of 0-2-year-olds surging from 5 in 2024 to 92 in 2025. (note)

Under stringent new legislation, designated bodies like AI developers and child protection organisations, such as the Internet Watch Foundation (IWF), will be empowered to scrutinise AI models and ensure safeguards are in place to prevent them from generating or proliferating child sexual abuse material, including indecent images and videos of children.

Currently, because creating and possessing this material is a criminal offence, developers cannot carry out safety testing on AI models, and images can only be removed after they have been created and shared online. This measure, one of the first of its kind in the world, ensures AI systems' safeguards can be robustly tested from the start, limiting the production of this material in the first place.

The laws will also enable organisations to check that models have protections against extreme pornography and non-consensual intimate images.

While possessing and generating child sexual abuse material, whether real or synthetically produced by AI, is already illegal under UK law, rapidly improving AI image and video capabilities present a growing challenge.

We know that offenders who seek to create this heinous material often do so using images of real children - both those known to them and those found online - and attempt to circumvent safeguards designed to prevent this.

This measure aims to make such actions more difficult by empowering companies to ensure their safeguards are effective and to develop innovative, robust methods to prevent model misuse.

Technology Secretary Liz Kendall said:

We will not allow technological advancement to outpace our ability to keep children safe.

These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk.

By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.

Jess Phillips, Minister for Safeguarding and Violence Against Women and Girls, said:

We must make sure children are kept safe online and that our laws keep up with the latest threats. This new measure will mean legitimate AI tools cannot be manipulated into creating vile material and more children will be protected from predators as a result.

It comes as new Internet Watch Foundation data also shows the severity of the material has intensified over the past year. Category A content - images involving penetrative sexual activity, images involving sexual activity with an animal, or sadism - rose from 2,621 to 3,086 items, now accounting for 56% of all illegal material compared to 41% last year. (note)

Girls have been overwhelmingly targeted, making up 94% of illegal AI images in 2025. (note)

To ensure testing work is carried out safely and securely, the government will also bring together a group of experts in AI and child safety.

The group will help design the safeguards needed to protect sensitive data, prevent any risk of illegal content being leaked, and support the wellbeing of researchers involved.

These changes, which will be tabled today (Wednesday 12 November) as an amendment to the Crime and Policing Bill, mark a major step forward in safeguarding children in the digital age.

They reflect the government's commitment to working hand-in-hand with AI developers, tech platforms, and child protection organisations to build a safer online world for children.

We want the UK to be the safest place in the world to be online, particularly for children, and this includes when using AI models. This measure aims to help us achieve that goal by making the AI models used by the British public safer and more robust at preventing offenders from misusing this exciting technology for criminal activity.

This proactive approach not only protects children from exploitation and re-victimisation but also reinforces public trust in AI innovation - proving that technological progress and child safety can go hand in hand.

Kerry Smith, Chief Executive of the Internet Watch Foundation (IWF), said: 

We welcome the government's efforts to bring in new measures for testing AI models to check whether they can be abused to create child sexual abuse. For 3 decades, we have been at the forefront of preventing the spread of this imagery online - we look forward to using our expertise to help further the fight against this new threat.

AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material - material which further commodifies victims' suffering and makes children, particularly girls, less safe on and offline.

Safety needs to be baked into new technology by design. Today's announcement could be a vital step to make sure AI products are safe before they are released.

Notes

(note): Internet Watch Foundation research - trends of AI-Generated Child Sexual Abuse Material (CSAM) (data compares January to October 2024 vs January to October 2025)

  • AI reports actioned more than doubled, rising from 199 in 2024 to 426 in 2025.
  • While the overall number of AI images and videos decreased slightly (6,459 in 2024 to 5,560 in 2025), severity has intensified. Category A content rose from 2,621 to 3,086 items, now accounting for 56% of all illegal material compared to 41% last year.
  • Gender analysis shows girls remain overwhelmingly targeted, making up 94% of illegal AI images in 2025, though there is a small increase in boys appearing. Age profiles reveal a disturbing rise in depictions of infants: images of 0-2-year-olds surged from 5 in 2024 to 92 in 2025, while older age brackets saw reductions.

Each 'report' the IWF receives refers to a webpage or URL - each of which may contain one, or multiple, images or videos of child sexual abuse. A webpage only needs to contain a single confirmed image or video of child sexual abuse for the IWF to take action to have it removed.

The image-by-image analysis refers to the individual images and videos the IWF has discovered (hence the higher numbers). Each number is an individual image or video, allowing for a more granular breakdown of the age, sex and severity of the abuse in the imagery.

DSIT
