UK Technology Firms and Child Safety Officials to Test AI's Ability to Generate Abuse Content

Tech firms and child protection agencies will be granted permission to assess whether artificial intelligence tools can generate child exploitation images under new British laws.

Significant Rise in AI-Generated Illegal Content

The announcement came alongside findings from a protection watchdog showing that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the amendments, the government will permit approved AI developers and child protection groups to examine AI systems – the foundational technology for conversational AI and visual AI tools – and verify they have adequate protective measures to prevent them from creating images of child sexual abuse.

The measures are "fundamentally about stopping exploitation before it happens," stated the minister for AI and online safety, who added: "Specialists, under rigorous conditions, can now detect the risk in AI systems early."

Addressing Regulatory Obstacles

The changes were needed because creating and possessing CSAM is against the law, meaning that AI developers and others could not generate such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM appeared online before acting on it.

The new law is designed to avert that problem by helping to stop the production of such material at its source.

Legal Structure

The amendments are being introduced by the government as modifications to the crime and policing bill, which is also establishing a prohibition on possessing, creating or distributing AI systems developed to generate child sexual abuse material.

Practical Consequences

Recently, the minister visited the London headquarters of Childline and listened to a simulated call to advisors featuring a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I hear about young people experiencing blackmail online, it is a source of extreme frustration for me and rightful anger amongst parents," he stated.

Alarming Data

A leading internet monitoring organization stated that instances of AI-generated exploitation content – recorded as webpages, each of which may contain multiple files – had more than doubled so far this year.

Cases of the most severe material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly targeted, making up 94% of illegal AI images in 2025
  • Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to guarantee AI tools are safe before they are released," stated the chief executive of the internet monitoring organization.

"AI tools have made it so victims can be victimised repeatedly with just a few simple actions, giving criminals the ability to make potentially limitless amounts of sophisticated, lifelike exploitative content," she continued. "Content which further exploits victims' trauma, and makes young people, especially girls, more vulnerable online and offline."

Counseling Interaction Information

The children's helpline also released details of support interactions where AI has been referenced. AI-related risks mentioned in the sessions include:

  • Using AI to rate weight, body and appearance
  • Chatbots dissuading young people from talking to safe adults about abuse
  • Being bullied online with AI-generated material
  • Online extortion using AI-faked pictures

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were discussed – four times as many as in the same period last year.

Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.

Cameron Brown