British Technology Companies and Child Protection Officials to Examine AI's Capability to Generate Abuse Content

Tech firms and child protection agencies will receive permission to evaluate whether artificial intelligence systems can generate child abuse material under new British laws.

Substantial Increase in AI-Generated Illegal Content

The declaration came as a protection monitoring body revealed that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, the government will permit approved AI companies and child safety organizations to examine AI models – the foundational technology for chatbots and image generators – and ensure they have sufficient safeguards to prevent them from creating images of child sexual abuse.

The measures are "fundamentally about stopping abuse before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the risk in AI systems promptly."

Tackling Regulatory Challenges

The amendments were needed because it is illegal to produce and possess CSAM, meaning that AI developers and others could not generate such content as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.

This legislation is designed to avert that problem by helping to halt the creation of such material at its source.

Legal Structure

The changes are being introduced by the government as revisions to the criminal justice legislation, which also implements a ban on possessing, creating or sharing AI systems designed to create exploitative content.

Practical Consequences

Recently, the minister toured the London headquarters of a children's helpline and listened to a simulated conversation with advisors involving a report of AI-based exploitation. The call portrayed an adolescent requesting help after being blackmailed with an explicit AI-generated image of themselves.

"When I hear about children facing blackmail online, it is a source of intense frustration for me and of rightful anger amongst parents," he said.

Alarming Data

A leading internet monitoring organization stated that cases of AI-generated exploitation content – such as online pages that may include multiple images – had significantly increased so far this year.

Instances of category A content – the gravest form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to guarantee AI tools are secure before they are launched," stated the head of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few simple actions, giving criminals the ability to create potentially endless quantities of sophisticated, photorealistic child sexual abuse material," she added. "Material which further commodifies victims' trauma, and renders young people, particularly female children, more vulnerable both online and offline."

Support Interaction Data

The children's helpline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Employing AI to rate weight, body and appearance
  • Chatbots discouraging young people from talking to trusted guardians about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, Childline delivered 367 counselling sessions where AI, chatbots and related topics were discussed, four times as many as in the equivalent timeframe last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Peter Davis