UK Tech Firms and Child Safety Officials to Test AI's Ability to Create Exploitation Images

Tech firms and child safety organizations will be granted authority to evaluate whether artificial intelligence systems can generate child exploitation material under new UK laws.

Substantial Rise in AI-Generated Illegal Content

The declaration coincided with revelations from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the government will allow designated AI companies and child protection organizations to inspect AI models – the systems underpinning chatbots and image-generation tools – and verify that they have sufficient safeguards to prevent the creation of depictions of child sexual abuse.

"This is ultimately about preventing abuse before it happens," said Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI models early."

Tackling Regulatory Challenges

The changes have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties cannot generate such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM appeared online before they could act.

This law is designed to prevent that issue by helping to stop the production of such material at source.

Legal Framework

The amendments are being added by the government as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or sharing AI systems developed to create exploitative content.

Practical Consequences

This week, the official toured the London base of a children's helpline and listened to a simulated call to counsellors featuring a report of AI-based abuse. The interaction depicted an adolescent seeking help after being extorted with an explicit AI-generated image of himself.

"When I hear about children experiencing extortion online, it is a source of intense frustration for me and of rightful concern for parents," he said.

Concerning Data

A prominent online safety foundation reported that cases of AI-generated abuse material – such as webpages that may include numerous files – had more than doubled so far this year.

Cases of category A content – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were predominantly victimized, making up 94% of prohibited AI depictions in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a vital step to ensure AI tools are secure before they are launched," stated the head of the online safety organization.

"AI tools have made it possible for survivors to be victimised repeatedly with just a few clicks, giving criminals the ability to create potentially limitless quantities of sophisticated, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and makes young people, particularly girls, more vulnerable on- and offline."

Support Interaction Information

The children's helpline also released details of support sessions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Employing AI to evaluate weight, physique and looks
  • Chatbots dissuading young people from consulting trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 counselling interactions where AI, chatbots and associated terms were mentioned, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.

Mary Raymond