British Technology Companies and Child Safety Officials to Test AI's Ability to Create Exploitation Images
Under recently introduced UK laws, tech firms and child protection agencies will be granted permission to test whether artificial intelligence systems can generate child exploitation material.
Substantial Increase in AI-Generated Harmful Content
The announcement coincided with figures from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will allow approved AI companies and child safety organizations to inspect AI systems – the foundational technology for conversational AI and image generators – and verify they have adequate safeguards to prevent them from producing images of child exploitation.
"Fundamentally about stopping abuse before it happens," declared Kanishka Narayan, noting: "Experts, under rigorous protocols, can now identify the danger in AI models promptly."
Addressing Regulatory Obstacles
The changes address a legal obstacle: because producing and possessing CSAM is illegal, AI developers and other parties could not generate such content even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM had been uploaded online before they could act.
The legislation aims to prevent that problem by making it possible to stop the production of such material at source.
Legislative Framework
The government is introducing the amendments as revisions to criminal justice legislation, which also bans possessing, producing or distributing AI models designed to generate child exploitation material.
Real-World Impact
This week, the minister visited the London headquarters of a children's helpline, where he listened to a mock-up call to advisers reporting AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about young people facing blackmail online, it is a source of intense anger in me and justified concern amongst families," he stated.
Alarming Data
A leading online safety foundation said that reports of AI-generated exploitation material – each of which can cover a webpage containing numerous images – had more than doubled so far this year.
Instances of the most severe category of content – the most serious form of exploitation – increased from 2,621 image and video files to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of illegal AI-generated images in 2025
- Depictions of children aged two and under rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are launched," commented the head of the online safety foundation.
"Artificial intelligence systems have enabled so survivors can be targeted repeatedly with just a simple actions, giving criminals the ability to make potentially limitless quantities of advanced, photorealistic child sexual abuse material," she continued. "Content which additionally commodifies survivors' trauma, and makes children, especially girls, more vulnerable on and off line."
Support Session Information
The children's helpline also published details of support sessions in which AI came up. Harms raised in those conversations include:
- Using AI to rate weight, body shape and appearance
- Chatbots discouraging young people from talking to trusted adults about abuse
- Online bullying involving AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy apps.