November 21, 2025
Change in law aims to enable AI testing to prevent the creation of images of child sexual abuse

Child protection organizations could test artificial intelligence (AI) models to prevent them from creating indecent images and videos of children, under a proposed new law.

The law change, described as one of the first of its kind in the world, would allow certain organizations to audit AI models to prevent them from creating or distributing child sexual abuse material.

Because current UK law criminalizes the possession and creation of child sexual abuse material, developers cannot lawfully carry out safety tests on AI models, meaning such images can only be removed after they have been created and shared online.

The changes, due to be tabled on Wednesday as an amendment to the Crime and Policing Bill, would mean safeguards within AI systems could be tested from the start, with the aim of preventing the production of child sexual abuse images in the first place.

The government said the changes “represent a major step forward in protecting children in the digital age” and said the designated bodies could include AI developers and child protection organizations such as the Internet Watch Foundation (IWF).

The new legislation would also allow such organizations to verify whether AI models have protections against extreme pornography and non-consensual intimate images, the Department for Science, Innovation and Technology said.

The announcement came as the IWF released data showing that the number of reports of AI-generated child sexual abuse material more than doubled last year, rising from 199 in the ten months from January to October 2024 to 426 in the same period in 2025.

According to the data, the severity of the material has also increased over this period, with the most serious Category A content – images involving penetrative sexual activity, sexual activity with an animal or sadism – rising from 2,621 to 3,086 items, and now accounting for 56% of all illegal material, up from 41% last year.

Technology Secretary Liz Kendall said the government was “ensuring child safety is built into AI systems” (Jordan Pettitt/PA)

The data showed that girls were most often targeted, accounting for 94% of illegal AI images in 2025.

The government said it would bring together a group of AI and child safety experts to ensure testing is carried out “safely and securely”.

The group will help develop safeguards to protect sensitive data and prevent any risk of illegal material being leaked.

Technology Secretary Liz Kendall said: “We will not allow technological advances to outpace our ability to keep children safe.”

“These new laws will ensure AI systems can be made secure at source, avoiding vulnerabilities that could endanger children.

“By giving trusted organizations the ability to audit their AI models, we ensure that child safety is built into AI systems rather than an afterthought.”

Safeguarding Minister Jess Phillips said: “We must ensure children are safe online and that our laws keep pace with the latest threats.”

“This new measure means legitimate AI tools cannot be manipulated to create abhorrent material, protecting more children from predators.”

Kerry Smith, chief executive of the IWF, said: “AI tools have enabled survivors to be re-victimized in just a few clicks, giving criminals the ability to create potentially unlimited amounts of sophisticated, photorealistic child sexual abuse material.”

“Material that further commodifies the suffering of victims and makes children, especially girls, less safe online and offline.”

“Safety must be built into new technology by design.

“Today’s announcement could be an important step in ensuring AI products are safe before they come to market.”

The NSPCC said the new law should make it mandatory to test AI models in this way.

Rani Govender, policy manager for online child safety at the charity, said: “It is encouraging to see new laws forcing the AI industry to take greater responsibility for auditing their models and preventing the creation of child sexual abuse material on their platforms.”

“But to make a real difference for children, this cannot be optional. The government must ensure there is a mandatory requirement for AI developers to use this provision so that protection from child sexual abuse is an integral part of product design.”
