Child protection organizations could test artificial intelligence (AI) models to prevent the creation of indecent images and videos of children under a proposed new law.
The law change, described as one of the first of its kind in the world, would allow certain organizations to audit AI models to prevent them from creating or distributing child sexual abuse material.
Because current UK law criminalizes the possession and creation of child sexual abuse material, developers cannot legally run safety tests on AI models for such content, meaning images can only be removed after they have been created and shared online.
The changes, due to be tabled on Wednesday as an amendment to the Crime and Policing Bill, would mean security measures within AI systems could be tested from the start, with the aim of preventing the production of child sexual abuse images in the first place.
The government said the changes “represent a major step forward in protecting children in the digital age” and said the designated bodies could include AI developers and child protection organizations such as the Internet Watch Foundation (IWF).
The new legislation would also allow such organizations to verify whether AI models provide protection against extreme pornography and non-consensual intimate images, the Department for Science, Innovation and Technology said.
The announcement came as the IWF released data showing that the number of reports of AI-generated child sexual abuse material more than doubled last year, rising from 199 in the ten months from January to October 2024 to 426 in the same period in 2025.
According to the data, the severity of the material has also increased over this period: the most serious Category A content – images involving penetrative sexual activity, sexual activity with an animal, or sadism – rose from 2,621 to 3,086 items and now accounts for 56% of all illegal material, up from 41% last year.
The data showed that girls were the most frequent targets, depicted in 94% of illegal AI images in 2025.
The government said it would bring together a group of AI and child safety experts to ensure testing is carried out “safely and securely”.
The experts’ role will be to help design safeguards that protect sensitive data and prevent illegal content from being shared.
Technology Secretary Liz Kendall said: “We will not allow technological advances to outpace our ability to keep children safe.

“These new laws will ensure AI systems can be made secure at source, avoiding vulnerabilities that could endanger children.”