Call for ban on AI apps creating naked images of children

The children's commissioner for England is calling on the government to ban apps which use artificial intelligence (AI) to create sexually explicit images of children.
Dame Rachel de Souza said a total ban was needed on apps which allow "nudification" - where photos of real people are edited by AI to make them appear naked.
She said the government was allowing such apps to "go unchecked with extreme real-world consequences".
A government spokesperson said child sexual abuse material was illegal, and that the government planned to introduce further offences covering the creation, possession or distribution of AI tools designed to create such content.
Deepfakes are videos, pictures or audio clips made with AI to look or sound real.
In a report published on Monday, Dame Rachel said the technology was disproportionately targeting girls and young women, with many bespoke apps appearing to work only on female bodies.
Girls are actively avoiding posting images or engaging online to reduce the risk of being targeted, according to the report, "in the same way that girls follow other rules to keep themselves safe in the offline world - like not walking home alone at night".
Children feared "a stranger, a classmate, or even a friend" could target them using technologies which could be found on popular search and social media platforms.
Dame Rachel said: "The evolution of these tools is happening at such scale and speed that it can be overwhelming to try and get a grip on the danger they present.
"We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives."
It is illegal under the Online Safety Act to share or threaten to share explicit deepfake images.
In February the government announced laws to tackle the threat of child sexual abuse images being generated by AI, including making it illegal to possess, create or distribute AI tools designed to create such material.
Dame Rachel said this did not go far enough, with her spokesman telling the BBC: "There should be no nudifying apps, not just no apps that are classed as child sexual abuse generators."
Rise in reported cases
In February the Internet Watch Foundation (IWF) - a UK-based charity partly funded by tech firms - said it had confirmed 245 reports of AI-generated child sexual abuse imagery in 2024, up from 51 in 2023 - a 380% increase.
"We know these apps are being abused in schools, and that imagery quickly gets out of control," IWF Interim Chief Executive Derek Ray-Hill said on Monday.
A spokesperson for the Department for Science, Innovation and Technology said creating, possessing or distributing child sexual abuse material, including AI-generated images, is "abhorrent and illegal".
"Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines," they added.
"The UK is the first country in the world to introduce further AI child sexual abuse offences - making it illegal to possess, create or distribute AI tools designed to generate heinous child sex abuse material."
Dame Rachel also called for the government to:
- impose legal obligations on developers of generative AI tools to identify the risks their products pose to children and take action to mitigate them
- set up a systematic process to remove sexually explicit deepfake images of children from the internet
- recognise deepfake sexual abuse as a form of violence against women and girls
Paul Whiteman, general secretary of school leaders' union NAHT, said members shared the commissioner's concerns.
He said: "This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it."
Media regulator Ofcom published the final version of its Children's Code on Friday, which places legal requirements on platforms hosting pornography, or content encouraging self-harm, suicide or eating disorders, to do more to prevent children accessing it.
Websites must introduce beefed-up age checks or face big fines, the regulator said.
Dame Rachel has criticised the code, saying it prioritises the "business interests of technology companies over children's safety".