Thousands of AI-generated images depicting child abuse have been shared on a dark web forum, new research has found.
Around 3,000 AI images of child abuse were shared on the forum in September, with 564 depicting the most serious kind of imagery including rape, sexual torture, and bestiality.
Of the images, 1,372 depicted children aged between seven and 10 years old, according to research by the Internet Watch Foundation (IWF).
The charity said the most convincing images would be difficult even for trained analysts to distinguish from photographs, and warned that text-to-image technology will only improve, making it harder for police and other law enforcement agencies to protect children.
Some images depict real children whose faces and bodies were used to train the AI models, which the charity has decided not to name.
In other cases, the models were used to "nudify" children based on fully clothed images of them uploaded online.
Criminals are also using the technology to create images of celebrities who have been “de-aged” to depict them as children in sexual abuse scenarios.
‘This threat is here and now’
Ian Critchley, the National Police Chiefs’ Council lead for child protection in the UK, said the generation of such images online normalises child abuse in the real world.
“It is clear that this is no longer an emerging threat – it is here and now,” he said.
“We are seeing children groomed, we are seeing perpetrators make their own imagery to their own specifications, we are seeing the production of AI imagery for commercial gain – all of which normalises the rape and abuse of real children.”
What can be done about it?
The UK's impending Online Safety Bill is designed to hold social media companies more accountable for content published on their platforms.
But it does not extend to the AI companies whose models are being altered and used to generate abusive imagery.
The UK government is hosting an AI safety summit next week that aims to address the risks associated with AI and consider what action is needed.
Susie Hargreaves, chief executive of the IWF, said new EU laws on child sexual abuse should cover unknown imagery.
“We are seeing criminals deliberately training their AI on real victims’ images who have already suffered abuse,” she said.
“Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it.”
Politicians ‘caught asleep at the wheel’
Ellen Judson, head of the digital research hub at the think tank Demos, said: "Once again, policymakers have been caught asleep at the wheel as generative AI continues to radically transform the nature of online harms."
She called for the government to “get on the front foot” in their understanding and regulation of AI tools, specifically around how they are designed and developed.
“Waiting for the next crisis to occur before responding is simply not a sustainable approach,” she added.
A Home Office spokesperson said: “Online child sexual abuse is one of the key challenges of our age, and the rise in AI-generated child sexual abuse material is deeply concerning.
“We are working at pace with partners across the globe to tackle this issue, including the Internet Watch Foundation.
“Last month, the home secretary announced a joint commitment with the US government to work together to innovate and explore development of new solutions to fight the spread of this sickening imagery.”