Australia's eSafety Commissioner has launched enforcement action against a technology company responsible for AI-generated 'nudify' services used to create deepfake pornography of Australian school children.
A formal warning has been issued to a UK-based technology company for enabling the creation of child sexual exploitation material through the provision of its online 'nudify' services, in breach of an industry standard under the Online Safety Act.
The company - which eSafety has chosen not to name to avoid promoting it and its services - operates two of the world's most-visited online AI-generated nude image websites, which allow users to upload photos of real people, including children.
In Australia, these two services have been attracting about 100,000 visitors per month and have been identified by eSafety as being used to generate explicit deepfake images of students in Australian schools.
"Following reports to eSafety, we found both online services were used nefariously by Australian school children, to create deepfake image-based abuse of their peers," eSafety Commissioner Julie Inman Grant said.
"These services failed to provide appropriate safeguards to prevent the creation of child sexual exploitation material and therefore pose a serious risk to our children.
"Shockingly, we found these services did little to deter the generation of synthetic child sexual abuse material by marketing alarming features such as undressing 'any girl,' with options for 'schoolgirl' and 'sex mode' to name a few," Ms Inman Grant said.
"And while these platforms can be accessed for free, the cost to the children targeted is incredibly high, if not incalculable."
eSafety's formal warning is the first step in its enforcement process. Further action will be considered should the company continue to fail to comply with Australian online safety standards.
"Our world-first industry standards have been designed to tackle the most harmful online content, including deepfakes and 'nudify services' that use AI to generate explicit material depicting a child," Ms Inman Grant said.
"We will not hesitate to use the full extent of our powers-including seeking civil penalties of up to $49.5 million-if non-compliance continues," Ms Inman Grant said.
"This enforcement action will also serve as a deterrent to other AI and technology companies which provide services that fail to prevent the creation of child sexual exploitation material."
"Unfortunately, these are not some niche parts of the internet. Experts recently assessed that nudify services globally are making tens of millions of dollars from this extremely harmful activity."
"We also welcome the Government's announcement last week that it will consult on new powers to restrict the availability of nudify services. In the meantime, eSafety will use all the powers currently available to us to prevent harms arising from these services. With deepfake image-based abuse erupting across Australian schools almost weekly, the time to act is now." Ms Inman Grant said.
Rise in deepfake reports
The formal warning follows a recent spike in reports to eSafety's image-based abuse scheme about digitally altered images, including deepfakes, from people under the age of 18.
Reports from children in the past 18 months were more than double the total received in the seven years prior. Four out of five of these reports involved the targeting of females.
"We know that digitally-enabled harms are under-reported so we are certain this is just the tip of the iceberg," Ms Inman Grant said.
This data - along with feedback from school leaders and education sector representatives that deepfake incidents are occurring more frequently, particularly as children can easily access nudify apps and services and then circulate images within school settings - prompted eSafety to release an updated Toolkit for Schools, including a step-by-step guide for dealing with deepfake incidents.
eSafety also issued an Online Safety Advisory to alert parents and schools to the recent proliferation of open-source AI nudify apps that are easily accessible by anyone with a smartphone.
"With just one photo, these apps can nudify and sexualise an image with the power of AI in seconds," Ms Inman Grant said
"We have seen these apps used to humiliate and bully children in the school yard and beyond. These depict children doing and saying things they did not say and do - but the fidelity of this deepfake imagery is so high that it is near-impossible to tell that the image isn't real."
eSafety can help
Australians who have experienced image-based abuse (the non-consensual sharing online of intimate images, including deepfakes) are encouraged to report it. Allegations of a criminal nature should be reported to local police and then to us at eSafety.gov.au.
"Our specialist teams can provide advice, support, and help to remove harmful content wherever possible," Ms Inman Grant said.
"We have a very high success rate in removing harmful material - up to 98 per cent in cases of image-based abuse."
Alongside removal actions, eSafety has remedial powers which can be used to require the perpetrator to take further, specific actions.
"Ultimately, we need a holistic response that combines regulation, platform responsibility, education and cultural change to ensure emerging technologies are not used to shame, exploit or harm others, especially children."