eSafety Urges Schools To Report Rising Deepfake Cases

eSafety Commissioner Julie Inman Grant has issued an urgent call for schools to report deepfake incidents to appropriate authorities as the rapid proliferation of 'nudify' apps online takes a growing toll on communities around Australia.

The Commissioner has written to education ministers urging them to ensure schools adhere to state and territory child protection legislation and mandatory reporting obligations.

To help address the threat of AI-generated abuse in Australian classrooms, reports of which have steadily increased over the past 18 months, eSafety has today released an updated Toolkit for Schools including a step-by-step guide for dealing with deepfake incidents.

The guide strongly encourages educators to prioritise the wellbeing of children and targeted staff and report any potential criminal offence to local police.

"I'm calling on schools to report allegations of a criminal nature, including deepfake abuse of under-aged students, to police and to make sure their communities are aware that eSafety is on standby to remove this material quickly," Ms Inman Grant said.

"It is clear from what is already in the public domain, and from what we are hearing directly from the education sector, that this is not always happening."

"Our response guide helps schools prepare for and manage deepfake incidents, taking into account the distress and lasting harms these can cause to those targeted.

"It also encourages schools to openly communicate their online safety policies and procedures, and the potential for serious consequences, including criminal charges in some instances, for perpetrators who may be creating synthetic child sexual abuse material," Ms Inman Grant said.

eSafety has today also issued a new Online Safety Advisory to alert parents and schools to the recent proliferation of open-source AI 'nudify' apps that are easily accessible to anyone with a smartphone.

"Creating an intimate image of someone under the age of 18 is illegal. This includes the use of AI tools. Parents and carers can help educate their children that this behaviour can lead to criminal charges."

Additionally, eSafety is hosting a series of webinars throughout July and August for parents, educators and youth-serving organisations on AI-assisted image-based abuse and navigating the deepfake threat.

AI proliferation

New data reveals that, over the past 18 months, reports to eSafety's image-based abuse scheme about digitally altered intimate images, including deepfakes, from people under the age of 18 have more than doubled the total number of such reports received in the seven years prior. Four out of five of these reports involved the targeting of females.

While the rapid rise in reports is cause for concern, the reality may be worse, Ms Inman Grant warned.

"We suspect what is being reported to us is not the whole picture," Ms Inman Grant said.

"Anecdotally, we have heard from school leaders and education sector representatives that deepfake incidents are occurring more frequently, particularly as children are easily able to access and misuse nudify apps in school settings," Ms Inman Grant said.

"With just one photo, these apps can nudify the image with the power of AI in seconds. Alarmingly, we have seen these apps used to humiliate, bully and sexually extort children in the school yard and beyond. There have also been reports that some of these images have been traded among school children in exchange for money."

"We have already been engaging with police, the app makers and the platforms that host these high-risk apps to put them on notice that our mandatory standards come into full effect this week and carry up to a $49.5 million fine per breach, and that we will not hesitate to take regulatory action."

eSafety's role

eSafety's clear role as Australia's independent online safety regulator is to seek the removal of this kind of distressing material as quickly as possible, and to provide support to those impacted.

Australians who have experienced image-based abuse (the non-consensual sharing online of intimate images, including deepfakes) are encouraged to report it. Allegations of a criminal nature should be reported to local police and then to us at eSafety.gov.au.

"Our specialist teams can provide advice, support, and help to remove harmful content wherever possible," Ms Inman Grant said.

"We have a very high success rate in removing harmful material - up to 98 per cent in cases of image-based abuse. Those affected consistently tell us that swift content takedown is their main concern."

Alongside removal actions, eSafety has remedial powers which can be used to require the perpetrator to take further, specific actions.

eSafety also supports schools through its education and professional learning programs, training and resources.

"Our world-first industry standards will tackle the most harmful online content, including deepfakes and 'nudify apps' that use AI to generate explicit material depicting a child under 18," Ms Inman Grant said.

"These standards require high-risk AI tools, including nudify apps, to implement robust safeguards that prevent misuse, including child exploitation."

Non-compliance may result in civil penalties of up to $49.5 million per breach.

"Safety by Design needs to be a fundamental tenet in the creation and deployment of these apps which is why eSafety also continues to push for an end to the industry's 'release first, fix later' mindset, particularly when the damage already wrought by these tools is so apparent," Ms Inman Grant said.

"We are looking forward to the Government moving forward with a digital duty of care and will continue to call for stronger industry adoption of eSafety's Safety by Design principles, which ensure platforms are built with embedded safety features such as privacy settings, effective moderation and age-appropriate protections from the outset, not as an afterthought.

"Ultimately, we need a holistic response that combines regulation, platform responsibility, education and cultural change to ensure emerging technologies are not used to shame, exploit or harm others, especially children."

Role of police

It is important to emphasise that eSafety's role as a regulator differs from the role of police.

"While we act swiftly to remove harmful content, it is police who are responsible for pursuing criminal charges. We also refer serious matters to the appropriate law enforcement agencies and often work in parallel with them during investigations," Ms Inman Grant said.
