Court Fines Man $343K for Posting Deepfakes of Aussie Women

The Federal Court has ordered Anthony Rotondo (Antonio Rotondo) to pay a $343,500 penalty plus costs for posting deepfake images of several high-profile Australian women.

eSafety sought a significant civil penalty to reflect the seriousness of the Online Safety Act breaches and the damaging impacts of the image-based abuse on the women targeted.

Mr Rotondo admitted posting the images on a website called MrDeepFakes.com, which has since been shut down.

This action sends a strong message about the consequences for anyone who perpetrates deepfake image-based abuse.

eSafety remains deeply concerned by the non-consensual creation and sharing of explicit deepfake images, which can cause significant psychological and emotional distress.

Enforcement action

That's why eSafety has taken a number of steps to tackle deepfake image-based abuse. This includes recently launching enforcement action against a technology company responsible for AI-generated 'nudify' services used to create deepfake sexualised images of Australian school children.

Earlier this month, eSafety issued a formal warning to a UK-based technology company for enabling the creation of child sexual exploitation material through the provision of its online 'nudify' services, in breach of an industry standard under the Online Safety Act.

The company - which eSafety has chosen not to name to avoid promoting it and its services - operates two of the world's most-visited online AI-generated nude image websites, which allow users to upload photos of real people, including children.

In Australia, these two services have been attracting about 100,000 visitors per month and have been identified by eSafety as being used to generate explicit deepfake images of students in Australian schools.

The formal warning is the first step in eSafety's enforcement process. Further action will be considered should the company continue to fail to comply with Australian online safety standards.

eSafety can help

Australians who have experienced image-based abuse (the non-consensual sharing online of intimate images, including deepfakes) are encouraged to report it. Allegations of a criminal nature should be reported to local police and then to us at eSafety.gov.au.

eSafety's specialist teams can provide advice, support, and help to remove harmful content wherever possible. Last financial year we successfully removed over 85% of image-based abuse material.

Alongside removal actions, eSafety has remedial powers which can be used to require the perpetrator to take further, specific actions.

Following feedback from school leaders and education sector representatives that deepfake incidents have been occurring more frequently, eSafety released an updated Toolkit for Schools including a step-by-step guide for dealing with deepfake incidents.

eSafety also issued an Online Safety Advisory to alert parents and schools to the recent proliferation of open-source AI nudify apps that are easily accessible by anyone with a smartphone.
