Report Reveals Views on Sexualised Deepfake Abuse

Monash University
  • First study of its kind to interview perpetrators of deepfake sexual abuse about their motivations

  • Increased accessibility of AI tools is helping perpetrators create realistic and harmful nude and sexual imagery

  • Education, tighter regulations on the marketing of AI tools and laws around creation and consumption of sexualised deepfake imagery may help combat this growing issue

AI tools are making it easier to create and disseminate deepfake imagery, and a new study from Monash University has revealed insights into the experience of both victims and perpetrators of sexualised deepfake abuse.

The research, funded by the Australian Research Council, is the first of its kind to include interviews with both perpetrators and victims. It aimed to understand patterns of abuse in Australia and the motivations behind it, including how people who engage in these harms rationalise and minimise their actions.

The study's lead author Professor Asher Flynn, from the School of Social Sciences at Monash University and a Chief Investigator on the Australian Research Council Centre of Excellence for the Elimination of Violence Against Women (CEVAW), said advances in digital technologies have provided new opportunities for people to engage in harmful sexual behaviours.

"Our findings indicate that creating and sharing sexualised deepfake imagery is not only normalised among some young men, but encouraged as a way to bond or gain status from their peers," Professor Flynn said. "Many participants frequently pointed to the positive reinforcement from peers about their technological prowess in creating realistic, but fake sexualised images as a key motivation."

The study also found that perpetrators frequently downplayed the harm caused, with many claiming AI technologies made the images easy to create, shifting the blame away from themselves.

"There is a clear disconnect between participants' understanding of sexualised deepfake abuse as harmful, and acknowledging their own actions. Many turned to victim-blaming, claiming it was just a joke or outright denial – echoing patterns we see in other forms of sexual violence. This makes it harder to recognise and report sexualised deepfake abuse, which in turn undermines accountability and weakens any deterrent effect."

Despite the severity of the harm, none of the perpetrators interviewed had faced legal consequences. Victims also reported little to no recourse – even when incidents were reported to police.

While women were often the targets of the abuse, particularly when the motivation was to harm, control or sexualise the subject of the fake image, the study also found a pattern of perpetration against men motivated by monetary gain (sextortion), humour and humiliation.

Professor Flynn said tighter regulation of access to deepfake tools, as well as education about the potential consequences and harms of sexualised deepfake abuse, is a necessary starting point for tackling this emerging form of abuse.

"The growing proliferation of AI tools, combined with the acceptance or normalising the creation of deepfakes more generally, has provided access and motivation to a broader range of people who might not otherwise engage in this type of abuse."
