AFP, Monash Uni Poison Data To Fight AI Crime

The AFP and Monash University are teaming up to turn the tech tables on cybercriminals through a dose of digital poison.

The AI for Law Enforcement and Community Safety (AiLECS) Lab, a collaboration between the AFP and Monash University, is developing a new disruption tool that, among its broad applications, can slow or stop criminals from producing AI-generated child abuse material, extremist propaganda, and deepfake images and videos.

Known as 'data poisoning', the technique involves subtly altering data to make it significantly more difficult to produce, manipulate and misuse images or videos with AI programs.

AI and machine learning (ML) tools need large amounts of online data to produce AI-generated content, so poisoning that data causes AI models to create inaccurate, skewed or corrupted results. This also makes a doctored image or video created by criminals easier to spot.

The AI disrupter, called 'Silverer', is at the prototype stage after 12 months of development under AiLECS researcher and project lead, PhD candidate Elizabeth Perry.

Ms Perry said the name was a nod to the silver used to make mirrors. Similarly, the tool would be used to create reflections of an original image.

"In this case, it's like slipping silver behind the glass, so when someone tries to look through it, they just end up with a completely useless reflection," she said.

"Before a person uploads images on social media or the internet, they can modify them using Silverer. This will alter the pixels to trick AI models and the resulting generations will be very low-quality, covered in blurry patterns, or completely unrecognisable.

"Offenders making deepfakes often try to use a victim's data to fine-tune an AI of their own; Silverer modifies the image by adding a subtle pattern to the image which tricks the AI into learning to reproduce the pattern, rather than generate images of the victim."

AFP Commander Rob Nelson said data-poisoning technologies were still in their infancy and still being tested, but showed promising early results for law enforcement capability.

"Where we see strong applications is in the misuse of AI technology for malicious purposes," Cmdr Nelson said.

"For example, if a criminal attempts to generate AI-based imagery using the poisoned data, the output image will be distorted or completely different from the original. By poisoning the data, we are actually protecting it from being generated into malicious content.

"A number of data-poisoning algorithms already exist, and as we see in other cyber security areas, emerging methods to avoid them appear quickly soon after.

"We don't anticipate any single method will be capable of stopping the malicious use or re-creation of data, however, what we are doing is similar to placing speed bumps on an illegal drag racing strip. We are building hurdles to make it difficult for people to misuse these technologies."

The AFP has identified an increase in AI-generated child abuse material, with criminals leveraging the technology to produce and share significant amounts of fake explicit content online.

Two Australian men are among 25 people arrested as part of a global operation targeting the alleged production and distribution of child abuse material generated by AI.

As part of this global resolution, the AFP charged a Queensland man, 31, and a New South Wales man, 38, in February 2025 for allegedly possessing AI-generated child abuse content.

A Sydney man was charged in October 2025 over the alleged importation of a child-like sex doll and the production and possession of AI-generated child abuse material on multiple digital devices.

A NSW South Coast man was charged in August 2025 with three child abuse material offences, including allegedly possessing more than 1000 illicit images and videos involving minors as young as one. Examination of the man's device allegedly confirmed the presence of AI-generated child abuse material featuring the man.

In another matter, a Melbourne man was sentenced to 13 months' imprisonment in July 2024 for using a generative AI image program to manipulate and create nearly 800 realistic child abuse images.

Commander Nelson said the disruption tool could aid investigators by reducing the volume of fake material they need to wade through.

"Data poisoning, if performed on a large scale, has the potential to slow down the rise in AI-generated malicious content such as child abuse material, which would allow police to focus on identifying and removing real children from harm," Cmdr Nelson said.

Digital forensics expert and AiLECS Co-Director Associate Professor Campbell Wilson said the generation of fake and malicious images was a rapidly growing problem.

"Currently, these AI-generated harmful images and videos are relatively easily created using open-source technology and there's a very low barrier to entry for people to use these algorithms," Associate Professor Wilson said.

Scammers also use AI technology to generate deepfakes (hyper-realistic impersonations of real people) to create video ads, images, or news articles that appear to show celebrities and other trusted public figures promoting investment opportunities online.

The celebrity endorsement provides credibility for the scam, with victims tricked into sending sensitive information or transferring large amounts of money.

Australians lost more than $382 million to investment scams in the 2023-2024 financial year, which included deepfake celebrity investment scams.

Commander Nelson said the overarching goal of the 'Silverer' research project was to develop and continually enhance technology that ordinary Australians could easily use to protect their data on social media.

"Many harmful deepfakes are generated using only a small handful of training data images. If a user can poison those images before uploading them, it makes it significantly harder for criminals to generate malicious images of that user," Cmdr Nelson said.

"We urge the public to consider poisoning images at risk of being manipulated by criminals for deceptive purposes. A dose of data poison will make it significantly harder for criminals to distort reality with artificial intelligence."

Discussions are under way about using the prototype version of the tool internally at the AFP.

The AiLECS Lab is a research collaboration between the AFP and Monash University that launched in Melbourne in 2019.

Born out of collaboration on accelerating digital forensics and countering online child exploitation, the AiLECS Lab researches the next generation of AI for ethical law enforcement and community safety applications.

The research is supported and funded via the AFP's Federal Government Confiscated Assets Account.
