In a Perspective, Hany Farid highlights the risk of manipulated and fraudulent images and videos, known as deepfakes, and explores interventions that could mitigate the harms deepfakes can cause. Farid explains that visually discriminating the real from the fake has become increasingly difficult, and he summarizes his research on digital forensic techniques used to determine whether images and videos have been manipulated.

Farid celebrates the positive uses of generative AI, including helping researchers, democratizing content creation, and, in some cases, literally giving voice to those whose voice has been silenced by disability. But he warns against harmful uses of the technology, including non-consensual intimate imagery, child sexual abuse imagery, fraud, and disinformation. In addition, the existence of deepfake technology means that malicious actors can cast doubt on legitimate images simply by claiming the images were made with AI.

So, what is to be done? Farid highlights a range of interventions to mitigate such harms, including legal requirements to mark AI content with metadata and imperceptible watermarks, limits on what prompts services should allow, and systems to link user identities to created content. In addition, social media content moderators should ban harmful images and videos, and Farid calls for digital media literacy to become part of the standard educational curriculum. He summarizes the authentication techniques that experts can use to sort the real from the synthetic and explores the policy landscape around harmful content.

Finally, Farid asks researchers to stop and question whether their research output can be misused and, if so, whether to take steps to prevent misuse or even abandon the project altogether. Just because something can be created does not mean it must be created.
Surviving AI Slop Explosion
PNAS Nexus