
Using recordings of real people, today's AI-based tools can generate deceptively realistic images, videos, and audio, often with serious consequences, especially for schoolchildren and young adults. In the DEEP-PRISMA project, experts on technology assessment at the Karlsruhe Institute of Technology (KIT) are investigating how people deal with abusive fakes and are working with those affected to develop strategies against deepfake abuse. The project, which is funded by Germany's Federal Ministry of Research, Technology and Space, is also reviewing the current legal situation.
The number of deepfakes shared online is growing rapidly. AI-based tools can now create media content that even experts can hardly distinguish from genuine video and audio. Such fakes are often used for fraudulent purposes, e.g. to gain access to passwords, bank accounts, or trade secrets. "But the majority consists of sexualized or pornographic content. Studies indicate that about 98 percent of deepfakes are pornographic, and 99 percent of those portray female individuals," said Dr. Jutta Jahnel from KIT's Institute for Technology Assessment and Systems Analysis (ITAS). Jahnel has been studying the impact of digital image manipulation for many years. She noted, however, that exact figures are difficult to determine due to the high number of unreported cases and rapid technological developments.