In a Policy Forum, Daniel Schroeder and colleagues discuss the risks of malicious "artificial intelligence (AI) swarms," which enable a new class of large-scale, coordinated disinformation campaigns that threaten democracy. Manipulation of public opinion has long relied on rhetoric and propaganda, but modern AI systems have created powerful new tools for shaping human beliefs and behavior on a societal scale. Large language models (LLMs) and autonomous agents can now generate vast amounts of persuasive, human-like content. When combined into collaborative AI swarms (collections of AI-driven personas that retain memory and identity), these systems can mimic social dynamics and easily infiltrate online communities, making false narratives appear credible and widely shared. According to the authors, unlike earlier labor-intensive influence operations run by humans, AI systems can operate cheaply, consistently, and at tremendous scale, transforming once-isolated disinformation efforts into persistent, adaptive campaigns that pose serious risks to democratic processes worldwide.

Here, Schroeder et al. discuss the technology underpinning these malicious systems and identify pathways through which they can harm democratic discourse via widely used digital platforms. The authors argue that defense against these systems must be layered and pragmatic, aiming not for total prevention of their use, which is highly unlikely, but for raising the cost, risk, and visibility of manipulation. Because such efforts would require global coordination independent of corporate and governmental interests, Schroeder et al. propose a distributed "AI Influence Observatory," a network of academic groups, nongovernmental organizations, and other civil institutions to guide independent oversight and action.

"Success depends on fostering collaborative action without hindering scientific research while ensuring that the public sphere remains both resilient and accountable," write the authors. "By committing now to rigorous measurement, proportionate safeguards, and shared oversight, upcoming elections could even become a proving ground for, rather than a setback to, democratic AI governance."
AI Swarms Emerge as New Threat to Democracy
American Association for the Advancement of Science (AAAS)