AI algorithms and psychological vulnerabilities can interact in ways that increase the risk of violent extremism, according to a new theoretical model developed by an international team of researchers.

How are ordinary people drawn into extremist circles - and what role can artificial intelligence play in that process?
This question is addressed by a new study which, for the first time, combines psychological theories of radicalisation with knowledge of modern AI technologies such as recommendation algorithms, generative AI and botnets.
'We have developed a comprehensive model that shows how digital systems can exploit - or amplify - people's social and psychological needs in ways we do not yet fully understand,' explains Milan Obaidi, associate professor at the Department of Psychology at the University of Copenhagen.
Anger grows step by step
Radicalisation rarely begins as a sudden upheaval. Instead, individuals move gradually through a process in which digital technologies and psychological vulnerabilities can influence one another.
The study divides this process into four key phases:
- Exposure - algorithms present users with polarising or extreme content, often without the user actively seeking it out.
- Reinforcement - repeated exposure and algorithmic personalisation create echo chambers that entrench the emerging attitudes.
- Group integration - online communities, and even AI-generated 'peers', can forge strong identity bonds resembling group membership.
- Violent acts - in rare cases, the process culminates in violent extremism.
According to the researchers, AI systems can be seen as a kind of accelerator: they can identify psychologically vulnerable individuals, tailor content and create synthetic communities that resemble human interactions.
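The feedback loop described here - personalised ranking nudging attitudes, and attitudes in turn steering the ranking - can be sketched as a toy simulation. Everything below (the engagement model, the numbers, the function names) is a hypothetical illustration, not the study's actual framework:

```python
def recommend(leaning, catalog):
    """Toy engagement-optimised ranking (illustrative only).

    Assumes predicted engagement peaks for content slightly more
    extreme than the user's current leaning - a stylised stand-in
    for 'stickier' content winning the ranking.
    """
    target = leaning + 0.1  # content just beyond the user's comfort zone
    return min(catalog, key=lambda item: abs(item - target))


def simulate(steps=60, learning_rate=0.2):
    """Repeated-exposure loop: the user's attitude drifts toward the
    content served, and the ranking in turn tracks the attitude."""
    catalog = [round(i / 10, 1) for i in range(11)]  # extremity 0.0 .. 1.0
    leaning = 0.1  # the user starts near the mainstream
    history = [leaning]
    for _ in range(steps):
        consumed = recommend(leaning, catalog)
        # the attitude shifts a fraction of the way toward the consumed item
        leaning += learning_rate * (consumed - leaning)
        history.append(leaning)
    return history


history = simulate()
print(f"leaning drifted from {history[0]:.2f} to {history[-1]:.2f}")
```

Under these toy assumptions, neither the user nor the recommender ever "chooses" extremism; the drift emerges from a ranking rule that merely maximises predicted engagement, which mirrors the article's point that the exploitation of vulnerabilities can happen without ideological intent.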
'We are seeing an environment where users are not only exposed to extreme content, but also have it reflected back to them by algorithms in ways that can amplify their sense of meaning, anger or injustice,' says Milan Obaidi, adding:
'It is the combination of the technology's scalability and people's psychological needs that makes this development particularly worrying.'
Generative AI introduces entirely new risks
Whereas recommendation algorithms primarily control what content the user sees, generative models such as large language models add a new layer: they can produce the radicalising content itself.
AI can:
- Produce vast amounts of personalised propaganda.
- Simulate communities via swarms of bots.
- Act as 'AI companions' that reinforce users' extreme beliefs.
- Create highly convincing deepfakes and manipulated material.
'This development may make it harder to distinguish between human and non-human influences - and thus amplify radicalisation processes that were previously limited by human labour,' notes Milan Obaidi.
Psychological vulnerability plays a crucial role
The study emphasises that not all users are equally vulnerable. AI particularly affects people who are already experiencing social isolation, identity insecurity, injustice or marginalisation - or a need for clarity, order and strong group affiliations.
The researchers behind the study:
- Jonas R. Kunst, University of Oslo
- Milan Obaidi, University of Copenhagen
- Anton Gollwitzer, BI Norwegian Business School and Max Planck Institute
- Petter B. Brandtzæg, University of Oslo
- Yannic Hinrichs, University of Oslo
- Neha Saini, University of Oslo
- Daniel T. Schroeder, SINTEF Digital
Precisely because AI systems are designed to maximise engagement, they may inadvertently exploit these very vulnerabilities - without any ideological intent.
'It is important to emphasise that AI does not create radicalisation out of the blue. But the technology can amplify known psychological mechanisms and make it easier for extreme ideas to gain a foothold among those who are already at risk,' says Milan Obaidi.
The study 'Intelligent Systems, Vulnerable Minds: A Framework for Radicalisation to Violence in the Age of AI' has been published in the journal Personality and Social Psychology Review.