Research: AI Struggles to Penetrate Cybercriminal Circles

University of Edinburgh

Cybercriminals have been struggling to adopt AI in their work, according to a first-of-its-kind study that analysed a dataset of 100 million posts from underground cybercrime communities.

In reality, most cybercriminals – often referred to as hackers – lack the skills or resources to support real innovation within their criminal activities, experts say.

The study found that AI was used most successfully for hiding patterns that cybersecurity defenders can often detect, and for running social media bots that conduct misogynistic harassment and make money from fraud.

The team of researchers from the Universities of Edinburgh, Cambridge and Strathclyde analysed discussions from the CrimeBB database, which contains over 100 million posts scraped from underground and dark web cybercrime forums.

These conversations were analysed using a combination of machine learning tools and manual sampling techniques, searching for posts that discussed how cybercrime actors were experimenting with AI technologies from November 2022 onwards, the month ChatGPT was released.

Through their analysis, the researchers found that AI coding assistants are mostly proving useful to already skilled actors rather than lowering the skill barrier to committing cybercrime, as the tools still require significant skill and knowledge to use effectively.

They also found some evidence of the use of AI tools in more advanced forms of automation, especially in social engineering and bot farming.

Because most cybercrime is already heavily industrialised, deskilled, and reliant on automated tools and pre-made assets, this represents an evolution rather than a revolution in criminal practices, experts say.

In a reassuring finding, guardrails on the major chatbots are significantly reducing harm. But the researchers say there is still cause for concern, having observed early evidence that these communities are having some success in manipulating the outputs of mainstream chatbots.

Interestingly, many people in these cybercrime communities were also seen panicking about losing their 'day jobs' in IT as a result of AI disruption in the mainstream software industry, a shift that could drive them and others towards more cybercriminal activity.

Contrary to reports from the cybersecurity industry to date, the authors warn that the most pressing risks are likely to come from the adoption of poorly secured agentic AI systems – a form of AI that can act autonomously, making decisions and carrying out actions on specific tasks.

There are also risks from legitimate industry shipping insecure 'vibecoded' products – software whose code has been written using AI – rather than from cybercriminals adopting AI tools.

The findings have been peer reviewed and will be presented at the Workshop on the Economics of Information Security in Berkeley, USA, in June 2026.

Dr Ben Collier, Senior Lecturer in Digital Methods at the University of Edinburgh's School of Social and Political Science, said: "Cybercriminals are experimenting with these tools, but as far as we can tell it's not delivering them real benefits in their own work. Our message to industry is: don't panic yet. The immediate danger comes from companies and members of the public adopting poorly secured AI systems themselves, opening them up to catastrophic new attacks that can be performed by cybercriminals with little effort or skill."
