Generative AI Speeds Up Cybersecurity Defenses

SEATTLE - Scientists are using generative AI to accelerate a key step in the defense against cyberattacks, performing complex operations in minutes instead of weeks.

The team led by Loc Truong at the Department of Energy's Pacific Northwest National Laboratory is using generative AI to reconstruct complex cyberattacks. These reconstructions are a crucial component of digital defense: Cybersecurity professionals need to understand exactly how an attack occurred to be sure they can stop it.

"To really protect against an attack, you need to replicate it," said Truong, a data scientist. "When an attack happens, usually a defender simply receives a text document explaining the attack, but someone needs to re-implement the entire attack. That can be a lengthy process and cost a lot of money. We hope to change that."

The work comes at a time when hackers and other bad actors have unfettered access to advanced generative AI tools, muddying the cyber landscape. PNNL cybersecurity researcher Kristopher Willis, who works on the project with Truong, noted that AI is part of the approach of some of the best hackers across industry, academia and government.

"At the most recent DEF CON, the largest hacker conference in the world, every team competing at the DEF CON Capture the Flag finals was using AI to assist with their attacks," said Willis, who was a participant in the finals.

Meanwhile, defenders like Truong and Willis are expanding the use of autonomous defense to stay ahead.

Illustration of the steps to analyze and protect against a cyberattack: a) the report is received, b) the attack is reconstructed much more quickly with generative AI using ALOHA, c) the attack is tested against defenses, and d) the defenses are updated if necessary. (Animation by Sara Levine | Pacific Northwest National Laboratory)

ALOHA and Claude

The PNNL team created an adaptive generative AI agent called ALOHA (Agentic LLMs for Offensive Heuristic Automation) using Claude, a popular large language model developed by Anthropic. The partnership allows laboratory researchers to benefit from an advanced LLM, and it allows Anthropic to have its technology subjected to rigorous testing to prevent misuse.

"PNNL's work using large language models to simulate attacks on critical infrastructure is crucial for understanding the national security implications of increasingly capable AI," said Marina Favaro, national security policy lead at Anthropic. "We're proud to have helped augment and accelerate the cyber defenders who need it most. This kind of collaboration helps us better understand the national security landscape and feeds directly into our safety processes and how we build Claude."

The PNNL technology works in concert with MITRE's open-source "Caldera" software, which helps defenders prepare for and defend against cyberattacks.

When an attack occurs, a human defender enters a text description of the attack into ALOHA and instructs the program to re-create the steps necessary to emulate that attack. The process of rebuilding the attack, known as adversary emulation, is key for defending the system.
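Conceptually, that translation from a plain-language report into an ordered list of emulation steps can be sketched as below. The step format, prompt, and function names here are illustrative assumptions, not ALOHA's or Caldera's actual interfaces:

```python
from dataclasses import dataclass


@dataclass
class EmulationStep:
    """One action in a reconstructed attack chain (illustrative format)."""
    tactic: str   # e.g. "initial-access", "lateral-movement"
    command: str  # command the agent would run in the sandboxed environment


def plan_from_report(report_text: str, llm) -> list[EmulationStep]:
    """Ask an LLM to decompose a plain-English attack report into steps.

    `llm` is any callable mapping a prompt string to a text response; the
    parsing below assumes the model answers one 'tactic: command' pair
    per line. Both are stand-ins, not real ALOHA components.
    """
    prompt = (
        "Decompose this attack report into ordered emulation steps, "
        "one per line, formatted as 'tactic: command'.\n\n" + report_text
    )
    steps = []
    for line in llm(prompt).strip().splitlines():
        tactic, _, command = line.partition(":")
        if command:
            steps.append(EmulationStep(tactic.strip(), command.strip()))
    return steps
```

In practice the generated steps would be handed to an execution framework such as Caldera rather than run directly.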

A complex attack chain might include 20 different tactics encompassing 100 different steps, all of which need to be reconstructed.

"You describe what you want, in plain English, and generative AI runs the attack automatically," Truong said. "The technology speeds up the defender's response so that the cybersecurity expert doesn't need to carry out quite as many operations themselves. It's click and go."

In one test, working from simple plain-language text guidance, ALOHA tackled a multi-step attack chain by generating 1 million tokens of output.

The real test comes next: ALOHA launches the rebuilt attack against the original target system in a siloed, offline environment to see whether newly installed protections stop it. Then comes a streamlined back-and-forth: ALOHA attacks, the target responds, defenders strengthen their protections, and so on.
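That back-and-forth can be sketched as a simple control loop. Everything here, including the step runner and the hardening hook, is a hypothetical stand-in for the real sandbox and defender tooling:

```python
def emulation_loop(steps, run_step, harden, max_rounds=5):
    """Replay the rebuilt attack until every step is blocked.

    `run_step(step)` returns True if the sandboxed target blocked the step;
    `harden(step)` updates defenses for a step that got through. Both are
    illustrative placeholders, not real ALOHA or Caldera calls.
    """
    for round_no in range(1, max_rounds + 1):
        # Replay the full chain and collect the steps that still succeed.
        breached = [s for s in steps if not run_step(s)]
        if not breached:
            return round_no  # every step blocked: defenses hold
        for step in breached:
            harden(step)  # defenders (or tooling) patch each gap
    return None  # defenses still incomplete after max_rounds
```

The loop terminates either when a full replay is blocked end to end or when the round budget is exhausted, at which point remaining gaps need human attention.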

Without technology like ALOHA, the crucial first step of re-creating a sophisticated attack is arduous and expensive. The process can take weeks, and cost tens of thousands of dollars, as programmers sift through hundreds of tools to reconstruct the attack and the overall attack environment.

Kris Willis is part of the team that developed ALOHA through PNNL's focus on Generative AI for Science, Energy, and Security. (Photo courtesy of Kris Willis | Pacific Northwest National Laboratory)

Programs like Caldera help automate the process, but many parameters still need to be entered manually, particularly if the attack was customized for a particular platform or system. As the reconstructed attack is run time and again, new errors usually require human attention and manual fixes. Once a company has made the sizeable financial investment necessary to reconstruct an attack, there's not much incentive to share the details with other organizations that could be targeted in a similar fashion.

In contrast, ALOHA automates almost the entire process, requiring only guidance from a human operator. And when ALOHA encounters an error, it automatically generates its own fix.
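One common pattern for this kind of self-repair is to feed the error message back to the model and retry; the sketch below assumes that pattern, and both `execute` and `llm` are hypothetical callables rather than ALOHA's real internals:

```python
def run_with_self_repair(command, execute, llm, max_attempts=3):
    """Run a step; on failure, ask the LLM to rewrite it from the error.

    `execute(cmd)` returns (ok, error_text); `llm` maps a prompt to a
    corrected command string. Both are illustrative stand-ins.
    """
    for _ in range(max_attempts):
        ok, error = execute(command)
        if ok:
            return command
        # Hand the failing command and its error back to the model.
        command = llm(
            f"The command `{command}` failed with:\n{error}\n"
            "Return a corrected command only."
        )
    raise RuntimeError("step could not be repaired automatically")
```

A retry budget matters here: without one, a model that keeps proposing broken fixes would loop forever.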

"There are many programs out there to detect attacks," said Willis. "ALOHA goes much further, adapting attacks to particular hardware, software, and environments, giving guidance on the best way to protect those systems-and then attacking again and again to identify gaps in security and to enhance response."

Defense in hours, not weeks

PNNL manages the Control Environment Laboratory Resource for the Cybersecurity and Infrastructure Security Agency. In a simulated test at CELR, the researchers used ALOHA to bolster the defenses of a water treatment plant. Defending against a complex attack involving more than 100 steps required just three hours using ALOHA; reconstructing such an attack would typically take weeks.

"One small action in a long chain might take a millisecond for ALOHA but several minutes for a person," said Truong. "The difference in speed becomes very pronounced for complex attacks."

"An important next step will be to have people test in different types of systems, to expose ALOHA to more and more uses cases," said Truong.

The research has been funded by PNNL through its Generative AI for Science, Energy, and Security Science and Technology investment. In addition to Truong and Willis, researchers Dalton Arford, Tim Doster, Phillip Huang, Henry Kvinge, Jeff Morrow and Tim Stavenger have contributed. PNNL is one of two national laboratory partners that manage CISA's CELR platforms.

See Anthropic's article about this research.
