Human-Machine Teaming Boosts Cross-Border Defense

The 2025 series of the Decision Advantage Sprint for Human-Machine Teaming marked a significant step forward in the integration of artificial intelligence and machine learning into battle management operations. Through a series of groundbreaking experiments, including the recent DASH 3 iteration, the U.S. Air Force, alongside its coalition partners, Canada and the United Kingdom, tested and refined AI's potential to enhance decision-making, improve operational efficiency, and strengthen interoperability in the face of growing global security challenges.

Held at the Shadow Operations Center-Nellis's unclassified location in downtown Las Vegas, DASH 3 set the stage for this collaboration, led by the Advanced Battle Management System Cross-Functional Team. The experiment was executed in partnership with the Air Force Research Laboratory's 711th Human Performance Wing, the U.S. Space Force, and the 805th Combat Training Squadron, also known as the ShOC-N, further solidifying the commitment to advancing battle management capabilities for the future.

AI Integration into Operational Decision-Making

In the third iteration of the DASH series, seven teams, six from industry and one from the ShOC-N innovation team, partnered with U.S., Canadian, and U.K. operators to test a range of decision advantage tools aimed at rapidly and effectively generating battle courses of action with multiple paths. The goal of a Battle COA is to map sequences of actions that align with the commander's intent while overcoming the complexities of modern warfare, including the fog and friction of battle. Examples of Battle COAs include recommended solutions for long-range kill chains, electromagnetic battle management problems, space and cyber challenges, or agile combat employment moves such as re-basing aircraft.

U.S. Air Force Col. John Ohlund, the ABMS Cross-Functional Team lead overseeing capability development, explained the importance of flexibility in COA generation: "For example, a bomber may be able to attack from multiple avenues of approach, each presenting unique risks and requiring different supporting assets such as cyber, ISR [intelligence, surveillance, and reconnaissance], refueling, and air defense suppression. Machines can generate multiple paths, supporting assets, compounding uncertainties, timing, and more. Machines provide a rich solution space where many COAs are explored, but only some are executed, ensuring options remain open as the situation develops."

This ability to explore multiple COAs simultaneously allows for faster adaptation to unforeseen challenges and provides operators with diverse strategies to act upon as the situation unfolds. AI's integration into this process aims to not only speed up the decision-making cycle but also increase the quality of the solutions generated.
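To make the branching idea concrete, the short Python sketch below is purely illustrative: the approach names, support packages, risk and fuel figures, and viability thresholds are hypothetical and are not drawn from the DASH 3 tooling. It enumerates candidate COAs across avenues of approach, supporting assets, and timings, then filters and ranks them so that several viable options stay open for the commander.

```python
from dataclasses import dataclass
from itertools import product
from typing import List

# Purely illustrative sketch: option names, risk/fuel models, and thresholds
# are hypothetical and not taken from any DASH 3 system.

@dataclass
class CandidateCOA:
    approach: str            # avenue of approach
    support: str             # supporting asset package (e.g., ISR, tankers, SEAD)
    time_on_target_min: int  # planned timing, in minutes
    risk: float              # assessed risk, 0 (low) to 1 (high)
    fuel_margin: float       # fraction of reserve fuel remaining

def enumerate_coas(approaches: List[str],
                   support_packages: List[str],
                   timings: List[int]) -> List[CandidateCOA]:
    """Enumerate the branch combinations a planner might explore."""
    coas = []
    for approach, support, tot in product(approaches, support_packages, timings):
        # Placeholder scoring; a real planner would draw on mission data.
        risk = 0.2 + 0.1 * timings.index(tot)
        fuel = 0.5 - 0.05 * approaches.index(approach)
        coas.append(CandidateCOA(approach, support, tot, risk, fuel))
    return coas

def rank_viable_coas(coas: List[CandidateCOA]) -> List[CandidateCOA]:
    """Filter to viable branches and rank them, keeping several options open."""
    viable = [c for c in coas if c.risk < 0.6 and c.fuel_margin > 0.2]
    return sorted(viable, key=lambda c: (c.risk, -c.fuel_margin))

if __name__ == "__main__":
    candidates = enumerate_coas(
        approaches=["north corridor", "south corridor"],
        support_packages=["ISR + tanker", "ISR + SEAD"],
        timings=[30, 45, 60],
    )
    for coa in rank_viable_coas(candidates)[:3]:
        print(coa)
```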

AI Speeds Decision Advantage

The speed at which AI systems can generate actionable recommendations is proving to be a game-changer in the decision-making process. Transitioning from the manual creation of COAs, which took many minutes, to producing viable options in under a minute was identified as a radical advantage in combat scenarios. Initial results from the DASH 3 experiment show the power of AI in enabling faster, more efficient decision-making.

"AI systems demonstrated the ability to generate multi-domain COAs considering risk, fuel, time constraints, force packaging, and geospatial routing in under one minute," said Ohlund. "These machine-generated recommendations were up to 90% faster than traditional methods, with the best in machine-class solutions showing 97% viability and tactical validity."

For comparison, human operators typically took around 19 minutes to generate courses of action, and only 48% of those options were judged viable and tactically valid. "This dramatic reduction in time and improvement in the quality of solutions underscores AI's potential to significantly enhance the speed and accuracy of the decision-making process, while still allowing humans to make the final decisions on the battlefield," Ohlund added.

The ability to quickly generate multiple viable COAs not only improves the speed of decision-making but also gives commanders more options to work with in a compressed time frame, making AI an essential tool for maintaining a strategic advantage in fast-paced combat situations.

Building Trust in AI: From Skepticism to Confidence

Skepticism surrounding the integration of AI into operational decision-making was common at the start of the DASH 3 experiment. However, participating operators saw a notable shift in their perspectives as the experiment progressed. U.S. Air Force 1st Lt. Ashley Nguyen, a 964th Airborne Air Control Squadron DASH 3 participant, expressed initial doubt about the role AI could play in such a complex process. "I was skeptical about technology being integrated into decision-making, given how difficult and nuanced battle COA building can be," said Nguyen. "But working with the tools, I saw how user-friendly and timesaving they could be. The AI didn't replace us; it gave us a solid starting point to build from."

As the experiment unfolded, trust in AI steadily increased. Operators, gaining more hands-on experience, began to see the value in the AI's ability to generate viable solutions at an unprecedented speed. "Some of the AI-generated outputs were about 80% solutions," said Nguyen. "They weren't perfect, but they were a good foundation. This increased my trust in the system; AI became a helpful tool in generating a starting point for decision-making."

Trust and Collaboration Across Nations

The collaboration between the U.S. and its coalition partners was highlighted throughout the 2025 DASH series. The inclusion of operators from the U.K. and Canada brought invaluable perspectives, ensuring that the decision support tools tested could address a broad range of operational requirements.

"We understand that the next conflict cannot be won alone without the help of machine teammates and supported by our allies," said Royal Canadian Air Force Capt. Dennis Williams, RCAF DASH 3 participant. "DASH 3 demonstrated the value of these partnerships as we worked together in a coalition-led, simulated combat scenario. The tools we tested are vital for maintaining a decision advantage, and we look forward to expanding this collaboration in future DASH events."

This integration of human-machine teaming and coalition participation highlighted the potential for improving multinational interoperability in the command-and-control battlespace. "The involvement of our coalition partners was crucial, not just for the success of DASH 3 but also for reinforcing the alliances that underpin global security. DASH experimentation intentionally has a low barrier to entry from a security classification standpoint, enabling broad participation from allies and coalition partners alike," said U.S. Air Force Lt. Col. Shawn Finney, commander of the 805th Combat Training Squadron/ShOC-N.

Addressing Challenges: Weather and AI Hallucinations

The DASH 3 experiment was not just a test of new AI tools, but a continuation of a concerted effort to tackle persistent challenges, including the integration of weather data and the potential for AI "hallucinations." These issues have been focus areas throughout the DASH series, with each iteration bringing new insights and refinements to ensure AI systems are operationally effective.

Weather-related challenges are a critical factor in real-world operations, but due to simulation limitations, they were not fully integrated into the DASH series. Instead, weather effects were manually simulated by human operators through "white carding," a method that injected scenario-based weather events, such as airfield closures or delays, into the experiment.

"We didn't overlook the role of weather," explained Ohlund. "While it wasn't a primary focus of this experiment, we fully understand its operational impact and are committed to integrating weather data into future decision-making models."

The risk of AI hallucinations, instances where AI produces incorrect or irrelevant outputs, particularly when using large language models, was another challenge tackled during the DASH 3 experiment. Aware of this potential issue, the development teams took proactive steps to design AI tools that minimized the risk of hallucinations, and organizers diligently monitored outputs throughout the experiment.

"Our team didn't observe hallucinations during the experiment, underscoring the effectiveness of the AI systems employed during the experiment," said Ohlund. "While this is a positive outcome, we remain vigilant about the potential risks, particularly when utilizing LLMs that may not be trained on military-specific jargon and acronyms. We are actively refining our systems to mitigate these risks and ensure AI outputs are reliable and relevant."

Looking Ahead: Building Trust in AI for Future Operations

As the U.S. Air Force moves forward with the 2026 series of DASH experiments, the lessons learned from the 2025 iterations will serve as a crucial foundation for future efforts. The growing trust in human-machine collaboration, the strengthening of international partnerships, and the continuous refinement of AI tools all point to a future where AI plays an integral role in operational decision-making.

"The 2025 DASH series has established a strong foundation for future experiments, with the potential to further expand AI's role in battle management," said Ohlund. "By continuing to build trust with operators, improve AI systems, and foster international cooperation, the U.S. and its allies are taking critical steps toward ensuring they are prepared to address the evolving challenges of modern warfare."

"This is just the beginning," said Williams. "The more we can integrate AI into the decision-making process, the more time we can free up to focus on the human aspects of warfare. These tools are key to staying ahead of our adversaries and maintaining peace and stability on a global scale."
