
The Five DoD Programs Where AI Risk Is Highest in 2026

By Basel Ismail · May 12, 2026


The Department of Defense is pushing AI into everything, and the compliance surface area is expanding faster than most prime contractors can map it. DoD's FY2025 budget requested $1.8 billion specifically for AI and machine learning initiatives, up from roughly $1.1 billion in FY2023. That money is flowing into programs where the intersection of autonomous decision-making, classified data handling, and CMMC 2.0 requirements creates real, concentrated risk. Here is where the exposure is highest heading into 2026, and what the primes are actually doing about it.

1. Joint All-Domain Command and Control (JADC2)

JADC2 is the connective tissue program. It links sensors, shooters, and decision-makers across every service branch using AI-driven data fusion and automated recommendations. The risk here is layered. You have AI models ingesting data from multiple classification levels, operating across networks with different authorization boundaries, and producing outputs that directly inform kinetic decisions.

The compliance challenge is that JADC2 touches NIST SP 800-171 and CMMC Level 2 requirements for every subcontractor feeding data into the system, while simultaneously requiring adherence to DoD Directive 3000.09 (the autonomous weapons directive, updated in January 2023). Any AI tool making targeting recommendations has to maintain a human-in-the-loop, and proving that in an audit when the system is processing thousands of data points per second is not trivial. Northrop Grumman and L3Harris have both publicly discussed standing up dedicated AI assurance teams specifically for JADC2 subcontracts.

2. Autonomous Logistics and Sustainment Programs

Less glamorous than autonomous weapons, but arguably where the most AI tools are actually deployed right now. Programs like the Army's Predictive Logistics initiative and the Air Force's Condition-Based Maintenance Plus (CBM+) use machine learning to predict equipment failures, optimize supply chains, and route maintenance resources. The Defense Logistics Agency processed over $42 billion in orders in FY2024, and AI is increasingly embedded in those workflows.

The risk profile here is about data integrity and supply chain security. These AI systems train on maintenance records, parts inventories, and operational readiness data, much of which is Controlled Unclassified Information (CUI) subject to DFARS 252.204-7012. A compromised training dataset does not just produce bad predictions; it can degrade readiness across an entire fleet. The GAO flagged this exact concern in GAO-24-105980 (March 2024), noting that DoD lacks consistent standards for validating AI training data in logistics applications. Primes like Raytheon (RTX) are responding by implementing data provenance tracking through their supply chains, essentially creating audit trails for every dataset that touches a predictive model.
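Dataset provenance tracking of the kind described above can be approximated with content hashing: fingerprint every dataset when it enters the pipeline, then re-verify the fingerprint before any model trains on it. This is a minimal sketch under simple assumptions (`ProvenanceRecord` and the function names are invented for illustration), not RTX's actual system:

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    """Binds a dataset's content hash to its source and chain of custody."""
    dataset_name: str
    sha256: str
    source: str                      # e.g. the upstream supplier or system of record
    recorded_at: str
    handlers: list[str] = field(default_factory=list)


def fingerprint(data: bytes, name: str, source: str) -> ProvenanceRecord:
    """Hash the raw dataset bytes so any later modification is detectable."""
    return ProvenanceRecord(
        dataset_name=name,
        sha256=hashlib.sha256(data).hexdigest(),
        source=source,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )


def verify(record: ProvenanceRecord, data: bytes) -> bool:
    """Re-hash at training time and compare against the recorded fingerprint."""
    return hashlib.sha256(data).hexdigest() == record.sha256
```

A hash mismatch does not tell you the data is malicious, only that it changed after intake, which is precisely the gap GAO-24-105980 identified: without a recorded baseline, there is nothing to validate against.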

3. Project Maven and Intelligence Fusion Programs

Project Maven started as the Algorithmic Warfare Cross-Functional Team in 2017 and has evolved considerably. Now housed under the Chief Digital and Artificial Intelligence Office (CDAO), Maven and its successor programs use computer vision and natural language processing to analyze intelligence data at scale. The National Defense Authorization Act for FY2024 (Section 1521) specifically required the CDAO to establish AI testing and evaluation frameworks for intelligence applications by October 2024.

The risk concentration here is about bias, accuracy, and oversight. An AI model that misidentifies a civilian structure as a military target in imagery analysis creates obvious catastrophic risk, but even lower-stakes errors in signals intelligence processing can cascade. The compliance burden falls heavily on the software vendors and integrators building these models. They need to demonstrate compliance with the DoD AI Ethical Principles (adopted February 2020), the Responsible AI Strategy and Implementation Pathway (published June 2022), and the specific testing requirements in DoDI 5000.97 (issued December 2023). Palantir and Scale AI have both invested heavily in documentation and audit infrastructure to meet these requirements, but mid-tier contractors are struggling.

4. Cyber Operations and Defensive AI

U.S. Cyber Command is deploying AI for both threat detection and automated response. Programs under the Persistent Cyber Training Environment (PCTE) and the Joint Cyber Warfighting Architecture (JCWA) increasingly rely on machine learning models that can identify and respond to intrusions faster than human analysts. The FY2025 budget allocated approximately $14.5 billion to cyberspace activities across DoD.

The compliance risk is recursive and somewhat ironic: the AI tools meant to enforce cybersecurity become attack surfaces themselves. A compromised defensive AI model could be manipulated to ignore specific threat signatures or to generate false positives that overwhelm analysts. These systems must comply with CMMC Level 3 requirements (the expert level, still being finalized as of early 2025) and with the joint guidance on securing AI systems that the NSA and CISA published in April 2024. The primes working in this space, particularly Booz Allen Hamilton and General Dynamics IT, are implementing continuous monitoring of model behavior, essentially watching the watchers. But the standards for what constitutes "anomalous model behavior" are still being developed, which means contractors are building compliance programs against a moving target.
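A minimal version of "watching the watchers" is a recorded baseline on some observable model behavior, such as the alert rate per monitoring window, with a statistical drift check against it. The sketch below is a simplified illustration (real deployments track many more signals than one rate), and the class name is an assumption:

```python
import statistics


class ModelBehaviorMonitor:
    """Flags monitoring windows where a defensive model's alert rate drifts
    beyond a configurable number of standard deviations from its baseline."""

    def __init__(self, baseline_rates: list[float], threshold_sigmas: float = 3.0):
        # Baseline is captured during a known-good observation period.
        self.mean = statistics.mean(baseline_rates)
        self.stdev = statistics.stdev(baseline_rates)
        self.threshold = threshold_sigmas

    def is_anomalous(self, observed_rate: float) -> bool:
        """True when the observed rate sits outside the baseline envelope."""
        if self.stdev == 0:
            return observed_rate != self.mean
        return abs(observed_rate - self.mean) / self.stdev > self.threshold
```

Note that this catches both failure modes in the paragraph above: a model silenced into ignoring threats drifts low, and one flooded into false positives drifts high.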

5. Autonomous and Semi-Autonomous Weapons Platforms

Programs like the Collaborative Combat Aircraft (CCA), the Navy's Ghost Fleet Overlord, and various autonomous ground vehicle initiatives are where public attention focuses, and where the regulatory framework is most prescriptive. DoD Directive 3000.09 requires senior official review for any autonomous weapon system, and the updated version tightened requirements around AI verification and validation.

The FY2025 budget included $5.8 billion for the CCA program alone. Every AI component in these platforms must go through the Test and Evaluation framework outlined in DoDI 5000.89, which was updated in November 2023 to specifically address AI-enabled systems. The challenge for contractors is that testing an autonomous system in a lab environment does not reliably predict behavior in contested, degraded, or operationally limited (CDO) environments. Lockheed Martin and Boeing are both building digital twin environments to simulate edge cases, but the DoD's Director of Operational Test and Evaluation (DOT&E) has repeatedly noted gaps between simulation performance and field performance. The January 2025 DOT&E annual report flagged AI reliability in autonomous platforms as a top concern for the third consecutive year.

What Primes Are Actually Doing

Across all five program areas, a few patterns are emerging. First, the large primes are creating centralized AI governance functions that sit between their engineering teams and their compliance organizations. These are not just policy shops; they have technical staff who can evaluate model architectures against regulatory requirements. Second, there is a growing investment in automated compliance documentation. When you are running hundreds of AI models across dozens of contracts, manual compliance tracking breaks down. Third, primes are pushing compliance requirements down to subcontractors earlier in the procurement cycle, often requiring evidence of CMMC Level 2 certification and AI-specific risk assessments before awarding subcontracts.
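The automated compliance documentation pattern can be sketched as a registry that maps each deployed model to the controls it must evidence, with gaps computed mechanically rather than tracked by hand. Everything below is a hypothetical illustration (the control identifiers are real regulation names, but the data shapes and function are invented):

```python
# Each deployed model declares its required controls and the evidence on file;
# a gap report is then a pure function of the registry, runnable on every change.

def compliance_gaps(model_registry: dict[str, dict]) -> dict[str, list[str]]:
    """Return, per model, the required controls with no recorded evidence."""
    gaps: dict[str, list[str]] = {}
    for model_id, info in model_registry.items():
        missing = [
            control
            for control in info["required_controls"]
            if control not in info.get("evidence", {})
        ]
        if missing:
            gaps[model_id] = missing
    return gaps
```

At the scale of hundreds of models across dozens of contracts, the value is not the lookup itself but that the gap report is regenerated automatically whenever a model, contract, or control mapping changes.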

The mid-tier and small contractors feeding into these programs are where the real vulnerability sits. Many lack the resources to build dedicated AI governance teams, and the regulatory requirements are complex enough that general-purpose compliance frameworks do not cover the AI-specific elements.

How FirmAdapt Addresses This

FirmAdapt's architecture was built around the assumption that AI tools in regulated environments need continuous compliance monitoring, not periodic audits. For defense contractors navigating the intersection of CMMC 2.0, DoDI 5000.97, and the DoD AI Ethical Principles, FirmAdapt provides a unified compliance layer that maps AI tool usage to specific regulatory requirements in real time. This includes automated tracking of data provenance, model behavior baselines, and access controls that align with CUI handling requirements under DFARS 252.204-7012.

For mid-tier contractors who cannot staff a dedicated AI governance function, FirmAdapt serves as the compliance infrastructure that primes are increasingly requiring as a condition of subcontract awards. The platform maintains current mappings to DoD-specific AI regulations as they evolve, which matters considerably when frameworks like CMMC Level 3 and the CDAO's testing standards are still being finalized.
