
ITAR and the Cloud LLM Trap That Could Land a Defense Contractor in Federal Court

By Basel Ismail | May 9, 2026


A defense contractor's engineer pastes a technical specification into ChatGPT to help draft a proposal section. The spec describes performance characteristics of a radar subsystem. The entire interaction takes maybe 90 seconds. And it potentially constitutes an unauthorized export of controlled technical data under the International Traffic in Arms Regulations, carrying criminal penalties of up to 20 years imprisonment and $1 million per violation under 22 U.S.C. § 2778.

This is not a hypothetical edge case. It is the logical consequence of how ITAR defines exports, how cloud LLMs process data, and how little visibility most defense contractors have into what their employees are actually doing with generative AI tools.

The U.S. Person Requirement and Why It Matters Here

ITAR, administered by the State Department's Directorate of Defense Trade Controls (DDTC), restricts the export and re-export of defense articles, defense services, and technical data listed on the United States Munitions List (USML). Under 22 CFR § 120.33, "technical data" includes information required for the design, development, production, manufacture, assembly, operation, repair, testing, maintenance, or modification of defense articles.

Access to ITAR-controlled technical data is restricted to U.S. persons, defined under 22 CFR § 120.62 as U.S. citizens, lawful permanent residents, or protected persons. Any disclosure to a foreign person, whether inside or outside the United States, constitutes an export or a "deemed export" and requires prior authorization from DDTC, typically through a Technical Assistance Agreement (TAA) or a Manufacturing License Agreement (MLA).

Here is where cloud LLMs create a problem that most IT security frameworks were never designed to catch.

The Deemed Export Problem in Cloud AI

When you send a prompt to a cloud-hosted LLM, your input data travels to a data center, gets processed by infrastructure you do not control, and may be accessible to personnel you cannot vet. OpenAI, Anthropic, Google, and other major LLM providers operate global infrastructure. Their employees and subcontractors include foreign nationals. Their data processing pipelines may route through servers in multiple jurisdictions.

Under ITAR's deemed export rule (22 CFR § 120.50(a)(3)), releasing or otherwise transferring technical data to a foreign person inside the United States is a deemed export to the country of that person's nationality. The regulation does not require intent. It does not require that the foreign person actually read or understood the data. Making it accessible is sufficient.

So when ITAR-controlled technical data enters a cloud LLM's processing pipeline, the contractor has effectively lost the ability to certify that no foreign person can access it. The data may be logged, cached, used in training pipelines, or reviewed by trust and safety teams. Any of those touchpoints could involve a foreign national. Each instance is a potential ITAR violation.

The DDTC has been clear on this general principle for years. Its 2020 rule on cloud computing confirmed that storing ITAR data in the cloud requires the same export controls as any other form of transfer, with a narrow carve-out (22 CFR § 120.54) for data secured with end-to-end encryption. AWS GovCloud and Microsoft Azure Government exist specifically because standard commercial cloud environments cannot satisfy ITAR requirements. The logic extends directly to cloud AI services, which are even harder to control than standard cloud storage: the data must be decrypted and actively processed, not just stored, so the end-to-end encryption carve-out cannot apply.

Criminal Exposure Is Real and Recent

ITAR violations are not theoretical enforcement risks. The Department of Justice and DDTC pursue them actively.

  • In 2023, Raytheon agreed to pay over $200 million in combined penalties to settle ITAR and export control violations, including unauthorized exports of technical data related to defense systems.
  • In 2022, Honeywell paid $13 million to settle DDTC charges related to unauthorized exports of ITAR-controlled drawings and documents to foreign nationals in multiple countries.
  • In 2020, FLIR Systems (now part of Teledyne) paid $30 million for unauthorized exports of ITAR-controlled technical data, including dual-use technology specifications shared with foreign national employees.

These cases involved traditional export control failures: emailing files, sharing access with foreign national employees, or shipping hardware without licenses. Cloud LLM usage introduces a new vector that is harder to detect and potentially more widespread. A single engineer experimenting with a productivity tool can generate dozens of violations in an afternoon.

Under 22 U.S.C. § 2778(c), willful violations carry criminal penalties of up to $1 million per violation and 20 years imprisonment. Civil penalties under 22 CFR § 127.10 can reach $1,298,851 per violation (adjusted for inflation as of 2024). And DDTC has the authority to debar violators from future defense contracts, which for many companies is an existential threat.

Why Traditional Controls Miss This

Most defense contractors have Data Loss Prevention (DLP) tools, classified network segmentation, and ITAR-specific handling procedures. These controls were designed for email, file sharing, USB drives, and network boundaries. They were not designed for a world where employees can paste controlled data into a browser-based chat interface that sends it to an API endpoint outside the organization's security perimeter.

Several compounding factors make this particularly difficult to manage:

  • No file transfer occurs. DLP tools that monitor file attachments and downloads often miss copy-paste actions into web forms.
  • The data is transformed. An engineer might not paste a complete document but instead include fragments of technical data in a prompt, making keyword-based detection unreliable.
  • Shadow AI is pervasive. A 2024 Cisco survey found that 60% of employees at large enterprises admitted to using unapproved AI tools for work tasks. In defense environments, even a small percentage represents significant risk.
  • The compliance gap is invisible. Unlike an email to a foreign recipient, which leaves a clear audit trail, an LLM prompt may leave no trace in the organization's systems.
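To make the keyword-detection gap concrete, here is a minimal sketch, with entirely hypothetical patterns and prompt text, of how a naive DLP-style check catches a verbatim phrase but misses the same technical content once it is rephrased in a prompt:

```python
import re

# Hypothetical DLP-style patterns for controlled phrases (illustration only).
CONTROLLED_PATTERNS = [
    re.compile(r"radar\s+subsystem\s+performance", re.IGNORECASE),
    re.compile(r"USML\s+Category\s+XI", re.IGNORECASE),
]

def naive_dlp_flag(text: str) -> bool:
    """Return True if any controlled pattern appears verbatim in the text."""
    return any(p.search(text) for p in CONTROLLED_PATTERNS)

# A verbatim paste is caught...
verbatim = "Summarize this radar subsystem performance table for the proposal."
# ...but the same information, lightly rephrased, slips through.
rephrased = "Summarize the detection-range figures for our X-band sensor."

print(naive_dlp_flag(verbatim))   # True
print(naive_dlp_flag(rephrased))  # False
```

The second prompt conveys the same controlled information but shares no keywords with the pattern list, which is why fragment-level paraphrasing defeats this class of control.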

NIST SP 800-171, which governs CUI handling for defense contractors, requires access controls and audit logging. CMMC 2.0, now being phased in through 32 CFR Part 170 (finalized in October 2024), adds third-party assessment requirements. Neither framework explicitly addresses generative AI data flows, but both impose obligations that cloud LLM usage can silently violate.

What a Compliant Approach Actually Requires

If you are a defense contractor and your people need LLM capabilities, the path forward requires keeping ITAR-controlled data within an environment where you can guarantee U.S. person access only, full audit logging, and no data egress to uncontrolled systems. Concretely, that means:

  • AI models deployed on infrastructure that meets ITAR requirements (FedRAMP High or equivalent, with U.S. person administration).
  • No training on or retention of user inputs.
  • Complete audit trails showing what data was processed, by whom, and when.
  • Access controls that enforce U.S. person verification at the application layer, not just the network layer.
  • Data classification awareness so the system can flag or block prompts that appear to contain USML-controlled content.
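The last three requirements above can be combined into a single application-layer gate. The sketch below is a simplified illustration, not any vendor's implementation; the user registry, flag patterns, and log format are all hypothetical stand-ins for real export-control records and a tamper-evident audit store:

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical registry of verified U.S. persons
# (in practice, backed by HR and export-control records).
US_PERSONS = {"alice@contractor.example", "bob@contractor.example"}

# Hypothetical indicators of potentially USML-controlled content.
FLAG_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bITAR\b", r"\bUSML\b", r"radar", r"guidance\s+system",
)]

def screen_prompt(user: str, prompt: str) -> dict:
    """Enforce U.S.-person access, flag suspect content, emit an audit record."""
    if user not in US_PERSONS:
        decision = "deny"   # block before any model processing occurs
    elif any(p.search(prompt) for p in FLAG_PATTERNS):
        decision = "flag"   # hold for compliance review instead of processing
    else:
        decision = "allow"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "decision": decision,
        "prompt_chars": len(prompt),  # log size, not content, to limit spillage
    }
    print(json.dumps(record))  # in practice: append to a tamper-evident store
    return record

screen_prompt("alice@contractor.example", "Draft a cover letter for our proposal.")
```

The key design point is that the check runs at the application layer, before the prompt reaches any model, so a denied or flagged request never enters a processing pipeline at all.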

How FirmAdapt Addresses This

FirmAdapt's architecture was built around the assumption that regulated data cannot leave a controlled environment. For defense sector deployments, this means AI processing occurs within infrastructure that satisfies ITAR's U.S. person access and data handling requirements, with no data transmitted to third-party model providers. Every interaction is logged with the granularity needed for DDTC compliance reviews and CMMC assessments.

FirmAdapt also implements classification-aware input controls that identify potential ITAR-controlled content before it is processed, giving compliance teams visibility into how AI tools are being used across the organization. The goal is straightforward: give defense contractors access to LLM capabilities without creating the export control exposure that commercial tools introduce by default.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free