Tags: AI compliance, regulatory, defense, ITAR, CMMC

Defense Manufacturing Execution Systems and the Generative AI Question

By Basel Ismail · May 12, 2026


MES vendors have been on an AI feature binge. If you've attended any defense manufacturing trade show in the past eighteen months, you've seen the demos: predictive quality analytics, natural language queries against production data, AI-generated work instructions, automated nonconformance reporting. Some of these features are genuinely useful. Some are vaporware dressed up in a chatbot interface. But the interesting regulatory question isn't whether they work. It's whether turning them on creates an ITAR violation.

The Controlled Technical Data Problem

MES platforms in defense manufacturing environments sit on top of some of the most sensitive data a contractor handles. We're talking about process parameters for controlled articles, tooling specifications, inspection criteria, yield data tied to specific weapons systems. Under ITAR (22 CFR Parts 120-130), much of this qualifies as "technical data" as defined in 22 CFR 120.33: information required for the design, development, production, manufacture, assembly, operation, repair, testing, maintenance, or modification of defense articles.

The key constraint is the definition of "export," formerly at 22 CFR 120.17 and now at 22 CFR 120.50 following the 2022 ITAR reorganization: an export includes disclosing or transferring technical data to a foreign person, whether that happens inside or outside the United States. This is the "deemed export" rule, and it applies regardless of intent.

Now layer in CMMC 2.0, whose program rule DoD finalized in October 2024 (32 CFR Part 170). Level 2 certification, required for contractors handling CUI, maps to the 110 controls in NIST SP 800-171 Rev 2. Several of those controls bear directly on what happens when you plug a generative AI feature into your MES: access control (3.1.x), audit and accountability (3.3.x), system and communications protection (3.13.x), and system and information integrity (3.14.x).

Where the AI Features Get Complicated

Most MES vendors adding generative AI capabilities are doing one of three things:

  • Cloud-hosted LLM integration. The MES sends queries containing production data to an external model (often OpenAI, Anthropic, or a fine-tuned open-source model hosted on AWS/Azure). The model processes the data and returns a response.
  • Embedded analytics with AI summarization. The MES runs analytics locally or in a private cloud instance, then uses a language model to generate human-readable summaries, reports, or recommendations.
  • RAG-based knowledge retrieval. The MES indexes technical documents, work instructions, and historical data into a vector database, then uses retrieval-augmented generation to answer operator questions.

Each of these architectures creates a different risk profile, but they share a common problem: controlled technical data is being processed by systems whose data handling, training pipelines, and hosting infrastructure may not be fully within the contractor's control.
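To make the shared problem concrete, here is a minimal sketch of a fail-closed classification gate in front of an external model call. The record structure, tag names, and `send_to_llm()` are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical sketch: block external inference on any query that
# references controlled MES records. All names are assumptions.

CONTROLLED_TAGS = {"ITAR", "CUI", "EXPORT_CONTROLLED"}

def send_to_llm(prompt: str, records: list[dict]) -> str:
    """Placeholder standing in for the external model provider call."""
    return "model response"

def query_is_exportable(records: list[dict]) -> bool:
    """True only if no referenced MES record carries a controlled-data tag."""
    return all(not (set(r.get("tags", [])) & CONTROLLED_TAGS) for r in records)

def gated_llm_query(prompt: str, records: list[dict]) -> str:
    if not query_is_exportable(records):
        # Fail closed: controlled technical data never leaves the boundary.
        raise PermissionError("Query references controlled data; external inference blocked.")
    return send_to_llm(prompt, records)
```

The gate only works if records are tagged in the first place, which is exactly the segmentation problem discussed below: an MES that doesn't distinguish controlled from non-controlled data can't enforce anything at this layer.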

The ITAR Exposure

If your MES sends a query containing process parameters for an ITAR-controlled article to a cloud-hosted LLM, you need to know exactly where that data goes. If the model provider's infrastructure routes through servers outside the United States, or if foreign nationals employed by the provider can access the data in the course of operations, you may have an unauthorized export on your hands. DDTC has not issued specific guidance on AI processing of technical data, but the statute doesn't require specific guidance to apply. The definitions are broad enough to cover this scenario already.

The penalties are not theoretical. In 2023, DDTC reached a $2.2 million consent agreement with Quicksilver Manufacturing and two affiliated companies for unauthorized exports of ITAR-controlled technical data. The violations involved sending technical drawings and specifications abroad, to foreign persons, without a license. The mechanism of sharing is irrelevant; what matters is that controlled data reached an unauthorized recipient or location.

The CMMC Exposure

CMMC Level 2 requires that CUI be handled in accordance with NIST SP 800-171. Control 3.1.3 requires organizations to control the flow of CUI in accordance with approved authorizations. Control 3.13.1 requires monitoring, control, and protection of communications at external boundaries and key internal boundaries. If your MES is sending CUI to a third-party AI service, that service is now part of your CUI boundary, and it needs to meet every applicable control.
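One way to operationalize 3.1.3 and 3.13.1 at this boundary is an egress allowlist: outbound AI calls are permitted only to destinations your SSP has authorized for CUI. A minimal sketch, with invented hostnames standing in for whatever your SSP actually approves:

```python
# Hypothetical sketch of CUI flow enforcement at an external boundary
# (NIST SP 800-171 3.1.3 / 3.13.1). Hostnames below are invented examples.
from urllib.parse import urlparse

APPROVED_CUI_DESTINATIONS = {
    "inference.internal.example.mil",  # assumed on-premises model endpoint
    "ai.usgovcloud.example.com",       # assumed FedRAMP-authorized boundary
}

def enforce_cui_egress(url: str) -> str:
    """Raise unless the destination host is authorized for CUI in the SSP."""
    host = urlparse(url).hostname or ""
    if host not in APPROVED_CUI_DESTINATIONS:
        raise PermissionError(f"CUI egress to {host!r} is not authorized in the SSP.")
    return host
```

A check like this is only one control among many; it does nothing about who can access the data once it arrives, which is why the provider-side questions in the next section still matter.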

This is where things get operationally painful. Most AI service providers, including the major cloud platforms, offer "GovCloud" or FedRAMP-authorized environments. But FedRAMP authorization alone doesn't satisfy CMMC. You need to verify that the specific AI service, not just the underlying IaaS, is operating within a compliant boundary. Microsoft's Azure Government, for example, is FedRAMP High authorized, but that doesn't automatically mean every Azure AI service running on top of it inherits that authorization.

What Vendors Aren't Telling You

I've reviewed the security documentation for several MES platforms that have recently added AI features. Common gaps include:

  • No clear documentation of where AI inference occurs (on-premises vs. cloud, which region, which provider).
  • Terms of service that permit the AI provider to use input data for model improvement, which could mean your controlled technical data ends up in training sets accessible to the provider's global workforce.
  • No distinction between AI features that process CUI/ITAR data and those that operate on non-sensitive operational metrics. The features are bundled, and the data flows are opaque.
  • Vague references to "enterprise-grade security" without mapping to specific NIST 800-171 controls or ITAR compliance requirements.

The burden here falls on the contractor, not the vendor. DDTC holds the exporter responsible. DoD holds the contractor responsible for CMMC compliance. If your MES vendor's AI feature creates an unauthorized data flow, the consent agreement or assessment failure lands on your desk.

Practical Steps

If you're running or evaluating an MES with AI capabilities in a defense manufacturing environment, a few things are worth doing now:

  • Map the data flows. For every AI feature, document exactly what data leaves the MES boundary, where it goes, who can access it, and whether the processing infrastructure is exclusively within the United States and staffed by U.S. persons.
  • Segment your data. If the AI features can be configured to operate only on non-controlled data (machine uptime, general scheduling, non-technical operational metrics), that's a much simpler compliance posture than allowing them to touch ITAR or CUI data.
  • Review the AI provider's terms. Specifically look for data retention, training data usage, and subprocessor clauses. If the provider retains input data or uses it for model training, that's a problem for controlled data.
  • Include AI features in your SSP. Your System Security Plan for CMMC should explicitly address AI processing as part of your CUI boundary. If it doesn't, your C3PAO assessor will likely ask about it anyway.
  • Get a Technology Control Plan in place. If you haven't already, a TCP that specifically addresses AI-processed technical data will help demonstrate due diligence to DDTC if questions arise.
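The first two steps above, mapping flows and segmenting data, can be combined into a simple per-feature inventory that flags any AI feature touching controlled data outside a US-soil, US-persons boundary. A sketch, with all field names and example features invented:

```python
# Hypothetical sketch of an AI-feature data-flow inventory. Field names
# and the example features are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AIFeatureFlow:
    name: str
    data_classes: set          # e.g. {"ITAR", "CUI", "OPERATIONAL"}
    us_infrastructure: bool    # processing stays exclusively on US soil
    us_persons_only: bool      # provider staff with access are U.S. persons

    def needs_review(self) -> bool:
        """Flag controlled data processed outside the compliant boundary."""
        touches_controlled = bool(self.data_classes & {"ITAR", "CUI"})
        inside_boundary = self.us_infrastructure and self.us_persons_only
        return touches_controlled and not inside_boundary

flows = [
    AIFeatureFlow("uptime summarizer", {"OPERATIONAL"}, False, False),
    AIFeatureFlow("work-instruction RAG", {"ITAR", "CUI"}, True, False),
]
flagged = [f.name for f in flows if f.needs_review()]
```

Here the uptime summarizer passes because it never touches controlled data, while the RAG feature is flagged: US infrastructure alone isn't enough if foreign-person provider staff can access the data. The same records feed naturally into the SSP and TCP documentation the last two steps call for.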

How FirmAdapt Addresses This

FirmAdapt was built around the assumption that AI processing of regulated data requires the same compliance rigor as any other data handling operation. The platform's architecture keeps controlled data within defined compliance boundaries, with full auditability of every data flow, including AI inference. There is no silent routing of queries to third-party model providers, and no ambiguity about where processing occurs or who has access.

For defense contractors evaluating AI-enabled tools, FirmAdapt provides a way to get the operational benefits of generative AI without creating the ITAR and CMMC exposure that comes with most vendor implementations. The compliance controls are structural, not bolted on after the fact, which means your SSP and TCP documentation can reference concrete architectural decisions rather than vendor promises.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free