
How CMMC C3PAOs Will Treat AI Tools in Level 2 Assessments

By Basel Ismail · May 12, 2026

CMMC Level 2 assessments are starting to roll out in earnest, and a question that keeps coming up in conversations with defense contractors is: what happens when the C3PAO assessor finds out we're using AI tools? The short answer is that there is no separate "AI control" in CMMC 2.0. The longer answer is that AI tools will get scrutinized under existing NIST SP 800-171 Rev 2 controls, and assessors have enough latitude within those 110 controls to make your life very uncomfortable if you haven't thought this through.

CMMC Does Not Have an AI Carve-Out

The 32 CFR Part 170 final rule, published in October 2024, codifies CMMC 2.0 and maps Level 2 directly to the 110 security requirements in NIST SP 800-171 Rev 2. Nowhere in those requirements will you find the word "artificial intelligence." But that is precisely the problem. Assessors from authorized C3PAOs are trained to evaluate whether your system security plan (SSP) and actual implementation cover every system that processes, stores, or transmits CUI. If an AI tool touches CUI, or operates within your CUI boundary, it falls within scope. Period.

The Cyber AB (formerly the CMMC Accreditation Body) has not issued specific guidance on AI tools as of mid-2025. But assessors are already asking about them. I've heard from multiple organizations preparing for assessments that C3PAO teams are bringing up AI in their pre-assessment scoping discussions. They want to know what tools are in the environment, how data flows through them, and whether the organization even knows AI is present in their workflows.

The Controls That Will Come Up

Here are the NIST SP 800-171 control families where AI tools will draw the most scrutiny during a Level 2 assessment:

Access Control (3.1.x)

Control 3.1.1 requires limiting system access to authorized users. Control 3.1.2 limits access to the types of transactions and functions that authorized users are permitted to execute. If your team is feeding CUI into a cloud-based AI assistant, the assessor will want to see how you're enforcing access controls on that tool. Who can use it? What data can they input? Is there role-based access, or can any user with a login paste controlled technical information into a prompt window? If the AI tool has an API integration with other systems in your CUI boundary, 3.1.3 (control of CUI flow) becomes relevant fast.

Media Protection (3.8.x)

Control 3.8.1 covers protecting CUI on system media, which includes digital media. If an AI tool caches, logs, or stores prompts and outputs, those logs are system media containing CUI. Assessors will ask where that data lives, how it's protected at rest, and what the retention policy looks like. If you're using a SaaS AI product, you need to know whether the vendor retains prompt data, for how long, and in what jurisdiction.

System and Communications Protection (3.13.x)

Control 3.13.1 requires monitoring, controlling, and protecting communications at the external boundaries of the system. If your AI tool communicates with external servers, that is an external boundary. Control 3.13.11 requires FIPS-validated cryptography to protect the confidentiality of CUI, which covers CUI in transit. Assessors will check whether data sent to and from an AI service is encrypted to the required standard. Many commercial AI APIs negotiate TLS 1.2 or 1.3, which can satisfy this only if the underlying cryptographic module is FIPS-validated, so you need to verify and document both the protocol and the module rather than assume.
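As a quick sanity check on the transport side, a short script can confirm which protocol version an AI endpoint actually negotiates. This is a minimal sketch using Python's standard `ssl` module; note that it verifies the TLS protocol version only. FIPS 140-2 validation is a property of the cryptographic module, which you confirm through vendor documentation, not through a handshake.

```python
import socket
import ssl

def make_strict_context() -> ssl.SSLContext:
    """Build a client context that refuses anything older than TLS 1.2."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def check_tls_version(host: str, port: int = 443) -> str:
    """Connect to a host and report the negotiated protocol, e.g. 'TLSv1.3'.

    Raises ssl.SSLError if the server cannot meet the TLS 1.2 floor,
    which is itself a finding worth documenting.
    """
    with socket.create_connection((host, port), timeout=10) as sock:
        with make_strict_context().wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

Running this against each AI endpoint in scope, and saving the output with a date, gives you a small piece of the evidence trail an assessor will ask for.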

Risk Assessment (3.11.x)

Control 3.11.1 requires periodic risk assessments. If you added AI tools to your environment after your last risk assessment, expect the assessor to flag that gap. AI tools introduce novel risks: data leakage through model training, prompt injection, hallucinated outputs that could corrupt controlled technical data. Your risk assessment should specifically address these.

Configuration Management (3.4.x)

Controls 3.4.1 and 3.4.2 require baseline configurations and security configuration settings. If an AI tool is deployed within your CUI boundary, it needs a documented baseline configuration. That includes settings for data retention, logging, access controls, and integration points. Browser extensions and plugins that use AI are particularly tricky here; assessors may ask whether you've inventoried those.
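One lightweight way to make a baseline auditable is to encode it as data and diff observed settings against it. The setting names below are hypothetical illustrations, not any vendor's actual API; the point is that the documented baseline and the check that enforces it live in the same place.

```python
# Hypothetical baseline for an AI tool deployed inside the CUI boundary.
# Map each documented setting (3.4.1) to its required value (3.4.2).
BASELINE = {
    "prompt_retention_days": 0,          # no vendor-side prompt retention
    "training_on_customer_data": False,  # opted out of model training
    "sso_required": True,                # access gated through enterprise SSO
    "audit_logging": True,               # all usage logged for review
}

def baseline_drift(observed: dict) -> dict:
    """Return every setting that deviates from the documented baseline.

    Missing settings surface as None, which also counts as drift.
    """
    return {k: observed.get(k) for k, v in BASELINE.items() if observed.get(k) != v}
```

An empty result is evidence of conformance; a non-empty one is a remediation list you can attach to your POA&M.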

What Assessors Will Actually Ask

Based on conversations with organizations that have gone through pre-assessment activities, here is a realistic picture of what comes up:

  • Inventory questions: Do you have a complete inventory of AI tools in your environment, including browser extensions, embedded features in productivity suites, and developer tools like code completion assistants?
  • Data flow questions: Can you show me on your data flow diagram where AI tools interact with CUI? Is CUI ever sent outside your authorization boundary via an AI tool?
  • Vendor questions: Does your AI vendor have a FedRAMP authorization? If not, how are you satisfying the equivalent security requirements? Do they train on your data?
  • Policy questions: Does your acceptable use policy address AI tools? Have users been trained on what data they can and cannot input into AI systems?
  • Incident response questions: If CUI were inadvertently disclosed through an AI tool, would your incident response plan cover that scenario?

The FedRAMP question is worth pausing on. DFARS 252.204-7012 requires that cloud services used to process CUI meet FedRAMP Moderate baseline or equivalent. Most commercial AI platforms, including OpenAI's standard API and Anthropic's Claude, do not have FedRAMP Moderate authorization for their general-purpose offerings as of this writing. Microsoft's Azure OpenAI Service within Azure Government does hold FedRAMP High authorization, which is one reason it keeps showing up in defense contractor environments. But if you're using a non-FedRAMP AI tool and CUI is flowing through it, you have a significant gap that a C3PAO will flag.

How to Prepare Before the Assessment

Start with an AI tool inventory. This sounds basic, but most organizations I've spoken with undercount by 30% to 50% when they first try. Shadow AI is real; developers, analysts, and business users adopt tools without going through procurement. Check browser extensions, IDE plugins, Slack integrations, and any "smart" features recently added to existing software.
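As one small piece of that inventory, you can enumerate locally installed browser extensions from their manifests. This is a hedged sketch for default Chrome profiles on Linux and macOS; the paths are assumptions that vary by OS, browser, and managed-profile setup, and an enterprise deployment would pull the same data from MDM or browser management policy instead.

```python
import json
from pathlib import Path

# Assumed default Chrome profile locations; adjust for your OS and profiles.
CHROME_EXT_DIRS = [
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
]

def list_chrome_extensions() -> list[dict]:
    """Walk Chrome's extension store and collect id/name/version per manifest.

    Layout is <Extensions>/<extension-id>/<version>/manifest.json. Names may
    be i18n placeholders like '__MSG_appName__'; the ID is the stable key.
    """
    found = []
    for base in CHROME_EXT_DIRS:
        if not base.is_dir():
            continue
        for manifest in base.glob("*/*/manifest.json"):
            try:
                data = json.loads(manifest.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, OSError):
                continue  # skip unreadable or malformed manifests
            found.append({
                "id": manifest.parts[-3],
                "name": data.get("name", "?"),
                "version": data.get("version", "?"),
            })
    return found
```

Cross-referencing the IDs this surfaces against your approved-software list is an easy way to catch AI-powered extensions nobody routed through procurement.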

Next, update your SSP and data flow diagrams. Every AI tool that could touch CUI needs to appear in your architecture documentation. If you've decided to keep a tool out of the CUI boundary, document the technical and administrative controls that enforce that separation.

Review your vendor agreements. You need written confirmation of data handling practices, training data policies, encryption standards, and incident notification timelines. If a vendor cannot provide this, that is a red flag the assessor will notice.

Finally, run a tabletop exercise for an AI-related CUI spill. This tests your incident response plan and gives you documented evidence that you've considered the scenario. Assessors like seeing that kind of proactive work.

How FirmAdapt Addresses This

FirmAdapt was built for exactly this kind of regulatory environment. The platform processes data within a compliance-first architecture designed to keep CUI within defined authorization boundaries, with encryption, access controls, and audit logging that map directly to NIST SP 800-171 requirements. Data handling policies are documented and enforceable at the platform level, not dependent on individual user behavior.

For organizations preparing for CMMC Level 2 assessments, FirmAdapt provides the kind of AI capability that a C3PAO can evaluate against the 110 controls without finding gaps. The architecture supports the documentation, access control, and data flow requirements that assessors will ask about, which means your SSP reflects reality rather than aspiration.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free