Cleared Personnel, Insider Threat, and AI Tool Misuse
A cleared employee pastes a chunk of a classified program's technical requirements into ChatGPT to "help summarize" a briefing. No malicious intent. No foreign handler. Just someone trying to get through their task list faster. Under NISPOM and your organization's insider threat program, that action likely triggers a reportable indicator, and the compliance consequences cascade from there in ways most defense contractors have not fully gamed out.
The Insider Threat Framework and Why AI Changes the Calculus
The foundational requirement here is Executive Order 13587, signed in October 2011, which mandated insider threat programs across the executive branch and, by extension, cleared contractor facilities. NISPOM (now codified under 32 CFR Part 117, effective February 2021) requires all cleared contractor facilities to establish and maintain insider threat programs that gather, integrate, and report relevant information about cleared personnel whose behavior may represent a threat to classified information.
DCSA's Insider Threat Program requirements, supplemented by the NITTF (National Insider Threat Task Force) minimum standards, lay out specific categories of reportable behavior. The relevant buckets include unauthorized disclosure of classified information, misuse of information technology systems, and attempts to access information beyond need-to-know. AI tool misuse can hit all three simultaneously.
The traditional insider threat model focused on indicators like unexplained affluence, foreign contacts, and disgruntlement. Those still matter. But the NITTF's own guidance has evolved to include "technology misuse" as a standalone indicator category. When a cleared person feeds controlled information into a commercial AI system, they are effectively exfiltrating data to a third-party server outside any accredited environment, possibly in a jurisdiction with different data sovereignty rules. The fact that they did it for convenience rather than espionage does not change the reporting obligation.
What Specifically Triggers Reporting
Let's walk through the mechanics. Under 32 CFR 117.13(d), cleared employees have an obligation to report activities that may indicate a threat to classified information. Facility Security Officers have parallel obligations to report to DCSA. The insider threat program itself must have procedures to "gather, integrate, and report" relevant information per 32 CFR 117.7(a).
Here are the AI-specific scenarios that should be on every FSO's radar:
- Pasting classified or CUI into commercial AI tools. This is the most obvious case. Even if the tool is "just" ChatGPT or Claude or Gemini, the data leaves the controlled environment. Under NISPOM, this constitutes a potential unauthorized disclosure. It also likely violates the facility's system security plan. (See the screening sketch after this list.)
- Using AI to draft documents that incorporate classified concepts without markings. A subtler problem. If someone uses an AI tool to rephrase or restructure classified content into an unclassified-looking document, they may have created a derivative classification problem and an unauthorized disclosure simultaneously.
- Querying AI tools in patterns that reveal classified program details. Even if no single prompt contains classified information, a series of highly specific technical queries can constitute a mosaic. The intelligence community has long recognized mosaic theory; it applies here too.
- Using AI coding assistants on classified or export-controlled source code. GitHub Copilot, Amazon CodeWhisperer, and similar tools send code to external servers for processing. If that code is under ITAR (22 CFR 120-130) or EAR controls, you now have a potential export violation layered on top of the insider threat indicator.
- Downloading and running local AI models on government or accredited systems without authorization. Even open-source models like Llama or Mistral, if installed on systems within your accredited environment without approval, represent an unauthorized modification to the system security plan.
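None of this requires exotic tooling to begin addressing. As a concrete illustration of the first scenario, here is a minimal Python sketch of a pre-submission screen that scans outbound prompt text for classification and control markings. The patterns are deliberately simplified stand-ins, not the full banner and portion-marking grammar, and a pattern match catches only marked text; paraphrased content and mosaic-style query patterns (the third scenario above) will pass straight through it.

```python
import re

# Simplified stand-ins for common classification and control markings.
# Real marking syntax (32 CFR 2001, the CUI Registry) is far richer;
# a production filter would need the full banner/portion grammar.
MARKING_PATTERNS = [
    r"\b(TOP SECRET|SECRET|CONFIDENTIAL)(//[A-Z0-9/ ,-]+)?\b",
    r"\b(CUI|FOUO|NOFORN|ORCON)\b",
    r"\((TS|S|C)(//[A-Z]+)?\)",  # portion markings like (S) or (S//NF)
]
COMPILED = [re.compile(p) for p in MARKING_PATTERNS]

def screen_prompt(prompt: str) -> list[str]:
    """Return any marking strings found in an outbound prompt.

    An empty list does not mean the prompt is safe; unmarked or
    paraphrased classified content sails straight through a pattern
    match like this one.
    """
    hits = []
    for pattern in COMPILED:
        hits.extend(m.group(0) for m in pattern.finditer(prompt))
    return hits

if __name__ == "__main__":
    sample = "Summarize for the briefing: (S//NF) The array operates at ..."
    findings = screen_prompt(sample)
    if findings:
        print(f"BLOCK and notify the FSO: markings detected {findings}")
```

That limitation is the point: a marking scan is a cheap first control, not a substitute for the monitoring and policy work discussed below.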
The Reporting Chain and Consequences
When an insider threat program identifies one of these indicators, the reporting chain under NISPOM runs to DCSA (via NISS, which replaced the old e-FCL system for many reporting functions). Depending on severity, it may also trigger reporting to the FBI and the relevant government contracting activity. If classified information was actually compromised, you are in security incident territory under 32 CFR 117.8, which carries its own timeline and procedural requirements.
The consequences for the individual can range from a security infraction to revocation of their personnel security clearance. DCSA revoked or denied approximately 3,200 clearances in FY2022 according to publicly available adjudication data. While most of those were for traditional factors (financial issues, drug involvement, foreign influence), Guideline M of SEAD 4 (Security Executive Agent Directive 4, effective June 2017), which covers use of information technology, addresses exactly this kind of behavior: unauthorized introduction of data into non-accredited systems fits squarely within it.
For the company, failure to report can result in adverse actions against the facility clearance itself. DCSA has the authority to downgrade or revoke an FCL, which for most defense contractors is an existential business risk. The 2020 case involving a major defense subcontractor (details partially redacted in DCSA administrative actions) ended with the facility receiving an unsatisfactory security rating, in part due to inadequate insider threat program monitoring of IT systems.
The Gap in Most Programs Today
Here is where it gets practical. Most insider threat programs at cleared facilities were designed around network monitoring tools like SIEM platforms, user activity monitoring (UAM) software, and data loss prevention (DLP) solutions. These tools are configured to watch for data leaving the network through known channels: USB drives, email, cloud storage uploads.
Commercial AI tools present a different profile. They operate over HTTPS, often to the same domains employees legitimately access for other purposes. A DLP tool might catch someone uploading a file marked SECRET to Google Drive. It is far less likely to catch someone typing classified technical parameters into a browser-based chat interface. The data never exists as a file on the local system; it goes directly from the employee's memory through their keyboard into an API endpoint.
This means insider threat programs need to evolve in at least two ways. First, UAM solutions need to be configured to flag interactions with known AI service domains and APIs, not just traditional exfiltration vectors. Second, and more importantly, organizations need clear, specific, and regularly updated policies about which AI tools are authorized, for what purposes, and with what types of information. A vague "don't put classified info in AI tools" policy is insufficient when employees are genuinely uncertain about whether CUI, FOUO, or pre-decisional information counts.
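What the first of those evolutions might look like in practice: below is a minimal sketch, assuming a simple CSV proxy log with hypothetical timestamp, user, and host columns (real proxies such as Zscaler or Squid each have their own log formats), that flags requests to a watchlist of known AI service domains so they can feed the UAM review queue.

```python
import csv

# Illustrative watchlist. A real deployment would maintain this list
# centrally and update it as new AI services and endpoints appear.
AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "api.anthropic.com",
    "gemini.google.com",
}

def flag_ai_traffic(proxy_log_path: str):
    """Yield (timestamp, user, host) for requests to watchlisted domains.

    Assumes a CSV log with 'timestamp', 'user', and 'host' columns;
    the parsing here is purely illustrative.
    """
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the domain itself and any subdomain of it.
            if any(host == d or host.endswith("." + d)
                   for d in AI_SERVICE_DOMAINS):
                yield row["timestamp"], row["user"], host
```

Flagging is not blocking, and that is deliberate: the goal is to make AI interactions visible to the insider threat program through the same pipeline that carries its other indicators, so analysts can weigh them alongside everything else they see.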
DCSA has not yet issued an Industrial Security Letter (ISL) specifically addressing AI tool use by cleared personnel, though the topic has appeared in DCSA webinars and NISPPAC discussions throughout 2023 and 2024. The gap between the pace of AI adoption and the pace of formal guidance is real, and it leaves FSOs making judgment calls with significant consequences.
How FirmAdapt Addresses This
FirmAdapt's architecture is designed so that AI interactions happen within a controlled, auditable environment rather than through external commercial APIs. For organizations with cleared personnel, this means AI tool use can be governed by the same security controls and monitoring infrastructure that the insider threat program already relies on. Data stays within the accredited boundary, queries are logged in formats compatible with UAM requirements, and policy enforcement happens at the platform level rather than depending on individual employee judgment.
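FirmAdapt's internal interfaces are not documented in this piece, so the following Python sketch is purely hypothetical; every name in it (Classification, AIUsagePolicy, authorize) is invented for illustration. What it shows is the general pattern the paragraph describes: classification-aware authorization enforced at the platform level, with each allow-or-deny decision written to an audit trail.

```python
from dataclasses import dataclass
from enum import IntEnum

class Classification(IntEnum):
    """Ordered sensitivity levels; higher is more restrictive."""
    UNCLASSIFIED = 0
    CUI = 1
    SECRET = 2
    TOP_SECRET = 3

@dataclass
class AIUsagePolicy:
    """Maximum classification a given AI workflow may receive."""
    workflow: str
    max_level: Classification

def authorize(policy: AIUsagePolicy, data_level: Classification,
              user: str, audit_log: list) -> bool:
    """Platform-level gate evaluated before the model sees anything.

    Every decision is appended to an audit trail so the insider threat
    program can review AI interactions alongside its other UAM data.
    """
    allowed = data_level <= policy.max_level
    audit_log.append({
        "user": user,
        "workflow": policy.workflow,
        "data_level": data_level.name,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

if __name__ == "__main__":
    log: list = []
    policy = AIUsagePolicy(workflow="briefing-summarization",
                           max_level=Classification.CUI)
    print(authorize(policy, Classification.SECRET, "jdoe", log))  # False
    print(log[-1]["decision"])  # deny
```

The design choice that matters is where the gate sits: at the platform, before any prompt reaches a model, rather than in each individual employee's judgment.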
FirmAdapt also supports the policy documentation side. Organizations can define and enforce data classification restrictions on AI interactions, creating the kind of specific, auditable usage policies that satisfy both NISPOM insider threat program requirements and DCSA inspection expectations. When an FSO needs to demonstrate that their program accounts for AI tool use, the controls and audit trails are already in place.