
Breach Notification Timelines When the Breach Happened Through an AI Tool

By Basel Ismail · May 3, 2026

HIPAA's breach notification rule is straightforward until it isn't. You discover a breach, you notify affected individuals without unreasonable delay and within 60 days, you notify HHS (contemporaneously for breaches affecting 500 or more people, otherwise in an annual log), and if the breach affects 500 or more residents of a state or jurisdiction, you notify the media. Simple enough on paper. But the rule was written in 2009, finalized in 2013 under the Omnibus Rule, and it assumed a world where you generally knew what systems held PHI because you put the PHI there deliberately. AI tools have introduced a category of breach that the regulation's authors did not contemplate: the breach that happened through a system nobody formally authorized, nobody documented, and nobody realized was ingesting protected health information.

The 60-Day Clock and When It Actually Starts

Under 45 CFR 164.404(b), notification must go out without unreasonable delay and in no case later than 60 calendar days after the breach is discovered, not after it occurred. Discovery happens when the covered entity knows about the breach or, through the exercise of reasonable diligence, would have known about it. That "reasonable diligence" standard is where things get uncomfortable with AI tools.
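
To make the constructive-discovery math concrete, here is a minimal sketch, in Python, of how a compliance team might track the deadline. The function names and dates are illustrative assumptions, not anything from the regulation; the substantive point is only that the clock runs from the earlier of actual and constructive discovery.

```python
from datetime import date, timedelta

# 45 CFR 164.404(b): notification without unreasonable delay, and in no
# case later than 60 calendar days after discovery.
NOTIFICATION_WINDOW = timedelta(days=60)

def discovery_date(actual: date, constructive: date | None = None) -> date:
    """The clock runs from the earlier of actual and constructive discovery."""
    if constructive is not None and constructive < actual:
        return constructive
    return actual

def notification_deadline(actual: date, constructive: date | None = None) -> date:
    return discovery_date(actual, constructive) + NOTIFICATION_WINDOW

# Hypothetical dates: compliance learned of the exposure on June 10, but a
# reasonable risk assessment would have surfaced the shadow AI use on Jan 15.
print(notification_deadline(date(2025, 6, 10), date(2025, 1, 15)))  # 2025-03-16
```

The uncomfortable output is the point: when constructive discovery predates actual discovery by more than 60 days, the deadline passed before anyone in compliance knew there was anything to report.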

Consider a scenario that has already played out at multiple health systems, though most have settled quietly. A department starts using a generative AI tool to draft patient communications or summarize clinical notes. No business associate agreement (BAA) in place. No security review. The tool's provider suffers a data incident, or the tool itself retains and exposes input data. PHI is compromised. But the covered entity doesn't learn about it for months because the tool was never in its asset inventory.

HHS has been consistent in enforcement actions: ignorance of a system's existence does not toll the clock. If reasonable diligence would have uncovered the use of the tool, the discovery date is when you should have found it, not when you actually did. OCR's 2023 enforcement against Banner Health ($1.25 million settlement) reinforced that inadequate monitoring of systems with PHI access is itself a violation. The penalty compounds when that monitoring gap also delays breach notification.

What "Reasonable Diligence" Looks Like Now

OCR has not published specific guidance on AI tool monitoring as of mid-2025, but the agency's existing framework is clear enough. Reasonable diligence includes periodic risk assessments under 45 CFR 164.308(a)(1)(ii)(A), workforce training on permissible uses of PHI, and technical controls that detect unauthorized data flows. If your organization has no mechanism for detecting when employees paste PHI into a third-party AI interface, you have a reasonable diligence problem. And that problem retroactively accelerates your notification clock.
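
As a gesture at what such a technical control looks like, here is a deliberately crude Python sketch of pattern-based PHI detection on outbound text. The pattern names and regexes are illustrative assumptions; production DLP relies on dictionaries, validators, and classifiers with context scoring. But even a check this thin is closer to reasonable diligence than no check at all.

```python
import re

# Illustrative patterns only. Real DLP engines use far richer detection.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b(?:0[1-9]|1[0-2])/(?:0[1-9]|[12]\d|3[01])/(?:19|20)\d{2}\b"),
}

def phi_indicators(text: str) -> list[str]:
    """Names of PHI-like patterns found in text bound for an external service."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

payload = "Pt DOB 04/17/1962, MRN: 84721953, please summarize the attached notes."
hits = phi_indicators(payload)
if hits:
    print("Flag for review before egress:", hits)  # ['mrn', 'dob']
```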

This is not theoretical. In 2023, a workforce member at a covered entity used ChatGPT to help process patient intake forms. The data sat in OpenAI's systems for months before anyone in compliance knew. When it surfaced, the legal team had to determine whether the 60-day clock started at the date of actual discovery or the date a reasonable risk assessment program would have caught the shadow AI use. They concluded, with outside counsel, that the earlier date applied. They were already late.

The Harm Threshold and AI-Specific Complications

Under the Omnibus Rule, an impermissible use or disclosure of PHI is presumed to be a breach unless the covered entity can demonstrate a low probability that PHI was compromised. The four-factor risk assessment under 45 CFR 164.402(2) considers: the nature and extent of the PHI involved, who received or accessed it, whether the PHI was actually acquired or viewed, and the extent to which risk has been mitigated.

AI tools make each of these factors harder to evaluate.

  • Nature and extent of PHI: When someone pastes clinical notes into an AI tool, the PHI involved can be extensive, including diagnoses, treatment plans, identifiers, and sometimes Social Security numbers. Unlike a misdirected fax (the classic breach scenario), the data may be retained in model training pipelines, cached in server logs, or replicated across infrastructure in ways the AI vendor itself may not fully map.
  • Who accessed it: With a traditional breach, you can usually identify the unauthorized recipient. With an AI tool, the "recipient" might be a model, a cloud infrastructure provider, a subprocessor, or all three. Determining whether a human ever viewed the data requires cooperation from the AI vendor, and without a BAA, you have limited contractual leverage to compel that cooperation.
  • Whether PHI was actually acquired or viewed: AI vendors often cannot confirm whether specific input data was used in training, stored persistently, or purged. OpenAI's data retention policies, for example, have changed multiple times since 2023. If the vendor cannot tell you definitively that the data was not retained, you cannot satisfy the "low probability of compromise" standard.
  • Mitigation: You can ask the vendor to delete the data. But verifying deletion across distributed AI infrastructure is functionally impossible without audit rights you probably don't have if there's no BAA.

The practical result: when PHI goes through an undocumented AI tool, the four-factor test almost always fails to rebut the presumption of breach. You are notifying.
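
One way to keep that four-factor analysis honest is to force it into a structure. The sketch below encodes the factors as an explicit record. The field names are illustrative, and collapsing the test to "all four factors must support rebuttal" is a deliberately conservative simplification of what the regulation frames as a holistic, documented risk assessment. It exists to show why the shadow-AI fact pattern fails: every factor that could rebut the presumption depends on information an unvetted vendor relationship cannot supply.

```python
from dataclasses import dataclass

@dataclass
class FourFactorAssessment:
    """45 CFR 164.402(2) factors. Field names are illustrative, not regulatory."""
    limited_phi_nature: bool       # Factor 1: nature and extent of PHI was limited
    recipient_accounted_for: bool  # Factor 2: unauthorized recipient identified
    not_acquired_or_viewed: bool   # Factor 3: evidence PHI was not acquired/viewed
    risk_mitigated: bool           # Factor 4: mitigation verified (e.g., deletion)

    def low_probability_of_compromise(self) -> bool:
        # Conservative simplification: the presumption of breach stands unless
        # every factor affirmatively supports rebuttal. The regulation calls
        # for a holistic, documented assessment, not a boolean AND.
        return all([self.limited_phi_nature, self.recipient_accounted_for,
                    self.not_acquired_or_viewed, self.risk_mitigated])

# The shadow-AI fact pattern: full notes pasted in, no BAA, the vendor
# cannot confirm retention, and deletion is unverifiable.
shadow_ai = FourFactorAssessment(False, False, False, False)
print("Notify" if not shadow_ai.low_probability_of_compromise() else "Document and close")
```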

The Documentation Gap as a Standalone Violation

Beyond the breach itself, the absence of documentation creates independent HIPAA exposure. The Security Rule's risk analysis requirement (45 CFR 164.308(a)(1)(ii)(A)) effectively obligates you to know every system that creates, receives, maintains, or transmits ePHI, and OCR guidance treats an accurate, current asset inventory as foundational to that analysis; the device and media controls at 45 CFR 164.310(d) add their own accountability requirements. If an AI tool was handling PHI and it does not appear in your system inventory, your risk analysis was deficient, your access controls were deficient, and your audit controls under 164.312(b) were deficient.

OCR has stacked penalties for exactly this pattern. In the 2022 settlement with Oklahoma State University Center for Health Sciences ($875,000), OCR cited both the breach and the underlying failures in risk analysis and audit controls. When the breach comes through an undocumented AI tool, you are handing OCR a roadmap to multiple violations in a single incident.

State attorneys general add another layer. Under the HITECH Act, state AGs have independent enforcement authority for HIPAA violations. Several, notably in California, Massachusetts, and New York, have been increasingly active. A breach through an undocumented AI tool is exactly the kind of fact pattern that attracts AG attention because it suggests systemic governance failure, not just a one-off incident.

Practical Steps That Actually Help

The compliance response here is less about reacting to breaches and more about preventing the documentation gap that makes AI-related breaches so damaging.

  • Automated discovery of AI tool usage: Network monitoring that flags data flows to known AI service endpoints. DLP tools configured to detect PHI patterns in outbound traffic to non-approved services. (A minimal sketch of this check follows this list.)
  • Acceptable use policies that name AI specifically: Generic "don't share PHI with unauthorized parties" policies are insufficient. Workforce members need to understand that pasting a clinical note into an AI chatbot is a potential HIPAA violation, full stop.
  • BAA coverage for any AI tool touching PHI: If the vendor won't sign a BAA, the tool cannot be used with PHI. This needs to be enforced technically, not just through policy.
  • Incident response playbooks that address AI-specific scenarios: Your breach response plan should include steps for AI vendor engagement, data retention inquiries, and the specific challenges of the four-factor risk assessment when the unauthorized recipient is a model rather than a person.
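
As promised above, here is a minimal egress-check sketch tying the first and third items together: flag traffic to known AI endpoints, and block it when PHI-like content is headed to a service without a BAA on file. The hostnames, regex, and decision labels are illustrative assumptions, not a vetted blocklist or a production DLP rule; a real deployment would drive this from proxy and DLP configuration.

```python
import re

# Illustrative endpoint registry. "baa_on_file" would come from your
# vendor management system, not a hard-coded dict.
AI_ENDPOINTS = {
    "api.openai.com": {"baa_on_file": False},
    "ai-gateway.internal.example.com": {"baa_on_file": True},  # hypothetical
}
PHI_HINT = re.compile(r"\b(?:\d{3}-\d{2}-\d{4}|MRN[:#]?\s*\d{6,10})\b", re.IGNORECASE)

def egress_decision(host: str, body: str) -> str:
    service = AI_ENDPOINTS.get(host)
    if service is None:
        return "allow"            # not a known AI endpoint
    if PHI_HINT.search(body) and not service["baa_on_file"]:
        return "block"            # PHI-like content toward a non-BAA AI service
    return "allow-and-log"        # record it for the AI interaction inventory

print(egress_decision("api.openai.com", "MRN: 84721953, draft a letter"))  # block
```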

How FirmAdapt Addresses This

FirmAdapt was built with the assumption that AI tools in regulated environments need to be documented, governed, and auditable from the start. The platform maintains a continuous inventory of AI interactions involving sensitive data, which means the "reasonable diligence" standard is met by default. When PHI flows through FirmAdapt's architecture, there is a record of what was processed, when, under what authorization, and with what safeguards, exactly the documentation you need if a breach assessment is triggered.

FirmAdapt also operates under a compliance-first model that keeps data handling within boundaries where BAA coverage, access controls, and audit trails are built into the infrastructure rather than bolted on after the fact. For covered entities, this means the gap between "AI tool someone started using" and "AI tool that appears in our risk analysis with appropriate controls" simply does not exist. The tool and the compliance documentation are the same system.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free