Why a HIPAA Risk Analysis That Does Not Mention AI Is Already Out of Date
OCR has been hammering organizations on risk analysis failures for years. It is, consistently, the most cited deficiency in HIPAA enforcement actions. Between 2008 and 2024, inadequate or missing risk analyses appeared in the vast majority of resolution agreements and civil money penalties. The fines are not trivial. In February 2023, Banner Health paid $1.25 million over a breach affecting nearly 3 million individuals, with OCR specifically calling out the failure to conduct an enterprise-wide risk analysis. In July 2024, Heritage Valley Health System settled for $950,000 over the same core issue. OCR is not subtle about what it expects here.
Now layer generative AI onto that landscape. If your workforce has access to ChatGPT, Copilot, Gemini, or any of the dozens of AI tools that have proliferated since late 2022, and your most recent risk analysis does not account for them, you have a gap. A significant, documentable gap that OCR investigators are trained to find.
The Security Rule Does Not Care What the Technology Is Called
The HIPAA Security Rule at 45 CFR 164.308(a)(1)(ii)(A) requires covered entities and business associates to conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI. The rule is technology-neutral by design. It was written in 2003 and has survived two decades of technological change precisely because it does not enumerate specific technologies. It requires you to assess whatever you are actually using.
That technology neutrality cuts both ways. It means OCR does not need a new regulation to expect AI coverage in your risk analysis. The existing rule already demands it. If a tool can touch, process, generate, infer, or store ePHI, it falls within scope. Generative AI tools, particularly those accessed via cloud APIs or embedded in productivity suites, clearly qualify.
The Threat Surface Is Real and Specific
Consider what actually happens when a clinician pastes a patient note into a general-purpose LLM to draft a referral letter. Or when a revenue cycle analyst feeds claims data into an AI tool to identify denial patterns. Or when a compliance team uses AI to summarize incident reports that contain PHI. Each of these creates a potential disclosure of ePHI to a system that may not be covered by a business associate agreement, may retain training data, and may route queries through infrastructure you have not evaluated.
These are not hypothetical scenarios. A July 2023 survey by the Definitive Healthcare research team found that 17% of healthcare professionals reported using generative AI tools in their workflows. A separate Bain survey published in October 2023 measured adoption at the organizational level, with roughly 75% of healthcare executives saying their organizations were already piloting or deploying AI. The adoption curve is steep, and it is outpacing governance at most organizations.
What OCR Is Signaling
OCR has not yet issued formal guidance specifically on generative AI and HIPAA. But the signals are clear enough if you know where to look.
In December 2023, HHS released its AI strategy, which explicitly acknowledged the risks AI poses to health data privacy and security. HHS's Office of the National Coordinator for Health Information Technology (ONC) finalized its HTI-1 rule in January 2024, which includes transparency and decision support requirements for AI in certified health IT. And OCR Director Melanie Fontes Rainer stated publicly throughout 2023 and 2024 that OCR's enforcement priorities include ensuring that risk analyses reflect current technology environments.
The enforcement playbook is well established. OCR investigates a breach or complaint, requests the organization's most recent risk analysis, and evaluates whether it is "accurate and thorough" per the regulatory standard. If the analysis was last updated in 2021, or even early 2023, and it contains no mention of AI tools that the workforce has clearly been using, the conclusion writes itself.
What Your Risk Analysis Should Actually Cover
Updating your risk analysis for AI does not require starting from scratch. It requires extending your existing methodology to cover a new category of tools and workflows. Here is what should be in scope (a sketch of how these dimensions might be captured follows the list):
- Inventory of AI tools in use. This includes sanctioned tools with enterprise licenses, tools embedded in existing platforms like Microsoft 365 Copilot or Epic's AI features, and unsanctioned tools that employees are using on their own. Shadow AI is the new shadow IT, and it is arguably harder to detect.
- Data flow mapping for AI interactions. Where does ePHI go when it enters an AI system? Is it processed locally or sent to a cloud endpoint? Is it retained for model training? What jurisdiction is the data processed in?
- Business associate agreement coverage. If a third-party AI vendor receives ePHI, you need a BAA. Period. Many general-purpose AI providers, including OpenAI's consumer products, explicitly disclaim HIPAA compliance. If there is no BAA, the use is a potential violation under 45 CFR 164.502(e).
- Access controls and authentication. Who can use AI tools that might process ePHI? Are there role-based restrictions? Is usage logged and auditable?
- Output risks. AI-generated content can hallucinate, fabricate, or subtly alter clinical information. If that output enters a medical record or a communication with a patient, the integrity of ePHI is compromised. Your risk analysis should address this.
- Training and workforce awareness. The Security Rule at 45 CFR 164.308(a)(5) requires security awareness training. If your training program does not address AI-specific risks, that is another gap.
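To make the checklist concrete, here is a minimal sketch of how one inventory entry and its gap checks might be structured. This is illustrative Python, not a regulatory template: the `AIToolRecord` class, its field names, and the `flag_gaps` checks are assumptions about how the dimensions above could be encoded, and a real implementation would map them onto your existing risk analysis methodology and tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIToolRecord:
    """One row in the AI tool inventory. All field names are illustrative."""
    name: str
    vendor: str
    sanctioned: bool                          # enterprise-approved vs. shadow AI
    phi_permitted: bool                       # is ePHI allowed to enter this tool?
    baa_in_place: bool                        # executed BAA covering this vendor?
    deployment: str                           # "local", "cloud_api", or "embedded"
    retains_inputs: Optional[bool] = None     # None = unknown, which is itself a finding
    processing_region: Optional[str] = None   # jurisdiction where data is processed
    role_restricted: bool = False             # role-based access controls enforced?
    usage_logged: bool = False                # auditable usage logs available?

def flag_gaps(tool: AIToolRecord) -> list[str]:
    """Return risk-analysis findings for a single inventory entry."""
    findings = []
    if not tool.sanctioned:
        findings.append("Shadow AI: tool in use without enterprise approval")
    if tool.phi_permitted and not tool.baa_in_place:
        findings.append("ePHI permitted but no BAA (45 CFR 164.502(e) exposure)")
    if tool.retains_inputs is None:
        findings.append("Input retention unknown; data flow not mapped")
    if tool.phi_permitted and tool.processing_region is None:
        findings.append("Processing jurisdiction undocumented")
    if tool.phi_permitted and not tool.role_restricted:
        findings.append("No role-based restriction on an ePHI-capable tool")
    if not tool.usage_logged:
        findings.append("Usage not logged; activity is not auditable")
    return findings

# Example: an unsanctioned consumer chatbot discovered in proxy logs
chatbot = AIToolRecord(
    name="consumer-chatbot", vendor="unknown", sanctioned=False,
    phi_permitted=False, baa_in_place=False, deployment="cloud_api",
)
for finding in flag_gaps(chatbot):
    print(finding)
```

The design point is that "unknown" is treated as a finding in its own right: a retention field you cannot fill in is a documented data flow gap, which is precisely the kind of deficiency an OCR investigator reviewing your risk analysis would look for.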
The Practical Problem With Waiting
Some organizations are holding off on updating their risk analyses because they are waiting for OCR to issue specific AI guidance. This is a mistake, and it misunderstands how HIPAA enforcement works. OCR does not need to issue new guidance to enforce existing requirements. The risk analysis obligation has been in place since 2005. The standard is whether your analysis is accurate and thorough given your current environment. If your current environment includes AI and your analysis does not, you are already noncompliant.
There is also a timing problem. Risk analyses are supposed to be periodic, and OCR has consistently interpreted that to mean they should be updated when the environment changes materially. The introduction of generative AI tools into healthcare workflows between 2023 and 2025 qualifies as a material change by any reasonable interpretation. An organization that updated its risk analysis in early 2023, before most of these tools were widely deployed, could once have argued the gap was excusable. One that has not updated since then cannot.
Do Not Forget State Law
HIPAA is the floor, not the ceiling. Several states have enacted or proposed AI-specific health data protections. Washington's My Health My Data Act, effective March 2024, applies broadly to health data and could implicate AI processing. Colorado's AI Act, signed in May 2024, imposes obligations on deployers of high-risk AI systems, which includes many healthcare applications. Your risk analysis should account for these overlapping requirements, particularly if you operate across multiple states.
How FirmAdapt Addresses This
FirmAdapt was built for exactly this kind of problem: deploying AI capabilities within regulated environments where the compliance requirements are non-negotiable. The platform's architecture keeps ePHI within controlled boundaries, supports BAA-covered deployments, and provides the audit logging and access controls that a HIPAA risk analysis requires you to document. When your risk analysis asks "how is AI being used and what safeguards are in place," FirmAdapt gives you concrete, defensible answers.
For organizations that need AI functionality but cannot afford the compliance exposure of general-purpose tools, FirmAdapt provides a path that does not require choosing between capability and compliance. The platform is designed so that the risk analysis conversation about AI is straightforward rather than alarming.