OCR Enforcement in 2026: What the MMG Fusion Settlement Tells Us About Where Audits Are Going
In February 2025, OCR settled with MMG Fusion, a Michigan-based dental marketing and patient engagement company, for $10,000 and a three-year corrective action plan. The fine itself is modest. The signal it sends is not.
MMG Fusion is a business associate. Not a hospital system. Not a health plan. A small software vendor that handles patient reviews and appointment reminders for dental practices. OCR investigated after a 2019 breach notification revealed that a misconfigured cloud server had exposed protected health information for approximately 13,000 patients. What they found during the investigation was arguably worse than the breach itself: MMG Fusion had never conducted a HIPAA-compliant risk analysis. Not an incomplete one. Not an outdated one. None at all.
The Settlement Details
The resolution agreement, published on HHS.gov, lays out a few key facts worth walking through.
- The breach: An unsecured cloud server exposed names, addresses, phone numbers, health insurance information, and treatment details for patients across multiple dental practices.
- The root cause finding: Failure to conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI, as required under 45 C.F.R. § 164.308(a)(1)(ii)(A).
- The penalty: $10,000 monetary settlement plus a corrective action plan requiring MMG Fusion to conduct a thorough risk analysis, develop a risk management plan, and submit to OCR monitoring for three years.
The corrective action plan is the real penalty here. Three years of OCR oversight means regular reporting, mandatory policy revisions subject to OCR approval, and the obligation to report any future security incidents or compliance failures during that window. For a small company, that operational burden dwarfs the $10,000 check.
Why This Settlement Matters More Than the Dollar Amount Suggests
OCR has been signaling for years that it would increase enforcement against business associates, not just covered entities. The MMG Fusion case is a clean example of that shift in practice. A few things stand out.
First, the company is small. OCR could have treated this as a technical assistance case or closed it after the breach notification. Instead, they pursued a formal resolution agreement. That tells you something about priorities. OCR's enforcement discretion is increasingly being exercised against the vendor layer of the healthcare ecosystem, where a lot of ePHI actually lives and where compliance maturity tends to be lowest.
Second, the core violation was not a sophisticated attack or a novel vulnerability. It was the absence of a risk analysis. This is the most basic requirement in the HIPAA Security Rule, and OCR has made it the centerpiece of enforcement action after enforcement action. According to OCR's own breach portal and resolution agreements, failure to conduct a risk analysis has been cited in the majority of settlements going back over a decade. The 2023 settlement with Banner Health ($1.25 million), the 2022 settlement with Oklahoma State University Center for Health Sciences ($875,000), and numerous others all feature the same finding. OCR is not getting creative. They are enforcing the same requirement because organizations keep failing to meet it.
Third, the corrective action plan template OCR used here is detailed and prescriptive. It requires not just a risk analysis but a written risk management plan that addresses every identified vulnerability, workforce training, and ongoing reporting. If you read the actual CAP document, it reads like a compliance program blueprint that OCR wishes the company had implemented on its own.
Now Add LLMs to This Picture
Here is where things get genuinely concerning for anyone deploying AI tools in a healthcare context. MMG Fusion exposed 13,000 records through a misconfigured server. That is a static data exposure; the server was sitting there, the data was at rest, and the scope was bounded by what was stored on that particular system.
An LLM-related breach has fundamentally different characteristics. Consider a business associate that deploys a large language model to handle patient communications, summarize clinical notes, or automate appointment workflows. The failure modes multiply: the model may be hosted by a third-party API provider without a BAA in place, the training pipeline may inadvertently ingest and memorize ePHI, or prompt injection may expose patient data through the model's outputs.
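One mitigation for the first failure mode above is a guard that inspects text before it ever leaves your infrastructure for a third-party model API. The sketch below is hypothetical and deliberately minimal: the pattern names and regexes are illustrative assumptions, and real PHI detection requires far more than regex matching (patient names, MRNs, and free-text identifiers will slip past patterns like these).

```python
import re

# Hypothetical pre-call guard. Pattern coverage is illustrative only;
# a production deployment would use a dedicated de-identification service.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_phi(text: str) -> tuple[str, list[str]]:
    """Replace matched identifiers with placeholders; report what was found."""
    found = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

def prepare_outbound_prompt(prompt: str) -> str:
    """Call this before handing text to any external API client."""
    clean, found = redact_phi(prompt)
    if found:
        # A real deployment would record this event in its audit trail and,
        # depending on the risk management plan, block the call entirely.
        pass
    return clean  # pass `clean`, never the raw prompt, downstream
```

The design point is that the check runs on your side of the network boundary, so the decision about what the vendor sees is yours, not the vendor's.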
The scope problem is immediate. Unlike a static database, an LLM processes data dynamically. If ePHI is embedded in model weights or accessible through prompt manipulation, you may not be able to determine exactly which records were compromised, when the exposure began, or how many downstream users accessed the information. The breach notification requirements under 45 C.F.R. § 164.404 require you to identify and notify each affected individual. With an LLM data leakage event, that identification may be functionally impossible.
The risk analysis problem compounds this. If OCR investigated an LLM-related breach and found that the deploying organization had never assessed the specific risks of using generative AI with ePHI, you would be looking at the same 45 C.F.R. § 164.308(a)(1)(ii)(A) violation that sank MMG Fusion, but with a breach affecting potentially orders of magnitude more individuals and involving a technology that regulators are already watching closely. The penalty calculus would be entirely different. OCR's penalty tiers under the HITECH Act, as adjusted for inflation, allow annual penalties of up to $2,067,813 per violation category. A systemic LLM failure could implicate multiple violation categories simultaneously.
OCR has not yet published enforcement guidance specific to LLMs and HIPAA. But they do not need to. The existing Security Rule requirements are technology-neutral by design. A risk analysis must account for all systems that create, receive, maintain, or transmit ePHI. If you are running patient data through a language model and that model is not in your risk analysis, you have the same gap MMG Fusion had, just with significantly higher stakes.
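The gap described above can be checked mechanically: cross-reference the inventory of systems that touch ePHI against the systems actually named in the current written risk analysis. This is a sketch under stated assumptions; the system names, fields, and data sources are hypothetical, and in practice the inventory would come from your asset register rather than a hardcoded list.

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    handles_ephi: bool

# Hypothetical asset inventory for illustration.
inventory = [
    System("appointment-db", handles_ephi=True),
    System("marketing-site", handles_ephi=False),
    System("llm-summarizer", handles_ephi=True),  # newly deployed AI tool
]

# Systems named in the most recent written risk analysis.
risk_analysis_scope = {"appointment-db"}

def uncovered_ephi_systems(inventory, scope):
    """Any system that creates, receives, maintains, or transmits ePHI
    but is absent from the risk analysis is an MMG-Fusion-style gap."""
    return [s.name for s in inventory if s.handles_ephi and s.name not in scope]
```

Run on the sample data above, the check flags the LLM tool as the uncovered system: exactly the condition that produces a § 164.308(a)(1)(ii)(A) finding.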
What This Means Practically
If you are a compliance officer or general counsel at a healthcare organization, or at a business associate serving healthcare clients, the MMG Fusion settlement is a useful gut check. A few concrete takeaways:
- Risk analysis is not optional and not a one-time event. OCR expects it to be updated whenever you adopt new technology, change infrastructure, or identify new threats. Deploying an LLM tool is exactly the kind of environmental change that triggers a reassessment.
- Business associates are squarely in scope. If you are a SaaS vendor, a marketing platform, a data analytics provider, or an AI tool vendor touching ePHI, you are subject to the same Security Rule requirements as the covered entity. The BAA does not shield you from OCR enforcement; it confirms your obligations.
- The corrective action plan is the real cost. Three years of OCR oversight is expensive, distracting, and constraining. For a company trying to grow or raise capital, it is a material business risk.
- Document everything. OCR's investigation methodology relies heavily on documentation requests. If you cannot produce a written risk analysis, policies, and evidence of implementation, the absence itself becomes the finding.
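On the documentation point in particular, even a simple append-only log of AI interactions gives you something concrete to produce when OCR asks. The sketch below is a hypothetical schema, not a standard; the field names are assumptions. One design choice worth noting: it stores a hash of the prompt rather than the prompt itself, so the audit log does not become yet another ePHI store.

```python
import datetime
import hashlib
import json

def log_ai_interaction(log_path, user_id, system, purpose, prompt_text):
    """Append one audit record per AI call to a JSON-lines file.
    Only a SHA-256 digest of the prompt is retained, not its content."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "system": system,      # e.g. the hypothetical "llm-summarizer"
        "purpose": purpose,    # the documented, permitted purpose of the call
        "prompt_sha256": hashlib.sha256(prompt_text.encode()).hexdigest(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

A log like this answers the questions an investigator actually asks: who used the tool, when, on which system, and for what purpose, without expanding the footprint of protected data.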
How FirmAdapt Addresses This
FirmAdapt was built for exactly this kind of regulatory environment. The platform's architecture keeps ePHI within customer-controlled infrastructure, which means there is no third-party model API silently processing protected data without a BAA or outside the scope of your risk analysis. Every AI interaction is logged, auditable, and traceable, giving compliance teams the documentation OCR expects to see when they come asking.
More specifically, FirmAdapt's compliance-first design means that risk analysis coverage for AI tools is not an afterthought bolted onto a general-purpose LLM. Data handling, access controls, and audit trails are built into the product so that when OCR asks how your organization assessed the risks of using AI with ePHI, you have a concrete, documented answer rather than a gap.