Tags: AI compliance, regulatory, healthcare, HIPAA, PHI

Employer-Sponsored Health Plans, ERISA, and the AI Question for HR Departments

By Basel Ismail · May 2, 2026

Self-insured employer health plans are HIPAA covered entities. This is not obscure trivia. It is black-letter law under 45 CFR 160.103, and it has been since the Privacy Rule took effect in 2003. Yet a surprising number of HR departments operate as though HIPAA is someone else's problem, something that lives with the TPA or the insurance carrier. When the employer is the plan sponsor of a self-insured group health plan, the plan itself is the covered entity, and the employer's HR staff who administer the plan are handling protected health information directly.

Now layer AI tools on top of that. The compliance picture gets complicated fast.

The Baseline: Why Self-Insured Plans Create Direct HIPAA Obligations

About 65% of covered workers in the U.S. are enrolled in self-insured plans, according to the 2023 KFF Employer Health Benefits Survey. That percentage has been climbing steadily, especially among employers with 200 or more workers, where it reaches 82%. When an employer self-insures, the group health plan is a covered entity under HIPAA. The employer itself is not a covered entity in its capacity as an employer, but the plan is, and the employer's employees who perform plan administration functions are subject to HIPAA's requirements.

This distinction matters operationally. Under 45 CFR 164.504(f), the plan sponsor can receive PHI from the plan only if the plan documents are amended to establish adequate separation between the plan administration functions and other employment functions. The employees who handle enrollment, claims disputes, eligibility determinations, and benefits coordination are on the plan administration side of that firewall. Everyone else in the company is not.
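
To make that firewall tangible, here is a minimal Python sketch of the separation expressed as an access-control check. The role names, purposes, and the check itself are hypothetical illustrations of the concept, not a substitute for the plan document amendments the rule actually requires.

```python
# A minimal sketch of the 45 CFR 164.504(f) firewall expressed as an
# access-control check. Role and purpose names are hypothetical,
# for illustration only.
PLAN_ADMIN_ROLES = {"benefits_analyst", "claims_coordinator", "enrollment_admin"}
PLAN_ADMIN_PURPOSES = {"enrollment", "claims_dispute", "eligibility",
                       "benefits_coordination"}

def may_access_phi(role: str, purpose: str) -> bool:
    """PHI access requires a plan-administration role AND a
    plan-administration purpose; employment functions get neither."""
    return role in PLAN_ADMIN_ROLES and purpose in PLAN_ADMIN_PURPOSES

assert may_access_phi("claims_coordinator", "claims_dispute")
assert not may_access_phi("recruiter", "eligibility")        # wrong side of the firewall
assert not may_access_phi("benefits_analyst", "promotion")   # plan role, employment purpose
```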

Most large employers get this right on paper. The plan documents include the required amendments. The workforce members with access to PHI receive training. But the introduction of AI tools into HR workflows is creating new gaps that existing compliance frameworks were never designed to address.

Where AI Enters the Picture

HR departments are adopting AI across a wide range of functions: benefits administration, claims analytics, employee wellness program targeting, leave management, and workforce planning. Some of these tools are built specifically for benefits administration. Others are general-purpose platforms (think large language models or analytics dashboards) applied to HR data that happens to include health plan information.

Here is where the HIPAA exposure materializes. If an AI tool processes, stores, or transmits PHI from the group health plan, the vendor operating that tool is almost certainly a business associate under 45 CFR 160.103. That means a Business Associate Agreement is required before any PHI flows to the vendor. The BAA must include the provisions specified in 45 CFR 164.504(e), covering permitted uses and disclosures, safeguards, breach notification obligations, and subcontractor requirements.
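
As a sketch of where that gate belongs in a procurement workflow, the hypothetical intake record below tracks whether the 45 CFR 164.504(e) contract elements are in place before any PHI is allowed to flow. The class, fields, and names are invented for illustration; whether a given BAA actually satisfies the rule is a question for counsel, not a script.

```python
from dataclasses import dataclass

# Hypothetical vendor-intake record; field names are illustrative, not a
# legal checklist. The point is where the gate sits: before any PHI moves.
@dataclass
class AIVendorIntake:
    name: str
    baa_executed: bool = False
    permitted_uses_defined: bool = False     # permitted uses and disclosures
    safeguards_required: bool = False        # appropriate safeguards for PHI
    breach_notification_terms: bool = False  # reporting/notification obligations
    subcontractor_flowdown: bool = False     # subcontractors bound to same terms

    def phi_flow_allowed(self) -> bool:
        # Every 164.504(e) element must be confirmed before PHI flows.
        return all((self.baa_executed, self.permitted_uses_defined,
                    self.safeguards_required, self.breach_notification_terms,
                    self.subcontractor_flowdown))

vendor = AIVendorIntake(name="wellness-analytics-platform", baa_executed=True)
if not vendor.phi_flow_allowed():
    print(f"Blocked: {vendor.name} has incomplete BAA terms; no PHI may flow.")
```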

The problem is that many AI tools are being adopted through procurement channels that do not route through privacy or compliance review. An HR operations team evaluates a workforce analytics platform, runs a pilot, uploads data, and starts generating insights. If that data includes information from the health plan, such as claims costs by department, utilization patterns, diagnosis categories, or even just enrollment data linked to identifiable individuals, the tool is touching PHI. No BAA, no adequate safeguards, no breach notification chain. The employer has a HIPAA violation before anyone in legal even knows the tool exists.
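
One mitigation is a screen in front of any bulk upload. The sketch below is a deliberately crude heuristic over hypothetical column names; a keyword match is not a compliance control, but it shows where such a check could sit so plan data gets flagged before it reaches an unvetted tool.

```python
import re

# Crude, illustrative pre-upload screen. A keyword match over column names
# is NOT a compliance control; it only marks datasets for privacy review
# before they reach an unvetted AI platform.
PHI_HINTS = re.compile(
    r"diagnosis|icd|cpt|claim|rx|prescription|ssn|dob|birth|member_id",
    re.IGNORECASE,
)

def flag_phi_columns(columns: list[str]) -> list[str]:
    """Return column names that suggest group-health-plan PHI."""
    return [c for c in columns if PHI_HINTS.search(c)]

upload = ["department", "headcount", "icd10_code", "claim_amount", "member_id"]
flagged = flag_phi_columns(upload)
if flagged:
    print(f"Hold upload; route to privacy review first: {flagged}")
```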

The ERISA Layer

ERISA adds a separate and equally serious set of obligations. Under ERISA Section 404(a), plan fiduciaries must act solely in the interest of participants and beneficiaries, with the care, skill, prudence, and diligence that a prudent person would use. When AI tools are used to make or inform decisions about plan administration, such as claims adjudication recommendations, eligibility determinations, or benefits design changes, the fiduciary duty question becomes unavoidable.

The DOL has been increasingly attentive to this. In Compliance Assistance Release No. 2022-01, which cautioned fiduciaries about offering cryptocurrency investments in 401(k) plans, the Department signaled that it expects fiduciaries to exercise independent, prudent judgment about novel products and technologies rather than deferring to vendors or market enthusiasm. While that release was directed at 401(k) plans, the fiduciary principles under ERISA Sections 404 and 405 apply equally to health plan administration: a fiduciary cannot outsource its judgment to an algorithm without maintaining oversight and an understanding of how the algorithm reaches its conclusions.

Consider a concrete scenario. An employer uses an AI analytics platform to identify high-cost claimants and recommend plan design changes intended to reduce costs. If those design changes disproportionately affect participants with specific health conditions, the employer may face claims under ERISA Section 510 (interference with protected rights) or even the Mental Health Parity and Addiction Equity Act if the changes create non-quantitative treatment limitations that fall harder on mental health or substance use disorder benefits. The AI tool's recommendation does not insulate the fiduciary from liability. The fiduciary made the decision.

The Minimum Necessary Standard and AI Training Data

One of the most underappreciated risks involves the HIPAA minimum necessary standard under 45 CFR 164.502(b). Covered entities must make reasonable efforts to limit PHI disclosures to the minimum necessary to accomplish the intended purpose. AI tools, particularly machine learning models, are data-hungry by design. They perform better with more data. This creates a structural tension with minimum necessary requirements.

If an HR team feeds an AI platform three years of granular claims data to build a predictive model for benefits utilization, the question is whether all of that data was necessary for the specific administrative purpose. If the purpose was forecasting aggregate plan costs, individual-level diagnosis codes probably were not necessary. If the AI vendor retains that data for model improvement or training purposes, you have a secondary use problem that likely exceeds the scope of the BAA, assuming a BAA even exists.
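
Here is a worked sketch, with hypothetical columns, of what minimum necessary can mean in practice: if the stated purpose is an aggregate cost forecast, the reduction can happen inside the plan-administration boundary before anything reaches the vendor.

```python
import pandas as pd

# Illustrative only: columns and values are hypothetical. If the stated
# purpose is forecasting aggregate plan cost, aggregate before the data
# leaves the plan-administration boundary and drop person-level fields.
claims = pd.DataFrame({
    "member_id":     ["a1", "a2", "a3", "a4"],
    "icd10_code":    ["E11.9", "J45.909", "I10", "E11.9"],
    "paid_amount":   [1200.0, 300.0, 450.0, 2100.0],
    "service_month": ["2025-01", "2025-01", "2025-02", "2025-02"],
})

# Minimum-necessary input for an aggregate cost forecast: monthly totals,
# no member_id, no diagnosis codes.
model_input = claims.groupby("service_month", as_index=False)["paid_amount"].sum()
print(model_input)
```

Even aggregates can re-identify individuals when cell sizes are small, so suppression thresholds for low-count groups belong in the same step.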

HHS OCR has not issued AI-specific enforcement guidance yet, but the existing framework is clear enough. The $4.3 million civil money penalty imposed on the University of Texas MD Anderson Cancer Center in 2018 (later vacated by the Fifth Circuit in 2021) demonstrated that OCR takes data handling failures seriously, even when the underlying intent was legitimate research and analysis. The regulatory risk does not require a novel theory of liability. It requires applying existing rules to new technology.

Practical Steps for Compliance Teams

  • Audit AI tool procurement in HR. Identify every tool that touches data connected to the group health plan. This includes analytics platforms, chatbots used for benefits questions, wellness program platforms, and any tool integrated with your HRIS that pulls plan data.
  • Verify BAA coverage. For every vendor identified, confirm that a compliant BAA is in place and that it specifically contemplates the AI-related processing activities. A generic BAA from 2018 probably does not cover model training on PHI.
  • Enforce the firewall. Revisit the plan document amendments required under 45 CFR 164.504(f). Make sure the separation between plan administration and employment functions accounts for AI tools that might bridge that gap by aggregating data across both domains.
  • Assess fiduciary exposure. If AI tools are informing plan design or administration decisions, document the fiduciary review process. The DOL will want to see that a human with fiduciary responsibility evaluated the AI's output, understood its limitations, and made an independent judgment.
  • Apply minimum necessary to data inputs. Before feeding data into any AI tool, evaluate whether the granularity and volume of PHI is necessary for the stated purpose. De-identify where possible under the Safe Harbor or Expert Determination methods in 45 CFR 164.514 (a minimal sketch follows this list).
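
As referenced in the last item, here is a minimal sketch of two Safe Harbor-style transformations over hypothetical field names. Safe Harbor under 45 CFR 164.514(b)(2) requires removing all eighteen identifier categories and having no actual knowledge that the remaining data could identify someone, so a snippet like this covers only a slice of the job and is not, by itself, de-identification.

```python
# Hypothetical field names; this covers only a few of the 18 Safe Harbor
# identifier categories, so it alone does NOT de-identify a dataset.
DIRECT_IDENTIFIERS = {"name", "ssn", "member_id", "email", "phone",
                      "street_address"}

def strip_record(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Safe Harbor's date rule: keep the year, drop month and day.
    if "service_date" in out:
        out["service_year"] = out.pop("service_date")[:4]
    return out

print(strip_record({
    "name": "Jane Doe", "member_id": "a1",
    "service_date": "2025-03-14", "paid_amount": 1200.0,
}))
# -> {'paid_amount': 1200.0, 'service_year': '2025'}
```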

How FirmAdapt Addresses This

FirmAdapt's architecture was built around the assumption that regulated data requires compliance controls at the infrastructure level, not as an afterthought. For organizations managing self-insured health plans, FirmAdapt enforces data segmentation that mirrors the HIPAA firewall requirements under 45 CFR 164.504(f), ensuring that PHI used for plan administration remains separated from employment data and that AI processing occurs within boundaries that satisfy the minimum necessary standard. Access controls, audit logging, and BAA-compliant data handling are built into the platform rather than bolted on through policy documents.

On the ERISA side, FirmAdapt provides the documentation and transparency infrastructure that fiduciaries need to demonstrate prudent oversight of AI-assisted decisions. When an AI tool generates a recommendation that touches plan administration, the platform captures the inputs, the logic applied, and the human review that followed. This gives compliance teams and general counsel a defensible record if the DOL or a plan participant ever asks how a particular decision was made.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free