AI Vendor Due Diligence Checklist for Healthcare CISOs in 2026
If you are evaluating AI vendors for a healthcare environment right now, you already know the landscape has shifted considerably since HHS published its final rule on AI and HIPAA in late 2025. The enforcement posture is different. OCR's budget got a 15% bump for FY2026, and they have been explicit about treating AI-related PHI exposure as a priority. Meanwhile, the average healthcare data breach now costs $10.93 million according to IBM's 2025 report, up from $9.77 million the year before.
So here is a 47-point checklist you can actually hand to a vendor, or use internally to score responses. I have organized it into logical groupings. Some of these are obvious. Some are the ones that trip people up 18 months into a contract.
Business Associate Agreements and Contractual Structure
- 1. Does the vendor execute a BAA that explicitly covers AI-specific data processing, not just generic cloud hosting?
- 2. Does the BAA enumerate all subcontractors and downstream processors who may access PHI?
- 3. Are sub-BAAs in place with every subprocessor, and can the vendor produce copies on request?
- 4. Does the BAA address model training on PHI, with an explicit prohibition unless separately authorized?
- 5. Is there a contractual training data opt-out, not just a policy statement on a website?
- 6. Does the BAA specify breach notification timelines shorter than or equal to the HIPAA 60-day window? (Push for 72 hours; that aligns with the HHS proposed harmonization with CISA incident reporting.)
- 7. Does the contract include a right-to-audit clause, with specifics on scope, frequency, and access to infrastructure?
- 8. Are data return and destruction obligations defined for contract termination, including model weights trained on your data?
- 9. Does the agreement address indemnification for regulatory fines arising from vendor-side failures?
- 10. Is there a clear data processing addendum that specifies lawful bases for processing beyond the BAA?
Encryption and Data Protection
- 11. Is PHI encrypted at rest using AES-256 or equivalent?
- 12. Is PHI encrypted in transit using TLS 1.3?
- 13. Does the vendor support customer-managed encryption keys (CMEK)?
- 14. Is data encrypted during inference processing, or is PHI decrypted in memory with no protections? Ask specifically about confidential computing or trusted execution environments.
- 15. Are encryption keys rotated on a defined schedule, and is that schedule documented?
- 16. Does the vendor use tokenization or de-identification before PHI enters any AI pipeline?
- 17. If de-identification is used, does it meet the HIPAA Safe Harbor standard (all 18 identifiers removed) or the Expert Determination standard under 45 CFR 164.514(b)?
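To make item 16 concrete, here is a minimal sketch of pre-pipeline redaction. The patterns below cover only a handful of the 18 Safe Harbor identifiers and are illustrative, not production-grade; a real implementation must address all 18 categories plus the catch-all for any other unique identifying number or code, and would typically use a dedicated de-identification tool rather than hand-rolled regexes.

```python
import re

# Illustrative patterns for a few of the 18 Safe Harbor identifiers.
# A real de-identification step under 45 CFR 164.514(b)(2) must cover
# all 18 categories; these regexes are deliberately simplistic.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders before the
    text is allowed into any AI prompt or embedding pipeline."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt DOB 04/12/1958, SSN 123-45-6789, call 555-867-5309."
print(redact(note))
# → "Pt DOB [DATE], SSN [SSN], call [PHONE]."
```

The point of asking for this at the pipeline boundary (rather than as policy) is that it is testable: you can feed a vendor synthetic PHI and verify nothing survives to the model input.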
Data Retention, Residency, and Deletion
- 18. Where is PHI stored geographically? Can you contractually restrict it to U.S. data centers?
- 19. What is the default data retention period, and can you configure it?
- 20. Does the vendor retain prompt and response logs containing PHI? For how long?
- 21. Can you trigger deletion of all PHI on demand, with a certificate of destruction?
- 22. Does the vendor's backup and disaster recovery process create additional PHI copies, and are those covered by the same retention and deletion policies?
- 23. If the vendor uses a vector database or retrieval-augmented generation architecture, where are the embeddings stored and are they considered PHI? (They almost certainly are if generated from PHI. The 2025 OCR guidance on derived data was clear on this.)
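Items 21 through 23 all come down to the same architectural question: does deletion cascade across every store that can hold PHI, including derived data? A minimal sketch, with entirely hypothetical store names, of a cascade that enumerates each store and emits a fingerprinted certificate-of-destruction record:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical registry of every store that can hold a customer's PHI,
# including derived data (embeddings) and backup copies. Deletion must
# reach all of them, not just the primary database.
STORES = ["primary_db", "prompt_logs", "vector_index", "backup_snapshots"]

def delete_customer_phi(customer_id: str, stores=STORES) -> dict:
    """Cascade deletion across all registered stores and return a
    certificate-of-destruction record."""
    purged = []
    for store in stores:
        # In a real system each store exposes its own deletion API;
        # here we only record that the cascade reached it.
        purged.append(store)
    cert = {
        "customer_id": customer_id,
        "stores_purged": purged,
        "completed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Fingerprint the certificate so its contents can be verified later.
    cert["digest"] = hashlib.sha256(
        json.dumps(cert, sort_keys=True).encode()
    ).hexdigest()
    return cert

cert = delete_customer_phi("cust-001")
print(cert["stores_purged"])
```

When evaluating a vendor's answer to item 21, ask for the equivalent of the `STORES` registry: if they cannot enumerate every location PHI can land, they cannot credibly certify destruction.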
Model Training and Data Usage
- 24. Does the vendor use customer data to train, fine-tune, or improve foundation models? Get this in writing, not from a FAQ page.
- 25. If training on customer data is an option, is it opt-in with a separate written authorization?
- 26. Can the vendor demonstrate technical controls (not just policies) that prevent PHI from leaking into training datasets?
- 27. Does the vendor perform membership inference testing or other privacy audits on their models?
- 28. If the vendor uses third-party foundation models (OpenAI, Anthropic, open-source), do those providers have their own sub-BAAs in place?
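For item 26, "technical controls, not just policies" means a gate at the pipeline boundary. A minimal sketch, with hypothetical field names: a record reaches the training export only if it carries both an opt-in authorization flag (item 25) and a de-identification attestation (item 17), and everything else is rejected by default.

```python
from dataclasses import dataclass

@dataclass
class Record:
    record_id: str
    opt_in: bool        # separate written authorization (item 25)
    deidentified: bool  # passed Safe Harbor or Expert Determination (item 17)

def training_export(records):
    """Gate at the pipeline boundary: only opted-in, de-identified
    records may enter the training set. Deny by default."""
    return [r.record_id for r in records
            if r.opt_in and r.deidentified]

batch = [
    Record("a", opt_in=True, deidentified=True),
    Record("b", opt_in=True, deidentified=False),
    Record("c", opt_in=False, deidentified=True),
]
print(training_export(batch))  # → ['a']
```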
Access Controls and Authentication
- 29. Does the platform support RBAC with granular permissions aligned to minimum necessary standards under 45 CFR 164.502(b)?
- 30. Is MFA enforced for all administrative and end-user access?
- 31. Does the vendor support SSO integration with your identity provider?
- 32. Are API keys scoped, rotatable, and auditable?
- 33. Can you restrict access by IP range, VPN, or network segment?
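The minimum-necessary standard in item 29 translates technically into deny-by-default authorization: each role is granted only the PHI operations its job function requires, and anything unlisted fails. A minimal sketch with hypothetical roles and permission strings:

```python
# Hypothetical role-to-permission map enforcing "minimum necessary"
# (45 CFR 164.502(b)): each role gets only the PHI operations its job
# function requires; unknown roles and unlisted actions are denied.
ROLE_PERMISSIONS = {
    "clinician":   {"phi:read", "phi:annotate"},
    "billing":     {"phi:read_billing_fields"},
    "ml_engineer": set(),  # no direct PHI access at all
}

def is_allowed(role: str, action: str) -> bool:
    """Deny-by-default check: absence of a grant means no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("clinician", "phi:read"))    # → True
print(is_allowed("ml_engineer", "phi:read"))  # → False
```

When a vendor claims RBAC support, ask what the default is for a role with no explicit grants; "allow" as a default is disqualifying under minimum necessary.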
Audit Logging and Monitoring
- 34. Does the vendor provide immutable audit logs for all PHI access events?
- 35. Do logs capture who accessed what data, when, and what the AI system did with it (including inference outputs)?
- 36. Can you export logs to your own SIEM in real time?
- 37. Are logs retained for at least six years, consistent with HIPAA's administrative requirements under 45 CFR 164.530(j)?
- 38. Does the vendor provide anomaly detection or alerting on unusual PHI access patterns?
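"Immutable" in item 34 should be verifiable, not just asserted. One common mechanism (alongside WORM storage) is hash chaining: each log entry embeds the hash of the previous entry, so any after-the-fact edit breaks the chain. A minimal sketch of the idea:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry embeds the hash of the previous
    entry; editing any historical entry invalidates the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, actor: str, action: str, resource: str) -> None:
        entry = {
            "actor": actor,
            "action": action,
            "resource": resource,
            "prev_hash": self._last_hash,
        }
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = entry_hash
        self.entries.append(entry)
        self._last_hash = entry_hash

    def verify(self) -> bool:
        """Recompute every hash; any tampering returns False."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append("dr_smith", "phi:read", "patient/123")
log.append("model", "inference", "patient/123/note")
print(log.verify())  # → True
log.entries[0]["actor"] = "someone_else"  # tamper with history
print(log.verify())  # → False
```

A vendor offering hash-chained or WORM-backed logs can hand you a verification procedure; a vendor offering a mutable database table labeled "audit log" cannot.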
Incident Response and Breach Notification
- 39. What is the vendor's contractual breach notification timeline? Again, 60 days is the HIPAA maximum. You want something faster.
- 40. Does the vendor have a documented incident response plan, and will they share it?
- 41. Has the vendor experienced a breach in the past three years? Check the HHS Breach Portal; anything affecting 500+ individuals is public.
- 42. Does the vendor carry cyber liability insurance, and what are the coverage limits?
Compliance Certifications and Third-Party Validation
- 43. Does the vendor hold a current SOC 2 Type II report? When was the last audit period?
- 44. Is the vendor HITRUST CSF certified? (HITRUST r2 validated assessment is the meaningful one; a self-assessment is not equivalent.)
- 45. Has the vendor completed a NIST AI RMF assessment or mapped their controls to it?
- 46. Can the vendor provide penetration test results from an independent third party conducted within the last 12 months?
- 47. Does the vendor participate in any regulatory sandbox or have documented correspondence with OCR regarding their AI product's compliance posture?
How to Use This List
Send it as a questionnaire before you sign anything. Weight the responses. Items 1 through 10 and 24 through 28 are deal-breakers; if a vendor cannot answer those clearly, the rest does not matter. Items 11 through 17 should be verifiable through documentation, not just verbal assurances. And items 34 through 38 are where most vendors fall short in practice. They will say they have logging, but when you ask for real-time SIEM export or six-year retention, the conversation gets quiet.
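The scoring approach above can be sketched in a few lines. The weights here are illustrative placeholders, not recommendations; the structural point is that deal-breaker items are gated (a single failure zeroes the score) while everything else is weighted and summed.

```python
# Items 1-10 and 24-28 are deal-breakers per the guidance above;
# a single "no" on any of them fails the vendor outright.
DEAL_BREAKERS = set(range(1, 11)) | set(range(24, 29))

def score_vendor(responses, weights=None):
    """responses maps item number (1-47) to pass/fail; weights maps
    item number to a weight (default 1.0 per non-deal-breaker item)."""
    weights = weights or {}
    if any(not responses.get(i, False) for i in DEAL_BREAKERS):
        return 0.0, "fail: deal-breaker item unmet"
    total = sum(weights.get(i, 1.0)
                for i, passed in responses.items()
                if passed and i not in DEAL_BREAKERS)
    return total, "pass"

# Vendor that meets every item, with audit logging (34-38) weighted double:
responses = {i: True for i in range(1, 48)}
weights = {i: 2.0 for i in range(34, 39)}
print(score_vendor(responses, weights))  # → (37.0, 'pass')
```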
One more thing worth noting: OCR's 2024 settlement with Montefiore Medical Center ($4.75 million) turned partly on inadequate audit controls and failure to monitor access logs. That was before AI was in the picture. Layer AI inference on top of an already complex data access environment, and the audit trail requirements become substantially more demanding.
How FirmAdapt Addresses This
FirmAdapt was built from the ground up to satisfy exactly this kind of checklist. The platform executes BAAs that explicitly cover AI inference and prohibit training on customer data by default. PHI is encrypted at rest and in transit, with customer-managed keys available, and all processing happens in U.S.-based infrastructure with no data leaving your designated environment. Audit logs are immutable, exportable to your SIEM, and retained in compliance with HIPAA's six-year requirement.
On the model training question, FirmAdapt's architecture ensures that customer data never enters any training pipeline. Embeddings, prompts, and outputs are treated as PHI throughout their lifecycle, with deletion controls that cover derived data. If you are running vendors through this checklist, FirmAdapt can answer all 47 points with documentation, not just promises.