
NYDFS Part 500 and AI: What the October 2024 Guidance Actually Requires

By Basel Ismail · May 5, 2026


In October 2024, the New York Department of Financial Services published an industry letter on AI cybersecurity risks. If you skimmed it and filed it under "general AI hand-wringing from regulators," go back and read it again. The letter is more specific than you might expect, and it maps directly onto the amended Part 500 requirements that took effect in stages through November 2024. NYDFS is not floating abstract principles here. They are telling covered entities exactly how existing cybersecurity obligations apply when AI enters the picture.

Let's walk through it section by section.

Risk Assessments: AI Gets Its Own Line Item

Section 500.9 already requires covered entities to conduct periodic risk assessments. The October guidance makes clear that AI-related threats need to be explicitly addressed in those assessments. Not lumped into "emerging technology" as a vague category. Explicitly.

NYDFS identifies three distinct AI threat vectors that risk assessments should cover: AI-enabled social engineering (think deepfake voice and video used in business email compromise), AI-powered cyberattacks that can adapt and evade traditional defenses, and risks introduced by a company's own use of AI tools, including exposure of nonpublic information (NPI) through AI systems.

The third one is where most compliance teams should spend their time. If your organization is deploying AI tools internally, whether for underwriting, claims processing, customer service, or back-office automation, the risk assessment needs to account for how those tools handle NPI. Where does the data go? Who can access it? Is it being used to train models you don't control? These are not hypothetical questions. NYDFS expects documented answers.
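To see what "documented answers" might look like in practice, here is a minimal sketch of a risk register entry for a single AI tool. The structure and field names are assumptions for illustration, not a NYDFS-prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AIToolRiskEntry:
    """One AI tool's line item in a Section 500.9 risk assessment.

    A hypothetical structure; field names are illustrative only.
    """
    tool_name: str                 # e.g., "claims-summarizer"
    business_use: str              # underwriting, claims, customer service, ...
    npi_categories: list[str]      # which classes of NPI the tool can touch
    data_destinations: list[str]   # where inputs go: vendor, region, subprocessor
    access_roles: list[str]        # who can invoke the tool
    trains_external_models: bool   # are inputs training models you don't control?
    last_reviewed: str             # ISO date this entry was last reassessed
```

An entry like this answers the examiner's questions before they're asked: what the tool touches, where the data flows, and when someone last checked.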

A Practical Note on Frequency

The amended Part 500 already moved risk assessments to at least an annual cadence under Section 500.9(b). Given how fast AI tooling is evolving, annual may not be enough for the AI-specific components. The guidance doesn't mandate more frequent review, but it strongly implies that a static, once-a-year assessment won't hold up if an AI-related incident occurs and your last assessment was eleven months old and didn't mention the tool involved.

Third-Party AI Diligence: Your Vendor's Problem Is Your Problem

Section 500.11 governs third-party service provider security policies. The AI guidance layers on top of this in a way that should get your procurement and vendor management teams' attention.

NYDFS is explicit: if you're using third-party AI tools, you need to evaluate the cybersecurity practices of those AI providers with the same rigor you'd apply to any third-party service provider handling NPI. This includes understanding how the AI vendor processes, stores, and protects data; whether your data is used for model training; what access controls the vendor maintains; and what happens to your data if the relationship terminates. The diligence areas break down as follows, with a minimal tracking sketch after the list:

  • Data handling: Where does NPI go when it enters the AI system? Is it processed in environments you've approved?
  • Model training: Are your inputs being used to improve the vendor's general model? If so, that's a potential NPI exposure path.
  • Subprocessors: Does the AI vendor rely on other third parties (cloud providers, API services) that also touch your data?
  • Contractual protections: Do your agreements with AI vendors include the cybersecurity provisions required under Section 500.11(b)?
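One way to keep that diligence auditable is to track each area as explicit, verifiable items. A minimal sketch in Python, where every key name is an illustrative assumption rather than anything NYDFS prescribes:

```python
# Hypothetical diligence record mirroring the four areas above.
# None = not yet verified, True = verified, False = failed.
AI_VENDOR_DILIGENCE = {
    "data_handling": {
        "npi_processing_locations_approved": None,
        "encryption_in_transit_and_at_rest": None,
    },
    "model_training": {
        "inputs_excluded_from_general_model_training": None,
    },
    "subprocessors": {
        "subprocessor_list_obtained": None,
        "subprocessors_touching_npi_reviewed": None,
    },
    "contractual_protections": {
        "section_500_11_b_provisions_in_contract": None,
        "data_return_or_destruction_on_termination": None,
    },
}

def open_items(checklist: dict) -> list[str]:
    """Return every item that is unverified (None) or failed (False)."""
    return [
        f"{area}.{item}"
        for area, items in checklist.items()
        for item, status in items.items()
        if status is not True
    ]
```

Anything still unverified at contract signing is an open diligence item; anything failed is a finding to escalate before NPI touches the system.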

The amended Part 500 already requires written policies governing third-party risk, including minimum cybersecurity practices, due diligence processes, and contractual protections. The AI guidance is essentially saying: yes, all of that applies to your AI vendors too, and we will be looking at whether you treated them differently.

Access Controls and Data Governance

Section 500.7 requires access privileges to be limited to those necessary for job functions, reviewed periodically, and promptly revoked when no longer needed. The AI guidance extends this to AI systems themselves.

This is worth pausing on. NYDFS is treating AI tools as entities that have access to data, not just as software. If an AI system can query your customer database, pull claims records, or access financial data, that system's access needs to be governed under the same framework you use for human users. Principle of least privilege applies. Periodic review applies. Logging applies.

For organizations deploying large language models or AI assistants internally, this means thinking carefully about what data those systems can reach. A chatbot deployed for employee Q&A that has read access to your entire document management system is a Section 500.7 problem if that access isn't scoped and documented.
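As a sketch of what scoped access might look like, here is a hypothetical policy record for an internal Q&A assistant. The service identity, resource paths, and policy format are all assumptions for illustration:

```python
# Hypothetical least-privilege policy for an internal AI assistant,
# treating the AI system as its own identity under Section 500.7.
ASSISTANT_ACCESS_POLICY = {
    "principal": "svc-hr-qa-assistant",   # the AI tool gets a service identity
    "allow": [
        {"resource": "docs/hr-policies/*", "actions": ["read"]},
        {"resource": "docs/benefits/*",    "actions": ["read"]},
    ],
    "deny": [
        {"resource": "docs/claims/*", "actions": ["read"]},  # NPI: out of scope
        {"resource": "docs/**",       "actions": ["write", "delete"]},
    ],
    "review_cadence_days": 90,   # periodic access review, same as human users
    "log_all_queries": True,     # every retrieval attributable and logged
}
```

The point is not the format. It's that the assistant's reach is enumerated, deniable, reviewed on a schedule, and logged, exactly the treatment a human account would get.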

Monitoring and Logging

Section 500.14(b) requires covered entities to implement monitoring that can detect unauthorized access or use of NPI. The AI guidance highlights that AI systems introduce new monitoring challenges because they can access and process large volumes of data quickly, sometimes in ways that don't look like traditional unauthorized access patterns.

NYDFS expects covered entities to have monitoring capabilities that can detect anomalous AI behavior. If an AI tool suddenly starts pulling data outside its normal scope, or if query volumes spike in unusual patterns, your monitoring should flag it. This is a meaningful technical requirement. Standard SIEM rules built for human user behavior may not catch AI-driven anomalies without tuning.
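A starting point for that tuning is to baseline each AI service account and alert on scope and volume deviations. A minimal sketch, with event shapes and thresholds that are assumptions, not any SIEM vendor's API:

```python
from collections import defaultdict

def flag_ai_anomalies(events, allowed_scopes, baseline_qph, spike_factor=3.0):
    """Flag AI service-account activity worth investigating.

    events: dicts like {"principal": "svc-claims-bot",
                        "resource": "db.claims",
                        "hour": "2025-01-07T14"}
    allowed_scopes: principal -> set of approved resources
    baseline_qph:   principal -> typical queries per hour
    """
    alerts = []
    hourly_volume = defaultdict(int)
    for e in events:
        # Out-of-scope access: the tool touched a resource it isn't approved for.
        if e["resource"] not in allowed_scopes.get(e["principal"], set()):
            alerts.append(("out_of_scope", e["principal"], e["resource"]))
        hourly_volume[(e["principal"], e["hour"])] += 1
    # Volume spike: query rate far above this tool's observed baseline.
    for (principal, hour), count in hourly_volume.items():
        if count > spike_factor * baseline_qph.get(principal, float("inf")):
            alerts.append(("volume_spike", principal, f"{count} queries in {hour}"))
    return alerts
```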

The guidance also touches on the integrity of AI outputs. If you're relying on AI for decisions that affect customers or operations, you need some mechanism to detect when outputs are unreliable or potentially compromised. This connects to the broader Part 500 emphasis on data integrity, but it's a newer operational challenge that most monitoring frameworks weren't designed for.
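What that mechanism looks like depends on the use case, but even simple output validation catches a useful class of failures. A sketch, assuming a hypothetical claims-triage output with known valid ranges:

```python
def validate_triage_output(output: dict) -> list[str]:
    """Sanity-check an AI triage result before it drives a decision.

    Field names and bounds are illustrative assumptions; the point is
    that outputs get checked against known-valid ranges, not trusted.
    """
    problems = []
    if output.get("severity") not in {"low", "medium", "high"}:
        problems.append(f"unexpected severity: {output.get('severity')!r}")
    amount = output.get("estimated_amount")
    if not isinstance(amount, (int, float)) or not (0 <= amount <= 10_000_000):
        problems.append(f"estimated_amount out of range: {amount!r}")
    if output.get("claim_id") is None:
        problems.append("missing claim_id: output may be unreliable or truncated")
    return problems  # non-empty -> route to human review, log for monitoring
```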

Incident Response: AI Incidents Are Cybersecurity Incidents

Section 500.16 requires a written incident response plan, and Section 500.17 sets out notification requirements for cybersecurity events. The AI guidance confirms what you'd expect: an AI-related security event, whether it's a data exposure through an AI tool, a compromised AI system, or an AI-enabled attack, falls squarely within existing incident response and notification obligations.

The 72-hour notification requirement under Section 500.17(a)(1) applies. If an AI system is involved in a cybersecurity event that meets the reporting threshold, the clock starts the same way it would for any other incident. NYDFS has shown it takes Part 500 enforcement seriously; the $1 million settlement with First American Title Insurance in 2023 over a vulnerability that exposed NPI is a reminder that compliance failures carry real consequences.
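The deadline arithmetic is trivial, but it belongs in your incident tooling so it's computed rather than remembered. A sketch with a made-up determination timestamp:

```python
from datetime import datetime, timedelta, timezone

# The clock runs from the determination that a reportable event occurred.
determined_at = datetime(2024, 11, 3, 9, 15, tzinfo=timezone.utc)  # example only
notify_by = determined_at + timedelta(hours=72)
print(f"NYDFS notification due by {notify_by.isoformat()}")
# -> NYDFS notification due by 2024-11-06T09:15:00+00:00
```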

Your incident response plan should include scenarios involving AI systems. Tabletop exercises should cover AI-specific incidents. If your IR plan was last updated before you deployed AI tools, it needs revision.

The Bigger Picture

What makes this guidance notable is how methodically NYDFS mapped AI risks onto existing Part 500 requirements. They didn't create a new framework. They said: here's how the framework you're already subject to applies to this technology. For compliance teams, that's both helpful and demanding. Helpful because you're not starting from scratch. Demanding because "we didn't think Part 500 applied to our AI tools" is not going to be a viable defense.

The amended Part 500 also introduced the CISO reporting requirement under Section 500.4(b), which mandates that the CISO report at least annually to the senior governing body on the cybersecurity program. AI risk should be a line item in that report. If your CISO isn't briefing the board on AI-related cybersecurity risks, the October guidance gives you a clear reason to start.

How FirmAdapt Addresses This

FirmAdapt was built for exactly this kind of regulatory environment, where AI capabilities need to operate within well-defined compliance boundaries. The platform's architecture enforces access controls, data governance, and audit logging at the system level, so AI interactions with sensitive data are scoped, monitored, and documented in ways that align with Part 500 requirements. Data residency, model isolation, and third-party processing controls are built into the infrastructure rather than bolted on after deployment.

For organizations subject to NYDFS Part 500, FirmAdapt provides the kind of auditable, compliance-first AI deployment that the October 2024 guidance contemplates. Risk assessment documentation, access control enforcement, monitoring of AI system behavior, and incident-relevant logging are part of the core platform, not optional configurations. If you need to demonstrate to NYDFS that your AI tools operate within your cybersecurity program, FirmAdapt gives you the technical foundation and the documentation trail to do it.
