Insurance Underwriting AI and the State-by-State Approval Maze
If you are deploying AI in insurance underwriting, you already know there is no single federal regulator waiting to give you a thumbs up. Insurance regulation in the U.S. is a state affair, which means 50 different departments of insurance, 50 different sets of expectations, and an increasingly divergent patchwork of rules around algorithmic decision-making. The NAIC has tried to impose some coherence. Colorado went ahead and legislated. New York issued guidance. And the rest of the states are somewhere on a spectrum between "watching closely" and "we will get to it." Here is where things actually stand.
The NAIC Model Bulletin: A Framework, Not a Law
In December 2023, the NAIC adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. The bulletin is built on the NAIC's existing AI principles from 2020 (fair, accountable, compliant, transparent, secure) and attempts to translate those principles into concrete expectations for insurers using AI and machine learning in underwriting, rating, claims, and marketing.
A few things worth noting about the Model Bulletin. First, it applies broadly. The NAIC defines "AI System" to include machine learning, natural language processing, neural networks, and similar technologies. It also covers the use of third-party vendor models, which is significant because many insurers license underwriting models rather than building them in-house. Under the bulletin, the insurer remains responsible for the outputs of those models regardless of who built them.
Second, the bulletin requires insurers to implement a governance framework proportionate to the risk posed by each AI system. For underwriting, that means documented risk assessments, ongoing monitoring for unfair discrimination, and an audit trail that can be produced for regulators. The bulletin specifically calls out the risk that AI systems may use proxy variables that correlate with protected classes, even when those classes are not directly used as inputs.
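One common screening step for the proxy-variable risk the bulletin describes is to correlate each candidate model input against a protected attribute that is held out for testing purposes only. The sketch below is a minimal illustration of that idea, not a methodology endorsed by the NAIC; the feature names, the numeric encoding of the protected attribute, and the 0.3 threshold are all assumptions chosen for the example.

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx, sy = pstdev(xs), pstdev(ys)
    return cov / (sx * sy) if sx and sy else 0.0

def flag_proxy_candidates(features, protected, threshold=0.3):
    """Return features whose correlation with the (test-only) protected
    attribute exceeds the threshold. A flag is a starting point for
    review, not proof of unfair discrimination."""
    return {
        name: round(pearson(values, protected), 3)
        for name, values in features.items()
        if abs(pearson(values, protected)) >= threshold
    }

# Illustrative data: 'zip_density' tracks the protected attribute closely,
# 'vehicle_age' does not.
features = {
    "zip_density": [9, 8, 8, 7, 2, 1, 2, 1],
    "vehicle_age": [3, 7, 2, 9, 4, 6, 5, 8],
}
protected = [1, 1, 1, 1, 0, 0, 0, 0]
print(flag_proxy_candidates(features, protected))
```

In a real program, pairwise correlation is only a first pass: a feature can act as a proxy jointly with others even when each marginal correlation is small, which is why regulators focus on outcome testing as well as input review.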
Third, and this is the part that trips people up, the Model Bulletin is not binding on any state. It is a template. States can adopt it, modify it, or ignore it entirely. As of mid-2025, several states have adopted versions of the bulletin or incorporated its principles into regulatory guidance, but adoption is uneven. Connecticut, Vermont, and a handful of others have moved relatively quickly. Many states have not.
Colorado SB21-169: The One That Actually Has Teeth
Colorado's SB21-169, signed into law in July 2021, is the most aggressive state-level attempt to regulate AI in insurance. The law amends Colorado's insurance statutes to explicitly prohibit insurers from using external consumer data and information sources (ECDIS), algorithms, and predictive models in ways that unfairly discriminate based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression.
The Colorado Division of Insurance finalized its first implementing regulation, Regulation 10-1-1, with an effective date of November 14, 2023. That regulation establishes governance and risk-management framework requirements for life insurers using ECDIS, algorithms, and predictive models; companion quantitative testing requirements direct insurers to assess whether their models produce outcomes that disproportionately impact protected classes. Insurers must submit a governance framework to the Division and, for life insurance, must complete initial testing and reporting by specific deadlines that began rolling out in 2024.
What makes Colorado particularly notable is the specificity of its requirements. Insurers must be able to articulate the business justification for each data element used in a model. If a model produces disparate impact, the insurer must demonstrate that the data element or model component is actuarially justified and that there is no less discriminatory alternative reasonably available. That standard, borrowed from fair lending law, is a significant compliance burden for any insurer relying on complex ML models where feature importance is difficult to isolate.
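The kind of outcome testing described above is often built on an adverse impact ratio: each group's approval rate divided by the most-favored group's rate. The sketch below illustrates that calculation only; Colorado's regulation prescribes its own testing methodology, and the group labels, counts, and the commonly cited 0.8 reference point are assumptions for the example.

```python
def adverse_impact_ratio(approvals_by_group):
    """Ratio of each group's approval rate to the most-favored group's.
    Ratios well below 1.0 (a common rule of thumb flags < 0.8) suggest
    disparate impact worth investigating; the threshold is illustrative,
    not Colorado's standard."""
    rates = {
        group: approved / total
        for group, (approved, total) in approvals_by_group.items()
    }
    best = max(rates.values())
    return {group: round(rate / best, 3) for group, rate in rates.items()}

# Hypothetical counts: (approved applications, total applications) per group
counts = {"group_a": (720, 1000), "group_b": (505, 1000)}
print(adverse_impact_ratio(counts))  # group_b: 0.505 / 0.72 ≈ 0.701
```

Under the "less discriminatory alternative" standard described above, a ratio like 0.701 would not end the analysis: the insurer would still need to show the driving data elements are actuarially justified and that no reasonably available alternative narrows the gap.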
Colorado also requires insurers to maintain records of their testing for at least seven years. If you are running underwriting models in Colorado, your compliance documentation needs to be deep, current, and auditable.
New York Circular Letter No. 1 (2019): Older but Still Relevant
New York's Insurance Circular Letter No. 1 (2019), issued January 18, 2019, by the Department of Financial Services, predates the current wave of AI regulation but remains an important reference point. The letter reminds insurers that the use of external data sources, algorithms, and predictive models must comply with existing New York insurance law, including prohibitions on unfair discrimination under Insurance Law Sections 2606 and 4224.
The circular letter does not create new legal obligations. Instead, it clarifies that existing anti-discrimination requirements apply to algorithmic underwriting just as they apply to traditional underwriting. DFS explicitly states that insurers cannot use external data or models as a proxy for protected classes, and that the use of such tools does not shield an insurer from liability for discriminatory outcomes.
New York has also been active on the enforcement side. DFS has conducted targeted examinations of insurers' use of AI and external data, and the department has signaled that it expects insurers to be able to explain how their models work and demonstrate that they do not produce unfairly discriminatory results. In practice, this means New York regulators may request model documentation, validation reports, and disparate impact analyses during examinations, even without a Colorado-style statute mandating those specific deliverables.
The Practical Problem: Operating Across Multiple States
For any insurer or insurtech operating nationally, the challenge is not understanding any single state's requirements. It is managing the cumulative burden of all of them simultaneously. Consider the landscape:
- Colorado requires quantitative disparate impact testing with specific reporting timelines and a "less discriminatory alternative" standard.
- New York expects model explainability and compliance with existing anti-discrimination law, enforced through examinations.
- Connecticut adopted a version of the NAIC Model Bulletin in 2024, requiring governance frameworks and risk assessments.
- Illinois has enacted its own AI-related legislation (HB 3773, effective January 1, 2026) that imposes notice requirements when AI is used in employment decisions, signaling the state's broader direction on algorithmic accountability.
- Many other states have pending legislation or have issued informal guidance that may or may not align with the NAIC framework.
The result is that a single underwriting model deployed across 30 states may need to satisfy materially different documentation, testing, and reporting requirements in each jurisdiction. Maintaining a separate compliance process for each state is expensive and error-prone. Maintaining a single process calibrated to the strictest state's requirements is simpler but may impose unnecessary constraints in more permissive jurisdictions.
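One way to keep that cumulative burden tractable is a per-jurisdiction requirements registry that each deployed model is checked against. The sketch below assumes hypothetical artifact names and a three-state registry invented for illustration; real obligations come from each state's statutes, regulations, and bulletins, not from this table.

```python
from dataclasses import dataclass, field

# Hypothetical required-artifact sets per state, for illustration only.
STATE_REQUIREMENTS = {
    "CO": {"governance_framework", "disparate_impact_test", "lda_analysis"},
    "NY": {"model_documentation", "validation_report", "disparate_impact_test"},
    "CT": {"governance_framework", "risk_assessment"},
}

@dataclass
class ModelDeployment:
    model_id: str
    states: list
    artifacts: set = field(default_factory=set)

    def compliance_gaps(self):
        """Map each deployment state to required artifacts still missing."""
        return {
            st: sorted(STATE_REQUIREMENTS.get(st, set()) - self.artifacts)
            for st in self.states
            if STATE_REQUIREMENTS.get(st, set()) - self.artifacts
        }

deployment = ModelDeployment(
    model_id="uw-model-v3",
    states=["CO", "NY", "CT"],
    artifacts={"governance_framework", "model_documentation", "risk_assessment"},
)
print(deployment.compliance_gaps())
```

Here the model would pass the hypothetical Connecticut check but show open gaps in Colorado and New York. The same registry structure also supports the strictest-state strategy described above: take the union of all states' requirement sets and test every model against it once.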
There is also a timing problem. State legislatures and insurance departments are moving at different speeds. A model that is compliant everywhere today may not be compliant everywhere six months from now. Compliance teams need to track not only enacted laws and final regulations but also proposed rules, bulletin adoptions, and examination trends across every state where the insurer writes business.
Vendor Risk Adds Another Layer
The NAIC Model Bulletin and Colorado's regulation both make clear that insurers cannot outsource compliance by outsourcing model development. If you license a third-party underwriting model, you own the compliance risk. That means you need sufficient access to model documentation, validation results, and ongoing monitoring data from your vendor to satisfy your own regulatory obligations. Many vendor contracts were not written with this level of transparency in mind, and renegotiating those terms is a nontrivial exercise.
How FirmAdapt Addresses This
FirmAdapt's architecture is designed for exactly this kind of multi-jurisdictional regulatory complexity. The platform maps AI governance requirements across state insurance departments, tracks regulatory changes as they move from proposal to adoption, and maintains jurisdiction-specific compliance documentation so that a single underwriting model can be validated and reported against the requirements of each state where it is deployed. For Colorado's disparate impact testing requirements, New York's examination expectations, and NAIC-aligned governance frameworks, FirmAdapt generates and maintains the documentation artifacts that regulators actually ask for.
For insurers managing third-party model risk, FirmAdapt also provides a structured framework for vendor oversight, including documentation of model inputs, validation results, and ongoing monitoring outputs. The goal is to make multi-state AI compliance a manageable operational process rather than a perpetual fire drill.