Tags: AI compliance, regulatory, financial services, banking, compliance, CFPB

The CFPB's Position on AI in Consumer Finance and the Enforcement Cases We Are Watching

By Basel Ismail · May 8, 2026


The CFPB has been signaling for a while now that it views AI in consumer lending through the lens of existing law, not future rulemaking. The agency's position is straightforward: if you use a model to make a credit decision, you still owe the consumer a specific explanation of why they were denied or received worse terms. The fact that your model is complex does not excuse you from that obligation. Circular 2023-03, issued in September 2023, made this explicit, and the enforcement trajectory since then has been worth watching closely.

What Circular 2023-03 Actually Says

The circular addresses adverse action notices under the Equal Credit Opportunity Act (ECOA) and Regulation B. Under existing law, when a creditor takes adverse action against an applicant, the creditor must provide specific reasons for the denial. The operative word is "specific." Telling someone they were denied because of "the model output" or "insufficient creditworthiness" does not cut it.

The CFPB's position in the circular is that creditors cannot use the complexity of their AI or machine learning models as a reason to provide vague or incomplete adverse action notices. If a model considers hundreds of features and arrives at a denial, the creditor still needs to identify the principal reasons for that denial in terms the applicant can understand and act on. The circular explicitly states that using a "black box" model does not change the creditor's legal obligations under ECOA or the Fair Credit Reporting Act (FCRA).

This matters because many fintech lenders, and even traditional banks, have adopted ML models that ingest alternative data, behavioral signals, and non-traditional credit features. Some of these models genuinely struggle with post-hoc explainability. The CFPB is saying, in effect: that is your problem to solve before deployment, not a valid reason to give consumers less information.

The Specificity Requirement

Regulation B (12 CFR 1002.9) requires creditors to provide a statement of specific reasons for adverse action. The CFPB's sample forms list reasons like "length of employment" or "insufficient number of credit references." The circular makes clear that AI-driven decisions must map back to this level of granularity. Approximations and proxy explanations are acceptable only if they accurately reflect the reasons the model actually relied on. Creditors cannot just pick the closest-sounding reason from a dropdown if it does not reflect what the model did.

This creates a real technical challenge. SHAP values, LIME, and other explainability methods can approximate feature importance, but translating those into consumer-facing reason codes that are both accurate and actionable requires careful work. The CFPB is not prescribing a method, but it is making clear that the output needs to be honest and specific.
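To make the mapping problem concrete, here is a minimal sketch of turning per-feature model contributions into Regulation B-style reason codes. For a linear or logistic model, `coef * (x - baseline)` is an exact per-feature decomposition of the score; for a tree ensemble you would substitute SHAP values. The feature names, weights, and reason map are illustrative assumptions, not regulatory language or a vetted methodology.

```python
# Sketch: mapping per-feature score contributions to consumer-facing
# adverse action reasons. All names and values below are invented.
import numpy as np

# Hand-maintained map from model features to reason-code language.
FEATURE_TO_REASON = {
    "months_employed":  "Length of employment",
    "num_tradelines":   "Insufficient number of credit references",
    "dti_ratio":        "Income insufficient for amount of credit requested",
    "recent_inquiries": "Number of recent inquiries on credit report",
}

def principal_reasons(coefs, baseline, applicant, feature_names, top_n=2):
    """Return the top-N reasons pushing the score toward denial.

    Assumes higher score = approve, so negative contributions hurt the
    applicant and are candidates for adverse action reasons.
    """
    contributions = coefs * (applicant - baseline)
    order = np.argsort(contributions)      # most negative (most adverse) first
    reasons = []
    for idx in order[:top_n]:
        if contributions[idx] >= 0:        # no adverse contributions left
            break
        reasons.append(FEATURE_TO_REASON[feature_names[idx]])
    return reasons

feature_names = list(FEATURE_TO_REASON)
coefs     = np.array([0.04, 0.30, -2.50, -0.20])  # model weights (assumed)
baseline  = np.array([48.0, 6.0, 0.35, 1.0])      # population averages (assumed)
applicant = np.array([6.0, 2.0, 0.52, 4.0])       # a denied applicant

print(principal_reasons(coefs, baseline, applicant, feature_names))
```

The hard part in practice is not the ranking but validating that `FEATURE_TO_REASON` faithfully translates what the model relied on into language the applicant can act on, which is exactly where the circular says proxy explanations fall short.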

Enforcement Direction and the Cases Worth Tracking

The circular itself is not a regulation. It is interpretive guidance. But the CFPB has been backing it up with enforcement actions that signal where the boundaries are.

Upstart Network (2023)

The CFPB's examination of Upstart's AI lending model drew significant attention. Upstart uses machine learning models that incorporate non-traditional variables, including education and employment history, to make credit decisions. While Upstart obtained a No-Action Letter from the CFPB in 2017 (a program the bureau later sunset), the agency's continued scrutiny of AI-based lending models like Upstart's has informed the broader enforcement posture. The message: novel data and complex models do not create novel exemptions.

The Auto Lending Cases

The CFPB has been active in auto lending, where dealer markup and algorithmic pricing intersect. In December 2023, the bureau finalized a rule requiring large nonbank auto lenders to submit to CFPB supervision. This is relevant because auto lending is one of the areas where algorithmic pricing models are most likely to produce disparate impact without clear adverse action explanations. The CFPB fined Ally Financial $98 million back in 2013 for discriminatory auto loan pricing, and the current enforcement posture suggests the bureau views AI-driven pricing as creating similar risks at greater scale.

The Tenant Screening Space

In 2023 and 2024, the CFPB turned attention to tenant screening companies that use algorithmic scoring. The bureau published a report in November 2022 finding that tenant screening reports were frequently inaccurate and that consumers had little ability to understand or dispute algorithmic decisions. While tenant screening sits at the intersection of FCRA and fair housing law, the CFPB's approach here mirrors the Circular 2023-03 logic: if an algorithm produces an adverse outcome, the consumer deserves a real explanation.

Ongoing Investigations We Cannot Name Yet

There are credible reports of active CFPB investigations into several mid-size fintech lenders using ML-based underwriting. We cannot identify them specifically because no public enforcement actions have been filed. But the pattern of Civil Investigative Demands (CIDs) in this space suggests the bureau is building cases around inadequate adverse action notices, specifically where lenders adopted complex models without building corresponding explainability infrastructure.

What This Means Practically

If you are deploying AI in any consumer-facing credit decision, the compliance requirements are not ambiguous. You need to be able to:

  • Identify the principal reasons for any adverse action in specific, consumer-understandable terms
  • Demonstrate that those reasons accurately reflect what the model relied on, not just a best guess from a reason code library
  • Maintain documentation showing how your explainability method maps model outputs to adverse action reasons
  • Monitor for disparate impact across protected classes, because the CFPB views unexplainable models as a fair lending risk multiplier
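On the last point, the most common first-pass screen is the adverse impact ratio, the "four-fifths rule" heuristic borrowed from EEOC guidance. A minimal sketch, assuming invented group labels and approval data (real fair lending analysis involves regression-based controls and is considerably more involved):

```python
# Sketch: flagging groups whose approval rate falls below 80% of the
# reference group's rate. Groups "A"/"B" and the counts are invented.
from collections import defaultdict

def adverse_impact_ratios(decisions, reference_group):
    """decisions: iterable of (group, approved_bool). Returns {group: AIR}."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    ref_rate = rates[reference_group]
    return {g: rates[g] / ref_rate for g in rates}

decisions = (
    [("A", True)] * 80 + [("A", False)] * 20 +   # group A: 80% approved
    [("B", True)] * 56 + [("B", False)] * 44     # group B: 56% approved
)
ratios = adverse_impact_ratios(decisions, reference_group="A")
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)   # group B's ratio is 0.56 / 0.80 = 0.70, below 0.8
```

A flag from a screen like this is a trigger for deeper analysis, not a legal conclusion.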

The CFPB has also signaled, through Director Rohit Chopra's public statements throughout 2023 and 2024, that the bureau considers "digital redlining" a priority. Chopra has drawn explicit parallels between historical redlining and modern algorithmic exclusion. Whether or not you agree with the framing, it tells you where enforcement resources are being directed.

The Political Dimension

It is worth noting that the CFPB's authority and funding have been subjects of active litigation. The Supreme Court's May 2024 decision in CFPB v. Community Financial Services Association upheld the bureau's funding mechanism as constitutional, which removed a significant existential threat. However, leadership changes and shifting political priorities could alter enforcement intensity. The legal obligations under ECOA, FCRA, and Regulation B remain regardless of who runs the bureau. The circular is interpretive guidance, but the statutes it interprets are not going anywhere.

For compliance teams, the practical takeaway is to build your AI governance around the statutes, not the current enforcement mood. If your explainability infrastructure only meets the minimum bar of today's enforcement posture, you are likely underbuilt for where the law already requires you to be.

How FirmAdapt Addresses This

FirmAdapt's architecture is built around the principle that compliance obligations like adverse action notice requirements need to be addressed at the model design stage, not retrofitted after deployment. The platform supports structured explainability workflows that map model outputs to specific, regulation-compliant reason codes, with audit trails documenting how those mappings were derived and validated. This is particularly relevant for organizations navigating Circular 2023-03's requirements, where the gap between model complexity and consumer-facing explanations creates real legal exposure.
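To illustrate what such an audit trail might capture, here is a generic record sketch. The field names are invented for illustration and are not FirmAdapt's actual data model; the point is that each adverse action notice should be traceable back to a specific model version, explainer output, and reason-code mapping.

```python
# Generic illustration of an adverse-action audit record. All field
# names and values are hypothetical, not any vendor's schema.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AdverseActionRecord:
    application_id: str
    model_version: str
    explainer_method: str        # e.g. the SHAP variant used
    feature_contributions: dict  # feature -> signed contribution
    reason_codes: list           # consumer-facing reasons actually sent
    mapping_version: str         # which feature->reason map was applied
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AdverseActionRecord(
    application_id="app-1042",
    model_version="underwriting-v3.2",
    explainer_method="shap.TreeExplainer",
    feature_contributions={"months_employed": -1.68, "dti_ratio": -0.43},
    reason_codes=["Length of employment"],
    mapping_version="reason-map-2026-01",
)
print(json.dumps(asdict(record), indent=2))
```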

For financial services teams deploying AI in credit decisioning, underwriting, or pricing, FirmAdapt provides the documentation and governance layer that regulators are increasingly expecting to see. The platform does not replace your legal judgment about what constitutes an adequate adverse action notice, but it gives you the infrastructure to make that judgment systematically and to demonstrate that you made it.
