Credit Union AI Adoption and the NCUA Examination Questions That Are Coming
The NCUA has been relatively quiet on AI compared to the OCC and the FDIC. That quiet period is ending. If you're at a credit union that has deployed or is evaluating AI tools for lending, member services, fraud detection, or back-office operations, the examination questions are going to get specific in 2025 and 2026. The groundwork is already being laid, and it's worth understanding what examiners will likely focus on so you can prepare documentation now rather than scramble later.
Where the NCUA's Head Is At
In its 2024 Annual Report and the 2025 Supervisory Priorities letter published in January 2025, the NCUA flagged technology risk and third-party vendor management as key examination focus areas. The agency didn't call out "artificial intelligence" in a standalone section, but it threaded AI-adjacent concerns throughout its discussion of cybersecurity, information security, and operational resilience. NCUA Chairman Todd Harper made several public remarks in 2024 about the need for credit unions to understand the risks of emerging technologies, and the agency's Office of Examination and Insurance has been updating examiner guidance to address automated decision-making systems.
The NCUA also participates in the interagency process that produced the 2023 joint statement on AI (alongside the Fed, OCC, FDIC, and CFPB), which emphasized that existing regulatory frameworks already apply to AI. Translation: examiners don't need new rules to ask hard questions about your AI deployments. They have the authority now under existing safety and soundness standards, ECOA, the Fair Credit Reporting Act, and Part 741 of the NCUA's regulations.
The Examination Questions You Should Expect
Based on the supervisory priorities, the interagency AI guidance, and the NCUA's historical examination patterns, here are the areas where examiners are most likely to probe.
1. Vendor Due Diligence and Third-Party Risk Management
Most credit unions aren't building AI models in-house. They're buying them from fintechs, core processors, or CUSOs. The NCUA's third-party risk management expectations (outlined in NCUA Letter to Credit Unions 08-CU-09 and reinforced by subsequent guidance) require credit unions to understand what they're buying. For AI, that means examiners will want to see:
- Documentation of your due diligence process for the AI vendor, including financial stability, security posture, and regulatory compliance history
- Evidence that you understand what the model does, what data it ingests, and how it produces outputs
- Contractual provisions addressing data ownership, model transparency, audit rights, and incident notification
- Ongoing monitoring, not just the initial assessment. Annual reviews at minimum, with documented findings
If your vendor contract doesn't give you the right to audit the model or access documentation about how it works, that's a gap an examiner will notice.
2. Fair Lending and Adverse Action
If AI touches any part of your lending process, from underwriting to pricing to marketing, fair lending compliance is going to be a primary examination topic. The ECOA and Regulation B require credit unions to provide specific, accurate reasons when denying credit. The CFPB's Circular 2022-03 made clear that "the algorithm did it" is not an acceptable adverse action explanation, and NCUA examiners will apply the same standard.
Expect questions about:
- How the AI model's outputs are translated into adverse action notices that comply with Regulation B
- Whether you've conducted fair lending testing (disparate impact analysis) on the model's outputs
- How you handle model overrides, and whether override patterns suggest bias
- Whether the model uses any proxy variables that could correlate with protected class status
Credit unions under $10 billion in assets sometimes assume they're below the radar on fair lending. They're not. The NCUA conducts fair lending reviews across the asset spectrum, and AI-driven lending decisions will draw additional scrutiny precisely because the decision logic is harder to inspect.
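The disparate impact testing mentioned above often starts with something as simple as an adverse impact ratio check: compare each group's approval rate to a reference group's, and flag ratios below the four-fifths rule of thumb for closer statistical review. Here is a minimal sketch; the group labels, sample data, and the 0.80 threshold are illustrative conventions, not NCUA requirements, and real fair lending testing should be designed with compliance counsel.

```python
# Sketch of an adverse impact ratio (four-fifths rule) screen on model
# approval outcomes. Group labels and data are hypothetical.

def approval_rate(outcomes):
    """Fraction of applicants approved; outcomes is a list of booleans."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def adverse_impact_ratios(outcomes_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's.

    Ratios below ~0.80 (the four-fifths rule of thumb) flag a group
    for deeper statistical analysis; they are not proof of a violation.
    """
    ref_rate = approval_rate(outcomes_by_group[reference_group])
    return {
        group: approval_rate(outcomes) / ref_rate
        for group, outcomes in outcomes_by_group.items()
        if group != reference_group and ref_rate > 0
    }

# Hypothetical monthly decision data
outcomes = {
    "group_a": [True] * 80 + [False] * 20,   # 80% approved
    "group_b": [True] * 55 + [False] * 45,   # 55% approved
}
ratios = adverse_impact_ratios(outcomes, reference_group="group_a")
flagged = {g: r for g, r in ratios.items() if r < 0.80}
```

A screen like this is only the first pass; flagged groups typically warrant regression-based analysis that controls for legitimate credit factors.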
3. Model Risk Management
The NCUA hasn't issued its own model risk management guidance, but examiners frequently reference the Federal Reserve's SR 11-7 (Supervisory Guidance on Model Risk Management, issued jointly with the OCC) as a benchmark. For credit unions using AI models, examiners will likely ask about:
- Model validation, both initial and ongoing. Who validated the model? Are they independent from the team that selected or deployed it?
- Performance monitoring. Are you tracking the model's accuracy, drift, and stability over time?
- Documentation of model limitations and assumptions
- An inventory of all models in use, including AI/ML models embedded in vendor products
That last point trips up a lot of credit unions. If your core processor added an AI-powered fraud detection module in a recent update, it's now part of your model inventory whether you asked for it or not.
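The drift tracking described above is often operationalized with a metric such as the population stability index (PSI), which compares the score distribution the model was validated on to the distribution it sees today. A minimal sketch follows; the bin edges and sample scores are hypothetical, and the 0.1/0.25 alert thresholds are industry conventions, not regulatory requirements.

```python
import math

def psi(baseline, current, bin_edges):
    """Population stability index between two score samples.

    Common rules of thumb: PSI < 0.1 stable, 0.1-0.25 monitor,
    > 0.25 investigate. These cutoffs are conventions only.
    """
    def proportions(scores):
        counts = [0] * (len(bin_edges) - 1)
        for s in scores:
            for i in range(len(bin_edges) - 1):
                if bin_edges[i] <= s < bin_edges[i + 1]:
                    counts[i] += 1
                    break
        total = len(scores)
        # Floor at a tiny value so log terms stay defined for empty bins
        return [max(c / total, 1e-6) for c in counts]

    base_p = proportions(baseline)
    curr_p = proportions(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(base_p, curr_p))

# Hypothetical credit score samples: at validation time vs. today
bins = [300, 500, 600, 700, 851]
baseline_scores = [450, 550, 650, 720, 580, 630, 710, 490]
current_scores  = [710, 720, 730, 650, 640, 700, 690, 480]
drift = psi(baseline_scores, current_scores, bins)
```

The point for examination purposes is less the specific metric than the documented routine: a defined cadence, defined thresholds, and a record of what happened when a threshold was breached.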
4. Data Governance and Privacy
AI models are hungry for data, and credit unions hold sensitive member information subject to the Gramm-Leach-Bliley Act, Part 748 of the NCUA's regulations, and potentially state privacy laws. Examiners will want to understand:
- What member data is being fed into AI systems, and whether members were notified
- Where the data is processed and stored (particularly relevant for cloud-based AI services)
- Whether you've assessed the AI vendor's data security practices against your own information security program requirements
- Data retention and deletion policies as they apply to AI training data and outputs
5. Board and Management Oversight
The NCUA places significant emphasis on board governance. Examiners will ask whether your board has been briefed on AI deployments, whether there's a policy framework governing AI adoption, and whether management has assigned clear accountability for AI risk. A credit union that deployed a member-facing chatbot or an AI-driven loan decisioning tool without any board awareness is going to have a difficult conversation during the exam.
What to Have Ready
Practically speaking, you should be assembling the following before your next examination cycle:
- A complete inventory of AI and automated decision-making tools in use, including those embedded in vendor platforms
- Vendor due diligence files with AI-specific documentation (model cards, data flow diagrams, bias testing results)
- Fair lending analysis for any AI tool involved in credit decisions
- Board minutes or committee reports reflecting AI risk discussions
- An AI-specific policy or an update to your existing technology and vendor management policies that addresses AI
- Incident response procedures that account for AI model failures or unexpected outputs
None of this requires you to become a machine learning expert. It requires you to ask your vendors the right questions, document the answers, and build a governance structure that treats AI with the same rigor you apply to any other operational risk.
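The inventory item at the top of that checklist can start as a structured record per tool rather than a formal system. A hypothetical sketch of what one entry might capture (the field names and vendor are illustrative, not an NCUA-prescribed schema):

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical model inventory record; fields are illustrative,
# not an NCUA-prescribed schema.
@dataclass
class ModelInventoryEntry:
    name: str
    vendor: str                          # or "in-house"
    business_use: str                    # e.g. "indirect auto underwriting"
    data_inputs: List[str]
    member_facing: bool
    embedded_in_vendor_platform: bool
    last_validation: Optional[date]      # None = a gap to close
    last_fair_lending_review: Optional[date]
    board_reported: bool

inventory = [
    ModelInventoryEntry(
        name="Fraud score module",
        vendor="CoreProcessorCo",            # hypothetical vendor
        business_use="card transaction fraud detection",
        data_inputs=["transaction history", "device fingerprint"],
        member_facing=False,
        embedded_in_vendor_platform=True,    # arrived in a core update
        last_validation=None,
        last_fair_lending_review=None,
        board_reported=False,
    ),
]

# Surface the gaps an examiner would ask about first
gaps = [e.name for e in inventory
        if e.last_validation is None or not e.board_reported]
```

Even a spreadsheet with these columns, reviewed quarterly, puts you ahead of a credit union that discovers its embedded vendor models during the exam itself.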
How FirmAdapt Addresses This
FirmAdapt is built for exactly this kind of regulatory environment, where the rules aren't AI-specific yet but the expectations are already being enforced through existing frameworks. The platform maintains continuous documentation of AI system behavior, data flows, and decision outputs, which means when an NCUA examiner asks how a particular tool works and what oversight you have in place, the answer is already assembled. Vendor risk assessments, model inventories, and policy mappings are maintained in a single compliance layer rather than scattered across spreadsheets and email threads.
For credit unions specifically, FirmAdapt's architecture supports the fair lending documentation and adverse action traceability that examiners will focus on most heavily. The platform maps AI outputs back to the regulatory requirements they implicate, so your compliance team can demonstrate to examiners that governance isn't an afterthought. It's built into how the technology operates.