Regional Banks and the Tier 2 Examiner Questions About AI Governance
If you are a regional bank between $10 billion and $100 billion in assets, you are in an interesting spot right now. You are big enough that OCC and FDIC examiners expect mature governance frameworks, but not so big that you have a dedicated AI governance team with a $5 million budget. And examiners have started asking about AI. Not in the abstract, future-looking way they did in 2022. They are asking specific, operational questions, and they want documentation.
I have been tracking what examiners at the Tier 2 level (think district and field office examiners, not the folks doing the annual deep dive on JPMorgan) are actually asking about AI governance. The pattern is clear enough to be useful, so here is what to expect and how to structure your answers.
The Regulatory Foundation You Already Know (But Need to Map Explicitly)
The OCC's guidance on model risk management, OCC Bulletin 2011-12 (issued jointly with the Federal Reserve, which published it as SR 11-7), remains the backbone here. The FDIC adopted it by reference in FIL-22-2017. Neither agency has issued standalone AI governance rules, but both have made clear through supervisory channels that they consider AI and machine learning models to fall squarely within existing model risk management (MRM) frameworks. The OCC's Spring 2023 Semiannual Risk Perspective explicitly flagged AI adoption in community and regional banks as an emerging supervisory priority.
The interagency guidance on third-party risk management, finalized June 2023, also matters here. If you are using a vendor's AI for credit decisioning, fraud detection, or BSA/AML transaction monitoring, examiners will treat that as a third-party relationship subject to the full lifecycle of due diligence, ongoing monitoring, and contingency planning.
So the regulatory hooks are not new. What is new is the specificity of the questions.
What Examiners Are Actually Asking
1. Inventory and Classification
The first question is almost always some version of: "Do you have a complete inventory of AI and ML models in use, including those embedded in vendor products?" This sounds simple. It is not. Most regional banks I have spoken with can identify their internally developed models, but struggle to catalog AI components embedded in third-party platforms. Your core banking provider's fraud scoring module probably uses ML. Your marketing platform's next-best-offer engine almost certainly does. Examiners want to see that you have identified these, classified them by risk tier, and documented the classification rationale.
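A living inventory does not need elaborate tooling to start; a structured record per model, with a documented classification rationale, covers what examiners ask for first. The sketch below is illustrative only: the fields, tier labels, and classification criteria are assumptions to be replaced by whatever your MRM policy actually defines.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str                 # accountable individual, not a committee
    source: str                # "internal" or the vendor's name
    use_case: str              # e.g. "credit decisioning", "fraud scoring"
    consumer_facing: bool
    vendor_embedded: bool
    risk_tier: str = "low"
    rationale: str = ""        # documented classification rationale

def classify(record: ModelRecord) -> ModelRecord:
    """Assign a risk tier using simple, illustrative criteria.

    Real criteria should align with your MRM policy's definition
    of materiality under OCC 2011-12.
    """
    if record.consumer_facing and record.use_case == "credit decisioning":
        record.risk_tier, record.rationale = "high", "consumer credit decision"
    elif record.consumer_facing or record.vendor_embedded:
        record.risk_tier, record.rationale = (
            "medium", "consumer impact or limited vendor transparency")
    else:
        record.risk_tier, record.rationale = "low", "internal, non-consumer use"
    return record

# Example: the fraud scoring module embedded in a core banking platform
fraud = classify(ModelRecord("core-fraud-score", "J. Smith", "CoreVendorX",
                             "fraud scoring", True, True))
print(fraud.risk_tier, "-", fraud.rationale)
```

The point of the rationale field is that examiners ask not just for the tier but for why you assigned it; capturing the reason at classification time is far easier than reconstructing it at exam time.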
2. Validation and Testing
Examiners will ask how you validate AI models, with particular attention to whether your validation approach accounts for the ways AI differs from traditional statistical models. They want to see evidence of ongoing performance monitoring, not just initial validation. For credit models, they are looking for disparate impact testing aligned with ECOA and fair lending requirements. The CFPB's September 2023 circular on adverse action notices for AI-driven credit decisions (Circular 2023-03) has filtered into OCC and FDIC exam expectations, even for institutions not directly supervised by the CFPB.
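One common first-pass screen for disparate impact is the adverse impact ratio: the approval rate for a protected-class group divided by the rate for the control group, conventionally flagged for review when it falls below 0.8 (the four-fifths rule). This is a screening heuristic, not a legal conclusion under ECOA or Regulation B; the counts below are invented for illustration.

```python
def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_control: int, total_control: int) -> float:
    """Ratio of approval rates used in the four-fifths rule screen.

    A ratio below 0.8 is a conventional flag for further fair lending
    review, not a determination of an ECOA violation.
    """
    rate_protected = approved_protected / total_protected
    rate_control = approved_control / total_control
    return rate_protected / rate_control

# Illustrative counts from a hypothetical quarter of credit decisions
ratio = adverse_impact_ratio(120, 400, 360, 800)
print(round(ratio, 2), "-> flag for review" if ratio < 0.8 else "-> pass")
```

A validation file that shows this computation run each quarter, with the thresholds and follow-up actions documented, is exactly the kind of evidence of ongoing monitoring examiners are asking for.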
A specific question that has come up repeatedly: "How do you validate a model when the vendor will not disclose the full methodology?" Good question. The answer needs to involve outcome testing, benchmark comparisons, and contractual provisions for model documentation access. If your vendor agreement does not include audit rights over model methodology, expect a finding.
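When the methodology is a black box, one outcome test that requires no vendor disclosure is a calibration check: bucket the vendor's scores and confirm that observed outcomes move monotonically with the score. A break in monotonicity is a finding to raise with the vendor. This is a minimal sketch on synthetic data; bucket counts and score scaling are assumptions.

```python
from collections import defaultdict

def calibration_by_bucket(scores, outcomes, n_buckets=5):
    """Group (score, outcome) pairs into equal-width score buckets and
    return the observed bad rate per bucket.

    If the vendor's score is predictive, bad rates should rise (or fall)
    monotonically across buckets; reversals warrant follow-up even when
    the model internals are undisclosed.
    """
    buckets = defaultdict(lambda: [0, 0])  # bucket -> [bads, total]
    for score, outcome in zip(scores, outcomes):
        b = min(int(score * n_buckets), n_buckets - 1)
        buckets[b][0] += outcome
        buckets[b][1] += 1
    return {b: bads / total for b, (bads, total) in sorted(buckets.items())}

# Synthetic example: vendor score in [0, 1), outcome 1 = observed default
scores = [0.05, 0.15, 0.35, 0.45, 0.55, 0.65, 0.85, 0.95]
outcomes = [0, 0, 0, 1, 0, 1, 1, 1]
print(calibration_by_bucket(scores, outcomes))
```

Paired with benchmark comparisons against a simple challenger model, this kind of outcome testing is usually what satisfies the validation expectation for opaque vendor models.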
3. Governance Structure and Accountability
Who owns AI risk at your bank? Examiners want a name and a reporting line, not a committee reference. They want to see that your board has been briefed on AI usage and risk, ideally with documentation showing the board asked informed questions. A February 2024 OCC consent order against a $14 billion bank cited, among other things, insufficient board oversight of model risk. The AI angle was not the headline, but it was in the details.
They will also ask whether your AI governance framework is integrated with your existing enterprise risk management program or operates as a standalone silo. Integration is the right answer. If your AI governance lives in a separate document that nobody maps to your risk appetite statement, that is a gap.
4. Change Management
This one catches people off guard. Examiners want to know your process for managing model updates, retraining, and drift. If your vendor pushes a model update, what is your review and approval process before it goes into production? Many regional banks have strong change management for code deployments but have not extended those controls to model updates. The OCC's Comptroller's Handbook booklet on Model Risk Management, issued in August 2021, specifically addresses ongoing monitoring obligations that cover model changes.
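Drift monitoring is also something you can operationalize without vendor cooperation. A standard screen is the population stability index (PSI) between the score distribution at validation and the current production distribution; by convention (not regulation), values below 0.1 are treated as stable, 0.1 to 0.25 as worth watching, and above 0.25 as significant shift. The bin shares below are illustrative.

```python
import math

def psi(expected_props, actual_props, eps=1e-6):
    """Population stability index over matched score-distribution bins.

    Rule-of-thumb thresholds (industry convention, not a regulatory
    standard): < 0.1 stable, 0.1-0.25 monitor, > 0.25 significant shift
    warranting escalation under your change management process.
    """
    total = 0.0
    for expected, actual in zip(expected_props, actual_props):
        e, a = max(expected, eps), max(actual, eps)  # avoid log(0)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # score-bin shares at validation
current = [0.40, 0.30, 0.20, 0.10]   # shares observed after a vendor update
print(round(psi(baseline, current), 3))
```

Running a check like this after every vendor-pushed update, and gating production approval on the result, is one way to extend existing change management controls to model changes.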
5. Consumer Protection and Explainability
For any AI touching consumer-facing decisions, examiners will probe explainability. Can you explain to a consumer why they were denied credit, flagged for fraud, or routed to a particular product? The legal standard under ECOA and Regulation B requires specific adverse action reasons. "The model said so" does not satisfy the requirement. Examiners want to see that you have tested your ability to generate compliant explanations from your AI outputs.
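For a points-based scorecard, compliant reason codes can be derived mechanically: rank the factors that cost the applicant the most points relative to a reference profile, then map each to an ECOA-style reason statement. The sketch below assumes a scorecard-style model with per-feature point contributions already computed; the feature names, point values, and reason text are all illustrative. More complex model types need attribution methods beyond this sketch.

```python
def adverse_action_reasons(contributions, reason_map, top_n=2):
    """Select the top adverse action reasons for a denied applicant.

    `contributions` holds points gained/lost per factor relative to a
    reference applicant (negative = hurt the applicant). The factors
    that cost the most points become the specific reasons required by
    ECOA and Regulation B.
    """
    adverse = sorted((points, factor)
                     for factor, points in contributions.items() if points < 0)
    return [reason_map[factor] for points, factor in adverse[:top_n]]

# Illustrative scorecard output for one denied applicant
contribs = {"credit_history_length": -40, "utilization": -25, "income": 10}
reasons = {"credit_history_length": "Length of credit history",
           "utilization": "Proportion of balances to credit limits",
           "income": "Income"}
print(adverse_action_reasons(contribs, reasons))
```

Testing this pipeline end to end, from model output to the text on the adverse action notice, is what examiners mean by demonstrating you can generate compliant explanations.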
How to Structure Your Answers
The banks that handle these exams well tend to organize their AI governance documentation around a few principles:
- Map AI governance to existing frameworks. Do not create a separate AI policy universe. Show examiners how AI risk fits within your MRM policy, your third-party risk management program, your fair lending program, and your information security program. Cross-references matter.
- Maintain a living inventory. Update it quarterly at minimum. Include vendor-embedded models. Classify by risk tier using criteria that align with OCC 2011-12's definition of materiality.
- Document board engagement. Board minutes should reflect AI-specific discussion at least annually. Include the materials presented, questions raised, and decisions made.
- Show your work on validation. For each model, maintain a validation file that includes methodology, test results, limitations identified, and compensating controls. For vendor models where full methodology is unavailable, document your outcome-based testing approach and any contractual provisions for transparency.
- Build explainability into procurement. Before you buy an AI-enabled product, require the vendor to demonstrate that the model can produce reason codes or explanations sufficient for regulatory compliance. Make this a contractual requirement, not a verbal assurance.
One practical note: examiners at the regional level often have limited AI technical expertise themselves. They are working from exam procedures and checklists. Clear, well-organized documentation that anticipates the checklist items will go further than a technically brilliant but poorly organized response. Structure your materials so a generalist examiner can follow the logic without needing a data science background.
The Timeline Pressure
The OCC's 2024 bank supervision operating plan listed AI governance as a horizontal review topic. FDIC regional directors have flagged it in pre-exam scoping letters with increasing frequency since mid-2023. If you have not received specific AI governance questions in an exam yet, you likely will within the next cycle. Building the documentation after you receive the request is significantly more painful than building it proactively.
How FirmAdapt Addresses This
FirmAdapt's platform is built to generate and maintain the kind of documentation that OCC and FDIC examiners expect. The model inventory, risk classification, validation records, and board reporting materials can be structured within FirmAdapt's compliance architecture so that they map directly to OCC 2011-12 requirements and the interagency third-party risk management guidance. The platform maintains audit trails for model changes and access decisions, which directly addresses the change management questions examiners are raising.
For regional banks specifically, FirmAdapt provides a way to operationalize AI governance without building a dedicated team from scratch. The compliance-first design means the governance framework is embedded in how the platform operates, not layered on as an afterthought. When an examiner asks for your AI model inventory or your validation documentation, the answer already exists in a format designed for regulatory consumption.