Why Your Bank's Vendor Risk Assessment Probably Has an AI-Sized Hole In It
I was reviewing a community bank's vendor risk questionnaire last week. It was thorough, well-organized, and almost entirely useless for evaluating their new AI-powered fraud detection vendor. The questionnaire was last updated in 2019, and it asked all the right questions for 2019: SOC 2 status, data encryption standards, business continuity plans, breach notification timelines. Solid stuff. But it had zero questions about model explainability, training data provenance, algorithmic bias testing, or drift monitoring. The bank was about to onboard a vendor whose core product is a machine learning model, and the due diligence process had no mechanism to evaluate the thing that actually creates the risk.
This is not a niche problem. A 2023 Deloitte survey found that 73% of financial institutions were using or piloting AI tools through third-party vendors, but only 29% had updated their vendor risk management frameworks to include AI-specific criteria. That gap is where regulators are now looking.
The Regulatory Landscape Has Moved. Your Questionnaire Probably Hasn't.
The Gramm-Leach-Bliley Act (GLBA) requires financial institutions to ensure the security and confidentiality of customer information, including when that information is handled by service providers. Section 501(b) is explicit: you are responsible for overseeing your vendors' safeguards. The FTC's amended Safeguards Rule, which took full effect in June 2023, reinforced that vendor oversight must be continuous, not a one-time checkbox.
On the OCC side, Bulletin 2013-29 (Third-Party Relationships: Risk Management Guidance) was replaced in June 2023 by the Interagency Guidance on Third-Party Relationships: Risk Management, jointly issued by the OCC, FDIC, and Federal Reserve. That updated guidance specifically calls out "new or novel" and higher-risk technologies and makes clear that banks need to assess the risks of a vendor's technology itself, not just the vendor's organizational controls.
The OCC has also been vocal about AI specifically. In an April 2023 speech, Acting Comptroller Michael Hsu flagged AI in third-party relationships as a supervisory priority, noting that banks "cannot outsource responsibility" for understanding how AI models make decisions that affect customers. The CFPB's September 2023 guidance on adverse action notices (CFPB Circular 2023-03) reinforced that if a vendor's AI model denies a consumer credit, the bank is on the hook for explaining why, even if the bank doesn't fully understand the model's logic.
So the regulatory expectation is clear: you need to understand what your AI vendors' models are doing, how they're doing it, and what could go wrong. A 2019 questionnaire asking about firewall configurations does not get you there.
The Questions You Should Be Adding
Here is a practical framework for supplementing your existing vendor risk questionnaire when the vendor deploys AI or machine learning. These are organized by risk domain.
Model Transparency and Explainability
- Can the vendor provide documentation of model architecture, feature selection rationale, and decision logic? You need this for ECOA and fair lending compliance if the model touches credit decisions. "It's proprietary" is not an acceptable answer when the OCC examiner shows up.
- Can the vendor generate individual-level explanations for model outputs? The CFPB's adverse action requirements demand specificity. "The model said no" will not survive a complaint investigation.
- What is the vendor's process for model versioning and change management? You should know when the model changes and what changed. A vendor that pushes silent updates to a production model is a compliance risk.
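The versioning question above has a practical counterpart on the bank's side: if every model output is logged with the vendor-reported version string, silent updates become detectable in your own audit trail. A minimal sketch (the record fields and function names here are illustrative, not a vendor API):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One audit-trail entry per model output."""
    customer_ref: str    # internal reference, not raw PII
    model_name: str
    model_version: str   # version string reported by the vendor with each response
    decision: str
    timestamp: str

def log_decision(customer_ref, model_name, model_version, decision):
    """Serialize one decision to a JSON audit record."""
    rec = DecisionRecord(
        customer_ref=customer_ref,
        model_name=model_name,
        model_version=model_version,
        decision=decision,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(rec))

def detect_silent_updates(records):
    """Return every version transition observed across a stream of records.

    Any transition the vendor did not announce through change management
    is a finding for your next vendor review.
    """
    versions = [json.loads(r)["model_version"] for r in records]
    return [(a, b) for a, b in zip(versions, versions[1:]) if a != b]
```

If the vendor cannot supply a version identifier with each response, that is itself a useful due-diligence finding.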
Training Data and Bias
- What data was used to train the model, and can the vendor document its provenance? If the training data includes protected-class proxies (zip codes that correlate with race, for example), you have a fair lending problem regardless of intent.
- Has the vendor conducted disparate impact testing, and will they share results? The DOJ's redlining settlements under its Combating Redlining Initiative, including the 2021 Trustmark National Bank consent order and the $31 million City National Bank settlement in 2023, should be a reminder that discriminatory lending outcomes carry real financial consequences, whether they come from a loan officer or an algorithm.
- How frequently is bias testing repeated? A one-time test at model launch is insufficient. Data distributions shift. Customer populations change. Quarterly testing is a reasonable baseline.
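When reviewing a vendor's disparate impact results, it helps to know what the simplest screening statistic looks like. One common first-pass check is the adverse impact ratio with the "four-fifths" rule of thumb; the numbers below are invented for illustration, and a real fair lending analysis goes well beyond this single ratio:

```python
def approval_rate(outcomes):
    """Share of approvals in a list of 0/1 outcomes (1 = approved)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's approval rate to the reference group's.

    Under the common four-fifths screening rule, a ratio below 0.8
    warrants closer disparate impact review. This is a screen, not a
    legal conclusion.
    """
    return approval_rate(protected_outcomes) / approval_rate(reference_outcomes)

# Illustrative outcomes only
protected = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # 30% approved
reference = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approved

ratio = adverse_impact_ratio(protected, reference)   # 3/7, about 0.43
needs_review = ratio < 0.8                            # True in this example
```

A vendor's bias report should, at minimum, let you reproduce numbers like these across protected classes and over time.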
Data Handling and Privacy
- Does customer data leave the bank's environment for model training or inference? Under GLBA, you need to know exactly where nonpublic personal information goes. If the vendor's model runs in their cloud and ingests your customer data, your Section 501(b) obligations follow that data.
- Does the vendor use customer data to improve models for other clients? This is more common than banks realize. Your customers' data could be training a model that benefits your competitor. Your contract should address this explicitly.
- What is the data retention and deletion policy for inference logs? If the vendor retains input/output logs indefinitely, that is a data minimization issue and a potential breach surface.
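The retention question above is easy to verify in practice if the policy is enforced mechanically rather than by memo. A minimal sketch of a purge routine, assuming a 90-day window (the window and the log format here are illustrative; the actual number belongs in the contract):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # illustrative; set per contract and data-minimization review

def purge_expired(logs, now=None):
    """Drop inference-log entries older than the retention window.

    Each entry is assumed to be a dict with an ISO-8601 'timestamp' key.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [e for e in logs if datetime.fromisoformat(e["timestamp"]) >= cutoff]
```

Asking a vendor to demonstrate their equivalent of this routine, with evidence it actually runs, is a stronger control than accepting a written policy.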
Performance and Drift Monitoring
- What metrics does the vendor use to monitor model performance in production? Accuracy, precision, recall, false positive rates, and demographic parity metrics should all be tracked and reported.
- What is the vendor's process when model performance degrades? You need a defined threshold for when the vendor notifies you and a process for remediation. The 2023 Interagency Guidance is clear that ongoing monitoring is expected, not optional.
- Does the vendor have a kill switch or fallback process? If the model starts producing unreliable outputs, can it be taken offline without disrupting your operations? What is the manual fallback?
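The monitoring expectations above can be made concrete. A sketch of the two pieces a vendor's reporting should cover: basic classification metrics on labeled outcomes, and a drift statistic (here the population stability index, a common choice for detecting input-distribution shift). Thresholds and bin counts are illustrative:

```python
import math

def classification_metrics(y_true, y_pred):
    """Precision, recall, and false positive rate from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

def population_stability_index(expected, actual):
    """PSI between two binned distributions (lists of bin proportions).

    Rule of thumb often used in model risk practice: PSI above ~0.25
    signals material drift worth escalating; ~0.1-0.25 warrants watching.
    """
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

# Example: score distribution at launch vs. in production today
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.10, 0.40, 0.40, 0.10]
psi = population_stability_index(baseline, current)  # well above 0.25 here
```

Whatever the vendor's internal tooling, the questionnaire answer should name the metrics, the thresholds, and who gets notified when a threshold is crossed.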
Regulatory and Audit Access
- Will the vendor permit OCC or other regulatory examiner access to model documentation and testing results? The 2023 Interagency Guidance, like OCC Bulletin 2013-29 before it, is explicit that contracts should provide for regulatory access. If the vendor resists this, that tells you something.

- Will the vendor support your internal audit team's review of the model? Your second and third lines of defense need access. A vendor that treats model internals as entirely off-limits is incompatible with the OCC's expectations for critical activities.
Practical Implementation
You do not need to rebuild your entire vendor risk framework from scratch. The most effective approach is to create an AI-specific supplement that triggers when a vendor's product or service involves machine learning, natural language processing, or automated decision-making. Attach it to your existing process. Use your current risk-tiering methodology to determine which vendors get the full supplement versus a lighter version. A vendor providing AI-powered marketing analytics is a different risk profile than one making credit underwriting decisions.
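The triggering logic described above is simple enough to encode directly in your intake workflow, which keeps the decision consistent across procurement analysts. A sketch with invented attribute names; map them onto whatever fields your vendor intake form already captures:

```python
def ai_supplement_tier(uses_ml, automated_decisioning, touches_credit_or_lending):
    """Map vendor attributes to the depth of the AI due-diligence supplement.

    Tiers and attribute names are illustrative, not a regulatory standard.
    """
    if not uses_ml:
        return "none"      # standard questionnaire only
    if automated_decisioning and touches_credit_or_lending:
        return "full"      # full AI supplement plus model risk management review
    if automated_decisioning:
        return "standard"  # full supplement, procurement-led review
    return "light"         # e.g., AI-assisted analytics with human review

# The article's own contrast: marketing analytics vs. credit underwriting
marketing_vendor = ai_supplement_tier(True, False, False)    # "light"
underwriting_vendor = ai_supplement_tier(True, True, True)   # "full"
```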
Also, consider who is reviewing the answers. Your procurement team may not have the technical background to evaluate responses about model architecture or bias testing methodology. Involve your model risk management team if you have one, or consider engaging external expertise for critical vendors. SR 11-7, the Federal Reserve's Guidance on Model Risk Management (adopted by the OCC as Bulletin 2011-12), applies to vendor models just as it does to models you build in-house.
One more thing: update your contracts. Your existing vendor agreements probably have standard GLBA-compliant data protection clauses, but they may not address model-specific obligations like bias testing frequency, explainability requirements, or your right to audit model performance. The contract is where these expectations become enforceable.
How FirmAdapt Addresses This
FirmAdapt was built with these regulatory expectations baked in from the start. The platform's architecture keeps customer data within the client's control environment, provides full audit trails of every AI-driven output, and supports explainability at the individual decision level. When regulators ask how the AI reached a conclusion, FirmAdapt can show them, which means your vendor due diligence file has actual substance behind it.
For banks and financial institutions evaluating AI vendors, FirmAdapt also serves as a reference point for what "good" looks like. The platform's compliance documentation, bias testing protocols, and model governance practices align with OCC and Interagency Guidance expectations. If you are updating your vendor risk questionnaire with the questions outlined above, FirmAdapt can answer all of them.