Tags: AI compliance, regulatory, financial services, banking, compliance, GLBA

GLBA in the Age of ChatGPT: What Financial Institutions Are Quietly Getting Wrong

By Basel Ismail · May 4, 2026


A loan officer pastes a borrower's W-2 data into ChatGPT to draft a summary for underwriting. A wealth advisor uploads a client's portfolio details to an AI tool to generate a quarterly review letter. An HR analyst at a bank feeds employee financial records into a free-tier AI assistant to build a report. None of these people think they are doing anything wrong. All of them just created potential GLBA Safeguards Rule violations.

The Gramm-Leach-Bliley Act has been around since 1999, and most compliance teams know its broad strokes well. But the collision between GLBA's requirements and the way employees actually use generative AI tools in 2024 and 2025 is producing a category of risk that very few institutions have properly addressed. The gap is not in policy documents. It is in enforcement architecture.

The Updated Safeguards Rule: Quick Refresher

The FTC's revised Safeguards Rule (16 CFR Part 314) went into full effect on June 9, 2023, after a series of extensions. The updated rule significantly tightened what "reasonable safeguards" means for financial institutions. Key additions include requirements for a designated Qualified Individual to oversee information security programs, written risk assessments, access controls, encryption of customer information both in transit and at rest, multi-factor authentication, and continuous monitoring.

Section 314.4(c) is where things get interesting for AI usage. It requires institutions to implement access controls on customer information, including controls that limit who can access data and restrictions on what those individuals can do with it. The rule also requires, under 314.4(d)(2), that institutions monitor the activity of authorized users and detect unauthorized access or use of customer information.

When an employee copies nonpublic personal information (NPI) into a third-party AI tool, they are effectively transmitting customer data to an external system that the institution does not control, has not risk-assessed, and almost certainly has not included in its information security program. Under the updated Safeguards Rule, that is a failure of access controls, a failure of monitoring, and potentially a failure of the encryption requirements if the data is transmitted to a tool without adequate protections.

The Per-Incident Violation Problem

Here is where the math gets uncomfortable. GLBA violations enforced by the FTC can carry penalties of up to $100,000 per violation, with individual officers and directors facing up to $10,000 per violation personally. The FTC has historically treated each instance of unauthorized disclosure or failure to safeguard as a separate violation.

Consider the scale. A mid-size bank with 500 employees, even if only 10% of them occasionally paste customer data into an AI chatbot, could be generating dozens of discrete violations per week. Each paste, each upload, each prompt containing NPI is a separate transmission of customer information to an unvetted third party. Multiply that across months of unsupervised usage and you are looking at exposure that dwarfs most compliance teams' estimates.
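To make that scale concrete, here is a rough back-of-the-envelope calculation using the penalty ceiling cited above. The usage rate, incident frequency, and time window are assumptions chosen purely for illustration; actual exposure depends on how the FTC counts violations in a given case.

```python
# Back-of-the-envelope GLBA exposure estimate. Every input below is an
# illustrative assumption, not a measured figure from any institution.

employees = 500
share_pasting_npi = 0.10             # assume 10% of staff occasionally paste customer data
incidents_per_user_per_week = 1      # assume one NPI-bearing prompt per such user per week
weeks = 26                           # assume six months of unsupervised usage
max_penalty_per_violation = 100_000  # FTC ceiling per violation cited above

violations = int(employees * share_pasting_npi * incidents_per_user_per_week * weeks)
ceiling = violations * max_penalty_per_violation

print(f"Discrete violations over {weeks} weeks: {violations:,}")
print(f"Theoretical penalty ceiling: ${ceiling:,}")
# Under these assumptions: 1,300 violations and a $130,000,000 theoretical ceiling.
```

No regulator is likely to assess the statutory maximum on every incident, but the per-incident structure is what makes the exposure grow linearly with every week of unsupervised usage.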

The FTC has been increasingly aggressive here. In its 2022 action against Chegg, the FTC cited failures to monitor employee access to customer data and inadequate access controls as core Safeguards Rule violations. The consent order required a comprehensive overhaul of Chegg's information security program. In the May 2023 action against Edmodo, similar monitoring failures were central to the complaint. Neither case was about generative AI, but the legal framework they establish maps directly onto unsupervised AI usage.

State Regulators Are Watching Too

GLBA compliance is not solely an FTC matter. State regulators, particularly the New York Department of Financial Services under 23 NYCRR 500, have their own cybersecurity requirements that overlap with and in many cases exceed federal GLBA standards. NYDFS amended 23 NYCRR 500 in November 2023, adding explicit requirements around access privilege management and monitoring of privileged access activity. If your institution operates in New York, unsupervised AI usage by employees with access to NPI is a compliance problem under both federal and state frameworks simultaneously.

Why Acceptable Use Policies Are Not Enough

Most financial institutions that have addressed AI at all have done so through acceptable use policies. They send an email, update the employee handbook, maybe run a training session. The policy says something like "do not input customer data into unauthorized AI tools." Compliance checks the box.

The Safeguards Rule does not care about your policy. It cares about your controls. Section 314.4(c) requires you to implement access controls, not merely document them. Section 314.4(d) requires you to detect unauthorized access, not merely prohibit it. A policy without a corresponding technical control is, under the updated rule, arguably not a safeguard at all.
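To make the distinction concrete, the sketch below shows what a minimal technical control might look like: an outbound gate that inspects prompts for obvious NPI patterns before they can leave for an AI tool, and records a detection event when it blocks one. The function names and regex patterns are hypothetical and deliberately simplistic; a production control would rely on proper DLP classification rather than a handful of regular expressions.

```python
import re

# Hypothetical sketch of a technical control (not a policy): inspect outbound
# AI prompts for obvious NPI patterns, block on a match, and record the event.
# Real deployments would use proper DLP classifiers, not these crude regexes.

NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "routing_number_like": re.compile(r"\b\d{9}\b"),      # crude: any 9-digit string
    "account_number_like": re.compile(r"\b\d{10,17}\b"),  # crude: long digit runs
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names_of_matched_patterns) for an outbound prompt."""
    hits = [name for name, pattern in NPI_PATTERNS.items() if pattern.search(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt("Summarize income for SSN 123-45-6789 from the W-2.")
if not allowed:
    # Blocking satisfies the "implement" half; logging the event feeds the
    # monitoring and detection side of the program.
    print(f"Blocked outbound prompt; matched NPI patterns: {hits}")
```

The specific patterns are not the point. The point is that a control like this produces evidence of enforcement and detection, which a handbook paragraph never can.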

The FTC made this explicit in its 2023 enforcement guidance. Commissioner Alvaro Bedoya noted in public remarks that "a company cannot outsource its security obligations by telling employees to be careful." The Commission has consistently held that administrative controls alone, without technical enforcement, are insufficient to meet the Safeguards Rule standard.

Think about it from an examiner's perspective. If an institution can show that it deployed technical controls preventing NPI from being transmitted to unauthorized AI tools, and that it monitors for and detects attempts to circumvent those controls, it has a defensible position. If it can only show a policy document and a training log, it does not.

The Shadow AI Problem Is Structural

There is a reason employees paste data into ChatGPT: it makes them faster. Loan processing, client communications, regulatory filings, internal reporting. Generative AI is genuinely useful for all of these tasks. Telling employees not to use AI is like telling them not to use email in 2003. They will use it anyway; they will just hide it.

This is what makes shadow AI fundamentally different from other shadow IT problems. With unauthorized SaaS tools, you can monitor network traffic and block domains. With generative AI, the input vector is a text box in a browser. The data leaves through copy-paste, through file uploads, through API calls from browser extensions. Traditional DLP tools catch some of this, but most were not designed for the pattern of conversational data entry that characterizes AI tool usage.

Financial institutions need AI tools that employees can actually use for legitimate work purposes, but that operate within the institution's security perimeter and under its control framework. The alternative is not "no AI." The alternative is uncontrolled AI with escalating regulatory exposure.

What Examiners Will Ask

Based on recent examination trends and the updated Safeguards Rule requirements, institutions should expect regulators to ask specific questions about AI usage:

  • Has the institution conducted a risk assessment that specifically addresses generative AI tools, as required under 314.4(b)?
  • What technical controls prevent the transmission of NPI to unauthorized AI systems?
  • Does the institution's monitoring program, under 314.4(d), include detection of customer data being input into AI tools?
  • Has the Qualified Individual, required under 314.4(a), evaluated and documented AI-related risks in their annual report to the board?
  • If the institution has approved specific AI tools for use, were those tools evaluated under the institution's vendor management and third-party risk assessment programs?

If your compliance team cannot answer these questions with specifics and evidence, the gap is worth closing before someone asks.
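As one illustration of what "specifics and evidence" could look like in practice, the sketch below shows a possible structure for an audit record of an AI usage event. The field names are hypothetical, not taken from the rule or from any particular product, but each one lines up with a question on the list above.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical shape for an AI-usage audit record; field names are illustrative.
# Each field maps to an examiner question: who accessed data, which tool received
# it, whether that tool was vetted, whether NPI was detected, and what happened.

@dataclass
class AIUsageAuditEvent:
    timestamp: str        # when the prompt was submitted (UTC)
    user_id: str          # which authorized user made the request
    tool: str             # which AI system the prompt was sent to
    tool_approved: bool   # vetted under vendor / third-party risk review?
    npi_detected: bool    # did screening flag nonpublic personal information?
    action_taken: str     # "allowed", "blocked", or "redacted"

event = AIUsageAuditEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user_id="employee-0427",
    tool="approved-internal-assistant",
    tool_approved=True,
    npi_detected=True,
    action_taken="redacted",
)

# Stored as structured JSON, records like this can be queried during an exam
# rather than reconstructed from memory.
print(json.dumps(asdict(event), indent=2))
```

Records in this shape, retained under a defined policy and rolled up into the Qualified Individual's reporting, are the kind of evidence that turns the answers to the questions above from assertions into documentation.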

How FirmAdapt Addresses This

FirmAdapt was built around the premise that regulated companies need AI they can actually use without creating compliance exposure. For financial institutions subject to GLBA, FirmAdapt's architecture keeps NPI within the institution's control perimeter. Data inputs are processed under configurable retention and access control policies, with audit logging that maps to Safeguards Rule monitoring requirements under 314.4(d). The platform does not train on customer data, and its deployment model allows institutions to include it in their existing information security programs and vendor risk assessments.

Practically, this means employees get a capable AI tool for the work they are already trying to do, while compliance teams get the technical controls and audit trails that the updated Safeguards Rule actually requires. FirmAdapt integrates with existing DLP and SIEM infrastructure, so it fits into the monitoring programs institutions have already built rather than requiring a parallel compliance workflow.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free