FirmAdapt

Investment Bank Pitch Books and the Confidentiality Problem When Analysts Use ChatGPT

By Basel Ismail, May 7, 2026


A first-year analyst at a bulge bracket bank is working on an M&A pitch book at 2 AM. The deck needs to go to the managing director by 7 AM. There are 47 slides, the formatting is broken, and the analyst needs to draft a "strategic rationale" narrative for why Company A should acquire Company B. Company B is publicly traded. The deal has not been announced. The analyst pastes the merger details, financial projections, and target company name into ChatGPT to help restructure the text.

That interaction just created a regulatory exposure that most banks have not meaningfully addressed.

What Is Actually in a Pitch Book

For anyone outside banking, pitch books are the PowerPoint decks investment banks use to win advisory mandates and guide M&A transactions. They are dense, detailed, and frequently contain material nonpublic information (MNPI). A typical sell-side pitch book for a mid-market deal might include projected EBITDA for the target, a preliminary valuation range, a list of likely acquirers, deal structure preferences, and sometimes the seller's minimum acceptable price.

This is exactly the kind of information that triggers obligations under Section 10(b) of the Securities Exchange Act of 1934, SEC Rule 10b-5, and Regulation FD. If any of it reaches someone who trades on it, you have a textbook insider trading scenario. The SEC does not need to prove that the leak was intentional: courts have held that recklessness can satisfy the scienter requirement, and the misappropriation theory upheld in United States v. O'Hagan (1997) extends liability to anyone who trades on confidential information obtained in breach of a duty to its source.

Banks know this, of course. They spend enormous sums on information barriers, restricted lists, and surveillance systems. The problem is that those controls were designed for a world where data leakage happened through email, phone calls, and physical documents. Consumer AI tools represent a fundamentally different vector.

How the Leak Actually Happens

When an analyst pastes deal information into ChatGPT, Claude, or any consumer-tier large language model, that data is transmitted to a third-party server. OpenAI historically reserved the right to use consumer inputs for model training; it stopped training on API data by default in March 2023 and added a training opt-out for the consumer ChatGPT product in April 2023. But the consumer chat product's data handling remains a concern, and most analysts are not using the API.

The specific risks break down into a few categories:

  • Data retention by the AI provider. Even with updated policies, prompts may be retained for abuse monitoring for up to 30 days under OpenAI's current data usage policy. That is 30 days during which MNPI about an unannounced deal sits on a third-party server.
  • Training data contamination. If the data is used for training (which depends on the product tier and settings), deal details could theoretically surface in outputs to other users. This is a low-probability but non-zero risk, and "low probability" is not a defense the SEC tends to accept.
  • Third-party access. A breach of the AI provider's systems could expose deal information. OpenAI disclosed a data breach in March 2023 where some users could see other users' chat titles and, in certain cases, payment information. The breach was limited, but it demonstrated that the platform is a target.
  • Regulatory discovery. In an SEC investigation, the existence of prompts containing MNPI on a third-party platform creates a discovery nightmare. The bank's information barrier policies almost certainly do not contemplate data flowing to an AI provider's servers.

The Regulatory Framework Banks Are Bumping Against

SEC Rule 15c3-5 requires broker-dealers to maintain risk management controls. FINRA Rule 3110 requires supervisory systems reasonably designed to achieve compliance with applicable securities laws. Neither rule was written with generative AI in mind, but both are broad enough to cover it. If a bank's supervisory procedures do not address the use of consumer AI tools by employees with access to MNPI, a FINRA examiner could reasonably argue the firm's supervisory system is deficient.

Then there is the duty of confidentiality owed to clients. Investment banking engagement letters universally contain confidentiality provisions. Many reference the obligation to maintain information in accordance with the bank's internal policies. If those policies prohibit sharing client information with unauthorized third parties, and an analyst sends deal terms to OpenAI's servers, the bank may be in breach of its engagement letter. That is a contractual exposure on top of the regulatory one.

The SEC's 2023 examination priorities explicitly flagged firms' use of emerging technologies, and its enforcement activity since then shows the Commission is paying close attention to how firms use and describe AI. In July 2023, the SEC proposed rules that would require broker-dealers and investment advisers to address conflicts of interest associated with predictive data analytics, a signal of broader regulatory interest in AI governance at financial institutions.

FINRA issued a report in June 2024 covering AI-related practices at broker-dealers, noting that firms were at varying stages of developing governance frameworks for generative AI. The report specifically flagged the risk of employees inputting confidential information into third-party AI tools.

What Banks Have Done So Far (and Why It Is Not Enough)

Most major banks moved quickly to ban or restrict consumer AI tools. JPMorgan restricted ChatGPT use in February 2023. Goldman Sachs, Citigroup, and Bank of America followed with similar restrictions. Some firms have built internal AI tools; Morgan Stanley launched an OpenAI-powered assistant for its wealth management division in September 2023, built on a controlled, internal infrastructure.

But bans are only as effective as enforcement. A May 2023 survey by Fishbowl (a workplace social network) found that 43% of professionals reported using AI tools for work, and 70% of those said they did so without their employer's knowledge. There is no reason to think banking analysts are dramatically different, especially when the pressure to produce pitch books on brutal timelines has not changed.

The gap is between policy and architecture. Telling a 23-year-old analyst not to use ChatGPT while also expecting a polished 50-slide deck by morning is a compliance strategy that relies entirely on individual discipline under extreme pressure. Anyone who has worked in banking knows how that tends to go.

What banks actually need is an AI tool that analysts can use, one that handles confidential information within a controlled environment, does not transmit data to third-party servers for training, maintains audit logs for regulatory purposes, and operates within the firm's existing information barrier framework. The solution is not less AI; it is AI deployed with the right architecture.
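To make those requirements concrete, here is a minimal, hypothetical sketch of two of the pieces such a system needs: an immutable audit record per AI interaction, and an information-barrier check applied before any document reaches the model. All names and the data layout are illustrative assumptions, not any vendor's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One immutable log entry per AI interaction, retained for regulatory review."""
    user_id: str
    deal_code: str   # internal deal identifier, e.g. "PROJECT-ATLAS"
    prompt_hash: str # hash of the prompt, so the log itself never stores MNPI
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def may_access(user_id: str, deal_code: str,
               barrier_list: dict[str, set[str]]) -> bool:
    """Apply the same information barrier that governs human employees:
    only users cleared onto a deal team may route its documents to the model."""
    return user_id in barrier_list.get(deal_code, set())

# Example: only the staffed deal team is cleared on this mandate.
barriers = {"PROJECT-ATLAS": {"analyst_17", "vp_04"}}
print(may_access("analyst_17", "PROJECT-ATLAS", barriers))  # on the team
print(may_access("trader_09", "PROJECT-ATLAS", barriers))   # wrong side of the wall
```

The point of hashing prompts rather than storing them is that the audit trail can prove who sent what, when, and under which deal code without itself becoming a second repository of MNPI.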

The Enforcement Risk Is Real and Approaching

The SEC has not yet brought an enforcement action specifically involving AI-facilitated MNPI leakage. But the pattern is familiar. New technology emerges, employees adopt it before compliance catches up, and the SEC eventually makes an example of someone. The $1.8 billion in fines the SEC and CFTC levied against financial firms for off-channel communications (WhatsApp, personal text messages) between 2021 and 2024 is the most relevant precedent. Banks knew employees were using personal devices for business communications. They failed to capture and retain those communications as required. The fines were staggering.

Consumer AI usage by employees with MNPI access is the same pattern, arguably with higher stakes because the information involved can be directly market-moving.

How FirmAdapt Addresses This

FirmAdapt is built specifically for environments where confidential data cannot leave a controlled perimeter. The platform provides AI capabilities, including document drafting, formatting, and analysis, within an architecture that does not send data to third-party model providers for training or retention beyond the firm's control. Audit logs capture every interaction, which means the firm can demonstrate to regulators exactly how AI was used and confirm that MNPI stayed within authorized boundaries.

For investment banks, this means analysts get a tool that actually helps with pitch book production while compliance teams get the controls and documentation they need. FirmAdapt integrates with existing information barrier frameworks, so the AI respects the same access restrictions that apply to human employees. It is a practical answer to a problem that policy memos alone will not solve.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free