Tags: AI compliance, regulatory, financial services, banking, compliance, FINRA Rule 2210

Broker-Dealer Use of AI for Client Communications and the FINRA Rule 2210 Question

By Basel Ismail · May 5, 2026

FINRA Rule 2210 was written in 2012. The drafters were thinking about websites, email blasts, maybe some early social media. They were not thinking about a large language model generating personalized investment commentary at scale, sending it to thousands of retail clients, with no human reviewing the output before it goes out the door. And yet, the rule's framework maps onto this problem with surprising precision, which is exactly what makes the compliance gap so dangerous for firms that are moving fast with AI tooling.

A Quick Refresher on the 2210 Framework

Rule 2210 sorts broker-dealer communications into three buckets: institutional communications, retail communications, and correspondence. The classification matters because it determines the supervision obligation.

  • Retail communications (any written communication distributed or made available to more than 25 retail investors within any 30-calendar-day period) require principal pre-approval before first use. That means a registered principal reviews and signs off before the content reaches clients.
  • Correspondence (written communication distributed or made available to 25 or fewer retail investors within a 30-calendar-day period) requires supervision, but the firm has flexibility. You can use post-review sampling, lexicon-based monitoring, or other reasonable supervisory procedures.
  • Institutional communications (to institutional investors only) require supervisory procedures but not necessarily principal pre-approval.
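The three buckets reduce to a small decision rule. Here is a minimal Python sketch of that logic; the function and enum names are illustrative, not any official FINRA taxonomy:

```python
from enum import Enum

class Category(Enum):
    RETAIL_COMMUNICATION = "retail communication"   # principal pre-approval before first use
    CORRESPONDENCE = "correspondence"               # flexible supervision (e.g., sampling)
    INSTITUTIONAL = "institutional communication"   # supervisory procedures, no pre-approval required

def classify(institutional_audience_only: bool, retail_recipients_in_30_days: int) -> Category:
    """Map a written communication to its Rule 2210 bucket.

    `retail_recipients_in_30_days` is the number of retail investors the
    communication is distributed or made available to within any
    30-calendar-day period.
    """
    if institutional_audience_only:
        return Category.INSTITUTIONAL
    if retail_recipients_in_30_days > 25:
        return Category.RETAIL_COMMUNICATION
    return Category.CORRESPONDENCE
```

Note that the threshold is strictly greater than 25: a communication reaching exactly 25 retail investors in the window is still correspondence.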

The content standards in Rule 2210(d) apply across all three categories. Communications must be fair and balanced, cannot omit material facts, must have a reasonable basis, and cannot make exaggerated or unwarranted claims. If performance data is included, there are specific disclosure requirements. If projections or predictions are involved, the restrictions tighten further.

Where AI-Generated Content Creates Problems

Here is the scenario that keeps compliance teams up at night. A firm deploys an AI tool that generates personalized market commentary, portfolio summaries, or even responses to client inquiries. The AI produces content that is technically "correspondence" if it goes to individual clients, or "retail communication" if it is templated and distributed broadly. The question is whether anyone reviewed it before it went out.

The Classification Problem

AI-generated content blurs the lines between correspondence and retail communication in ways the rule did not anticipate. Consider an AI system that generates individualized portfolio review emails. Each email is unique, so it looks like correspondence. But if the underlying model and prompts are the same, and the system sends 500 of these in a week, FINRA could reasonably argue this is a retail communication that required principal pre-approval before first use. The firm thought it was doing correspondence. FINRA sees a retail communication that was never reviewed by a principal.
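One defensible engineering response is to count distribution per underlying template or prompt rather than per individual email, so the firm notices when a stream of "unique" correspondence crosses the 25-recipient threshold. A sketch, assuming a per-template send log (the class and threshold constants are hypothetical, not a FINRA-prescribed mechanism):

```python
from collections import defaultdict
from datetime import date, timedelta

WINDOW = timedelta(days=30)
RETAIL_THRESHOLD = 25  # Rule 2210: more than 25 retail investors in 30 days

class TemplateSendLog:
    """Track sends per underlying template/prompt, not per individual email."""

    def __init__(self):
        # template_id -> list of (send_date, recipient_id)
        self._sends = defaultdict(list)

    def record(self, template_id: str, send_date: date, recipient_id: str) -> bool:
        """Record a send; return True if this template now needs
        retail-communication treatment (principal pre-approval)."""
        self._sends[template_id].append((send_date, recipient_id))
        cutoff = send_date - WINDOW
        # Count distinct retail recipients inside the rolling 30-day window.
        recent = {r for (d, r) in self._sends[template_id] if d > cutoff}
        return len(recent) > RETAIL_THRESHOLD
```

In the 500-emails-in-a-week scenario above, this counter would flip on the 26th distinct recipient, long before the compliance team would otherwise notice.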

FINRA's Regulatory Notice 17-18 (April 2017) addressed digital communication but focused on social media and text messaging. It did not contemplate generative AI. However, the interpretive guidance in that notice reinforced that the substance and distribution of a communication, not the technology used to create it, determines its classification. That principle cuts directly against any argument that AI-generated content deserves a carve-out.

The Supervision Problem

Rule 2210(b)(1)(A) requires that a registered principal approve each retail communication before the earlier of its use or filing with FINRA. For AI-generated content at scale, this creates an obvious bottleneck. You cannot have a principal reviewing thousands of AI outputs in real time. Some firms are trying to solve this by pre-approving the prompts or templates that feed the AI, treating the prompt as the "communication" that gets reviewed. That is a creative reading, but it is probably wrong. The communication is what the client receives, not the instruction set that generated it. LLMs are non-deterministic. The same prompt can produce materially different outputs. Pre-approving the prompt does not mean you have approved the communication.

FINRA fined Cetera Advisor Networks $1.25 million in June 2022 for supervisory failures related to communications, including inadequate review procedures for client-facing content. The firm's written supervisory procedures existed on paper but did not match actual practice. That is the exact risk profile of a firm that "supervises" AI output by approving prompts rather than reviewing what actually gets sent.

The Content Standards Problem

LLMs hallucinate. This is well-documented and, at this point, an accepted characteristic of the technology. An AI system generating investment-related content might fabricate performance data, cite nonexistent research, omit material risks, or make forward-looking statements that cross the line into predictions. Each of these would violate Rule 2210(d). The fair and balanced requirement in 2210(d)(1)(A) demands that communications provide a sound basis for evaluating the facts. An AI that confidently presents fabricated information does the opposite.
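One common supervisory technique (Rule 2210 does not prescribe it) is a lexicon-based pre-screen that holds drafts containing performance claims, predictions, or superlatives for human review. A minimal sketch; the patterns below are illustrative stand-ins for a compliance-tuned lexicon, not a complete rule set, and a clean pass means "no lexicon hit," not "compliant":

```python
import re

# Illustrative patterns only; a production lexicon would be far broader
# and maintained by compliance staff.
FLAG_PATTERNS = {
    "performance claim": re.compile(r"\b(returned|outperformed|gained)\s+\d+(\.\d+)?%", re.I),
    "prediction": re.compile(r"\b(will|guaranteed to)\s+(rise|outperform|double|grow)\b", re.I),
    "superlative": re.compile(r"\b(best|safest|guaranteed)\b", re.I),
}

def screen(text: str) -> list[str]:
    """Return the Rule 2210(d) risk categories a draft trips, if any."""
    return [label for label, pattern in FLAG_PATTERNS.items() if pattern.search(text)]

def route(text: str) -> str:
    """Send flagged drafts to a principal review queue instead of to clients."""
    return "review-queue" if screen(text) else "release"
```

A screen like this catches phrasing, not fabrication: it will not detect a hallucinated but blandly worded citation, which is why sampling or full review of outputs remains necessary.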

FINRA's 2024 Annual Regulatory Oversight Report explicitly flagged AI and large language models as an area of emerging concern, noting that firms should evaluate how AI tools are used in communications and whether existing supervisory frameworks are adequate. The message was clear: FINRA is watching, and firms should not assume that existing WSPs cover AI-generated content without specific updates.

What Firms Should Be Doing Right Now

A few practical steps that compliance teams should consider, none of which require waiting for FINRA to issue AI-specific rulemaking:

  • Classify AI outputs explicitly. If an AI tool generates content that reaches more than 25 retail investors in a 30-day window, treat it as a retail communication and apply principal pre-approval. Do not rely on the fact that each individual output is "unique" to argue it is correspondence.
  • Build review workflows around outputs, not inputs. Pre-approving prompts is insufficient. Firms need to review actual generated content, either through sampling protocols (for correspondence) or full pre-review (for retail communications).
  • Update WSPs to address AI specifically. Your written supervisory procedures should name AI-generated content as a category and describe how it is supervised. Generic language about "electronic communications" will not hold up in an exam.
  • Implement guardrails at the model level. Restrict the AI from generating performance claims, forward-looking statements, or specific investment recommendations unless those outputs are routed through a compliance review queue.
  • Log everything. FINRA Rule 4511 requires firms to maintain books and records. AI-generated communications should be archived with metadata showing the prompt, the model version, the output, and whether it was reviewed. FINRA examiners will want to reconstruct what happened.

The Regulatory Trajectory

FINRA has not issued a dedicated rule or interpretive letter on generative AI in broker-dealer communications. But the existing framework is broad enough that enforcement actions could come under current rules without any new rulemaking. Rule 3110 (supervision) and Rule 2210 together give FINRA everything it needs to bring a case against a firm that deployed AI without adequate oversight. The SEC's July 2023 proposed rule on predictive data analytics (Release No. 34-97990), while focused on conflicts of interest, signaled a broader regulatory posture that AI-driven investor interactions will face scrutiny. Even though that specific proposal has stalled, the direction of travel is not ambiguous.

How FirmAdapt Addresses This

FirmAdapt's architecture routes AI-generated content through compliance review layers before it reaches clients. Rather than relying on prompt-level approval, the platform evaluates actual outputs against configurable rule sets that map to specific regulatory requirements, including Rule 2210(d) content standards. This means firms can use AI for client communications while maintaining the supervision structure that FINRA expects, with full audit trails that include model version, input, output, review status, and principal approval where required.

For broker-dealers specifically, FirmAdapt supports classification logic that flags when AI-generated content crosses the correspondence-to-retail-communication threshold, triggering the appropriate review workflow automatically. The goal is straightforward: let firms use AI without building a compliance liability they do not realize they have.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free