Tags: AI compliance, regulatory, financial services, banking, compliance, PCAOB

Why Audit Firms Are Quietly Banning Public AI Tools (and What They Use Instead)

By Basel Ismail | May 7, 2026

Over the past 18 months, every major audit firm has issued internal guidance restricting or outright prohibiting the use of public AI tools like ChatGPT, Google Gemini, and Copilot on client engagements. Most of these policies never made it into press releases. They circulated through internal memos, updated employee handbooks, and quiet updates to engagement protocols. But if you talk to anyone doing audit work right now, the picture is remarkably consistent: public generative AI is effectively off limits for anything touching client data, workpapers, or engagement deliverables.

What is interesting is not the bans themselves. It is the reasoning behind them, and what the firms are building (or buying) as replacements.

The Pattern Across Firms

Let's start with what we can piece together from public disclosures, conference remarks, and leaked internal guidance.

PwC announced a $1 billion investment in AI in 2023, partnered with Microsoft and OpenAI, and simultaneously told its 75,000 U.S. employees that client data cannot be entered into public AI tools. Their internal platform, built on Azure OpenAI Service with enterprise data protections, is the sanctioned alternative. Deloitte rolled out "PairD" internally, a proprietary AI assistant, while restricting use of consumer AI products. EY launched EY.ai, a $1.4 billion investment, and has been explicit that its 400,000 employees should use internally governed AI tools rather than public ones. KPMG went with a multi-model approach through its KymChat platform, again built with enterprise guardrails.

Mid-tier firms like BDO, Grant Thornton, and RSM have followed similar patterns, though with smaller budgets and more reliance on vendor solutions rather than custom builds.

The common thread: every firm wants AI's productivity gains, but none of them are willing to route client data through tools they do not control.

Why PCAOB and AICPA Standards Make This Inevitable

The regulatory logic here is straightforward once you look at the specific standards involved.

PCAOB AS 1220 (Engagement Quality Review) requires that engagement quality reviewers evaluate whether the engagement team exercised appropriate professional skepticism and obtained sufficient appropriate audit evidence. If an auditor uses a public AI tool to draft analysis or summarize financial data, the firm has a real problem demonstrating that the output was properly evaluated, that the underlying data was protected, and that the tool's reasoning can be inspected.

PCAOB AS 2301 (The Auditor's Responses to the Risks of Material Misstatement) requires auditors to design and implement responses to identified risks. Using an opaque AI model to assist with risk assessment, without understanding how it reached its conclusions, creates an obvious tension with this standard.

On the AICPA side, the Code of Professional Conduct, Section 1.700.001 (Confidential Client Information Rule) is the big one. It prohibits disclosure of confidential client information without consent. When you paste client financials into ChatGPT, you are transmitting that data to OpenAI's servers. Even with OpenAI's enterprise API data policies, the consumer product's terms of service historically allowed use of inputs for model training. The AICPA rule does not have a carve-out for "well, the AI company says they probably won't misuse it."

The AICPA also issued guidance in December 2023 through its Assurance Services Executive Committee, emphasizing that firms using AI in audit engagements must maintain documentation standards consistent with AU-C Section 230 (Audit Documentation). You need to be able to show what the AI did, what data it accessed, and how the auditor evaluated the output. Public AI tools give you essentially none of that.
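To make the documentation requirement concrete, here is a minimal sketch of what a workpaper entry for an AI-assisted procedure might capture. The function and field names are illustrative, not drawn from AU-C 230 itself; the point is simply that each entry records what the AI did, what data it accessed, and how the auditor evaluated the output.

```python
import json

def document_ai_step(tool, data_accessed, output_summary, auditor_evaluation):
    """Build one hypothetical workpaper entry for an AI-assisted procedure.

    Field names are illustrative; AU-C 230 describes documentation objectives,
    not a schema.
    """
    entry = {
        "tool": tool,                              # what the AI did (which tool, which task)
        "data_accessed": data_accessed,            # what client data it touched
        "output_summary": output_summary,          # what it produced
        "auditor_evaluation": auditor_evaluation,  # how the auditor assessed the output
    }
    # Serialize so the entry can be archived alongside the engagement workpapers.
    return json.dumps(entry, indent=2)
```

A public chat interface retains none of these elements in a form the firm controls, which is precisely the gap the AICPA guidance points at.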

And then there is the SEC's own interest. In a March 2024 speech, SEC Chief Accountant Paul Munter explicitly noted that audit firms' use of AI does not diminish their responsibilities under existing standards. The SEC is watching, and firms know it.

What the Replacement Architecture Looks Like

The internal tools these firms are building share several design principles worth noting:

  • Data isolation. Client data stays within the firm's tenant. No data is sent to shared model training pipelines. Most implementations use Azure OpenAI Service or AWS Bedrock specifically because these platforms offer contractual guarantees against using customer data for model improvement.
  • Audit trails. Every query, every response, every document accessed by the AI is logged. This is not optional; it is a direct response to AU-C 230 and PCAOB documentation requirements.
  • Role-based access controls. The AI can only access data that the specific user is authorized to see. This sounds basic, but public AI tools have zero concept of engagement-level data segregation.
  • Human-in-the-loop requirements. Firms are not letting AI generate final workpaper conclusions. The tools draft, summarize, and flag anomalies. Partners and managers review and approve. This maps directly to PCAOB AS 1201 (Supervision of the Audit Engagement).
  • Model governance. Firms are tracking which model versions are used on which engagements. If a model is updated mid-engagement, they want to know. This level of version control is nonexistent in consumer AI products.
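The five principles above can be sketched in a few dozen lines. The following is a hypothetical gateway, not any vendor's actual API: every name (`GovernedGateway`, `AuditRecord`, the pinned model version string) is invented for illustration, and the model call is a stand-in for a request to an isolated tenant endpoint. What it shows is how access control, logging, version pinning, and reviewer sign-off compose into one choke point.

```python
import datetime
from dataclasses import dataclass

@dataclass
class AuditRecord:
    timestamp: str
    user: str
    engagement: str
    model_version: str
    prompt: str
    response: str
    reviewed_by: str = ""  # human-in-the-loop: empty until a reviewer signs off

class GovernedGateway:
    """Hypothetical single choke point for all AI usage on engagements."""

    def __init__(self, model_version: str):
        self.model_version = model_version  # model governance: version pinned per deployment
        self.acl = {}                       # user -> set of engagements they may access
        self.log = []                       # append-only audit trail of every interaction

    def grant(self, user: str, engagement: str) -> None:
        self.acl.setdefault(user, set()).add(engagement)

    def query(self, user: str, engagement: str, prompt: str) -> AuditRecord:
        # Role-based access: refuse queries against engagements the user is not on.
        if engagement not in self.acl.get(user, set()):
            raise PermissionError(f"{user} is not on engagement {engagement}")
        # Stand-in for a call to an isolated model endpoint (e.g. a private tenant
        # with contractual no-training guarantees). No real API is invoked here.
        response = f"[draft analysis for {engagement}]"
        record = AuditRecord(
            timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
            user=user,
            engagement=engagement,
            model_version=self.model_version,
            prompt=prompt,
            response=response,
        )
        self.log.append(record)  # every interaction is logged before it is returned
        return record

    def approve(self, record: AuditRecord, reviewer: str) -> None:
        # Human-in-the-loop: output is only usable once a reviewer signs off.
        record.reviewed_by = reviewer
```

The design choice worth noting is that the log entry is written before the response reaches the user, and approval is a separate step performed by a different person. That ordering is what lets a firm answer an inspector's "show me what the AI did on this engagement" with a record rather than a reconstruction.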

What This Tells Us About Compliance-First AI Adoption Generally

Audit firms are a useful leading indicator here because they operate under some of the most prescriptive professional standards in any industry. But the underlying logic applies broadly to any regulated entity.

The core issue is not whether AI is useful. It obviously is. The issue is whether you can demonstrate, to a regulator or in litigation, that your use of AI was governed, documented, and consistent with your existing obligations. For audit firms, those obligations come from PCAOB and AICPA. For a hospital system, it is HIPAA. For a defense contractor, it is CMMC and ITAR. The specific statutes differ, but the architectural requirements converge: data isolation, access controls, audit logging, human oversight, and model governance.

This is why the "just use ChatGPT" approach has a shelf life in regulated industries. It is not that the technology is bad. It is that the deployment model is incompatible with compliance obligations. The firms that figured this out first, the Big Four, had the budgets to build custom solutions. Everyone else needs a different path to the same destination.

One More Thing Worth Noting

The PCAOB's 2024 inspection priorities explicitly mentioned firms' use of technology, including AI, as an area of focus. Inspectors are asking about AI policies during inspections. Firms that cannot articulate their governance framework for AI usage are going to have findings. This is not a hypothetical future risk; it is happening in current inspection cycles.

How FirmAdapt Addresses This

FirmAdapt was built around the same architectural principles that the Big Four spent billions implementing internally: tenant-level data isolation, comprehensive audit logging, role-based access controls, and human-in-the-loop workflows. The difference is that FirmAdapt delivers this as a platform rather than requiring each firm to build it from scratch. For mid-tier audit firms and other regulated companies that need compliant AI but lack a billion-dollar development budget, this is a practical path forward.

FirmAdapt's architecture maps directly to the requirements we have discussed here, from AICPA confidentiality rules to PCAOB documentation standards to the broader data governance expectations that regulators across industries are converging on. The platform maintains detailed records of every AI interaction, enforces data segregation at the engagement level, and provides the model version tracking that compliance teams need for inspection readiness. If your firm is navigating AI adoption under regulatory scrutiny, this is the kind of infrastructure that makes the conversation with your regulator a short one.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free