FirmAdapt
Tags: AI compliance, regulatory, financial services, banking, compliance, LP DDQs

Private Equity Firms and the AI Tool Question in LP Due Diligence

By Basel Ismail · May 9, 2026

If you've reviewed a limited partner due diligence questionnaire in the last twelve months, you've probably noticed something new sitting between the cybersecurity section and the ESG disclosures: questions about artificial intelligence. Not vague, philosophical ones. Specific, operational questions about what AI tools your firm uses, how they interact with LP data, and what governance framework you have around them.

This shift happened fast. In 2022, most institutional LP DDQs from major allocators like CalPERS, CPP Investments, and the large endowments had zero AI-specific questions. By mid-2024, the Institutional Limited Partners Association (ILPA) updated its DDQ template to include technology and data governance questions that clearly contemplate AI usage. Several large pension funds and sovereign wealth funds have gone further, adding bespoke AI sections to their own questionnaires. If you're a GP raising a fund right now, you need a coherent answer to these questions. And "we don't really use AI" is increasingly not credible.

What LPs Are Actually Asking

The questions cluster into a few categories, and they're more granular than you might expect:

  • Inventory questions: What AI tools does the firm use across investment analysis, portfolio monitoring, investor relations, compliance, and back-office operations? This includes third-party SaaS products with AI features, not just purpose-built models.
  • Data handling questions: Does LP data, portfolio company data, or deal pipeline information flow into AI systems? If so, which ones, and what are the data retention and processing terms?
  • Governance questions: Does the firm have an AI usage policy? Who owns it? Is there a review or approval process before new AI tools are adopted?
  • Third-party risk questions: How does the firm evaluate AI vendors? Are there contractual provisions addressing data use for model training, subprocessor access, and confidentiality?
  • Regulatory compliance questions: How does the firm's AI usage align with SEC guidance, particularly the SEC's proposed rule on predictive data analytics (released July 2023, still pending finalization), and with applicable data protection regulations like GDPR or CCPA?

Some LPs are also asking whether AI outputs influence investment decisions and, if so, whether there's human oversight in the loop. This is directly relevant to fiduciary duty analysis under the Investment Advisers Act of 1940, since the SEC has signaled through enforcement actions and staff guidance that advisers can't outsource judgment to algorithms without appropriate supervision.

Why GPs Get This Wrong

The most common mistake is treating these questions as a checkbox exercise. A GP will write something like "the firm uses AI tools in accordance with applicable laws and regulations" and move on. That answer was fine for boilerplate cybersecurity questions five years ago. It does not work here, because LPs are asking these questions for a reason: they're getting pressure from their own boards, beneficiaries, and regulators.

CalSTRS, for example, has been publicly vocal about technology governance in its investment portfolio since at least 2023. The Ontario Teachers' Pension Plan has incorporated technology risk into its manager evaluation framework. When these allocators ask about AI, they want substance. They want to see that you've thought about it.

The second common mistake is underreporting. Firms will disclose their use of a deal sourcing platform but forget that their portfolio monitoring tool added a GPT-based analytics feature in a recent update. Or they'll skip mentioning that their IR team uses an AI-powered CRM. LPs are sophisticated enough to know that AI is embedded in a huge range of enterprise software now. An answer that lists zero or one AI tool looks like incomplete disclosure, not a clean bill of health.

The Confidentiality Problem

There's a real tension here that deserves attention. Many GP-LP agreements include strict confidentiality provisions around deal flow, portfolio company financials, and LP information. When a GP feeds portfolio company revenue data into a third-party AI analytics tool, the question of whether that constitutes a disclosure under the LPA's confidentiality provisions is genuinely unsettled.

Most enterprise AI vendor agreements now include clauses about not using customer data for model training, but the specifics vary wildly. OpenAI's enterprise terms, for instance, differ significantly from their consumer API terms on this point. Microsoft's Copilot for M365 has different data handling than Azure OpenAI Service. If your firm is using these tools, you need to know which version you're on and what the data flow actually looks like. LPs will ask, and your outside counsel should have reviewed the vendor agreements before you answer.

Building a DDQ-Ready AI Governance Framework

The good news is that the bar for "good" answers is still relatively low, because most GPs haven't done this work yet. A firm that can present a clear, documented AI governance framework will stand out in a fundraise. Here's what that looks like in practice:

  • Maintain an AI tool inventory. Every AI-enabled tool the firm uses, including features within broader platforms, should be cataloged. Note what data each tool accesses, where data is processed and stored, and what the vendor's data use terms are.
  • Adopt a written AI usage policy. This doesn't need to be a hundred pages. It should cover approved tools, prohibited uses (e.g., no LP PII in consumer-grade AI chatbots), approval processes for new tools, and incident response procedures.
  • Assign ownership. Someone at the firm, whether it's the CCO, COO, or a designated technology officer, should own AI governance. LPs want a name, not a committee.
  • Map AI usage to existing regulatory obligations. Your firm already has obligations under the Advisers Act, Regulation S-P (amended in May 2024 to expand incident response requirements), and likely GDPR if you have European LPs. Show how your AI governance integrates with these existing frameworks rather than sitting in a silo.
  • Document human oversight for investment decisions. If AI tools inform deal screening, valuation, or portfolio allocation, document the human review process. The SEC's 2024 examination priorities explicitly flagged firms' use of AI in advisory functions as an area of focus.
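The inventory and review items above lend themselves to structured records rather than a spreadsheet tab nobody updates. As a minimal sketch (the field names, example tools, and flagging rules here are illustrative assumptions, not a prescribed schema or any firm's actual policy), an inventory entry and a pre-DDQ screening check might look like:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in a firm's AI tool inventory (illustrative schema)."""
    name: str
    vendor: str
    function: str                    # e.g. "deal sourcing", "IR CRM"
    data_accessed: list[str]         # e.g. ["portfolio financials"]
    trains_on_customer_data: bool    # per the vendor agreement's terms
    approved: bool                   # passed the firm's review process
    owner: str                       # accountable person, not a committee

def flag_for_review(inventory: list[AIToolRecord]) -> list[str]:
    """Return tool names to resolve before answering a DDQ:
    unapproved tools, and tools whose vendor terms permit training
    on customer data while the tool touches firm or LP data."""
    flagged = []
    for tool in inventory:
        if not tool.approved:
            flagged.append(tool.name)
        elif tool.trains_on_customer_data and tool.data_accessed:
            flagged.append(tool.name)
    return flagged
```

Even a simple check like this makes the DDQ exercise mechanical: the inventory answers the LP's "what do you use" question directly, and the flag list tells you which vendor agreements need counsel review before you sign the response.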

One practical note: the ILPA's 2024 DDQ updates also cross-reference cybersecurity and operational due diligence sections. Your AI governance answers need to be consistent with what you're saying about data security, business continuity, and vendor management elsewhere in the DDQ. Inconsistencies across sections are a red flag that experienced LP due diligence teams will catch immediately.

The SEC Dimension

It's worth noting that the SEC's interest in AI at investment advisers goes beyond the proposed predictive data analytics rule. In March 2024, the SEC settled charges against Delphia (USA) Inc. and Global Predictions Inc. for making misleading claims about their use of AI in investment processes, resulting in combined civil penalties of $400,000. The message was clear: if you say you're using AI, you'd better be using it the way you describe. The inverse applies too. If you tell LPs you're not using AI and the SEC finds otherwise during an examination, you have a disclosure problem.

Chair Gensler's tenure emphasized AI risk repeatedly, and while the leadership has changed, the examination staff's focus on technology governance has not diminished. The Division of Examinations' 2025 priorities continue to list AI and emerging technologies as a key area.

How FirmAdapt Addresses This

FirmAdapt was built for exactly this kind of regulatory intersection, where a firm needs to use AI operationally while maintaining auditable compliance with evolving disclosure and governance requirements. The platform's architecture keeps data processing within defined boundaries, maintains detailed logs of what data flows through AI features, and provides the documentation trail that LP DDQ responses require. When an LP asks what AI tools you use and how data is handled, FirmAdapt gives you a concrete, defensible answer rather than a vague assurance.

For PE firms specifically, FirmAdapt's compliance-first design means you can point to vendor-level controls, data isolation, and governance documentation that map directly to the ILPA DDQ framework and SEC examination expectations. It's the difference between telling an LP "we have policies" and showing them an architecture that enforces those policies by default.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free