Private Equity Marketeer
AI in Investor Relations
2026 Benchmark
98%
use AI for IR work at least weekly
88%
use two or more AI tools actively
65%
have a formal AI governance policy
57%
cite DDQ / RFP automation as a top forward priority
Co-Authors
Henry Yan
Principal · Whistler Capital Partners
Job Sanderman
Founder · Private Equity Marketeer
01 — Respondent Profile
Who answered this survey?
Forty IR and capital formation professionals across private markets — the majority at Director level and above, with significant authority over tools and workflow design. This wave adds respondents from LATAM and broadens coverage to Multi-Strategy and Real Assets.
72% of respondents have shared or primary decision-making authority over IR tools and workflow design. Team size skews small — 59% have 1–5 FTEs in IR — but 32% operate teams of 10 or more, reflecting the breadth of firm sizes in the sample.
"Investor reporting and IR communications have become more demanding over the last two years"
Strongly Agree 18%
Agree 52%
Neutral 20%
Strongly Disagree 10%
70% agree or strongly agree that demands on IR have increased. A sceptical minority of 10% strongly disagree — consistent across waves.
02 — AI Maturity
Where firms stand on the maturity curve
The cohort spans the full maturity spectrum. The share of firms in production or scaled has grown to 33%, with piloting representing the single largest cohort at 35%. One respondent reports no AI use and no plans — a lone exception in an otherwise AI-active group.
30%
Exploring
Informal trials underway — no defined processes yet.
35%
Piloting
Defined use cases being tested — moving to workflow.
28%
In Production
AI embedded in specific IR processes routinely.
5%
Scaled
Embedded across major IR workflows firm-wide.
2%
No Use
No AI use and no plans currently.
Usage Frequency
98% of respondents use AI for IR work at least weekly.
AI Governance Policy
Formal Policy
Legal / Compliance approved
26 of 40
65% have a formal AI policy. However, 15% operate on informal guidance only and 10% have no policy at all — a meaningful exposure given LP data sensitivity.
03 — Use Cases
Where AI is delivering value in IR today
DDQ/RFP response leads at 55%. Drafting investor emails and quarterly report narratives are now tied in second place at 38%, reflecting growing breadth of AI application across IR communications. Supporting an internal IR knowledge repository holds at 32%.
Most Valued Use Cases — % citing as top 3
No. 01
Preparing responses to DDQs or RFPs
22 of 40 · 55%
No. 02=
Drafting investor email communications
15 of 40 · 38%
No. 02=
Drafting narrative sections of quarterly reports
15 of 40 · 38%
No. 04
Supporting an internal IR knowledge repository
13 of 40 · 32%
No. 05
Summarising portfolio company updates for LP reporting
10 of 40 · 25%
No. 06
Summarising meeting notes
9 of 40 · 22%
AI Capabilities Currently Being Used
Drafting / summarising
92%
Drafting and summarising is near-universal at 92%. PDF extraction now leads document search at 55% vs 50% — a signal that teams are increasingly working with unstructured document sources, not just querying approved content libraries.
04 — The Tools Stack
The AI tools IR teams are using
ChatGPT leads at 79%. Claude holds at 55% — the firm #2 tool, now clearly ahead of Microsoft Copilot at 40% and Gemini at 32%. Multi-tool usage has climbed to 88%, reflecting a maturing and more experimental stack. Perplexity and CRM-embedded AI each reach 10%.
Claude's 55% share gives it a meaningful margin over Copilot and Gemini, and the jump in multi-tool users to 88% points to a cohort that is actively experimenting across platforms rather than standardising on one.
05 — Barriers to Adoption
What's slowing AI down in IR
Data confidentiality leads at 55%, with poor data quality at 48%. Notably, compliance and legal risk has risen to third place at 35%, overtaking internal expertise and training. Output reliability — hallucination, tone, and trustworthiness — is increasingly named as practical friction.
Data confidentiality or privacy concerns
LP data sensitivity, fund-level confidentiality
22 / 55%
Poor data quality or fragmented internal systems
Siloed data, CRM gaps, inconsistent structures
19 / 48%
Compliance or legal risk
Regulatory uncertainty, output liability
14 / 35%
Lack of internal expertise or training
Teams not equipped to evaluate or implement; time to train
12 / 30%
Lack of approved internal tools
No sanctioned platform — staff on personal accounts
8 / 20%
Vendor or technology risk
Hallucination rates, AI making things up, reliability of outputs
5 / 12%
No clear business case or ROI
1 / 2%
Leadership scepticism and ROI concerns remain near-absent. The challenge is operational — clean data, cleared tooling, trained teams, and increasingly, AI that produces outputs trustworthy enough for LP-facing use. Output quality has become a first-order concern.
06 — Operational Pain Points
The daily friction AI is asked to solve
Preparing or updating LP-facing materials tops the list at 62%. Investor identification holds second place at 42%, reinforcing the gap between where teams are struggling and where AI is currently deployed. DDQ turnaround and bespoke LP requests round out the top four.
01
Preparing or updating LP-facing materials
62% · 25 of 40
02
Identifying relevant investors for fundraising
42% · 17 of 40
03=
Meeting DDQ / RFP turnaround timelines
38% · 15 of 40
03=
Responding to bespoke LP information requests
38% · 15 of 40
05
Aggregating investor or portfolio data from internal systems
32% · 13 of 40
06
Logging LP interactions in CRM
30% · 12 of 40
LP-facing materials dominate at 62% — extending well beyond DDQ/RFP to decks, quarterly reports, and pitch content. Investor identification at 42% remains a significant pain point where AI deployment is still nascent, pointing to a clear unmet opportunity.
Sponsor Feature
AI-powered CRM in action
With LP-facing materials and investor identification topping the operational pain list, see how Juniper Square's AI CRM is addressing these challenges directly.
07 — The 12–24 Month Opportunity
Where AI should go next in IR
DDQ and RFP automation and first-draft pitch and reporting materials are now exactly tied as the top forward priorities — both cited by 57% of respondents. IR teams are ready to hand AI the blank page, not just the Q&A. Automating internal approval workflows appears as a new priority at 15%.
01=
Preparing DDQ or RFP responses
23 of 40 · 57%
01=
Producing first-draft pitch or reporting materials
23 of 40 · 57%
03
Extracting portfolio data for LP reporting
16 of 40 · 40%
04
Drafting investor communications
14 of 40 · 35%
05
Responding to LP information requests
13 of 40 · 32%
06
Searching approved internal IR materials
10 of 40 · 25%
07
Compliance pre-checks for LP communications
8 of 40 · 20%
08
Automating internal approval workflows
6 of 40 · 15%
DDQ/RFP retains its double mandate as top current use case and co-top forward priority, but for the first time content production is level with automation in the minds of IR teams: both are cited by 57% of respondents.
Don't have time to evaluate tools and implement — that's the real bottleneck.
Head of IR · Venture Capital · North America
Ensuring AI doesn't make anything up. Still not fully comfortable trusting everything that is AI generated.
Senior Investor Services Manager · Real Assets · EMEA
The greatest opportunity is identifying and reaching out to new LPs — AI should do more of the discovery work.
IR Head · Venture Capital · APAC
08 — Key Findings
Six things this data tells us
Taken together, the April 2026 data points to a cohort that has moved past early experimentation and is now navigating the harder questions of quality, governance, and where AI goes next.
✦
AI is embedded in daily IR work — and maturing
74% use AI every day. The production and scaled cohort now stands at 33%. The question has shifted decisively from adoption to depth, quality control, and institutional readiness.
✦
DDQ/RFP leads as both top use case and top opportunity
55% cite it as a top-3 value driver. DDQ and first-draft materials are now tied as the #1 forward priority at 57%, and DDQ turnaround is the third biggest operational pain point at 38%.
✦
Claude holds #2 at 55% — the firm second platform
Claude maintains 55% adoption across the expanded respondent base, ahead of Microsoft Copilot at 40% and Gemini at 32%. IR AI is consolidating around ChatGPT and Claude as the two dominant general-purpose platforms.
✦
Compliance risk overtakes training as a barrier
Compliance and legal risk has risen to third place at 35%, overtaking internal expertise (30%). Data confidentiality (55%) and data quality (48%) remain the top two — AI readiness is inseparable from data infrastructure readiness.
✦
Output quality is the frontier problem
12% cite vendor and technology risk — and open-text comments name hallucination rates, AI that "makes things up," and inappropriate tone as practical, real-world friction for LP-facing outputs. Quality is now a first-order concern.
✦
Investor identification is the unmet AI opportunity
42% cite investor identification as a top operational pain point. Yet it barely registers in current AI use cases. This gap between where teams are struggling and where AI is deployed points to a significant unmet opportunity in private markets fundraising.
09 — Vendor Spotlight
Specialist tools named by respondents
Beyond the headline platforms, respondents volunteered these tools in open-text comments — a window into the specialist layer being evaluated alongside general-purpose AI. Mixed signals on newer entrants continue to emerge.
Loopio
DDQ / RFP Automation
DiligenceVault
DDQ / RFP Automation
RFPIO
DDQ / RFP Automation
NormAI
DDQ / RFP Automation
Hebbia
Enterprise Document Search
ToltIQ
Due Diligence / VDR Analysis
Glean
Enterprise Knowledge Search
Notion AI
Productivity / Knowledge Management
Perplexity
AI Research & Synthesis
GovernGPT
AI Governance & Compliance
Dakota's Joe
Investor Targeting / CRM AI
Note: Tools listed here were named by respondents in open-text comment fields only. Category labels reflect the tool's primary function based on publicly available information. Sentiment indicators reflect respondent comments and do not constitute an endorsement or assessment by PEM. Mixed ratings reflect individual respondent open-text comments.