Our Take
The survey numbers mask a deeper problem: firms are deploying conversational AI in regulated advice contexts without meeting hallucination-control and auditability requirements.
Why it matters
Financial advisors face legal exposure when AI generates plausible-sounding but incorrect retirement projections or suitability assessments that clients act on.
Do this week
Compliance teams: audit all client-facing AI outputs for hallucination controls and decision traceability before next regulatory review.
95% of wealth firms deployed AI without solving core risks
Wealth management firms have moved fast on generative AI adoption: 95% of 100 surveyed firms are running multiple live use cases (per EY study), and another 78% are exploring agentic AI tools for strategic advantage. Client demand is driving adoption: 78% of 2,100 respondents across 19 countries use generative AI for investment information (per Bridgewise report).
But the deployment pace has outrun risk management. Only 28% of 3,600 wealth management clients trust AI as much as their human advisor (per EY survey). Meanwhile, 46% of survey respondents remain unsure whether generative AI helps or threatens their practice (per Morningstar report).
The core technical problem is hallucination in regulated advice contexts. "In wealth management, where a client might act on a projected retirement income figure or a suitability assessment, the consequences can be financially and legally devastating," said Fredrik Davéus, CEO of Kidbrooke.
Regulatory liability exceeds typical AI deployment risks
Wealth management operates under fiduciary responsibility and regulatory precision requirements that general-purpose AI systems cannot meet. "The industry is not short of AI. It is short of AI that understands advice," said Hari Menon of Intellect.
The risk profile differs from typical enterprise AI deployments. Context loss across client interactions, non-compliant outputs, and decisions that cannot be reconstructed for regulatory review create direct legal exposure. "In client communication an incident could quickly become a conduct and mis-selling risk," explained Anna Golubeva of EXANTE.
Even purely informational AI outputs carry liability. Models deployed without alignment to a firm's investment strategy can generate content inconsistent with its advisory framework, undermining client trust and creating regulatory gaps.
Governance-first architecture required for compliance
Effective implementation requires treating generative AI as a high-risk capability with detailed oversight. "If you can't explain it, log it, and reproduce it, you shouldn't be using it," said Golubeva.
The technical solution involves structured integration rather than direct client interaction. Davéus recommends an orchestration layer between large language models and deterministic financial analytics engines, where "the LLM's role is to interpret and communicate the outputs of proven financial models, not to replace them."
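The orchestration pattern Davéus describes can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the engine, function names, and growth/withdrawal assumptions are all hypothetical stand-ins for a firm's certified analytics. The key property is that the LLM is only asked to phrase figures the deterministic engine already produced, never to compute them.

```python
from dataclasses import dataclass

@dataclass
class ProjectionResult:
    """Output of a deterministic financial model (hypothetical stand-in)."""
    annual_income: float
    confidence_band: tuple  # (low, high)

def project_retirement_income(portfolio_value: float, years: int,
                              withdrawal_rate: float = 0.04) -> ProjectionResult:
    # Deterministic engine: a toy compound-growth-plus-withdrawal-rate model
    # standing in for the firm's certified analytics. Same inputs always
    # yield the same outputs, so results can be reproduced for audit.
    future_value = portfolio_value * (1.05 ** years)
    income = future_value * withdrawal_rate
    return ProjectionResult(
        annual_income=round(income, 2),
        confidence_band=(round(income * 0.8, 2), round(income * 1.2, 2)),
    )

def build_llm_prompt(result: ProjectionResult) -> str:
    # Orchestration layer: the LLM interprets and communicates the engine's
    # output; the numbers are fixed before the model ever sees them.
    return (
        f"Explain to the client, in plain language, that the projected annual "
        f"retirement income is {result.annual_income}, with a plausible range "
        f"of {result.confidence_band[0]} to {result.confidence_band[1]}. "
        f"Do not alter these figures."
    )

result = project_retirement_income(500_000, years=20)
print(build_llm_prompt(result))
```

Because the figures are computed upstream, a hallucinated number cannot reach the client: the worst-case LLM failure is awkward phrasing, which is a conduct risk several orders of magnitude smaller than a fabricated projection.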
Three governance requirements emerge: certified data sources aligned with firm investment policies, comprehensive logging of all interactions for audit trails, and clear boundaries defining where AI operates versus human oversight. The industry is moving toward domain-specific "Advice Intelligence Systems" built explicitly for wealth management constraints rather than adapted general-purpose models.
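The three requirements above can be made concrete as an audit-trail record per interaction. The schema below is a hypothetical sketch, not a regulatory standard: field names are illustrative, and the content hashes simply make it cheap to verify later that a logged prompt and output have not been altered, supporting Golubeva's "explain it, log it, and reproduce it" test.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_interaction(client_id: str, prompt: str, model_version: str,
                    data_source: str, output: str,
                    human_reviewed: bool) -> dict:
    """Build one audit-trail record for a client-facing AI interaction."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "model_version": model_version,   # pin the exact model for reproducibility
        "data_source": data_source,       # must be a certified, policy-aligned source
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewed": human_reviewed, # boundary: advice requires human sign-off
    }

record = log_interaction(
    client_id="C-1042",
    prompt="Summarise this client's portfolio drift since last review.",
    model_version="llm-v3.2",
    data_source="firm-certified-market-feed",
    output="Your equity allocation has drifted 4% above target...",
    human_reviewed=True,
)
print(json.dumps(record, indent=2))
```

In practice such records would be written to append-only storage keyed by client and timestamp, so that any output a regulator questions can be traced back to the model version, data source, and reviewer that produced it.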