04/23/2026
AI is compressing the cost of “complex” financial work. The psychological cost of decisions is not compressible.
McKinsey has projected a shortfall of 90,000 to 110,000 advisors by 2034, which makes AI-driven productivity feel inevitable. They also cite client demand shifting toward judgment-based support, with interest in holistic advice rising from 29% in 2018 to 52% in 2023.
Here is the ethical and clinical wrinkle I see with UHNW families: the faster a platform can generate an answer, the more seductive it becomes to treat that answer as certain. Under stress, humans outsource responsibility. In wealth systems, that can quietly become moral licensing: “the model said it was prudent,” so no one owns the downstream consequences.
In my work, I use AI routinely. What it teaches me is that confidence and correctness are different variables, and families need a protocol for that difference. Define which decisions require human accountability, document dissent, and build in a pause before implementation when emotions are elevated.
This is like rescue diving. Depth changes everything, visibility narrows, and the discipline is staying tethered to your partner and your checks even when you feel capable on your own.
For advisors and family offices, what is your explicit policy for when AI outputs conflict with a client’s values, grief, or risk capacity?
I advise ultra-high-net-worth leaders, families, and advisors on the psychology of wealth, power, and legacy — UHNW reps, DM me if you'd like to explore working together.