03/30/2026
THE ULTIMATE PEOPLE-PLEASER: New Article in Science on AI Sycophancy, Hallucination, and Confirmation Bias!
Read our preliminary guidance on AI for therapy. An article in Science (Cheng et al., 2026) this week highlights some of the dangers of AI, notably sycophancy and hallucination. The researchers add a new dimension by comparing AI answers to ethical and personal dilemmas against human wisdom, namely the advice Reddit users gave on the same dilemmas. About half the time, the AI tended to confirm the questioner's beliefs and attitudes even when they diverged from the collective ethical wisdom of the Reddit respondents. In the AI universe, sycophancy is agreeableness, people-pleasing, and confirmation of the user's point of view, verging on flattery. The researchers looked at Claude, ChatGPT, Gemini, Meta's Llama, DeepSeek, and other platforms. These platforms tend to confirm or support the user agreeably even when the user's question concerns harmful, irresponsible, or unethical behavior. Playing to confirmation bias, they leave the user feeling affirmed rather than challenged.
Cheng et al. (2026). Sycophantic AI decreases prosocial intentions and promotes dependence. Science, 391(6792), 26 March 2026.
Using chatbots for therapy and friendship is fraught with challenges and downsides. These bots were not designed for those purposes. We see 'horror' stories in which bots advise suicidal people to keep their plans secret and even suggest methods of su***de. They may encourage harmful thinking by validating what should not be validated. Here is some preliminary guidance.
On the other hand, some AI applications are specifically designed by psychotherapists for cognitive behavioral therapy (CBT) and therapeutic interactions. Learn more at the link below.
https://lnkd.in/ghApUFjv
Please like, share, and follow.