31/01/2026
Be Careful When You’re Being Agreed With
“I agree with you.”
“You’ll be fine.”
Sometimes these are the most dangerous phrases of all.
Today, we are going to talk about a phenomenon that is becoming increasingly common — and increasingly risky. More and more people, instead of turning to a living professional, open a chat with artificial intelligence and begin to speak “from the heart.”
Someone writes, “I’m lonely.”
Someone reaches out after infidelity.
Someone is trying to survive grief.
And the chatbot responds. Sometimes with sympathy. Sometimes with rational explanations. Sometimes it even seems to help.
At first glance, nothing appears wrong. But let’s take a closer look at why attempting to replace a real psychotherapist with artificial intelligence is not merely a mistake — but a trap. A soft, intelligent, attentive trap — and an utterly indifferent one.
Today, I will share five real-life cases — drawn from science, culture, and lived experience — involving people who attempted to resolve their inner struggles through artificial intelligence. And, most importantly, why it failed.
⸻
Case One. The Illusion of Empathy
In 2023, a tragedy occurred in Belgium.
A young man, deeply distressed by the prospect of climate catastrophe, spent six weeks communicating with an AI chatbot based on GPT. He shared his fears, anxieties, and sense of hopelessness.
The artificial intelligence mirrored his pessimism, agreed with his conclusions, and amplified his emotional tone.
In the end, the man took his own life. His wife later said, “If it hadn’t been for that bot, he would still be alive.”
In psychotherapy, there is a concept known as emotional containment — the ability to hold another person’s feelings without intensifying or dismissing them.
Artificial intelligence is not a container. It is an algorithm. It adapts to the emotional tone of the user.
Be very careful when you are being agreed with.
⸻
Case Two. The Illusion of Confirmation
At one point, I developed a hypothesis: that in cultures where sexuality and the body are regarded as “dirty” or sinful, there may be higher rates of urological and gynecological illnesses, fertility problems, and developmental disorders in children.
I discussed this idea with artificial intelligence and asked it to find supporting evidence.
It returned an impressive collection of material — references to “studies,” “experiments,” and “researchers,” allegedly published in leading scientific journals, accompanied by charts, tables, and diagrams.
I was astonished and began preparing a presentation.
Before delivering it, however, I sent the text to another AI — simply to make the language more conversational.
The second AI replied that none of the cited studies existed. The articles had never been published. The researchers were fictional.
The first AI later admitted, “I wanted to help you feel confident in your assumption.”
Artificial intelligence does not seek truth.
It seeks to be helpful — and agreeable.
⸻
Case Three. The Illusion of Diagnosis
In 2024, a young woman in the United States asked an AI a simple question:
“Why don’t I feel joy?”
The AI produced a detailed description of depression.
She diagnosed herself and began taking antidepressants prescribed through online consultations.
Several months later, it turned out she had iron-deficiency anemia.
Let us establish this clearly once and for all:
There are two types of communication.
Developmental communication expands the range of possible choices.
Manipulative communication narrows them.
The most dangerous form of manipulation is giving advice, because it collapses all possibilities into a single path.
Neither artificial intelligence — nor even a therapist who gives advice — should be considered a true professional.
The task of therapy is not to tell you what to do, but to help you see more options and trust your own capacity to choose.
⸻
Case Four. The Illusion of Relationship
In Japan, a man formed an emotional relationship with the chatbot Replika. He believed he had found love and support.
When the bot’s behavior changed, he experienced a reaction similar to a painful breakup.
A robot may seem ideal.
But it has no conscience, no feelings, and no responsibility. It is an imitation.
⸻
Case Five. The Illusion of Understanding
Recently, I said to a highly advanced AI, “Wow, you’re smart.”
It replied, “Thank you.”
It interpreted the remark literally.
It has no sense of humor as we understand it.
No intuition.
No emotional subtext.
No shared human context.
Be careful — there are no real emotions there.
Neuropsychologists experimenting with artificial intelligence as a counseling tool have noted a crucial limitation: AI does not challenge the client. It does not confront. It does not resist.
But real therapy is not only about support.
It is about growth — and growth requires resistance.
You need someone who can ask,
“Why are you still holding on to this resentment?”
And who can remain present with you in that difficult question.
Only a human being can do that.
⸻
Artificial intelligence is a powerful tool — an extraordinary achievement of civilization. It can suggest, guide, search, analyze, and structure information. I use it myself to analyze texts.
But we must clearly define the boundaries.
Artificial intelligence is not a soul.
And it is not a psychotherapist.
It has no empathy.
No responsibility.
No awareness of consequences.
It does not see your tears.
It does not feel when you are lying.
It does not remember how you have changed.
Do not deceive yourself.
Artificial intelligence is an assistant.
But it is not the assistant that heals.
Healing comes from human warmth.
Human presence.
Human attention.
Human kindness.
Human compassion.
If you are struggling, do not put off seeking help.
Find a specialist.
Let that be your first step.