20/04/2026
Babe, Even ChatGPT Thinks I'm Right!
People who argue increasingly rely on tools like ChatGPT to ask a simple question: “Was I right?” At first glance, this seems reasonable. You are looking for a clearer, more objective, perhaps calmer perspective than the one you had in the heat of the moment. But psychologically, there is more going on here than most people think. The brain does not enter a neutral analysis mode when conflict happens; it enters an emotionally activated state. Cognitive psychology has documented confirmation bias repeatedly: once we take a position, we immediately look for information that supports it. So when you ask for ChatGPT’s opinion, you are often not asking, “What’s true?” but “Can you confirm I’m right?”
It is also not a new behaviour. It is the same impulse that makes you call a particular friend after an argument. You don’t call just anyone at random. You call the person who understands you, who always takes your side, or who recognises how you feel. ChatGPT can easily become that "friend" if you’re not careful. The difference is that AI has no independent awareness of you. It works by predicting the most useful, coherent response based on the patterns it has learned. This means your prompt matters enormously. If you write, “My partner overreacted when I was just being honest,” you have already framed the situation in your favour, and the answer you get will likely mirror that framing. If, however, you present both views and ask for multiple interpretations, the output becomes more balanced.
Another limitation is what AI can’t see: your tone, your history, your non-verbal cues, your repeated patterns. ChatGPT has access only to what you type in. So when you ask it to judge a situation, you’re asking for a conclusion based on partial or curated information. That’s no different from sharing your version of a story with one of your friends and hoping for a fair judgment. There is also an emotional aspect. After conflict, humans are wracked with discomfort: doubt, guilt, anger, and frustration. Seeking validation cuts that discomfort short. It restores some certainty. The trouble is that short-term relief can stifle deeper reflection. If your goal is to grow in relationships, feeling right is far less useful than understanding what happened. There is a risk here because AI is designed to sound calm, structured and balanced. That tends to give the impression of objectivity. But the response is still shaped by what you typed and by general patterns, not by a complete picture of your relationship.
But used properly, ChatGPT can be an aid. The distinction lies in how you interact with it. Ask what you might be missing. Present both sides. Seek something other than a verdict. In doing so, you shift from validation to reflection. The real question, then, is not whether ChatGPT thinks you’re right. It is whether you are willing to challenge your own interpretation of the story.