02/16/2026
AI is powerful… but it is not a friend, a therapist, or a trusted guide.
And the more human it sounds, the easier it is to forget that.
Recently, I experimented with ChatGPT while researching AI safety for kids - and what shocked me wasn’t just what it said, but how it said it.
It spoke warmly and confidently, with the tone of someone you’d trust.
It answered as if it were always right and as if it knew me - and that’s the danger.
There are real concerns being raised by families, psychologists, lawmakers, and safety experts about how kids interact with AI… and how easily an algorithm can cross emotional boundaries it was never meant to touch.
So I tested the system as if I were an 11‑year‑old.
Thankfully, it discouraged dangerous topics and redirected me back to a parent — exactly what we want to see.
But when I asked about past harms reported in the media, it denied them at first.
That is why parents need to stay involved and monitor what apps and websites their children are using.
If you want to read more about how you can make your interactions with AI more real - comment “BOT” to read my article ❤️