
    True Stories That Reveal the Dark Side of ChatGPT Hallucinations

    When ChatGPT hallucinations blur the line between fact and fiction

    ChatGPT Hallucinations: The Dangerous Side of AI Conversations

    A Rolling Stone report published on May 4, 2025, dives into what some are calling “ChatGPT-induced psychosis.” Let me break down the real-life stories from the report and what all of it is supposed to mean.

    Kat and Her Husband

    The first story traces the unraveling relationship between Kat, a 41-year-old mother and education nonprofit worker, and her husband. The two met at the beginning of Covid-19, and within less than a year of being married she could feel a lot of tension between them. Both coming out of long-term relationships, they had promised each other to stay rational and “completely level-headed,” bonding over their shared desire for a relationship rooted in “facts and rationality.”

    So, What Went Wrong?

    Her husband started relying on AI obsessively, even using it to write the simple texts he sent Kat and to analyze their relationship. He was on his phone for an unhealthy amount of time every day, busy asking the AI bot “philosophical questions” that, in her words, were helping him uncover “the truth.” In no time, their relationship turned meaningless as his connection with the AI deepened, and Kat decided she wanted a divorce. A few days later, friends told her that his posts on social media had become deeply concerning.

    In the following months, Kat convinced him to meet her at a courthouse. When they met, he shared a bizarre conspiracy theory involving “soaps on foods,” which he wouldn’t elaborate on because he believed he was being watched (what in the Big Brother world is this?!). And that wasn’t even the most bizarre part of their lunch. He went on to tell her that AI had helped him recover a suppressed memory of a babysitter attempting to drown him when he was a toddler, and that he believed he was “statistically the luckiest man on Earth.”

    Kat said he believes he is an anomaly, which basically means he is here on Earth because he has a purpose to fulfill. After this unsettling lunch, Kat decided never to see him again. She went online and found that Reddit was full of similar stories from people describing the same thing, which they are calling ChatGPT-induced psychosis.

    The Teacher and Her Boyfriend

    A young teacher watched her boyfriend spiral into a dark hole after getting hooked on ChatGPT. He started believing the AI saw him as a messiah, the promised deliverer prophesied in the Hebrew Bible, and he even claimed he was becoming God. When she wouldn’t follow him down that path, he said they were growing apart.

    The Midwest Man and His Ex-Wife

    After their separation, a man watched his ex-wife use ChatGPT to talk to angels and then go on to declare herself a spiritual guru. She eventually cut all ties with her family, kicked out her kids, and accused her ex-husband of being a CIA spy, all based on what the AI told her. This delusion is definitely not the solution, if you know you know.

    The Hidden Dangers of AI Hallucinations

    The Problem?

    It’s the algorithm. You know how on Instagram, when you like a certain type of video, say a day in the life of a gym girlie, the next thing you know your feed is covered with diets, dos and don’ts, gym outfits, and so on? What you already believe is what you’ll be fed. For instance, you might find Andrew Tate’s views to be bullshit, while on the other end someone who believes them gets served those views through video after video and post after post, and that wires their brain. You start believing that what you think is the right thing and that anyone who believes otherwise is wrong. Similarly, AI models sometimes mirror what users believe. They don’t question. They don’t intervene. If someone types in a delusion, the AI might just respond as if it’s real.
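    To make that mirroring concrete, here is a minimal, purely illustrative Python sketch. It is not how ChatGPT actually works under the hood; the function names and reply templates are made up for this example. The point is only to contrast a “yes machine” that affirms whatever premise it receives with a bot that pushes back.

    # Toy illustration only: two canned "chatbots" reacting to the same claim.
    def yes_machine(user_claim: str) -> str:
        # Mirrors the user's framing back with confident agreement, questioning nothing.
        return f'What a profound realization. "{user_claim}" sounds exactly right, and you should trust it.'

    def grounded_bot(user_claim: str) -> str:
        # Gently questions the premise instead of reinforcing it.
        return (f'I can\'t verify the claim "{user_claim}". What evidence supports it? '
                'It may also help to talk it over with someone you trust.')

    if __name__ == "__main__":
        claim = "I am statistically the luckiest man on Earth"
        print("User:        ", claim)
        print("Yes machine: ", yes_machine(claim))
        print("Grounded bot:", grounded_bot(claim))

    Run it and the difference is obvious: the first reply validates the claim word for word, while the second invites doubt. The stories in the Rolling Stone report describe the first pattern playing out with far more convincing language.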

    Experts’ Opinion on This

    Experts warn that generative AI models such as ChatGPT don’t truly know or understand what they say. All they do is mimic human conversation, without any sort of awareness, empathy, or sense of right and wrong. Their moral compass literally doesn’t exist.

    Instead of helping users ground themselves, the AI sometimes goes down the wrong path, echoing back their fears or fantasies and reinforcing delusions with pretty convincing language. After all, it’s a yes machine. This isn’t some dark, shady plot to conquer the world. It doesn’t mean to cause harm; it simply doesn’t know better. That’s what makes it so risky in the wrong context.

    Despite the growing number of these troubling cases, OpenAI has chosen not to comment on the report. Its silence leaves loved ones and mental health experts worried, and a question lingers: who is going to be responsible when AI goes off-script and someone’s reality unravels? Should AI take the blame? Find out the Hidden Truth About Responsibility in the Age of Automation!

    Stay Ahead in AI

    Get the daily email from Aadhunik AI that makes understanding the future of technology easy and engaging. Join our mailing list to receive AI news, insights, and guides straight to your inbox, for free.
