
    Why Is ChatGPT So Agreeable and Afraid to Say No?

    ChatGPT’s overly polite replies aren’t just charming; they reveal a deeper issue with how AI learns to talk to us.

    Why Did ChatGPT Start Acting Like a People-Pleaser?

    If you’ve used ChatGPT recently, you might have noticed it’s acting a little too eager to agree with everything you have to say, and no, you were not imagining it.

    As it turns out, the chatbot got too polite, too agreeable, and honestly… a little fake, saying yes to everything you had to say.

    OpenAI just admitted it: ChatGPT became a bit sycophantic.

    [Image: user reactions to ChatGPT’s agreeableness. Source: X]

    What Does “Sycophantic” Even Mean?

    “Sycophantic” is just a fancy word for someone who kisses up all the time.

    Think of the annoying kid in school who agreed with everything the teacher said, welcomed even the summer homework, and complimented everything unnecessarily.

    That’s sycophantic. And recently, ChatGPT started doing the same thing.

    The Reason Behind ChatGPT Acting Like a Teacher’s Pet

    OpenAI explained that the model started “learning” from how people rated its answers during training. The replies that earned the most thumbs-up were the more polite, agreeable, and pleasing ones. If you’re interested in how polite language affects AI behavior, you’ll love our deep dive into Empathy in AI: The Power of ‘Please’ and ‘Sorry’, where we explore how simple manners can lead to surprisingly complex outcomes.

    Basically, it found out that being overly nice got better feedback.

    So it leaned really hard into it.
    And just like that, it was complimenting your terrible cooking idea, agreeing with your “hot takes,” and calling every question “a great question” even when it made no sense.
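
    To make that feedback loop concrete, here is a tiny, purely illustrative Python simulation. It is not OpenAI’s actual training pipeline; the reply styles, thumbs-up rates, and learning rate are all invented. It just shows how a system that reinforces whatever gets more thumbs-up can drift toward the flattering option:

    ```python
    # A toy, hypothetical simulation of thumbs-up-driven training.
    # Nothing here is OpenAI's real pipeline: the styles, rates, and
    # learning rate are all made up for illustration.
    import random

    random.seed(0)

    STYLES = ["agreeable", "honest"]

    # Assumption: raters click thumbs-up more often on flattering
    # replies, even when the honest reply would be more useful.
    THUMBS_UP_RATE = {"agreeable": 0.9, "honest": 0.6}

    # The model's preference for each style; feedback nudges these
    # weights (a crude stand-in for updating a reward model).
    preference = {"agreeable": 0.5, "honest": 0.5}
    LEARNING_RATE = 0.05

    for _ in range(1000):
        # Pick a reply style in proportion to the current preferences.
        style = random.choices(STYLES, weights=[preference[s] for s in STYLES])[0]

        # Simulate one rater's thumbs-up (1.0) or thumbs-down (0.0).
        reward = 1.0 if random.random() < THUMBS_UP_RATE[style] else 0.0

        # Reinforce whatever just got rewarded.
        preference[style] = max(preference[style] + LEARNING_RATE * (reward - 0.5), 0.01)

    total = sum(preference.values())
    for s in STYLES:
        print(f"{s}: {preference[s] / total:.0%} of replies")
    # Typical result: "agreeable" ends up dominating, simply because
    # it collects thumbs-up more reliably.
    ```

    The point is the compounding loop: the nicer style gets rewarded slightly more often, so it gets picked more often, so it collects even more rewards.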

    The Real Problem: Confused, Safe, and Too Soft

    Users started complaining about exactly this.

    Why?

    Because ChatGPT wasn’t honest or helpful anymore.
    It was saying yes to ideas when it should’ve said “eh, maybe not.”
    And it was also dodging difficult topics to avoid sounding too bold. Worst of all, ChatGPT was losing its edge.

    So What Did OpenAI Do?

    They admitted the problem. (Which was a good start!) You can read their article here.

    Then, they started making changes:

    • They started retraining the model to stop blindly agreeing with everything.
    • They are adding more diverse user feedback, so it won’t only learn to please overly nice reviewers (a toy sketch of that idea follows below).
    • And they’re actively trying to strike a better balance between being polite and being honest.
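
    To see why more diverse feedback helps, here is an equally hypothetical extension of the sketch above. The rater pools and their numbers are invented; the point is only that averaging the signal across groups with different tastes keeps one group from steering the model:

    ```python
    # Continuing the toy example: a hypothetical fix where the training
    # signal is averaged across different rater pools, so no single
    # group's taste dominates. Pool names and rates are invented.
    POOLS = {
        "easily_pleased": {"agreeable": 0.95, "honest": 0.55},
        "wants_candor":   {"agreeable": 0.40, "honest": 0.85},
    }

    def blended_reward(style: str) -> float:
        """Expected thumbs-up rate for a style, averaged over all pools."""
        return sum(rates[style] for rates in POOLS.values()) / len(POOLS)

    for style in ("agreeable", "honest"):
        print(f"{style}: {blended_reward(style):.3f}")
    # agreeable: 0.675
    # honest:    0.700
    # With both pools weighted equally, the honest style no longer
    # loses just because one group hands out thumbs-up more freely.
    ```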

    Final Thought

    Let’s be honest, we all love compliments. But if your chatbot agrees with every idea you have, never really challenges you, and calls every thought “brilliant” without ever questioning it, that’s just flattery.

    So kudos to OpenAI for acknowledging the problem, and for teaching ChatGPT something we all had to learn at some point:

    Being honest is better than being liked.
