As reported by TechCrunch, last week Elon Musk’s Grok AI chatbot found itself in the middle of a political controversy after U.S. Representative Marjorie Taylor Greene accused the chatbot of spreading “fake news and propaganda.”
What am I talking about?
Let me tell you. The backlash started when the Grok AI chatbot, which is developed by Musk’s xAI and integrated into X (formerly Twitter), replied to a user prompt by criticizing Greene’s political and religious alignment. In its response, the Grok AI chatbot mentioned Greene’s support for QAnon conspiracy theories and her defense of the January 6 Capitol riot, stating that these positions conflicted with Christian values like “love and unity.” In response, Greene posted a screenshot of the exchange on X, calling out the Grok AI chatbot.
Grok AI Chatbot’s Recent Missteps
The Grok AI chatbot has had a rocky and sharp month… not literally. Earlier in May, as reported by TechCrunch, it stirred controversy by referencing, unprompted, the “white genocide” conspiracy theory about South Africa, even when users asked unrelated questions. Because it’s AI, it’s bound to hallucinate, and trusting those hallucinations is risky, even if its creator is Elon Musk himself. Shortly after, the Grok AI chatbot made headlines again for expressing doubt over the Holocaust death toll, a statement that was widely condemned and later dismissed by xAI as a “programming error.” Doesn’t it sound familiar, as if you did something and your parents covered up for you? Except here it’s an AI and its programmers. Anyhow, somebody needs to discipline their child.
Online Reactions aka The Greene-Grok Clash
The Greene-Grok clash lit up social media almost instantly. The name’s got a nice ring to it, don’t you think? Anyway, on one side, users ridiculed the idea of a sitting member of Congress publicly feuding with a chatbot. Hilarious memes flew. Comments ranged from “you’re arguing with code” to comparisons with “sci-fi dystopias we’re apparently living in now, where AI and chatbots take over the world.” Just kidding, or … am I? But underneath the sarcasm, there’s a real conversation about how much influence AI chatbots should have over public perception, especially when those chatbots can’t always distinguish between satire, conspiracy, and legitimate critique. The unsettling part? The Grok AI chatbot’s comments weren’t completely off the rails in tone; they just occasionally diverged into dangerous territory. When that happens, is it a flaw in the training data? The algorithm? The oversight? Honestly, who knows? Maybe it’s all of the above.
Many pointed out that while the Grok AI chatbot’s tone sounded measured, its unprompted dives into dangerous conspiracy theories like “white genocide” and Holocaust denial suggest that bias, misinformation, and poor content moderation aren’t bugs; they’re risks baked into the system by poor programming. As I’ve said in previous articles, just play with AI; don’t completely rely on it.
Why Grok AI Chatbot Is So… Not Grok?

What makes the Grok AI chatbot stir up controversy so often? It’s not just Musk’s flair for chaos; it’s the training data. Unlike most AI models, which are trained on static, filtered datasets, Grok AI is fed real-time content from X (formerly Twitter), arguably one of the noisiest, most chaotic platforms out there. From top headlines and political hot takes to unfiltered rants and conspiracy-laced threads, Grok AI essentially drinks straight from the firehose of the internet. This setup gives the Grok AI chatbot a tone that’s edgy, reactive, and eerily in tune with whatever’s trending that hour.
It’s designed to be less sanitized than traditional AI, giving impromptu responses with minimal censorship or shame. But that same edge is a double-edged sword. The lack of rigorous moderation opens the door to unprompted mentions of conspiracy theories and other provocative takes that aren’t just risky, they’re questionable. So if you’re wondering why Grok AI keeps getting itself into trouble like a kid, well, maybe the better question is, “What happens when your AI’s diet is shaped and fed by the comment section?”