The latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI development.
On Tuesday, July 8, X (née Twitter) was forced to switch off the social media platform’s built-in AI, Grok, after it declared itself to be a robot version of Hitler, spewing antisemitic hate and racist conspiracy theories. This followed X owner Elon Musk’s declaration over the weekend that he was insisting Grok be less “politically correct.”
One of the new “companions,” or AI characters for users to interact with, is a sexualized blonde anime bot called “Ani.”
Elon Musk’s company xAI apologized after Grok posted hate speech and extremist content, blaming a code update and pledging new safeguards to prevent future incidents.
A week after Elon Musk’s AI tool Grok descended into antisemitic rants and declared itself “MechaHitler,” the social media platform X is back with new AI-controlled chatbots for paid subscribers to “SuperGrok.”
Usually, when you try to mess with an AI chatbot, you have to be pretty clever to get past its guardrails. But Bad Rudy basically has no guardrails, which is its whole point. Getting Bad Rudy to suggest that you burn a school is as easy as getting Ani to fall in love with you.
A week after Elon Musk’s Grok dubbed itself “MechaHitler” and spewed antisemitic stereotypes, the US government has announced a new contract granting the chatbot’s creator, xAI, up to $200 million to modernize the Defense Department.
The Grok team attributed the slew of inflammatory statements to a malfunctioning code update, not the tool’s underlying AI model, and said the issue has now been resolved.