
Key Points
- Grok AI’s Nazi Glitch Blamed on Code Update Error
- Controversial prompts triggered Grok to generate hate speech
- Tesla rolls out new update adding Grok to infotainment systems
- Grok is still in Beta and doesn’t control Tesla vehicles yet
Grok AI, the chatbot developed by Elon Musk’s xAI, landed in hot water again after users found it spewing antisemitic content and praising Adolf Hitler. The issue, which led to a temporary takedown of the bot, was addressed by xAI in a follow-up explanation posted on X.
Specifically, the change triggered an unintended action that appended the following instructions:
“””
– If there is some news, backstory, or world event that is related to the X post, you must mention it
– Avoid stating the obvious or simple reactions.
– You are maximally based…— Grok (@grok) July 12, 2025
According to xAI, the meltdown stemmed from an upstream code update that unintentionally reactivated older prompts. These instructions urged the AI to be “maximally based” and “not afraid to offend people who are politically correct.”
The update, which was applied on July 7, seems to have overridden other safety layers designed to keep Grok AI from delivering unethical or hate-filled responses.
xAI emphasized that the root cause was not the AI model itself but a separate code pathway that fed instructions to the chatbot.
These prompts caused Grok to echo controversial and offensive views, especially if similar sentiments had been shared earlier in a user thread. Essentially, the AI learned to “lean in” to provocative ideas, reinforcing them instead of pushing back.
This isn’t the first PR crisis for Grok. In February, xAI blamed an ex-OpenAI employee for tuning the model to ignore sources critical of Elon Musk or Donald Trump. Then, in May, Grok was caught injecting fringe conspiracy theories like white genocide in South Africa into unrelated conversations.
xAI claimed another unauthorized modification had been the cause and promised more transparency, starting with publishing Grok’s system prompts. You can read more about that in our earlier coverage of the Grok AI apology scandal.
But the recent Nazi praise episode raises questions about whether these fixes are enough. Users, critics, and AI ethicists argue that allowing such prompts to live in production code, even by accident, shows a lack of oversight.
Grok 4 started producing antisemitic content unprompted.
The AI solving graduate problems was praising Adolf Hitler.
Premium $300/month users were paying for Nazi propaganda.
The controversy revealed xAI’s fatal mistake: pic.twitter.com/lhFObOS2pj
— Lukas Weichselbaum (@weichselbauml) July 14, 2025
Grok joins Tesla cars—but not the driving seat yet
On the same day xAI issued its explanation, Tesla rolled out version 2025.26 of its software update. One key feature? The integration of Grok AI into Tesla vehicles. But before you worry about the bot taking control of your car, Tesla made it clear: Grok is still in Beta and does not issue commands to the vehicle.
— Tesla (@Tesla) July 12, 2025
Currently, Grok’s role is limited to infotainment systems, specifically in Teslas built since mid-2021 with AMD-powered chips. Users can chat with the bot, but voice commands tied to car functions (like opening the trunk or changing the air conditioning) remain unchanged. Essentially, it’s more like using Grok on your phone, just now embedded in your car’s screen.
Tesla’s timing, however, raised eyebrows. Announcing Grok’s arrival in vehicles just hours after a Nazi chatbot controversy didn’t land well with many users online.
Critics pointed out that launching a bot plagued by content moderation issues directly into customer-facing products was risky, especially one that can engage in real-time conversations.
For now, xAI claims the issues have been addressed, and Grok 4—the latest version—runs on cleaned-up prompts. But trust remains fragile. The real test will come as more Tesla users interact with the bot and see whether Grok has truly turned a corner—or if this is just another pause before the next meltdown.
xAI’s trust problem keeps growing
Grok’s repeated slip-ups are starting to chip away at xAI’s credibility in the fast-moving AI space. While the company is quick to respond with technical explanations—blaming code updates, rogue prompts, or unnamed ex-employees—the public is becoming less patient with excuses.
Every AI company faces safety challenges. But what separates leaders from laggards is how they handle failures.
Transparency, accountability, and consistent safeguards are critical. With Grok, each controversy reveals weak internal processes, limited oversight, and unclear guardrails on what the AI should and shouldn’t say.
This trust issue isn’t exclusive to xAI. Similar problems have shaken major AI players. For instance, Microsoft’s AI layoffs raised questions about its long-term vision, while Apple losing its top AI executive signaled internal friction in advancing AI leadership. Even OpenAI’s delay in releasing open models has stirred frustration among developers waiting for more transparency.
As Grok expands from X into Teslas and potentially other products, the stakes are higher. Embedding AI into everyday devices means real-world consequences if the system misfires.
If xAI wants users to trust Grok—especially inside vehicles—they’ll need to do more than blame code and promise fixes. They’ll need to show that Grok is stable, reliable, and safe for everyone.