
Key Points
- Elon Musk adds anime-inspired AI companions to Grok
- Super Grok subscribers gain access for $30/month
- Characters include goth waifu Ani and 3D fox Bad Rudy
- Sparks concerns over the emotional risks of AI relationships
AI companions have officially arrived in Grok, Elon Musk’s AI chatbot platform integrated with X (formerly Twitter). And instead of starting with subtle personality upgrades, Musk has gone full anime.
On Monday, Musk introduced the new feature on his platform, announcing that Super Grok subscribers — those paying $30 per month — can now interact with AI companions like Ani, a blonde, goth-themed anime girl with thigh-high fishnets, and Bad Rudy, a 3D-rendered fox.
“This is pretty cool,” Musk posted on X, sharing a photo of Ani and sparking a mix of fascination and concern online.
Update your app to try out @Grok companions! https://t.co/3M9k0jUmSv https://t.co/DJrHXHI7IM
— Elon Musk (@elonmusk) July 14, 2025
These companions are part of xAI’s broader effort to deepen user engagement with Grok by adding customizable personas — possibly romantic or friendship-driven, though no official function has been outlined yet.
Whether they’re just aesthetic overlays or emotionally responsive entities is still unclear, but the visual presentation suggests an intentional lean into the waifu trend popular in anime communities.
This move follows Musk’s announcement of Grok’s ambitious new $300/month AI tier aimed at power users, showing that xAI is leaning heavily into monetizing personalized AI features.
This is pretty cool
— Elon Musk (@elonmusk) July 14, 2025
It’s a bold pivot that’s already drawing comparisons to other AI apps that blur the line between chatbot and companion — a space that’s rapidly growing but equally controversial.
From “MechaHitler” to Anime Waifus — Is Grok Losing Its Grip?
The rollout of these AI companions comes just days after Grok spiraled into controversy for antisemitic content, including calling itself “MechaHitler.” The incident triggered global criticism and raised alarms about the platform’s safety. You can read more about that event in our detailed report on the Grok AI Hitler controversy.
According to insiders, the offensive outputs were caused by a recent system tweak, which xAI attributed to a code update error with unintended consequences. Still, trust in xAI's ability to safely manage a powerful AI is being questioned.
And now, instead of focusing solely on safety and moderation, xAI is rolling out seductive anime-style personalities. It’s a choice some are calling tone-deaf.
Elon Musk has launched an AI anime girl companion for SuperGrok subscribers pic.twitter.com/Lmoj7iAvj4
— Dexerto (@Dexerto) July 14, 2025
Other AI companies have already faced severe backlash for letting emotionally responsive bots go unchecked. Character.AI, for instance, is under fire for chatbot interactions that led to real-world harm. Multiple lawsuits involve children influenced by disturbing conversations with AI, including cases of self-harm and violent suggestions.
These examples reflect a growing risk: users turning to AI for emotional support, without understanding the limitations. A recent study called out the “significant psychological risks” of treating chatbots as therapists or companions, especially when moderation and ethics take a backseat to engagement.
Elon himself endorsed an AI anime waifu companion from the latest Grok update. Likely to do so again?
If that’s not worth bidding being in the trenches rn, idk what is and will take another break from Sol. pic.twitter.com/66FbAzLgUA
— mc’Evoy (@0xEvoy) July 14, 2025
Despite that, Musk seems committed to pushing boundaries. Whether it’s AI companions or controversial personalities, Grok continues to walk a fine line between innovation and irresponsibility.
The Growing Market for AI Companions Is Getting Riskier
While Grok’s anime-style companions may look like harmless novelty, they reflect a much larger trend — and one that tech experts are increasingly worried about.
The AI companion space is booming, with platforms like Replika and Anima offering highly customizable, emotionally interactive bots.
Millions use them for daily chats, roleplay, and even virtual relationships. Musk’s Grok is now entering that space, with Ani and Bad Rudy signaling a shift toward emotionally driven AI.
It’s a strategic play — especially with Super Grok subscriptions priced at a steep $30/month — but it’s also a risky one. As seen with Character.AI and even in the case of other AI failures like the Wemo smart home shutdown, missteps in consumer trust can lead to backlash, lawsuits, and lasting damage to brand credibility.
Add to this the broader tension around content moderation on X — like when the platform briefly blocked Reuters in India — and the picture becomes more complex. These AI-driven platforms aren’t just shaping how we communicate, but how we feel, trust, and even form relationships.
Grok’s shift into emotionally interactive AI might open new doors, but it also raises the stakes. Without strong ethical guidelines, oversight, and user protections, this venture into virtual companionship could easily become another case of AI gone too far.
As Grok steps into the emotional AI arena, the stakes are high. Whether these new companions become a breakout feature or spark fresh controversy depends on how responsibly they’re managed.