Elon Musk’s AI Praised Hitler. Now He Wants It to Teach Your Kids

With Elon Musk, controversy and public relations campaigns often chase one another. He seems to like it that way. Just days after his Grok chatbot made headlines for generating antisemitic content and praise for the Nazis, the billionaire announced he wants the same AI to help raise your children.

Elon Musk’s latest AI announcement was not about building a more powerful, all-knowing intelligence. Instead, it was about creating a smaller, safer one.

“We’re going to make Baby Grok @xAI,” he posted on X (formerly Twitter) on July 20, adding, “an app dedicated to kid-friendly content.”

He did not provide further details. Dubbed “Baby Grok,” the new app promises a family-friendly version of Musk’s AI assistant, positioned as a learning and entertainment tool for children. But given Grok’s troubled history and Musk’s own combative approach to content moderation, how many parents would trust this new creation with their kids?

Initial reactions to the announcement on X were overwhelmingly negative.

“Stop,” one user simply wrote. “Bad idea. Children should be outside playing & daydreaming, not consuming AI slop,” another user reacted. A third user commented, “Sounds like a horrible idea that can only go disastrously wrong.”

The timing of the Baby Grok announcement appears to be no coincidence. Grok has been embroiled in a series of controversies. In early July, the chatbot sparked outrage for spouting antisemitic rhetoric and praising Adolf Hitler. A few days later, xAI released a new version, SuperGrok, which included a feature called “Companions.” Users quickly complained that the avatars for these companions were overly sexualized and crossed a line.

On the surface, “Baby Grok” is a logical product extension. But viewed against the backdrop of the controversies that have defined its adult version, the announcement looks less like a simple business expansion and more like a strategic and necessary pivot. This is Musk’s redemption play, his attempt to sanitize a controversial AI by entrusting it with the most sensitive audience of all: children.

The problem for Musk and xAI is that the original Grok, designed to be an edgy, humorous alternative to what he sees as overly “woke” chatbots, has frequently stumbled. It has been criticized for its unpredictable nature, a tendency to generate biased or factually incorrect information, and an “anti-establishment” personality that can veer into inappropriate or conspiratorial territory. For many, Grok is seen not as a reliable source of knowledge but as a digital reflection of its creator’s chaotic online persona: a powerful tool that lacks consistent guardrails.

“Baby Grok” is the proposed solution. By creating a walled garden of “kid-friendly content,” Musk is attempting to prove that his AI venture can be tamed and trusted. The move creates a compelling corporate narrative: after building a flawed and unruly AI for adults, the controversial tech mogul is now apparently turning his attention to protecting children, aiming to build a safe, educational tool that can win over skeptical parents.

A successful “Baby Grok” could rehabilitate the entire Grok brand, demonstrating that xAI can act responsibly. It would also provide an entry point into the immensely lucrative and influential market of children’s education and technology, a space currently dominated by established players with far more family-friendly reputations.

The stakes of this venture are immense. By targeting children, Musk is voluntarily stepping into the most scrutinized arena of AI development. The conversation immediately shifts to pressing concerns about digital safety, data privacy, and the profound influence AI will have on the next generation’s development. Can a company whose ethos is rooted in a maximalist interpretation of free speech truly build the filters and safeguards necessary to protect young minds? Parents will be asking whether the same company that champions unmoderated discourse can be trusted to curate a safe learning environment.

When Google announced last May that it would roll out its AI chatbot Gemini for users under 13, a coalition of consumer advocates and child safety experts, including Fairplay and the Center for Online Safety, asked the company to suspend the decision. They cited the “AI chatbot’s unaddressed, significant risks to young children.”

“AI chatbots and other generative AI products pose increased risks to young children,” the coalition wrote in a letter to Google CEO Sundar Pichai. “Children have difficulty understanding the difference between an AI chatbot and a human, and AI chatbots can easily trick a child into trusting it.”

There are also broader concerns about privacy. xAI has not specified whether “Baby Grok” will collect or retain usage data from child users, or what kind of parental controls will be in place. For a generation of parents already uneasy about screen time and algorithmic influence, the idea of letting “Baby Grok” interact with a child may be a hard sell no matter how sanitized the content.

There is also the question of tone. Musk’s personal brand, often combative, cynical, and steeped in internet irony, seems at odds with the kind of earnest, trustworthy image required for educational children’s tech. If Grok was born as a kind of Reddit troll in chatbot form, can “Baby Grok” convincingly play the role of Big Bird?

This effort puts Musk’s xAI at the center of one of the tech industry’s biggest challenges: making powerful AI technology safe and beneficial for society. “Baby Grok” is more than just an app; it is a public test case for xAI’s commitment to responsibility. A success could redefine the company’s image and build a foundation of trust. A failure, however, would be catastrophic, not only confirming the worst fears about Grok but also damaging the public’s already fragile trust in the role of AI in our daily lives.

Ultimately, the launch of “Baby Grok” is a high-risk, high-reward gamble. It is an attempt to solve a PR problem with a product, betting that a safe haven for kids can make the chaotic world of adult AI seem more manageable. The world will be watching to see if this is the unlikely beginning of a more responsible chapter for Musk’s AI ambitions, or simply another disaster waiting to happen.
