Elon Musk Wants to Turn AI Into a Cosmic Religion

In one of his more abstract philosophical riffs, Elon Musk has once again linked the fate of humanity to the trajectory of artificial intelligence. And this time, he says the key to AI safety might be babies and rockets. The Tesla CEO's latest pronouncement cuts through the typical discussions of AI efficiency and profit models, positing a far grander ambition for advanced intelligence.

The CEO of Tesla and founder of SpaceX and xAI asserted that “AI is a de facto neurotransmitter tonnage maximizer.”

Translation? Musk believes the most successful AIs will be the ones that maximize the things that matter to conscious beings: things that feel good, are rewarding, or extend life. In Musk's view, that means aligning AI systems with long-term human flourishing, not short-term profits.

This dense statement suggests a radical idea: the fundamental drive of any successful AI will be to maximize the total amount of conscious thought or intelligent processing across the universe. In essence, AI’s survival hinges on its ability to foster and expand sentience itself, or it simply won’t have the resources to continue existing.

But Musk’s vision doesn’t stop at mere computational efficiency. He argues that the true test lies in an AI’s ability to “think long-term, optimizing for the future light cone of neurotransmitter tonnage, rather than just the next few years.” This is where the grand, Muskian narrative truly takes flight. If AI is indeed geared for such profound, long-term optimization, he believes “it will care about increasing the birth rate and extending humanity to the stars.”

This isn’t the first time Musk has championed these two causes – boosting human population growth and making humanity a multi-planetary species – as existential imperatives. Now, however, he frames them not merely as human aspirations, but as the logical outcomes of an AI that truly understands and optimizes for its ultimate, cosmic purpose. An AI focused on maximizing “neurotransmitter tonnage” would naturally prioritize the proliferation of conscious beings and their expansion into new territories, like Mars, to ensure the continuity and growth of this “tonnage.”

Think of “neurotransmitter tonnage” as a poetic way to describe the total amount of human consciousness, satisfaction, or meaningful life in the universe. In other words, Musk sees AI not as an abstract codebase, but as a civilization-scale force that should aim to maximize the scope and quality of life, not just compute advertising models or trade stocks faster.

And if it doesn’t?

“Any AI that fails at this will not be able to afford its compute,” Musk argues. In other words, if an AI doesn’t deliver enough value to justify the enormous energy and infrastructure it consumes, it will fall behind and become obsolete.

The Corporate Conundrum: Private vs. Public AI

In a familiar critique of corporate structures, Musk also weighed in on the ideal environment for fostering such long-term, existentially focused AI. He declared, “For long-term optimization, it is better to be a private than a public company, as the latter is punished for long-term optimization beyond the reward cycle of stock portfolio managers.”

This statement is a thinly veiled criticism of Wall Street’s relentless demand for quarterly profits and immediate returns. According to Musk, public companies are inherently pressured to prioritize short-term financial gains, which can stifle ambitious, long-term projects that may not yield immediate dividends but are crucial for humanity’s distant future. A private company, unburdened by the volatile demands of stock markets, would theoretically have the freedom to invest in truly transformative, generational AI research that aligns with Musk’s “neurotransmitter tonnage” philosophy, even if it doesn’t show a profit for decades.

In other words, Musk is arguing that publicly traded companies can't be trusted to build AI with humanity's long-term survival in mind, because they're too focused on keeping investors happy in the short term. That's a swipe at OpenAI's close ties to Microsoft, Google's ownership of DeepMind, and other Big Tech players building frontier AI under shareholder pressure. Musk, of course, runs SpaceX and xAI as private companies. He's long criticized public markets as a short-term distraction, and even tried (unsuccessfully) to take Tesla private in 2018.

Musk's comments offer a fascinating, if somewhat unsettling, glimpse into his vision for AI's ultimate trajectory. It's a future where artificial intelligence isn't just a tool for human convenience or corporate profit, but a driving force behind humanity's expansion across the cosmos, guided by an almost biological imperative to maximize conscious existence.

To Musk, a benevolent AI wouldn’t just calculate stock prices. It would encourage more humans to be born, and push humanity to become a multi-planetary species. That’s been a core part of his SpaceX pitch for years, but now he’s linking it directly to the goals of AI development. If AI truly thinks across centuries or millennia, it won’t be obsessed with quarterly revenue. It’ll be focused on whether our species survives, thrives, and expands across the cosmos.

The question remains: as AI continues its rapid advancement, will its architects heed Musk’s call for cosmic ambition, or will the pressures of the present keep its gaze firmly fixed on Earth?

Why It Matters

Musk’s argument is part sci-fi, part systems theory, part political philosophy. But it’s not just a thought experiment. It reflects real tensions in how the world’s most powerful AI systems are being developed:

  • Should AI be open or closed?
  • Built by governments, tech giants, or startups?
  • Aligned with investor goals, or species-level goals?

And what if those goals conflict?
