All-In Podcast Boys Poke Fun at Uber Founder's 'AI Psychosis' (Which They Encouraged)

Remember when the guys over at the All-In podcast talked with Uber founder Travis Kalanick about “vibe physics”? Kalanick told viewers that he was on the verge of discovering new kinds of science by pushing his AI chatbots into previously undiscovered territory.

It was ridiculous, of course, since that’s not how AI chatbots work, or how science works, for that matter. And Kalanick’s ideas were ridiculed to no end by folks on social media. But the gentlemen of All-In now seem to be distancing themselves from those ideas, even suggesting they could be related to the rise of “AI psychosis,” despite the fact that they were more than happy to entertain the Uber founder’s rambling nonsense when he was on the show.

Kalanick appeared as a guest on the July 11 episode of All-In, explaining very earnestly how he was on the cusp of discovering exciting new things about quantum physics, previously unknown to science.

“I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”

[embed]https://www.youtube.com/watch?v=Z0k-4wyH5vk[/embed]

The reality is that AI chatbots like Grok and ChatGPT are not capable of delivering new discoveries in quantum physics. They spit out sentences by remixing and rehashing their training data, not by forming and testing hypotheses. But All-In co-host Chamath Palihapitiya thought Kalanick was on to something, taking it a step further by insisting that AI chatbots could simply figure out the answer to any problem you posed.

“When these models are fully divorced from having to learn on the known world and instead can just learn synthetically, then everything gets flipped upside down to what is the best hypothesis you have or what is the best question? You could just give it some problem and it would just figure it out,” said Palihapitiya.

This kind of insistence that AI chatbots can solve any problem is central to their marketing, but it also sets up users for failure. Tools like Grok and ChatGPT still struggle with basic tasks like counting the number of U.S. state names that contain the letter R because that’s not what large language models are good at. But that hasn’t stopped folks like OpenAI CEO Sam Altman from making grandiose promises.

Co-host Jason Calacanis was the only one to suggest, during the July 11 episode, that perhaps Kalanick was misreading his own experience. Calacanis asked Kalanick if he was “kind of reading into it and it’s just trying random stuff at the margins.” The Uber founder acknowledged that the chatbot can’t really come up with a new idea, but said that was only because “these things are so wedded to what is known.” Kalanick compared the process to pulling a stubborn donkey, suggesting the bots were indeed capable of new discoveries if you just worked hard enough at them.

You’d expect that to be the last word on the topic, given that the All-In guys like to avoid controversy. They infamously failed to produce an episode of the podcast the week that Elon Musk and President Trump had their public blowup. (The podcast hosts are all friends with Musk, and co-host David Sacks is Trump’s crypto czar.) So listeners of the new episode may have been a bit surprised to hear Kalanick’s weird ideas come up again, let alone to hear the hosts poke fun at him.

The latest episode of All-In, uploaded on Aug. 15, opened with a discussion of so-called “AI psychosis,” a term that hasn’t been defined in the medical literature but has emerged in popular media to describe how people who are struggling with their mental health might see their symptoms exacerbated by engaging too much with AI. Gizmodo reported last week on complaints filed with the FTC about users experiencing delusions that were egged on by ChatGPT. One complaint even described a user who stopped taking his medication in the middle of a delusional breakdown because ChatGPT told him to.

AI psychosis isn’t a clinical term, and it’s hard to determine the precise number of people who are experiencing severe strains on their mental health from the use of AI chatbots. But ChatGPT’s creator, OpenAI, has acknowledged that it’s a problem. And Calacanis opened the show talking about how people can get “one-shotted,” the new slang co-opted from video games and used for people who fall too deep into the AI rabbit hole. They anthropomorphize AI and fail to understand it’s just a computer program, sending themselves into a delusional spiral.

“You may have even witnessed a little bit of this when Travis [Kalanick] was on the program a couple weeks ago and he said he was like spending his time on the fringes or the edges of… physics,” Calacanis said. “It really can take you down the rabbit hole.”

“Are you saying Travis is suffering from AI psychosis?” co-host David Friedberg asked.

“I’m saying we may need to do a health check. We may need to do a health check because smart people can get involved with these AI. So we may have to do a little welfare check on our boy TK,” Calacanis said, seemingly in earnest.

[embed]https://www.youtube.com/watch?v=J-kzYItjhDs[/embed]

Palihapitiya seemed to think the underlying problem with AI psychosis was just a product of the so-called loneliness epidemic, but he ignored his own role in feeding Kalanick’s narrative that AI chatbots were truly capable of new discoveries in science. David Sacks wasn’t having it, insisting that AI psychosis was just a moral panic similar to fears 20 years ago over social media.

“This whole idea of AI psychosis, I think I gotta call bullshit on the whole concept. I mean, what are we talking about here? People doing too much research?” Sacks said, trying to downplay the news reports. “This feels like the moral panic that was created over social media, but updated for AI.”

Sacks admitted there was a mental health crisis in the U.S., but didn’t believe it was AI’s fault. And there’s probably some truth to what he’s saying. Every new technology brings some form of social upheaval and worries about what it might mean for the future. But there’s also no denying that people have become lonelier and more isolated since the advent of social media, even if that isn’t all social media’s fault. Revolutionary technologies will inevitably have both positive and negative impacts on society.

The question is always whether the positives outweigh the negatives. And the jury is arguably still out on both social media and AI chatbots.
