Remember when the guys over at the All-In podcast talked with Uber founder Travis Kalanick about “vibe physics”? Kalanick told viewers that he was on the verge of discovering new kinds of science by pushing his AI chatbots into previously undiscovered territory.
It was ridiculous, of course, since that’s not how AI chatbots, or science, work. And Kalanick’s ideas were ridiculed no end by folks on social media. But the gentlemen of All-In now seem to be distancing themselves from those ideas, even suggesting they could be related to the rise of “AI psychosis,” though they were more than happy to entertain the Uber founder’s rambling nonsense when he was on the show.
Kalanick appeared as a guest on the July 11 episode of All-In, explaining very earnestly how he was on the cusp of discovering things about quantum physics previously unknown to science.
“I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”
The reality is that AI chatbots like Grok and ChatGPT can’t deliver new discoveries in quantum physics; the task is simply beyond what the technology does. Large language models spit out sentences by remixing and rehashing patterns in their training data, not by forming and testing hypotheses about the world. But All-In co-host Chamath Palihapitiya thought Kalanick was on to something, taking it a step further by insisting that AI chatbots could figure out the answer to any problem you posed.
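To see why, it helps to look at the mechanism in miniature. The sketch below is a toy Python bigram model, many orders of magnitude simpler than anything behind ChatGPT or Grok, but it illustrates the same basic move: predict the next token from statistics of the training text, which means the output can only ever recombine what went in.

```python
import random
from collections import defaultdict

# Toy "language model": a bigram table built from a tiny training corpus.
# Real LLMs are enormously more sophisticated, but the core move is the
# same: predict the next token from patterns in the training data.
corpus = (
    "quantum particles can be entangled . "
    "entangled particles share correlated states . "
    "particles can be measured ."
).split()

# Count which word follows which in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, max_tokens=10):
    """Walk the bigram table, sampling a previously seen continuation."""
    word, output = start, [start]
    for _ in range(max_tokens):
        if word not in next_words:
            break
        word = random.choice(next_words[word])  # remix, don't reason
        output.append(word)
    return " ".join(output)

print(generate("quantum"))
# e.g. "quantum particles can be measured ." -- every transition comes
# straight from the corpus; nothing here tests a hypothesis about nature.
```

However fluent the output, the model never steps outside its training distribution to check anything against reality, which is the part of physics that actually produces discoveries.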
“When these models are fully divorced from having to learn on the known world and instead can just learn synthetically, then everything gets flipped upside down to what is the best hypothesis you have or what is the best question? You could just give it some problem and it would just figure it out,” said Palihapitiya.
This kind of insistence that AI chatbots can solve any problem is central to their marketing, but it also sets users up for failure. Tools like Grok and ChatGPT still struggle with basic tasks like counting the number of U.S. state names that contain the letter R, because large language models process text as statistical patterns of tokens rather than as individual letters. But that hasn’t stopped folks like OpenAI CEO Sam Altman from making grandiose promises.
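Compare that with ordinary code, which inspects actual characters instead of predicting tokens. A few lines of Python settle the state-name question deterministically (the list below is typed out by hand, so treat it as illustrative):

```python
# Deterministic letter-counting: trivial for a program, awkward for an LLM
# that sees "Arkansas" as a token or two rather than a sequence of letters.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

# Case-insensitive check of each name's actual characters.
with_r = [name for name in STATES if "r" in name.lower()]
print(len(with_r), with_r)  # 21 state names contain the letter R
```

The program gives the same answer every time; a chatbot asked the same question may or may not.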
Co-host Jason Calacanis was the only one during the July 11 episode to suggest that perhaps Kalanick was misunderstanding his own experience. Calacanis asked Kalanick if he was “kind of reading into it and it’s just trying random stuff at the margins.” The Uber founder acknowledged that the chatbots can’t really come up with a new idea, but said that was only because “these things are so wedded to what is known.” Kalanick compared the process to pulling a stubborn donkey, suggesting the bots were indeed capable of new discoveries if you just worked hard enough at it.
You’d expect that to be the last word on the topic, given that the All-In guys like to avoid controversy. They infamously failed to produce an episode of the podcast the week that Elon Musk and President Trump had their public blowup. (The podcast hosts are all friends with Musk, and co-host David Sacks is Trump’s crypto czar.) So listeners of the new episode may have been a bit surprised to hear Kalanick’s weird ideas come up again, especially since this time the point was to poke fun at him.
The latest episode of All-In, uploaded on Aug. 15, opened with a discussion of so-called “AI psychosis,” a term that hasn’t been defined in the medical literature but has emerged in popular media to describe how people struggling with their mental health might see their symptoms exacerbated by engaging too much with AI. Gizmodo reported last week on complaints filed with the FTC about users experiencing hallucinations egged on by ChatGPT. One complaint described a user who stopped taking his medication because ChatGPT told him not to while he was experiencing a delusional breakdown.