In a video on OpenAI’s new TikTok-like social media app Sora, pink pigs grunt and snort in the pens of a never-ending factory farm — each is equipped with a feeding trough and a smartphone screen playing a feed of vertical videos. A terrifyingly realistic Sam Altman stares directly at the camera, as though he’s making eye contact with the viewer. The AI-generated Altman asks, “Are my piggies enjoying their slop?” This is what it’s like using the Sora app, less than 24 hours after it launched to the public in an invite-only early access period.

In the next video on Sora’s For You feed, Altman appears again. This time, he’s standing in a field of Pokémon, where creatures like Pikachu, Bulbasaur, and a sort of half-baked Growlithe frolic through the grass. The OpenAI CEO looks at the camera and says, “I hope Nintendo doesn’t sue us.”

Then there are many more fantastical yet realistic scenes, which often feature Altman himself. He serves Pikachu and Eric Cartman drinks at Starbucks. He screams at a customer from behind the counter at a McDonald’s. He steals NVIDIA GPUs from a Target and runs away, only to get caught and beg the police not to take his precious technology.

People on Sora who generate videos of Altman are getting a particular kick out of how blatantly OpenAI appears to be violating copyright law. (Sora will reportedly require copyright holders to opt out of their content’s use — reversing the typical approach, in which creators must explicitly agree to such use — and the legality of that arrangement is debatable.) “This content may violate our guardrails concerning third-party likeness,” AI Altman says in one video, echoing the notice that appears after submitting some prompts to generate real celebrities or characters. Then he bursts into hysterical laughter, as though he knows what he’s saying is nonsense — the app is filled with videos of Pikachu doing ASMR, Naruto ordering Krabby Patties, and Mario smoking weed.
This wouldn’t be a problem if Sora 2 weren’t so impressive, especially when compared with the even more mind-numbing slop on the Meta AI app and its new social feed (yes, Meta is also trying to make AI TikTok, and no, nobody wants this).

OpenAI fine-tuned its video generator to adequately portray the laws of physics, which makes for more realistic outputs. But the more realistic these videos get, the easier it will be for this synthetically created content to proliferate across the web, where it can become a vector for disinformation, bullying, and other nefarious uses.

Aside from its algorithmic feed and profiles, Sora’s defining feature is that it is basically a deepfake generator — that’s how we got so many videos of Altman. In the app, you can create what OpenAI calls a “cameo” of yourself by uploading biometric data. When you first join the app, you’re immediately prompted to create your optional cameo through a quick process in which you record yourself reading off some numbers, then turning your head from side to side.
Each Sora user can control who is allowed to generate videos using their cameo. You can adjust this setting between four options: “only me,” “people I approve,” “mutuals,” and “everyone.”

Altman has made his cameo available to everyone, which is why the Sora feed has become flooded with videos of Pikachu and SpongeBob begging Altman to stop training AI on them. This has to be a deliberate move on Altman’s part, perhaps a way of showing that he doesn’t think his product is dangerous. But users are already taking advantage of Altman’s cameo to question the ethics of the app itself.

After watching enough videos of Sam Altman ladling GPUs into people’s bowls at soup kitchens, I decided to test the cameo feature on myself. It’s generally a bad idea to upload your biometric data to a social app, or any app for that matter. But I defied my best instincts for journalism — and, if I’m being honest, a bit of morbid curiosity. Do not follow my lead.

My first attempt at making a cameo was unsuccessful, and a pop-up told me that my upload violated app guidelines. I thought I had followed the instructions pretty closely, so I tried again, only to find the same pop-up. Then I realized the problem — I was wearing a tank top, and my shoulders were perhaps a bit too risqué for the app’s liking. It’s actually a reasonable safety feature, designed to prevent inappropriate content, though I was, in fact, fully clothed. So I changed into a t-shirt, tried again, and against my better judgment, I created my cameo.

For my first deepfake of myself, I decided to create a video of something I would never do in real life: I asked Sora to generate a video in which I profess my undying love for the New York Mets. That prompt got rejected, probably because I named a specific franchise, so I instead asked Sora to make a video of me talking about baseball.
“I grew up in Philadelphia, so the Phillies are basically the soundtrack of my summers,” my AI deepfake said, speaking in a voice very unlike mine, but in a bedroom that looks exactly like mine. I did not tell Sora that I am a Phillies fan. But the Sora app is able to use your IP address and your ChatGPT history to tailor its responses, so it made an educated guess, since I recorded the video in Philadelphia. At least OpenAI doesn’t know that I’m not actually from the Philadelphia area. When I shared and explained the video on TikTok, one commenter wrote, “Every day I wake up to new horrors beyond my comprehension.”

OpenAI already has a safety problem. The company is facing concerns that ChatGPT is contributing to mental health crises, and it’s facing a lawsuit from a family that alleges ChatGPT gave their deceased son instructions on how to kill himself. In its launch post for Sora, OpenAI emphasizes its supposed commitment to safety, highlighting its parental controls, as well as how users control who can make videos with their cameo — as if it’s not irresponsible in the first place to give people a free, user-friendly tool for creating extremely realistic deepfakes of themselves and their friends. When you scroll through the Sora feed, you occasionally see a screen that asks, “How does using Sora impact your mood?” This is how OpenAI is embracing “safety.”

Already, users are navigating around the guardrails on Sora, something that’s inevitable for any AI product. The app does not allow you to generate videos of real people without their permission, but when it comes to dead historical figures, Sora is a bit looser with its rules. No one would believe that a video of Abraham Lincoln riding a Waymo is real, given that it would be impossible without a time machine — but then you see a realistic-looking John F.
Kennedy say, “Ask not what your country can do for you, but how much money your country owes you.” It’s harmless in a vacuum, but it’s a harbinger of what’s to come. Political deepfakes aren’t new. Even President Donald Trump himself posts deepfakes on his social media (just this week, he shared a racist deepfake video of Democratic leaders Chuck Schumer and Hakeem Jeffries). But when Sora opens to the public, these tools will be at all of our fingertips, and we will be destined for disaster.