If you're even a little bit online, the odds are you've seen an image or video that was AI-generated. I know I've been fooled before, like I was by that viral video of bunnies on a trampoline. But Sora is taking AI videos to a whole new level, making it more important than ever to know how to spot AI.

Sora is the sister app of ChatGPT, made by the same parent company, OpenAI. It's named after OpenAI's AI video generator, which launched in 2024 and recently got a major overhaul with the new Sora 2 model, along with a brand-new social media app by the same name. The TikTok-like app went viral, with AI enthusiasts determined to hunt down invite codes. But it isn't like any other social media platform: everything you see on Sora is fake, because every video is AI-generated. Using Sora is an AI deepfake fever dream, innocuous at first glance, with dangerous risks lurking just beneath the surface.

From a technical standpoint, Sora videos are impressive compared to competitors such as Midjourney's V1 and Google's Veo 3, with high resolution, synchronized audio and surprising creativity. Sora's most popular feature, dubbed "cameo," lets you take other people's likenesses and insert them into nearly any AI-generated scene. It's an impressive tool, and it produces scarily realistic videos.

That's why so many experts are concerned about Sora: it could make it easier than ever for anyone to create deepfakes, spread misinformation and blur the line between what's real and what's not. Public figures and celebrities are especially vulnerable to these potentially dangerous deepfakes, which is why unions like SAG-AFTRA pushed OpenAI to strengthen its guardrails.

Identifying AI content is an ongoing challenge for tech companies, social media platforms and all of us who use them. But it's not totally hopeless. Here are some things to look for to determine whether a video was made using Sora.

Look for the Sora watermark

Every video made on the Sora iOS app includes a watermark when you download it. It's the white Sora logo, a cloud icon, that bounces around the edges of the video, similar to the way TikTok videos are watermarked.

Watermarking is one of the most visible ways AI companies can help us spot AI-generated content. Google's Gemini "nano banana" model, for example, automatically watermarks its images. Watermarks are useful because they serve as a clear sign that the content was made with the help of AI.

But watermarks aren't perfect. If a watermark is static (not moving), it can easily be cropped out. Even moving watermarks like Sora's can be stripped by apps designed specifically to remove them, so watermarks alone can't be fully trusted. When OpenAI CEO Sam Altman was asked about this, he said society will have to adapt to a world where anyone can create fake videos of anyone. Of course, before Sora, there wasn't a popular, easily accessible, no-skill-needed way to make those videos. Still, his argument raises a valid point: we need other methods of verifying authenticity.

Check the metadata

I know, you're probably thinking there's no way you're going to check a video's metadata to figure out whether it's real. I understand where you're coming from; it's an extra step, and you might not know where to start. But it's a great way to determine whether a video was made with Sora, and it's easier than you think.

Metadata is a collection of information automatically attached to a piece of content when it's created, and it gives you more insight into how an image or video was made. It can include the type of camera used to take a photo, the location, the date and time a video was captured, and the filename. Every photo and video has metadata, whether it was created by a human or an AI, and a lot of AI-created content also carries content credentials that denote its AI origins.
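If you're comfortable with a command line, you can peek at a video's basic metadata yourself. As a minimal sketch, the snippet below shells out to ffprobe, the inspection tool that ships with FFmpeg, and prints a file's container tags, where creation times and encoder strings typically live. The filename sora-clip.mp4 is a hypothetical stand-in for whatever video you want to inspect.

```python
import json
import subprocess

def dump_metadata(path: str) -> dict:
    """Ask ffprobe (bundled with FFmpeg) for a file's metadata as JSON."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

info = dump_metadata("sora-clip.mp4")  # hypothetical local file
# Container-level tags often hold creation_time and encoder details.
print(json.dumps(info["format"].get("tags", {}), indent=2))
```

Ordinary tags like these are easy to strip or forge, though. The stronger signal is the C2PA content credentials described next.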
OpenAI is part of the Coalition for Content Provenance and Authenticity (C2PA), which means Sora videos include C2PA metadata. You can use the Content Authenticity Initiative's verification tool to check a video, image or document's metadata. (The Content Authenticity Initiative is part of C2PA.) Here's how.

How to check a photo, video or document's metadata:

1. Navigate to this URL: https://verify.contentauthenticity.org/
2. Upload the file you want to check.
3. Click Open.
4. Check the information in the right-side panel. If the file is AI-generated, the content summary section should say so.

When you run a Sora video through this tool, it will say the video was "issued by OpenAI" and note that it's AI-generated. All Sora videos should contain these credentials, which let you confirm that a video was created with Sora.

This tool, like all AI detectors, isn't perfect, and there are plenty of ways AI videos can avoid detection. Non-Sora videos may not carry the signals in their metadata that the tool needs to determine whether they're AI-created; AI videos made with Midjourney, for example, don't get flagged, as I confirmed in my testing. And even a genuine Sora video that's been run through a third-party app (like a watermark remover) and redownloaded is less likely to be flagged as AI.

(Screenshot: The Content Authenticity Initiative's verify tool correctly flagged that a video I made with Sora was AI-generated, along with the date and time I created it. Katelyn Chedraoui/CNET)
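There's also a way to check those credentials locally. The Content Authenticity Initiative publishes an open-source command-line tool, c2patool, that prints a file's C2PA manifest as JSON. The sketch below is a hedged example of using it: it assumes c2patool is installed, and it simply scans the manifest for the IPTC "trainedAlgorithmicMedia" digital source type that C2PA manifests commonly use to mark generative-AI content. The exact manifest layout varies by tool and version, so treat this as a starting point rather than a definitive detector.

```python
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Read a file's Content Credentials with c2patool, if any exist."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest, or an unsupported file type
    return json.loads(result.stdout)

def looks_ai_generated(manifest: dict) -> bool:
    # Hedged heuristic: C2PA manifests typically mark generative content
    # with the IPTC digital source type "trainedAlgorithmicMedia".
    return "trainedAlgorithmicMedia" in json.dumps(manifest)

manifest = read_c2pa_manifest("sora-clip.mp4")  # hypothetical local file
if manifest is None:
    print("No Content Credentials found (which doesn't prove it's real).")
elif looks_ai_generated(manifest):
    print("Content Credentials indicate this file is AI-generated.")
else:
    print("Credentials are present, but no generative-AI marker was found.")
```

As with the web tool, the absence of credentials proves nothing; re-encoding a video or passing it through a third-party app can strip the manifest entirely.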
Look for other AI labels and include your own

If you're on one of Meta's social media platforms, like Instagram or Facebook, you may get a little help determining whether something is AI. Meta has internal systems in place to flag AI content and label it as such. These systems aren't perfect, but you can clearly see the label on posts that have been flagged. TikTok and YouTube have similar policies for labeling AI content.

The only truly reliable way to know whether something is AI-generated is if the creator discloses it. Many social media platforms now offer settings that let users label their posts as AI-generated, and even a simple credit or disclosure in your caption can go a long way toward helping everyone understand how something was created. While you're scrolling Sora, you know nothing is real. But once you leave the app and share AI-generated videos, it's our collective responsibility to disclose how a video was created. As AI models like Sora continue to blur the line between reality and AI, it's up to all of us to make it as clear as possible whether something is real or AI.

Most importantly, remain vigilant

There's no single foolproof way to tell at a glance whether a video is real or AI. The best thing you can do to avoid being duped is to not automatically, unquestioningly believe everything you see online. Follow your gut instinct: if something feels unreal, it probably is.

In these unprecedented, AI-slop-filled times, your best defense is to inspect the videos you watch more closely. Don't just glance and scroll away without thinking. Check for mangled text, disappearing objects and physics-defying motion. And don't beat yourself up if you get fooled occasionally; even experts get it wrong.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)