When OpenAI released the first version of Sora, I was excited. For years, I'd had a short story sitting on my hard drive, something I'd written long ago and always dreamed of bringing to life as a short film. The only problem was that I didn't have the expertise to shoot a movie, and my Blender 3D skills had gone rusty from lack of use. But Sora promised something different. I could upload my sketches, input my script, and generate the film in my mind. The creative barrier had finally been lifted.
But reality was a bit different from the demos. No matter what I tried, my outputs never looked anything like what OpenAI showcased. The scenes were adjacent to what I wanted, not what I actually needed.
It wasn't just Sora. I tested Runway ML, experimented with Veo, and tried to keep my spending reasonable. Every model generated the same kind of thing: something that "looked good" in a superficial way. They excelled at creating cliché scenes, the kind of generic imagery that checks all the technical boxes. But creating something that could fit into a coherent narrative, something with intention and specificity? That was nearly impossible.
When Sora 2 launched, I picked up right where I'd left off. Maybe this time would be different. The videos are more realistic than ever, but the main problem remains unchanged. These tools aren't struggling because they can't generate scenes or dialogue; they certainly can. The issue is that they generate what I've come to call "AI Videos," and that's a distinct category with its own aesthetic fingerprint.
The New Uncanny Valley
Think about how you instantly recognize certain types of content. Suppose I described a video to you right now: fast-paced, someone talking directly to the camera with multiple jump cuts, a ring light's circular reflection in their eyes, their bedroom in the background. You would instantly say "TikTok video." The format is hard to miss these days.
AI-generated videos have developed their own unique look. There's a visual quality that marks them, a subtle wrongness that your brain picks up on even when you can't articulate exactly what's off. It's the new uncanny valley, and I feel an intense revulsion whenever I encounter it. I'm not alone in this reaction either. In my small circle of friends and colleagues, we've all developed the same instinctive aversion.
I'm starting to feel the same revulsion toward YouTube Shorts, even when they're created by real people. The reason: YouTube has been secretly using AI to alter real videos, so authentic content is starting to look artificially generated. You'll notice people's faces look smoothed or sharpened, and it happens without the creator's knowledge or consent. The line between real and AI-generated content is blurring from both directions.
So if these videos trigger such a negative response in many viewers, where can AI-generated content actually thrive? The answer: with spammers, scammers, rage-baiters, and manipulators.
These bad actors are having a field day with AI video tools. A couple of months ago, when I wrote about AI Video Overviews, I speculated that Google might eventually start using AI-generated videos as enhanced search results, synthesizing information from multiple sources into custom video summaries. That remains speculative. But for harmful content? That's not speculation; it's happening right now, at scale.