
The next legal frontier is your face and AI


Adi Robertson is a senior tech and policy editor focused on online platforms and free expression. Adi has covered virtual and augmented reality, the history of computing, and more for The Verge since 2011.

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the legal morass of AI, follow Adi Robertson. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

How it started

The song was called “Heart on My Sleeve,” and if you didn’t know better, you might guess you were hearing Drake. If you did know better, you were hearing the starting bell of a new legal and cultural battle: the fight over how AI services should be able to use people’s faces and voices, and how platforms should respond.

Back in 2023, the AI-generated faux-Drake track “Heart on My Sleeve” was a novelty; even so, the problems it presented were clear. The song’s close imitation of a major artist rattled musicians, and streaming services removed it on a copyright technicality. But the creator hadn’t made a direct copy of anything, just a very close imitation. So attention quickly turned to the separate area of likeness law, a field once synonymous with celebrities going after unauthorized endorsements and parodies. As audio and video deepfakes proliferated, it felt like one of the few tools available to regulate them.

Unlike copyright, which is governed by the Digital Millennium Copyright Act and multiple international treaties, likeness isn’t covered by any federal law. It’s a patchwork of varying state laws, none of which were originally designed with AI in mind. But the past few years have seen a flurry of efforts to change that. In 2024, Tennessee Gov. Bill Lee and California Gov. Gavin Newsom, whose states both rely heavily on their media industries, signed bills that expanded protections against unauthorized replicas of entertainers.

But law has predictably moved more slowly than tech. Last month, OpenAI launched Sora, an AI video generation platform aimed specifically at capturing and remixing real people’s likenesses. It opened the floodgates to a torrent of often startlingly realistic deepfakes, including deepfakes of people who never consented to their creation. OpenAI and other companies are responding by implementing their own likeness policies, which, in the absence of anything else, could become the internet’s new rules of the road.

How it’s going

OpenAI has denied that it was reckless in launching Sora, with CEO Sam Altman claiming that, if anything, the service was “way too restrictive” with guardrails. Yet Sora has still generated plenty of complaints. It launched with minimal restrictions on the likenesses of historical figures, only to reverse course after Martin Luther King Jr.’s estate complained about “disrespectful depictions” of the assassinated civil rights leader spewing racism or committing crimes. It touted careful restrictions on unauthorized use of living people’s likenesses, but users found ways around them, putting celebrities like Bryan Cranston into Sora videos doing things like taking a selfie with Michael Jackson. The resulting complaints from SAG-AFTRA pushed OpenAI to strengthen those guardrails in unspecified ways as well.

Even some people who did authorize Sora cameos (the service’s term for a video using a person’s likeness) were unsettled by the results, which for women included all kinds of fetish output. Altman said he hadn’t realized people might have “in-between” feelings about authorized likenesses, like not wanting a public cameo “to say offensive things or things that they find deeply problematic.”
