Public officials and journalists will soon be able to keep track of AI-generated deepfakes of themselves on YouTube through the platform’s likeness detection feature.
The tool is already available to millions of content creators on YouTube, but beginning Tuesday, it will expand to a pilot group of journalists, government officials, and political candidates. (At a briefing with reporters, YouTube declined to share who was in the pilot group, including whether Donald Trump is part of it.) Likeness detection is similar to Content ID, which scans YouTube for copyrighted material — except likeness detection looks for people’s faces. When there are matches, an individual in the program can request that YouTube remove the content, though the company says not every request will be approved. Removals are based on YouTube’s privacy policy, which includes carve-outs for content like parody and satire.
“YouTube has a long history of protecting free expression, and that includes parody, satire, and political critique. If a video of a world leader is clear parody, it’s likely to stay up,” said Leslie Miller, YouTube’s vice president of government affairs and public policy. “We evaluate every removal request under our longstanding privacy guidelines to ensure we’re not stifling the very civic discourse we’re trying to protect.”
To join the program, individuals will be required to submit a video of themselves and a government ID. YouTube says this data will only be used for the likeness detection feature, and that individuals can withdraw from the program and request YouTube remove the data.
Amjad Hanif, YouTube’s vice president of creator products, said that the amount of content creators have asked YouTube to remove under the policy so far is “actually very small.”
“They may see lots of matches, and I think for a lot of them, it’s just been the awareness of what’s been created, but the volume of actual removal requests is really, really low because most of it turns out to be fairly benign or additive to their overall business,” Hanif said. Politicians, of course, may not see it the same way — but Hanif did hint at the possibility of allowing monetization on AI-generated deepfakes in the future.
“You may find that folks in the industry want to allow that, and that’s something that we’re investing in and we have a long history and experience in,” he said.