YouTube is expanding its likeness detection technology, which identifies AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Members of the pilot group will gain access to a tool that detects unauthorized AI-generated content and lets them request its removal if they believe it violates YouTube policy.
The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests.
Similar to YouTube’s existing Content ID system, which detects copyright-protected material in uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. Such tools are sometimes used to spread misinformation and distort viewers’ perception of reality by making deepfaked versions of notable figures, such as politicians or other government officials, appear to say and do things they never did in real life.
With the new pilot program, YouTube aims to balance users’ free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure.
“This expansion is really about the integrity of the public conversation,” said Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy, in a press briefing ahead of Tuesday’s launch. “We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it,” she noted.
Miller explained that not every detected match would be removed on request. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, both of which are protected forms of free expression.
The company noted it’s also advocating for these protections at the federal level through its support of the NO FAKES Act, which would regulate the use of AI to create unauthorized recreations of an individual’s voice and visual likeness.
To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works.
The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.