
YouTube's new AI deepfake tracking tool is alarming experts and creators


A YouTube tool that uses creators' biometrics to help them remove AI-generated videos that exploit their likeness also allows Google to train its AI models on that sensitive data, experts told CNBC.

In response to concern from intellectual property experts, YouTube told CNBC that Google has never used creators' biometric data to train AI models and it is reviewing the language used in the tool's sign-up form to avoid confusion. But YouTube told CNBC it will not be changing its underlying policy.

The discrepancy highlights a broader divide inside Alphabet, where Google is aggressively expanding its AI efforts while YouTube works to maintain trust with creators and rightsholders who depend on the platform for their businesses.

YouTube is expanding its "likeness detection" tool, introduced in October, which flags when a creator's face is used without permission in deepfakes — fake videos generated with AI. The feature is rolling out to millions of creators in the YouTube Partner Program as AI-manipulated content becomes more prevalent across social media.

The tool scans videos uploaded across YouTube to identify where a creator's face may have been altered or generated by artificial intelligence. Creators can then decide whether to request the video's removal, but to use the tool, YouTube requires that creators upload a government ID and a biometric video of their face. Biometrics are the measurement of physical characteristics to verify a person's identity.

Experts say that by tying the tool to Google's Privacy Policy, YouTube has left the door open to future misuse of creators' biometrics. The policy states that public content, including biometric information, can be used "to help train Google's AI models and build products and features."

"Likeness detection is a completely optional feature, but does require a visual reference to work," YouTube spokesperson Jack Malon said in a statement to CNBC. "Our approach to that data is not changing. As our Help Center has stated since the launch, the data provided for the likeness detection tool is only used for identity verification purposes and to power this specific safety feature."

YouTube told CNBC it is "considering ways to make the in-product language clearer." The company has not said what specific wording changes will be made or when they will take effect.

Experts remain cautious, saying they raised concerns about the policy to YouTube months ago.

"As Google races to compete in AI and training data becomes strategic gold, creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves," said Dan Neely, CEO of Vermillio, which helps individuals protect their likeness from being misused and also facilitates secure licensing of authorized content. "Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back."
