YouTube, a subsidiary of Alphabet Inc.’s (NASDAQ:GOOG) (NASDAQ:GOOGL) Google, is rolling out a pilot program allowing politicians, government officials and journalists to detect and request the removal of AI-generated deepfake videos that simulate their likeness.
YouTube Expands AI Likeness Detection To Protect Public Figures Online
On Tuesday, YouTube announced that its likeness detection technology, first launched for creators in 2025, will now be available to a pilot group of public figures, according to its blog post.
Eligible participants can verify their identity with a selfie and government ID, view detected matches, and request removal if the content violates YouTube policies.
"This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's vice president of Government Affairs and Public Policy.
She added, "The risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we're also being careful about how we use it."
YouTube will review removal requests under its existing privacy guidelines, allowing videos that are protected forms of expression, such as parody or political critique.
AI-generated videos will be labeled, though placement varies depending on content sensitivity.
Amjad Hanif, YouTube's vice president of Creator Products, explained that labels are applied based on whether the AI aspect materially affects the content.
The company is also advocating for federal legislation, supporting the NO FAKES Act, which seeks to regulate unauthorized AI recreations of individuals' voices and likenesses.
YouTube Targets AI Spam And Deepfakes
Enrollment in the pilot requires a video selfie and a government ID, according to the company's blog post.
The company emphasized that "detection does not guarantee removal," noting it will continue to protect content like parody and satire while carefully evaluating removal requests.
Earlier, the platform had removed more than a dozen popular AI-generated spam channels, including ones featuring videos of talking cats and religious figures, some drawing millions of views.
CEO Neal Mohan said curbing low-quality AI content was a top priority, while a spokesperson reaffirmed that YouTube does not allow "spam, scams, or other deceptive practices" on its platform.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
This article YouTube Expands AI Deepfake Detection Tool To Politicians, Journalists originally appeared on Benzinga.com.