YouTube Expands AI Likeness Detection Tools to Protect Journalists and Public Figures

YouTube is expanding its artificial intelligence (AI) safety tools to protect a wider group of users, including journalists and civic leaders, as concerns grow over AI-generated impersonation and deepfakes online.

The new feature, known as likeness detection, will help individuals identify and flag videos that appear to use their face or identity without permission.

The expansion marks one of the platform’s most significant steps yet in addressing the risks posed by rapidly advancing generative AI technologies.

Protecting the Integrity of Public Conversation

According to Leslie Miller, YouTube’s vice president of public policy, the initiative is aimed at safeguarding public discourse, particularly for individuals whose work places them in the public eye.

“The expansion is really about the integrity of the public conversation,” Miller said in remarks reported by TechCrunch. “We know that the risks of AI impersonation are particularly high for those in the civic space.”

She added that while YouTube is introducing new protections, the company is rolling them out cautiously to limit the potential for misuse of the system itself.

From Creators to Journalists and Civic Voices

Initially developed to help content creators identify unauthorised uses of their likeness, the tool is now being extended to include journalists and other public figures.

Amjad Hanif, YouTube’s vice president of creator products, explained that the company has long anticipated extending the technology beyond the creator community.

“We’ve always known that there was a need for this tech to go beyond just creators,” Hanif said in comments reported by The Hollywood Reporter.

He noted that as the platform learned more about how journalists and civic organisations interact with digital content—particularly during election cycles—the case for broader protection became clearer.

How the Likeness Detection System Works

Eligible users can access the tool through YouTube Studio, where a dedicated “Likeness” tab appears under the platform’s content detection features.

According to Rene Ritchie, YouTube’s creator liaison, the verification process is designed to ensure that only legitimate individuals gain access to the protection system.

To enrol, users must:

  1. Submit a government-issued ID

  2. Record a short video selfie for identity verification

  3. Complete the verification process through YouTube Studio

Once verified, users receive alerts when uploaded content appears to contain AI-generated or manipulated versions of their likeness, which they can then review and flag for removal.

Growing Concerns About AI Impersonation

The rollout comes amid increasing global concerns about deepfake technology, which allows realistic videos or audio to be generated using artificial intelligence.

Experts warn that such tools could be used to:

  1. Spread misinformation

  2. Manipulate public opinion

  3. Target journalists, politicians, and activists

  4. Undermine trust in legitimate media

During election periods in particular, AI-generated impersonations of public figures could have significant consequences for democratic processes.

Part of a Broader AI Safety Push

The expansion of likeness detection is part of YouTube’s broader effort to introduce AI transparency and safety measures across its platform.

In recent years, the company has introduced policies requiring creators to label realistic AI-generated content, especially when it involves public figures or sensitive topics.

These steps reflect a growing push across the technology industry to balance innovation in AI with safeguards against misuse.

A New Layer of Protection

While still evolving, YouTube’s likeness detection system could become a critical tool for journalists and public figures who face increasing risks from AI-driven impersonation.

By combining identity verification with automated detection systems, the platform hopes to give users greater control over how their image and identity appear in online videos.

As AI technology continues to develop, tools like this may play a key role in preserving trust, authenticity, and accountability in digital media.
