YouTube Adding Tools to Protect Against Unauthorized AI Use

YouTube is introducing AI detection tools designed to let people know when their face or voice has been copied and used in third-party videos. As part of the effort, YouTube’s existing Content ID program, which protects copyrighted music, will expand to include broader voice-simulation detection technology. The new tools aim to protect “people from a variety of industries — from creators and actors to musicians and athletes,” according to the company. The Google-owned platform is also developing a way to address unauthorized use of its content for training AI models.

“Together with our recent privacy updates, this will create a robust set of tools to manage how AI is used to depict people on YouTube,” the company said in a blog post that alludes to monetizing AI’s creative appropriations.

“At YouTube, we’re committed to ensuring our creators and partners thrive in this evolving landscape,” VP of Creator Products Amjad Hanif wrote in the post. “This means equipping them with the tools they need to harness AI’s creative potential while maintaining control over how their likeness, including their face and voice, is represented.”

“Starting early next year, YouTube will begin to test the synthetic-singing identification technology with its partners,” according to TechCrunch.

“The platform is developing separate tools for identifying content simulating performers’ voices and facial deepfakes of creators,” writes The Verge.

As for dealing with the unauthorized use of YouTube videos for model training, “this has been an issue for some time, leading creators to complain that companies like Apple, Nvidia, Anthropic, OpenAI and Google, among others, have trained on their material without their consent or compensation,” TechCrunch reports. “YouTube hasn’t yet revealed its plan to help protect creators (or generate additional revenue of its own from AI training), only that it has something in the works.”

For the moment, YouTube is largely focused on finding a way to compensate artists for work used to create AI music, something it has been discussing since August 2023, when it unveiled its Music AI Incubator in conjunction with Universal Music Group.

At the time, YouTube CEO Neal Mohan wrote of unlocking AI “opportunities” for artists. The company today says its Content ID system “currently processes billions of claims per year, and generates billions in revenue for creators and artists,” per TechCrunch.

SiliconANGLE points out that other platforms are making efforts at transparency, with Google “working on ways to watermark and detect AI-generated images using Google DeepMind’s SynthID.”

Meta Platforms “labels AI-generated content uploaded to its social media networks using open-source technology classifiers” based on standards from C2PA and others, though IEEE Spectrum dismissed its program as “flimsy.”

Whether steps by TikTok, which began flagging AI-generated content in May, are any better remains to be seen. SiliconANGLE says the ByteDance company’s AI tools claim to “automatically flag content, but users are expected to add labels themselves if it’s AI-generated.”
