YouTube is showing support for the “No Fakes Act,” designed to stop the spread of unauthorized AI-generated deepfakes — fake audio or video made with technology that copies someone’s face or voice without permission.

First introduced in 2023 by Senators Chris Coons (D-DE) and Marsha Blackburn (R-TN), the bill, officially called the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, is now making another push this year.

With generative AI, people can now create realistic digital clones. That’s powerful, but risky, especially when creators, brands, and audiences can’t always tell what’s real. While AI has enhanced creativity, it raises questions about consent, ownership, and trust. So, YouTube says it’s “proud to support this important legislation” to ban unauthorized digital clones of voices and images.

What the “No Fakes Act” says

The bill provides some protections for platforms like YouTube. They won’t be held legally responsible for storing deepfakes as long as they act quickly to remove them once notified, and they inform the uploader that the content has been taken down.

The bill says, “Generative AI has opened new worlds of creative opportunities, providing tools that encourage millions of people to explore their own artistic potential. Along with these creative benefits, however, these tools can allow users to exploit another person’s voice or visual likeness by creating highly realistic digital replicas without permission.”

But that protection doesn’t apply if the platform is specifically designed or marketed for generating deepfakes. So, while YouTube might be shielded under this rule, AI-first tools built for creating synthetic content could still be liable.

Consider, for instance, a brand ad that uses an AI-generated voice of a known creator without permission. The NO FAKES Act gives the creator and others the power to file a takedown, ensuring they have a say in how their identity is used. That’s a big deal for rights management and brand safety in user-generated content and influencer collaborations.

YouTube says it’s putting control in the hands of individuals

YouTube said the bill “focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down.”

In practice, that means people can now request the removal of altered or synthetic content that mimics their face or voice. YouTube recently updated its privacy process to reflect this change. “We updated our privacy process so that people can submit requests for the removal of altered or synthetic content that simulates their likeness, including their face or voice,” YouTube said.

Pilot programs and new AI detection tools

YouTube is also testing new tools to help detect and manage how AI is used on the platform. A pilot program, launched with support from figures in the creative industry, gives selected creators access to early-stage AI detection tools.

While details are still limited, the goal is to let creators know when their likeness is being used in AI-generated content so they can take action. 

Collaboration across the entertainment industry

YouTube says it’s working with industry bodies like the Recording Industry Association of America (RIAA) and the Motion Picture Association (MPA) to shape legislation like the NO FAKES Act. The MPA, which represents major Hollywood studios, already endorsed the bill last year.

“We believe collaboration is essential, especially as we navigate the evolving world of AI, and we've worked closely with sponsors and our partners across the industry,” the company said.

YouTube is also supporting the TAKE IT DOWN Act, another policy focused on closing legal gaps in AI-generated content. 
