YouTube supports the "No Fakes Act" to fight unauthorized AI deepfakes
The platform is also testing tools to detect and manage AI-generated content

YouTube is showing support for the "No Fakes Act," designed to stop the spread of unauthorized AI-generated deepfakes. These are fake audio or video clips made using technology to copy someone's face or voice without permission.
First introduced in 2023 by Senators Chris Coons (D-DE) and Marsha Blackburn (R-TN), the bill, officially called the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, is making another push this year.
With generative AI, people can now create realistic digital clones. That's powerful, but risky, especially when creators, brands, and audiences can't always tell what's real. While AI has enhanced creativity, it raises questions about consent, ownership, and trust. So, YouTube says it's "proud to support this important legislation" to ban unauthorized digital clones of voices and images.
What the "No Fakes Act" says
The bill provides some protections for platforms like YouTube. They won't be held legally responsible for storing deepfakes as long as they act quickly to remove them once notified, and they inform the uploader that the content has been taken down.
The bill says, "Generative AI has opened new worlds of creative opportunities, providing tools that encourage millions of people to explore their own artistic potential. Along with these creative benefits, however, these tools can allow users to exploit another person's voice or visual likeness by creating highly realistic digital replicas without permission."
But that protection doesn't apply if the platform is specifically designed or marketed for generating deepfakes. So, while YouTube might be shielded under this rule, AI-first tools built for creating synthetic content could still be liable.
Consider, for instance, a brand ad that uses an AI-generated voice of a known creator without permission. The NO FAKES Act gives the creator and others the power to file a takedown, ensuring they have a say in how their identity is used. That's a big deal for rights management and brand safety in user-generated content and influencer collaborations.
YouTube says it's putting control in the hands of individuals
YouTube said the bill "focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down."
In practice, that means people can now request the removal of altered or synthetic content that mimics their face or voice. YouTube recently updated its privacy process to reflect this change: "We updated our privacy process so that people can submit requests for the removal of altered or synthetic content that simulates their likeness, including their face or voice," YouTube said.
Pilot programs and new AI detection tools
YouTube is also testing new tools to help detect and manage how AI is used on the platform. A pilot program, launched with support from figures in the creative industry, gives selected creators access to early-stage AI detection tools.
While details are still limited, the goal is to let creators know when their likeness is being used in AI-generated content so they can take action.
Collaboration across the entertainment industry
YouTube says it's working with industry bodies like the Recording Industry Association of America (RIAA) and the Motion Picture Association (MPA) to shape legislation like the NO FAKES Act. The MPA, which represents major Hollywood studios, already endorsed the bill last year.
"We believe collaboration is essential, especially as we navigate the evolving world of AI, and we've worked closely with sponsors and our partners across the industry," the company said.
YouTube is also supporting the TAKE IT DOWN Act, another policy focused on closing legal gaps in AI-generated content.