Artificial Intelligence Regulation: The No Fakes Act Demands Action from Online Platforms
In the rapidly evolving world of technology, the No Fakes Act, a federal bill introduced in 2023, aims to address the legal gaps created by outdated laws that are unable to cover modern issues like deepfakes and AI-generated content. Originally targeted at protecting individuals, particularly celebrities, from unauthorized digital replicas, the Act has expanded significantly in scope and regulatory impact, raising substantial concerns among digital rights advocates and the tech community.
The No Fakes Act now extends its reach to the products and services that create such content, effectively establishing a "federalized image-licensing system" combined with a mandatory censorship and filtering infrastructure that platforms must implement to comply. This includes broad takedown and future upload-blocking mandates on targeted content and related tools, forcing platforms to exercise heavy-handed filtering and monitoring.
The Act requires platforms to promptly remove targeted AI-generated content upon notice and to implement filtering systems that prevent re-uploading. Platforms must also filter, and potentially remove, tools that could be used to create unauthorized replicas. These obligations impose significant operational and legal burdens, risk over-censorship given the bill's broad and vague definitions, and could chill innovation and free expression.
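To illustrate the kind of infrastructure a takedown-plus-staydown mandate implies, here is a minimal sketch of a re-upload filter that blocks files matching previously removed content by digest. The function names and the use of an exact SHA-256 hash are illustrative assumptions, not anything specified in the bill; real systems rely on perceptual fingerprinting and large-scale matching services.

```python
import hashlib

# Hypothetical in-memory registry of digests for content removed after a
# takedown notice; a production system would persist this and use perceptual
# fingerprints rather than exact hashes.
takedown_registry: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def register_takedown(data: bytes) -> None:
    """Record a removed file so future identical uploads can be blocked."""
    takedown_registry.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject uploads whose digest matches previously removed content."""
    return fingerprint(data) not in takedown_registry
```

Exact-hash matching like this is trivially evaded by re-encoding a file, which is one reason critics argue that meaningful compliance pushes platforms toward far more invasive content-scanning systems.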
Digital rights groups such as the Electronic Frontier Foundation (EFF) warn that the Act's sweeping provisions could threaten internet freedom by imposing overly broad controls that extend well beyond the original intent of protecting individuals from harmful deepfakes. The bill would grant rights-holders a kind of "veto power" over innovation and online speech, and would force platforms into surveillance and censorship roles with few safeguards against abuse or error.
The No Fakes Act is part of a larger regulatory push that includes state laws requiring AI-generated content labeling and federal agencies encouraging transparency measures such as watermarking synthetic media. However, the No Fakes Act's mandatory takedown and filtering regime is considerably more aggressive than existing voluntary frameworks or state-level rules focused mainly on labeling or disclosure.
Enforcing the No Fakes Act will be difficult: detection tools are still in their early stages, some AI-generated content carries no visible markers, and uploads arrive at high volume and speed. Meanwhile, cases like the George Carlin estate's settlement with the creators of a podcast that used AI to simulate his voice without consent underscore the legal and public concerns surrounding the use of AI-generated content without permission.
In the face of these challenges, major platforms like Meta (Facebook and Instagram), Spotify, and TikTok are taking steps to address AI concerns. Meta has introduced "Imagined with AI" labels on some images created with generative tools, plans to expand labeling to video and audio, and has committed to watermarking AI-generated content shared on its platforms. Spotify has taken a firm stance against impersonation, removing AI-generated songs that copied the voices of major artists like Drake and The Weeknd and updating its terms of service to prohibit content that mimics real individuals without permission. TikTok has made some progress in labeling AI-generated content using embedded metadata and joining the Coalition for Content Provenance and Authenticity (C2PA), but its labeling is often limited to content created with in-app tools, and moderation teams can't always catch synthetic content in time.
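As a rough illustration of how metadata-based labeling can work, the sketch below scans a file's raw bytes for the IPTC DigitalSourceType value "trainedAlgorithmicMedia", the marker that C2PA-aligned tools commonly embed in synthetic media. The function name and this simple byte-level check are assumptions for illustration, not any platform's actual pipeline.

```python
import sys

# Illustrative check for an AI-generation marker in embedded metadata.
# Full C2PA manifest validation is far more involved; this only looks for
# the IPTC DigitalSourceType value used to mark synthetic media.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the synthetic-media marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for media_file in sys.argv[1:]:
        verdict = "label as AI-generated" if looks_ai_generated(media_file) else "no marker found"
        print(f"{media_file}: {verdict}")
```

Because such markers disappear when a file is re-exported or screenshotted, platforms pair metadata checks with classifier-based detection, which is one reason labeling coverage remains partial.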
Talent agencies such as Creative Artists Agency (CAA) are now helping clients manage digital risks alongside traditional career support, including monitoring for unauthorized use of a client's voice, face, or performance online and taking action when necessary. Record labels are also taking steps to address AI concerns, such as negotiating licensing deals with AI music companies to define how copyrighted music can be used.
The Human Artistry Campaign, supported by organizations like the RIAA, SAG-AFTRA, and Universal Music Group, promotes seven key principles for using AI in ways that support artists, including getting permission before using someone's voice or image, crediting original creators, and ensuring artists are paid fairly.
The No Fakes Act would establish federal protections for voice, image, and likeness, outlining expectations around consent and authenticity. While it aims to combat unauthorized deepfakes, its broad and stringent provisions have alarmed the tech and digital rights communities, who fear it may undermine online freedom, innovation, and fair content moderation practices. As the Act continues to evolve, it remains to be seen how it will balance protection against deepfakes with the preservation of internet freedom and the fostering of innovation.
- The No Fakes Act's expanded scope covers not only individuals but also the technology and platforms that produce AI-generated content, establishing a federalized image-licensing system and requiring platforms to implement strict censorship and filtering systems.
- The entertainment industry, in response to the growing concerns over AI-generated content, is taking steps to protect artists' rights via various means, such as talent agencies monitoring for unauthorized use, record labels negotiating licensing deals, and the Human Artistry Campaign promoting principles for responsible AI usage that prioritize consent, crediting original creators, and fair compensation for artists.