Deepfake AI regulation a tightrope walk for Congress
- U.S. lawmakers are weighing how to regulate generative AI to combat deepfakes while upholding First Amendment rights.
- Deepfake AI, which creates deceptive audio and visual replicas of real people, has drawn legislative attention amid concerns about misuse.
- Proposed bills such as the No AI FRAUD Act and the NO FAKES Act aim to address the unauthorized creation and distribution of AI-generated replicas.
- Some experts advocate for specific legislation to regulate deepfake AI, while others argue that existing laws suffice.
- Witnesses at a Senate Judiciary Subcommittee hearing stressed the need for regulations to protect individuals’ voices and likenesses from misuse.
- Concerns about deepfake technology extend beyond politics to areas such as business relationships and artistic integrity.
- Stakeholders, including Warner Music Group CEO Robert Kyncl and musical artist FKA Twigs, advocate for legislation to safeguard against exploitation of artists’ work.
- However, legal experts caution that proposed legislation must balance protection against deepfake misuse with First Amendment rights.
- The Federal Communications Commission and the Federal Trade Commission have taken steps to address AI-generated content, considering regulations to prevent deceptive practices.
- Globally, the European Union is also scrutinizing the use of deepfake technology, with investigations into Meta’s practices around political disinformation.