Meta’s AI Watermarking Plan Is Flimsy, at Best
- Deepfake robocalls impersonating Joe Biden and a fake Taylor Swift endorsement of Donald Trump have raised concerns about AI's impact on the 2024 election.
- In response, Meta, the parent company of Facebook and Instagram, announced plans to label AI-generated content created with popular generative AI tools.
- Advocates worry about AI's potential harms to democracy and question whether Meta's labeling approach can effectively combat deepfakes.
- Meta's system relies on watermarks embedded in images, an approach that offers no protection against unsecured "open-source" generative AI tools that produce no watermarks at all.
- Bad actors may also be able to bypass Meta's labeling system even when they use tools from companies Meta works with, such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
- Watermarks produced under the current C2PA standard are embedded as metadata in the image file and can be stripped in seconds, undermining Meta's labeling efforts.
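To illustrate how fragile metadata-based provenance is: because C2PA credentials travel inside the image file rather than in the pixels, re-encoding just the pixel data into a fresh file leaves the credentials behind. The minimal sketch below uses Python's Pillow library and hypothetical filenames; a screenshot or a common metadata-stripping tool accomplishes the same thing.

```python
from PIL import Image


def strip_provenance(src_path: str, dst_path: str) -> None:
    """Re-encode only the pixel data, discarding all file metadata
    (EXIF, XMP, and any embedded C2PA manifest)."""
    with Image.open(src_path) as img:
        pixels = img.convert("RGB")              # keep only the visible image
        clean = Image.new("RGB", pixels.size)
        clean.putdata(list(pixels.getdata()))    # copy pixels; no metadata carries over
    clean.save(dst_path, format="JPEG", quality=95)


if __name__ == "__main__":
    # Hypothetical filenames, for illustration only.
    strip_provenance("labeled_ai_image.jpg", "stripped_copy.jpg")
```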