YouTube is developing a tool in partnership with Creative Artists Agency (CAA) that lets creators and celebrities detect and manage AI-generated content that uses their likeness, including their faces and voices, and submit removal requests for unauthorized material. CAA supplies a database of digital replicas of its celebrity clients, which helps identify AI deepfakes. The initiative aims to address the growing problem of deepfakes and give content creators more control over how their digital identities are used online.
OWASP (the Open Web Application Security Project) has released new security guidance for organizations running generative AI tools. The updated guidance from the OWASP Top 10 for LLM project focuses on the growing threat of deepfakes, with recommendations covering risk assessment, threat actor identification, incident response, awareness training, and preparation for different types of deepfake events. It also advocates establishing centers of excellence for gen AI security to develop security policies, foster collaboration, build trust, advance ethical practices, and optimize AI performance. The guidance underscores the need for a more comprehensive approach to securing AI and machine-learning tools as attackers use AI to craft increasingly sophisticated threats.