Microsoft Urges Policymakers To Write Laws Against AI Deepfakes
Amid a fraught election cycle, Microsoft is urging policymakers to take legal action against AI-generated deepfakes.
Microsoft President Brad Smith warned in a blog post Tuesday that AI-generated deepfakes are becoming increasingly sophisticated, making them effective vehicles for fraud, abuse and manipulation, especially among vulnerable groups like seniors.
Smith's comments come just days after Elon Musk, owner of the X social media platform, shared a deepfake video of Vice President Kamala Harris appearing to say things she never said.
"While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud," Smith wrote. Legislators must create frameworks that promote authentic content, enable the detection of deepfakes and spread awareness about AI misuse, he argued.
Many states already have laws requiring the disclosure of AI-generated content in political ads, while states like California, Washington, Texas and Michigan also regulate the use of deepfakes. In light of the incident with Musk this week, California Governor Gavin Newsom signaled that his state will consider tightening its deepfake regulations.
However, Smith said, legislatures must do more. He outlined three ways policymakers can craft an effective framework to fight AI misuse:
- Enact a federal "deepfake fraud statute" to provide law enforcement with a framework to prosecute AI-generated fraud and scams.
- Require AI system providers to use advanced provenance tools to label synthetic content, enhancing public trust in digital information.
- Update federal and state laws on child sexual exploitation and non-consensual intimate imagery to include AI-generated content, imposing penalties to curb the misuse of AI for sexual exploitation.
"Microsoft offers these recommendations to contribute to the much-needed dialogue on AI synthetic media harms," wrote Smith. "Enacting any of these proposals will fundamentally require a whole-of-society approach. While it's imperative that the technology industry have a seat at the table, it must do so with humility and a bias towards action."
Smith also emphasized that the burden of curbing the negative effects of AI deepfakes falls not only on lawmakers but on the private sector as well. He pointed to Microsoft's own AI safety track record, which includes implementing a robust safety architecture, attaching metadata to AI-generated images, developing standards for content provenance, and launching new detection tools like Azure Operator Call Protection.