Tech Giants Try To Get Ahead of AI-Based Election Misinformation

One day after OpenAI unveiled Sora, an AI model that generates videos from text prompts, some of the world's biggest purveyors of generative AI -- including OpenAI itself -- are sounding the alarm about the potential abuse of such technology during an especially busy election year.

On Friday, 20 technology giants announced that they had signed the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections" at the Munich Security Conference.

The accord, which so far appears to be a purely symbolic gesture, "seeks to set expectations for how signatories will manage the risks arising from Deceptive AI Election Content created through their publicly accessible, large-scale platforms or open foundational models, or distributed on their large-scale social or publishing platforms, in line with their own policies and practices."

Some election-specific abuses that the agreement seeks to address include "AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote."

The initial signatories are:

Adobe
Amazon
Anthropic
Arm
ElevenLabs
Google
IBM
Inflection AI
LinkedIn
McAfee
Meta
Microsoft
Nota
OpenAI
Snap Inc.
Stability AI
TikTok
Trend Micro
Truepic
X

According to the press release, the above companies have specifically agreed to the following eight goals:

  1. Developing and implementing technology to mitigate risks related to Deceptive AI Election Content, including open-source tools where appropriate
  2. Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content
  3. Seeking to detect the distribution of this content on their platforms
  4. Seeking to appropriately address this content detected on their platforms
  5. Fostering cross-industry resilience to Deceptive AI Election Content
  6. Providing transparency to the public regarding how the company addresses it
  7. Continuing to engage with a diverse set of global civil society organizations, academics
  8. Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

Politically motivated AI "deepfakes" are not new, but in just the past few months, the technology behind them has become both far more accessible and far harder to detect. With more than 50 national elections slated to take place around the world this year, 2024 represents an inflection point for the global discourse on the responsible use of generative AI.

Some of the accord's signatories have already been independently developing ways to fight AI-based election misinformation. Anthropic, maker of the Claude chatbot, on Friday described some of the steps it has been taking to buttress its AI models against politically motivated bad actors. These include conducting robust testing of its models for weak spots in its policies, creating automated detection systems that can identify and block malicious users, and directing users to reputable sources of election information.
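To make the detect-and-redirect idea concrete, here is a minimal sketch of the general pattern in Python. The keyword heuristic, function name, and URL are illustrative stand-ins, not Anthropic's actual implementation; a production system would rely on trained classifiers rather than substring matching.

    # A hedged sketch of the "detect and redirect" pattern: flag prompts
    # that look election-related and point users to an authoritative
    # source. All names and the URL below are illustrative stand-ins.
    ELECTION_TERMS = ("vote", "ballot", "polling place", "election official",
                      "register to vote")
    VOTER_INFO_URL = "https://www.canivote.org"  # example nonpartisan resource

    def election_redirect_notice(prompt: str) -> str | None:
        """Return a pointer to authoritative voting info for election prompts."""
        text = prompt.lower()
        if any(term in text for term in ELECTION_TERMS):
            return f"For up-to-date voting information, see {VOTER_INFO_URL}"
        return None  # non-election prompts proceed as normal

    print(election_redirect_notice("Where is my polling place?"))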

OpenAI, meanwhile, has recently started embedding metadata (based on the C2PA provenance standard) in images created by DALL-E 3 or ChatGPT to indicate that they are AI-generated. This week, the company said it is exploring ways to mark Sora-generated videos in the same way. It is not a bulletproof solution, however: as currently implemented in DALL-E 3 and ChatGPT images, the metadata can be easily removed.
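To illustrate why metadata-based labeling is fragile, the short sketch below (Python, using the Pillow imaging library; the file names are hypothetical) simply re-encodes an image. Provenance information such as a C2PA manifest lives in ancillary metadata rather than in the pixels themselves, so a re-saved copy does not carry it.

    # A minimal sketch of metadata stripping, assuming Pillow is installed
    # (pip install Pillow). File names are hypothetical stand-ins.
    from PIL import Image

    img = Image.open("dalle3_output.png")  # image carrying provenance metadata
    img.save("reencoded_copy.png")         # writes a fresh PNG from the decoded
                                           # pixels; ancillary metadata chunks
                                           # (e.g., a C2PA manifest) are not
                                           # copied over by default

Screenshots, format conversions, and many social media upload pipelines have a similar effect, which is why metadata alone cannot guarantee provenance.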

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
