Meta Says It's Taking Steps To Preempt AI-Driven Election Abuse

With voters in more than 50 countries heading to the polls this year, the industry's biggest generative AI companies are paying close attention to how resilient their systems are against politically motivated misuse.

One of those companies is Meta, steward of the Llama large language model and parent company of Facebook, which has a notoriously poor record of curtailing the spread of election misinformation among its users. Now, with AI image and video generators becoming increasingly sophisticated and accessible, Meta says it's working to make sure its social media properties -- which, besides Facebook, include Instagram and Threads -- don't become breeding grounds for fake news (again).

Earlier this month, the company joined 19 others, including Microsoft, OpenAI and Google, in signing the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections." The accord sets expectations for how each signatory will combat "Deceptive AI Election Content" created or spread with its respective tools.

Meta also recently announced that it was looking into ways to label AI-generated media on its sites, including, potentially, audio and video.

"[I]t's important that we help people know when photorealistic content they're seeing has been created using AI. We do that by applying 'Imagined with AI' labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies' tools too," wrote Meta global affairs president Nick Clegg in a blog post.

"That's why we've been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads."

Meta expects this capability to be ready in the coming months, but for images only. Disclosing the provenance of AI-generated audio and video -- and making that disclosure easy for the general public to see -- is another matter. "While companies are starting to include signals in their image generators," said Clegg, "they haven't started including them in AI tools that generate audio and video at the same scale, so we can't yet detect those signals and label this content from other companies."

For now, Meta's workaround is to "require" users to disclose whether a piece of audio or video they post is AI-generated or otherwise altered. Users who fail to meet this requirement "may" face a penalty, according to Clegg, though it wasn't clear what form that penalty would take.

This week, amid the run-up to the European Parliament elections, Meta's head of EU affairs, Marco Pancini, reiterated in a blog post that the company is "committed to taking a responsible approach to new technologies like GenAI."

Like other content, AI-generated images posted to Meta's sites must abide by the company's terms of use; some may be subject to review and fact-checking, Pancini said. If an image is found to have been modified, Meta will mark it as "altered" and rank it lower in users' feeds "so fewer people see it."

Meta is also scrutinizing paid ads. Ads containing content that has been debunked won't run, according to Pancini, and advertisers will be required to "disclose if they use a photorealistic image or video, or realistic sounding audio, that has been created or altered digitally, including with AI, in certain cases."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
