Tech companies sign agreement to combat AI-generated election deception

Major technology companies signed an agreement on Friday to voluntarily take “reasonable precautions” to prevent the use of artificial intelligence tools to interfere with democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they will respond to AI-generated deepfakes that deliberately deceive voters. Twelve other companies – including Elon Musk’s X – are also signing the agreement.

“Everyone recognizes that no single technology company, no single government, no single civil society organization can deal with the advent of this technology and its potentially devastating uses on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview before the summit.

The agreement is largely symbolic, but it targets increasingly realistic AI-generated images, audio and video that “deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”

The companies are not committing to ban or remove deepfakes. Instead, the agreement outlines the methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes that the companies will share best practices with each other and provide “swift and proportionate responses” when such content begins to spread.

The vagueness of the commitments and the lack of any binding requirements likely helped win over a wide range of companies, but disappointed advocates who were looking for stronger assurances.

“The language is not as strong as one would expect,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and I acknowledge that the companies have a vested interest in not having their tools used to undermine free and fair elections. That said, it’s voluntary, and we’ll be keeping an eye on whether they follow through.”

Clegg said that “every company, rightly so, has its own set of policies.”

“This is not trying to put a straitjacket on everybody,” he said. “And anyway, nobody in the industry thinks you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole, finding everything that you think might mislead someone.”

A number of political leaders from Europe and the US also joined Friday’s announcement. European Commission Vice President Vera Jourova said that while such an agreement may not be comprehensive, it contains “very impactful and positive elements.” She also urged fellow politicians to take responsibility not to use AI tools deceptively and warned that AI-fueled disinformation could bring about “the end of democracy, not only in EU member states”.

The agreement comes as the German city hosts its annual security meeting and as more than 50 countries are set to hold national elections in 2024. Bangladesh, Taiwan, Pakistan and, most recently, Indonesia have already done so.

Attempts at AI-generated election interference have already begun, for example when AI robocalls imitating US President Joe Biden’s voice tried to dissuade people from voting in the New Hampshire primary last month.

Just days before Slovakia’s elections in September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify the recordings as false as they spread across social media.

Politicians have also experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to advertisements.

The agreement calls on platforms to “be attentive to context and in particular to safeguard educational, documentary, artistic, satirical, and political expression.”

It says the companies will focus on transparency with users about their policies and will work to educate the public about how to avoid falling for AI fakes.

Most companies have previously said they are adding safeguards to their own AI generation tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users will know if what they are seeing is real. But most of those proposed solutions haven’t been rolled out yet and the companies are under pressure to do more.

That pressure is higher in the United States, where Congress has yet to pass laws governing AI in politics, leaving companies to largely regulate themselves.

The Federal Communications Commission recently confirmed that AI-generated audio clips in robocalls are against the law, but that ruling does not cover audio deepfakes when they circulate on social media or in campaign advertisements.

Many social media companies already have policies in place to discourage deceptive posts about election processes — AI-generated or not. Meta says it removes false information about “the dates, locations, times, and methods of voting, voter registration, or census participation” as well as other false posts intended to interfere with a person’s civic participation.

Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, said the agreement seemed like a “positive step,” but he would still like to see social media companies take other actions to combat misinformation, such as building content recommendation systems that do not prioritize engagement above all else.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the agreement is not enough and that AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and sufficient safeguards in place to help us avert many potential problems.”

In addition to the companies that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-cloning startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and Trend Micro; and Stability AI, known for the Stable Diffusion image generator.

Notably absent is another popular AI image generator, Midjourney. The San Francisco-based startup did not immediately respond to a request for comment Friday.

The inclusion of X, which was not mentioned in an earlier announcement of the pending agreement, was one of Friday’s surprises. Musk sharply reduced content moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist”.

In a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to protect free and fair elections.”

“X is committed to participating, collaborating with peers to combat AI threats while protecting free speech and maximizing transparency,” she said.

__

The Associated Press receives support from several private foundations to supplement its explanatory coverage of elections and democracy. See more about the AP democracy initiative here. The AP is solely responsible for all content.
