Tech Giants Vow To Combat AI Misuse In Election Year

Twenty of the world’s biggest technology companies, including Amazon, Adobe, Google, Meta, Microsoft, OpenAI, TikTok and X, have vowed to take measures against the misuse of artificial intelligence (AI) to disrupt elections around the world this year.

AI has already been used to create false content aimed at manipulating voters this year, when around 4 billion people in 40 countries are expected to participate in elections.

Generative AI tools that are increasingly accessible and powerful have surged in popularity over the past year, following the debut of OpenAI’s ChatGPT in late 2022.

Such tools can create photorealistic images or videos or convincing written content from text prompts.

AI misuse

OpenAI last week showed samples created by its upcoming text-to-video tool Sora, which is not yet available to the public and is being vetted by safety experts.

Experts fear such tools could be used to manipulate elections by creating false information around candidates.

In January a fake robocall in the voice of US president Joe Biden urged voters not to participate in New Hampshire’s primary election.

Taiwan saw fake content circulating on social media ahead of its 13 January election.

Voluntary measures

At the Munich Security Conference on Friday the tech firms announced their “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a voluntary agreement with eight specific commitments to deploy technology against harmful AI content.

Kent Walker, president of global affairs at Google, said AI misuse threatens not only election integrity, but also the “generational opportunity” presented by positive uses of the technology.

“We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science,” he said.

Lisa Gilbert, executive vice president of non-profit Public Citizen, which has been advocating for legislation around political and explicit AI-generated content, said voluntary measures were “not enough”.

‘Not enough’

“The AI companies must commit to hold back technology – especially text-to-video – that presents major election risks until there are substantial and adequate safeguards in place to help us avert many potential problems,” she said.

US senators Mark Warner and Lindsey Graham said in a joint statement the move was a “constructive step forward”.

“Time will tell how effective these steps are and if further action is needed,” they said.

The accord’s initial signatories are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has written for Ziff-Davis, ZDNet and other leading publications.
