Meta Restricts AI Tools For Political Ads In Deepfake Clampdown

Meta Platforms has announced that adverts on its platforms will, going forward, have to publicly disclose when AI-created or altered content has been used.

Reuters reported Meta as confirming that, from 2024, advertisers will have to disclose when artificial intelligence (AI) or other digital methods are used to alter or create political, social or election-related advertisements on Facebook and Instagram.

The Meta move comes after the world’s first AI Safety Summit was held at Bletchley Park last week, which resulted in the first international declaration on AI, with signatories including the UK, US, EU, Australia and China all agreeing that artificial intelligence poses a potentially catastrophic risk to humanity.


AI disclosures

That AI Safety Summit came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

According to the Reuters report, Meta said that in 2024 it will require advertisers to disclose if their altered or created adverts portray real people as doing or saying something they did not, or if they digitally produce a realistic-looking person who does not exist.

Reuters reported that Meta will also ask advertisers to disclose if these ads show events that did not take place, alter footage of a real event, or depict a real event without using genuine images, video or audio of it.

Meta is the world’s second-largest platform for digital advertising, and it already blocks its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures.

The Meta move comes after Google announced the launch of image-customising generative AI ad tools last week.

The search engine giant also reportedly said it planned to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.

Deepfake concerns

Meanwhile the issue of AI being used to create content that falsely depicts candidates in political ads has already been raised by US lawmakers.


In July the Biden administration announced that a number of big-name players in the artificial intelligence sector had agreed to voluntary safeguards against the risks posed by AI.

The White House said it had secured voluntary commitments from the likes of Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI that underscore “safety, security, and trust and mark a critical step toward developing responsible AI”.

Last week President Biden signed a wide-ranging executive order on AI that amongst other measures obliges companies developing the most powerful models to submit regular security reports to the federal government.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
