Meta Restricts AI Tools For Political Ads In Deepfake Clampdown

Meta Platforms has announced that, going forward, its adverts will have to publicly disclose when they use AI-created or AI-altered content.

Reuters reported that Meta confirmed that from 2024, advertisers will have to disclose when artificial intelligence (AI) or other digital methods are used to alter or create political, social or election-related advertisements on Facebook and Instagram.

The Meta move comes after the world’s first AI Safety Summit was held at Bletchley Park last week. The summit resulted in the first international declaration on AI, in which countries including the UK, US, Australia and China, as well as the EU, agreed that artificial intelligence poses a potentially catastrophic risk to humanity.

Bletchley Park

AI disclosures

That AI Safety Summit came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

According to the Reuters report, Meta said it would require advertisers in 2024 to disclose if their altered or created adverts portray real people doing or saying something they did not, or if they digitally produce a realistic-looking person who does not exist.

Reuters reported that Meta will also ask advertisers to disclose if these ads show events that did not take place, alter footage of a real event, or depict a real event without using a true image, video or audio recording of that event.

Meta is the second largest platform for digital advertising in the world, and it already blocks its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures.

The Meta move comes after Google announced the launch of image-customising generative AI ads tools last week.

The search engine giant also reportedly said it planned to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.

Deepfake concerns

Meanwhile the issue of AI being used to create content that falsely depicts candidates in political ads has already been raised by US lawmakers.


In July the Biden administration announced that a number of big-name players in the artificial intelligence sector had agreed to voluntary safeguards to address the risks posed by AI.

The White House said it had secured voluntary commitments from the likes of Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI that underscore “safety, security, and trust and mark a critical step toward developing responsible AI”.

Last week President Biden signed a wide-ranging executive order on AI that, amongst other measures, obliges companies developing the most powerful models to submit regular security reports to the federal government.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
