Meta Restricts AI Tools For Political Ads In Deepfake Clampdown

Meta Platforms has announced that, going forward, advertisers on its platforms will have to publicly disclose when adverts contain AI-created or AI-altered content.

Reuters reported Meta as confirming that from 2024, advertisers will have to disclose when artificial intelligence (AI) or other digital methods are used to alter or create political, social or election-related advertisements on Facebook and Instagram.

The Meta move comes after the world’s first AI Safety Summit was held at Bletchley Park last week. The summit resulted in the first international declaration on AI, with signatories including the UK, US, EU, Australia and China agreeing that artificial intelligence poses a potentially catastrophic risk to humanity.

Bletchley Park

AI disclosures

That AI Safety Summit came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

According to the Reuters report, Meta said that in 2024 it would require advertisers to disclose if their altered or created adverts portray real people doing or saying something that they did not, or if they digitally produce a realistic-looking person who does not exist.

Reuters reported that Meta will also require advertisers to disclose if these ads show events that did not take place, alter footage of a real event, or depict a real event without using a true image, video or audio recording of that event.

Meta is the world’s second-largest platform for digital advertising, and it already blocks its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures.

The Meta move comes after Google announced the launch of image-customising generative AI ads tools last week.

The search engine giant also reportedly said it planned to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.

Deepfake concerns

Meanwhile the issue of AI being used to create content that falsely depicts candidates in political ads has already been raised by US lawmakers.


In July the Biden administration announced that a number of big-name players in the artificial intelligence sector had agreed to voluntary safeguards against the risks posed by AI.

The White House said it had secured voluntary commitments that underscore “safety, security, and trust and mark a critical step toward developing responsible AI” from the likes of Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

Last week President Biden signed a wide-ranging executive order on AI that, amongst other measures, obliges companies developing the most powerful models to submit regular security reports to the federal government.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
