TikTok To Label AI-Generated Content From Other Platforms
AI-generated content such as videos and images is going to be labelled by TikTok using the Content Credentials digital watermark
TikTok is seeking to head off concerns about AI-generated content influencing upcoming elections around the world.
The popular short-video app announced that it “is starting to automatically label AI-generated content (AIGC) when it’s uploaded from certain other platforms.”
TikTok already labels AI-generated content made with tools inside the app, but the latest move would apply a label to videos and images generated outside of the service.
AI-generated content
TikTok is not the first social platform to do this. In February, for example, Meta Platforms said it had begun detecting and labelling AI-generated images made by other companies’ AI systems on Facebook, Instagram and Threads.
Meta had already labelled images created by its own AI services, which included invisible watermarks and metadata that could alert other companies that the image was artificially generated.
The moves by Meta and TikTok come amidst growing concern over the potential for misuse of generative AI systems, which can create fake visual content (deepfakes etc) that appears authentic.
Now, TikTok said it will detect when images or videos uploaded to its platform contain metadata tags indicating the presence of AI-generated content.
TikTok does however claim to be the first social media platform to support the new tamper-proof Content Credentials metadata, developed by Adobe last year.
It also said that it is partnering with the Coalition for Content Provenance and Authenticity (C2PA).
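As an illustration of how this kind of metadata-based detection works: Content Credentials manifests defined by the C2PA standard are embedded in media files inside JUMBF boxes labelled with the string "c2pa". The sketch below is a naive Python heuristic, not TikTok's actual implementation; a real platform would use a full C2PA SDK to parse the box structure and verify the manifest's cryptographic signature, and the usage hook shown is hypothetical.

```python
def contains_c2pa_marker(data: bytes) -> bool:
    """Naive check for an embedded C2PA (Content Credentials) manifest.

    C2PA manifests live in JUMBF boxes whose type label includes the
    ASCII string 'c2pa'. Scanning raw bytes for that string is only a
    heuristic sketch: production code parses the JUMBF structure and
    validates the manifest's cryptographic signature before trusting it.
    """
    return b"c2pa" in data


# Hypothetical usage on an uploaded file:
# with open("upload.jpg", "rb") as f:
#     if contains_c2pa_marker(f.read()):
#         apply_ai_generated_label()  # hypothetical platform hook
```

Because the manifest is cryptographically signed under the C2PA spec, stripping or altering it is detectable, which is why TikTok describes the metadata as tamper-proof.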
“AI enables incredible creative opportunities, but can confuse or mislead viewers if they don’t know content was AI-generated,” it said. “Labelling helps make that context clear – which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year. We also built a first-of-its-kind tool to make this easy to do, which over 37 million creators have used since last fall.”
“Over the coming months, we’ll also start attaching Content Credentials to TikTok content, which will remain on content when downloaded,” it added.
“With TikTok’s vast community of creators and users globally, we are thrilled to welcome them to both the C2PA and CAI as they embark on the journey to provide more transparency and authenticity on the platform,” said Dana Rao, General Counsel and Chief Trust Officer at Adobe. “At a time when any digital content can be altered, it is essential to provide ways for the public to discern what is true. Today’s announcement is a critical step towards achieving that outcome.”
AI risks
There have been concerns for a while now about AI-generated content, and its potential to mislead people ahead of important elections around the world.
For example, India’s general election took place in April, and there are also important elections coming in the United Kingdom and South Africa, as well as European Parliament elections in the summer.
The US Presidential elections are later this year.
To combat potential AI interference, big name tech firms agreed with the Biden Administration in July 2023 to implement voluntary safeguards against the risks posed by AI, including the use of watermarks.
In August 2023 Google DeepMind announced it was “launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images.”
In February of this year, twenty of the world’s biggest technology companies, including Amazon, Adobe, Google, Meta, Microsoft, OpenAI, TikTok and X, vowed to take measures against the misuse of artificial intelligence (AI) to disrupt elections around the world this year.
In January a fake robocall in the voice of US president Joe Biden urged voters not to participate in New Hampshire’s primary election.
Taiwan saw fake content circulating on social media ahead of its 13 January election.