TikTok’s Chinese sister service Douyin has published new rules for creators requiring them to clearly label content generated by artificial intelligence (AI) tools, as China and other countries prepare AI regulatory frameworks.
Critics have warned that increasingly sophisticated generative AI systems, such as ChatGPT, are capable of creating content that appears authentic but can be used to spread misinformation.
Douyin’s new rules published on Tuesday say clear AI labels will “help other users differentiate between what’s virtual and what’s real”, according to local media reports.
The rules say creators will be held responsible for the consequences of posting AI-generated content.
Douyin and TikTok are both operated by Beijing-based ByteDance.
The firm said the rules are based on new regulations called the Administrative Provisions on Deep Synthesis for Internet Information Service that went into effect on 10 January.
The regulations impose obligations on the providers and users of “deep synthesis” tools, including technology that produces synthetic content such as deepfakes.
A February blog post by international law firm Allen & Overy noted that such technology could be “used by criminals to produce, copy and disseminate illegal or false information or assume other people’s identities to commit fraud”.
The regulation covers technologies that generate or edit text content, video and audio, as well as those that produce virtual scenes or 3D reconstructions.
Digital avatars are permitted on Douyin, but they must be registered with the platform and users are required to verify their real names.
The company said on Tuesday that those who use generative AI to create content that infringes on other people’s portrait rights and copyright, or that contains falsified information, will be “severely penalised”, the South China Morning Post reported.
China’s internet regulator, the Cyberspace Administration of China (CAC), last month proposed regulations covering generative AI services in the country that aim to prevent discriminatory content, the spread of false information and content that harms personal privacy or intellectual property.
Under a 2018 regulation, such tools must pass a CAC security assessment before being made available to the public.
The CAC is soliciting public feedback for the new rules until 10 May.
Apple co-founder Steve Wozniak warned this week that AI could be used to aid scammers by making them appear more convincing and said AI-generated content should be labelled as such.
He said the humans who publish AI-generated content should be held responsible for their publications and called for regulation of the sector.
US president Joe Biden last week met with the chief executives of Google, Microsoft and two major AI companies as the US government seeks to ensure artificial intelligence products are developed in a safe and secure way.