TikTok Sister Site Douyin Mandates Labels For AI Content

TikTok’s Chinese sister service Douyin has published new rules for creators requiring them to clearly label content generated by artificial intelligence (AI) tools, as China and other countries prepare AI regulatory frameworks.

Critics have warned that increasingly sophisticated generative AI systems, such as ChatGPT, are capable of creating content that appears authentic but can be used to spread misinformation.

Douyin’s new rules, published on Tuesday, say clear AI labels will “help other users differentiate between what’s virtual and what’s real”, according to local media reports.

The rules say creators will be held responsible for the consequences of posting AI-generated content.

Image credit: Tara Winstead/Pexels

AI rules

Douyin and TikTok are both operated by Beijing-based ByteDance.

The firm said the rules are based on new regulations, the Administrative Provisions on Deep Synthesis for Internet Information Services, which took effect on 10 January.

The regulations impose obligations on the providers and users of “deep synthesis” tools, including technology that produces synthetic content such as deepfakes.

A February blog post by international law firm Allen & Overy noted that such technology could be “used by criminals to produce, copy and disseminate illegal or false information or assume other people’s identities to commit fraud”.

The regulation covers technologies that generate or edit text content, video and audio, as well as those that produce virtual scenes or 3D reconstructions.

Deepfakes

Digital avatars are permitted on Douyin, but they must be registered with the platform and their users are required to verify their real names.

The company said on Tuesday that those who use generative AI to create content that infringes on other people’s portrait rights and copyright, or that contains falsified information, will be “severely penalised”, the South China Morning Post reported.

Internet regulator the Cyberspace Administration of China (CAC) last month proposed regulations covering generative AI services in the country, aimed at preventing discriminatory content, the spread of false information and content that infringes on personal privacy or intellectual property.

Under a 2018 regulation, such tools must pass a CAC security assessment before being made available to the public.

AI risk

The CAC is soliciting public feedback for the new rules until 10 May.

Apple co-founder Steve Wozniak warned this week that AI could be used to aid scammers by making them appear more convincing and said AI-generated content should be labelled as such.

He said the humans who publish AI-generated content should be held responsible for their publications and called for regulation of the sector.

US president Joe Biden last week met with the chief executives of Google, Microsoft and two major AI companies as the US government seeks to ensure artificial intelligence products are developed in a safe and secure way.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications
