ByteDance’s TikTok has reportedly confirmed that it is laying off hundreds of staff around the world, as it shifts towards greater use of AI in content moderation.

Reuters reported that the job cuts include a large number of staff in Malaysia, as well as other parts of TikTok’s global workforce.

This is not the first time that job cuts have been announced at TikTok. In May this year, TikTok cut hundreds of jobs in its operations and marketing teams, as part of a wider restructuring drive by Beijing-based parent company ByteDance.


Job cuts

Those cuts were not related to the political tensions facing the company in the US, where President Joe Biden earlier in the year signed a law that would force ByteDance to divest TikTok to US ownership before the end of his term in January.

TikTok is currently fighting that order in the US courts.

Into this comes the news from two sources familiar with the matter, who told Reuters that more than 700 jobs were slashed in Malaysia.

TikTok later clarified that fewer than 500 employees in the country were affected.

According to the Reuters report, the employees were mostly involved in the firm’s content moderation operations, and were informed of their dismissal by email late on Wednesday.

In response to Reuters’ queries, TikTok confirmed the layoffs and said that several hundred employees were expected to be impacted globally as part of a wider plan to improve its moderation operations.

Content moderation

At the moment TikTok utilises a mix of automated detection and human moderators to review content posted on the site.

ByteDance has over 110,000 employees in more than 200 cities globally, and the bad news is that the Reuters sources have indicated the firm is planning more job losses next month as it looks to consolidate some of its regional operations.

“We’re making these changes as part of our ongoing efforts to further strengthen our global operating model for content moderation,” a TikTok spokesperson was quoted by Reuters as saying.

TikTok expects to invest $2 billion globally in trust and safety this year and will continue to improve efficiency, with 80 percent of guideline-violating content now removed by automated technologies, the spokesperson reportedly added.

During the Covid-19 pandemic in 2020 and 2021, social networking giants such as YouTube, Twitter and Facebook began relying more heavily on artificial intelligence and automated tools to police material posted to their platforms.

But firms have to ensure they comply with local laws and are not caught out by cutting back their content moderation capabilities too far.

