Ofcom Gives Tech Firms Three Months To Implement Content Controls

The UK Houses of Parliament

Ofcom publishes codes of practice for tech platforms to comply with the Online Safety Act, with measures coming into effect on 17 March


Ofcom has published codes of practice for online platforms to follow in order to comply with the Online Safety Act, giving more than 100,000 online services of widely varying kinds three months to put effective moderation systems in place.

Services ranging from Facebook, Instagram and Google to Reddit and OnlyFans must begin implementing compliant systems by 17 March or face large fines and other legal penalties.

The sites covered by the regulation include those publishing user-generated content to other users, such as social media services, as well as search engines and file-sharing services such as Dropbox and Mega.

The act lists some 130 “priority offences” in areas such as child sexual abuse, terrorism and fraud, and companies’ content moderation systems must now actively identify and remove content related to them.

A judge's gavel on a computer keyboard

‘Biggest-ever’ policy change

Technology secretary Peter Kyle said the new guidelines represented the “biggest-ever” change to online safety policy.

“No longer will internet terrorists and child abusers be able to behave with impunity,” he wrote in the Guardian.

“Because for the first time, tech firms will be forced to proactively take down illegal content that plagues our internet. If they don’t, they will face enormous fines and, if necessary, Ofcom can ask the courts to block access to their platforms in Britain.”

The guidelines published by Ofcom advise companies to nominate a senior executive to hold responsibility for compliance, maintain properly staffed and funded moderation teams, test algorithms to ensure illegal content does not reach users, and remove accounts operated by, or on behalf of, outlawed militant groups.

Companies should also make available easy-to-find tools for making content complaints and should provide options for blocking and muting other accounts on the platform and disabling comments.

Platforms will also be required to institute automated systems such as “hash-matching”, a technique that identifies known child sexual abuse material for removal.
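For illustration only, the sketch below shows the general shape of a hash-matching check, assuming a hypothetical set of known fingerprints: compute a digest of an uploaded file and flag it if that digest appears in the database. Real deployments rely on perceptual hashes (such as Microsoft's PhotoDNA) and databases curated by child-protection bodies rather than a plain SHA-256 set, so that re-encoded or slightly altered copies are still caught.

import hashlib

# Hypothetical database of fingerprints of known illegal material.
# In practice this is a curated perceptual-hash list, not SHA-256 digests.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_digest(path):
    # Hash the file in chunks to avoid loading it fully into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_for_removal(path):
    # A match against the known set would trigger removal and review.
    return file_digest(path) in KNOWN_HASHES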

Child safety criticism

Child safety campaigners said the guidance does not go far enough, with the Molly Rose Foundation saying it was “astonished” there were no specific, targeted measures to deal with self-harm and suicide content.

“Robust regulation remains the best way to tackle illegal content, but it simply isn’t acceptable for the regulator to take a gradualist approach to immediate threats to life,” said Andy Burrows, the group’s chief executive.

Children’s charity the NSPCC said it was “deeply concerned” that platforms such as WhatsApp would not be required to take down illegal content if it was not technically feasible.

“Today’s proposals will at best lock in the inertia to act, and at worst create a loophole which means services can evade tackling abuse in private messaging without fear of enforcement,” said acting chief Maria Neophytou.

The act became law in October of last year after years of tortuous negotiations over its detail and scope, and Ofcom began consulting on its illegal content codes in November.

‘Strengthened’ measures

It said it had “strengthened” its guidance to tech firms in several areas based on the consultation process.

Among other things, the act is designed to address concerns about the impact of social media on young people, an issue that last month prompted the Australian government to pass a law banning children under 16 from such platforms.

Ofcom’s guidance must be approved by Parliament, but the regulator is publishing the information now to give companies time to take the measures into account.