Government Considers Revisiting Online Safety Act

Ofcom has urged social media companies to take action against posts inciting violence, as the government said it may revisit the Online Safety Act, which is due to come into full force next year.

The media regulator urged platforms to address content that depicts “hatred and disorder” and promotes violence and misinformation.

Existing powers enable Ofcom to suspend or restrict video-sharing platforms such as YouTube or TikTok that fail to protect the public from “harmful material”. The regulator is expected to gain broader powers over social media platforms in general when the Online Safety Act comes into force next year.

“There is no need to wait to make your sites and apps safer for users,” said Ofcom safety director Gill Whitehead.


Online misinformation

Policing minister Dame Diana Johnson said tech firms “have an obligation now” to “deal with” material that incites violence.

Speaking on the Today programme on BBC Radio 4, Johnson said the government was considering revisiting the upcoming legislation.

“The events of the last few days have meant that we need to look very carefully at what more we can do,” she said.

She said a possible plan to ban convicted rioters from football matches was “being looked at”.

The government said last week social media platforms “clearly need to do far more” after a list supposedly containing the names and addresses of immigration lawyers was spread online, apparently originating from Telegram.

Azzurra Moores of fact-checking organisation Full Fact said online misinformation was a “clear and present danger spilling across into unrest on UK streets in real-time” and urged Ofcom and the government to take “bolder, stronger action”.

‘Not fit for purpose’

London mayor Sadiq Khan said last week the Online Safety Act is “not fit for purpose” due to the way misinformation spreads rapidly on social media and urged ministers to act “very, very quickly” to review it.

Platforms X and Telegram have been notably slow to act on harmful material, with X owner Elon Musk repeatedly amplifying posts containing misinformation to his 193 million followers on the service and criticising the government for cracking down on hate speech.

Telegram said its moderators were “actively monitoring the situation and are removing channels and posts containing calls to violence”, adding that “calls to violence” are forbidden in its terms of service.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
