Government Considers Revisiting Online Safety Act

Ofcom has urged social media companies to take action against posts inciting violence, as the government said it may revisit the Online Safety Act, which is due to come into full force next year.

The media regulator urged platforms to address content that depicts “hatred and disorder” and promotes violence and misinformation.

Existing powers enable Ofcom to suspend or restrict video-sharing platforms such as YouTube or TikTok that fail to protect the public from “harmful material”, and the regulator is expected to gain broader powers over social media platforms in general under the Online Safety Act next year.

“There is no need to wait to make your sites and apps safer for users,” said Ofcom safety director Gill Whitehead.

Image credit: Pexels

Online misinformation

Policing minister Dame Diana Johnson said tech firms “have an obligation now” to “deal with” material that incites violence.

Speaking on the Today programme on BBC Radio 4, Johnson said the government was considering revisiting the upcoming legislation.

“The events of the last few days have meant that we need to look very carefully at what more we can do,” she said.

She said a possible plan to ban convicted rioters from football matches was “being looked at”.

The government said last week social media platforms “clearly need to do far more” after a list supposedly containing the names and addresses of immigration lawyers was spread online, apparently originating from Telegram.

Azzurra Moores of fact-checking organisation Full Fact said online misinformation was a “clear and present danger spilling across into unrest on UK streets in real-time” and urged Ofcom and the government to take “bolder, stronger action”.

‘Not fit for purpose’

London mayor Sadiq Khan said last week the Online Safety Act is “not fit for purpose” due to the way misinformation spreads rapidly on social media and urged ministers to act “very, very quickly” to review it.

Platforms X and Telegram have been notably slow to act on harmful material, with X owner Elon Musk repeatedly relaying posts containing misinformation to his 193 million followers on the service and criticising the government for cracking down on hate speech.

Telegram said its moderators were “actively monitoring the situation and are removing channels and posts containing calls to violence”, adding that “calls to violence” are forbidden in its terms of service.

Matthew Broersma

Matt Broersma is a longstanding technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
