The European Commission said it plans to propose legislation later this month that could force internet firms such as Google, Facebook and Twitter to remove extremist or militant material within an hour of being notified about it.
The move, announced at a press conference, follows the Commission’s ultimatum in March that it would give internet companies three months to show that they were improving their content removal processes, or face legislation.
At the time the Commission said it was considering a one-hour time limit in cases where firms were notified that they were hosting militant or extremist material.
Up to now the Commission has taken a relatively hands-off approach, allowing firms to police themselves under a voluntary code of conduct.
The Commission said the code of conduct, originally agreed with Facebook, Microsoft, Twitter and YouTube in 2016, could remain in place for content such as hate speech and counterfeit goods, but said that for extremist material “absolute certainty” was needed.
“We came to the conclusion that it is too serious a threat and risk for European people that we should have absolute certainty that all the platforms and all the IT providers will delete the terrorist content and will cooperate with law enforcement bodies,” said European Justice Commissioner Vera Jourova at Wednesday’s press event, Reuters reported.
She said the proposal, which must be approved by EU member states and the European Parliament, would be presented later this month.
Late last month Julian King, the EU’s commissioner for security, said the Commission had “not seen enough progress” on quickly removing dangerous material and was planning to propose legislation allowing it to “take stronger action in order to better protect our citizens”.
In March the Commission had strengthened its voluntary guidelines for internet firms, adding a provision that encouraged removal of extremist or militant content within an hour.
The planned legislation would cover all websites, not only large firms with the staff to respond quickly to reports, King told the FT at the time.
The Commission sees extending the law to smaller platforms as a way to ensure illegal material is eradicated, King said.
“The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent,” he said. “All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform.”
The British government has also singled out social media firms for criticism, following attacks in London and other European capitals in recent years.
Top Twitter and Facebook executives testified before the US Congress on Wednesday on the related issue of foreign powers using their platforms to spread propaganda during the 2016 presidential election.
Facebook chief operating officer Sheryl Sandberg and Twitter chief executive Jack Dorsey outlined to the Senate Intelligence Committee their efforts to block such propaganda ahead of this year’s midterm elections.