Google said it plans to implement new measures to curb the use of YouTube as a propaganda tool for extremists, amid growing pressure from the UK government and following a series of attacks in the country in recent months.
In an editorial published in the Financial Times on Sunday, later republished as a blog post, Google called extremism an “attack on open societies”.
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done,” wrote Google general counsel Kent Walker.
The four measures outlined in the article include increased use of technology to identify extremism-related videos, the recruitment of more independent experts to flag such content, steps to make borderline videos less visible and expanded support for counter-radicalisation efforts.
The technical resources are to include an expansion of the engineering support for training machine learning tools that can more quickly catch militant videos, Google said.
“We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months,” Walker wrote.
The company said it’s expanding YouTube’s Trusted Flagger programme, which relies on independent experts to identify videos that don’t meet YouTube’s standards, adding 50 non-governmental organisations (NGOs) to the 63 that already participate in the programme, supported by grant funding.
In the case of videos that don’t clearly violate YouTube’s policies, but which are considered inflammatory, Google plans to display the content behind an interstitial warning and won’t allow the content to display ads, user endorsements or comments.
“That means these videos will have less engagement and be harder to find,” wrote Walker.
Google said YouTube also plans to expand its participation in a programme that displays targeted anti-extremist adverts that point viewers to videos aiming to debunk militant propaganda.
“We’ll keep working on the problem until we get the balance right,” wrote Google’s Walker.
Social media companies including Google, Facebook and Twitter have been criticised for allowing extremist content and misinformation to flourish on their platforms while blocking content that appears relatively harmless, but which could offend the sensibilities of their most prudish users.
Facebook, for instance, was castigated earlier this year for asking an Italian art historian to remove a photo of the famous sixteenth-century statue of the god Neptune that stands in a public square in Bologna, because of the statue’s nudity, and in the past has also censored Gustave Courbet’s painting The Origin of the World.
The House of Commons Home Affairs Select Committee recently issued a report heavily critical of social networks’ efforts to remove illegal content.
Labour MP Yvette Cooper, who chairs the committee, said she welcomed Google’s latest efforts.
“The select committee recommended that they should be more proactive in searching for – and taking down – illegal and extremist content, and to invest more of their profits in moderation,” Cooper said. “News that Google will now proactively scan content and fund the trusted flaggers who were helping to moderate their own site is therefore important and welcome.”