Facebook has pledged to use artificial intelligence (AI) to keep terrorist content off the social network.
Facebook is a founding member of the ‘Partnership on AI’ – a non-profit group founded last September by a number of tech firms to develop best practices for AI.
All of this comes as Facebook and other tech firms face increasing political pressure from European leaders in the wake of recent terrorist attacks.
Facebook was one of the tech firms that met Home Secretary Amber Rudd after the Westminster attack in March.
The firms pledged then to work harder to tackle terrorist propaganda online, and now Facebook has opened up about its efforts.
Facebook’s Monika Bickert, director of Global Policy Management, and Brian Fishman, counterterrorism policy manager, said they want the platform to be a “hostile place for terrorists”.
“In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online,” they wrote, before explaining how they use AI to keep terrorist content off Facebook, something Facebook has not talked about publicly before.
“Our stance is simple: There’s no place on Facebook for terrorism,” they said. “We remove terrorists and posts that support terrorism whenever we become aware of them.”
They admitted that keeping terrorist content off the platform is a challenge, as it is used by nearly 2 billion people every month.
“We want to find terrorist content immediately, before people in our community have seen it,” they wrote. “Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology – and specifically artificial intelligence – to stop the spread of terrorist content on Facebook.”
They said that while Facebook’s use of AI to fight terrorism is a fairly recent development, it is already changing how the company keeps potential terrorist propaganda and accounts off the platform.
“We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organisations in due course,” they wrote.
It seems that Facebook’s AI uses a combination of techniques to spot terrorist content.
This includes image matching (where the system checks if a video or picture matches known terrorism content).
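Facebook has not published implementation details, but matching of this kind is commonly done with perceptual hashes. The Python sketch below, using the third-party Pillow and imagehash libraries, shows the general idea; the stored hash and distance threshold here are invented for illustration.

```python
from PIL import Image
import imagehash

# Hypothetical store of perceptual hashes of known terrorist imagery.
KNOWN_HASHES = {imagehash.hex_to_hash("d1d1b1a1c1e1f101")}

def matches_known_content(path, max_distance=5):
    """Return True if an uploaded image is a near-duplicate of known content."""
    candidate = imagehash.phash(Image.open(path))
    # Similar images produce hashes a small Hamming distance apart, so a
    # thresholded comparison also catches resized or lightly edited copies.
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)
```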
Facebook has also begun using AI to understand any text that might be advocating for terrorism. Its AI algorithm is still in the early stages here, but it should “get better over time.”
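Facebook’s text models are proprietary and, by its own account, still early-stage. As a rough illustration of the kind of text classification involved, here is a minimal scikit-learn baseline; the training posts and labels below are entirely invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented labelled example posts: 1 = advocates terrorism, 0 = benign.
posts = [
    "join our brothers and take up arms",
    "praise be, the attackers struck and more will follow",
    "breaking news: police respond to incident in the city centre",
    "lovely family photos from our holiday trip",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a linear classifier; a real system would use far
# richer models and vastly more data in many languages.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# A probability score lets borderline posts be routed to human reviewers.
score = model.predict_proba(["take up arms and join the fight"])[0][1]
print(f"estimated probability of advocacy: {score:.2f}")
```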
Facebook also explained that it removes terrorist clusters, with algorithms that “fan out” from identified terrorist content to find related material that may also support terrorism. It has also become much faster at detecting new fake accounts created by repeat offenders.
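Facebook has not said how the “fan out” works, but conceptually it resembles a graph traversal: starting from material already judged to be terrorist content, walk outwards to connected accounts, Pages and posts and queue them for review. A minimal sketch, with an invented graph:

```python
from collections import deque

# Invented adjacency map: each node lists the accounts, Pages or posts it
# interacts with (likes, shares, admin links and so on).
GRAPH = {
    "flagged_page": ["account_1", "account_2"],
    "account_1": ["post_x", "account_3"],
    "account_2": ["post_x"],
    "account_3": ["group_y"],
}

def fan_out(seeds, graph, max_depth=2):
    """Breadth-first walk collecting everything within max_depth of a seed."""
    seen = set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return seen - set(seeds)  # related material queued for review

print(fan_out({"flagged_page"}, GRAPH))
# e.g. {'account_1', 'account_2', 'post_x', 'account_3'}
```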
And Facebook’s anti-terror efforts also extend across its other platforms, including WhatsApp and Instagram.
But it concedes that, for now, AI cannot catch everything, and its algorithms are not yet as good as people at understanding terrorist-related context.
Facebook cited an example of a photo of an armed man waving an ISIS flag, which could be propaganda or recruiting material, but could also be an image in a news story.
This is why Facebook needs humans. It needs the Facebook community as a whole to report accounts or content that may violate its policies, and it has its own team of terrorism and safety specialists.
Indeed, Facebook said that in the past year it has grown its team of counterterrorism specialists to more than 150 people, who between them speak nearly 30 languages. This human counterterrorism team also responds immediately to law enforcement requests.
And lastly, Facebook said it partners with other tech firms, researchers and governments to quickly identify and slow the spread of terrorist content online.
For example, in December last year it joined with Microsoft, Twitter and Google to create a shared industry database used to quickly identify terrorist content.
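The companies have said the database stores hashes – digital fingerprints – of flagged images and videos rather than the content itself. A simplified sketch of the idea follows; it uses an exact-match SHA-256 hash for brevity, whereas a production system would use perceptual hashes that also catch re-encoded copies.

```python
import hashlib

# Pooled set of fingerprints contributed by all participating companies.
SHARED_DATABASE = set()

def contribute(content: bytes) -> None:
    """One company flags content; only its hash is shared, not the content."""
    SHARED_DATABASE.add(hashlib.sha256(content).hexdigest())

def is_known(content: bytes) -> bool:
    """Any participant can check an upload against the pooled fingerprints."""
    return hashlib.sha256(content).hexdigest() in SHARED_DATABASE

contribute(b"bytes of a known propaganda video")
print(is_known(b"bytes of a known propaganda video"))  # True
print(is_known(b"bytes of an unrelated home video"))   # False
```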
“We want Facebook to be a hostile place for terrorists,” they wrote. “The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late.”
That said, Facebook and governments around the world still disagree over backdoor access to their systems, and the issue of encryption remains a delicate one.