A senior Facebook executive has revealed the social network is developing an artificial intelligence (AI) system to monitor potentially offensive uploaded content.
Joaquin Candela, Facebook’s director of applied machine learning, said the AI system would be able to automatically flag offensive material in live video streams.
It comes as the social network contends with fake news, offensive material and controversy over its nudity rules.
Facebook has been exploring potential uses of AI within its systems for some time. Only last month, CTO Mike Schroepfer said the company’s main areas of focus over the next ten years would be connectivity, AI and virtual reality.
Candela added to this when he told reporters that Facebook is increasingly using artificial intelligence to find offensive material.
It is “an algorithm that detects nudity, violence, or any of the things that are not according to our policies,” he was quoted as saying by Reuters.
Facebook has traditionally relied on users to report offensive posts, which are then reviewed by a specialist Facebook team to determine whether they violate its strict “community standards”.
Since June, Facebook has reportedly been working on using ‘automation’ to flag extremist video content. Now, according to Reuters, the automated system is also being tested on Facebook Live.
Facebook Live was launched last December in an effort to take on the likes of Snapchat and Periscope. The streaming video service lets users share live video content with their friends in real time.
Facebook’s AI system for flagging offensive video is still at the research stage, however, and faces two challenges, according to Candela.
“One, your computer vision algorithm has to be fast, and I think we can push there, and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down,” he reportedly said.
Facebook is also having to contend with a number of other issues. One is nudity, which has seen some photographs, such as those of nursing mothers or images from the Vietnam War, taken down for violating Facebook’s nudity rules.
The other is fake news, which was thrust into the limelight after Donald Trump’s victory in the US presidential election.
For example, it is reported that before the election Facebook users were subjected to a number of false news stories, such as claims that Pope Francis had endorsed Donald Trump or that Democratic candidate Hillary Clinton had been found dead.
Yann LeCun, Facebook’s director of AI research, told the Wall Street Journal that AI could be used to help weed out fake news. But the social network is apparently still struggling to figure out how to introduce the technology responsibly.