Coronavirus: AI Takes Over Social Network Moderation As Staff Sent Home
Social networks are turning to AI and automated tools to police posts on their platforms, as staff are sent home due to the Covid-19 pandemic
Social networking giants such as YouTube, Twitter and Facebook are now all relying on artificial intelligence and automated tools to police material posted to their platforms.
The firms are turning to automated tools for content review as staff at the outsourced firms usually used to perform this task are sent home due to the Coronavirus pandemic sweeping the globe.
The lack of human oversight has already led to some mistakes. Reuters for example quoted Facebook’s head of safety as saying on Tuesday that a bug was responsible for posts on topics including coronavirus being erroneously marked as spam.
AI moderation
“This is a bug in an anti-spam system, unrelated to any change in our content moderator workforce,” Guy Rosen, Facebook’s vice president for integrity, reportedly said on Twitter.
“We’ve restored all the posts that were incorrectly removed, which included posts on all topics – not just those related to COVID-19. This was an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too,” he reportedly said.
Facebook users had shared screenshots with Reuters of notifications they had received saying articles from prominent news organisations had violated the company’s community guidelines.
Facebook at the weekend closed its London offices for ‘deep cleaning’, after a visiting employee from Singapore was diagnosed with coronavirus.
The firms admit that using automated systems to review posted material may lead to some mistakes, but they insist they still need to remove harmful content.
This is especially important at the moment, given the current state of the world and the dangers posed by those touting false information as fact.
Indeed, the Covid-19 pandemic has led to a surge of medical misinformation across the web.
Fewer people
“We believe the investments we’ve made over the past three years have prepared us for this situation,” said Facebook in a blog post on the matter. “With fewer people available for human review we’ll continue to prioritise imminent harm and increase our reliance on proactive detection in other areas to remove violating content.”
“We don’t expect this to impact people using our platform in any noticeable way,” Facebook said. “That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.”
“These are unprecedented times, but the safety and security of our platform will continue,” it said. “We are grateful to all of our teams working hard to continue doing the essential work to keep our community safe.”
Typically, social networking giants tend to outsource the human oversight of questionable content to third-party firms around the world.
These firms are found in locations such as India and the United States.