Wikipedia has revealed it will begin using an artificial intelligence (AI) tool to help its editors detect “bad edits” to articles.
The tool is designed to help editors maintain the quality of Wikipedia by “triaging” potentially suspect edits from the torrent of new edits it receives on a daily basis.
“This service empowers Wikipedia editors by helping them discover damaging edits and can be used to immediately ‘score’ the quality of any Wikipedia article,” said Wikipedia in a blog post. “We’ve made this artificial intelligence available as an open web service that anyone can use.”
Wikipedia revealed it receives approximately half a million edits per day.
“In order to maintain the quality of Wikipedia, this firehose of new content needs to be constantly reviewed by Wikipedians,” it said. “The Objective Revision Evaluation Service (ORES) functions like a pair of X-ray specs, the toy hyped in novelty shops and the back of comic books – but these specs actually work to highlight potentially damaging edits for editors. This allows editors to triage them from the torrent of new edits and review them with increased scrutiny.”
“We’ve been testing the service for a few months and more than a dozen editing tools and services are already using it,” it added. “We’re beating the state of the art in the accuracy of our predictions. The service is online right now and it is ready for your experimentation.”
The ORES tool has been trained by Wikipedia editors to recognise the quality of an edit based on the language and context of the change. It will assess whether the edit content is “damaging”.
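Because ORES is exposed as an open web service, its scores can be fetched with a plain HTTP request. The sketch below assumes the shape of the v3 REST API (an `ores.wikimedia.org` endpoint with a `damaging` model keyed by wiki and revision ID); the exact endpoint and response layout are assumptions and may differ or have changed since publication.

```python
import json
import urllib.request

# Assumed ORES v3 endpoint template (hypothetical parameters for illustration).
ORES_URL = "https://ores.wikimedia.org/v3/scores/{wiki}/{rev_id}/{model}"

def damaging_probability(payload: dict, wiki: str, rev_id: int) -> float:
    """Pull the probability that an edit is damaging out of an
    ORES-style response dictionary (assumed v3 layout)."""
    score = payload[wiki]["scores"][str(rev_id)]["damaging"]["score"]
    return score["probability"]["true"]

def score_edit(wiki: str, rev_id: int) -> float:
    """Fetch and score a single revision over the network."""
    url = ORES_URL.format(wiki=wiki, rev_id=rev_id, model="damaging")
    with urllib.request.urlopen(url) as resp:
        return damaging_probability(json.load(resp), wiki, rev_id)

# Illustrative response fragment (not a real revision or real scores):
sample = {
    "enwiki": {
        "scores": {
            "123456": {
                "damaging": {
                    "score": {
                        "prediction": False,
                        "probability": {"false": 0.92, "true": 0.08},
                    }
                }
            }
        }
    }
}

print(damaging_probability(sample, "enwiki", 123456))  # 0.08
```

An editing tool could then flag any revision whose “damaging” probability crosses a chosen threshold for closer human review, which is the triage workflow the blog post describes.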
It remains to be seen how the ORES tool will be received by Wikipedia’s army of editors, especially given the concerns of some high-ranking scientists about the dangers of artificial intelligence.
But others have pointed to the advantages to be gained from the new technology.
In August scientists from Arizona State University said they believe they can use artificial intelligence to learn more about the military strategy of Islamic State. The researchers used AI algorithms to study patterns and behaviour of IS extremists, and said they could be of “significant” help for both the US military and policymakers in the future.
Whatever happens, it is not as if the community-written encyclopaedia has not faced controversy in the past. In October 2013 for example, Wikipedia suspended more than 250 editing accounts over concerns surrounding sponsored editing.
A similar storm broke in 2012, when editors were caught promoting certain pages for cash. Trustee Roger Bamkin was seen placing content on the main Wikipedia page for money, whilst “Wikipedian In Residence” Max Klein set up a paid editing service for the site called UntrikiWiki.
In June analyst house Gartner warned that artificial intelligence could take over part of the role of CIOs.