IBM Introduces Open Source Tool For Monitoring AI Bias

IBM is to roll out a cloud-based tool designed to detect bias in artificial intelligence models and give the organisations using them better visibility into how those models reach their decisions.

The move highlights a growing focus on AI management within enterprises, with IBM and others arguing that liability issues around the technology are holding back large-scale deployments.

The issue extends beyond the business world, with Darpa, the Pentagon’s research arm, saying earlier this month that it is planning a major investment in the area.

Darpa said research into “explainable” AI would be a significant part of a planned investment of $2 billion (£1.54bn) over the next five years into artificial intelligence research.

Transparency

Darpa director Steven Walker said the ability of AI systems to explain to humans in real time how they arrived at a particular answer was “critically important” for giving military commanders confidence they could rely on the technology.

Similarly, IBM’s Institute for Business Value found that 82 percent of enterprises were considering AI deployments, but 60 percent feared liability issues.

AI’s limitations in technologies such as facial recognition have caused concern, and IBM said questions have also been raised over the technology’s use in areas such as the processing of insurance claims, credit scoring and medical expenses.

The company said its open source AI Fairness 360 toolkit provides tutorials for these and other areas.

The platform provides a visual dashboard indicating how algorithms are making decisions and which factors weigh into their final recommendations.

It also tracks the model’s accuracy, performance and fairness over time, helping firms comply with regulations.
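IBM has not published the dashboard’s internals, but the kind of over-time fairness tracking described can be sketched in a few lines of Python; the decision log, column names and parity metric (the gap in favourable-outcome rates between groups) below are illustrative assumptions, not IBM’s implementation:

```python
import pandas as pd

# Hypothetical decision log: one row per automated decision.
log = pd.DataFrame({
    "month":    ["2018-07"] * 4 + ["2018-08"] * 4,
    "group":    ["A", "A", "B", "B"] * 2,   # protected-attribute value
    "approved": [1, 1, 0, 1, 1, 0, 0, 0],   # model's decision
})

# For each month, compare approval rates between the two groups.
# A parity gap of 0.0 would mean both groups are approved at the same rate.
rates = log.groupby(["month", "group"])["approved"].mean().unstack()
rates["parity_gap"] = rates["A"] - rates["B"]
print(rates)
```

A monitoring service could raise an alert whenever the gap drifts past a chosen threshold, which is the sort of trend the dashboard is described as surfacing.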

Automated decision-making

It works with models built on machine learning frameworks including Watson, TensorFlow, SparkML, AWS SageMaker and AzureML.

The toolkit provides a library of algorithms, code and tutorials that academics, researchers and data scientists can integrate into their models. The tools are available on GitHub.
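By way of illustration, here is a minimal sketch of how a bias check and mitigation step might look with the toolkit’s Python package (published as aif360); the toy loan data, column names and group definitions are invented for this example:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical loan decisions: 'sex' is the protected attribute
# (1 = privileged group, 0 = unprivileged group in this toy data).
df = pd.DataFrame({
    "sex":      [1, 1, 1, 1, 0, 0, 0, 0],
    "income":   [50, 60, 40, 55, 45, 52, 38, 41],
    "approved": [1, 1, 0, 1, 0, 1, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias before mitigation: 0.0 means parity between groups.
before = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("Mean difference before:", before.mean_difference())

# Apply one of the toolkit's pre-processing mitigation algorithms,
# which reweights training examples to balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    transformed, privileged_groups=privileged, unprivileged_groups=unprivileged)
print("Mean difference after:", after.mean_difference())
```

Reweighing is just one of the pre-processing algorithms the library ships; in practice users would run such checks against their own training data rather than a toy table like this one.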

“We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making,” said IBM Cognitive Solutions senior vice president David Kenny.

IBM said it is planning to provide feedback indicating how different decision-making factors are weighted, along with the confidence in recommendations and the accuracy, performance, fairness and lineage of AI systems.

IBM Research has also proposed introducing a transparency rating system for AI services, similar to a UL safety rating.

Google, Microsoft and Facebook are amongst the other companies developing tools aimed at making it clearer what factors are used in AI-assisted decisions.

Earlier this year the House of Lords published a report into AI in which it recommended a code of ethics for such systems.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer, who has worked for Ziff-Davis, ZDNet and other leading publications.
