For enterprises, artificial intelligence holds the promise of using sophisticated algorithms to churn through massive amounts of data, discover patterns, return insights, and help drive faster, better business decisions.
Image recognition, natural language processing, translation and other capabilities gained through neural networks, machine learning and deep learning will help executives open new revenue streams, improve efficiencies, reduce costs and drive automation.
At the same time, AI has been viewed as something of a “black box.” Data goes in one end and findings, decisions and insights come out the other, but there’s little visibility into how those findings are reached. As businesses and consumers alike come to rely more on AI technology, there’s growing worry about bias in those findings. Neural networks are only as good as the data that goes into them, and that data can be influenced by the people who supply it.
Bringing Transparency to the Fore
IBM is now offering a cloud-based service designed to detect bias in AI and bring transparency to how AI-powered systems make decisions. The service runs on the IBM Cloud and can be used to manage AI systems from a wide array of tech vendors. At the same time, the company will release a toolkit to the open-source community that includes technology and education tools others can use to detect and mitigate AI bias.
Being able to detect bias and get more visibility into the decision-making process of AI systems is crucial as industries adopt the technology, according to Ruchir Puri, chief architect for IBM Watson and an IBM Fellow.
“For AI to thrive and for businesses to reap its benefits, executives need to trust their AI systems,” Puri wrote in a post on the company blog. “They need capabilities to manage those systems and to detect and mitigate bias. It’s critical—oftentimes a legal imperative—that transparency is brought into AI decision-making. In the insurance industry, for example, claims adjusters may need to explain to a customer why their auto claim was rejected by an automated processing system.
“It’s time to start breaking open the black box of AI to give organizations confidence in their ability to manage those systems and explain how decisions are being made.”
IBM’s new Trust and Transparency capabilities for AI come as spending on cognitive and AI systems and software continues to ramp up quickly. IDC analysts said this month that spending on cognitive and AI systems will reach $24 billion this year and grow to $77.6 billion by 2022, increasing an average of 37.3 percent a year during that period.
High Number of Companies Moving Toward AI Adoption
IBM’s own research found that 82 percent of enterprises are considering or are moving ahead with AI adoption, with a focus on generating revenue, and that 60 percent fear liability issues involved in AI. Sixty-three percent lack the skills to take advantage of AI, IBM found.
IBM’s automated service works with models built in such popular machine learning frameworks and AI environments as Watson, TensorFlow, SparkML, SageMaker from Amazon Web Services (AWS) and Microsoft’s AzureML. Users also can customize the service’s software to better fit the specifics of their organizations, IBM officials said.
The service explains how decisions are made, detects bias in those decisions as they are being made, and automatically recommends data that can be added to the model to mitigate any bias it finds.
It also explains which factors played into a decision and whether they pushed it in one direction over another, and it records the accuracy, performance and fairness of the model. In addition, the lineage of each AI system is traced for customer service, regulatory and compliance purposes. All of this is accessed through visual dashboards.
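IBM has not published API details in this announcement, but the kind of runtime monitoring the service describes can be sketched in a few lines of Python. The example below is purely illustrative: the BiasMonitor class and its methods are hypothetical stand-ins, not part of IBM’s actual service.

```python
# A minimal sketch of runtime bias monitoring, in the spirit of what the
# article describes. This is NOT IBM's API; every name here is hypothetical.
from collections import defaultdict

class BiasMonitor:
    """Tracks a model's favorable-outcome rate per group as decisions flow in."""

    def __init__(self, privileged_group: str):
        self.privileged_group = privileged_group
        self.favorable = defaultdict(int)  # favorable decisions per group
        self.total = defaultdict(int)      # total decisions per group

    def record(self, group: str, favorable: bool) -> None:
        """Log one decision at the moment it is made."""
        self.total[group] += 1
        if favorable:
            self.favorable[group] += 1

    def disparate_impact(self, group: str) -> float:
        """Ratio of a group's favorable rate to the privileged group's rate.
        Values well below 1.0 suggest the model is biased at runtime."""
        rate = self.favorable[group] / self.total[group]
        priv_rate = self.favorable[self.privileged_group] / self.total[self.privileged_group]
        return rate / priv_rate

# Usage: feed decisions to the monitor as the model makes them.
monitor = BiasMonitor(privileged_group="group_a")
for group, approved in [("group_a", True), ("group_a", True),
                        ("group_b", True), ("group_b", False)]:
    monitor.record(group, approved)
print(f"Disparate impact for group_b: {monitor.disparate_impact('group_b'):.2f}")  # 0.50
```

A production system would track far more than a single ratio, but the measure computed here, commonly called disparate impact, is one of the standard fairness metrics the AIF360 toolkit discussed below also exposes.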
AI Only as Good as Its Training
“Fairness is a key concern among enterprises deploying AI into apps that make decisions based on user information,” Puri wrote. “The reputational damage and legal impact resulting from demonstrated bias against any group of users can be seriously detrimental to businesses. AI models are only as good as the data used to train them, and developing a representative, effective training data set is very challenging.”
Even if bias is found during training, “the model may still exhibit bias in runtime. This can result from incongruities in optimization caused by assignment of different weights to different features,” he wrote.
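Puri’s point about feature weights can be made concrete with a toy model on entirely synthetic data: a scorer that never sees a protected attribute can still produce skewed outcomes when one of its weighted features correlates with group membership. Everything in this sketch, including the weights, features and threshold, is invented for illustration.

```python
# Toy illustration on synthetic data: the model never uses group membership
# directly, but a correlated feature (zip_code_score) carries its signal anyway.
import random

random.seed(0)
WEIGHTS = {"income": 0.6, "zip_code_score": 0.4}  # made-up learned weights

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def make_applicant(group: str) -> dict:
    # Incomes are drawn identically for both groups; only the correlated
    # zip_code_score feature differs by group in this synthetic data.
    base = 0.8 if group == "a" else 0.3
    return {"income": random.uniform(0.4, 0.6),
            "zip_code_score": random.gauss(base, 0.05)}

for g in ("a", "b"):
    approved = sum(score(make_applicant(g)) > 0.55 for _ in range(1000))
    print(f"group {g}: {approved / 1000:.0%} approved")
# Group a's approval rate far exceeds group b's, despite identical income ranges.
```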
The IBM AI Fairness 360 toolkit for the open-source community initially includes nine algorithms, code and three tutorials that will be available to data scientists, academics and researchers, company officials said. More tools and tutorials will be added in the future.
“AIF360 is a bit different from currently available open-source efforts due to its focus on bias mitigation (as opposed to simply on metrics), its focus on industrial usability, and its software engineering,” Kush Varshney, principal research staff member and manager at IBM Research, wrote in a post on the company blog.
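AIF360 is published on GitHub and installable via pip. Assuming a recent release (and the UCI Adult census data, which the toolkit asks users to download separately), a minimal before-and-after run with its real API might look like this:

```python
# Measuring and mitigating dataset bias with IBM's open-source AIF360 toolkit.
# Setup: pip install aif360, then download the Adult census files per the docs.
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

privileged = [{'sex': 1}]    # in this dataset, 1 encodes the privileged group
unprivileged = [{'sex': 0}]

data = AdultDataset()  # bundled census-income dataset with 'sex' as a protected attribute

# Quantify bias before mitigation.
before = BinaryLabelDatasetMetric(data, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact before:", before.disparate_impact())

# Reweighing, one of the toolkit's mitigation algorithms, reweights training
# examples so favorable outcomes are independent of group membership.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(data)

after = BinaryLabelDatasetMetric(transformed, unprivileged_groups=unprivileged,
                                 privileged_groups=privileged)
print("Disparate impact after:", after.disparate_impact())  # should move toward 1.0
```

Reweighing is a pre-processing approach that acts on the training data; other algorithms in the toolkit intervene during model training or adjust a trained model’s outputs instead.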