Accounting, Finance Industries Demand Explainable AI Tools
October 1, 2018
As artificial intelligence tools become more widespread across the business world, cloud service companies are debuting features that explain the AI algorithms they use, offering more transparency and assurance of ethical behavior. Regulated industries are demanding it: Capital One and Bank of America, for example, want to use AI to improve fraud detection, but first want to know how the algorithms work before implementing such tools.
The Wall Street Journal reports that “explainability issues have prompted companies such as IBM and Alphabet’s Google to incorporate transparency and ethics tools into their cloud-based AI service offerings.” In an IBM Institute for Business Value study of 5,000 executives, about 60 percent (up from 29 percent in 2016) expressed concern “about being able to explain how AI is using data and making decisions in order to meet regulatory and compliance standards.”
“I don’t believe it’s possible for AI to scale in the enterprise beyond hundreds of [experiments] unless you have that explainability,” said Vinodh Swaminathan, KPMG’s principal of intelligent automation, cognitive and AI. His firm is building its own explainability tools as well as using IBM’s.
Deep learning tools, including neural networks “whose structure roughly tries to mimic the operations of the human brain,” can be “black boxes both for the data scientists engineering them and the business executives touting their benefits.” IBM’s new AI tools “can show users which major factors led to an AI-based recommendation … [and] analyze AI decisions in real-time to identify inherent bias and recommend data and methods to address that bias.”
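IBM has not published the internals of these tools, but the general idea of surfacing which factors most influenced a model’s decision can be illustrated with a generic technique such as permutation importance: shuffle each input feature and measure how much the model’s accuracy drops. The sketch below is a minimal Python illustration using scikit-learn, not IBM’s actual implementation; the fraud-detection framing and feature names are hypothetical.

```python
# Minimal sketch of one generic explainability technique (permutation
# importance). Features whose shuffling hurts accuracy most mattered most
# to the model's decisions. Illustration only, not IBM's tooling; the
# fraud-detection features below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical fraud-detection inputs and labels.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
feature_names = ["transaction_amount", "hour_of_day", "merchant_risk_score"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Commercial tools of the kind described here layer dashboards and real-time monitoring on top of attribution methods along these lines.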
“Being able to unpack those models and understand where everything came from helps you understand how you’re reaching the decision,” said David Kenny, IBM’s senior vice president of cognitive solutions.
Google began releasing “new tools for its open-source machine learning code” last year, and another AI tool released this month “lets non-programmers examine and debug machine learning systems in an effort to assess fairness in the algorithms.” A Microsoft spokesperson said that “AI systems need to be designed with protections for fairness, transparency and safety, in a way that earns trust, which is a continued effort for the company.”
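Google’s debugging tool is interactive, but the kind of fairness question it helps answer can be expressed in a few lines. The sketch below, using entirely hypothetical data and an assumed decision threshold, computes demographic parity, i.e., the gap in positive-decision rates between two groups, which is one simple fairness check among many.

```python
# Minimal sketch of one fairness check of the kind such tools support:
# demographic parity, comparing positive-prediction rates across groups.
# The scores, group labels, and threshold here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model scores and a binary protected attribute
# (two demographic groups labeled 0 and 1).
scores = rng.uniform(size=1000)
group = rng.integers(0, 2, size=1000)
approved = scores > 0.5  # assumed decision threshold

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.3f}")
print(f"approval rate, group 1: {rate_1:.3f}")
print(f"demographic parity difference: {abs(rate_0 - rate_1):.3f}")
```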
The Department of Defense’s research arm, DARPA, is also involved, “marshalling an international effort to build so-called explainable AI systems that can translate complex algorithmic-made decisions into language humans can understand.”