To Build Trust in AI, Big Blue Launches Tool To Measure 'Uncertainty'
- By John K. Waters
AI software development continues to be a land of evolving concepts and esoteric nomenclature that coders with little to no experience in this terrain are increasingly required to navigate. But even AI road warriors need effective tools to keep up with the accelerating pace of software delivery that now routinely includes AI, machine learning and deep learning. With its open source trusted AI toolkits, IBM has put up some useful signposts.
IBM Research has released a new open source developer toolkit called Uncertainty Quantification 360 (UQ360). The new toolkit focuses on what IBM believes will be the next big area of advancing trust in AI: communicating an AI's "uncertainty."
Uncertainty quantification is just what it sounds like: a determination of the level of confidence an AI system has in its decisions. The new UQ360 toolkit was designed to give data science practitioners and developers a set of algorithms to streamline the process of quantifying, evaluating, improving and communicating uncertainty of machine learning models.
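One common way to quantify a model's uncertainty is to train an ensemble on bootstrap resamples of the data and treat the spread of the ensemble's predictions as a confidence measure. The sketch below illustrates that general idea in plain NumPy and scikit-learn; it is a hedged illustration of the concept, not the UQ360 API, and the model, data and interval construction are all assumptions chosen for brevity.

```python
# Illustrative sketch (NOT the UQ360 API): estimate predictive
# uncertainty via the spread of a bootstrap ensemble of regressors.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=200)

# Train many models, each on a bootstrap resample of the data.
preds = []
for _ in range(50):
    idx = rng.integers(0, len(X), len(X))
    model = DecisionTreeRegressor(max_depth=4).fit(X[idx], y[idx])
    preds.append(model.predict(X))
preds = np.stack(preds)

mean = preds.mean(axis=0)  # the ensemble's prediction
std = preds.std(axis=0)    # disagreement across the ensemble = uncertainty
print(f"mean predictive uncertainty (std): {std.mean():.3f}")
```

Where the ensemble members agree, `std` is small and the prediction can be trusted more; where they disagree, the system can flag its own answer as unsure.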
What we're talking about here, IBM AI researchers Prasanna Sattigeri and Q. Vera Liao explained in a blog post, is a way to enable an AI system or application to express that it is unsure, "giving it intellectual humility and boosting the safety of its deployment."
IBM is billing UQ360, which was released at the 2021 IBM Data & AI Digital Developer Conference, as one of the first toolkits designed to provide both a comprehensive set of algorithms for quantifying uncertainty and the capabilities to measure and improve uncertainty quantification to streamline the development process. The tool comes as a Python package with a taxonomy and guidance for choosing these capabilities based on a developer's needs, the company says.
UQ360 is just the latest toolkit to emerge from IBM Research, alongside AI Fairness 360, the Adversarial Robustness Toolbox, AI Explainability 360 and AI Factsheets 360, all released over the last few years to advance various dimensions of AI trust.
"Trust" in this context refers to the ability of humans to have confidence in the output of an AI-enabled app or system. AI systems have traditionally been black boxes, but, as IBM puts it, "To trust a decision made by an algorithm, we need to know that it is fair, that it's reliable and can be accounted for, and that it will cause no harm." That level of trust requires transparency.
The fatal highway crash of a Tesla vehicle operating in self-driving mode in June threw another spotlight on the AI safety issue and the growing interest in shining a light in the AI black box. But Sattigeri, with whom I spoke over Zoom, said "miscalibrated uncertainties" are about more than just this kind of obviously critical application of AI.
"The self-driving example is a scary one," he allowed, "but take the loan approval process, where somebody is using an AI system to assist them in making a prediction that impacts your interest rate. Or in a health care setting, where the doctor needs to trust the AI to assist in making a diagnosis."
Quantifying uncertainty can show gaps in the knowledge of the training model, Sattigeri said, so the model can be improved.
"If we know [that the systems] are overconfident or underconfident," he said, "we can use recalibration algorithms to make them either looser, so you're increasing the margin of error, or [tighter], so you're decreasing the margin of error. And then it's up to the decision maker how they want to use it. If the uncertainty is too large, the loan officer can go ahead and do certain other investigations, maybe collecting additional information about the person."
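The "looser versus tighter" idea Sattigeri describes can be sketched with a simple post-hoc rescaling: check how often the model's claimed prediction intervals actually contain the true values on held-out data, then scale the intervals until empirical coverage matches the target. The scaling rule below is an illustrative assumption, not UQ360's recalibration algorithm, and the synthetic "overconfident" model is made up for the demo.

```python
# Hedged sketch of interval recalibration: an overconfident model
# claims std=0.5 while the true noise has std=1.0, so its nominal
# 95% intervals cover far fewer than 95% of outcomes. We search for
# a scale factor that restores the target coverage (>1 loosens an
# overconfident model; <1 would tighten an underconfident one).
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.normal(0, 1.0, 1000)   # outcomes with true std 1.0
y_pred = np.zeros(1000)             # a model predicting the mean
sigma = np.full(1000, 0.5)          # overconfident claimed std

target = 0.95
z = 1.96                            # nominal 95% half-width in stds

def coverage(scale):
    lo = y_pred - z * sigma * scale
    hi = y_pred + z * sigma * scale
    return np.mean((y_true >= lo) & (y_true <= hi))

scales = np.linspace(0.5, 4.0, 200)
best = scales[np.argmin(np.abs([coverage(s) - target for s in scales]))]
print(f"coverage before: {coverage(1.0):.2f}, "
      f"after recalibration (scale={best:.2f}): {coverage(best):.2f}")
```

Before recalibration the intervals cover only about two-thirds of the outcomes; after scaling them looser, coverage lands near the 95% target, which is exactly the kind of honest margin of error a loan officer or doctor can act on.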
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at firstname.lastname@example.org.