
IBM Proposes AI Rules To Prevent Bias

IBM called for new standards to eliminate bias in artificial intelligence (AI) this week, urging companies and governments to work together to combat discriminatory practices that could harm women, minorities, the disabled and others.

"It seems pretty clear to us that government regulation of artificial intelligence is the next frontier in tech policy regulation," said Chris Padilla, VP of government and regulatory affairs at IBM, a day ahead of Wednesday's panel on AI at the World Economic Forum in Davos, Switzerland.

IBM CEO Ginni Rometty is scheduled to lead that panel, which will include Siemens CEO Joe Kaeser; Chris Liddell, White House deputy chief of staff for Policy Coordination; and Ángel Gurría, secretary general of the Organisation for Economic Co-operation and Development (OECD).

Big Blue wants companies and governments to develop standards that will, for example, address bias in algorithms that rely on historical data (such as zip codes or mortgage rates) skewed by past discrimination, to ensure that African-Americans have fair access to housing. Such standards would likely be developed in the United States at the National Institute of Standards and Technology (NIST) within the U.S. Department of Commerce.
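To illustrate the kind of check such a standard might require, the sketch below computes a disparate impact ratio, a commonly used fairness metric, over hypothetical loan-approval outcomes. The column names, data and threshold are invented for the example and are not drawn from IBM's proposal or any NIST document.

```python
# Illustrative only: a simple disparate impact check on model decisions.
# Column names, data and the 0.8 threshold are hypothetical.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    protected_rate = df.loc[df[group_col] == protected, outcome_col].mean()
    reference_rate = df.loc[df[group_col] == reference, outcome_col].mean()
    return protected_rate / reference_rate

# Hypothetical loan-approval data (1 = approved, 0 = denied).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   0,   1,   1,   1,   1,   0,   0],
})

ratio = disparate_impact(decisions, "group", "approved", protected="A", reference="B")
# The "four-fifths rule" used in U.S. employment contexts flags ratios below 0.8.
print(f"Disparate impact ratio: {ratio:.2f}")
```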

One step companies could take, IBM suggested, is to appoint AI ethics officers charged with assessing how much potential harm an AI system might pose and with maintaining documentation about the underlying data when "making determinations or recommendations with potentially significant implications for individuals," so that those decisions can be explained.
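One lightweight way to keep that kind of documentation is to record each significant automated decision along with the data snapshot and model version behind it. The structure below is a hypothetical sketch; the field names are illustrative and do not follow any IBM- or NIST-specified schema.

```python
# Hypothetical sketch of a decision record for explainability documentation.
# Field names are illustrative; they do not follow any IBM or NIST schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_name: str
    model_version: str
    training_data_snapshot: str   # identifier of the dataset the model was trained on
    input_features: dict          # the inputs the model saw for this case
    decision: str                 # the determination or recommendation produced
    rationale: str                # human-readable explanation of the key factors
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    model_name="mortgage_screening",
    model_version="2.3.1",
    training_data_snapshot="applications_2015_2019_v4",
    input_features={"income": 54000, "debt_ratio": 0.31},
    decision="refer_to_human_review",
    rationale="Debt ratio near policy threshold; model confidence low.",
)

# Persisting the record as JSON keeps the decision auditable and explainable later.
print(json.dumps(asdict(record), indent=2))
```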

"Bias is one of those things we know is there and influencing outcomes," Cheryl Martin, chief data scientist at Alegion, an Austin-based provider of human intelligence solutions for AI and machine learning initiatives, told Pure AI in an earlier interview. "But the concept of 'bias' isn't always clear. That word means different things to different people. It's important to define it in this context if we're going to mitigate the problem."

Martin laid out four types of bias during that interview: sample/selection bias (the distribution of the training data fails to reflect the actual environment in which the machine learning model will run), prejudices and stereotypes (which emerge in the differentiation process), systematic value distortion (when a device returns measurements or observations that are imprecise), and model insensitivity (the result of the way an algorithm is trained on any set of data, even an unbiased one).
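Of these, sample/selection bias is the most straightforward to illustrate: if the training data's distribution over some attribute differs sharply from what the model encounters in production, the model was trained on an unrepresentative sample. The sketch below makes a rough comparison of group frequencies between a training set and a production set; the data and the 10-point gap threshold are invented for illustration only.

```python
# Illustrative check for sample/selection bias: compare how often each group
# appears in the training data versus in production data. Data are made up.
from collections import Counter

def group_frequencies(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

training_groups   = ["urban"] * 90 + ["rural"] * 10    # 90% urban in training
production_groups = ["urban"] * 60 + ["rural"] * 40    # 40% rural in production

train_freq = group_frequencies(training_groups)
prod_freq  = group_frequencies(production_groups)

for group in sorted(set(train_freq) | set(prod_freq)):
    gap = abs(train_freq.get(group, 0.0) - prod_freq.get(group, 0.0))
    flag = "  <-- possible sample/selection bias" if gap > 0.1 else ""
    print(f"{group:>6}: train={train_freq.get(group, 0):.2f} "
          f"prod={prod_freq.get(group, 0):.2f} gap={gap:.2f}{flag}")
```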

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
