News

Controversies at Google Raise the Question: What is Ethical AI?

Unrest inside Google over what some employees charge is a lack of academic freedom and diversity, and even outright censorship, has been generating headlines lately, and there's a nascent movement within the rank-and-file workforce to unionize over these concerns.

Much of this unrest is reportedly fueled by worries specifically about Google's research into, and application of, artificial intelligence (AI) technologies. Google, a subsidiary of Alphabet Inc., parted company with computer scientist Timnit Gebru late last year under controversial circumstances. An advocate for diversity in technology, and a co-founder of the Black in AI community, Gebru was the technical co-lead of Google's Ethical Artificial Intelligence Team. Her employment at the search engine giant ended after her superiors asked her to withdraw a then-unpublished paper, or remove the names of all Google employees from it. She refused, and the company said that she quit; Gebru said she was fired. Hundreds of Google employees signed an online petition protesting Gebru's departure, and excerpts from and analyses of her paper have appeared in a number of tech publications.

Google recently fired Margaret Mitchell, the founder and former co-lead of the company's Ethical AI team. Both Google and Mitchell agree that she was fired, but Google says it was because Mitchell violated the company's code of conduct and security policies when she used automated software to search her old internal messages to find examples of discriminatory treatment toward Gebru; Mitchell said via Twitter that she was fired because she "…tried to raise concerns about race and gender inequity, and speak up about Google's problematic firing of Dr. Gebru."

The obvious common denominator here is so-called ethical AI. But what exactly does that phrase mean? According to former Carnegie Mellon University professor Anupam Datta, the current definition is evolving, but at its core, it's about practices that imbue artificial intelligence with elements of explainability, transparency, agency, and ultimately, fairness.

Explainability is about illuminating how neural networks reach their decisions, so that human users can understand, appropriately trust, and effectively manage AI. Transparency is about shining a light on these increasingly complex systems, so they can be explained. This is the element without which agency--the ability of a person to act on, or against, these systems--is all but impossible. Combined, these elements make it possible to judge the fairness of a system and keep it from causing societal harm.

"The increasing complexity of these systems can make all of this very challenging," Datta told Pure AI. "Let's take credit decisioning, for example. Historically, this was done by humans using simple rule-based systems that might look at your debt-to-income ratio. If it's below a certain threshold, you get the loan; otherwise, you don't. It's a very easy system to explain and understand. But over time, there has been an increasing adoption of AI and machine learning in these processes, and the reasoning that is happening under the hood in these systems has gotten a lot more complicated."

The increased complexity makes it harder to get explanations from these systems, which makes them harder to argue with, Datta said.

"When we talk about 'agency,' we're talking about the contestability of a system," he said. "Let's say the system has decided to deny you credit, and you want to contest that decision. It's important to understand why that decision was reached,, so you can go back and say, hey you denied me because you said I had three delinquencies in the last six months, but actually, I had only two. An ethical system allows you to do this."

Datta has been conducting research on the responsible use of AI for years, first at Carnegie Mellon, and currently at his new startup, a company called Truera. He founded the company with a PhD student, Shayak Sen, after their research identified gender bias in online advertising and they found they lacked the tools to explain what caused it. Together with Will Uppington, they built a Model Intelligence Platform for analyzing enterprise AI models.

Datta emphasized that the bias we see in AI and machine learning systems was not programmed in by developers, but rather is the result of models automatically learning relationships from their training data. In other words, the bias is in the data.

"The people who are programming are not writing down the rules, if you will," he said. "They are training a model that automatically learns relationships from the training data. The training data has historical biases, let's say based on gender or race or other characteristics, and the machine learning models can learn and reinforce and amplify those historical biases. It's important to pay attention to the data collection process, examining the data ahead of time to make sure that historical biases are appropriately mitigated."

The full interview with Anupam Datta on ethical AI is available now on the WatersWorks podcast.

 

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
