Q&A

5 Questions with Ethical AI Expert Noelle Russell

In the first installment of our new series, we asked Noelle Russell, founder and Chief AI Officer at the AI Leadership Institute, five questions about ethical AI.

Welcome to the first installment of our new monthly series, "Five Questions with..." In the coming months, we will be posing the questions that are top of mind among our readers to an expert or thought leader in the AI space. From research scientists to in-the-trenches engineers, C-suite executives to academics, ethics officers to product managers, these are the people who are making and managing the technologies driving truly fundamental changes in our workplaces and our lives. If you have any burning questions that you'd like our experts to answer, send them to jwaters@converge360.com.

Noelle Russell is the founder and Chief AI Officer at the AI Leadership Institute. She specializes in helping companies with emerging technology, cloud, AI, and generative AI. She has led teams at NPR, Microsoft, IBM, AWS, and Amazon Alexa, and is a committed champion for data and AI literacy. Russell has built more than 100 conversational AI applications since 2014, reaching over 2 million unique users on Amazon Alexa. In the past year, she received her third Microsoft Most Valuable Professional (MVP) award for Artificial Intelligence, as well as VentureBeat's Women in AI Responsibility and Ethics award. She responded to our queries via email.

1. What would you say are the core ethical principles that should guide the development and deployment of AI systems?
Most experts, practitioners, and academics agree on five main tenets: transparency, explainability, trust/privacy, fairness, and robustness. Although interpretations of these principles can vary, one theme shines through: accountability. It is critical that organizations not only understand these principles, but also create the processes for holding themselves accountable for adhering to them.

In a time when AI is part of every pitch deck and expo hall booth, these are the five areas where I always lean in and ask more questions:

  • Transparency: This applies from both a data and an algorithmic perspective. Can we see what you are attempting to accomplish?
  • Explainability: It isn't enough to see what's happening. It's equally important to understand how data is protected and how the models work.
  • Trust/Privacy: How we protect our customers' data and their behavioral information is more important than ever, and being good stewards of this information will create lifelong, trusted relationships with our customers.
  • Fairness: Bias is in every model, as every model has been trained on human data. It is important to ensure that the bias in humanity doesn't get amplified by the models we choose to use or build. This requires us to ask better questions and empathize with those who could be hurt by the AI solutions we are building.
  • Robustness: It isn't enough to build an AI system that embodies the above principles. We must do all of these things and build the system in a scalable way. It is our responsibility to build AI systems that can scale up and down as needed to accommodate demand. Elastic scaling allows our AI solutions to be up and available when our users need them most.

2. What measures can be implemented to detect and mitigate bias in AI systems, and can AI ever be truly unbiased?
Mitigating bias is an important part of every AI system and solution. It's better to identify bias and develop mitigation strategies earlier rather than later when building a solution.

One way to detect and mitigate bias is to apply a philosophy of design justice. This means that you not only ask how an AI system might serve and help the user, but also investigate deeply how it might hurt them. This type of questioning in the design phase can create an important pivot point that will drive more inclusive conversations.

"Using the principles of responsible AI, the more transparent and explainable you can be about what you are doing and how you are doing it, the more of a positive effect it will have on adoption."

3. How do you view AI's impact on the future of work, and what ethical considerations should guide the automation of jobs?
There's a lot of confusion between roles, or jobs, and the tasks that make them up. AI is not a job killer; it is a human enabler.

Rather than simply focusing on the impact of this technology on jobs, it's important to look at every role and examine the tasks being performed. At least 20 percent of the tasks in any given role can be automated, freeing the human in that role to elevate their work and take on higher-level, distinctly human tasks. This is a paradigm shift we are only beginning to understand.

4. What are the long-term ethical risks of AI, and what safeguards can be put in place to protect against potential negative outcomes?
The risk is rooted in an important question: Whose ethics are we talking about? If you look at the leaders of the largest companies influencing AI development, it's hard not to notice the lack of diversity.

Organizations that include unique perspectives, from the boardroom to the whiteboard to the keyboard (a phrase I use to describe how broadly an organization needs to think about responsible use of AI), are going to be better at serving more people with their solutions.

5. How does public perception of AI ethics affect its development and adoption, and how can trust be built between AI developers and the broader public?
We've all seen good products that don't get adopted. Google Glass, for example, was an amazing idea that arrived a bit too early for customers to rally behind. We've also seen good products get bad press: Amazon Alexa has been hurt by the media's portrayal of a device that's always listening, when we all know that our phones listen to our conversations, too.

Using the principles of responsible AI, the more transparent and explainable you can be about what you are doing and how you are doing it, the more of a positive effect it will have on adoption. Garnering developer trust with third-party APIs (think OpenAI) and allowing people transparent and explainable access to your service will drive broader adoption.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
