Model Framework for Ethical AI Gets an Update

Interest in artificial intelligence (AI) and machine learning technologies has exploded over the past two years, and not just in the software itself, but also in the new ethical, legal, and governance challenges that come with it. Those challenges are the focus of the second edition of the "Model Artificial Intelligence Framework," released last month at the World Economic Forum's 2020 annual meeting in Davos, Switzerland.

Published by Singapore's Personal Data Protection Commission (PDPC) and Infocomm Media Development Authority (IMDA), the report examines the risks of unintended discrimination potentially leading to unfair outcomes, as well as issues relating to consumers' knowledge of how AI is involved in making significant or sensitive decisions about them.

The publishers describe the report as "a living and voluntary model framework… a general, ready-to-use tool to enable organizations that are deploying AI solutions at scale to do so in a responsible manner." And they stress that the Model Framework "is not intended for organizations that are deploying updated commercial off-the-shelf software packages that happen to now incorporate AI in their feature set."

The framework offers two guiding principles:

1) Organizations using AI in decision-making should ensure that the decision-making process is explainable, transparent, and fair. "Although perfect explainability, transparency and fairness are impossible to attain, organizations should strive to ensure that their use or application of AI is undertaken in a manner that reflects the objectives of these principles as far as possible. This helps build trust and confidence in AI."

2) AI solutions should be human-centric. "As AI is used to amplify human capabilities, the protection of the interests of human beings, including their well-being and safety, should be the primary considerations in the design, development and deployment of AI."

The list of specific recommendations in the framework includes:

  • Internal governance structures and measures: adapting existing internal governance structures, or setting up new ones, to incorporate the values, risks, and responsibilities relating to algorithmic decision-making. This includes delineating clear roles and responsibilities for AI governance within an organization, establishing processes and procedures to manage risks, and training staff.
  • Determining the level of human involvement in AI-augmented decision-making: assessing the risks of an AI deployment and identifying an appropriate level of human involvement in the decision-making process to minimize the risk of harm to individuals.
  • Operations management: issues to be considered when developing, selecting and maintaining AI models, including data management.
  • Stakeholder interaction and communication: strategies for communicating with an organization's stakeholders concerning the use of AI.

The Model Framework is intended to be broadly applicable, its authors say. It's "algorithm-agnostic, technology-agnostic, sector-agnostic and scale-and-business-model-agnostic." Consequently, it can be adopted across industries and businesses, regardless of the specific AI solution or technology involved.

The key updates in the second edition include:

  • Industry examples. The Model Framework now includes real-world industry examples in each of the four key governance areas, demonstrating how organizations have implemented them effectively. The examples are drawn from a variety of industries, ranging from banking and finance to healthcare, technology, and transportation, and are based on different use cases, reinforcing the neutral and flexible nature of the framework.
  • Additional tools to enhance the framework's usability. The IMDA and PDPC have released two additional documents meant to guide organizations in adopting the Model Framework: "The Implementation and Self-Assessment Guide for Organizations" and "The Compendium of Use Cases." Both documents are accessible online.

This edition also clarifies the concept of "human-over-the-loop" involvement in AI-augmented decision-making. It explains how organizations might weigh factors such as the nature of the harm (i.e., whether it is physical or intangible), the reversibility of the harm (i.e., whether recourse is easily available to the affected party), and operational feasibility when determining the level of human involvement in such AI-augmented decision-making processes. It offers advice on how to build trust with various stakeholders by clarifying how AI is used in decision-making and how the organization has mitigated the risks, and by providing an appropriate channel for contesting decisions made by AI. And it adds guidance on how organizations can adopt a risk-based approach to implementing AI governance measures.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
