New Rules for AI Coming from the European Union

Early last year, the European Commission (EC), which proposes legislation for the European Union (EU), published a white paper ("On Artificial Intelligence: A European Approach to Excellence and Trust") making the case for societal safeguards around so-called high-risk artificial intelligence (AI). The introduction reads, in part:

Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, artificial intelligence entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.

The EC is expected to address its concerns about those potential risks sometime this year with a new set of rules for AI technologies, including a requirement for compliance tests and controls.

In its fourth-quarter and 2020 annual review of AI and automated systems, the law firm of Gibson, Dunn & Crutcher, which follows regulatory developments around the world, highlighted "a noticeable surge in AI-related regulatory policy proposals, as well as growing international coordination."

The European Union ("EU") has emerged as a pacesetter in AI regulation, taking significant steps towards a long-awaited comprehensive and coordinated regulation of AI at EU level—evidence of the European Commission's (the "Commission") ambition to exploit the potential of the EU's internal market and position itself as a major player in sustainable technological innovation. This legislation is expected imminently, and all signs point to a sweeping regulatory regime with a system for AI oversight of high-risk applications that could significantly impact technology companies active in the EU.

Anupam Datta, a computer science professor at Carnegie Mellon University and co-founder of startup Truera, has been conducting research on the responsible use of AI for years. He says vendors and organizations using these technologies are anxiously awaiting the new rules and guidance on compliance. In an email to Pure AI, he offered some predictions about the coming regulations, and some recommendations:

First, the EC is likely to take a broad approach similar to the General Data Protection Regulation (GDPR), the EU's privacy law.

"There has been a lot of debate over scope," Datta said. "I believe the rules will be broadly applied… to any technology that leverages AI. Otherwise, you leave loopholes and risk missing important use cases that really should be subject to the rules and requirements. The expectations of compliance tests and controls will vary by materiality of the use cases. For example, the expectations for underwriting may be higher than the expectations for marketing models."

Many stakeholders are advocating for internal oversight only, which is, essentially, the honor system. But Datta believes a combination of internal and external oversight will be needed.

"Independent, external oversight requirements have been effective in encouraging responsible adoption of technology in a number of areas," he said. "We see this in finance, and we see it with GDPR. I think there needs to be a combination of internal and independent, external oversight."

Companies will also need better tools to assess model quality and performance, he said. The new rules are highly likely to require companies to test model quality and measure model performance in live environments, something very few organizations are ready to do today.

"There has been a lack of good tools for measuring model quality during model development and on an ongoing basis with monitoring," Datta said. "There is likely to be a long grace period before penalties begin, as we saw with GDPR, and during that period, companies will need to race to get the right test and measurement tools in place to ensure compliance."

Not surprisingly, Datta's new company, Truera, provides a model intelligence platform designed to analyze machine learning models, improve model quality, and build trust. The company grew out of his research at Carnegie Mellon.

The EU rules will eventually make their way to the US, Datta predicts. US regulators are watching the EU debate closely. "We are likely to see regulatory movement in this area in the US in 2021," Datta said.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
