European Union Proposes New Regulations for 'High-Risk' Use of AI
- By John K. Waters
The European Commission (EC) has unveiled a proposed set of tough new rules aimed at reining in "high-risk" uses of artificial intelligence (AI) across the European Union (EU).
The list of proposed regulations includes a prohibition in principle on "remote biometric identification" (live facial scanning of people in public places); a ban on the use of AI for things like choosing school, job, or loan applicants; and a ban on the use of AI for "social scoring" and systems used to manipulate human behavior.
Companies developing AI in and outside the EU could be fined 20 million euros ($24 million) or 4% of their global revenue if they fail to comply with the rules, should the proposal be adopted.
"With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted," said Margrethe Vestager, the EC’s executive vice president for A Europe Fit for the Digital Age, in a statement. "By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way."
The EC is the executive branch of the EU responsible for proposing legislation, implementing decisions, upholding EU treaties, and managing the day-to-day business of the organization.
The EC listed four specific objectives for its proposed regulatory framework: 1) to ensure that AI systems placed on the Union market and put into use are safe and respect existing law on fundamental rights and Union values; 2) to ensure legal certainty to facilitate investment and innovation in AI; 3) to enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and 4) to facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.
The EC considers the use of AI systems for real-time remote biometric identification of "natural persons" in publicly accessible spaces for the purpose of law enforcement to be particularly intrusive on the rights and freedoms of the people being identified "to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance, and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights," the proposal reads.
China has been widely criticized for using AI to give its citizens "social credit" scores, with which they could be rewarded or punished. The EC said AI technologies that allow governments to engage in such social scoring, or that exploit children, will be banned.
However, the EC might have to carve out an exception that allows authorities to use AI technologies in the fight against serious crime—say, allowing law enforcement to use facial recognition technology from CCTV cameras to find terrorists. France has shown a strong interest in integrating AI into its security apparatus.
"Five years ago, the world was watching the EU as it spearheaded the General Data Protection Regulation, and created the world-standard in data protection," said Estelle Massé, Global Data Protection Lead at digital rights advocacy organization Access Now, in a statement. "With this new AI legislation, we are again at a cornerstone moment, where the EU can lead the way — if it puts people’s rights at the center. If we have learned one thing from the GDPR, it is that the enforcement chapter of this future regulation will matter a lot to make this legislation a success."
The EC now has to work with EU member states and the European Parliament to sort out the final list of rules. The process could take more than a year.
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at firstname.lastname@example.org.