
AI Giants, Including OpenAI, Microsoft and Google, Join Governments in Urging Regulation

Prominent stakeholders in advanced AI are reiterating their call for regulatory measures, aligning with a growing global consensus on the potential hazards posed by the rapidly advancing technology.

In the past week alone, notable players in the tech industry, including OpenAI, Microsoft, and Google, have drawn attention for advocating increased oversight of the very technologies they played a pivotal role in introducing to the world. Over the same period, the Biden administration unveiled fresh initiatives aimed at promoting responsible AI. These developments coincide with a growing international wave of apprehension about the safe and responsible use of AI, with concerns ranging from deepfake misinformation to existential risk.

Although this movement gained traction with the debut of the advanced large language models (LLMs), such as GPT-4, that power generative AI systems like ChatGPT, AI leader OpenAI is already looking further into the future, voicing concerns about the "Governance of superintelligence": future AI systems dramatically more capable than even Artificial General Intelligence (AGI). The latter refers to a hypothetical intelligent agent that can learn to perform any intellectual task a human being can.

"In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past," OpenAI execs Sam Altman, Greg Brockman and Ilya Sutskever said. "We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can't just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example. We must mitigate the risks of today's AI technology too, but superintelligence will require special treatment and coordination."

Google and Alphabet CEO Sundar Pichai on May 22 penned a Financial Times article headlined "Google CEO: Building AI responsibly is the only race that really matters," in which he doubled down on his previous calls for AI regulation. "I still believe AI is too important not to regulate, and too important not to regulate well," he said.

Meanwhile, Microsoft President Brad Smith made a host of headlines with a speech delivered yesterday (May 25) in Washington before members of Congress and civil society groups, CNN reported in the article "Microsoft leaps into the AI regulation debate, calling for a new US agency and executive order."

"In his remarks, Smith joined calls made last week by OpenAI -- the company behind ChatGPT and that Microsoft has invested billions in -- for the creation of a new government regulator that can oversee a licensing system for cutting-edge AI development, combined with testing and safety standards as well as government-mandated disclosure rules," CNN reported.

On the same day, Microsoft published a related post authored by Smith, titled "How do we best govern AI?"

"As technological change accelerates, the work to govern AI responsibly must keep pace with it," Smith said. "With the right commitments and investments, we believe it can."

Smith pointed to the company's five-point blueprint for the public governance of AI.

"In this approach, the government would define the class of high-risk AI systems that control critical infrastructure and warrant such safety measures as part of a comprehensive approach to system management," Smith said. "New laws would require operators of these systems to build safety brakes into high-risk AI systems by design. The government would then ensure that operators test high-risk systems regularly to ensure that the system safety measures are effective. And AI systems that control the operation of designated critical infrastructure would be deployed only in licensed AI datacenters that would ensure a second layer of protection through the ability to apply these safety brakes, thereby ensuring effective human control."

And while all of the above comes from arguably three of the top leaders in the AI space, the U.S. government weighed in as well. On May 23 the White House published a fact sheet outlining new steps the administration has taken to advance responsible AI research, development, and deployment.

And the U.S. isn't the only nation addressing AI dangers, as Reuters reported on May 23. "Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI's ChatGPT are complicating governments' efforts to agree laws governing the use of the technology," said the article, which lists efforts in countries ranging from Australia to Spain.

About the Author

David Ramel is an editor and writer for Converge360.
