New Proposed Federal Rules Target AI, Cloud Reporting
A new rule being proposed by the federal government would establish reporting requirements for the development of advanced AI models and computing clusters.
The proposed rule comes from the Bureau of Industry and Security (BIS), an arm of the U.S. Department of Commerce.
Specifically, the BIS is asking for reporting on developmental activities, cybersecurity measures and outcomes from red-teaming efforts, which involve testing AI models for dangerous capabilities, such as assisting in cyberattacks or enabling the development of weapons by non-experts.
The rule is designed to help the Department of Commerce assess the defense-relevant capabilities of advanced AI systems and ensure they meet stringent safety and reliability standards. This initiative follows a pilot survey conducted earlier this year by BIS and aims to safeguard against potential abuses that could undermine global security, officials said.
"As AI is progressing rapidly, it holds both tremendous promise and risk," said Secretary of Commerce Gina M. Raimondo in a Sept. 9 news release. "This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security."
While motivated by national security concerns, this is the latest in a series of efforts to rein in rapidly advancing AI technologies that have the potential to disrupt global stability in various ways. For example, California just passed groundbreaking AI safety legislation that is now awaiting the governor's signature. If enacted, the bill would require large AI companies to test their systems for safety before releasing them to the public. It would also grant the state's attorney general the authority to sue companies for damages if their technologies cause significant harm, including death or property damage.
And shortly before that, the United States, the United Kingdom, the European Union and several other countries signed "The Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law," the world's first legally binding international treaty aimed at regulating the use of AI. It was developed by the Council of Europe and opened for signatures on Sept. 5. The treaty's primary goal is to ensure that AI systems are designed, developed, deployed and decommissioned in ways that respect human rights, support democratic institutions and uphold the rule of law.
What's more, just a couple of weeks ago the U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST), announced that it had formalized agreements with leading AI companies Anthropic and OpenAI to collaborate on AI safety research, testing and evaluation.
Under memoranda of understanding with each company, the U.S. AI Safety Institute will gain access to new AI models from both companies before and after their public release. The collaboration aims to assess the capabilities and risks of these models and develop methods to mitigate potential safety concerns.
That all of these efforts have emerged in such a short span speaks to the urgency with which governments, organizations and industry leaders are moving to address AI regulation.
"The information collected through the proposed reporting requirement will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors, all of which are imperative for maintaining national defense and furthering America's technological leadership," the BIS news release said. "With this proposed rule, the United States continues to foster innovation while safeguarding against potential abuses that could undermine global security and stability."
About the Author
David Ramel is an editor and writer for Converge360.