U.S. AI Safety Institute Announces Landmark Collaboration with AI Leaders Anthropic and OpenAI

The U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST), announced that it has formalized agreements with leading AI companies Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation.

Under the agreements, formalized as Memoranda of Understanding, the U.S. AI Safety Institute will gain access to major new AI models from both companies before and after their public release. This collaboration aims to assess the capabilities and risks of these models and develop methods to mitigate potential safety concerns.

"Safety is essential to fueling breakthrough technological innovation," said Elizabeth Kelly, Director of the U.S. AI Safety Institute, in a statement. "With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,"

"These agreements are just the start," she added, "but they are an important milestone as we work to help responsibly steward the future of AI."

The U.S. AI Safety Institute also intends to work closely with its partners at the U.K. AI Safety Institute to offer feedback to Anthropic and OpenAI on potential safety enhancements to their models.

"Safe, trustworthy AI is crucial for the technology's positive impact," said Anthropic co-founder and head of policy Jack Clark, in a statement. "Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment."

The agreements come at a time of increasing regulatory scrutiny over the safe and ethical use of AI technologies. California legislators are also poised to vote on SB 1047, a bill regulating AI development and deployment.

This initiative builds on NIST’s longstanding legacy in advancing measurement science and standards, with the aim of fostering the safe, secure, and trustworthy development and use of AI, as outlined in the Biden-Harris administration’s Executive Order on AI.

"We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said OpenAI chief strategy officer Jason Kwon.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.