CSA's New AI Safety Initiative Aims to Set Security Standards

In a landmark move, leading artificial intelligence software vendors have joined forces under the Cloud Security Alliance's (CSA) new AI Safety Initiative in an effort to establish trusted best practices for generative AI technology. Tech giants such as Microsoft, Amazon, Google, OpenAI, and Anthropic are among the prominent participants in this collaborative effort.

The AI Safety Initiative is a new CSA project focused on developing practical safeguards for today's generative AI technologies while also preparing for the more powerful AI systems to come.

"This coalition, and the guidelines emerging from it, will set standards that help ensure AI systems are built to be secure," said Matt Knight, Head of Security at OpenAI, in a statement.

One key objective of this initiative is the creation of security best practices for AI usage and deployment, which will be made freely accessible. The CSA emphasizes that these guidelines are designed to instill confidence in customers of all sizes, encouraging responsible AI adoption by mitigating associated risks.

"Through collaborative partnerships like this, we can collectively reduce the risk of these technologies being misused by taking the steps necessary to educate and instill best practices when managing the full lifecycle of AI capabilities, ensuring—most importantly—that they are designed, developed, and deployed to be safe and secure," said Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA). CISA is the operational lead for federal cybersecurity and the national coordinator for critical infrastructure security and resilience.

The CSA AI Safety Initiative is meant to complement existing AI assurance programs within governments, adding a layer of industry self-regulation. It aims to address critical ethical issues and societal impacts anticipated from significant AI advancements in the coming years.

"The CSA shares our belief that long-term generative AI advancements will be achieved when private organizations, government, and academia align around industry standards, as outlined in our Secure AI Framework (SAIF). Continued industry collaboration will help organizations ensure emerging AI technologies will have a major impact on the security ecosystem," said Phil Venables, CISO at Google Cloud, in a statement.

The project's stated goals include:

  • Create trusted best practices for AI and make them freely available, with an initial focus on generative AI
  • Give customers of all sizes the confidence to accelerate responsible adoption through usage guidelines that mitigate risk
  • Complement government AI assurance programs with a healthy degree of industry self-regulation
  • Provide a forward-thinking program to address the critical ethical issues and societal impacts expected from significant advances in AI over the next several years

"Anthropic's AI systems are designed to be helpful, honest, and harmless," said the company's chief security officer Jason Clinton, in a statement. "We look forward to lending our expertise to crafting guidelines for safe and responsible AI systems for the wider industry. By collaborating on initiatives like this one focused on generative models today—with an eye toward more advanced AI down the line—we can ensure this transformative technology benefits all of society," said.

Caleb Sima, a veteran cybersecurity executive who is chairing the initiative, underscored the "transformative impact of generative AI technologies," such as chatbots and image manipulation tools. He stressed the importance of uniting industry leaders to share knowledge and best practices, noting that this collaboration has led to the development of robust recommendations for the industry.

With more than 1,500 expert participants, the initiative has become the largest in the 14-year history of the Cloud Security Alliance, marking a significant milestone in the field of AI and cybersecurity. The AI Safety Initiative has begun meetings of its core research working groups:

  • AI Technology and Risk Working Group
  • AI Governance & Compliance Working Group
  • AI Controls Working Group
  • AI Organizational Responsibilities Working Group

More information about participating in the initiative is available on the CSA's website.

The CSA's second annual two-day Virtual AI Summit is scheduled for January 17-18, 2024. It will feature "industry innovators and experts to deliver critical AI topics such as shared responsibility between AI solution provider and AI consumer, pragmatic AI usage guidelines tied to existing security and governance frameworks, how cybersecurity makes AI safe and how AI makes cybersecurity better, ethical issues and societal impact from advances in AI, and many more issues created by the rapid emergence of AI," according to the CSA.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.