International Coalition Releases AI Security Guidelines, Sets Stage for Standards

The U.K. National Cyber Security Centre (NCSC), with backing from global security agencies including the U.S. Cybersecurity and Infrastructure Security Agency (CISA), unveiled a set of security guidelines for artificial intelligence on Sunday, Nov. 26. The publication serves as a foundational step toward establishing global norms for the secure development and use of AI technologies.

The guidelines offer a set of recommendations for AI system providers, regardless of whether their systems are built from the ground up or on top of existing tools and services. "Adherence to these guidelines will ensure AI systems operate as designed, remain accessible when required, and safeguard sensitive data from unauthorized access," the document states.

Despite their non-binding nature, the consensus on the 20-page document by representatives from 18 countries marks a significant move by international agencies to jointly tackle the security and privacy challenges posed by burgeoning technologies like generative AI.

The document delineates a structured approach to enhancing AI security across four pivotal areas:

  • Secure Design: Emphasizing security from the outset of system conceptualization. This involves modeling threats to the system, keeping developers informed of emerging risks, and striking a balance between security and performance.
  • Secure Development: Addressing the importance of supply chain security from the start, ongoing vigilance during system construction, and the documentation of best practices for new software and services.
  • Secure Deployment: Proposing procedures tailored to the AI system deployment phase, including protecting systems against potential threats, establishing incident response protocols, and ensuring responsible release practices.
  • Secure Operation and Maintenance: Stressing the significance of security in the operational and maintenance stages post-deployment, with a focus on activity logging, update management, and transparent communication.

The endorsing nations now face the task of translating these guidelines into actionable and pragmatic practices.

Chris Hughes, Chief Security Advisor at Endor Labs and a Cyber Innovation Fellow with CISA, commented on the voluntary nature of the guidelines. "The main hurdle is ensuring these voluntary guidelines are put into practice. Unlike mandates from organizations like the European Union, these recommendations are suggestive rather than compulsory, leaving it to providers to opt in or out. However, as the regulatory framework around AI in the U.S. continues to develop, we might see this change," he observed.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.