AI Policy Watch

Artificial Intelligence and National Security: How Biden's AI Blueprint Aims to Shape the Future

Welcome to AI Policy Watch

Welcome to the inaugural post of our new "AI Policy Watch" column, written by journalist and author John K. Waters. As Editor in Chief of the Converge360 group at 1105 Media, he manages leading news sites focused on worldwide technology trends, from artificial intelligence in the enterprise to the software development lifecycle. In this column, he will be diving deep into the evolving intersection of AI, law, and policy, exploring how governments, regulators, and private organizations are grappling with the opportunities and challenges posed by AI technologies. From legislative proposals to ethical frameworks, from regulatory crackdowns to corporate responsibility initiatives, "AI Policy Watch" will keep you informed on the fast-changing landscape shaping the future of artificial intelligence.

As of today, numerous legislative efforts have been initiated in the United States to regulate artificial intelligence (AI) at both the federal and state levels—everything from the Algorithmic Accountability Act, first introduced in the Senate in 2022 and reintroduced in mid-2024, to California's Automated Decision Systems Accountability Act, proposed earlier this year. According to the National Conference of State Legislatures, 45 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI bills this year, and 31 states and those territories adopted resolutions or enacted legislation.

The list of legislative efforts is long, and we appear to be stepping up to the challenge posed by the warp-speed evolution of AI early enough to mitigate some of its potential negative outcomes without impeding the benign goals of the industry's innovators.

There are efforts on the federal level, too, of course. On October 24, 2024, the Biden administration unveiled a sweeping National Security Memorandum (NSM) with the ambitious goal of ensuring that the United States remains the global leader in artificial intelligence. Titled with exhaustive precision, "Memorandum on Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence," the 40-page document is a dense yet groundbreaking manifesto. It serves as the administration’s most comprehensive statement to date on the intersection of AI and national security.

This memorandum follows the October 2023 AI Executive Order and is paired with the "Framework to Advance AI Governance and Risk Management in National Security." Together, these documents signal a paradigm shift in how Washington views and manages the AI revolution.

The NSM focuses sharply on "frontier AI models," systems so large and powerful that they sit at the very frontier of AI—think OpenAI’s ChatGPT or Google’s Gemini, which push beyond the capabilities of earlier AI technologies. Unlike the narrowly specialized AI of the last decade, these models are versatile, general-purpose systems with applications spanning defense, cybersecurity, and geopolitics. As Elizabeth Kelly, Director of the U.S. AI Safety Institute (AISI), said in an interview, the administration's AI lens is firmly fixed on these frontier systems.

The memorandum defines frontier models as "general-purpose AI systems near the cutting-edge of performance." This designation underscores the administration’s urgent desire to leverage AI’s transformative potential while safeguarding against its risks. For the U.S., maintaining leadership in this domain isn’t just a technological goal; it’s a national security imperative.

In announcing the NSM, National Security Advisor Jake Sullivan invoked parallels to transformative technologies, such as nuclear energy and space exploration. Drawing inspiration from pivotal Cold War strategy documents like NSC-68, the Biden administration sees the AI NSM as a similarly foundational strategy to outpace global competitors—chiefly China—without igniting an arms race.

Although the analogy isn’t perfect—AI isn’t a weapon in itself—the comparison illustrates the administration’s belief that frontier AI could reshape national security as fundamentally as atomic power once did.

The NSM addresses four distinct audiences:

  • U.S. Federal Agencies: The memorandum outlines specific directives to streamline AI adoption, governance, and inter-agency coordination. It establishes a network of Chief AI Officers across national security agencies to spearhead implementation.
  • Private Sector Innovators: Recognizing that AI leadership resides largely in the private sector, the NSM sets expectations for collaboration while emphasizing cybersecurity and counterintelligence to shield sensitive innovations from espionage.
  • Global Allies: The document acts as a diplomatic roadmap, clarifying the U.S. commitment to co-developing AI capabilities with allies. It also provides a justification for policies like the 2022 export controls on advanced AI semiconductors.
  • Adversaries and Competitors: While never explicitly naming China, the NSM is a clear signal of U.S. intent to outpace its rivals in AI capabilities. Simultaneously, it gestures toward international governance, including a prohibition on removing human oversight in nuclear decision-making.

The NSM also identifies three core objectives:

  • Maintain Leadership in AI Development:
      • Talent Acquisition: Streamlining immigration for global AI experts is framed as a national security priority.
      • Infrastructure Investment: The memo outlines strategies for expanding AI-enabling infrastructure, including energy and data centers, though significant hurdles remain due to Congressional and local regulatory constraints.
      • Counterintelligence: To prevent espionage, the NSM prioritizes protecting U.S. AI intellectual property from theft by adversaries.
  • Accelerate AI Adoption: Federal agencies are tasked with overhauling hiring, contracting, and policy frameworks to integrate AI into national security operations. However, entrenched bureaucracy poses significant barriers to this transformation.
  • Establish Governance Frameworks: Robust governance is positioned not as a regulatory burden but as a means to accelerate adoption. By defining roles, responsibilities, and clear risk management protocols, the NSM aims to reduce uncertainty and foster innovation.

The administration argues that safety measures are essential to accelerate AI adoption. As Sullivan put it, "Uncertainty breeds caution." Ensuring security and trustworthiness removes barriers to experimentation and implementation, allowing agencies to harness AI more confidently and effectively.

The memorandum positions the AISI as a central hub for AI safety, tasked with issuing guidance, conducting model evaluations, and collaborating with the National Security Agency to develop cyber-focused AI defenses.

The NSM isn’t just a policy document—it’s a vision for the future of U.S. national security in the AI era. Yet its ambitious scope leaves unanswered questions about its longevity. While Kamala Harris might have doubled down on these initiatives, Donald Trump could disrupt or recalibrate the strategy. One thing is clear: the NSM is an opening move in a high-stakes race to define AI’s role in global security. Whether it’s remembered as a defining moment or a missed opportunity will depend on how effectively it is implemented—and whether the U.S. can stay ahead in the AI revolution it seeks to lead.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
