News

OpenAI Launches Safety Fellowship Amid Wider Industry Shift Toward External AI Research

The OpenAI Safety Fellowship will run for six months from September 2026 to February 2027, according to a company announcement. Participants will receive stipends, access to OpenAI models, and technical support to conduct research in areas such as robustness, privacy, agent oversight, and misuse prevention.

The program is open to researchers, engineers, and practitioners from outside the company. Fellows are expected to produce outputs such as research papers, benchmarks, or datasets.

The initiative comes as AI companies face increasing scrutiny over how they manage risks associated with rapidly advancing systems.

OpenAI said the fellowship is intended to “support high-impact research on the safety and alignment of advanced AI systems” and to expand the number of people working on technical safety challenges.

OpenAI’s program reflects a wider trend among major AI developers to fund external research through fellowships, residencies, and academic partnerships.

Anthropic, a rival AI company focused on safety, runs a similar fellows program that supports independent researchers working on alignment, interpretability, and AI security. The program provides funding, mentorship, and compute resources, with participants typically producing publicly available research.

Google and its DeepMind unit operate a range of student researcher and fellowship programs that place participants on research teams for several months. These programs cover a broad range of AI topics, including safety-related work, though they are not always explicitly branded as alignment-focused.

Microsoft and Meta have also expanded funding for external AI research through academic partnerships, grants, and residency-style programs, often aimed at advancing work on responsible AI and system reliability.

Together, these initiatives form a growing ecosystem of externally funded research tied to leading AI labs.

OpenAI said the priority areas for its fellowship include “agentic oversight” and “high-severity misuse domains,” reflecting concerns about systems capable of taking multi-step actions with limited human intervention.

Recent advances in AI capabilities have enabled systems to perform more complex tasks, including coding, research assistance, and workflow automation.

This has shifted some safety concerns from harmful outputs toward the potential for unintended or harmful actions taken by autonomous or semi-autonomous systems.

The growth of fellowship programs comes amid increasing demand for AI safety researchers, a relatively small but expanding field.

Companies are offering competitive compensation and access to computing resources to attract talent, as they compete to develop more advanced models.

At the same time, governments and regulators are increasing pressure on AI developers to demonstrate that systems can be deployed safely and reliably.

While external programs may broaden participation in safety work, they do not replace internal decision-making processes at AI companies.

Researchers participating in fellowships typically do not have direct authority over product releases. Their work is generally advisory, focused on identifying risks and proposing mitigation strategies.

Responsibility for deploying AI systems remains with the companies that build and operate them.

OpenAI said the fellowship is part of a broader effort to support research and improve understanding of AI risks, but did not provide details on how findings from the program would be incorporated into product decisions.

The first cohort of the OpenAI Safety Fellowship is expected to be selected later this year.


About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He has been writing about the cutting-edge technologies and culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].