OpenAI Co-Founder Sutskever Launches Safe Superintelligence Company

Ilya Sutskever, a co-founder and former chief scientist of OpenAI, announced his next venture this week in a post on X: a startup called Safe Superintelligence Inc. (SSI).

"Building safe superintelligence is the most important technical problem of our​​ time," Sutskever said in a statement. "We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence."

Sutskever is joined in the venture by co-founders Daniel Gross, an investor who formerly led AI efforts at Apple Inc., and Daniel Levy, a former OpenAI colleague known for his work training large AI models.

Sutskever left OpenAI in mid-May, following his involvement in the dramatic 2023 board ouster and subsequent reinstatement of Sam Altman as OpenAI’s CEO. At the time he hinted at a new venture, but he had remained notably silent until now. Sutskever, Gross, and Levy plan to operate their company with "no distraction by management overhead or product cycles," under a business model in which "safety, security, and progress are all insulated from short-term commercial pressures." The new company will prioritize engineering breakthroughs to ensure AI safety rather than relying on external guardrails, Sutskever said.

Safe Superintelligence Inc. will be based in Palo Alto, California, and Tel Aviv, Israel. The company’s mission echoes the original vision of OpenAI: to build an artificial general intelligence (AGI) that could surpass human capabilities. But unlike OpenAI, Safe Superintelligence intends to remain insulated from commercial pressures and the competitive AI landscape.

Sutskever played a central role in key AI advances at Google and OpenAI. His advocacy for building ever-larger AI models was instrumental in OpenAI’s success, most visibly with the rise of ChatGPT. His departure from OpenAI and subsequent silence had fueled much speculation within Silicon Valley.

The new venture faces significant challenges, particularly in securing funding without immediate commercial returns. Investors will be betting on Sutskever’s team to achieve breakthroughs that can outpace larger, established rivals.

The AI industry has long grappled with the challenge of making AI systems safer. Current approaches involve using both humans and AI to guide software towards beneficial outcomes. Sutskever’s Safe Superintelligence Inc. seeks to embed safety into the fabric of its AI system. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs," the three founders said in a statement. "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
