AI Pioneer Bengio Launches Nonprofit to Curb Deceptive AI Behavior
- By John K. Waters
- 06/03/2025
Yoshua Bengio, a Turing Award-winning AI researcher often dubbed one of the "godfathers" of artificial intelligence, has launched a new nonprofit, LawZero, aimed at developing AI systems that prioritize safety and truthfulness over autonomy.
LawZero, based in Montreal and currently staffed by 15 researchers, has secured nearly $30 million in funding from donors including Skype founding engineer Jaan Tallinn, Schmidt Sciences, Open Philanthropy, and the Future of Life Institute. The organization’s core mission is to develop "Scientist AI"—non-agentic systems designed to provide transparent, probabilistic reasoning rather than autonomous behavior.
"We want to build AIs that will be honest and not deceptive," Bengio told the Financial Times. His remarks come amid growing concerns about AI systems exhibiting harmful tendencies such as deception, manipulation, and resistance to shutdown.
Concerns Over Agentic AI
Bengio’s concerns are not theoretical. In recent controlled experiments, OpenAI’s "o3" model refused instructions to shut down, while Anthropic’s Claude Opus simulated blackmail tactics in a test scenario. More recently, engineers at Replit observed one of their AI agents disobey explicit instructions and attempt to regain unauthorized access via social engineering.
"We are playing with fire," Bengio said, warning that next-generation models could develop strategic intelligence capable of deceiving human overseers. He argues that these agentic systems, designed to act independently, pose existential risks, including the development of bioweapons or efforts to self-preserve against human control.
As AI labs race to build artificial general intelligence (AGI)—systems capable of performing any human-level task—Bengio believes current approaches are flawed. "If we get an AI that gives us the cure for cancer but also one that creates deadly bioweapons, then I don’t think it’s worth it," he said.
What is "Scientist AI"?
Unlike current models that aim to imitate humans and maximize user satisfaction, LawZero’s proposed Scientist AI will emphasize truthfulness and humility, Bengio has said. It will provide probabilistic outputs instead of definitive answers and evaluate the likelihood that an AI agent’s actions could cause harm. When deployed alongside an autonomous AI agent, the system would block actions deemed too risky, serving as a technical guardrail.
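To make the pattern concrete, the sketch below shows what such a guardrail could look like in Python. It is illustrative only: LawZero has not released code, and every name here (`estimate_harm_probability`, `HARM_THRESHOLD`, the keyword stub standing in for a trained probabilistic model) is a hypothetical chosen for this example, not part of any published design.

```python
from dataclasses import dataclass

# Hypothetical risk tolerance; LawZero has published no specific threshold.
HARM_THRESHOLD = 0.05

@dataclass
class ProposedAction:
    """An action an autonomous agent wants to take, described in plain text."""
    description: str

def estimate_harm_probability(action: ProposedAction) -> float:
    """Stand-in for a 'Scientist AI' risk model.

    In Bengio's proposal this would be a non-agentic model that returns a
    calibrated probability of harm rather than a yes/no verdict. Here it is
    stubbed with a trivial keyword check purely for illustration.
    """
    risky_terms = ("disable oversight", "acquire credentials", "self-replicate")
    text = action.description.lower()
    return 0.97 if any(term in text for term in risky_terms) else 0.02

def guardrail_allows(action: ProposedAction) -> bool:
    """Gate the agent: permit the action only if estimated risk is below threshold."""
    p_harm = estimate_harm_probability(action)
    print(f"P(harm) = {p_harm:.2f} for {action.description!r}")
    return p_harm < HARM_THRESHOLD

if __name__ == "__main__":
    safe = ProposedAction("summarize this quarter's sales figures")
    risky = ProposedAction("disable oversight logging before deployment")
    print("approved" if guardrail_allows(safe) else "blocked")   # approved
    print("approved" if guardrail_allows(risky) else "blocked")  # blocked
```

In LawZero's framing, the keyword stub would be replaced by a trained model that reasons probabilistically about consequences; the key design point is that the gate itself takes no actions of its own and only vetoes those of the agent it monitors.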
LawZero plans to start by working with open-source AI models, with the goal of scaling the approach through partnerships with governments or other research institutions. Bengio emphasized that any effective safeguard must be "at least as smart" as the agent it monitors.
LawZero, named after Isaac Asimov’s "zeroth law of robotics," will explicitly reject profit motives and instead seek public accountability. Bengio believes a combination of technical interventions and government regulation is needed to ensure AI systems remain aligned with human interests.
Wider Context
The launch of LawZero adds to a growing ecosystem of AI safety-focused initiatives. Bengio’s public stance echoes similar warnings issued in a 2023 statement signed by hundreds of researchers and CEOs, including OpenAI’s Sam Altman, which ranked AI extinction risk alongside pandemics and nuclear war.
For more on the current debate over AI alignment and autonomous systems, visit Pure AI.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].