
Anthropic CEO Backs New California AI Legislation, with Some Reservations

Anthropic announced that it is lending its support to an amended version of California's Senate Bill 1047 (SB 1047), the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act." The company helped shape the revisions that won it over, though its support comes with some reservations.

"In our assessment the new SB 1047 is substantially improved to the point where we believe its benefits likely outweigh its costs," Anthropic CEO Dario Amodei said in a letter to California Governor Gavin Newsom on Aug. 21. "However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us."

California's proposed AI regulation, SB 1047, advanced by State Senator Scott Wiener, a Democrat, mandates safety testing for the most advanced AI models: those that cost more than $100 million to develop or that require a defined amount of computing power to train. If the bill passes, developers of AI software operating in the state will need to outline methods for shutting down their models if they go awry, effectively implementing a kill switch. The bill would also give the state attorney general the power to sue noncompliant developers.

Senator Wiener recently revised the bill to appease tech companies, relying in part on input from Anthropic, a San Francisco-based AI safety and research company backed by Amazon and Alphabet. The revised bill did away with a provision for a government AI oversight committee. (Amendments listed in our earlier report.)

In his letter, Amodei listed what he sees as the pros and cons of SB 1047. His list of pros included:

"Developing SSPs and being honest with the public about them"
The bill mandates the adoption of safety and security protocols (SSPs) similar to those used by top AI developers like Anthropic, Google, and OpenAI. Some companies haven't adopted these measures or have been vague about them, and there are no safeguards against misleading claims. "It is a major improvement, with very little downside, that SB 1047 requires companies to adopt some SSP (whose details are up to them) and to be honest with the public about their SSP-related practices and findings."

"Deterrence of downstream harms through clarifying the standard of care
AI systems are more adaptable than most technologies, and SSP-like measures by companies like Anthropic can reduce misuse risks. SB 1047 ties companies' liability to their SSPs, incentivizing the creation of effective protocols to prevent catastrophic risks. "As a company developing foundational models that also invests heavily in safety, Anthropic thinks it is important to systematize and incentivize this attitude across the industry."

"Pushing forward the science of AI risk reduction"
AI safety is an emerging field, with best practices still being developed. While early, strict legislation may be premature, it's crucial to push AI companies to invest in safety science. By requiring SSPs and tying them to liability, the bill encourages companies to address foreseeable risks and develop mitigation strategies before their models become societal risks.

His list of concerns included:

"Some concerning aspects of pre-harm enforcement are preserved in auditing and GovOps"
One of Anthropic's original concerns about the bill was the Frontier Model Division's (FMD) prescriptive guidance, reinforced by pre-harm enforcement. The company found it too inflexible for AI's early development stage. The amended SB 1047 eliminates the FMD and narrows pre-harm enforcement, though some powers have shifted to GovOps, which can now set binding requirements for private auditors. The relationship between these entities is complex, with GovOps providing non-binding guidance but influencing mandatory audit conditions.

"It is our best understanding that this interplay will not end up causing unnecessary pre-harm enforcement, but the language has enough ambiguity to raise concerns," Amodei wrote. "If implemented well, this could lead to well-defined standards for auditors and a well-functioning audit ecosystem, but if implemented poorly this could cause the audits to not focus on the core safety aspects of the bill."

"The bill's treatment of injunctive relief"
Pre-harm enforcement also persists in the Attorney General's broad authority to enforce the entire bill via injunctive relief, including before any harm has occurred. This is substantially narrower than the bill's earlier pre-harm enforcement mechanisms, but it remains a vector for overreach.

"Miscellaneous other issues"
The company's list of concerns also included know-your-customer requirements on cloud providers, overly short notice periods for incident reporting, and whistleblower protections it considers overly expansive and subject to abuse.

"The burdens created by these provisions are likely to be manageable, if the executive branch takes a judicious approach to implementation," Amodei wrote. "If SB 1047 were signed into law, we would urge the government to avoid overreach in these areas in particular, to maintain a laser focus on catastrophic risks, and to resist the temptation to commandeer SB 1047's provisions to accomplish unrelated goals."

Opponents of the bill, including OpenAI, Meta, Y Combinator, and venture capital firm Andreessen Horowitz, argue that its thresholds and liability provisions could stifle innovation and unfairly burden smaller developers. They criticize the bill for focusing on model-level regulation rather than specific misuse, warning that strict requirements could drive innovation overseas and harm the open-source community.

Anjney Midha, General Partner at Andreessen Horowitz, has expressed concerns that startups, founders, and investors will feel blindsided by the bill and emphasized the need for lawmakers to consult with the tech community.

In an open letter, the AI Alliance, a group focused on safe AI and open innovation, voiced its concerns. The group noted that, although SB 1047 doesn't directly target open-source development, it would significantly impact it. The bill requires developers of AI models trained with 10^26 FLOPS or more of computing power to implement a shutdown control, but it doesn't address how this would work for open-source models. Although no such models exist yet, the bill could freeze open-source AI development at its 2024 level.
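To put the 10^26 FLOPS threshold in perspective, here is a minimal sketch (not drawn from the bill's text) that estimates a training run's total compute using the common ~6 × parameters × tokens heuristic for transformer training and checks it against the threshold. The model sizes and token counts below are hypothetical illustrations, not figures from the article.

```python
# SB 1047's covered-model compute threshold (floating-point operations).
THRESHOLD_FLOPS = 1e26


def training_flops(num_params: float, num_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * num_params * num_tokens


def exceeds_threshold(num_params: float, num_tokens: float) -> bool:
    """Would a model of this size, trained on this many tokens, cross the line?"""
    return training_flops(num_params, num_tokens) >= THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15 trillion tokens
# comes in around 6.3e24 FLOPs -- well under the 1e26 threshold.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}", exceeds_threshold(70e9, 15e12))
```

Under this heuristic, only runs one to two orders of magnitude beyond today's largest disclosed models would be covered, consistent with the AI Alliance's observation that no such models exist yet.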

Several California representatives, including Ro Khanna, Anna Eshoo, and Zoe Lofgren, have opposed the bill, citing concerns about its impact on the state's economy and innovation ecosystem.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.
