SAN FRANCISCO, Aug 21 (Reuters) – California legislators are set to vote as soon as this week on a bill that would broadly regulate how artificial intelligence is developed and deployed in the state, even as a number of tech giants have voiced opposition.
Here is background on the bill, known as SB 1047, and why it has faced backlash from Silicon Valley technologists and some lawmakers:
WHAT DOES THE BILL DO?
Advanced by State Senator Scott Wiener, a Democrat, the proposal would mandate safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. Developers of AI software operating in the state would also need to outline methods for turning off the AI models if they go awry, effectively a kill switch.
The bill would also give the state attorney general the power to sue if developers are not compliant, particularly in the event of an ongoing threat, such as the AI taking over government systems like the power grid.
The bill would also require developers to hire third-party auditors to assess their safety practices, and it would provide additional protections to whistleblowers speaking out against AI abuses.
WHAT HAVE LAWMAKERS SAID?
SB 1047 has already passed the state Senate by a 32-1 vote. Last week it passed the state Assembly appropriations committee, setting up a vote by the full Assembly. If it passes by the end of the legislative session on Aug. 31, it would advance to Governor Gavin Newsom to sign or veto by Sept. 30.
Wiener, who represents San Francisco, home to OpenAI and many of the startups developing the powerful software, has said legislation is necessary to protect the public before advances in AI become either unwieldy or uncontrollable.
However, a group of California Congressional Democrats opposes the bill, including San Francisco’s Nancy Pelosi; Ro Khanna, whose congressional district encompasses much of Silicon Valley; and Zoe Lofgren, from San Jose.
Pelosi this week called SB 1047 ill-informed and said it may cause more harm than good. In an open letter last week, the Democrats said the bill could drive developers from the state and threaten so-called open-source AI models, which rely on code that is freely available for anyone to use or modify.
WHAT DO TECH LEADERS SAY?
Tech companies developing AI – which can respond to prompts with fully formed text, images or audio as well as run repetitive tasks with minimal intervention – have called for stronger guardrails for AI’s deployment. They have cited risks that the software could one day evade human intervention and cause cyberattacks, among other concerns. But they have largely balked at SB 1047.
Wiener revised the bill to appease tech companies, relying in part on input from AI startup Anthropic – backed by Amazon (AMZN.O) and Alphabet (GOOGL.O). Among other changes, he eliminated the creation of a government AI oversight committee.
Wiener also took out criminal penalties for perjury, though civil suits may still be brought.
Alphabet’s Google and Meta (META.O) have expressed concerns in letters to Wiener. Meta said the bill threatens to make the state unfavorable to AI development and deployment. The Facebook parent’s chief scientist, Yann LeCun, in a July X post called the bill potentially harmful to research efforts.
OpenAI, whose ChatGPT is credited with accelerating the frenzy over AI since its broad release in late 2022, has said AI should be regulated by the federal government and that SB 1047 creates an uncertain legal environment.
In a letter to Wiener, OpenAI said it opposes SB 1047 because it is a threat to AI’s growth and could cause entrepreneurs and engineers to leave the state.
Of particular concern is the potential for the bill to apply to open-source AI models. Many technologists believe open-source models are important for creating less risky AI applications more quickly, but Meta and others have fretted that they could be held responsible for policing open-source models if the bill passes. Wiener has said he supports open-source models, and one of the recent amendments to the bill raised the threshold for which open-source models are covered under its provisions.
The bill also has its backers in the technology sector. Geoffrey Hinton, widely credited as a “godfather of AI,” former OpenAI employee Daniel Kokotajlo and researcher Yoshua Bengio have said they support the bill.