
Governor Newsom to Weigh Landmark AI Safety Bill on Advanced Models as Industry Watches

The California legislature has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking a major step in AI regulation within the United States. The bill requires AI companies in California to implement safeguards before training advanced foundation models, including the ability to quickly shut down a model, protections against unsafe post-training modifications, and testing procedures to assess whether a model poses a risk of causing significant harm. The legislation, as reported by The Verge, has stirred considerable debate within Silicon Valley and beyond due to its stringent safety measures.

Senator Scott Wiener, the primary author of SB 1047, has emphasized that the bill is a reasonable approach, requiring large AI labs to adhere to safety testing commitments they have already made. He mentioned that throughout the year, the bill was refined with input from open-source advocates and AI companies like Anthropic to better address foreseeable AI risks. According to Wiener, the bill is carefully calibrated to the known risks of AI development and deserves to be enacted to protect against future dangers.

Despite support from some quarters, SB 1047 has drawn criticism from major players such as OpenAI and Anthropic, prominent California politicians including Zoe Lofgren and Nancy Pelosi, and the California Chamber of Commerce. Critics argue that the bill focuses too heavily on catastrophic harms and could disproportionately burden small, open-source AI developers. In response to these concerns, the bill was amended to remove criminal penalties, limit the attorney general's enforcement powers, and adjust membership requirements for the newly established Board of Frontier Models.

The bill now goes to Governor Gavin Newsom, who has until the end of September to decide whether to sign it into law. If enacted, it would be one of the first significant AI safety regulations in the U.S., setting a precedent for how advanced AI technologies are governed. Both OpenAI and Anthropic acknowledged the improvements made through the amendments, with Anthropic CEO Dario Amodei stating that the legislation's benefits now likely outweigh its costs. As the AI industry watches closely, SB 1047 could shape the future of AI development in California, particularly safety protocols for advanced models, and Governor Newsom's decision will weigh heavily on the balance between innovation and safety regulation in the rapidly advancing AI sector.

read more at theverge.com