California State Senator Scott Wiener is once again tackling the pressing issue of AI safety with his newly proposed bill, SB 53.
In 2024, Silicon Valley mounted a fierce opposition campaign against his previous AI safety bill, SB 1047, which would have held tech companies accountable for harms caused by their AI systems. Tech executives warned that the bill, if enacted, would stifle AI innovation in the United States. Governor Gavin Newsom ultimately sided with the industry and vetoed SB 1047, a decision some AI boosters celebrated at an event dubbed the “SB 1047 Veto Party.” Now Wiener is back with a revised approach.
His current proposal, SB 53, now awaits Governor Newsom’s signature or veto, and this time Silicon Valley has not mobilized against it; parts of the industry have even come out in support.
Notably, Anthropic has publicly endorsed SB 53, and Meta has said the bill strikes a reasonable balance between innovation and necessary regulatory guardrails. Former White House AI policy advisor Dean Ball has described the bill’s prospects as a win for voices advocating measured AI regulation.
Should SB 53 be signed into law, it would impose some of the first safety reporting requirements on leading AI firms such as OpenAI, Anthropic, and Google, which are currently under no legal obligation to disclose how they test their AI systems for safety. Many AI firms publish safety reports voluntarily, but the practice is inconsistent and unstandardized, raising concerns about the risks their technologies may pose.
The legislation would require AI labs with more than $500 million in annual revenue to publish safety reports for their most advanced models. Like SB 1047, SB 53 focuses on the most severe risks posed by AI systems, including threats to human life, cyberattacks, and the development of biological weapons. Governor Newsom is also weighing other bills addressing different aspects of AI risk, including regulation of engagement-boosting algorithms in AI companions.
SB 53 would also create protected channels for AI lab employees to report safety concerns to government officials, and it would establish CalCompute, a state-run cloud platform intended to broaden access to AI research resources beyond the major tech corporations.
SB 53’s warmer reception reflects its more moderate scope: whereas SB 1047 would have made AI firms liable for harms caused by their systems, SB 53 relies on self-reporting and transparency. It also targets the largest tech companies specifically, leaving startups largely untouched.
Despite the improved reception, parts of the tech industry still argue that AI regulation should be left to the federal government. In a letter to Newsom, OpenAI argued that AI labs should answer only to federal standards rather than a patchwork of state laws. The venture firm Andreessen Horowitz raised similar concerns, warning that some California legislation could violate the Constitution’s dormant Commerce Clause.
Senator Wiener rejects these arguments, maintaining that effective federal AI regulation is unlikely to materialize, which makes state-level action necessary. He also suggests that recent federal efforts to block state AI regulation were driven by the tech industry’s lobbying.
The federal government’s posture on AI has shifted markedly, with the Trump administration prioritizing growth over safety in a reversal of its predecessor’s stance. Shortly after taking office, Vice President J.D. Vance told a conference in Paris that the focus should be on the opportunities AI presents rather than on safety concerns, a message the tech sector welcomed.
Senator Wiener maintains that California can lead on AI safety without stifling innovation.
I had the opportunity to speak with Senator Wiener about his ongoing negotiations with the tech sector and his commitment to AI safety legislation. The conversation has been edited for clarity and concision.
Senator Wiener, after our previous discussion surrounding SB 1047, how would you describe the journey you’ve embarked on in AI safety regulation?
It has been a dynamic journey filled with learning experiences and significant challenges. We’ve successfully elevated the conversation around AI safety, not just in California but globally. The development of this powerful technology carries tremendous potential to shape the world, so we need to ensure it does so positively, minimizing risks while fostering innovation.
What insights have you gained from the last two decades in technology regarding the necessity of holding Silicon Valley accountable through legislation?
Representing the heart of AI innovation, I’ve seen how large tech firms, some of the wealthiest companies in history, have effectively resisted any form of regulatory accountability. The growing influence of technology companies in policymaking is alarming, especially when I see executives engaging closely with political leaders.
I want technology to flourish, as it is pivotal for progress. However, it’s crucial that this industry is not left unchecked; sensible regulations are paramount for safeguarding public interest. That’s the balance we’re attempting to maintain with AI safety legislation.
SB 53 is concentrated on the most severe potential harms of AI, like death, cyberattacks, and bioweapon proliferation. Why this focus?
The spectrum of AI risks is broad, encompassing job displacement, algorithmic bias, and misinformation. While other legislation addresses these issues, SB 53 zeroes in on catastrophic risks. The approach came from discussions with various AI professionals who underscored the need for such focused legislation.
Do you believe AI systems pose inherent dangers, or can they be weaponized to cause severe societal damage?
AI is not inherently safe; while many working within the industry are dedicated to minimizing risks, we acknowledge that there are individuals who may exploit these technologies for harmful purposes. Our legislation aims to create barriers against such misuse.
With Anthropic openly supporting SB 53, what has been the nature of your dialogue with other industry stakeholders?
We’ve engaged a range of stakeholders, including major employers and smaller startups. While there are companies not fully in support of SB 53, the opposition isn’t as aggressive as with SB 1047. The tone has shifted since SB 53 emphasizes transparency over liability, and the focus on large companies means startups are less impacted.
Do you experience any pressure from the powerful AI political action committees that have emerged recently?
The influence of wealth within politics is a product of the Citizens United ruling. However, I remain steadfast in my principles and advocacy for the public good, regardless of the pressures. I’ve faced challenges throughout my career yet stay committed to representing my constituents and advancing beneficial initiatives.
What would you convey to Governor Newsom as he weighs his decision on SB 53?
I appreciate the considerations he made when vetoing SB 1047; his thoughtful critique has shaped our current bill. We took those insights seriously and have adapted accordingly, striving to align closely with his constructive vision for AI regulation in California.