Scott Wiener on his fight to make Big Tech disclose AI’s dangers

Last updated: September 23, 2025 8:24 pm

California State Senator Scott Wiener is once again tackling the pressing issue of AI safety with his newly proposed bill, SB 53.

Following Silicon Valley’s fierce opposition campaign against his previous AI safety bill, SB 1047, which would have held tech companies accountable for harms caused by their AI systems, Wiener has returned with a revised approach. Tech executives warned at the time that the bill, if enacted, would hinder AI innovation in the United States. Governor Gavin Newsom ultimately vetoed SB 1047 in 2024, siding with the industry, and AI proponents celebrated with an event dubbed the “SB 1047 Veto Party.”

Wiener’s current proposal, SB 53, now awaits Governor Newsom’s decision, and it has drawn noticeably more support from the tech community, with Silicon Valley mounting no comparable opposition to its enactment.

Notably, Anthropic has publicly endorsed SB 53, and Meta has said the bill strikes a reasonable balance between innovation and necessary guardrails. Former White House AI policy advisor Dean Ball has called the bill’s prospects a win for advocates of measured AI regulation.

Should SB 53 be signed into law, it would impose some of the first safety-reporting requirements on leading AI firms such as OpenAI, Anthropic, and Google, which currently face no mandate to disclose how they safety-test their AI systems. Many AI firms release safety reports voluntarily, but the lack of consistency and standardization raises concerns about the potential risks these technologies pose.

The legislation requires AI labs with more than $500 million in annual revenue to publish safety reports on their most advanced models. Like SB 1047, SB 53 focuses on the most severe risks posed by AI systems, including threats to human life, cyberattacks, and the development of biological weapons. Governor Newsom is also weighing other bills addressing different aspects of AI risk, including regulation of engagement-boosting algorithms in AI companions.

Moreover, SB 53 aims to create secure channels for AI lab employees to report safety issues to appropriate authorities and establishes a state-run cloud platform, CalCompute, to broaden access to AI research resources beyond the major tech corporations.

SB 53’s broader acceptance reflects its more moderate approach: where SB 1047 would have made AI firms liable for harms their systems caused, SB 53 relies on self-reporting and transparency, and it targets only the largest AI companies rather than startups.

Despite the improved reception, some in the tech industry argue that AI regulation should be left to the federal government. In a letter to Newsom, OpenAI argued that AI labs should only have to comply with federal standards, emphasizing its view that states should not impose their own AI laws. Venture capital firm Andreessen Horowitz raised similar concerns, warning that some California legislation could run afoul of the Constitution’s dormant Commerce Clause.

Senator Wiener disputes these arguments, maintaining that effective federal AI regulation is unlikely to materialize, which makes state-level action necessary. He also suggests that earlier federal efforts to preempt state regulation were shaped by the tech industry’s lobbying.

There has been a notable shift in the federal government’s posture on AI, with the Trump administration prioritizing growth over safety, in contrast to the previous administration. Shortly after taking office, Vice President J.D. Vance told a conference in Paris that the focus should be on the opportunities AI presents rather than on safety concerns, a message the tech sector has welcomed.


Senator Wiener argues that California can lead on AI safety without stifling innovation.

I had the opportunity to speak with Senator Wiener about his ongoing negotiations with the tech sector and his commitment to AI safety legislation. The conversation has been edited for clarity and concision.

Senator Wiener, after our previous discussion surrounding SB 1047, how would you describe the journey you’ve embarked on in AI safety regulation?

It has been a dynamic journey filled with learning experiences and significant challenges. We’ve elevated the conversation around AI safety, not just in California but globally. This powerful technology has tremendous potential to shape the world, so we need to make sure it does so positively, minimizing risks while fostering innovation.

What insights have you gained from the last two decades in technology regarding the necessity of holding Silicon Valley accountable through legislation?

Representing the heart of AI innovation, I’ve seen how large tech firms, some of the wealthiest, have effectively resisted any form of regulatory accountability. The growing influence of technology companies in policymaking is alarming, especially when I see executives engaging closely with political leaders.

I want technology to flourish, as it is pivotal for progress. However, it’s crucial that this industry is not left unchecked; sensible regulations are paramount for safeguarding public interest. That’s the balance we’re attempting to maintain with AI safety legislation.

SB 53 is concentrated on the most severe potential harms of AI, like death, cyberattacks, and bioweapon proliferation. Why this focus?

The spectrum of AI risks is broad, encompassing job displacement, algorithmic bias, and misinformation. While other legislation addresses these issues, SB 53 zeroes in on catastrophic risks. The approach came from discussions with various AI professionals who underscored the need for such focused legislation.

See also  Trump morbidly says part of his ‘big plans’ for 2026 midterms is to ‘survive’

Do you believe AI systems pose inherent dangers, or can they be weaponized to cause severe societal damage?

AI is not inherently safe; while many working within the industry are dedicated to minimizing risks, we acknowledge that there are individuals who may exploit these technologies for harmful purposes. Our legislation aims to create barriers against such misuse.

With Anthropic openly supporting SB 53, what has been the nature of your dialogue with other industry stakeholders?

We’ve engaged a range of stakeholders, including major employers and smaller startups. While there are companies not fully in support of SB 53, the opposition isn’t as aggressive as with SB 1047. The tone has shifted since SB 53 emphasizes transparency over liability, and the focus on large companies means startups are less impacted.

Do you experience any pressure from the powerful AI political action committees that have emerged recently?

The influence of wealth within politics is a product of the Citizens United ruling. However, I remain steadfast in my principles and advocacy for the public good, regardless of the pressures. I’ve faced challenges throughout my career yet stay committed to representing my constituents and advancing beneficial initiatives.

What would you convey to Governor Newsom as he weighs his decision on SB 53?

I appreciate the considerations he made when vetoing SB 1047; his thoughtful critique has shaped our current bill. We took those insights seriously and have adapted accordingly, striving to align closely with his constructive vision for AI regulation in California.
