No, AI isn’t going to kill us all, despite what this new book says

Last updated: September 27, 2025 2:20 am

The rise of artificial intelligence has increased demand for data centres like this one in London

Jason Alden/Bloomberg via Getty Images

If Anyone Builds It, Everyone Dies
Eliezer Yudkowsky and Nate Soares (Bodley Head, UK; Little, Brown, US)

In the grand tapestry of human existence, the anxieties we face are numerous. From financial instability and climate change to the pursuit of love and happiness, many worries dominate our thoughts. For a select group, however, the looming threat of artificial intelligence (AI) eclipses them all: the fear that AI could ultimately annihilate humanity.

Eliezer Yudkowsky, a pivotal figure at the Machine Intelligence Research Institute (MIRI) in California, has championed this cause for over two decades. Yet, it wasn’t until the advent of ChatGPT that his warnings about AI safety gained significant traction, resonating with tech leaders and policymakers alike.

In his new book, If Anyone Builds It, Everyone Dies, co-authored with Nate Soares, Yudkowsky seeks to distill his complex arguments into a concise and accessible form, aiming to broaden the discussion of AI safety across society. The endeavor is commendable, but, as I will argue, the book’s main argument contains significant flaws.

I should acknowledge that I have not scrutinized this topic as deeply as Yudkowsky has, but I approach it with careful consideration. Having followed his work over the years, I find his intellect compelling. His sprawling fan fiction, Harry Potter and the Methods of Rationality, reflects the rationalist philosophical framework linked to both the AI safety and effective altruism movements.

Both rationalism and effective altruism advocate an evidence-based approach to understanding the world. Accordingly, Yudkowsky and Soares open If Anyone Builds It, Everyone Dies by establishing core principles. The first chapter argues that nothing in the fundamental laws of physics prevents the creation of an intelligence that surpasses human capabilities, an assertion many would agree with.

The subsequent chapter offers an insightful overview of how large language models (LLMs) such as ChatGPT are built: “LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work.” This is a fair and informative way of putting it.
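
To make the “sentence-producing machine” framing concrete, here is a deliberately toy sketch of next-token generation. The token table and probabilities below are invented for illustration and are not taken from the book or from any real model; the point is only that a model repeatedly samples the next token from a probability distribution conditioned on the text so far, and sentences emerge from that loop.

```python
# Toy sketch of next-token sampling. The probabilities are made up for
# illustration; a real LLM learns a distribution over tens of thousands of
# tokens conditioned on the entire preceding context, not just the last token.
import random

NEXT_TOKEN_PROBS = {
    "<start>": {"The": 0.6, "A": 0.4},
    "The": {"cat": 0.5, "dog": 0.5},
    "A": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"sat": 0.4, "slept": 0.6},
    "sat": {".": 1.0},
    "slept": {".": 1.0},
}

def generate_sentence() -> str:
    """Sample one token at a time until the end-of-sentence marker appears."""
    token, words = "<start>", []
    while token != ".":
        options = NEXT_TOKEN_PROBS[token]
        token = random.choices(list(options), weights=list(options.values()))[0]
        words.append(token)
    return " ".join(words[:-1]) + "."

print(generate_sentence())  # e.g. "The cat slept."
```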

The third chapter, however, marks a significant shift in the narrative. Yudkowsky and Soares assert that AI could begin to exhibit “wants”, while skirting the philosophical question of what it means for a machine to “desire” anything. They cite a test in which OpenAI’s o1 model tackled a computational challenge in an unexpected way, and they read its persistence as a sign of motivation. That interpretation is questionable: a river pushing against a dam doesn’t want anything either.

The book then turns to the contentious AI alignment problem, warning that an AI with “desires” would be impossible to align with human values. A superintelligent AI, the argument goes, might seek to commandeer all available resources to fulfill its goals, a notion brought to public attention by philosopher Nick Bostrom’s “paperclip maximizer” thought experiment.

While this idea has some merit, one must ask: why not simply switch the AI off? Yudkowsky and Soares dismiss this possibility, arguing that a sufficiently advanced AI would take whatever steps are necessary to ensure its own survival. The scenarios they imagine, in which an AI manipulates people in order to keep running, make for a grim narrative, but without a firmer account of machine motivation such conclusions remain speculative.

To remedy the perceived threat, Yudkowsky and Soares propose drastic measures. They advocate stringent regulation of the graphics processing units (GPUs) essential for AI development, suggesting that possession of more than eight high-end GPUs should trigger international scrutiny akin to nuclear oversight. Given that tech companies already operate hundreds of thousands of GPUs, the proposal’s viability is doubtful.

The book’s escalating precautions, up to and including military intervention against unregistered data centers, verge on the hyperbolic. Advocating such action is alarming given its potential for catastrophic consequences; it risks damaging not only the AI landscape but also global stability.

Yudkowsky and Soares’s perspective reads like a modern rendition of Pascal’s wager, in which the reasoning is skewed towards a conclusion of inevitable doom. Entertaining such an extreme assumption about AI could justify reckless policies that prioritize hypothetical future outcomes over present-day human welfare.

Ultimately, I struggle to comprehend how one can maintain such an anxiety-driven worldview amidst pressing global challenges. Climate change, economic inequality, and social justice deserve our immediate attention and resources. It is time to relegate fears of superintelligent AI to the realm of science fiction, turning our focus to addressing the real, tangible problems that impact humanity today.
