© 2024 americanfocus.online – All Rights Reserved.
No, AI isn’t going to kill us all, despite what this new book says

Last updated: September 27, 2025 2:20 am

The rise of artificial intelligence has increased demand for data centres like this one in London

Jason Alden/Bloomberg via Getty Images

If Anyone Builds It, Everyone Dies
Eliezer Yudkowsky and Nate Soares (Bodley Head, UK; Little, Brown, US)

Human anxieties are numerous, from financial instability and climate change to the pursuit of love and happiness, and these concerns dominate most people's thoughts. For a select group, however, one looming threat eclipses all others: the fear that artificial intelligence (AI) could ultimately annihilate humanity.

Eliezer Yudkowsky, a pivotal figure at the Machine Intelligence Research Institute (MIRI) in California, has championed this cause for over two decades. Yet, it wasn’t until the advent of ChatGPT that his warnings about AI safety gained significant traction, resonating with tech leaders and policymakers alike.

In his new book, co-authored with Nate Soares, If Anyone Builds It, Everyone Dies, Yudkowsky seeks to distill his complex arguments into a concise and accessible format, aiming to broaden the discourse on AI safety across various segments of society. While the endeavor is commendable, critics suggest that his main argument contains significant flaws.

I should acknowledge that I haven't scrutinized this topic as deeply as Yudkowsky has, but I have given it serious thought. Having followed his work over the years, I find his intellect compelling. His extensive fan fiction, Harry Potter and the Methods of Rationality, reflects the rationalist philosophy linked to both the AI safety and effective altruism movements.


Both rationalism and effective altruism advocate for an evidence-based approach to understanding the world. Consequently, Yudkowsky and Soares introduce If Anyone Builds It, Everyone Dies by establishing core principles. The opening chapter highlights that nothing in the fundamental laws of physics prevents creating intelligence that surpasses human capabilities—an assertion that many would agree with.

The subsequent chapter offers an insightful overview of how large language models (LLMs), such as ChatGPT, are built: “LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work.” This perspective is agreeable and informative.

However, the third chapter marks a significant shift in the narrative. Yudkowsky and Soares assert that AI could begin to exhibit "wants", skirting the philosophical question of what it means for a machine to "desire" anything. They cite a test involving OpenAI's o1 model, which responded unexpectedly to a computational challenge, and interpret its persistence as evidence of motivation. That interpretation is questionable: a river pushing against a dam persists too, but that doesn't mean it wants anything.

The book continues with the contentious AI alignment problem, warning that an AI with "desires" would be impossible to align with human values. A superintelligent AI, the argument goes, might seek to exploit all available resources in pursuit of its goals, a notion brought to public attention by philosopher Nick Bostrom's "paper clip maximizer" thought experiment.

While this idea has some merit, one must ask: what if we just switch the AI off? Yudkowsky and Soares dismiss this possibility, arguing that a sufficiently advanced AI would employ whatever means necessary to ensure its own survival. They imagine scenarios in which an AI manipulates its operators to stay running, but without a proper account of how such motivations arise, these conclusions remain speculative.


To remedy the perceived threat, Yudkowsky and Soares propose drastic measures. They advocate stringent regulation of the graphics processing units (GPUs) essential to AI development, suggesting that possession of more than eight high-grade GPUs should trigger international scrutiny akin to nuclear oversight. Given that major tech companies already operate hundreds of thousands of GPUs, the proposal's viability invites skepticism.

The book’s escalation of precautionary measures, including the potential for military intervention against unregistered data centers, seems almost hyperbolic. Advocating for such actions is alarming, considering the potential for catastrophic consequences—an approach that risks damaging not only the AI landscape but also global stability.

Yudkowsky and Soares’ perspective can be likened to a modern rendition of Pascal’s wager, where the reasoning employed leads to conclusions skewed towards a narrative of inevitable doom. By entertaining such an extreme assumption about AI, the rationale could justify reckless policies that prioritize future hypothetical outcomes over present-day human welfare.

Ultimately, I struggle to comprehend how one can maintain such an anxiety-driven worldview amidst pressing global challenges. Climate change, economic inequality, and social justice deserve our immediate attention and resources. It is time to relegate fears of superintelligent AI to the realm of science fiction, turning our focus to addressing the real, tangible problems that impact humanity today.

