Tech and Science

No, AI isn’t going to kill us all, despite what this new book says

Last updated: September 27, 2025 2:20 am

The rise of artificial intelligence has increased demand for data centres, like this one at the Global Switch Docklands campus in London (Jason Alden/Bloomberg via Getty Images)

If Anyone Builds It, Everyone Dies
Eliezer Yudkowsky and Nate Soares (Bodley Head, UK; Little, Brown, US)

Human anxieties are many: financial instability, climate change, the pursuit of love and happiness. For a select group, however, one threat eclipses all the others: the fear that artificial intelligence (AI) could ultimately annihilate humanity.

Eliezer Yudkowsky, a pivotal figure at the Machine Intelligence Research Institute (MIRI) in California, has championed this cause for over two decades. Yet, it wasn’t until the advent of ChatGPT that his warnings about AI safety gained significant traction, resonating with tech leaders and policymakers alike.

In his new book, co-authored with Nate Soares, If Anyone Builds It, Everyone Dies, Yudkowsky seeks to distill his complex arguments into a concise and accessible format, aiming to broaden the discourse on AI safety across society. The endeavor is commendable, but his central argument contains significant flaws.

I should say up front that I have not scrutinized this topic as deeply as Yudkowsky has, but I have given it serious thought. Having followed his work over the years, I find his intellect compelling. His extensive fan fiction, Harry Potter and the Methods of Rationality, reflects a philosophical framework linked to both the AI safety and effective altruism movements.


Both rationalism and effective altruism advocate for an evidence-based approach to understanding the world. Consequently, Yudkowsky and Soares introduce If Anyone Builds It, Everyone Dies by establishing core principles. The opening chapter highlights that nothing in the fundamental laws of physics prevents creating intelligence that surpasses human capabilities—an assertion that many would agree with.

The subsequent chapter offers an insightful overview of how large language models (LLMs), such as ChatGPT, are built: “LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work.” This is a fair and informative description.

However, the third chapter marks a significant shift in the narrative. Yudkowsky and Soares assert that AI could begin to exhibit “wants”, skirting around the philosophical question of what it would mean for a machine to “desire” anything. They reference a test involving OpenAI’s o1 model, which responded unexpectedly to a computational challenge, and interpret its persistence as an indication of motivation. But this interpretation is questionable: a river pushing against a dam is persistent too, yet we do not conclude that it wants to get through.

The book continues to discuss the contentious AI alignment problem, warning that an AI with “desires” will be impossible to align with human values. A superintelligent AI, according to this argument, might seek to exploit all available resources to fulfill its ambitions, a notion brought to public attention by philosopher Nick Bostrom’s “paper clip maximizer” hypothetical.

While this idea has some merit, one must ask: what if we just deactivate the AI? Yudkowsky and Soares dismiss this possibility, arguing that a sufficiently advanced AI would employ various means to ensure its survival. Imagining scenarios in which an AI engages in manipulative behavior paints a grim picture, yet without a real understanding of an AI’s motivations, such conclusions remain speculative.


To remedy the perceived threat, Yudkowsky and Soares propose drastic measures. They advocate stringent regulation of the graphics processing units (GPUs) essential for AI development, suggesting that possession of more than eight high-grade GPUs should trigger international scrutiny akin to nuclear oversight. Given that major tech companies already operate clusters of hundreds of thousands of GPUs, the viability of this proposal is doubtful.

The book escalates its precautionary measures to include potential military intervention against unregistered data centres, which verges on the hyperbolic. Advocating such actions is alarming, given the potential for catastrophic consequences: an approach that risks damaging not only the AI landscape but also global stability.

Yudkowsky and Soares’ perspective can be likened to a modern rendition of Pascal’s wager, where the reasoning employed leads to conclusions skewed towards a narrative of inevitable doom. By entertaining such an extreme assumption about AI, the rationale could justify reckless policies that prioritize future hypothetical outcomes over present-day human welfare.

Ultimately, I struggle to comprehend how one can maintain such an anxiety-driven worldview amidst pressing global challenges. Climate change, economic inequality, and social justice deserve our immediate attention and resources. It is time to relegate fears of superintelligent AI to the realm of science fiction, turning our focus to addressing the real, tangible problems that impact humanity today.



© 2024 americanfocus.online –  All Rights Reserved.
