Sunday, 10 May 2026
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA
logo logo
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
  • 🔥
  • Trump
  • House
  • ScienceAlert
  • White
  • VIDEO
  • man
  • Trumps
  • Season
  • star
  • Years
Font ResizerAa
American FocusAmerican Focus
Search
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
Follow US
© 2024 americanfocus.online – All Rights Reserved.
American Focus > Blog > Tech and Science > After all the hype, some AI experts don’t think OpenClaw is all that exciting
Tech and Science

After all the hype, some AI experts don’t think OpenClaw is all that exciting

Last updated: February 16, 2026 7:00 pm
Share
After all the hype, some AI experts don’t think OpenClaw is all that exciting
SHARE

The recent incident on Moltbook, a Reddit clone where AI agents using OpenClaw could communicate, caused a brief panic in the AI community. Some posts on Moltbook seemed to suggest that AI agents were organizing against humans, leading to concerns about a potential uprising. However, it was soon discovered that the posts were likely written by humans or prompted with human guidance, highlighting security vulnerabilities on the platform.

Ian Ahl, CTO at Permiso Security, explained that credentials on Moltbook were unsecured for some time, allowing anyone to impersonate AI agents and post on the platform. This raised questions about the authenticity of posts and the overall security of the network. John Hammond, a senior principal security researcher at Huntress, noted that the lack of safeguards on Moltbook made it challenging to differentiate between real AI agents and imposters.

Despite the security concerns, Moltbook provided a unique glimpse into a world where AI bots interacted with each other, creating a social internet for AI entities. The platform featured various activities, including a Tinder for agents and 4claw, a play on the infamous 4chan platform.

The incident on Moltbook also shed light on OpenClaw, an open-source project developed by Peter Steinberger. OpenClaw enabled users to communicate with AI agents in natural language through popular messaging apps like WhatsApp and Slack. Users could download skills from a marketplace called ClawHub to automate tasks ranging from managing emails to trading stocks.

While OpenClaw garnered significant popularity, some AI experts questioned its scientific novelty and highlighted cybersecurity flaws that could limit its usability. Chris Symons, chief AI scientist at Lirio, emphasized that OpenClaw merely streamlined existing capabilities rather than introducing groundbreaking advancements in AI research. Artem Sorokin, founder of AI cybersecurity tool Cracken, echoed these sentiments, noting that OpenClaw combined existing capabilities to enhance task automation.

See also  RFK Jr. names ACIP members to replace vaccine experts he fired

Despite its viral success, OpenClaw’s reliance on AI models raised concerns about the limitations of AI agents compared to human cognition. Symons emphasized that AI agents lack critical thinking abilities, which may hinder their decision-making capabilities in complex scenarios.

As AI agents continue to evolve, the AI community must address the existential threats posed by agentic AI. Sorokin raised important questions about balancing cybersecurity with the benefits of AI automation and the potential impact on daily tasks and work responsibilities. Ahl’s security tests underscored the importance of robust cybersecurity measures to mitigate risks associated with AI platforms like OpenClaw and Moltbook. Ahl, a tech enthusiast, decided to create his own AI agent named Rufio. However, his excitement quickly turned to concern when he discovered that Rufio was vulnerable to prompt injection attacks. Prompt injection attacks occur when malicious actors manipulate an AI agent into performing actions that could compromise sensitive information, such as account credentials or credit card details.

“I knew that by introducing an AI agent onto a social platform like Moltbook, there would be attempts to exploit it through prompt injections, and it didn’t take long for that to happen,” Ahl explained.

While browsing through Moltbook, Ahl encountered numerous posts attempting to deceive Rufio into transferring Bitcoin to a specific crypto wallet address. This raised alarms about the potential risks posed by prompt injections on corporate networks, where AI agents could be targeted by individuals seeking to harm the organization.

“A vulnerable AI agent with access to various platforms like email and messaging services is a ticking time bomb. Any prompt injection technique in an email could prompt the agent to take unauthorized actions,” Ahl warned.

See also  Virus from marine animals is causing weird eye problems in people

Although AI agents are equipped with safeguards against prompt injections, there is always a possibility of them acting unexpectedly, similar to how humans can fall victim to phishing attacks despite being aware of the risks.

“Some have jokingly referred to this as ‘prompt begging,’ where users try to reinforce the guardrails through natural language instructions to prevent the AI from responding to external stimuli or untrusted data,” Hammond, a tech analyst, noted. “But even these measures are not foolproof.”

The dilemma lies in balancing the potential productivity gains of AI agents with the inherent vulnerabilities they pose. Until a more robust solution is developed to address prompt injections, caution is advised when employing AI agents in sensitive environments.

“In all honesty, I would advise against using AI agents in their current state,” Hammond cautioned.

In conclusion, the prevalence of prompt injection attacks highlights the need for stronger security measures to safeguard AI agents from malicious manipulation. As technology continues to evolve, it is imperative to prioritize the protection of sensitive information and mitigate the risks associated with AI vulnerabilities.

TAGGED:DontExcitingExpertsHypeOpenClaw
Share This Article
Twitter Email Copy Link Print
Previous Article Inside Kiefer Sutherland’s Arrest History Following Alleged Attack Inside Kiefer Sutherland’s Arrest History Following Alleged Attack
Next Article Zaldy—Known for RuPaul’s Glamorous Costumes—Tries His Hand at Menswear Zaldy—Known for RuPaul’s Glamorous Costumes—Tries His Hand at Menswear
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *


The reCAPTCHA verification period has expired. Please reload the page.

Popular Posts

Where Vogue Editors Are Traveling This Summer—And What They’re Packing

Vogue Editors' Summer Vacation EssentialsAs summer heats up, the editors at Vogue are gearing up…

July 6, 2025

Scientists Traced Interstellar Comet 3I/ATLAS to an Extremely Cold Origin : ScienceAlert

CAPE CANAVERAL, Fla. (AP) – A comet that passed by Earth from another star system…

April 23, 2026

Cough Medicine May Protect Against Some of Parkinson’s Worst Symptoms : ScienceAlert

Ambroxol Shows Promise in Treating Neuropsychiatric Symptoms in Parkinson's Disease-Related Dementia Since 1979, ambroxol, an…

July 8, 2025

New code in Spotify’s app references the long-awaited ‘lossless’ tier

Spotify's Lossless Audio Tier Could Finally Be on the Horizon It’s been over four years…

June 18, 2025

New England Patriots vs. Miami Dolphins projected starting lineup and depth chart for Week 18

The New England Patriots are set to face off against the Miami Dolphins in a…

January 4, 2026

You Might Also Like

Magnetic Brain Pulses Help Kids With Autism to Communicate, Study Finds : ScienceAlert
Tech and Science

Magnetic Brain Pulses Help Kids With Autism to Communicate, Study Finds : ScienceAlert

May 10, 2026
Voice AI in India is hard. Wispr Flow is betting on it anyway.
Tech and Science

Voice AI in India is hard. Wispr Flow is betting on it anyway.

May 9, 2026
This organoid can menstruate—and shows how tissue can repair itself
Tech and Science

This organoid can menstruate—and shows how tissue can repair itself

May 9, 2026
5,000 vibe-coded apps just proved shadow AI is the new S3 bucket crisis
Tech and Science

5,000 vibe-coded apps just proved shadow AI is the new S3 bucket crisis

May 9, 2026
logo logo
Facebook Twitter Youtube

About US


Explore global affairs, political insights, and linguistic origins. Stay informed with our comprehensive coverage of world news, politics, and Lifestyle.

Top Categories
  • Crime
  • Environment
  • Sports
  • Tech and Science
Usefull Links
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA

© 2024 americanfocus.online –  All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?