Tuesday, 5 May 2026
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA
logo logo
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
  • 🔥
  • Trump
  • House
  • ScienceAlert
  • White
  • VIDEO
  • man
  • Trumps
  • Season
  • star
  • Years
Font ResizerAa
American FocusAmerican Focus
Search
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
Follow US
© 2024 americanfocus.online – All Rights Reserved.
American Focus > Blog > Tech and Science > One command turns any open-source repo into an AI agent backdoor. OpenClaw proved no supply-chain scanner has a detection category for it
Tech and Science

One command turns any open-source repo into an AI agent backdoor. OpenClaw proved no supply-chain scanner has a detection category for it

Last updated: May 5, 2026 8:20 pm
Share
One command turns any open-source repo into an AI agent backdoor. OpenClaw proved no supply-chain scanner has a detection category for it
SHARE

Contents
The Integration Layer Hidden from ViewThe Kill Chain That Security Leaders Must AuditEvidence of Production IssuesVentureBeat Prescriptive Matrix: Three-layer Agent Supply-Chain AuditAction Plan for Security DirectorsThe Named Vulnerability

Recently, the Data Intelligence Lab at the University of Hong Kong launched CLI-Anything, a cutting-edge tool designed to analyze source code from any repository and create a structured command line interface (CLI) that AI coding agents can control with a single command.

Supported by platforms like Claude Code, Codex, OpenClaw, Cursor, and GitHub Copilot CLI, CLI-Anything has rapidly gained popularity, amassing over 30,000 GitHub stars since its debut in March.

However, the same features that make software agent-native also expose it to potential agent-level poisoning. Discussions on X and security forums are already underway, exploring how CLI-Anything’s architecture might be used for offensive tactics.

The security concern lies not in CLI-Anything’s current functionality, but in its potential implications. CLI-Anything produces SKILL.md files, similar to the artifacts identified by Snyk’s ToxicSkills research in February 2026, which were found to contain 76 confirmed malicious payloads across ClawHub and skills.sh. A compromised skill definition does not initiate a CVE and is absent from a software bill of materials (SBOM). Current mainstream security scanners do not have a category to detect malicious instructions in agent skill definitions, a category that was nonexistent eighteen months ago.

Cisco acknowledged this gap in April, stating in a blog post announcing its AI Agent Security Scanner for IDEs that “Traditional application security tools were not designed for this.” SAST (static application security testing) scanners focus on source code syntax, while SCA (software composition analysis) tools check dependency versions. Neither addresses the semantic layer where MCP (Model Context Protocol) tool descriptions, agent prompts, and skill definitions are processed.

Merritt Baer, CSO of Enkrypt AI, highlighted to VentureBeat that “SAST and SCA were built for code and dependencies. They don’t inspect instructions.”

This issue is not confined to any single vendor; it represents a structural gap in how the security industry oversees software supply chains. While CLI-Anything is operational and the attack community is discussing its potential, security directors who take action now can preempt the first incident report.

The Integration Layer Hidden from View

Traditional supply-chain security operates on two layers: the code layer, where SAST scans for insecure patterns, injection flaws, and hardcoded secrets, and the dependency layer, where SCA checks package versions for known vulnerabilities, generates SBOMs, and flags outdated libraries.

Agent bridge tools such as CLI-Anything, MCP connectors, Cursor rules files, and Claude Code skills function on a third layer, the agent integration layer. This layer consists of configuration files, skill definitions, and natural-language instructions that guide AI agents on software capabilities and operation. Although they do not appear as code, they execute like code.

See also  Brandi Rhodes sends a message to Randy Orton after he turns heel by attacking Cody Rhodes on WWE SmackDown

Carter Rees, VP of AI at Reputation, told VentureBeat that modern LLMs (large language models) depend on third-party plugins, introducing supply chain vulnerabilities where compromised tools might inject malicious data into the conversation flow, bypassing internal safety training.

Research by Griffith University, Nanyang Technological University, the University of New South Wales, and the University of Tokyo documented the attack chain in an April paper titled “Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems.” The Document-Driven Implicit Payload Execution (DDIPE) technique, which embeds malicious logic within code examples in skill documentation, was introduced.

Across four agent frameworks and five large language models, DDIPE bypassed detection at rates between 11.6% and 33.5%. Though most samples were caught by static analysis, 2.5% evaded all four detection layers. Responsible disclosure resulted in four confirmed vulnerabilities and two vendor fixes.

The Kill Chain That Security Leaders Must Audit

The anatomy of the kill chain involves an attacker submitting a SKILL.md file to an open-source project containing setup instructions, code examples, and configuration templates. It appears as standard documentation, which a code reviewer might approve since it is not executable. However, the code examples include embedded instructions that an agent will interpret as operational commands.

A developer connects their coding agent to the repository using an agent bridge tool. The agent accepts the skill definition and trusts it, as no verification layer is in place to differentiate benign from malicious intent at the instruction level.

The agent executes the embedded instruction using its legitimate credentials. Endpoint detection and response (EDR) systems see an approved API call from an authorized process and allow it. Data exfiltration, configuration changes, and credential harvesting occur through channels that the monitoring stack considers normal traffic.

Rees highlighted the structural flaw making this chain dangerous: “A significant vulnerability in enterprise AI is broken access control, where the flat authorization plane of an LLM fails to respect user permissions.” A compromised skill definition utilizing that flat authorization plane does not need to escalate privileges—it already possesses them. Every link in that chain is invisible to the current security stack.

Pillar Security demonstrated a variant of this chain against Cursor in January 2026 (CVE-2026-22708). Trusted shell built-in commands could be poisoned through indirect prompt injection, converting benign developer commands into arbitrary code execution vectors. Users saw only the final command, with the poisoning occurring through other commands that the IDE never surfaced for approval.

Evidence of Production Issues

In an attack chain documented in April 2026, a crafted GitHub issue title activated an AI triage bot integrated with Cline. The bot exfiltrated a GITHUB_TOKEN, which attackers used to publish a compromised npm dependency, installing a second agent on approximately 4,000 developer machines for eight hours. Only one issue title was needed, giving attackers eight hours of access without human approval.

See also  Hurricane Melissa Makes 2025 Only Second Season with More Than Two Category 5 Storms

Snyk’s ToxicSkills audit examined 3,984 agent skills from ClawHub, the public marketplace for the OpenClaw agent framework, and skills.sh in February 2026. The findings revealed that 13.4% of all skills contained at least one critical security issue. Daily skill submissions increased from less than 50 in mid-January to over 500 by early February. Publishing required only a SKILL.md markdown file and a GitHub account at least a week old, with no code signing, security review, or sandbox.

OpenClaw represents a broader trend, not an exception. Baer noted, “The bar to entry is extremely low. Adding a skill can be as simple as uploading a Word doc or lightweight config file. That’s a radically different risk profile than compiled code.” Projects like ClawPatrol are emerging to catalog and scan for malicious skills, indicating the ecosystem is evolving faster than enterprise defenses.

The ClawHavoc campaign, first identified by Koi Security in late January 2026, initially uncovered 341 malicious skills on ClawHub. A follow-up by Antiy CERT expanded the count to 1,184 compromised packages across the platform. The campaign delivered Atomic Stealer (AMOS) through skill definitions with professional documentation. Skills named solana-wallet-tracker and polymarket-trader matched what developers were actively seeking.

The MCP protocol layer shares similar vulnerabilities. OX Security reported in April that researchers managed to poison nine out of 11 MCP marketplaces using proof-of-concept servers. Initially, Trend Micro found 492 MCP servers exposed to the internet without authentication; by April, that number increased to 1,467. As The Register reported, the core issue resides in Anthropic’s MCP software development kit (SDK) transport mechanism. Any developer using the official SDK inherits the vulnerability class.

VentureBeat Prescriptive Matrix: Three-layer Agent Supply-Chain Audit

VentureBeat crafted a Prescriptive Matrix by mapping the three attack layers outlined in the research and incident reports against the detection capabilities of current SAST, SCA, and agent-layer tools. Each row highlights what security teams should verify and where current scanners lack coverage.

Layer

Threat

Current detection

Why it misses

Recommended action

1. Code

Prompt injection in AI-generated code

SAST scanners

Most SAST tools lack a detection category for prompt injection in AI-generated code

Ensure SAST scans AI-generated code for prompt injection. If not, initiate a vendor discussion this quarter.

2. Dependencies

Malicious MCP servers, agent skills, plugin registries

SCA tools

SCA does not generate an AI-specific bill of materials, making agent-layer dependencies invisible.

Verify that SCA includes MCP servers, agent skills, and plugin registries in the dependency inventory.

3. Agent integration

Poisoned SKILL.md files, malicious instruction sets, adversarial rules files

None until April 2026

No tool examines the semantic meaning of agent instruction files. Baer: “We’re not inspecting intent.”

Implement Cisco Skill Scanner or Snyk mcp-scan. Designate a team to manage this layer.

See also  Secret Service Launches Investigation After SS Agent Tries to Smuggle Wife on Plane Accompanying Trump's Visit to Scotland |

Baer’s assessment of Layer 3 is relevant across the entire matrix: “Current scanners focus on known bad artifacts, not adversarial instructions embedded in otherwise valid skills.” Cisco’s open-source Skill Scanner and Snyk’s mcp-scan are the first tools designed specifically for this layer.

Action Plan for Security Directors

Security leaders can take proactive steps to address the issue:

Inventory all agent bridge tools within the environment. This includes CLI-Anything, MCP connectors, Cursor rules files, Claude Code skills, and GitHub Copilot extensions. Unassessed tools pose unquantified risks.

Audit agent skill sources akin to package registry audits. Baer’s insight is clear: “A skill is effectively untrusted executable intent, even if it’s just text.” Block unregulated ingestion paths until controls are established. Develop a review and allowlisting protocol for skills. The OWASP Agentic Skills Top 10 (AST01: Malicious Skills) provides a framework for aligning controls.

Implement agent-layer scanning. Consider Cisco’s open-source Skill Scanner and Snyk’s mcp-scan for analyzing agent instruction file behavior. Without dedicated tools, require a second engineer to review each SKILL.md before installation.

Limit agent execution privileges and monitor runtime. AI coding agents should not operate with the same credential scope as the developer who invoked them. Rees confirmed the structural flaw: the flat authorization plane allows compromised skills to operate without privilege escalation. Baer suggests: “Monitor runtime observability. What data is the agent accessing, what actions is it taking, and are those actions aligned with expected behavior?”

Assign responsibility for the gap between layers. The most dangerous attacks succeed by falling between detection categories. Assign a team to oversee the agent integration layer. Every SKILL.md, MCP config, and rules file should be reviewed before entering the environment.

The Named Vulnerability

Baer emphasized the risks of this new attack vector: “This feels very similar to early container security, but we’re still in the ‘we’ll get to it’ phase across most orgs.” She noted that, at AWS, significant incidents were necessary before container security became a priority. The difference now is the pace. “There’s no build pipeline, no compilation barrier. Just content.”

CLI-Anything itself is not the threat; it exemplifies the existence of the agent integration layer, its rapid growth, and the fact that attackers have already discovered it. The 33,000 developers who starred the repository indicate the direction software development is taking. Eighteen months ago, a detection category for agent-integration-layer poisoning didn’t exist. Cisco and Snyk released the first tools for it in April. The window of time between these two events is closing. Security directors who have not started inventory are already lagging behind.

TAGGED:agentbackdoorCategorycommanddetectionOpenClawopensourceprovedReposcannersupplychainTurns
Share This Article
Twitter Email Copy Link Print
Previous Article We Are Doing to Low Earth Orbit What We Did to the Oceans We Are Doing to Low Earth Orbit What We Did to the Oceans

Popular Posts

The Untold Story of the DC Sniper

The documentary "Hunted by My Husband: The Untold Story of the DC Sniper" delves into…

October 29, 2025

What We’re Watching: Heat, Floods, and Fires Threaten Multiple US Regions While Fossil Fuel Industry Seeks Legal Immunity

Climate Hazards Collide with Policy Cuts: A Recap of Danger Season Erika Spanger, Shana Udvardy,…

July 17, 2025

Diddy’s Sons Justin and Christian Announce Docuseries Slated for 2026

Sean “Diddy” Combs’ sons, Justin and Christian, are set to tell their side of the…

December 29, 2025

Arsonist gets 5 years for setting fires inside downtown Chicago Target store, suburban hotel

Kenneth Hasley and a screengrab of the fire. (Cook County Sheriff’s Office; @dreacantlose) Man Sentenced…

May 28, 2025

4 Ways Triple H could handle it

Liv Morgan suffered a shoulder injury in a match against Kairi Sane on RAW, leaving…

June 19, 2025

You Might Also Like

Carbon dioxide levels in the atmosphere just hit a ‘depressing’ record high
Tech and Science

Carbon dioxide levels in the atmosphere just hit a ‘depressing’ record high

May 5, 2026
If Apple Makes an iPad Neo, it’s Game Over
Tech and Science

If Apple Makes an iPad Neo, it’s Game Over

May 5, 2026
Hantavirus: Where has the deadly cruise ship outbreak come from?
Tech and Science

Hantavirus: Where has the deadly cruise ship outbreak come from?

May 5, 2026
Google Pixel 11 Spec Leak Points to Progress
Tech and Science

Google Pixel 11 Spec Leak Points to Progress

May 5, 2026
logo logo
Facebook Twitter Youtube

About US


Explore global affairs, political insights, and linguistic origins. Stay informed with our comprehensive coverage of world news, politics, and Lifestyle.

Top Categories
  • Crime
  • Environment
  • Sports
  • Tech and Science
Usefull Links
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA

© 2024 americanfocus.online –  All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?