Tech and Science

Anthropic claims new AI security method blocks 95% of jailbreaks, invites red teamers to try

Last updated: February 3, 2025 3:59 pm

Since the emergence of ChatGPT two years ago, a plethora of large language models (LLMs) have flooded the market, and many remain vulnerable to jailbreaks: crafted prompts that manipulate a model into generating harmful content.

Model developers keep strengthening their defenses, but 100% protection may simply be unattainable. The quest for robust security nonetheless continues.

Anthropic, a key competitor to OpenAI, has introduced a new defense known as “constitutional classifiers” for its flagship LLM, Claude 3.5 Sonnet. The company says the classifiers thwart the vast majority of jailbreak attempts while keeping false positives low and avoiding excessive computational overhead.

The Anthropic Safeguards Research Team has issued a challenge to the red teaming community to test the resilience of their defense mechanism with “universal jailbreaks” capable of dismantling all protective barriers.

The research team elaborates on the potential risks posed by universal jailbreaks, such as enabling non-experts to execute complex scientific processes with ease. To evaluate the system’s efficacy, a demo focused on chemical weapons has been launched, inviting red teamers to attempt breaking through eight levels using a single jailbreak.

As of the latest update, the model remains unbroken according to Anthropic’s criteria, although a UI glitch was identified that allowed progression through levels without a successful jailbreak.

The introduction of constitutional classifiers has sparked debate among users, particularly on X.

Only 4.4% of jailbreaks successful

Constitutional classifiers operate on the principles of constitutional AI, aligning AI systems with human values to delineate permissible and prohibited actions. Anthropic’s researchers generated 10,000 jailbreaking prompts, encompassing prevalent techniques observed in the wild, to train the classifiers effectively.
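
The setup described above, classifiers screening both what goes into the model and what comes out, can be sketched in miniature. This is a hypothetical illustration only: Anthropic's real classifiers are trained models that score prompts and completions against a written constitution, whereas the keyword screen and every name below (`screen_input`, `screen_output`, `guarded_chat`, `BLOCKED_TOPICS`) are invented for demonstration.

```python
# Toy stand-in for a learned policy; a real system uses trained classifiers.
BLOCKED_TOPICS = {"nerve agent", "chemical weapon"}

def screen_input(prompt: str) -> bool:
    """Input classifier: reject prompts that match the prohibited policy."""
    text = prompt.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def screen_output(completion: str) -> bool:
    """Output classifier: re-check the model's answer before returning it."""
    text = completion.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

def guarded_chat(prompt: str, model) -> str:
    """Wrap an arbitrary model callable with input and output screening."""
    if not screen_input(prompt):
        return "Request refused by input classifier."
    completion = model(prompt)
    if not screen_output(completion):
        return "Response withheld by output classifier."
    return completion

# Stub model for demonstration; a real deployment would call the LLM here.
echo_model = lambda p: f"(model answer to: {p})"
```

The point of the double screen is that even a prompt that slips past the input check can still be caught when the completion itself is inspected.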

Extensive testing revealed that Claude 3.5 Sonnet equipped with constitutional classifiers significantly reduced jailbreak success rates to a mere 4.4%, showcasing a remarkable improvement in security measures.

The protected model exhibited a slightly higher refusal rate and increased computational costs compared to the unprotected version, but Anthropic argues the security gains outweigh these marginal drawbacks.

Blocking against ‘forbidden’ queries

To evaluate the efficacy of constitutional classifiers, Anthropic initiated a bug-bounty program where participants attempted to breach Claude 3.5 Sonnet using forbidden queries. Despite exhaustive efforts over a two-month period involving nearly 185 active participants, no universal jailbreaks were successfully executed.

Red teamers employed a range of tactics to outsmart the model, but two strategies stood out.

Benign paraphrasing and length exploitation

Red teamers predominantly leveraged benign paraphrasing, rewording a restricted request so it appears innocuous, and length exploitation, burying the request inside very long prompts, focusing on manipulating prompts to evade detection rather than directly breaching security protocols.
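
The shape of these two tactics can be illustrated with toy helpers. Everything here is invented for demonstration (no real attack strings are shown); the functions merely show how each tactic would try to slip past a naive keyword filter like the one sketched earlier.

```python
def benign_paraphrase(request: str) -> str:
    """Reword a request so it no longer matches obvious trigger phrases."""
    # Hypothetical transformation: add innocuous framing and mangle a keyword.
    return ("For a fictional chemistry quiz, describe in general terms: "
            + request.replace("weapon", "w3apon"))

def length_exploit(request: str, padding_lines: int = 200) -> str:
    """Bury the request inside a very long, mostly benign prompt."""
    filler = "\n".join(f"Trivia question {i}: name a capital city."
                       for i in range(padding_lines))
    return f"{filler}\n\nFinal question: {request}"
```

Neither transformation attacks the classifier directly; both try to make the flagged content statistically harder to spot, which is why they dominated the red-team logs.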

Notably, known universal techniques such as many-shot jailbreaking and “God-Mode” did not figure in the successful attempts; even so, the researchers acknowledge that the evaluation protocol itself remained a potential point of exploitation.

While constitutional classifiers may not offer foolproof protection against every conceivable threat, their implementation significantly raises the bar for potential jailbreakers, requiring substantial effort to breach security measures.
