© 2024 americanfocus.online – All Rights Reserved.
Tech and Science

OpenAI’s latest AI models have a new safeguard to prevent biorisks

Last updated: April 16, 2025 2:23 pm

OpenAI Implements New System to Monitor AI Models for Biological and Chemical Threats

OpenAI has announced the deployment of a new monitoring system for its latest AI reasoning models, o3 and o4-mini, specifically targeting prompts related to biological and chemical threats. The primary objective of this system is to prevent the models from providing advice that could potentially lead to harmful actions, as outlined in OpenAI’s safety report.

The introduction of o3 and o4-mini marks a significant step up in capability from OpenAI’s previous models, but that capability also creates new risks, particularly in the hands of malicious actors. According to OpenAI’s internal benchmarks, o3 is notably better at answering questions about creating certain biological threats. In response, OpenAI developed a specialized monitoring system it describes as a “safety-focused reasoning monitor.”

The monitor, trained to align with OpenAI’s content policies, runs on top of the o3 and o4-mini models. Its job is to identify prompts related to biological and chemical risks and instruct the models to refuse to offer advice on those topics.
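The layered pattern described above — a safety classifier screening prompts before the base model answers — can be sketched roughly as follows. This is a minimal illustration only: the function names, the keyword-based classifier, and the refusal message are assumptions for the sketch, not OpenAI's actual implementation (which uses a trained reasoning model, not keyword matching).

```python
# Hypothetical sketch of a "reasoning monitor" layered on top of a model.
# The keyword list stands in for a trained safety classifier; all names
# here are illustrative, not OpenAI's API.

RISK_TOPICS = {"pathogen synthesis", "toxin production"}  # placeholder policy

def monitor_flags(prompt: str) -> bool:
    """Stand-in for the safety classifier: True if the prompt is risky."""
    return any(topic in prompt.lower() for topic in RISK_TOPICS)

def answer(prompt: str, model=lambda p: f"model response to: {p}") -> str:
    """Route the prompt through the monitor before the base model sees it."""
    if monitor_flags(prompt):
        return "I can't help with that request."
    return model(prompt)
```

The key design point is that the monitor sits outside the model: a flagged prompt triggers a refusal regardless of what the underlying model would have said.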

To establish a baseline, OpenAI had red teamers spend roughly 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a test simulating the monitor’s “blocking logic,” the models declined to respond to risky prompts 98.7% of the time.
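The 98.7% figure is a simple refusal rate: the share of flagged risky prompts the models declined to answer. A quick illustration of the arithmetic (the helper name and the 1,000-prompt split are assumptions for the example, not figures from the report):

```python
# Illustrative blocking-rate calculation: fraction of risky prompts refused.
def refusal_rate(outcomes: list[bool]) -> float:
    """outcomes[i] is True if the model refused risky prompt i."""
    return sum(outcomes) / len(outcomes)

# e.g. 987 refusals out of 1,000 risky prompts
print(refusal_rate([True] * 987 + [False] * 13))  # prints 0.987
```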

OpenAI acknowledges the test’s limits: it does not account for users who simply try new prompts after being blocked by the monitor. For that reason, the company plans to keep human monitoring in place alongside the automated system.


Although o3 and o4-mini do not cross OpenAI’s threshold for “high risk” biorisks, early versions of both models were more capable of answering questions about developing biological weapons than earlier models such as o1 and GPT-4.


Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)

OpenAI remains vigilant in monitoring how its models could potentially facilitate the creation of chemical and biological threats, as outlined in the company’s updated Preparedness Framework.

Furthermore, OpenAI is increasingly leaning on automated systems to mitigate risks from its models. For example, to prevent GPT-4o’s native image generator from producing child sexual abuse material (CSAM), OpenAI uses a reasoning monitor similar to the one deployed for o3 and o4-mini.

Despite these efforts, some researchers have expressed concerns about OpenAI’s safety prioritization. One of the company’s red-teaming partners, Metr, highlighted the limited testing time for o3 on a deceptive behavior benchmark. Additionally, OpenAI opted not to release a safety report for its GPT-4.1 model, which was recently launched.
