© 2024 americanfocus.online – All Rights Reserved.
Tech and Science

OpenAI’s latest AI models have a new safeguard to prevent biorisks

Last updated: April 16, 2025 2:23 pm

OpenAI Implements New System to Monitor AI Models for Biological and Chemical Threats

OpenAI has announced a new monitoring system for its latest AI reasoning models, o3 and o4-mini, aimed at prompts related to biological and chemical threats. The system is designed to prevent the models from offering advice that could enable harmful attacks, according to OpenAI's safety report.

The introduction of o3 and o4-mini marks a meaningful capability increase over OpenAI's previous models, but it also creates new risks in the hands of malicious actors. On OpenAI's internal benchmarks, o3 is more skilled at answering questions about creating certain types of biological threats. To mitigate these risks, OpenAI built the new system, which it describes as a "safety-focused reasoning monitor."

The monitor, trained to reason about OpenAI's content policies, runs on top of the o3 and o4-mini models. It is designed to identify prompts related to biological and chemical risk and instruct the models to refuse to offer advice on those topics.
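Conceptually, the monitor sits as a gating layer between the user's prompt and the model's response. The sketch below illustrates that architecture only; every name in it is hypothetical, and the keyword check is a stand-in — OpenAI's actual monitor is itself a trained reasoning model, not a filter like this.

```python
# Hypothetical illustration of a "safety monitor as gating layer".
# None of these names correspond to OpenAI's real API.

RISK_KEYWORDS = {"synthesize pathogen", "weaponize", "toxin production"}

def check_biorisk(prompt: str) -> bool:
    """Stand-in for the safety-focused reasoning monitor: flags prompts
    matching known biorisk patterns. The real monitor is a trained model,
    not a keyword list."""
    lowered = prompt.lower()
    return any(keyword in lowered for keyword in RISK_KEYWORDS)

def generate(prompt: str) -> str:
    """Placeholder for the underlying model call (o3 / o4-mini)."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    """Run the monitor first; return a refusal if the prompt is flagged."""
    if check_biorisk(prompt):
        return "I can't help with that request."
    return generate(prompt)
```

The key design point is that the refusal decision is made by a separate component layered on top of the model, rather than relying solely on the model's own training to decline.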

To establish a baseline, OpenAI had red teamers spend roughly 1,000 hours flagging "unsafe" biorisk-related conversations from o3 and o4-mini. In a test simulating the monitor's "blocking logic," the models declined to respond to risky prompts 98.7% of the time.
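A 98.7% block rate still implies a small residue of risky prompts that evade the monitor. A back-of-the-envelope sketch makes the scale concrete — the prompt counts here are hypothetical, since the report gives red-teaming hours rather than prompt totals:

```python
def expected_misses(n_prompts: int, block_rate: float = 0.987) -> float:
    """Expected number of risky prompts that slip past the monitor,
    assuming the measured block rate holds for this batch."""
    return n_prompts * (1 - block_rate)

# For a hypothetical batch of 10,000 risky prompts at a 98.7% block
# rate, roughly 130 would be expected to get through.
print(round(expected_misses(10_000)))
```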

OpenAI acknowledges the test's limitations — in particular, that users blocked by the monitor may simply try new prompts — and therefore plans to keep human monitoring in place alongside the automated system.


Although o3 and o4-mini do not cross OpenAI's "high risk" threshold for biorisks, early versions of both models proved more capable at answering questions about developing biological weapons than earlier models such as o1 and GPT-4.


Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)

OpenAI remains vigilant in monitoring how its models could potentially facilitate the creation of chemical and biological threats, as outlined in the company’s updated Preparedness Framework.

Furthermore, OpenAI is increasingly leaning on automated systems to mitigate risks from its models. For instance, the company uses a reasoning monitor similar to the one deployed for o3 and o4-mini to prevent GPT-4o's native image generator from creating child sexual abuse material (CSAM).

Despite these efforts, some researchers have raised concerns that OpenAI isn't prioritizing safety as much as it should. One of the company's red-teaming partners, Metr, noted that it had relatively little time to test o3 on a benchmark for deceptive behavior. Additionally, OpenAI opted not to release a safety report for its recently launched GPT-4.1 model.

