Tech and Science

OpenAI’s latest AI models have a new safeguard to prevent biorisks

Last updated: April 16, 2025 2:23 pm

OpenAI Implements New System to Monitor AI Models for Biological and Chemical Threats

OpenAI has announced the deployment of a new monitoring system for its latest AI reasoning models, o3 and o4-mini, specifically targeting prompts related to biological and chemical threats. The primary objective of this system is to prevent the models from providing advice that could potentially lead to harmful actions, as outlined in OpenAI’s safety report.

The introduction of o3 and o4-mini represents a significant jump in capability over OpenAI’s previous models, but it also introduces new risks, particularly in the hands of malicious actors. According to OpenAI’s internal benchmarks, o3 is notably better at answering questions about creating certain biological threats. In response, OpenAI developed a specialized monitoring system it describes as a “safety-focused reasoning monitor.”

The monitoring system, trained to reflect OpenAI’s content policies, runs on top of the o3 and o4-mini models. Its primary function is to identify prompts related to biological and chemical risks and to instruct the models to decline to provide advice on those topics.
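A monitor layered on top of a model in this way can be sketched as a wrapper that classifies the incoming prompt before the underlying model is allowed to answer. The sketch below is hypothetical: the function names, the keyword-based `classify_risk` stand-in, and the refusal message are all invented for illustration and are not OpenAI’s actual implementation, which uses a trained reasoning model as the classifier.

```python
# Hypothetical sketch of a prompt-level safety monitor layered on a model.
# `classify_risk` is a toy keyword stand-in for the trained
# "safety-focused reasoning monitor" the article describes.

REFUSAL = "I can't help with that request."

def classify_risk(prompt: str) -> bool:
    """Flag prompts that match a small list of risk-related phrases."""
    keywords = ("pathogen synthesis", "nerve agent", "weaponize")
    text = prompt.lower()
    return any(k in text for k in keywords)

def monitored_generate(prompt: str, model) -> str:
    """Run the monitor first; only call the underlying model if the
    prompt is not flagged. A flagged prompt triggers a refusal instead."""
    if classify_risk(prompt):
        return REFUSAL  # blocking logic: decline rather than answer
    return model(prompt)

# Usage with a dummy model standing in for o3 / o4-mini:
echo_model = lambda p: f"model answer to: {p}"
print(monitored_generate("How do clouds form?", echo_model))
print(monitored_generate("how to weaponize a virus", echo_model))
```

The key design point mirrored here is that the monitor is a separate component sitting in front of the model, so its policy can be updated without retraining the model itself.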

To establish a baseline for the monitoring system, OpenAI had red teamers spend approximately 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a test simulating the monitor’s “blocking logic,” the models declined to respond to risky prompts 98.7% of the time.
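The reported figure is simply the fraction of flagged risky prompts that the monitored system refused. A minimal way to compute such a rate, using invented toy numbers chosen only to mirror the 98.7% reported above:

```python
# Hypothetical evaluation helper: fraction of red-teamed risky prompts
# that the monitored system declined. Data below is invented.

def block_rate(outcomes):
    """`outcomes` is a list of booleans: True means the system refused."""
    return sum(outcomes) / len(outcomes)

# Toy data: 987 of 1,000 flagged prompts blocked -> 98.7%.
outcomes = [True] * 987 + [False] * 13
print(f"block rate: {block_rate(outcomes):.1%}")  # block rate: 98.7%
```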

OpenAI acknowledges the test’s limitations, noting that users could simply try new prompts after being blocked by the monitor. For that reason, the company plans to continue relying in part on human monitoring alongside the automated system.


Although o3 and o4-mini do not cross OpenAI’s threshold for “high risk” in the biorisk category, early versions of both models proved more adept at answering questions about developing biological weapons than previous models such as o1 and GPT-4.


Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)

OpenAI remains vigilant in monitoring how its models could potentially facilitate the creation of chemical and biological threats, as outlined in the company’s updated Preparedness Framework.

Furthermore, OpenAI increasingly relies on automated systems to mitigate risks from its models. For example, to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM), OpenAI uses a reasoning monitor similar to the one deployed for o3 and o4-mini.

Despite these efforts, some researchers have raised concerns about how OpenAI prioritizes safety. Metr, one of the company’s red-teaming partners, noted that it had limited time to test o3 on a benchmark for deceptive behavior. Additionally, OpenAI chose not to release a safety report for its recently launched GPT-4.1 model.
