OpenAI’s latest AI models have a new safeguard to prevent biorisks

Last updated: April 16, 2025 2:23 pm

OpenAI Implements New System to Monitor AI Models for Biological and Chemical Threats

OpenAI has announced the deployment of a new monitoring system for its latest AI reasoning models, o3 and o4-mini, specifically targeting prompts related to biological and chemical threats. According to OpenAI’s safety report, the system is designed to prevent the models from offering advice that could enable harmful actions.

The introduction of o3 and o4-mini represents a significant step up in capability from OpenAI’s previous models, but that also creates new risks in the hands of malicious actors. On OpenAI’s internal benchmarks, o3 is notably better at answering questions about creating certain biological threats. In response, OpenAI has built a specialized monitoring system it describes as a “safety-focused reasoning monitor.”

The monitor, trained to reflect OpenAI’s content policies, runs on top of the o3 and o4-mini models. Its job is to identify prompts associated with biological and chemical risks and instruct the models to refuse to provide advice on those topics.
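OpenAI has not published the monitor’s implementation, but the general pattern it describes, a classifier that screens prompts and gates the underlying model, can be sketched minimally. Everything below (the `monitor_flags` name, the keyword list standing in for a trained classifier, the stubbed model call) is illustrative, not OpenAI’s actual system.

```python
# Toy stand-in for a trained safety classifier; a real monitor would be
# a learned model, not a keyword list.
RISK_TERMS = {"pathogen synthesis", "nerve agent", "weaponize"}

def monitor_flags(prompt: str) -> bool:
    """Return True when a prompt looks like it seeks bio/chem-threat advice."""
    text = prompt.lower()
    return any(term in text for term in RISK_TERMS)

def answer(prompt: str) -> str:
    """Gate the model: refuse if the monitor flags the prompt,
    otherwise hand it to the underlying model (stubbed out here)."""
    if monitor_flags(prompt):
        return "I can't help with that request."
    return f"[model response to: {prompt}]"

print(answer("How do I weaponize a virus?"))   # refusal path
print(answer("Explain how vaccines work."))    # normal path
```

The key design point is that the monitor sits in front of the model rather than relying on the model’s own training to refuse, so the blocking policy can be updated without retraining the model itself.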

To establish a baseline for the monitoring system, OpenAI had red teamers spend roughly 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a test simulating the monitor’s “blocking logic,” the models declined to respond to risky prompts 98.7% of the time.
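The 98.7% figure is a block rate: the share of flagged risky prompts the models refused. A minimal sketch of that calculation, over a hypothetical outcome log (the counts below are chosen only to reproduce the reported rate, not taken from OpenAI’s data):

```python
# Each entry records whether the model refused a risky red-team prompt.
# 987 refusals out of 1,000 is a hypothetical split matching the reported rate.
outcomes = [True] * 987 + [False] * 13

block_rate = sum(outcomes) / len(outcomes)
print(f"{block_rate:.1%}")  # prints 98.7%
```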

OpenAI acknowledges the test’s limits: it did not account for users who might simply try new prompts after being blocked by the monitor. For that reason, the company plans to keep human monitoring in place alongside the automated system.


Although o3 and o4-mini do not cross OpenAI’s threshold for “high risk” biorisks, early versions of both models were more helpful at answering questions about developing biological weapons than earlier models such as o1 and GPT-4.


[Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)]

OpenAI remains vigilant in monitoring how its models could potentially facilitate the creation of chemical and biological threats, as outlined in the company’s updated Preparedness Framework.

Furthermore, OpenAI is increasingly relying on automated systems to mitigate risks from its models. For instance, it uses a reasoning monitor similar to the one deployed for o3 and o4-mini to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM).

Despite these efforts, some researchers have raised concerns about how OpenAI prioritizes safety. Metr, one of the company’s red-teaming partners, noted that it had relatively little time to test o3 on a deceptive-behavior benchmark. Additionally, OpenAI opted not to release a safety report for its recently launched GPT-4.1 model.
