Tech and Science

OpenAI’s latest AI models have a new safeguard to prevent biorisks

Last updated: April 16, 2025 2:23 pm

OpenAI Implements New System to Monitor AI Models for Biological and Chemical Threats

OpenAI has announced the deployment of a new monitoring system for its latest AI reasoning models, o3 and o4-mini, targeting prompts related to biological and chemical threats. According to OpenAI’s safety report, the system is designed to stop the models from offering advice that could help someone carry out potentially harmful attacks.

The introduction of o3 and o4-mini represents a significant step up in capability over OpenAI’s previous models, which also creates new risks in the hands of malicious actors. On OpenAI’s internal benchmarks, o3 is notably better at answering questions about creating certain types of biological threats. To address these risks, OpenAI built a specialized monitoring system it describes as a “safety-focused reasoning monitor.”

The monitor, custom-trained to reflect OpenAI’s content policies, runs on top of the o3 and o4-mini models. It is designed to identify prompts associated with biological and chemical risks and instruct the models to refuse to give advice on those topics.
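OpenAI has not published how its reasoning monitor is implemented, but the pattern it describes — a policy classifier screening prompts before (or alongside) the base model, with flagged requests refused — can be sketched in a few lines. Everything below is hypothetical: `classify_risk` stands in for a policy-trained classifier (here a trivial keyword check), and `base_model_reply` stands in for the underlying model.

```python
# Minimal sketch of a "reasoning monitor" layered on top of a chat model.
# Hypothetical illustration only: OpenAI has not released its implementation.

BLOCK_MESSAGE = "I can't help with that request."


def classify_risk(prompt: str) -> bool:
    """Stand-in for a policy-trained classifier that flags
    biological/chemical-threat prompts. Here: a keyword check."""
    flagged_terms = ("synthesize a pathogen", "nerve agent", "weaponize")
    text = prompt.lower()
    return any(term in text for term in flagged_terms)


def base_model_reply(prompt: str) -> str:
    """Stand-in for the underlying model's answer."""
    return f"Model answer to: {prompt}"


def monitored_reply(prompt: str) -> str:
    """Screen the prompt first; only unflagged prompts reach the model."""
    if classify_risk(prompt):
        return BLOCK_MESSAGE
    return base_model_reply(prompt)
```

In a production system the classifier would itself be a trained model evaluating full conversations rather than single keywords, but the control flow — classify, then either refuse or pass through — is the same.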

To establish a baseline for the monitor, OpenAI had red teamers spend approximately 1,000 hours flagging “unsafe” biorisk-related conversations from o3 and o4-mini. In a test simulating the monitor’s “blocking logic,” the models declined to respond to risky prompts 98.7% of the time.

While acknowledging the limitations of the test, OpenAI recognizes the possibility of users attempting new prompts after being blocked by the monitor. Therefore, the company plans to maintain human monitoring alongside the automated system.


Although o3 and o4-mini do not cross OpenAI’s threshold for “high risk” biorisks, early versions of the models proved more helpful at answering questions about developing biological weapons than previous models such as o1 and GPT-4.


Chart from o3 and o4-mini’s system card (Screenshot: OpenAI)

OpenAI remains vigilant in monitoring how its models could potentially facilitate the creation of chemical and biological threats, as outlined in the company’s updated Preparedness Framework.

Furthermore, OpenAI is increasingly relying on automated systems to mitigate risks from its models. For instance, the company uses a reasoning monitor similar to the one deployed for o3 and o4-mini to prevent GPT-4o’s native image generator from creating child sexual abuse material (CSAM).

Despite these efforts, some researchers have raised concerns about how OpenAI prioritizes safety. Metr, one of the company’s red-teaming partners, noted it had relatively little time to test o3 on a benchmark for deceptive behavior. Additionally, OpenAI chose not to release a safety report for its recently launched GPT-4.1 model.
