Tech and Science

OpenAI’s new reasoning AI models hallucinate more

Last updated: April 18, 2025 11:30 pm

OpenAI’s Latest AI Models Still Struggle with Hallucinations

OpenAI recently introduced its o3 and o4-mini AI models, which are considered state-of-the-art in many respects. Yet the new models still face a significant challenge: they hallucinate, or make up information, even more often than some of OpenAI's older models.

Hallucinations have long been one of the hardest problems in AI, affecting even the most advanced systems available today. Traditionally, each new model has hallucinated slightly less than its predecessor. With o3 and o4-mini, however, that trend appears to have reversed.

According to OpenAI’s internal evaluations, the reasoning models o3 and o4-mini exhibit a higher rate of hallucinations compared to the company’s previous reasoning models like o1, o1-mini, and o3-mini, as well as the non-reasoning models such as GPT-4o.

One concerning aspect is that OpenAI is still uncertain about the root cause of the increase. In its technical report for o3 and o4-mini, OpenAI states that further research is needed to understand why hallucinations grow as reasoning capabilities scale up. While the models excel at certain coding and math tasks, they also make more claims overall, which yields more accurate claims but also more inaccurate, hallucinated ones.

OpenAI’s findings reveal that o3 hallucinates in response to 33% of questions on PersonQA, the company's benchmark for gauging the accuracy of a model's knowledge about people. That rate is roughly double that of previous reasoning models like o1 and o3-mini. Surprisingly, o4-mini performs even worse on PersonQA, hallucinating 48% of the time.
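To make these percentages concrete, here is a minimal sketch of how a PersonQA-style hallucination rate can be computed: the fraction of attempted answers that turn out to be wrong. This is a hypothetical illustration, not OpenAI's actual evaluation code, and the sample data is invented.

```python
def hallucination_rate(results):
    """results: list of (attempted, correct) booleans, one per question.

    A hallucination is counted when the model attempts an answer
    and that answer is incorrect. Unattempted questions (abstentions)
    are excluded from the denominator.
    """
    attempted = [correct for attempted_q, correct in results if attempted_q]
    if not attempted:
        return 0.0
    wrong = sum(1 for correct in attempted if not correct)
    return wrong / len(attempted)

# Toy run: 3 attempted answers, 1 of them wrong, 1 abstention.
sample = [(True, True), (True, False), (True, True), (False, False)]
print(round(hallucination_rate(sample), 2))  # 0.33
```

Under this definition, a model that answers more questions (as o3 and o4-mini tend to do) exposes itself to more chances to be wrong, which is one way the report's observation about "more claims overall" translates into a higher measured rate.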


Third-party testing conducted by Transluce, a nonprofit AI research lab, also highlighted o3’s tendency to fabricate actions it supposedly took to arrive at answers. This behavior raises concerns about the model’s reliability and accuracy in real-world applications.

Experts like Neil Chowdhury and Sarah Schwettmann from Transluce suggest that the reinforcement learning techniques used in the o-series models may be amplifying these issues, leading to an increased rate of hallucinations. And while o3 shows promise in coding workflows, it also hallucinates broken website links, which could limit its usability.

Although hallucinations can sometimes lead to creative ideas, they pose a significant challenge for businesses that require high accuracy, such as law firms reviewing contracts. One potential way to improve accuracy is to incorporate web search: OpenAI's GPT-4o with web search achieves 90% accuracy on SimpleQA.

As the AI industry shifts towards reasoning models for better performance on various tasks, the issue of hallucinations remains a critical area of concern. OpenAI acknowledges the need to address hallucinations across all models and continues to focus on enhancing accuracy and reliability.

In conclusion, while reasoning models offer significant benefits, they also bring about new challenges such as increased hallucinations. Finding a balance between performance and accuracy will be crucial for the future development of AI models.
