OpenAI’s new reasoning AI models hallucinate more

Last updated: April 18, 2025 11:30 pm

OpenAI’s Latest AI Models Still Struggle with Hallucinations

OpenAI recently introduced its o3 and o4-mini AI models, which are considered state-of-the-art in many respects. Yet the new models still face a significant challenge: they hallucinate, making up information even more often than some of OpenAI’s older models.

Hallucinations have long been one of the hardest problems in AI, affecting even the most advanced systems available today. Historically, each new model has hallucinated slightly less than its predecessor, but that trend appears to have reversed with o3 and o4-mini.

According to OpenAI’s internal evaluations, o3 and o4-mini hallucinate more often than the company’s previous reasoning models, including o1, o1-mini, and o3-mini, and more often than its non-reasoning models such as GPT-4o.

One concerning aspect is that OpenAI does not yet know the root cause of the increase. In its technical report for o3 and o4-mini, the company writes that further research is needed to understand why hallucinations get worse as reasoning capabilities scale up. The models excel at certain coding and math tasks, but because they make more claims overall, they produce more accurate claims and more inaccurate, hallucinated ones alike.

OpenAI’s findings show that o3 hallucinates in response to 33% of questions on PersonQA, the company’s benchmark for gauging the accuracy of a model’s knowledge about people. That is roughly double the rate of its previous reasoning models o1 and o3-mini. O4-mini performs even worse on PersonQA, hallucinating 48% of the time.
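
As a rough, illustrative sketch only: this is how a PersonQA-style hallucination rate could be tallied from graded answers. The record format and grade labels below are assumptions made for illustration, not OpenAI’s actual evaluation harness.

    # Hypothetical grading records: one entry per benchmark question.
    # "grade" is assumed to be one of "correct", "hallucinated", or "abstained".
    results = [
        {"question": "Where was the subject born?", "grade": "correct"},
        {"question": "What year did the subject graduate?", "grade": "hallucinated"},
        {"question": "Who employs the subject?", "grade": "abstained"},
    ]

    # Hallucination rate: the share of questions whose answer contained a made-up claim.
    hallucinated = sum(r["grade"] == "hallucinated" for r in results)
    rate = hallucinated / len(results)
    print(f"Hallucination rate: {rate:.0%}")  # prints 33%; o3's reported PersonQA rate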

Third-party testing by Transluce, a nonprofit AI research lab, likewise found that o3 tends to fabricate actions it claims to have taken while arriving at an answer. That behavior raises concerns about the model’s reliability and accuracy in real-world applications.

Neil Chowdhury and Sarah Schwettmann of Transluce suggest that the reinforcement learning techniques used for o-series models may be amplifying these issues, driving up the rate of hallucinations. And while o3 shows promise in coding workflows, testers report that it also hallucinates broken website links, which could undercut its usability.

Although hallucinations can occasionally spark creative ideas, they are a serious problem for businesses that require high accuracy, such as law firms reviewing contracts. One promising way to improve accuracy is to give models web search capabilities: OpenAI’s GPT-4o with web search, for example, achieves 90% accuracy on the SimpleQA benchmark.
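
As a minimal sketch of that web-search approach, the snippet below shows how a search-grounded query might look with OpenAI’s Python SDK. The use of the Responses API with the built-in "web_search_preview" tool reflects recent SDK versions and should be treated as an assumption; tool names and availability may change.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Assumption: the Responses API's built-in web search tool. Letting the
    # model consult live search results can ground factual claims it might
    # otherwise hallucinate.
    response = client.responses.create(
        model="gpt-4o",
        tools=[{"type": "web_search_preview"}],
        input="Summarize OpenAI's latest reasoning models, citing sources.",
    )
    print(response.output_text)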

As the AI industry shifts towards reasoning models for better performance on various tasks, the issue of hallucinations remains a critical area of concern. OpenAI acknowledges the need to address hallucinations across all models and continues to focus on enhancing accuracy and reliability.

In conclusion, reasoning models offer significant benefits, but they also introduce new challenges such as increased hallucination. Striking a balance between performance and accuracy will be crucial for the future development of AI models.
