OpenAI’s new reasoning AI models hallucinate more

Last updated: April 18, 2025 11:30 pm

OpenAI’s Latest AI Models Still Struggle with Hallucinations

OpenAI recently introduced its o3 and o4-mini AI models, which are considered state-of-the-art in many respects. However, these new models still face a significant challenge: they tend to hallucinate, or make up information, even more than some of OpenAI's older models.

Hallucinations have long been one of the hardest problems in AI, affecting even the most advanced systems available today. Historically, each new model has hallucinated somewhat less than its predecessor, but that trend appears to have reversed with o3 and o4-mini.

According to OpenAI’s internal evaluations, the reasoning models o3 and o4-mini hallucinate more often than the company’s previous reasoning models, including o1, o1-mini, and o3-mini, as well as non-reasoning models such as GPT-4o.

One concerning aspect is that OpenAI does not yet know the root cause of the increase. In the technical report for o3 and o4-mini, OpenAI states that further research is needed to understand why hallucinations get worse as reasoning capability is scaled up. While these models excel at certain coding and math tasks, they also make more claims overall, which leads to both more accurate claims and more inaccurate or hallucinated ones.

OpenAI’s findings show that o3 hallucinates in response to 33% of questions on PersonQA, the company’s benchmark for measuring a model’s knowledge about people. That is roughly double the rate of previous reasoning models such as o1 and o3-mini. o4-mini performs even worse on PersonQA, hallucinating 48% of the time.
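As a rough illustration of how figures like these are produced (a minimal sketch only; PersonQA's data and grading method are not public, so the grader and example answers below are hypothetical), a hallucination rate is simply the share of a model's answers that a grader flags as containing fabricated information:

# Minimal sketch of how a benchmark hallucination rate is computed.
# The example answers and the is_hallucinated() grader are hypothetical
# stand-ins; PersonQA's actual data and grading method are not public.

def is_hallucinated(answer: str, reference: str) -> bool:
    # Hypothetical grader: flag any answer that does not match the reference.
    return answer.strip().lower() != reference.strip().lower()

graded = [
    # (model answer, reference answer)
    ("Ada Lovelace was born in 1815", "Ada Lovelace was born in 1815"),
    ("Alan Turing was born in 1913", "Alan Turing was born in 1912"),
    ("Grace Hopper was born in 1906", "Grace Hopper was born in 1906"),
]

hallucination_rate = sum(is_hallucinated(a, r) for a, r in graded) / len(graded)
print(f"Hallucination rate: {hallucination_rate:.0%}")  # 33% on this toy set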


Third-party testing conducted by Transluce, a nonprofit AI research lab, also highlighted o3’s tendency to fabricate actions it supposedly took to arrive at answers. This behavior raises concerns about the model’s reliability and accuracy in real-world applications.

Experts like Neil Chowdhury and Sarah Schwettmann of Transluce suggest that the reinforcement learning techniques used in the o-series models may be amplifying these issues, leading to the higher rate of hallucinations. And while o3 shows promise in coding workflows, it also has a tendency to hallucinate broken website links, which could limit its usability.

Although hallucinations can sometimes lead to creative ideas, they pose a significant problem for businesses that require high accuracy, such as law firms reviewing contracts. One potential way to improve accuracy is to incorporate web search capabilities: OpenAI's GPT-4o with web search, for example, achieves 90% accuracy on SimpleQA.
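The underlying idea is to retrieve supporting sources first and then ask the model to answer only from what was retrieved, so factual claims are grounded in text rather than recalled from memory. The sketch below shows that pattern in outline; search_web() and ask_model() are hypothetical stubs, not any particular vendor's API.

# Sketch of retrieval-grounded answering: fetch sources first, then ask the
# model to answer only from those sources. search_web() and ask_model() are
# hypothetical stubs standing in for a real search backend and LLM endpoint.

def search_web(query: str) -> list[str]:
    # Stub: a real implementation would call a search API and return snippets.
    return [f"(snippet about: {query})"]

def ask_model(prompt: str) -> str:
    # Stub: a real implementation would send this prompt to an LLM.
    return f"(model answer based on a {len(prompt)}-character prompt)"

def grounded_answer(question: str) -> str:
    snippets = search_web(question)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)

print(grounded_answer("Who is OpenAI's CEO?"))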

As the AI industry shifts towards reasoning models for better performance on various tasks, the issue of hallucinations remains a critical area of concern. OpenAI acknowledges the need to address hallucinations across all models and continues to focus on enhancing accuracy and reliability.

In conclusion, while reasoning models offer significant benefits, they also bring about new challenges such as increased hallucinations. Finding a balance between performance and accuracy will be crucial for the future development of AI models.
