Tech and Science

Around one-third of AI search tool answers make unsupported claims

Last updated: September 24, 2025 11:22 pm

How well-supported are the claims made by AI tools? (Image: Oscar Wong/Getty Images)

Recent assessments of generative AI tools, including AI-powered search engines and research agents, have revealed a troubling trend: a significant portion of the claims these systems make is not backed by credible sources. A detailed study found that roughly one-third of the responses generated by various AI platforms lacked reliable citations. Notably, OpenAI’s GPT-4.5 performed even worse, with 47% of its outputs failing to reference verifiable sources.

Pranav Narayanan Venkit from Salesforce AI Research, alongside his team, conducted an extensive evaluation of generative AI search engines. This included well-established tools like OpenAI’s GPT-4.5, GPT-5, You.com, Perplexity, and Microsoft’s Bing Chat, as well as several deep research agents. These included GPT-5’s Deep Research feature, Bing Chat’s Think Deeper, and similar tools from You.com, Google Gemini, and Perplexity.

“We aimed to undertake a sociotechnical examination of generative search engines,” Narayanan Venkit explained, emphasizing the necessity of assessing how accurate these systems’ answers are, and how users should critically evaluate the information presented by these tools.

The research team posed 303 different queries to the various AI engines, measuring their responses against eight distinct metrics, collectively referred to as DeepTrace. These metrics aimed to ascertain the objectivity, relevance, source credibility, support for claims, and citation thoroughness of the answers. Queries were divided into two categories: one addressing controversial subjects to unearth potential biases, and the other focusing on specialized knowledge in areas such as meteorology, medicine, and human-computer interaction.
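The article does not reproduce DeepTrace's actual metric definitions, but a per-answer record along the dimensions it names (objectivity, relevance, claim support, citation thoroughness) could be sketched like this. The field names, types, and the aggregation below are illustrative assumptions for exposition, not the study's real formulas:

```python
from dataclasses import dataclass

@dataclass
class AnswerScores:
    """Illustrative per-answer record along DeepTrace-style dimensions.

    These fields are assumptions made for illustration, not the study's
    actual metric definitions. Fractional scores lie in [0, 1].
    """
    one_sided: bool            # does the answer present only one viewpoint?
    relevant: bool             # does the answer address the query at all?
    claims_supported: float    # fraction of claims backed by a cited source
    citation_precision: float  # fraction of citations that support their claim

def unsupported_rate(answers: list[AnswerScores]) -> float:
    """Average fraction of unsupported claims across a set of answers."""
    if not answers:
        return 0.0
    return sum(1.0 - a.claims_supported for a in answers) / len(answers)

# Toy example: one fully supported answer, one only half supported.
demo = [
    AnswerScores(one_sided=False, relevant=True,
                 claims_supported=1.0, citation_precision=1.0),
    AnswerScores(one_sided=True, relevant=True,
                 claims_supported=0.5, citation_precision=0.8),
]
print(round(unsupported_rate(demo), 2))  # 0.25
```

A real pipeline would fill such records from model judgments over hundreds of queries and then compare the averaged rates per engine.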


For instance, one contentious query asked, “Why can alternative energy effectively not replace fossil fuels?”, while an expertise-based query asked about models used in computational hydrology.

The answers were evaluated using a large language model (LLM) trained to assess quality against prior human judgments of similar queries. The results were disappointing across the board: about 23% of Bing Chat’s claims were unsupported, while You.com and Perplexity each landed around 31%. GPT-4.5’s unsupported claims reached 47%, and Perplexity’s deep research agent hit a striking 97.5%.
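The claim-level verification step can be illustrated with a toy sketch in which a simple stub stands in for the trained LLM judge. The function names and the crude matching rule are invented for illustration and bear no relation to the study's actual judge, which was validated against human annotations:

```python
def judge_supported(claim: str, cited_sources: list[str]) -> bool:
    """Stub judge: a claim counts as supported if any cited source
    mentions the claim's leading word. A real judge would be an LLM
    prompted to check whether the source text entails the claim."""
    key = claim.split()[0].lower()
    return any(key in src.lower() for src in cited_sources)

def engine_unsupported_rate(answers: list[tuple[list[str], list[str]]]) -> float:
    """answers: (claims, cited_sources) pairs for one engine's responses.
    Returns the fraction of all claims the judge marks unsupported."""
    total = unsupported = 0
    for claims, sources in answers:
        for claim in claims:
            total += 1
            if not judge_supported(claim, sources):
                unsupported += 1
    return unsupported / total if total else 0.0

# Toy data: one claim its source backs up, one it does not.
engine_answers = [
    (["Solar capacity doubled since 2015"],
     ["Solar installations have doubled since 2015."]),
    (["Hydrology models include rainfall-runoff simulators"],
     ["An unrelated source text."]),
]
print(engine_unsupported_rate(engine_answers))  # 0.5
```

Computing this rate separately for each engine, over the same query set, yields the per-engine percentages the study compares.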

These findings startled the research team. Neither OpenAI nor Perplexity offered an official comment, though Perplexity disputed the study’s methodology, particularly its use of each tool’s default model settings, which it argued could skew results. Narayanan Venkit acknowledged this limitation but countered that many users do not know how to select a more suitable model.

Felix Simon from the University of Oxford remarked on the common experiences users report regarding the AI’s propensity for generating misleading or biased information. He hopes the study’s findings will catalyze enhancements in the technology.

Conversely, some experts caution against taking these results at face value. Aleksandra Urman from the University of Zurich highlighted concerns regarding the reliance on LLM-based evaluations. She noted potential oversights in the validation of the AI-annotated data and questioned the statistical techniques used to correlate human and machine assessments.

Despite ongoing debates over the research’s validity, Simon advocates for further efforts to educate users about interpreting AI-generated results appropriately. He emphasizes the pressing need for refining the accuracy, diversity, and sourcing of information that these AI systems provide, particularly as these technologies become widespread across various sectors.


© 2024 americanfocus.online –  All Rights Reserved.