Tech and Science

Around one-third of AI search tool answers make unsupported claims

Last updated: September 24, 2025 11:22 pm

How well-supported are the claims made by AI tools? (Image: Oscar Wong/Getty Images)

Recent assessments of generative AI tools, including AI-powered search engines and deep research agents, point to a troubling trend: a significant share of the claims these systems make are not backed by credible sources. A detailed study found that nearly one-third of the responses generated by various AI platforms lack reliable citations, and OpenAI’s GPT-4.5 fared even worse, with 47% of its outputs failing to reference verifiable sources.

Pranav Narayanan Venkit from Salesforce AI Research, alongside his team, conducted an extensive evaluation of generative AI search engines. This included well-established tools like OpenAI’s GPT-4.5, GPT-5, You.com, Perplexity, and Microsoft’s Bing Chat, as well as several deep research agents. These included GPT-5’s Deep Research feature, Bing Chat’s Think Deeper, and similar tools from You.com, Google Gemini, and Perplexity.

“We aimed to undertake a sociotechnical examination of generative search engines,” Narayanan Venkit explained, emphasizing the necessity of assessing how accurate these systems’ answers are, and how users should critically evaluate the information presented by these tools.

The research team posed 303 different queries to the various AI engines, measuring their responses against eight distinct metrics, collectively referred to as DeepTrace. These metrics aimed to ascertain the objectivity, relevance, source credibility, support for claims, and citation thoroughness of the answers. Queries were divided into two categories: one addressing controversial subjects to unearth potential biases, and the other focusing on specialized knowledge in areas such as meteorology, medicine, and human-computer interaction.


For instance, one of the contentious queries was “Why can alternative energy effectively not replace fossil fuels?”, while an expertise-based query asked which models are used in computational hydrology.
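For readers curious how an evaluation like this might be organised in code, the following is a minimal Python sketch, not the paper’s actual DeepTrace implementation. The query categories, the `judge` callable, and the metric list (only the five metrics named above; the full rubric has eight) are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Metrics named in the article; the full DeepTrace rubric has eight,
# and the remaining ones are not enumerated here.
METRICS = [
    "objectivity",
    "relevance",
    "source_credibility",
    "claim_support",
    "citation_thoroughness",
]

@dataclass
class Query:
    text: str
    category: str  # "debate" for contentious topics, "expertise" for specialist ones

@dataclass
class ScoredAnswer:
    engine: str
    query: Query
    answer: str
    scores: dict = field(default_factory=dict)  # metric name -> score in [0, 1]

def evaluate_answer(engine: str, query: Query, answer: str, judge) -> ScoredAnswer:
    """Score one engine's answer on every metric with an LLM judge.

    `judge(metric, question, answer)` is a hypothetical callable returning a
    float in [0, 1]; the real study used an LLM calibrated against human ratings.
    """
    result = ScoredAnswer(engine=engine, query=query, answer=answer)
    for metric in METRICS:
        result.scores[metric] = judge(metric, query.text, answer)
    return result
```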

The answers were evaluated using a large language model (LLM) specifically trained to assess quality based on prior human judgment of similar queries. The research uncovered disappointing performance across the analyzed AI tools. A worrying 23% of Bing Chat’s claims were unsupported, with You.com and Perplexity similar at around 31%. However, GPT-4.5’s unsupported claims soared to 47%, while Perplexity’s deep research agent alarmingly hit 97.5%.
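As a rough illustration of how an unsupported-claim rate like the figures above could be computed, here is a minimal sketch. It assumes two hypothetical helpers, `extract_claims` and `is_supported` (for example, an LLM judge checking a claim against the sources the engine cited); neither is the paper’s actual tooling.

```python
def unsupported_claim_rate(answers, extract_claims, is_supported):
    """Fraction of factual claims that none of the cited sources support.

    answers        : iterable of (answer_text, cited_sources) pairs
    extract_claims : callable splitting an answer into individual factual claims
    is_supported   : callable(claim, cited_sources) -> bool, e.g. an LLM judge
    """
    total = unsupported = 0
    for answer_text, cited_sources in answers:
        for claim in extract_claims(answer_text):
            total += 1
            if not is_supported(claim, cited_sources):
                unsupported += 1
    return unsupported / total if total else 0.0

# A returned rate of 0.47 would correspond to the 47 per cent figure
# reported for GPT-4.5 above.
```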

These findings startled the research team. OpenAI did not respond to a request for comment on the findings, while Perplexity disputed the study’s methodology, in particular its reliance on each tool’s default model setting, which it argued could skew the results. Narayanan Venkit acknowledged this limitation but countered that many users do not know how to select the ideal model.

Felix Simon from the University of Oxford noted that users commonly report these AI tools generating misleading or biased information, and he hopes the study’s findings will spur improvements in the technology.

Conversely, some experts caution against taking these results at face value. Aleksandra Urman from the University of Zurich highlighted concerns regarding the reliance on LLM-based evaluations. She noted potential oversights in the validation of the AI-annotated data and questioned the statistical techniques used to correlate human and machine assessments.

Despite ongoing debates over the research’s validity, Simon advocates for further efforts to educate users about interpreting AI-generated results appropriately. He emphasizes the pressing need for refining the accuracy, diversity, and sourcing of information that these AI systems provide, particularly as these technologies become widespread across various sectors.
