Tech and Science

UC San Diego, Tsinghua University researchers just made AI way better at knowing when to ask for help

Last updated: November 5, 2024 8:05 am

Computer scientists have devised a technique that improves artificial intelligence's ability to decide when to use external tools and when to rely on its own internal knowledge, mirroring the problem-solving approach of human experts.



The study, from the University of California San Diego and Tsinghua University, reports a 28% increase in accuracy when AI systems are trained to balance internal knowledge with external tools, a capability that is crucial for deploying AI in scientific work.



How researchers trained AI to improve decision-making



“While integrating LLMs with tools can enhance reliability, this approach often leads to excessive reliance on tools, diminishing the model’s capacity to solve simple problems through basic reasoning,” the researchers explain in their study. “In contrast, human experts assess problem complexity using domain knowledge before selecting an appropriate solution approach.”



The new technique, called "Adapting While Learning," trains AI systems in two steps. First, the model learns directly from solutions generated with external tools, internalizing domain knowledge. Then it learns to classify problems as either "easy" or "difficult" and to decide whether to use a tool based on that classification.
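
As a rough illustration of that second step, here is a minimal sketch of how such a trained model might route a single problem at inference time. All names and interfaces below (model.classify, model.generate, tool.run) are hypothetical placeholders, not code from the paper.

```python
# Hypothetical sketch of the inference-time decision: classify the problem,
# then either answer directly or route the heavy computation to a tool.
# These interfaces are illustrative assumptions, not the paper's code.

def answer(problem: str, model, tool) -> str:
    difficulty = model.classify(problem)  # assumed to return "easy" or "difficult"

    if difficulty == "easy":
        # Solve directly using the knowledge internalized during training.
        return model.generate(problem)

    # For difficult problems, call the external tool and let the model
    # turn the tool's output into a final answer.
    tool_result = tool.run(problem)
    return model.generate(problem, context=tool_result)
```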





The two-step process researchers developed to teach AI systems when to use tools versus rely on internal knowledge, mirroring how human experts approach problem-solving. (Credit: UC San Diego / Tsinghua University)



Compact AI model surpasses larger systems for intricate tasks



What sets this advancement apart is its focus on efficiency. By utilizing a language model with just 8 billion parameters – significantly smaller than industry behemoths like GPT-4 – the researchers achieved a 28.18% enhancement in answer accuracy and a 13.89% increase in tool usage precision across their test datasets. The model exhibited notable proficiency in specialized scientific tasks, outperforming larger models in specific domains.



This success challenges the conventional belief in AI development that bigger models equate to superior outcomes. Instead, the study indicates that teaching AI the discernment between using tools and relying on internal knowledge – akin to instructing a junior scientist on when to trust their calculations versus consulting specialized equipment – may be more critical than sheer computational power.





Examples of how the AI system handles different types of climate science problems: a simple temperature calculation (top) and a complex maritime routing challenge (bottom). (Credit: UC San Diego / Tsinghua University)



The emergence of compact, intelligent AI models



This study aligns with the broader industry trend towards more streamlined AI models in 2024. Leading entities such as Hugging Face, Nvidia, OpenAI, Meta, Anthropic, and H2O.ai have all introduced smaller yet highly capable models this year.



Hugging Face’s SmolLM2, with versions as small as 135 million parameters, can operate directly on smartphones. H2O.ai’s concise document analysis models have surpassed tech giants’ larger systems in specialized tasks. Even OpenAI has entered the realm of small models with GPT-4o Mini, offering comparable capabilities at a reduced cost.



This shift towards “AI downsizing” reflects the growing realization that smaller models can often match or exceed the performance of larger counterparts while utilizing significantly fewer computational resources.



The technical approach involves two distinct learning phases. In the first, which the researchers term "World Knowledge Distillation" (WKD), the model learns from solutions produced using external tools, building up internal expertise.
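
A minimal sketch of that first phase, under the assumption that WKD amounts to supervised fine-tuning on tool-generated solutions (the solver and fine-tuning interfaces here are hypothetical):

```python
# Hypothetical sketch of the WKD idea: run an external solver over training
# problems and use its solutions as fine-tuning targets, so the model
# internalizes the domain knowledge the tool encodes.

def build_wkd_dataset(problems, solver):
    dataset = []
    for problem in problems:
        solution = solver.solve(problem)  # the external tool does the real work
        dataset.append({"prompt": problem, "target": solution})
    return dataset

# The resulting pairs would then feed an ordinary fine-tuning run, e.g.:
# fine_tune(model, build_wkd_dataset(train_problems, physics_solver))
```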



The second phase, "Tool Usage Adaptation" (TUA), trains the system to classify problems based on how confidently and accurately it can solve them directly. For simpler problems, it keeps the same direct-solution approach as in WKD; for more challenging ones, it learns to switch to external tools.
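
Continuing the sketch, here is a hedged illustration of how such TUA training labels might be constructed: probe the model's own accuracy on each problem, then set the training target to either a direct answer or a tool call. The threshold, sampling count, and tool-call format are assumptions made for illustration only.

```python
# Hypothetical sketch of TUA label construction: measure how often the model
# answers a problem correctly on its own, mark it "easy" or "difficult",
# and build targets that call a tool only for the difficult ones.

def build_tua_dataset(problems, model, solver, n_samples=8, threshold=0.8):
    dataset = []
    for item in problems:  # each item: {"prompt": ..., "answer": ...}
        attempts = [model.generate(item["prompt"]) for _ in range(n_samples)]
        accuracy = sum(a == item["answer"] for a in attempts) / n_samples

        if accuracy >= threshold:
            target = item["answer"]  # "easy": keep the direct solution, as in WKD
        else:
            # "difficult": the target routes through the external tool
            target = "<tool_call>" + solver.solve(item["prompt"]) + "</tool_call>"

        dataset.append({"prompt": item["prompt"], "target": target})
    return dataset
```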



Business implications: Enhanced efficiency in complex scientific AI systems



For enterprises implementing AI systems, this study addresses a longstanding challenge in the industry. Existing AI systems typically sit at one of two extremes: they either over-rely on external tools, driving up computational costs and slowing down basic operations, or they attempt to solve everything internally, risking errors on complex problems that demand specialized tools.



This inefficiency is not merely a technical concern but a significant business issue. Companies deploying AI solutions often find themselves incurring high costs for cloud computing resources to run external tools, even for basic tasks that their AI should handle internally. Conversely, organizations opting for standalone AI systems face potential costly errors when these systems attempt intricate calculations without appropriate verification tools.



The researchers’ methodology presents a promising middle ground. By training AI to emulate human-like decision-making regarding tool usage, organizations could potentially reduce computational expenses while maintaining or enhancing accuracy. This is particularly valuable in fields like scientific research, financial modeling, or medical diagnosis, where both efficiency and precision are paramount.



Furthermore, this breakthrough indicates a future where AI systems could serve as more cost-effective and reliable collaborators in scientific endeavors, capable of making nuanced decisions about when to leverage external resources – akin to a seasoned professional who knows precisely when to consult specialized tools versus rely on their expertise.



The significance of understanding when to seek assistance



Beyond its immediate technical accomplishments, this study challenges the prevailing paradigm in AI development that bigger equates to better. By demonstrating that a relatively compact model can outperform larger counterparts through judicious tool usage decisions, the team points towards a more sustainable and pragmatic future for AI.



The implications extend far beyond academic research. As AI penetrates domains where errors have real-world consequences – from medical diagnostics to climate modeling – the ability to discern when to seek help becomes imperative. This research suggests a future where AI systems are not only powerful but prudent, acknowledging their limitations akin to skilled professionals.



In essence, the researchers have instilled a fundamentally human trait in AI: recognizing that sometimes the wisest choice is to seek assistance.

