Tech and Science

UC San Diego, Tsinghua University researchers just made AI way better at knowing when to ask for help

Last updated: November 5, 2024 8:05 am









A group of computer scientists has devised a technique to enhance artificial intelligence’s ability to determine when to utilize tools rather than relying solely on internal knowledge, mimicking the problem-solving approach of human experts.



The study, from the University of California San Diego and Tsinghua University, reports a 28% increase in answer accuracy when AI systems are trained to strike a balance between internal knowledge and external tools, a skill crucial for deploying AI in scientific work.



How researchers trained AI to improve decision-making



“While integrating LLMs with tools can enhance reliability, this approach often leads to excessive reliance on tools, diminishing the model’s capacity to solve simple problems through basic reasoning,” the researchers explain in their study. “In contrast, human experts assess problem complexity using domain knowledge before selecting an appropriate solution approach.”



The novel technique, known as “Adapting While Learning,” involves a two-step process for training AI systems. Initially, the model learns directly from solutions derived using external tools to internalize domain knowledge. Subsequently, it categorizes problems as either “easy” or “difficult” and decides whether to utilize tools based on this classification.
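
To make that decision concrete, here is a minimal Python sketch of the routing behavior this training is meant to produce at inference time. It is an illustration under stated assumptions, not the researchers' code: classify_difficulty, answer_directly and solve_with_tool are hypothetical stand-ins for the fine-tuned model and the external solver.

# Illustrative sketch only, not the authors' implementation. The trained
# model first judges whether a problem is "easy" or "hard", then either
# answers from internal knowledge or defers to an external tool.
# All three helpers below are hypothetical stand-ins.

def classify_difficulty(question: str) -> str:
    """Stand-in for the fine-tuned model's easy/hard judgment."""
    return "hard" if "simulate" in question.lower() else "easy"

def answer_directly(question: str) -> str:
    """Stand-in for the model answering from internalized knowledge."""
    return f"[model answer to: {question}]"

def solve_with_tool(question: str) -> str:
    """Stand-in for generating a call to an external solver."""
    return f"[tool-assisted answer to: {question}]"

def answer(question: str) -> str:
    # Easy problems are handled with internal knowledge alone;
    # hard problems trigger a call to the external tool.
    if classify_difficulty(question) == "easy":
        return answer_directly(question)
    return solve_with_tool(question)

print(answer("What drives the greenhouse effect?"))
print(answer("Simulate ocean heat transport on this grid."))

In the actual system the difficulty judgment and the answers come from the same language model; the sketch only shows the branch between internal reasoning and tool calls.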





The two-step process researchers developed to teach AI systems when to use tools versus rely on internal knowledge, mirroring how human experts approach problem-solving. (Credit: UC San Diego / Tsinghua University)



Compact AI model surpasses larger systems for intricate tasks



What sets this advancement apart is its focus on efficiency. By utilizing a language model with just 8 billion parameters – significantly smaller than industry behemoths like GPT-4 – the researchers achieved a 28.18% enhancement in answer accuracy and a 13.89% increase in tool usage precision across their test datasets. The model exhibited notable proficiency in specialized scientific tasks, outperforming larger models in specific domains.



This success challenges the conventional belief in AI development that bigger models equate to superior outcomes. Instead, the study indicates that teaching AI the discernment between using tools and relying on internal knowledge – akin to instructing a junior scientist on when to trust their calculations versus consulting specialized equipment – may be more critical than sheer computational power.





Examples of how the AI system handles different types of climate science problems: a simple temperature calculation (top) and a complex maritime routing challenge (bottom). (Credit: UC San Diego / Tsinghua University)



The emergence of compact, intelligent AI models



This study aligns with the broader industry trend towards more streamlined AI models in 2024. Leading entities such as Hugging Face, Nvidia, OpenAI, Meta, Anthropic, and H2O.ai have all introduced smaller yet highly capable models this year.



Hugging Face’s SmolLM2, with versions as small as 135 million parameters, can run directly on smartphones. H2O.ai’s compact document-analysis models have outperformed tech giants’ larger systems on specialized tasks. Even OpenAI has entered the small-model arena with GPT-4o Mini, which offers comparable capabilities at a reduced cost.



This shift towards “AI downsizing” reflects the growing realization that smaller models can often match or exceed the performance of larger counterparts while utilizing significantly fewer computational resources.



The technical approach involves two distinct learning phases. During training, the model experiences what the researchers term “World Knowledge Distillation” (WKD), where it learns from solutions produced using external tools to build up internal expertise.
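
A minimal sketch of what this first phase could look like in practice, assuming a simple supervised fine-tuning setup: questions are paired with tool-derived solutions and written out as training data so the model absorbs the knowledge the tool encodes. The external_tool function and the JSONL format are illustrative assumptions, not the authors' actual pipeline.

# Minimal WKD-style data construction sketch (illustrative assumptions only).
import json

def external_tool(question: str) -> str:
    """Hypothetical stand-in for a scientific solver or simulator."""
    return f"[tool-derived solution for: {question}]"

def build_wkd_dataset(questions: list[str], path: str) -> None:
    # Each record pairs a question with the tool-derived solution;
    # ordinary supervised fine-tuning on these pairs internalizes
    # the domain knowledge that the tool provides.
    with open(path, "w") as f:
        for q in questions:
            record = {"prompt": q, "target": external_tool(q)}
            f.write(json.dumps(record) + "\n")

build_wkd_dataset(
    ["Estimate the surface temperature change under a doubled-CO2 scenario."],
    "wkd_train.jsonl",
)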



The second phase, “Tool Usage Adaptation” (TUA), educates the system to classify problems based on its own confidence and accuracy in resolving them directly. For simpler problems, it maintains the same approach as in WKD. However, for more challenging problems, it learns to transition to using external tools.
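
A rough sketch of how that second phase could be set up, under the assumption that confidence is estimated by sampling the model several times and checking agreement with the tool's reference solution. The sampling scheme, the agreement check and the 0.7 threshold are illustrative choices, not details taken from the paper.

# TUA-style labeling sketch (illustrative assumptions only).
import json

def model_answer(question: str, seed: int) -> str:
    """Hypothetical stand-in for sampling the model trained in the WKD phase."""
    return f"[sampled answer {seed} to: {question}]"

def tool_answer(question: str) -> str:
    """Hypothetical stand-in for the tool's reference solution."""
    return f"[tool solution to: {question}]"

def agreement_rate(question: str, n_samples: int = 8) -> float:
    # Fraction of sampled answers matching the tool's solution:
    # a proxy for the model's confidence on this problem.
    reference = tool_answer(question)
    hits = sum(model_answer(question, s) == reference for s in range(n_samples))
    return hits / n_samples

def build_tua_record(question: str, threshold: float = 0.7) -> dict:
    if agreement_rate(question) >= threshold:
        # Easy: keep the WKD-style target (answer from internal knowledge).
        return {"prompt": question, "target": tool_answer(question), "route": "direct"}
    # Hard: train the model to emit a tool call instead of guessing.
    return {"prompt": question, "target": "<call_tool>", "route": "tool"}

print(json.dumps(build_tua_record("Plan an optimal shipping route given these ocean currents.")))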



Business implications: Enhanced efficiency in complex scientific AI systems



For enterprises implementing AI systems, this study addresses a longstanding challenge within the industry. Existing AI systems typically fall into two extremes: either over-relying on external tools, leading to increased computational expenses and sluggish basic operations, or attempting to internally solve all tasks, risking errors on complex problems that demand specialized tools.



This inefficiency is not merely a technical concern but a significant business issue. Companies deploying AI solutions often find themselves incurring high costs for cloud computing resources to run external tools, even for basic tasks that their AI should handle internally. Conversely, organizations opting for standalone AI systems face potential costly errors when these systems attempt intricate calculations without appropriate verification tools.



The researchers’ methodology presents a promising middle ground. By training AI to emulate human-like decision-making regarding tool usage, organizations could potentially reduce computational expenses while maintaining or enhancing accuracy. This is particularly valuable in fields like scientific research, financial modeling, or medical diagnosis, where both efficiency and precision are paramount.



Furthermore, this breakthrough indicates a future where AI systems could serve as more cost-effective and reliable collaborators in scientific endeavors, capable of making nuanced decisions about when to leverage external resources – akin to a seasoned professional who knows precisely when to consult specialized tools versus rely on their expertise.



The significance of understanding when to seek assistance



Beyond its immediate technical accomplishments, this study challenges the prevailing paradigm in AI development that bigger equates to better. By demonstrating that a relatively compact model can outperform larger counterparts through judicious tool usage decisions, the team points towards a more sustainable and pragmatic future for AI.



The implications extend far beyond academic research. As AI penetrates domains where errors have real-world consequences – from medical diagnostics to climate modeling – the ability to discern when to seek help becomes imperative. This research suggests a future where AI systems are not only powerful but prudent, acknowledging their limitations akin to skilled professionals.



In essence, the researchers have instilled a fundamentally human trait in AI: recognizing that sometimes the wisest choice is to seek assistance.

