Tech and Science

UC San Diego, Tsinghua University researchers just made AI way better at knowing when to ask for help

Last updated: November 5, 2024 8:05 am
A group of computer scientists has devised a technique that improves artificial intelligence’s ability to decide when to use external tools rather than rely solely on internal knowledge, mirroring how human experts approach problem-solving.

The study, from the University of California San Diego and Tsinghua University, demonstrates a 28% increase in accuracy when AI systems are trained to balance internal knowledge with external tools, a crucial skill for deploying AI in scientific work.

How researchers trained AI to improve decision-making

“While integrating LLMs with tools can enhance reliability, this approach often leads to excessive reliance on tools, diminishing the model’s capacity to solve simple problems through basic reasoning,” the researchers explain in their study. “In contrast, human experts assess problem complexity using domain knowledge before selecting an appropriate solution approach.”

The novel technique, known as “Adapting While Learning,” trains AI systems in two steps. First, the model learns directly from solutions derived with external tools, internalizing domain knowledge. It then classifies problems as either “easy” or “difficult” and decides whether to use tools based on that classification.

The two-step process researchers developed to teach AI systems when to use tools versus rely on internal knowledge, mirroring how human experts approach problem-solving. (Credit: UC San Diego / Tsinghua University)
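
The two-step process described above can be sketched as a toy in a few lines of Python. This is an illustrative sketch, not the authors’ implementation: the function names, the stand-in “tool” (Python’s `eval` acting as a calculator), and the length-based difficulty test are all invented for this example, and a real system would fine-tune an LLM in each phase rather than fill a dictionary.

```python
# Toy sketch of the two-step "Adapting While Learning" idea.
# All names and heuristics here are hypothetical illustrations.

def world_knowledge_distillation(problems, solve_with_tool):
    """Step 1 (WKD): internalize knowledge from tool-derived solutions.
    A dict stands in for fine-tuning the model's weights."""
    return {p: solve_with_tool(p) for p in problems}

def tool_usage_adaptation(problem, internal_knowledge, solve_with_tool, is_easy):
    """Step 2 (TUA): answer 'easy' problems from internal knowledge,
    fall back to the external tool for 'difficult' ones."""
    if is_easy(problem):
        return internal_knowledge.get(problem, solve_with_tool(problem))
    return solve_with_tool(problem)

# Toy usage: arithmetic stands in for a scientific calculation.
tool = lambda p: eval(p)       # "external tool" (a calculator)
easy = lambda p: len(p) < 8    # hypothetical difficulty classifier
kb = world_knowledge_distillation(["2+2", "3*7"], tool)
print(tool_usage_adaptation("2+2", kb, tool, easy))         # easy: answered internally
print(tool_usage_adaptation("12345*6789", kb, tool, easy))  # difficult: tool is called
```

The point of the sketch is the control flow: knowledge built in step 1 is consulted first, and the tool is reserved for problems the classifier deems hard.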



Compact AI model outperforms larger systems on complex tasks

What sets this advancement apart is its focus on efficiency. By using a language model with just 8 billion parameters – far smaller than industry giants like GPT-4 – the researchers achieved a 28.18% improvement in answer accuracy and a 13.89% increase in tool-usage precision across their test datasets. The model was especially strong on specialized scientific tasks, outperforming larger models in specific domains.

This success challenges the conventional belief in AI development that bigger models equate to better outcomes. Instead, the study indicates that teaching AI to discern when to use tools and when to rely on internal knowledge – akin to teaching a junior scientist when to trust their own calculations and when to consult specialized equipment – may matter more than sheer computational power.

Examples of how the AI system handles different types of climate science problems: a simple temperature calculation (top) and a complex maritime routing challenge (bottom). (Credit: UC San Diego / Tsinghua University)



The emergence of compact, intelligent AI models

This study aligns with a broader industry trend toward smaller AI models in 2024. Leading players such as Hugging Face, Nvidia, OpenAI, Meta, Anthropic, and H2O.ai have all introduced smaller yet highly capable models this year.

Hugging Face’s SmolLM2, with versions as small as 135 million parameters, can run directly on smartphones. H2O.ai’s compact document-analysis models have outperformed tech giants’ larger systems on specialized tasks. Even OpenAI has entered the small-model arena with GPT-4o Mini, which offers comparable capabilities at a reduced cost.

This shift toward “AI downsizing” reflects a growing realization that smaller models can often match or exceed the performance of larger counterparts while using far fewer computational resources.

The technical approach involves two distinct learning phases. During training, the model undergoes what the researchers call “World Knowledge Distillation” (WKD), learning from solutions generated with external tools to build up internal expertise.

The second phase, “Tool Usage Adaptation” (TUA), trains the system to classify problems according to how confidently and accurately it can solve them directly. For simpler problems, it keeps the same approach as in WKD; for harder problems, it learns to switch to external tools.
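
The routing behavior TUA teaches can be illustrated with a minimal, hypothetical sketch. The threshold and the confidence function below (which simply decays with input length) are assumptions made for this example; in the actual research the classification is learned, not hand-written.

```python
# Hypothetical sketch of TUA-style routing: answer directly when
# self-estimated confidence is high, otherwise call an external tool.
# THRESHOLD and the confidence function are illustrative assumptions.

THRESHOLD = 0.8  # hypothetical confidence cutoff

def route(problem, answer_directly, call_tool, confidence):
    """Return (source, answer), preferring the model's own answer
    when confidence clears the threshold."""
    if confidence(problem) >= THRESHOLD:
        return ("internal", answer_directly(problem))
    return ("tool", call_tool(problem))

# Toy setup: confidence decays with expression length; eval() stands
# in for both the model's direct answer and the external tool.
confidence = lambda p: max(0.0, 1.0 - len(p) / 20)
print(route("2+3", eval, eval, confidence))          # short input: answered internally
print(route("123*456+789", eval, eval, confidence))  # long input: routed to the tool
```

The design choice worth noting is that the router returns both the answer and its source, so a deployment can audit how often the tool path is actually taken.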



Business implications: Enhanced efficiency in complex scientific AI systems

For enterprises implementing AI systems, this study addresses a longstanding industry challenge. Existing AI systems typically fall into one of two extremes: over-relying on external tools, which drives up computational costs and slows basic operations, or attempting to solve every task internally, which risks errors on complex problems that demand specialized tools.

This inefficiency is not merely a technical concern but a significant business issue. Companies deploying AI solutions often pay for cloud computing resources to run external tools even for basic tasks their AI should handle internally. Conversely, organizations opting for standalone AI systems face potentially costly errors when those systems attempt intricate calculations without appropriate verification tools.

The researchers’ methodology offers a promising middle ground. By training AI to make human-like decisions about tool usage, organizations could reduce computational costs while maintaining or improving accuracy. This is particularly valuable in fields such as scientific research, financial modeling, and medical diagnosis, where both efficiency and precision are paramount.

Furthermore, the work points to a future in which AI systems serve as more cost-effective and reliable collaborators in scientific endeavors, capable of making nuanced decisions about when to draw on external resources – much like a seasoned professional who knows exactly when to consult specialized tools and when to rely on their own expertise.



The significance of understanding when to seek assistance

Beyond its immediate technical accomplishments, this study challenges the prevailing assumption in AI development that bigger equates to better. By demonstrating that a relatively compact model can outperform larger counterparts through judicious decisions about tool usage, the team points toward a more sustainable and pragmatic future for AI.

The implications extend far beyond academic research. As AI moves into domains where errors have real-world consequences – from medical diagnostics to climate modeling – the ability to discern when to seek help becomes essential. This research suggests a future where AI systems are not only powerful but prudent, acknowledging their limitations much as skilled professionals do.

In essence, the researchers have instilled a fundamentally human trait in AI: recognizing that sometimes the wisest choice is to ask for help.


© 2024 americanfocus.online – All Rights Reserved.