Tech and Science

UC San Diego, Tsinghua University researchers just made AI way better at knowing when to ask for help

Last updated: November 5, 2024 8:05 am

A group of computer scientists has developed a technique that improves artificial intelligence’s ability to decide when to use external tools and when to rely on its internal knowledge, mimicking how human experts approach problem-solving.



The study, from the University of California San Diego and Tsinghua University, reports a 28% increase in accuracy when AI systems are trained to balance internal knowledge against external tools, a crucial skill for deploying AI in scientific work.



How researchers trained AI to improve decision-making



“While integrating LLMs with tools can enhance reliability, this approach often leads to excessive reliance on tools, diminishing the model’s capacity to solve simple problems through basic reasoning,” the researchers explain in their study. “In contrast, human experts assess problem complexity using domain knowledge before selecting an appropriate solution approach.”



The novel technique, known as “Adapting While Learning,” involves a two-step process for training AI systems. Initially, the model learns directly from solutions derived using external tools to internalize domain knowledge. Subsequently, it categorizes problems as either “easy” or “difficult” and decides whether to utilize tools based on this classification.
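
To make the idea concrete, here is a minimal sketch of what that decision step could look like at inference time. The helper functions and the crude word-count heuristic are illustrative assumptions, not the researchers’ actual implementation.

```python
# Illustrative sketch only: the difficulty check, direct answer, and tool call
# below are hypothetical stand-ins for the trained model and its tool interface.

def classify_difficulty(problem: str) -> str:
    """Placeholder for the fine-tuned model labeling a problem 'easy' or 'hard'.
    A crude word-count heuristic stands in for the learned classifier."""
    return "easy" if len(problem.split()) < 25 else "hard"

def answer_directly(problem: str) -> str:
    """Placeholder for answering from the model's internalized domain knowledge."""
    return f"[direct answer to: {problem}]"

def solve_with_tool(problem: str) -> str:
    """Placeholder for handing the problem to an external solver or simulator."""
    return f"[tool-assisted answer to: {problem}]"

def solve(problem: str) -> str:
    # Easy problems stay in-model; hard problems trigger an external tool call.
    if classify_difficulty(problem) == "easy":
        return answer_directly(problem)
    return solve_with_tool(problem)

print(solve("What is the boiling point of water at sea level?"))
```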





The two-step process researchers developed to teach AI systems when to use tools versus rely on internal knowledge, mirroring how human experts approach problem-solving. (Credit: UC San Diego / Tsinghua University)



Compact AI model outperforms larger systems on complex tasks



What sets this advance apart is its focus on efficiency. Using a language model with just 8 billion parameters, far smaller than industry giants like GPT-4, the researchers achieved a 28.18% improvement in answer accuracy and a 13.89% increase in tool usage precision across their test datasets. The model was especially strong on specialized scientific tasks, outperforming larger models in specific domains.



This success challenges the conventional belief in AI development that bigger models produce better outcomes. Instead, the study suggests that teaching AI when to use tools and when to rely on internal knowledge, much as a junior scientist learns when to trust their own calculations and when to turn to specialized equipment, may matter more than sheer computational power.





Examples of how the AI system handles different types of climate science problems: a simple temperature calculation (top) and a complex maritime routing challenge (bottom). (Credit: UC San Diego / Tsinghua University)



The emergence of compact, intelligent AI models



This study aligns with the broader industry trend toward more streamlined AI models in 2024. Leading companies such as Hugging Face, Nvidia, OpenAI, Meta, Anthropic, and H2O.ai have all introduced smaller yet highly capable models this year.



Hugging Face’s SmolLM2, with versions as small as 135 million parameters, can run directly on smartphones. H2O.ai’s compact document analysis models have beaten tech giants’ larger systems on specialized tasks. Even OpenAI has entered the small-model arena with GPT-4o Mini, offering comparable capabilities at a reduced cost.



This shift towards “AI downsizing” reflects the growing realization that smaller models can often match or exceed the performance of larger counterparts while utilizing significantly fewer computational resources.



The technical approach involves two distinct learning phases. In the first, the model undergoes what the researchers term “World Knowledge Distillation” (WKD), learning from solutions produced with external tools in order to build up internal domain expertise.
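
As a rough illustration of that first phase, the sketch below pairs problems with tool-generated answers to build a supervised fine-tuning set. The toy calculator “tool” and the dataset format are assumptions for demonstration, not the paper’s actual pipeline.

```python
# Hypothetical sketch of the knowledge-distillation idea: answers produced by
# an external tool become fine-tuning targets the model learns to reproduce.

from typing import Callable

def build_wkd_dataset(problems: list[str],
                      run_tool: Callable[[str], str]) -> list[dict]:
    """Pair each problem with the answer an external tool produces for it."""
    return [{"prompt": p, "target": run_tool(p)} for p in problems]

# Toy example: a calculator stands in for a real scientific tool.
toy_problems = ["12 * 7", "(3 + 5) ** 2"]
wkd_data = build_wkd_dataset(toy_problems, run_tool=lambda expr: str(eval(expr)))
print(wkd_data)  # [{'prompt': '12 * 7', 'target': '84'}, ...]
# These (prompt, target) pairs would then feed standard supervised fine-tuning.
```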



The second phase, “Tool Usage Adaptation” (TUA), trains the system to classify problems according to how confidently and accurately it can solve them directly. For simpler problems, it keeps the same direct-answer approach as in WKD; for more challenging ones, it learns to switch to external tools.
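
Continuing the sketch above, the second phase might look roughly like this: problems the model already answers reliably keep their direct-answer targets, while the rest are relabeled to teach a tool call. The accuracy check and the tool-call format are placeholder assumptions rather than the authors’ code.

```python
# Hypothetical continuation of the WKD sketch: relabel training targets based
# on how reliably the model answers each problem on its own.

import random

def model_accuracy(problem: str, reference: str, n_samples: int = 8) -> float:
    """Placeholder: in practice, sample the model several times and score its
    answers against the tool-derived reference; a seeded random value stands in."""
    random.seed(hash((problem, reference)) % (2**32))
    return random.random()

def build_tua_dataset(wkd_examples: list[dict], threshold: float = 0.8) -> list[dict]:
    adapted = []
    for ex in wkd_examples:
        if model_accuracy(ex["prompt"], ex["target"]) >= threshold:
            # "Easy": keep the direct-answer target, exactly as in the first phase.
            adapted.append(ex)
        else:
            # "Hard": retarget the example so the model learns to emit a tool call.
            adapted.append({"prompt": ex["prompt"],
                            "target": f"<tool_call>{ex['prompt']}</tool_call>"})
    return adapted

print(build_tua_dataset([{"prompt": "12 * 7", "target": "84"}]))
```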



Business implications: Enhanced efficiency in complex scientific AI systems



For enterprises implementing AI systems, this study addresses a longstanding industry challenge. Existing AI systems typically fall into one of two extremes: they either over-rely on external tools, driving up computational costs and slowing down basic operations, or try to solve everything internally, risking errors on complex problems that demand specialized tools.



This inefficiency is not merely a technical concern but a significant business issue. Companies deploying AI solutions often pay for expensive cloud computing resources to run external tools even on basic tasks their AI should handle internally. Conversely, organizations that opt for standalone AI systems risk costly errors when those systems attempt intricate calculations without the appropriate verification tools.



The researchers’ methodology presents a promising middle ground. By training AI to emulate human-like decision-making regarding tool usage, organizations could potentially reduce computational expenses while maintaining or enhancing accuracy. This is particularly valuable in fields like scientific research, financial modeling, or medical diagnosis, where both efficiency and precision are paramount.



Furthermore, this work points to a future in which AI systems serve as more cost-effective and reliable collaborators in scientific research, capable of making nuanced decisions about when to reach for external resources, much like a seasoned professional who knows when to consult specialized tools and when to rely on their own expertise.



The significance of understanding when to seek assistance



Beyond its immediate technical accomplishments, this study challenges the prevailing assumption in AI development that bigger is better. By showing that a relatively compact model can outperform larger counterparts through judicious decisions about tool use, the team points toward a more sustainable and practical future for AI.



The implications extend far beyond academic research. As AI moves into domains where errors have real-world consequences, from medical diagnostics to climate modeling, the ability to recognize when to seek help becomes essential. This research suggests a future in which AI systems are not only powerful but prudent, acknowledging their limitations much as skilled professionals do.



In essence, the researchers have instilled a fundamentally human trait in AI: recognizing that sometimes the wisest choice is to seek assistance.

