Tech and Science

UC San Diego, Tsinghua University researchers just made AI way better at knowing when to ask for help

Last updated: November 5, 2024 8:05 am
Computer scientists have developed a technique that helps artificial intelligence decide when to use external tools and when to rely on its internal knowledge, mimicking the way human experts approach problem-solving.



The study, from the University of California San Diego and Tsinghua University, reports a 28% increase in accuracy when AI systems are trained to balance internal knowledge against external tools, a crucial skill for deploying AI in scientific work.



How researchers trained AI to improve decision-making



“While integrating LLMs with tools can enhance reliability, this approach often leads to excessive reliance on tools, diminishing the model’s capacity to solve simple problems through basic reasoning,” the researchers explain in their study. “In contrast, human experts assess problem complexity using domain knowledge before selecting an appropriate solution approach.”



The novel technique, known as “Adapting While Learning,” involves a two-step process for training AI systems. Initially, the model learns directly from solutions derived using external tools to internalize domain knowledge. Subsequently, it categorizes problems as either “easy” or “difficult” and decides whether to utilize tools based on this classification.
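At inference time, that classification drives a simple routing decision. The minimal Python sketch below illustrates the idea under the assumption of a generic model interface; the names classify_difficulty, answer_directly, and call_tool are illustrative placeholders, not the researchers' actual API.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Answer:
        text: str
        used_tool: bool

    def solve(problem: str,
              classify_difficulty: Callable[[str], str],   # returns "easy" or "difficult"
              answer_directly: Callable[[str], str],       # internal reasoning only
              call_tool: Callable[[str], str]) -> Answer:  # e.g. an external solver
        """Route a problem based on the model's own difficulty judgment."""
        if classify_difficulty(problem) == "easy":
            # Simple problems are answered through internal reasoning alone.
            return Answer(text=answer_directly(problem), used_tool=False)
        # Difficult problems are delegated to an external tool.
        return Answer(text=call_tool(problem), used_tool=True)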





The two-step process researchers developed to teach AI systems when to use tools versus rely on internal knowledge, mirroring how human experts approach problem-solving. (Credit: UC San Diego / Tsinghua University)



Compact AI model surpasses larger systems for intricate tasks



What sets this advancement apart is its focus on efficiency. By utilizing a language model with just 8 billion parameters – significantly smaller than industry behemoths like GPT-4 – the researchers achieved a 28.18% enhancement in answer accuracy and a 13.89% increase in tool usage precision across their test datasets. The model exhibited notable proficiency in specialized scientific tasks, outperforming larger models in specific domains.



This success challenges the conventional belief in AI development that bigger models equate to superior outcomes. Instead, the study indicates that teaching AI the discernment between using tools and relying on internal knowledge – akin to instructing a junior scientist on when to trust their calculations versus consulting specialized equipment – may be more critical than sheer computational power.





Examples of how the AI system handles different types of climate science problems: a simple temperature calculation (top) and a complex maritime routing challenge (bottom). (Credit: UC San Diego / Tsinghua University)



The emergence of compact, intelligent AI models



This study aligns with the broader industry trend towards more streamlined AI models in 2024. Leading entities such as Hugging Face, Nvidia, OpenAI, Meta, Anthropic, and H2O.ai have all introduced smaller yet highly capable models this year.



Hugging Face’s SmolLM2, with versions as small as 135 million parameters, can run directly on smartphones. H2O.ai’s compact document-analysis models have outperformed tech giants’ larger systems on specialized tasks. Even OpenAI has entered the small-model arena with GPT-4o Mini, which offers comparable capabilities at a reduced cost.



This shift towards “AI downsizing” reflects the growing realization that smaller models can often match or exceed the performance of larger counterparts while utilizing significantly fewer computational resources.



The technical approach involves two distinct learning phases. During training, the model experiences what the researchers term “World Knowledge Distillation” (WKD), where it learns from solutions produced using external tools to build up internal expertise.
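As a rough illustration of this distillation phase, the snippet below pairs each training problem with the solution an external tool produced and hands the result to a supervised fine-tuning routine. The helper names (build_wkd_dataset, fine_tune) are hypothetical stand-ins for whatever training pipeline is actually used.

    from typing import Callable, Dict, List

    def build_wkd_dataset(problems: List[str],
                          tool_solutions: List[str]) -> List[Dict[str, str]]:
        """Pair each problem with the solution an external tool produced, so the
        model can internalize that domain knowledge through fine-tuning."""
        return [{"prompt": p, "target": s} for p, s in zip(problems, tool_solutions)]

    def distill(problems: List[str],
                tool_solutions: List[str],
                fine_tune: Callable[[List[Dict[str, str]]], None]) -> None:
        """World Knowledge Distillation phase: supervised fine-tuning on
        tool-derived solutions, using whatever training routine is available."""
        fine_tune(build_wkd_dataset(problems, tool_solutions))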



The second phase, “Tool Usage Adaptation” (TUA), teaches the system to classify problems based on its own confidence and accuracy in solving them directly. For simpler problems, it keeps the same approach as in WKD; for more challenging ones, it learns to switch to external tools.
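A comparable sketch of the adaptation phase, again with hypothetical helper names rather than the authors' code, labels each problem by whether the model's own direct answer was correct and builds training targets accordingly: correct answers are kept as direct-answer supervision, while incorrect ones are replaced with a trace that invokes the external tool.

    from typing import Dict, List

    def build_tua_dataset(problems: List[str],
                          direct_answers: List[str],   # the model's own attempts
                          is_correct: List[bool],      # graded against references
                          tool_call_traces: List[str]) -> List[Dict[str, str]]:
        """Tool Usage Adaptation phase: keep direct-answer supervision where the
        model is already accurate, and teach it to call the tool where it is not."""
        examples = []
        for prompt, answer, correct, trace in zip(problems, direct_answers,
                                                  is_correct, tool_call_traces):
            if correct:
                # "Easy" problem: same supervision as in the WKD phase.
                examples.append({"prompt": prompt, "target": answer})
            else:
                # "Difficult" problem: supervise a switch to the external tool.
                examples.append({"prompt": prompt, "target": trace})
        return examples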



Business implications: Enhanced efficiency in complex scientific AI systems



For enterprises implementing AI systems, this study addresses a longstanding challenge within the industry. Existing AI systems typically fall into two extremes: either over-relying on external tools, leading to increased computational expenses and sluggish basic operations, or attempting to internally solve all tasks, risking errors on complex problems that demand specialized tools.



This inefficiency is not merely a technical concern but a significant business issue. Companies deploying AI solutions often find themselves incurring high costs for cloud computing resources to run external tools, even for basic tasks that their AI should handle internally. Conversely, organizations opting for standalone AI systems face potential costly errors when these systems attempt intricate calculations without appropriate verification tools.



The researchers’ methodology presents a promising middle ground. By training AI to emulate human-like decision-making regarding tool usage, organizations could potentially reduce computational expenses while maintaining or enhancing accuracy. This is particularly valuable in fields like scientific research, financial modeling, or medical diagnosis, where both efficiency and precision are paramount.



Furthermore, this breakthrough points to a future where AI systems could serve as more cost-effective and reliable collaborators in scientific work, capable of making nuanced decisions about when to draw on external resources – much like a seasoned professional who knows when to consult specialized tools and when to rely on their own expertise.



The significance of understanding when to seek assistance



Beyond its immediate technical accomplishments, this study challenges the prevailing paradigm in AI development that bigger equates to better. By demonstrating that a relatively compact model can outperform larger counterparts through judicious tool usage decisions, the team points towards a more sustainable and pragmatic future for AI.



The implications extend far beyond academic research. As AI penetrates domains where errors have real-world consequences – from medical diagnostics to climate modeling – the ability to discern when to seek help becomes imperative. This research suggests a future where AI systems are not only powerful but prudent, acknowledging their limitations akin to skilled professionals.



In essence, the researchers have instilled a fundamentally human trait in AI: recognizing that sometimes the wisest choice is to seek assistance.

