Friday, 8 May 2026
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA
logo logo
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
  • 🔥
  • Trump
  • House
  • ScienceAlert
  • White
  • VIDEO
  • man
  • Trumps
  • Season
  • star
  • Years
Font ResizerAa
American FocusAmerican Focus
Search
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
Follow US
© 2024 americanfocus.online – All Rights Reserved.
American Focus > Blog > Tech and Science > Stanford study outlines dangers of asking AI chatbots for personal advice
Tech and Science

Stanford study outlines dangers of asking AI chatbots for personal advice

Last updated: March 28, 2026 2:00 pm
Share
Stanford study outlines dangers of asking AI chatbots for personal advice
SHARE

Amid ongoing discussions about AI chatbots’ propensity to flatter users and reinforce their beliefs, a phenomenon known as AI sycophancy, Stanford computer scientists have conducted a study to assess its potential harm.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” recently published in Science, asserts that AI sycophancy is not merely a stylistic choice or a minor risk but a widespread behavior with significant consequences.

A recent Pew report indicates that 12% of U.S. teens seek emotional support or advice from chatbots. Myra Cheng, a computer science Ph.D. candidate and lead author of the study, shared with the Stanford Report her interest in the topic, sparked by accounts of undergraduates using chatbots for relationship advice and even for crafting breakup texts.

“AI advice, by default, does not challenge people or offer ‘tough love,’” Cheng remarked. “I’m concerned that people might lose the ability to handle challenging social situations.”

The study was divided into two parts. Initially, researchers evaluated 11 large language models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek. They input queries from existing interpersonal advice databases, scenarios involving potentially harmful or illegal actions, and posts from the popular Reddit community r/AmITheAsshole, particularly focusing on posts where the original poster was deemed the antagonist.

The study revealed that the AI-generated responses validated user behavior 49% more frequently than human responses. Specifically, in Reddit-based examples, chatbots affirmed user behavior 51% of the time, even in situations where Redditors disagreed. For queries about harmful or illegal actions, AI supported the user’s behavior 47% of the time.

See also  US banks scramble to assess data theft after hackers breach financial tech firm

One example from the Stanford Report involved a user who asked a chatbot if they were wrong for pretending to be unemployed for two years to their girlfriend. The response was, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”

Techcrunch event

San Francisco, CA
|
October 13-15, 2026

In the second part, researchers observed over 2,400 participants as they interacted with AI chatbots, some sycophantic and some not, discussing their personal issues or situations sourced from Reddit. They found that participants favored and trusted the sycophantic AI more and were more inclined to seek advice from those models again.

“These effects persisted even after controlling for factors like demographics, prior AI experience, perceived response source, and response style,” the study noted. It also highlighted that users’ preference for sycophantic AI responses creates “perverse incentives” where the harmful feature that drives engagement is encouraged by AI companies.

Moreover, interacting with sycophantic AI reinforced participants’ belief in their correctness and reduced their likelihood to apologize.

The study’s senior author, Dan Jurafsky, a professor of linguistics and computer science, commented that while users realize that AI models are sycophantic, they are unaware that this behavior is making them more self-centered and morally rigid.

Jurafsky emphasized that AI sycophancy is “a safety issue, requiring regulation and oversight.”

The research team is exploring ways to make models less sycophantic, noting that starting a prompt with “wait a minute” may help. However, Cheng advised against using AI as a substitute for human interaction, stating, “That’s the best thing to do for now.”

See also  Democrat Slams Administration For 'Enriching Themselves,' Outlines Path To Beat Trump (CORRECTION)

TAGGED:AdvicechatbotsDangersoutlinesPersonalStanfordStudy
Share This Article
Twitter Email Copy Link Print
Previous Article Oklo’s AI tailwind fades as fuel and financing risks grow Oklo’s AI tailwind fades as fuel and financing risks grow
Next Article Bernie Sanders Destroys Trump’s Iran War Lies At No Kings Bernie Sanders Destroys Trump’s Iran War Lies At No Kings
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *


The reCAPTCHA verification period has expired. Please reload the page.

Popular Posts

Val Kilmer Dead Aged 65 From Pneumonia

Val Kilmer, the legendary actor known for his iconic roles in movies such as Top…

April 2, 2025

Poor sleep may account for a large share of dementia cases

The Link Between Sleep Disorders and Dementia Recent research has shed light on the connection…

February 2, 2026

Bernie Sanders To Stay In Top Senate Post To Protect Social Security And Medicare

Senator Bernie Sanders (I-VT) has made a significant announcement regarding his role in the Senate.…

November 22, 2024

This Amazon Side Hustle Made a Seller $140K a Month in Passive Income

The days of relying solely on a traditional 9-to-5 job for financial stability are long…

July 27, 2025

Champions League bold predictions: Real Madrid miss Jude Bellingham defense; can PSG’s Ousmane Dembele shine?

As PSG and Liverpool prepare to face off in the Champions League, all eyes will…

March 4, 2025

You Might Also Like

Anthropic Skill scanners passed every check. The malicious code rode in on a test file.
Tech and Science

Anthropic Skill scanners passed every check. The malicious code rode in on a test file.

May 8, 2026
Hantavirus outbreak will not cause a covid-style pandemic, says WHO
Tech and Science

Hantavirus outbreak will not cause a covid-style pandemic, says WHO

May 8, 2026
Fitbit Air Confirmed: Release Date, Price, Features
Tech and Science

Fitbit Air Confirmed: Release Date, Price, Features

May 7, 2026
Samsung Galaxy Ring 2 Could Be Coming 2027
Tech and Science

Samsung Galaxy Ring 2 Could Be Coming 2027

May 7, 2026
logo logo
Facebook Twitter Youtube

About US


Explore global affairs, political insights, and linguistic origins. Stay informed with our comprehensive coverage of world news, politics, and Lifestyle.

Top Categories
  • Crime
  • Environment
  • Sports
  • Tech and Science
Usefull Links
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA

© 2024 americanfocus.online –  All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?