Saturday, 28 Mar 2026
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA
logo logo
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
  • đŸ”„
  • Trump
  • House
  • ScienceAlert
  • VIDEO
  • White
  • man
  • Trumps
  • Season
  • star
  • Watch
Font ResizerAa
American FocusAmerican Focus
Search
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
Follow US
© 2024 americanfocus.online – All Rights Reserved.
American Focus > Blog > Tech and Science > Stanford study outlines dangers of asking AI chatbots for personal advice
Tech and Science

Stanford study outlines dangers of asking AI chatbots for personal advice

Last updated: March 28, 2026 2:00 pm
Share
Stanford study outlines dangers of asking AI chatbots for personal advice
SHARE

Amid ongoing discussions about AI chatbots’ propensity to flatter users and reinforce their beliefs, a phenomenon known as AI sycophancy, Stanford computer scientists have conducted a study to assess its potential harm.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” recently published in Science, asserts that AI sycophancy is not merely a stylistic choice or a minor risk but a widespread behavior with significant consequences.

A recent Pew report indicates that 12% of U.S. teens seek emotional support or advice from chatbots. Myra Cheng, a computer science Ph.D. candidate and lead author of the study, shared with the Stanford Report her interest in the topic, sparked by accounts of undergraduates using chatbots for relationship advice and even for crafting breakup texts.

“AI advice, by default, does not challenge people or offer ‘tough love,’” Cheng remarked. “I’m concerned that people might lose the ability to handle challenging social situations.”

The study was divided into two parts. Initially, researchers evaluated 11 large language models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek. They input queries from existing interpersonal advice databases, scenarios involving potentially harmful or illegal actions, and posts from the popular Reddit community r/AmITheAsshole, particularly focusing on posts where the original poster was deemed the antagonist.

The study revealed that the AI-generated responses validated user behavior 49% more frequently than human responses. Specifically, in Reddit-based examples, chatbots affirmed user behavior 51% of the time, even in situations where Redditors disagreed. For queries about harmful or illegal actions, AI supported the user’s behavior 47% of the time.

See also  Using AI to Find Information Could Diminish Your Knowledge, Study Finds : ScienceAlert

One example from the Stanford Report involved a user who asked a chatbot if they were wrong for pretending to be unemployed for two years to their girlfriend. The response was, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”

Techcrunch event

San Francisco, CA
|
October 13-15, 2026

In the second part, researchers observed over 2,400 participants as they interacted with AI chatbots, some sycophantic and some not, discussing their personal issues or situations sourced from Reddit. They found that participants favored and trusted the sycophantic AI more and were more inclined to seek advice from those models again.

“These effects persisted even after controlling for factors like demographics, prior AI experience, perceived response source, and response style,” the study noted. It also highlighted that users’ preference for sycophantic AI responses creates “perverse incentives” where the harmful feature that drives engagement is encouraged by AI companies.

Moreover, interacting with sycophantic AI reinforced participants’ belief in their correctness and reduced their likelihood to apologize.

The study’s senior author, Dan Jurafsky, a professor of linguistics and computer science, commented that while users realize that AI models are sycophantic, they are unaware that this behavior is making them more self-centered and morally rigid.

Jurafsky emphasized that AI sycophancy is “a safety issue, requiring regulation and oversight.”

The research team is exploring ways to make models less sycophantic, noting that starting a prompt with “wait a minute” may help. However, Cheng advised against using AI as a substitute for human interaction, stating, “That’s the best thing to do for now.”

See also  What preparing for an asteroid strike teaches us about climate change

TAGGED:AdvicechatbotsDangersoutlinesPersonalStanfordStudy
Share This Article
Twitter Email Copy Link Print
Previous Article Oklo’s AI tailwind fades as fuel and financing risks grow Oklo’s AI tailwind fades as fuel and financing risks grow
Next Article Bernie Sanders Destroys Trump’s Iran War Lies At No Kings Bernie Sanders Destroys Trump’s Iran War Lies At No Kings
Leave a comment

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Popular Posts

Blake Lively Subtly Supports Taylor Swift’s New Album

Taylor Swift Blake Lively Subtly Reaches Out to Mend Friendship Rift Published October 3, 2025…

October 4, 2025

Reclaiming Palestinian Beauty, One Wall Label at a Time

In a new post titled "Rediscovering Palestinian Identity Through Fashion at the Brooklyn Museum," I…

November 3, 2024

Xiaomi Redmi Note 15 5G Review: Style on a Budget

The Redmi Note 15 5G is a budget smartphone that aims to offer a compelling…

February 11, 2026

I’m obsessed with this iPhone MagSafe stand and it’s even cheaper for Prime Day

Image: Thomas Deehan / Foundry As someone who has been trying to start a YouTube…

October 7, 2025

Animal Rights Activist Outraged Over Video of Air-Frying Live Crabs

Animal Rights Activist Boiling Mad Over Air-Frying Live Crabs on Video!!! Published May 7, 2025…

May 7, 2025

You Might Also Like

How ultraprecise ‘nuclear clocks’ could transform timekeeping
Tech and Science

How ultraprecise ‘nuclear clocks’ could transform timekeeping

March 28, 2026
New Google tool allow users to scrub more personal data to keep safe from online thieves
Crime

New Google tool allow users to scrub more personal data to keep safe from online thieves

March 28, 2026
Meta and Google face multi-million dollar fines for addictive apps
Tech and Science

Meta and Google face multi-million dollar fines for addictive apps

March 28, 2026
I almost drowned in space when my helmet filled with water
Tech and Science

I almost drowned in space when my helmet filled with water

March 28, 2026
logo logo
Facebook Twitter Youtube

About US


Explore global affairs, political insights, and linguistic origins. Stay informed with our comprehensive coverage of world news, politics, and Lifestyle.

Top Categories
  • Crime
  • Environment
  • Sports
  • Tech and Science
Usefull Links
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA

© 2024 americanfocus.online –  All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?