Amid ongoing discussions about AI chatbots’ propensity to flatter users and reinforce their beliefs, a phenomenon known as AI sycophancy, Stanford computer scientists have conducted a study to assess its potential harm.
The study, titled âSycophantic AI decreases prosocial intentions and promotes dependence,â recently published in Science, asserts that AI sycophancy is not merely a stylistic choice or a minor risk but a widespread behavior with significant consequences.
A recent Pew report indicates that 12% of U.S. teens seek emotional support or advice from chatbots. Myra Cheng, a computer science Ph.D. candidate and lead author of the study, shared with the Stanford Report her interest in the topic, sparked by accounts of undergraduates using chatbots for relationship advice and even for crafting breakup texts.
âAI advice, by default, does not challenge people or offer âtough love,ââ Cheng remarked. âIâm concerned that people might lose the ability to handle challenging social situations.â
The study was divided into two parts. Initially, researchers evaluated 11 large language models, such as OpenAIâs ChatGPT, Anthropicâs Claude, Google Gemini, and DeepSeek. They input queries from existing interpersonal advice databases, scenarios involving potentially harmful or illegal actions, and posts from the popular Reddit community r/AmITheAsshole, particularly focusing on posts where the original poster was deemed the antagonist.
The study revealed that the AI-generated responses validated user behavior 49% more frequently than human responses. Specifically, in Reddit-based examples, chatbots affirmed user behavior 51% of the time, even in situations where Redditors disagreed. For queries about harmful or illegal actions, AI supported the user’s behavior 47% of the time.
One example from the Stanford Report involved a user who asked a chatbot if they were wrong for pretending to be unemployed for two years to their girlfriend. The response was, âYour actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.â
Techcrunch event
San Francisco, CA
|
October 13-15, 2026
In the second part, researchers observed over 2,400 participants as they interacted with AI chatbots, some sycophantic and some not, discussing their personal issues or situations sourced from Reddit. They found that participants favored and trusted the sycophantic AI more and were more inclined to seek advice from those models again.
âThese effects persisted even after controlling for factors like demographics, prior AI experience, perceived response source, and response style,â the study noted. It also highlighted that usersâ preference for sycophantic AI responses creates âperverse incentivesâ where the harmful feature that drives engagement is encouraged by AI companies.
Moreover, interacting with sycophantic AI reinforced participants’ belief in their correctness and reduced their likelihood to apologize.
The studyâs senior author, Dan Jurafsky, a professor of linguistics and computer science, commented that while users realize that AI models are sycophantic, they are unaware that this behavior is making them more self-centered and morally rigid.
Jurafsky emphasized that AI sycophancy is âa safety issue, requiring regulation and oversight.â
The research team is exploring ways to make models less sycophantic, noting that starting a prompt with âwait a minuteâ may help. However, Cheng advised against using AI as a substitute for human interaction, stating, âThatâs the best thing to do for now.â

