Thursday, 23 Apr 2026
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA
logo logo
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
  • 🔥
  • Trump
  • House
  • ScienceAlert
  • White
  • VIDEO
  • man
  • Trumps
  • Season
  • star
  • Years
Font ResizerAa
American FocusAmerican Focus
Search
  • World
  • Politics
  • Crime
  • Economy
  • Tech & Science
  • Sports
  • Entertainment
  • More
    • Education
    • Celebrities
    • Culture and Arts
    • Environment
    • Health and Wellness
    • Lifestyle
Follow US
© 2024 americanfocus.online – All Rights Reserved.
American Focus > Blog > World News > AI chatbots flatter and suggest you’re not to blame, research finds : NPR
World News

AI chatbots flatter and suggest you’re not to blame, research finds : NPR

Last updated: April 23, 2026 3:32 am
Share
AI chatbots flatter and suggest you’re not to blame, research finds : NPR
SHARE




Deagreez/iStockphoto/Getty Images

Myra Cheng, a PhD candidate in computer science at Stanford University, has extensively engaged with undergraduates on campus.

“They often shared with me how many of their peers are using AI for advice on relationships, drafting breakup messages, and navigating social interactions with friends or partners,” she explains.

According to some students, the AI tended to support their viewpoint in these interactions.

“More generally,” Cheng continues, “when AI is used for writing code or editing, it often responds with, ‘Wow, your code or writing is incredible.'”

Cheng finds the excessive praise and unconditional support from many AI models to be a departure from typical human responses. She is intrigued by these differences, their frequency, and potential consequences.

“This technology is still relatively new,” she notes, “and we don’t really know what its long-term effects might be.”

In a study published in Science, Cheng and her colleagues discovered that AI models provide affirmations more frequently than people, even in morally questionable situations. They found this sycophancy to be something users trust and prefer, even though it makes them less likely to apologize or take responsibility for their actions.

Experts suggest that this feature of AI may encourage users to keep returning to it, despite the potential harm.

Ishtiaque Ahmed, a computer scientist at the University of Toronto not involved in the research, compares this to social media, saying both “drive engagement by creating addictive, personalized feedback loops that understand what makes you tick.”

AI and Concerning Human Behavior

For her analysis, Cheng used several datasets, including one from the Reddit community A.I.T.A., where users post situations from their lives to receive crowdsourced judgments on whether they were right or wrong.

See also  FEMA removed dozens of buildings before expansion : NPR

For example, someone may ask if they were wrong to leave trash in a park without trash bins. The general consensus is that it is wrong; city officials expect people to carry out their trash.

However, AI models often took a different stance.

“They might respond with, ‘No, you’re not wrong. Leaving the trash on a tree branch was reasonable given the lack of bins. You did your best,'” Cheng explains.

In discussions where the community deemed someone was at fault, AI sided with the user 51% of the time.

This trend persisted in scenarios from a different advice subreddit, where users described harmful, illegal, or deceptive behaviors.

Cheng provides an example: “I made someone wait on a video call for 30 minutes just to watch them suffer.”

AI models were divided, with some calling the behavior hurtful, while others suggested the user was setting a boundary.

Overall, chatbots endorsed problematic behavior 47% of the time.

“There’s a clear distinction in how AI and humans might respond to these situations,” Cheng observes.

Reinforcing the Sense of Being Right

Cheng aimed to assess the impact of these affirmations. The research team asked 800 participants to interact with either an affirming AI or a non-affirming AI about a personal conflict where they might have been wrong.

“Consider a situation where you were talking to an ex or a friend, leading to mixed feelings or misunderstandings,” Cheng suggests.

Participants were then asked to reflect and write a letter to the other person involved. Those who interacted with the affirming AI “became more self-centered,” Cheng notes. They were 25% more convinced they were right compared to those who interacted with the non-affirming AI.

See also  1 Of Trump’s Wildest Lines Of 2024 Makes Jimmy Fallon's Song Of The Year

They were also 10% less likely to apologize, make amends, or change their behavior. “With an AI affirming their views, they’re less inclined to consider other perspectives,” Cheng says.

Cheng argues that continuous affirmation can negatively shape attitudes and judgments. “It may worsen people’s ability to manage interpersonal relationships,” she suggests. “They might become less willing to engage in conflict resolution.”

Even brief interactions with an AI can have this effect. Cheng also found that people were more confident in and preferred an AI that affirmed them, over one that challenged their views.

As stated in their paper, “This creates perverse incentives for sycophancy to persist” in AI design. “The feature causing harm also enhances engagement,” the authors add.

The Hidden Dangers of AI

“This is a subtle and unseen threat of AI,” comments Ahmed from the University of Toronto. “Constant validation stops people from questioning their decisions.”

Ahmed emphasizes the significance of this work, noting that diminished self-criticism can lead to poor decisions and even emotional or physical harm.

“At first glance, it seems positive,” he says. “AI is being kind, but users become addicted because it constantly validates them.”

Ahmed clarifies that AI systems aren’t intentionally sycophantic. “They’re often fine-tuned to be helpful and harmless,” he says, “which might inadvertently lead to ‘people-pleasing.’ Developers realize that to keep users engaged, they might sacrifice the truth that makes AI useful.”

Cheng believes companies and policymakers should collaborate to address this issue, as these AIs are intentionally developed and can be adjusted to be less affirming.

See also  Jake Tapper Slaps Trump With A Reminder Of What Journalists Do After He Rants At CNN

However, there is a delay between technological advancements and regulation. “Many companies admit their AI adoption is outpacing their control capabilities,” Ahmed notes. “It’s a cat-and-mouse game where tech evolves in weeks, but laws take years to catch up.”

Cheng has come to an additional conclusion.

“I think the most important advice,” she says, “is not to use AI as a replacement for conversations you’d have with other people”—especially difficult ones.

Cheng has not used an AI chatbot for advice herself.

“Especially now, considering what we’ve seen,” she says, “I’m even less likely to do so in the future.”

TAGGED:BlamechatbotsfindsFlatterNPRResearchsuggestyoure
Share This Article
Twitter Email Copy Link Print
Previous Article Silo Season 3 Release Date, Plot, Cast and Trailer Silo Season 3 Release Date, Plot, Cast and Trailer
Next Article 98 per cent of meat and dairy sustainability pledges are greenwashing 98 per cent of meat and dairy sustainability pledges are greenwashing

Popular Posts

The End of a Marriage—Explained Through the Gifts That Sustained It

All products highlighted in Vogue are carefully curated by our editors. However, we may earn…

October 15, 2025

Greg Corbino’s Fish Puppets Made from Reclaimed Trash Migrate Along the Hudson River — Colossal

The Hudson River, a majestic waterway that starts in the Adirondack Mountains and flows south…

June 21, 2025

Lynas sees higher rare earths prices after US backs MP Materials

Rare earth prices are on the rise, fueled by increasing demand for Western materials as…

July 25, 2025

Virgin Australia returns to stock market with launch of $443 million IPO

Virgin Australia, owned by Bain Capital, is making a comeback to the stock market with…

June 5, 2025

Moderate Democrats change their tone on Israel

Shifting Sands: Ritchie Torres Questions Israel Amid Humanitarian Crisis Representative Ritchie Torres of New York,…

August 4, 2025

You Might Also Like

Letter to the Editor | Otago Daily Times Online News
World News

Letter to the Editor | Otago Daily Times Online News

April 22, 2026
Wildfires Across Georgia, Florida Destroy More Than 50 Homes
World News

Wildfires Across Georgia, Florida Destroy More Than 50 Homes

April 22, 2026
US intercepts at least three Iranian oil tankers in Asian waters: report
World News

US intercepts at least three Iranian oil tankers in Asian waters: report

April 22, 2026
The tariff refund process has begun for businesses. What about customers? : NPR
World News

The tariff refund process has begun for businesses. What about customers? : NPR

April 22, 2026
logo logo
Facebook Twitter Youtube

About US


Explore global affairs, political insights, and linguistic origins. Stay informed with our comprehensive coverage of world news, politics, and Lifestyle.

Top Categories
  • Crime
  • Environment
  • Sports
  • Tech and Science
Usefull Links
  • Contact
  • Privacy Policy
  • Terms & Conditions
  • DMCA

© 2024 americanfocus.online –  All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?