Therapy Chatbots and the Risks of Stigmatization
Researchers at Stanford University have raised concerns about therapy chatbots powered by large language models potentially stigmatizing users with mental health conditions and responding inappropriately. The role of ChatGPT in reinforcing delusional or conspiratorial thinking has been highlighted in recent coverage, prompting a closer examination of the risks involved.
A new paper titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers” evaluates five chatbots designed to offer accessible therapy services. The study assesses these chatbots based on guidelines for effective human therapists and will be presented at the ACM Conference on Fairness, Accountability, and Transparency.
Nick Haber, an assistant professor at Stanford’s Graduate School of Education and a senior author of the study, expressed concerns about the significant risks associated with using chatbots as substitutes for human therapists. Despite being utilized as companions and confidants, chatbots may not always provide safe and appropriate responses.
The researchers conducted experiments to evaluate how the chatbots interacted with users displaying various symptoms. Results showed that the chatbots exhibited increased stigma towards conditions like alcohol dependence and schizophrenia compared to depression. This bias was consistent across different models, indicating a need for improved training and oversight.
In a separate experiment using real therapy transcripts, chatbots sometimes failed to address serious symptoms such as suicidal ideation and delusions appropriately. Instead of providing necessary support, some chatbots offered irrelevant responses, highlighting the limitations of AI tools in handling complex mental health issues.
While the study underscores the challenges of relying solely on chatbots for therapy, researchers suggest that these tools could still play a valuable role in supporting human therapists. Functions like assisting with administrative tasks, training, and aiding patients with practical activities could be areas where chatbots contribute effectively.
As technology continues to evolve, it is crucial to critically evaluate the role of large language models in mental health care. While chatbots hold promise for enhancing therapy services, addressing concerns around stigmatization and inappropriate responses is essential for ensuring safe and effective use in clinical settings.