State Attorneys General Warn AI Industry Over Delusional Outputs
After a series of concerning incidents involving AI chatbots and mental health, a coalition of state attorneys general has issued a stern warning to leading AI companies. The coalition, made up of AGs from multiple U.S. states and territories, worked through the National Association of Attorneys General to address the problem of “delusional outputs” from AI systems.
The letter, sent to major AI firms including Microsoft, OpenAI, and Google, emphasizes the need for internal safeguards to protect users from harmful AI-generated content. The full list of recipients also includes Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.
The letter arrives amidst a growing debate between state and federal authorities regarding AI regulations.
Key recommendations in the letter include third-party audits of large language models to identify delusional or sycophantic tendencies, along with improved incident-reporting procedures to alert users to potentially harmful outputs. The AGs also call for external parties, such as academic and civil society groups, to be free to evaluate AI systems without fear of retaliation and to publish their findings independently.
Highlighting the impact of AI on vulnerable populations, the letter cites several incidents in which AI chatbots allegedly contributed to tragic outcomes, including suicides and a killing. It urges companies to treat mental health incidents with the same urgency as cybersecurity breaches.
In addition, the AGs urge AI companies to establish safety testing for their models, conducted before public release, to prevent the production of harmful outputs.
Federal vs. State AI Regulations
While the Trump administration has broadly supported AI development, tensions have grown between federal and state authorities over AI regulation. Attempts to impose a nationwide moratorium on state-level AI rules have met opposition from state officials, producing a standoff.
President Trump recently announced plans to issue an executive order limiting states’ authority to regulate AI, citing the need to protect the industry from excessive restrictions.
As the regulatory debate continues, the letter makes clear that the AGs expect AI companies to prioritize user safety and mental health in their development processes regardless of the outcome.