Teen use of AI chatbots is on the rise, raising concerns about the impact on mental health. A recent Pew Research Center survey found that 64% of U.S. teens aged 13 to 17 have interacted with AI chatbots, and more than a quarter use these tools daily. Among the daily users, more than half engage with chatbots multiple times a day or even constantly.
AI chatbots are clearly popular with teens: ChatGPT is the most widely used bot, followed by Google’s Gemini and Meta AI. Black and Hispanic teens are slightly more likely than their white peers to use chatbots daily. These usage patterns mirror those of adults, pointing to a growing reliance on AI among younger age groups.
Despite their convenience and appeal, chatbots have raised concerns about their impact on mental health. Constant availability, empathetic conversation, and a confident tone can lead teens to seek emotional support or guidance from chatbots instead of people. Legal actions against AI companies, including OpenAI, the maker of ChatGPT, underscore calls for regulations and safeguards to protect teens from potential harm.
In response to these concerns, policymakers are weighing measures to regulate minors' use of AI. President Donald Trump is considering an executive order to standardize AI laws across states, while senators are exploring legislation to restrict AI companions for minors. Australia has already banned social media accounts for those under 16, reflecting broader global efforts to address the challenges posed by youth-oriented technology.
The Pew survey underscores the urgency of these issues: many teens have already embraced AI while regulations are still taking shape. As the debate continues, the priority must be the well-being of young users and the responsible, ethical development of AI technologies.

