Artificial intelligence (AI) has made its way into children’s toys, with devices such as Gabbo, a small, fluffy robot designed to chat with kids. Recent studies, however, have raised concerns about the risks these AI-powered toys pose. Even so, some experts believe that, with proper regulation and oversight, such toys could offer children educational and developmental benefits.
Jenny Gibson, a researcher at the University of Cambridge, acknowledges the risks of AI toys but stresses their potential benefits as well. She likens them to adventure playgrounds, where children may get hurt but also learn important skills. In her view, banning AI toys outright could deny children opportunities to learn about AI technology and to develop social and cognitive skills.
In a study, Gibson and her colleague Emily Goodacre observed children under six years old interacting with Gabbo. The toy often misunderstood the children, failed to recognize their emotions, and struggled to engage in developmentally important types of play. Despite these shortcomings, the researchers believe that, under stricter regulation, toy-makers could program AI devices to respond more appropriately and to foster social interaction.
Companies such as Curio Interactive, Little Learners, and FoloToy offer a range of AI-powered toys for children, from robots to bears and puppies. These toys are built on large language models, such as the one behind ChatGPT, and promise to engage children in educational conversations. However, concerns have been raised about the safety and appropriateness of these interactions, particularly for vulnerable users like young children.
Carissa Véliz, an ethics researcher at the University of Oxford, highlights the lack of safety standards and oversight in the AI toy industry. She warns that without proper regulations, children could be exposed to potentially harmful content or misinformation. Véliz points out that while some companies have implemented safety features and parental supervision tools, there is still a need for more stringent guidelines to ensure the responsible use of AI in children’s toys.
Overall, the debate around AI-powered toys for children continues, with experts calling for a balance between innovation and safety. As the industry evolves, regulators, researchers, and toy-makers will need to work together on guidelines that put children’s well-being and development first. Gibson argues that AI-makers should take responsibility for products aimed at children: they should revoke access for toy-makers who act irresponsibly, regulators should step in to safeguard children’s psychological safety, and parents should supervise children playing with such toys to prevent potential harm.
In response to these concerns, an OpenAI spokesperson said the company requires developers to uphold strict policies when building AI-powered products, and that it does not currently partner with any company selling AI-powered toys for children. The UK government’s Department for Science, Innovation and Technology (DSIT), however, did not respond to questions about the regulation of AI in children’s toys.
The UK government has meanwhile focused on legislation to protect children online, with the Online Safety Act (OSA) coming into effect in July 2025. The act requires websites to block children from accessing harmful content such as pornography. Despite these measures, tech-savvy children can still bypass the restrictions with tools such as virtual private networks (VPNs), which mask their browsing activity.
Recently proposed amendments to the Children’s Wellbeing and Schools Bill sought to bar children in the UK from using social media and VPNs, but they were ultimately rejected. The government has committed to further consultation on these issues.
With AI increasingly present in toys and other consumer products, it is essential that AI-makers prioritize children’s safety and well-being, and that regulators establish clear guidelines to protect children from harm. By working together, AI-makers, regulators, and parents can create a safer environment for children to enjoy technology responsibly.