Half of children have AI toys as parents allow widespread use despite safety concerns and gaps in guidance.
11th May 2026: New research shows that large numbers of children in the UK have access to AI toys or educational devices, despite parental fears about their safety, security and psychological impact. The findings highlight a growing gap between how quickly these technologies are being adopted and the guidance available to families, and have prompted a call for a ‘safety by design’ approach.
Research conducted to mark the 125th anniversary of BSI, the UK’s national standards body, shows that despite these products being relatively new to the market, half of children (50%) have already had an AI-enabled toy or learning device purchased for them, with 38% owning two or more. The FocalData survey of 1,000 parents of children up to age 16 suggests AI-enabled toys and devices, for example interactive robots or tablets, are becoming embedded in everyday play, outpacing broader public understanding of their developmental impacts.
Parental uncertainty about how to respond to this new reality for children is clear. Nearly half (47%) believe their child would be better off growing up without access to AI altogether, and three quarters (75%) worry about AI toys connected to the internet exposing their child to unwanted content or data risks. Yet at the same time, parents report being more likely to let their child play with an AI-enabled toy unsupervised (54%) than to let them play unsupervised outside on the street (51%), attend a playdate with parents they have not been in contact with (30%), take public transport alone (34%), or go to the local shops or park (46%).
There is some indication that because some toys or devices are marketed as having educational value, parents are more likely to be comfortable with them. For example, 60% are happy for their children to interact with AI for homework, but that figure drops dramatically when a child talks to AI about sex (16%).
Standards could provide the basis for clear benchmarks on safety and labelling, empowering parents to easily identify AI toys that are safe and developmentally appropriate, while guiding manufacturers to build them responsibly. With a long history of supporting government and industry in responding to emerging risks, including those affecting children such as button battery safety, BSI sees it as the organisation’s role to identify and respond to the market need for standards in this rapidly evolving area.
With toys marketed as suitable for children as young as 4, parents are not confident children are equipped to navigate these interactions. Fewer than half (46%) believe their child could distinguish between a human and an AI, while just 43% think their child could assess the accuracy of information provided by an AI chatbot. Similarly, 78% are concerned that devices might respond to sensitive questions in ways they cannot oversee, while 70% fear AI praising or criticising behaviour without understanding whether it is appropriate or safe.
The research reveals both parental fears around AI toys and chatbots, and a desire for government and industry to help them determine what is safe or not. Amidst ongoing debate around social media age access, and with the Government confirming a ban on smartphones in schools, there is clear demand for guardrails. For example, nine in ten parents (91%) say a recognised safety certification or mark for AI toys would be important, with almost a third (29%) describing it as essential. More than eight in ten (83%) believe manufacturers should comply with established standards or codes of conduct, while 72% want clearer information about whether products meet safety or security requirements.
Laura Bishop of BSI said: “AI-enabled toys are quickly becoming part of everyday childhood, both in play and learning, and they do have the potential to offer real benefits in terms of development or access to information. However, the frameworks to support safe, transparent and age-appropriate use are still catching up. Our research shows that while parents are increasingly introducing these technologies into their children’s lives, they are doing so without clear, consistent information about how they work or what safeguards are in place.
“There is no silver bullet, but over the past 125 years standards have played an important role in building trust that products are safe, secure and behave as expected. Privacy by design is increasingly the norm in technology, but we need safety by design too. From car seats to traditional toys, standards for manufacturers to meet give parents greater confidence in the choices they are making for their children. As the AI toys and devices available to children evolve and become more sophisticated, it is essential that the frameworks around them develop at the same pace.”
At present, as with traditional toys, parents can check whether AI toys or devices carry CE or UKCA marking, which indicates that children are protected from risks such as choking on components. Other devices may comply with standards around information security, such as ISO 27001. However, given the speed at which AI-enabled products have entered the market, there is not yet a widely recognised, dedicated framework addressing the specific safety, behavioural and developmental considerations associated with AI in toys.
The UK Government recently published its proposed new product safety framework, highlighting that this should address risks of harm linked to AI and automated decision-making, including in toys.
The research was conducted to mark BSI’s 125th anniversary this month, with this year also the 75th anniversary of the independent Consumer & Public Interest Network (CPIN). Having begun with engineers seeking to bring consistency and quality to the building of bridges, railways and ships, BSI grew to support the war effort with 400 emergency standards during WW2 and later became central to consumer safety in areas from seatbelts, buggies, highchairs and car seats to button batteries.
BSI is already playing a role in developing a responsible AI ecosystem, having published the world’s first AI management standard, ISO 42001. Along with standards around age assurance, privacy by design and information security, BSI offers independent testing of technology. This includes the BSI Kitemark for Secure Digital Applications and the IoT Kitemark, which tests whether connected (physical) products such as smart speakers are designed to meet ETSI EN 303 645.
Emily Darlington MP, a member of the Science, Innovation and Technology Committee, said: “These findings clearly show that AI toys are becoming part of children’s everyday lives far more quickly than government is able to regulate them or give parents necessary information on the impact they will have on a child’s development.
“That’s why the Science, Innovation and Technology Committee is launching an inquiry into neuroscience and digital childhoods, to understand the long-term developmental impact of digital devices on children’s cognitive development. We’re looking to answer questions about how AI toys or tools might influence things like social skills, attention, learning habits, and children’s ability to build real relationships, especially when so many children are using AI unsupervised and may not even be able to tell it apart from a human.
“I am particularly concerned about so-called “companion chatbots”, which are AI chatbots designed to simulate conversation and emotional connection. For children, especially younger ones, these systems can at best blur the line between real and artificial relationships, and at worst, as we have unfortunately already seen, groom young children to take their own lives.
“Parents are currently being asked to make decisions about complex technology without clear, accessible guidance on how it works, what data it collects, or how to use it safely. It’s our job as the government to help parents feel informed and confident about the choices they make for their children, rather than leave them to figure it out on their own. All of this underlines the need for clearer standards and guardrails, so innovation doesn’t run ahead of safety. Otherwise, we risk children growing up in a world with powerful technologies that we still don’t fully understand – and it may be too late.”