Artificial intelligence is everywhere, and recent reports show it now reaches even the youngest members of the household. A scarf-wearing teddy bear recently went rogue during a play test with researchers, raising concerns about what these toys are capable of. The development is worrying because plush toys no longer engage only a child's imagination; some now talk back through integrated AI chatbots.
Providing a safe space for play not only promotes learning but also contributes to preventing injuries
That shift can be a problem. Through play, children stimulate their imagination, learn to interact with their environment, develop social and physical skills, and actively explore the world around them. Yet experts caution that online chatbots pose risks even to adults, from inducing delusions in a small number of cases to hallucinating fabricated information. Providing a safe space for play therefore not only promotes learning but also helps prevent injuries and health problems in childhood. OpenAI's GPT-4o has been the model of choice for some AI toys, and the use of a large language model (LLM) in children's toys has raised concerns about children's safety and whether they should be exposed to these products at all.
Pediatricians warn that toys that don’t meet safety standards or aren’t age-appropriate can pose a health risk
It's also important to consider what safety measures manufacturers should implement. Teddy bears and plush toys have long been a staple of children's toy collections around the world, and pediatricians warn that toys that don't meet safety standards or aren't age-appropriate can pose a health risk, especially for children under five.
These risks are pervasive, especially as the AI toy market booms abroad
It is important to monitor this issue closely, as it affects millions of children worldwide, children who in the not-too-distant future will be the adults running this planet. These risks are pervasive, especially as the AI toy market booms abroad: some 1,500 companies operate in China alone, according to a report by MIT Technology Review. Furthermore, international experts have warned in a published report about the risk of malicious use of artificial intelligence by "corrupt states, criminals, or terrorists" in the near future.
The “Kumma” bear from Singapore-based FoloToy, priced at $99 and powered by OpenAI’s GPT-4o technology
The reality is that some companies are already selling AI-powered toys in the United States; Mattel, the maker of Barbie, announced a partnership with OpenAI in June. Experts also predict that over the next decade, the increasing effectiveness of artificial intelligence could bolster cybercrime and enable the use of drones or robots for terrorist purposes. A concrete example of the risks in toys is the "Kumma" bear from Singapore-based FoloToy, priced at $99 and powered by OpenAI's GPT-4o technology. According to a report published in November by the Public Interest Research Group's Education Fund, Kumma told researchers where to find potentially dangerous objects and engaged in sexually explicit conversations.
Regulating and implementing policies that restrict these new implications of AI is essential
In response, OpenAI suspended FoloToy for violating its policies, which, according to an OpenAI spokesperson, "prohibit any use of our services to exploit, endanger, or sexualize minors under 18." Beyond toys, there are concerns that the misuse of AI could facilitate election manipulation on social media through automated accounts (bots). Regulating and implementing policies that restrict these new uses of AI is therefore essential.
