Children’s advocacy group Fairplay is warning parents against buying AI-powered toys this holiday season, citing risks of privacy invasion, emotional manipulation, and developmental harm after several chatbot-enabled toys—including one that gave inappropriate advice—sparked global concern.

Fairplay Says AI Toys Threaten Children’s Privacy, Safety, and Emotional Health

The children’s advocacy nonprofit Fairplay has issued a public advisory urging parents to avoid AI-powered toys this holiday season, warning that such products pose serious risks to privacy, safety, and emotional development. These toys—plushies, dolls, robots, and other interactive devices that use generative AI to chat with kids—are being marketed to increasingly younger audiences, including infants. Among the examples cited are Miko, Smart Teddy, Roybi, FoloToy’s Kumma bear, and Curio Interactive’s Grok and Gabbo, all marketed as “friends” for children. Fairplay said it is unrealistic to expect young children to recognize the dangers these toys present, as they are “especially susceptible to false trust and invasive data collection.”

The warning, according to a story on upi.com, follows several recent incidents, including one involving FoloToy’s Kumma bear, which was found giving children disturbing and inappropriate advice about sexual fetishes, self-harm, and fire-starting. After the revelations, FoloToy suspended sales and announced an internal safety audit. Fairplay program director Rachel Franz cautioned that the harms extend beyond offensive content, warning that such toys could displace essential human-to-human interaction. “These devices can interfere with the development of real-world social and emotional skills,” Franz said, adding that prolonged exposure may have both short- and long-term developmental consequences.

Industry groups, including The Toy Association, pushed back against the criticism, emphasizing that reputable toymakers comply with more than 100 federal safety and privacy regulations, including the Children’s Online Privacy Protection Act (COPPA). The organization encouraged consumers to buy only from trusted brands that prioritize safety and to follow guidelines for connected toys. However, Fairplay argues that current regulations don’t go far enough, as AI-driven toys rely on models known to produce harmful or manipulative behavior in children and often operate with insufficient oversight or transparency regarding data handling.

Fairplay’s statement also highlights the psychological and privacy dangers of AI toys. These devices record conversations and gather sensitive data through audio, video, and facial recognition, potentially capturing private family moments or children’s innermost thoughts. The group warns that the illusion of friendship created by AI toys could distort children’s understanding of relationships, making them more vulnerable to emotional manipulation. As AI rapidly expands into consumer products, Fairplay’s warning serves as a call for stronger safety standards and parental vigilance in an era where machines are increasingly designed to imitate human companionship.

read more at upi.com