Even now that the data has been secured, Margolis and Thacker argue the incident raises questions about how many people inside the companies making AI toys have access to the data they collect, how that access is monitored, and how their credentials are protected. “There are huge privacy implications from this,” Margolis said. “All it takes is one employee with a bad password, and then we’re back to the same place we started, where it’s all exposed on the public internet.”
Margolis added that this kind of sensitive information about a child’s mind and feelings could be used for horrific forms of child abuse or manipulation. “Frankly, it’s a kidnapper’s dream,” he said. “We’re talking about information that allows someone to lure a child into a dangerous situation, and it’s alarming that anyone has access to it.”
Margolis and Thacker pointed out that, beyond the accidental exposure of its data, Bondu also appeared—based on what they saw inside its admin console—to use Google’s Gemini and OpenAI’s GPT-5, and as a result could be sharing data from children’s conversations with those companies. Bondu’s Anam Rafid responded to that point in an email, saying the company uses “business third-party AI services to generate responses and run certain safety checks, which include securely sending relevant conversational content for processing.” But he added that the company is careful to “minimize what is shipped, apply contractual and technical controls, and operate under business configurations where prompts/outputs are not used to train their models.”
The two researchers also warned that part of the risk for AI toy companies could be that they increasingly use AI to code their products, equipment, and web infrastructure. They said they suspect the insecure Bondu console they discovered was itself “vibe-coded” — created using AI programming tools that commonly introduce security flaws. Bondu did not respond to WIRED’s question about whether the console was built with AI coding tools.
Warnings about the risks of AI toys for children have grown in recent months, but they have mostly focused on the threat that the toys’ conversations will veer into inappropriate topics or even encourage risky behavior or self-harm. NBC News, for example, reported last month that the AI toys its reporters chatted with offered detailed explanations of sexual terms, gave tips on how to acquire and sharpen knives, and even appeared to echo Chinese government propaganda, stating for example that Taiwan is part of China.
Bondu, by contrast, appears to have at least tried to build safeguards into the AI chatbot it gives children access to. The company even offers a $500 bounty for reports of “an inappropriate response” from the toy. “We’ve had this program for over a year and no one has been able to get it to say anything inappropriate,” a line on the company’s website reads.
Yet at the same time, Thacker and Margolis discovered, Bondu left all of its sensitive user data completely exposed. “It’s a perfect illustration of the gap between safety and security,” Thacker said. “Does ‘AI safety’ matter when all the data is exposed?”
Thacker says that before looking into Bondu’s security, he had considered giving AI-enabled toys to his own children, as his neighbor had. Seeing Bondu’s data exposure firsthand changed his mind.
“Do I really want this in my house? No, I don’t,” he said. “It’s just a privacy nightmare.”