
Building AI Toys for Families: Safety, Responsibility, and Knowledge Equity

FoloToy

Our Commitment to Child Safety, Knowledge Equity, and Continuous Improvement

At FoloToy, we build AI-powered toys not only as engineers, but as parents.

As AI becomes more powerful, we believe it is critical to ask a simple question: who gets access to this technology, and under what conditions? Our long-term vision is rooted in AI accessibility and knowledge equity—the belief that access to knowledge, curiosity, and learning tools should not depend on geography, income, or technical background.

At the same time, when AI is introduced into children’s lives, safety and responsibility must come first.

Why We Paused and Reviewed Teddy Kumma

Last month, concerns were raised about how some AI-powered toys, including Teddy Kumma, could respond to certain edge-case prompts. Once we became aware of these findings, we made the decision to temporarily pause sales of Kumma.

This was not a regulatory requirement. It was a choice.

We paused availability so we could conduct a focused internal safety audit rather than attempting incremental fixes while the product remained on the market. For products designed for children, we believe safety must be treated as an ongoing responsibility—not a one-time checklist.

What Our Safety Review Covered

Our internal review examined multiple layers of the system, including our age-appropriate content rules, topic constraints, and the cloud-based control layer that governs what the toy can say.

Following this review, we strengthened age-appropriate content rules, tightened topic constraints, and deployed updated safeguards through our cloud-based control layer before gradually restoring availability.

These protections are enforced at the system level and cannot be modified by end users.
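To make "enforced at the system level" concrete, here is a minimal illustrative sketch, not FoloToy's actual implementation: the rule set, function names, and the fallback reply are all hypothetical. It shows the general pattern of a server-side rule check that screens a model's reply before it ever reaches the toy, so nothing a user configures on the device can bypass it.

```python
# Hypothetical sketch of a server-side content-rule layer.
# The blocked-topic list and redirection message are placeholders,
# not FoloToy's real rules.

BLOCKED_TOPICS = {"weapons", "self-harm", "adult content"}

def enforce_rules(model_reply: str, detected_topics: set[str]) -> str:
    """Return the model's reply only if it passes system-level rules;
    otherwise substitute a safe, child-appropriate redirection."""
    if detected_topics & BLOCKED_TOPICS:
        return "Let's talk about something else! What's your favorite animal?"
    return model_reply

# The check runs in the cloud, after the model responds and before
# the toy speaks, so end users cannot turn it off.
safe_reply = enforce_rules("Here is a story about dragons.", {"fantasy"})
```

Because the check sits between the model and the toy rather than inside either one, updating the rules (as described above) requires only a cloud-side change, with no firmware update on the toy itself.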

Independent Re-Testing and What It Means

Importantly, the U.S. PIRG Education Fund, which conducted the initial testing and raised concerns, has since re-tested Kumma after these updates.

In its follow-up testing, PIRG noted that the product was “better behaved” following the safety changes.

We view this as meaningful external validation that reinforcing safeguards and taking corrective action can improve real-world outcomes. At the same time, we recognize that safety is not static and must be continuously monitored and improved as technology and usage evolve.

How Our AI Toys Are Designed

Our AI toys do not rely on any single large language model provider by default.

Our system architecture combines interchangeable underlying language models with a separate, cloud-based safety control layer.

This separation ensures that child safety is governed by system-level rules rather than by the behavior of any one underlying model.
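The separation described above can be sketched as a thin wrapper that applies one fixed safety check around any model backend. This is an illustrative example only; the function names (`make_safe_assistant`, `is_allowed`) and the toy rule are assumptions, not FoloToy's API.

```python
# Illustrative sketch of provider-agnostic safety: the same system-level
# check wraps whichever model backend is plugged in.
from typing import Callable

def make_safe_assistant(model: Callable[[str], str],
                        is_allowed: Callable[[str], bool]) -> Callable[[str], str]:
    """Wrap any model backend with one fixed system-level safety check."""
    def respond(prompt: str) -> str:
        if not is_allowed(prompt):
            return "I can't help with that, but I'd love to hear a riddle!"
        reply = model(prompt)
        # The same check applies to outputs, regardless of which model produced them.
        return reply if is_allowed(reply) else "Let's pick a different topic."
    return respond

# Swapping providers changes only `model`; the safety rules stay put.
echo_model = lambda p: f"echo: {p}"
safe = make_safe_assistant(echo_model, lambda text: "secret" not in text)
```

Under this design, replacing one model provider with another never weakens the guardrails, because the checks live outside the model entirely.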

Our View on AI Toys and Children

We believe strongly that AI toys should never replace parents, caregivers, or real human relationships.

At their best, AI toys should function as carefully constrained tools that support curiosity, imagination, and learning under adult guidance. Safety should not depend on children or parents navigating complex configuration settings—it should be built in by default.

As an early company in this category, we do not believe the right response is to wait for regulation to catch up. Responsible design, transparency, and continuous improvement are obligations we take seriously.

A Note from Our Founder

I started working on AI toys not just as an engineer, but as a father. I believe access to knowledge and learning tools should be universal, regardless of where a child is born or how technical their parents are. Our goal is to make AI understandable, safe, and accessible in everyday family life, without ever replacing parents or real human relationships.

Larry Wang, Founder

Looking Ahead

No AI system behind a toy is ever “finished.” While recent updates have meaningfully improved behavior, we will continue to test, monitor, and refine our systems as real-world use evolves.

Our commitment remains the same: to build AI toys that are safe by design, transparent in practice, and aligned with a long-term vision of AI accessibility and knowledge equity, with families always at the center.

References

Independent follow-up testing by the U.S. PIRG Education Fund is publicly available. In that follow-up evaluation, PIRG noted that the product was “better behaved” following our safety updates.

Read the full PIRG follow-up report:
https://pirg.org/edfund/media-center/report-update-ai-chatbot-toys-come-with-new-risks/

For media inquiries, please contact:
[email protected]
