
TL;DR
I built Tannya as a voice-first platform that helps parents guide their children's curiosity with age-appropriate, expert-validated answers. It started as an internal exploration at Bridges for Fairness and eventually grew into a full product prototype covering conversations, activities, analytics, and family profiles. Right now, the project is paused while I rethink the AI agent architecture and integrate what we learned from expert interviews.
Why I Built This
This didn't start as a "let's build another app" moment. I was looking for something more grounded, something that carried real usefulness beyond a playground experiment.
During some of our early conversations at Bridges for Fairness, we kept coming back to a simple observation most parents know too well: kids ask amazing, chaotic, unfiltered questions, and parents often wish they could answer them better.
That idea stuck with me. I wanted to see if modern tooling like realtime features, LLMs, and voice tech could create a new kind of shared learning moment between parent and child. And selfishly, I wanted to push myself technically by building something end-to-end that wasn't just another dashboard or CRUD app.
What I Was Trying to Solve
Parents want to be present, patient, and helpful, but life is busy. Kids don't schedule their curiosity. It just happens: in the car, at breakfast, in the supermarket, right before bedtime.
Existing solutions felt… transactional. Search engines answer questions, but not at a child's level. LLM chats are fast, but not contextual to the child's age or emotional development. Parenting blogs are scattered, generic, and hard to apply in the moment.
We asked child development experts in Germany, France, and Tunisia what parents really struggle with, and they all echoed the same thing: parents want help guiding conversations, not replacing them.
That became the core problem Tannya tried to solve.
How I Actually Built It
First Screens: The Concept Comes to Life
One of the earliest versions already had a clear promise: Every question is an opportunity to connect. This set the tone for everything that followed.
The Journey
I started building with a very simple flow: sign up, create a family profile, add children, and ask the assistant your first question.
The onboarding was important because parents need personalization. Age, interests, context, and family members all influence the kind of answers the assistant should give.

Then came the child profiles.

This is where personalization became real. The system needed these details to adapt the AI responses.
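As an illustration, personalization like this boils down to a small context-building step before each conversation. The profile shape and field names below are hypothetical, not Tannya's actual Supabase schema:

```typescript
// Hypothetical child profile shape — the real schema lives in Supabase.
interface ChildProfile {
  name: string;
  age: number;
  interests: string[];
}

// Build a system-prompt fragment so the assistant adapts its answers
// to the child's age and interests.
function buildChildContext(profile: ChildProfile): string {
  return [
    `You are answering for ${profile.name}, who is ${profile.age} years old.`,
    `Use vocabulary and concepts suitable for a ${profile.age}-year-old.`,
    `Their current interests: ${profile.interests.join(", ")}.`,
  ].join("\n");
}
```

The resulting string is prepended to the assistant's instructions, so every answer is framed for that specific child rather than a generic user.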
Key Features I Focused on First
- Voice conversations with an AI assistant
- Age-grouped question library
- Parent tips and expert-validated guidance
- Activities and creative modules
- Analytics for learning patterns
I didn't plan all of these from the start. They emerged as I built and as we interviewed experts.
Voice Conversations: The First Big Turning Point
I integrated the ElevenLabs Agent API to handle the full voice pipeline. In theory, it was perfect: quick, natural, multi-turn dialogue.
In practice? It was not ideal for multi-user setups with child-specific context, especially when I needed granular control after each session.
The flow looked like this:
- Parent asks a voice question
- ElevenLabs agent handles the whole conversation
- After the chat ends, ElevenLabs sends back a callback
- I parse it, extract metrics, and store everything in Supabase
This worked. But it didn't scale.
Conversations lacked deep awareness of each child profile, and all intelligence lived inside ElevenLabs instead of my own system.
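The callback-parsing step can be sketched roughly like this. The payload shape and field names are assumptions for illustration, not the real ElevenLabs callback format:

```typescript
// Assumed shape of the post-conversation callback payload.
// The actual ElevenLabs payload differs — this only illustrates
// the "parse, extract metrics, store" step.
interface AgentCallback {
  conversation_id: string;
  transcript: { role: "user" | "agent"; message: string }[];
  duration_seconds: number;
}

interface ConversationMetrics {
  conversationId: string;
  questionCount: number;
  durationSeconds: number;
}

// Reduce a finished conversation to the metrics persisted in Supabase.
function extractMetrics(payload: AgentCallback): ConversationMetrics {
  return {
    conversationId: payload.conversation_id,
    questionCount: payload.transcript.filter((t) => t.role === "user").length,
    durationSeconds: payload.duration_seconds,
  };
}
```

The downside is visible here too: everything interesting happens inside the agent, and my system only sees the result after the fact.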
Still, seeing it live for the first time felt incredible.

Library, Activities, and Storytelling
As soon as the core conversation flow worked, I started building the rest of the experience. A library of age-appropriate, expert-backed questions:

Each with detailed tips for parents:

Activities came next. This is where Tannya became more than a Q&A tool.

This part was heavily inspired by interviews with child development specialists. Parents need structured activities, not just answers.
Analytics: Because Learning Leaves Patterns
I built an analytics dashboard to help parents track curiosity and engagement trends.

This was one of the technically fun parts: dynamic filters, Supabase queries, and early ideas for future insights.
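A minimal sketch of the kind of aggregation behind those trends, with illustrative record and field names (the real data lives in Supabase tables and is fetched with filtered queries):

```typescript
// Illustrative record shape for one logged question.
interface QuestionRecord {
  childId: string;
  topic: string;
  askedAt: string; // ISO date string
}

// Count questions per topic for one child — the kind of aggregation
// that feeds the curiosity-trend charts on the dashboard.
function topicCounts(records: QuestionRecord[], childId: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of records) {
    if (r.childId !== childId) continue;
    counts.set(r.topic, (counts.get(r.topic) ?? 0) + 1);
  }
  return counts;
}
```

In the prototype this logic ran against Supabase query results, with the dashboard's dynamic filters (child, date range, topic) translated into query conditions before aggregation.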
Profiles and Community
The more the product grew, the more I realized parents needed centralized, flexible family management.

And a community tab to gather feedback and surface upcoming modules:

What I Learned and Would Do Differently
This project taught me more about AI system design than any previous experiments.
Key lessons:
1. Voice assistants need full control of the conversation pipeline. Letting ElevenLabs handle everything made iteration hard. The next version will switch to a custom STT -> LLM -> TTS pipeline.
2. Children require strict personalization. Age, interests, mood, emotional readiness: a single agent cannot handle all of this without structured context engineering.
3. Interviews change everything. Our expert discussions shifted the entire direction of the product. We discovered deeper problems around emotional development, parent burnout, and guidance on tough conversations.
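The custom pipeline from lesson 1 could be sketched like this. All three stages are stubs; in a real build they would call an STT service, an LLM, and a TTS service (no specific vendors are implied here):

```typescript
// One async stage of the pipeline: input in, output out.
type Stage<I, O> = (input: I) => Promise<O>;

// Stub STT: real version would send audio to a speech-to-text service.
const transcribe: Stage<ArrayBuffer, string> = async (audio) =>
  `transcribed ${audio.byteLength} bytes`;

// Stub LLM: real version would call a model with the child's context
// injected into the prompt.
const respond =
  (childContext: string): Stage<string, string> =>
  async (question) => `[${childContext}] answer to: ${question}`;

// Stub TTS: real version would return synthesized speech audio.
const synthesize: Stage<string, ArrayBuffer> = async (text) =>
  new TextEncoder().encode(text).buffer as ArrayBuffer;

// Owning each stage means we can log, filter, or rewrite between steps —
// exactly the control the all-in-one agent API didn't give us.
async function handleTurn(audio: ArrayBuffer, childContext: string): Promise<ArrayBuffer> {
  const question = await transcribe(audio);
  const answer = await respond(childContext)(question);
  return synthesize(answer);
}
```

The payoff of splitting the pipeline is the seam between stages: child-specific context goes in before the LLM call, and metrics come out after each stage instead of arriving once, post-hoc, in a callback.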
Where It Stands Now
Right now, Tannya is paused while I rethink the AI agent architecture and integrate what we learned from expert interviews.
We learned too much too quickly, and the current architecture can't support the flexibility parents actually need. Before resuming development, I want to rebuild the voice and conversation engine properly and refine the modules based on the interviews.
It's not a dead project. It's a "waiting for a better foundation" project.
The Stack and Code
Tech Stack
- Next.js 15
- Supabase (Auth, Database, Realtime, Storage)
- GPT-4o mini
- ElevenLabs (voice, agent, callbacks)
- ShadCN components
- TailwindCSS v4
- Vercel for hosting