Word Tag, a vocabulary game for kids aged 7–13, had strong gameplay but weak ways to measure progress. Placement relied only on grade level, which left huge gaps: some third graders read simple stories, while others tackled advanced texts. Teachers could assign word lists, but there was no validated way to confirm mastery. Quizzes existed, but they were shallow, slow to adapt, and didn’t feel connected to the game experience.
How could we measure vocabulary growth without breaking the flow of play?
Before designing, we reviewed research on vocabulary learning and assessment:
We also synthesized teacher and student feedback from the efficacy study:
These insights shaped our design challenge:
What if assessments could feel like part of the game — quick, adaptive, and even fun?
I designed the diagnostic runner flow in Figma, focusing on how learners moved through questions, received feedback, and completed the session. This design served as the blueprint for a teammate to build a functional web prototype (MVP). We used the MVP in playtesting with children aged 7–13, which allowed us to observe real interactions beyond a clickable mockup and validate the feasibility of the adaptive runner design.
Across two rounds of MVP playtesting, the diagnostic proved intuitive, but testing also surfaced several stress points that reduced motivation:
These findings directly informed the next round of refinements.
To balance accuracy with engagement, I refined the diagnostic design based on playtesting insights:
While the diagnostic gave learners a stronger start, teachers still needed a way to track progress over time. That led us to design Word Fair — a celebratory event for measuring vocabulary mastery without breaking the flow of play.
I created the event flow for Word Fair: when it would happen, how it was announced, and how players would enter a new carnival plaza for one session. From there, the plaza guided learners through a circuit of mini-games before ending with a prize booth.
Alongside the framework, we proposed six mini-games, each testing a different vocabulary skill (synonyms, definitions, word-in-context, picture matching, odd-one-out reasoning, and real vs. fake words). Once approved, I worked with a teammate to build out game mechanics and the feedback interactions for right, wrong, and missed answers.
We ran exploratory playtests with a small group of children, focusing on qualitative feedback and fun ratings.
At this stage of the capstone, our deliverable was insights and recommendations rather than full refinements. Based on playtesting, we advised the product team to:
These recommendations gave the client clear next steps to improve clarity, feedback, and engagement if the mini-games were taken forward.
We designed Word Tag’s assessments as a connected system, giving learners a strong start and teachers meaningful checkpoints over time.
A short runner game used adaptive logic to quickly estimate each learner’s vocabulary level, so personalization began on day one instead of after weeks of play.
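For illustration, one common way to implement this kind of adaptive logic is a simple staircase: difficulty steps up after a correct answer and down after a miss, and the learner's level is estimated from where they settle. The sketch below is hypothetical; the function names, step rules, and settling heuristic are my assumptions, not Word Tag's production algorithm.

```python
# Hypothetical staircase sketch of an adaptive diagnostic (illustrative
# only; not Word Tag's actual estimation logic).

def run_diagnostic(answer_fn, levels=10, start=5, max_questions=12):
    """Estimate a learner's vocabulary level on a 1..levels scale.

    answer_fn(level) -> bool: True if the learner answers a question
    at that difficulty correctly. Difficulty steps up on a correct
    answer and down on a miss.
    """
    level = start
    history = []
    for _ in range(max_questions):
        correct = answer_fn(level)
        history.append(level)
        if correct:
            level = min(levels, level + 1)
        else:
            level = max(1, level - 1)
    # Estimate from the back half of the run, after the staircase has
    # had time to converge toward the learner's ability.
    settled = history[len(history) // 2:]
    return round(sum(settled) / len(settled))

# Example: a learner who reliably answers questions at level 7 or below.
estimate = run_diagnostic(lambda lvl: lvl <= 7)
print(estimate)
```

Because the staircase oscillates around the learner's threshold, the estimate lands near the true level (here, around 7 to 8) after only a dozen questions, which is why this family of methods suits a short, first-session diagnostic.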
A carnival-style progress check replaced daily gameplay for one session. Six mini-games, each mapped to a vocabulary skill, turned assessments into a celebratory event that felt engaging for learners and informative for educators.
Together, these two assessments formed a research-based system that supported both personalized onboarding and long-term progress tracking — measuring vocabulary growth without breaking the flow of play.
Our work showed that assessments don’t need to feel separate from gameplay — they can become part of it.
What kids said during playtesting:
For the product team, the project delivered validated prototypes and clear recommendations, grounded in research and child playtesting.
By thinking in systems, we designed a way for Word Tag to support personalized starts and long-term growth tracking in a way that was engaging for learners and valuable for teachers.
Designing for children in a game setting pushed me outside my comfort zone. I usually work on interface design, but here I had to think about playability and motivation. I realized that making a game playable isn’t only about art or polish — it’s about designing flows and feedback that kids can actually understand and enjoy.
Through playtesting, I also saw that motivation isn’t just about making games fun or easy. Kids wanted to keep playing because they wanted to master challenges and prove their skills. At the same time, I noticed how discouraging feedback could undermine confidence. Striking that balance — enough challenge to drive mastery, but feedback that builds confidence — is exactly the kind of design tension I want to keep exploring.
This project also helped me understand my own design identity more clearly. My strength lies in designing systems, flows, and clarity that make playful ideas usable and motivating. That perspective will continue to guide me as I design at the intersection of learning science and play.