My Work
Word Tag Adaptive Assessment
The client wanted a quiz. I designed a system that makes kids forget they're being tested.
Role
Lead Designer
Timeline
Jan – Jul 2025
Tools
Figma
Impact
100% of kids rated it fun, not a test
The Problem
Measuring learning without breaking play.
Word Tag knew their game was working. They just couldn't prove it.
The app adapted difficulty during gameplay — but slowly. Initial placement used only grade level, which meant a third grader reading at a sixth-grade level and one struggling with simple sentences started in the same place.
The ask was simple: build a 2-minute diagnostic that fits inside a game. But there was a constraint nobody said out loud — a quiz that feels like a quiz defeats the entire purpose of Word Tag.
The Tension
Three goals. One experience.
01
Data Quality
Enough questions to reliably place a learner.
➝ Needs more time
02
Speed
Under 2 minutes. Fast enough to fit inside onboarding.
➝ Needs less time
03
Motivation
Kids must want to keep going — not feel tested.
➝ Needs no pressure
These three goals are in direct conflict. More questions mean better data — but more time and more pressure. We couldn't optimize for all three simultaneously. We had to choose what to protect.
The Decision
We protected motivation first.
If a child feels pressure, they rush. If they rush, the data is wrong. If the data is wrong, the placement fails — and the entire system breaks.
So we designed around a single principle: the child should never feel like they're being evaluated. They should feel like they're competing against themselves.
This changed every downstream decision. Timer design. Question pacing. Feedback mechanics. Distractor structure. All of it was optimized not just for accuracy — but for the feeling of play.
The System
A system that measures without interrupting.
We didn't design two separate tools. We designed two moments inside the game world, each with a different job but the same principle: the child should never feel evaluated.
01
The Framework
Two moments, two jobs: a two-minute diagnostic inside onboarding handles initial placement, and Word Fair handles ongoing progress tracking. Different mechanics, one rule: measurement stays inside the game.

02
Pacing as Design
The hardest design decision was time.
A fixed 2-minute timer created pressure — especially for younger kids. Missing questions because time ran out felt like failure, not play.
We switched to per-question time limits (~5–7 seconds, calibrated from observed response averages during playtesting). This kept the session fast while removing the feeling of racing against a clock.
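The calibration itself is simple arithmetic. A minimal sketch of the idea, assuming per-question response times were logged in seconds during playtesting; the buffer and clamp values here are illustrative, not the team's actual numbers:

```python
# Calibrate a per-question time limit from observed playtest response times.
# Assumption: one logged time (in seconds) per answered question.
import statistics

def calibrate_time_limit(response_times, buffer_s=1.5, floor_s=5.0, ceil_s=7.0):
    """Median response time plus a comfort buffer, clamped to the 5-7s band."""
    median = statistics.median(response_times)
    return min(ceil_s, max(floor_s, median + buffer_s))

# Example: playtest responses clustering around 4-5 seconds
observed = [3.8, 4.2, 4.5, 5.1, 4.9, 4.0, 5.6]
print(calibrate_time_limit(observed))  # 6.0
```

The clamp matters as much as the median: the floor keeps fast readers from getting a punishing limit, and the ceiling keeps the whole session inside the two-minute budget.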

03
Feedback as Motivation
In a test, wrong answers feel like failure. In a game, wrong answers feel like information.
We designed three distinct feedback states — correct, incorrect, missed — each with clear visual and motion cues. The goal wasn't to make wrong answers feel good. It was to make them feel like part of the game, not a judgment.
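In code terms, the three states reduce to one classification, keyed on whether the timer expired before any answer arrived. A minimal sketch; the names are mine, not Word Tag's:

```python
# Classify a response into one of the three feedback states.
# Assumption: a "missed" question is one where the timer expired with no answer.
from enum import Enum

class Feedback(Enum):
    CORRECT = "correct"      # celebratory motion cue
    INCORRECT = "incorrect"  # neutral "information, not judgment" cue
    MISSED = "missed"        # timer ran out; softest cue of the three

def classify(answer, correct_answer, timed_out):
    if timed_out:
        return Feedback.MISSED
    return Feedback.CORRECT if answer == correct_answer else Feedback.INCORRECT

print(classify("cat", "cat", False).value)  # correct
```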

04
Word Fair
Progress tracking had the same constraint: it couldn't feel like a test.
We proposed replacing one gameplay session with Word Fair — a carnival. Six booths. Each booth is a mini-game testing a different vocabulary skill. Kids move through the circuit, collect prizes, and end at a prize booth.
Teachers get six data points across vocabulary breadth and depth. Kids think they went to a carnival.
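The teacher-facing side can be pictured as six booth scores rolled into one report. A sketch under assumptions: the skill labels below are hypothetical placeholders (the source doesn't name the booths), and scoring is a simple fraction correct:

```python
# Roll six booth results into a teacher report: one data point per skill.
# The six skill labels are illustrative placeholders, not Word Fair's real booths.
from dataclasses import dataclass

@dataclass
class BoothResult:
    skill: str
    correct: int
    attempted: int

def teacher_report(booths):
    """Map each booth to a 0-1 score; kids see prizes, teachers see this."""
    return {b.skill: round(b.correct / b.attempted, 2) for b in booths}

booths = [
    BoothResult("synonyms", 8, 10),
    BoothResult("antonyms", 7, 10),
    BoothResult("context clues", 6, 10),
    BoothResult("definitions", 9, 10),
    BoothResult("word parts", 5, 10),
    BoothResult("usage", 7, 10),
]
print(teacher_report(booths))  # six skills, six scores
```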

The Impact
They wanted to play again. That was the proof.
100%
Of children rated the diagnostic as fun
3
Rounds of playtesting with children aged 7–13
7
Mini-games, each mapping to a distinct vocabulary skill
5–7s
Per-question timing, calibrated from real playtesting data
Reflection
Motivation is a design constraint, not a nice-to-have.
This project changed how I think about measurement. The instinct when designing assessments is to optimize for data quality — more questions, more formats, more precision. But with children, that instinct is backwards.
If the experience kills motivation, the data is wrong anyway. A stressed child doesn't perform the same as an engaged one.
The design lesson I'm taking forward: in learning systems, engagement isn't decoration. It's infrastructure.


