Students and early-career professionals increasingly treat AI as an “answer machine.” Copy-paste culture encourages System 1 thinking — fast, intuitive, and unchecked — while bypassing the deeper evaluation, reflection, and judgment we need for critical thinking.
Our challenge was to design a companion that interrupts this drift and helps users slow down, question, and think.
We grounded our work in research on persuasive technology and cognitive science. The key insight: AI is excellent at generating and analyzing content, but rarely supports higher-order skills like evaluation and synthesis.
At first, we imagined a “Socratic GPT” that would constantly ask users questions. But prior studies showed that overly assertive interventions frustrate people. Instead, we pivoted: rather than the AI interrogating the user, we’d empower the user to probe the AI.
That shift — subtle, respectful, and user-initiated — became our north star.
Our first prototype was ambitious — packed with colors, scores, links, and prompts — but usability testing revealed that more wasn't better. Users liked the idea of "dig deeper" nudges, yet too many cues created overload and hesitation. We realized we needed to scale back, focusing only on interventions that guided reflection without demanding extra effort.
We then streamlined the system to two highlight types — Orange for objective claims and Teal for subjective statements — supported by a short onboarding flow. We also introduced an interaction model inspired by Bloom’s Taxonomy, giving users simple cognitive “lenses” to choose from. Testing showed the experience was clearer, but accessibility gaps (like icons relying too heavily on color) still caused friction.
In the final iteration, we added clear text labels, improved contrast, and introduced a recap feature to support reflection without slowing the user down. Still, we discovered a subtle problem: the “ask/answer” buttons felt like homework. Even thoughtful nudges can feel like chores if they aren’t natural to the flow — a key insight that shaped our future direction.
The final version of ThinkBot came together as a set of lightweight features that fit naturally into the AI workflow:
- Orange marks objective claims worth fact-checking, while Teal flags subjective statements for personal scrutiny.
- Hovering a highlight opens six cognitive lenses (Remembering → Creating) to prompt deeper engagement.
- A short guide explains the highlight system so users know how to interpret cues.
- At the end of a session, users receive a concise overview to reinforce reflection without slowing them down.
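To make the interaction model concrete, here is a minimal TypeScript sketch of how the highlight-and-lens pieces could fit together. Every name in it (the Highlight shape, BLOOM_LENSES, promptFor, recap) is hypothetical and only illustrates the ideas described above, not our actual implementation:

```typescript
// Hypothetical data model for ThinkBot's highlight-and-lens interaction.
// Names and structure are illustrative sketches, not production code.

// The two highlight categories: objective claims vs. subjective statements.
type HighlightKind = "objective" | "subjective";

interface Highlight {
  kind: HighlightKind; // drives the orange/teal styling
  label: string;       // text label added for accessibility (not color alone)
  text: string;        // the flagged span of AI output
}

// The six cognitive lenses, ordered per Bloom's Taxonomy.
const BLOOM_LENSES = [
  "Remembering",
  "Understanding",
  "Applying",
  "Analyzing",
  "Evaluating",
  "Creating",
] as const;

type Lens = (typeof BLOOM_LENSES)[number];

// A user-initiated probe: hovering a highlight and picking a lens
// yields a reflective prompt, rather than the AI interrogating the user.
function promptFor(highlight: Highlight, lens: Lens): string {
  const focus =
    highlight.kind === "objective"
      ? "fact-check this claim"
      : "weigh this statement against your own view";
  return `${lens}: How would you ${focus}? "${highlight.text}"`;
}

// End-of-session recap: a concise overview to reinforce reflection.
function recap(probed: Highlight[]): string {
  const objective = probed.filter((h) => h.kind === "objective").length;
  return `You examined ${probed.length} highlights (${objective} objective claims).`;
}
```

The key design choice this sketch reflects is that nothing fires automatically: prompts exist only as functions the user invokes, keeping the intervention optional and user-initiated.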
The biggest impact of ThinkBot was showing that subtle, user-triggered interventions can change how people interact with AI without breaking their flow. Through testing, we saw that when highlights and prompts were lightweight and optional, users naturally slowed down to reflect instead of just copying. This was a powerful validation that critical thinking can be encouraged not by forcing behavior, but by gently guiding it.
At the same time, we learned the limits — if nudges feel like assignments, they lose their effectiveness. This balance between subtle support and user autonomy became the project’s most important takeaway.
Looking back, what makes me proudest is that our design really did meet the challenge we set out to solve: helping people pause, reflect, and not just copy whatever AI gives them. We didn’t create a perfect solution — some ideas still felt like homework — but we proved that small, thoughtful nudges can shift behavior in meaningful ways.
This project also reminded me that good persuasive design doesn’t shout. It respects the user, fits into their flow, and quietly encourages better habits. For me, ThinkBot isn’t just a course project. It’s a glimpse of the kind of human-AI partnership I want to keep designing for — one where technology doesn’t replace our thinking, but helps us think deeper, learn better, and feel more in control.