Designing AI That Teaches You to Think, Not Just Answer

Most AI products are optimized for one thing: getting you to the answer faster. ThinkBot was an experiment in the opposite direction.
The problem it addresses is real. The more fluent AI becomes, the easier it is to outsource judgment entirely — to accept a well-written response without asking whether it's accurate, complete, or even the right question. Psychologists call this a drift toward System 1 thinking: fast, automatic, low-effort. Convenient, but not the same as understanding.
ThinkBot is a browser plugin that sits on top of ChatGPT and introduces just enough friction to interrupt that drift.
The Mechanism
The core interaction is a color-coded overlay. Orange marks objective, verifiable claims. Teal marks subjective or assumptive ones. The distinction is simple, but surfacing it visually forces a question users rarely ask: how confident should I actually be in this?
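As a rough illustration of that interaction, the tagging step can be sketched as a function that labels each claim and picks the matching highlight color. This is purely hypothetical: the article doesn't describe ThinkBot's actual classifier (which would plausibly be model-based), so the keyword heuristic and marker list below are invented for illustration.

```typescript
// Hypothetical sketch of the overlay's claim tagging.
// The real classifier is not described in the article; this
// keyword heuristic stands in purely for illustration.
type ClaimType = "objective" | "subjective";

const HIGHLIGHT_COLORS: Record<ClaimType, string> = {
  objective: "orange", // verifiable claims
  subjective: "teal",  // opinions and assumptions
};

// Words that often signal opinion or hedging (illustrative list only).
const SUBJECTIVE_MARKERS = ["should", "best", "probably", "i think", "likely"];

function classifyClaim(sentence: string): ClaimType {
  const lower = sentence.toLowerCase();
  return SUBJECTIVE_MARKERS.some((m) => lower.includes(m))
    ? "subjective"
    : "objective";
}

function highlightColor(sentence: string): string {
  return HIGHLIGHT_COLORS[classifyClaim(sentence)];
}
```

The point of the sketch is the interface, not the heuristic: whatever does the classifying, the overlay only needs a claim-to-color mapping to render.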
Clicking a highlight opens what we called Thinking Modes — six prompt types drawn from Bloom's Taxonomy. Recall. Understand. Apply. Analyze. Evaluate. Create. Each one generates a question tailored to the highlighted content rather than a correction or alternative answer.
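One minimal way to model the six modes is a lookup from Bloom level to a question template, keyed on the highlighted claim. The template wording below is invented for the sketch; ThinkBot generates questions tailored to the content rather than filling fixed templates.

```typescript
// Illustrative only: a template-based stand-in for the six Thinking Modes.
// ThinkBot's real prompts are generated per-claim, not canned.
type ThinkingMode =
  | "Recall" | "Understand" | "Apply" | "Analyze" | "Evaluate" | "Create";

const MODE_TEMPLATES: Record<ThinkingMode, (claim: string) => string> = {
  Recall:     (c) => `What do you already know that supports or contradicts "${c}"?`,
  Understand: (c) => `Can you restate "${c}" in your own words?`,
  Apply:      (c) => `Where would "${c}" hold, and where might it fail?`,
  Analyze:    (c) => `What assumptions does "${c}" depend on?`,
  Evaluate:   (c) => `How would you verify whether "${c}" is accurate?`,
  Create:     (c) => `What alternative explanation could replace "${c}"?`,
};

function generatePrompt(mode: ThinkingMode, claim: string): string {
  return MODE_TEMPLATES[mode](claim);
}
```

Note what every template has in common: each returns a question about the claim, never a verdict on it, which is the design choice the next paragraph defends.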
The design logic here matters. We deliberately chose questions over explanations. Research on AI-mediated learning consistently shows that questions outperform direct feedback in building durable reasoning skills — because a question withholds the conclusion and forces the user to construct it themselves. ThinkBot's job isn't to tell you what to think. It's to make stopping feel worth it.
What Testing Revealed
We ran multiple rounds of testing with active AI users, using both a Figma prototype and a functional GPT agent. The pattern that emerged was consistent: the nudge only works when it feels like a conversation, not a correction.
Users who experienced ThinkBot as a co-pilot — something curious, not judgmental — engaged with the prompts. Users who felt evaluated by it dismissed them. This shaped every copy and interaction decision in the final design. Tone, in an AI tool designed to build critical thinking, is itself a critical design variable.
The Broader Principle
ThinkBot sits in an underexplored category: AI that strengthens human cognition rather than replacing it. Most AI interaction design optimizes for task completion. ThinkBot optimizes for something harder to measure — whether the user is a better thinker after the interaction than before.
That's a different design brief, and it requires different success metrics. Engagement time isn't the goal. Neither is satisfaction score. The question is whether the cognitive speed bump was worth taking.
I don't think ThinkBot is the final answer to AI over-reliance. But it's a proof of concept for a design posture I find increasingly important: building AI experiences that treat users as thinkers, not just recipients.

