SpeakUp: AI Interview Coach

Helping non-native speakers move from a perfect script to a confident delivery
Overview
I designed an AI coach that helps users bridge the delivery gap by providing moment-level critiques on their verbal performance. By prototyping the AI logic first in Google AI Studio, I ensured the interface was driven by qualitative speech data—like pacing and confidence—rather than just surface-level text analysis, transforming high-stress evaluations into a supportive, mastery-based practice environment.
Role
Product Designer & Prompt Engineer
Context
Course project (The AI Augmented Designer, CMU HCII)
Timeline
Nov–Dec 2025
Skills & Tools
AI Product Design · Prompt Engineering · MVP Strategy · Google AI Studio
The challenge

Bridging the gap between writing and speaking

For many non-native speakers, the real hurdle isn’t a lack of professional expertise, but the painful disconnect between their internalized thoughts and their verbal delivery under pressure. While users often rely on text-based preparation as a safety net, that "perfectly phrased" content frequently crumbles during the dynamic flow of a live interview. Standard LLMs like ChatGPT can polish a script, but they leave users with a massive delivery blind spot regarding pacing, clarity, and tone. In a three-week sprint, I aimed to bridge this gap by transforming a standard LLM into a supportive voice-AI coach that helps users move beyond memorization and toward genuine, moment-level mastery.

Research & Discovery

Validating the AI logic before the interface

I led with an "Intelligence-First" methodology because I recognized that for an AI-driven product, the interface must be a direct reflection of the underlying AI behavior. To bring this to life, I sketched a storyboard to map the user’s emotional journey from the anxiety of the "delivery blind spot" to the relief of receiving objective, transcript-based coaching.

A hand-drawn, seven-panel storyboard arranged in two rows that shows a user named Vanessa progressing from interview anxiety to confident mastery.
This storyboard illustrates a user's transition from the anxiety of a delivery blind spot to a confident, mastery-based practice loop through the SpeakUp experience.

To prove this journey was technically feasible, I used Google AI Studio to verify whether an LLM could move beyond simple text editing to analyze the qualitative nuances of speech, like pacing and confidence. By engineering specific system instructions, I proved the AI could successfully label a transcript with STAR method beats and generate accurate delivery scores. This validation of the core logic was essential before I invested time in high-fidelity interface design.

A prototype dashboard showing an AI coaching summary, delivery scores, a storytelling checklist, and a highlighted transcript analysis.
The early prototype validated that the AI could accurately evaluate qualitative audio metrics and structural storytelling components before I moved into high-fidelity interaction design.
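To make the "Intelligence-First" step concrete, here is a minimal sketch of the pattern described above: a system instruction that steers the model toward coaching output, plus a guard that validates the model's JSON before it drives any UI. The prompt wording, JSON schema, and function names are illustrative assumptions rather than the project's actual artifacts, and the model reply is mocked in place of a live Google AI Studio call.

```python
import json

# Hypothetical system instruction (a sketch, not the exact prompt
# engineered in the project): push the model toward qualitative
# speech coaching rather than text editing.
SYSTEM_INSTRUCTION = """\
You are a supportive interview coach. Given a timestamped transcript,
label each segment with its STAR beat (Situation, Task, Action, Result)
and score Confidence, Clarity, and Pacing from 1 to 5.
Respond with JSON: {"beats": [...], "scores": {...}}.
"""

def parse_coach_response(raw: str) -> dict:
    """Validate the model's JSON reply before it reaches the interface."""
    data = json.loads(raw)
    assert {"beats", "scores"} <= data.keys(), "missing top-level keys"
    for beat in data["beats"]:
        assert beat["label"] in {"Situation", "Task", "Action", "Result"}
    return data

# A mocked model reply, standing in for a real API response:
sample = json.dumps({
    "beats": [
        {"start": 0.0, "end": 4.2, "label": "Situation"},
        {"start": 4.2, "end": 9.8, "label": "Action"},
    ],
    "scores": {"Confidence": 4, "Clarity": 3, "Pacing": 5},
})

result = parse_coach_response(sample)
print(result["scores"]["Clarity"])  # → 3
```

Validating the schema at the boundary like this is what lets the interface trust the AI's output, which was the whole point of proving the logic before the pixels.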
Design & Iteration

Choosing coaching over evaluation

My process was a series of rapid pivots to balance technical depth with user psychology as I moved from a data-heavy report to a supportive coaching tool.

Stripping back the "data dump"

My first prototype was a dense report with a numerical grade, but testing revealed that static summaries told users how they did without explaining how to improve.

An early-stage interface displaying a text-based coach's summary, categorical delivery scores, and a transcript with general performance highlights.
This first iteration focused on high-level scores and summaries, but testing showed users needed more specific guidance to actually improve their performance.

Strategic pivot to "the moment"

In the next iteration, I prioritized time-stamped highlights as the primary interaction to provide a specific roadmap for improvement. By "locating" the feedback within the transcript and refining the AI's tone to be more supportive, I transformed the experience from a high-pressure test into a constructive coaching session.

A revised dashboard where interactive feedback cards for clarity and confidence are linked to specific timestamps within the user’s spoken response.
Pivoting to timestamped highlights transformed the tool into a supportive coaching session by providing specific, actionable feedback mapped directly to the transcript.

The realism check

I introduced a "Camera-On" feature despite worrying it would add pressure: testers found that seeing themselves made the practice feel more realistic and better prepared them for actual interviews.

A practice recording screen featuring a camera view with a timer and recording controls below.
Adding a camera-on feature helped users acclimate to high-stakes interview environments while maintaining psychological safety.
The Solution

A safe space for moment-level practice

Time-Stamped Coaching Bubbles

Unlike generic summaries, this feature links AI critiques directly to specific phrases in the transcript. By "locating" the feedback, users can see exactly where their delivery faltered, making the path to improvement immediate and specific.

A digital interface showing a transcript with a highlighted sentence and a corresponding feedback card in a side panel labeled "Clarity."
AI critiques are linked directly to specific transcript phrases to show users exactly where their delivery can improve

Qualitative Delivery Analysis

The AI analyzes three qualitative pillars—Confidence, Clarity, and Pacing—alongside Structure, to help users refine their professional presence beyond simple grammar fixes.

Four white squares with neon green icons representing Confidence, Clarity, Pacing, and Structure with brief descriptive subtitles.
The coach evaluates delivery through Confidence, Clarity, Pacing, and Structure to move beyond basic grammar corrections.
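Of these pillars, Pacing is the most mechanical: it can be grounded in simple transcript arithmetic. A minimal sketch, assuming the transcript text and recording duration are available; the thresholds and function names are my own illustrations, not values from the product.

```python
def words_per_minute(transcript: str, duration_seconds: float) -> float:
    """Rough pacing metric: spoken words per minute."""
    word_count = len(transcript.split())
    return word_count / (duration_seconds / 60.0)

def pacing_flag(wpm: float, low: float = 110, high: float = 170) -> str:
    """Bucket a WPM value into coaching-friendly labels.
    Thresholds are illustrative, not the project's calibration."""
    if wpm < low:
        return "slow"
    if wpm > high:
        return "rushed"
    return "conversational"

answer = "I led the migration and we shipped two weeks early"
wpm = words_per_minute(answer, duration_seconds=4.0)
print(round(wpm), pacing_flag(wpm))  # → 150 conversational
```

Grounding one score in an objective measurement like this also gives the coach's qualitative judgments (Confidence, Clarity) a credible anchor.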

STAR Structure Validation

The AI identifies whether the user "hit the STAR beats" (Situation, Task, Action, Result). This ensures the user is telling a high-impact story rather than just recounting facts.

An interface displaying a transcript with green highlights corresponding to the "Action" phase of a STAR-structured behavioral response.
This feature validates whether a user successfully incorporated each part of the STAR method into their storytelling.
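The completeness check behind this validation can be sketched in a few lines, assuming the AI has already labeled each transcript segment with a beat; the function name and data shape here are illustrative.

```python
STAR_BEATS = ("Situation", "Task", "Action", "Result")

def missing_star_beats(labels: list[str]) -> list[str]:
    """Return the STAR beats absent from a labeled response,
    in canonical order, so the coach can point at the exact gap."""
    present = set(labels)
    return [beat for beat in STAR_BEATS if beat not in present]

# A response that sets the scene and describes actions and results,
# but never states the Task:
labels = ["Situation", "Action", "Action", "Result"]
print(missing_star_beats(labels))  # → ['Task']
```

Returning the *missing* beats rather than a pass/fail grade keeps the feedback actionable, which matches the coaching-over-evaluation stance above.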

The Mastery Practice Loop

The UI encourages a "try again" mindset. Once a user reviews their highlights, they can instantly re-record to master specific segments, turning the process into a dynamic growth cycle.

A dark dashboard section titled "Top Focus" with three numbered advice cards sitting above "Try Again" and "Next Question" buttons.
Strategic "Try Again" and "Next Question" prompts encourage users to immediately apply feedback and build muscle memory.
Impact

Moving users from anxiety to mastery

In a rapid three-week sprint, I focused on behavioral shifts and qualitative validation to measure success.

  • High-Value Feedback: 100% of testers found the time-stamped coaching bubbles more helpful than a generic summary. Users reported feeling "equipped to try again" rather than just judged.
  • The "Try-Again" Loop: The most significant result was a shift in behavior—every tester chose to re-record their response at least once after reviewing the feedback.
  • Strategic Roadmap: Testing identified a clear need for future iterations, specifically around visualizing progress and comparing attempt histories over time.
Reflection

AI design is about designing the intelligence first

This project solidified my belief that a product's logic should be the foundation of the user experience. By spending my first week in Google AI Studio rather than Figma, I ensured the coach’s "brain" was actually helpful before I designed the "skin".

I also explored the complex psychology of feedback. When a tester mentioned that a grade made him nervous, it taught me that a designer’s job isn't just to provide data, but to navigate the tension between the fear of being judged and the deep need to see progress. Moving forward, I want to keep exploring how to protect psychological safety while driving mastery.

Other projects


Torus Design System & Accessibility

OLI Internship · Design Systems · Accessibility
Architected a scalable, WCAG-compliant foundation to unify Torus' fragmented product UI

Word Tag: Playful Assessment Design

Mrs. Wordsmith · Game-Based Learning · MVP Design
Turned testing into play by designing research-based assessments that kids found engaging and motivating