I designed the thinking. Cursor built the interactions.

I joined Perflection AI in April with a clear research brief and a blank Figma file. The product: an AI-assisted coaching platform for golf. The problem: I knew nothing about golf.

In the past, that would have been a real constraint. Domain knowledge shapes everything — the terminology in UI copy, the mental models behind each interaction, the credibility of a prototype when you put it in front of an expert user. A fake-feeling prototype produces fake-feeling feedback.

This time, I didn't fake it.

Content that earns trust

Golf coaching is unusually context-dependent. Coaches don't diagnose a swing in isolation — they factor in a student's injury history, physical condition, experience level, and long-term goals. Generic feedback breaks trust immediately. So the prototype needed content that reflected this reality, not placeholder text.

I used Claude to build out a complete user persona: a senior recreational golfer with a specific background, injury history, and performance goals. From there, I generated realistic student profiles, plausible swing observations, and drill prescriptions written in the language a real coach would use.

The result was a prototype that felt inhabited. When I put it in front of test users, they responded to the actual content — not just the layout. That's a different quality of feedback.

Interactions before every screen was drawn

The traditional sequence goes: research → wireframe → prototype → test. Each handoff has a cost, and for a solo designer without engineering support, the prototype ceiling is usually whatever Figma can simulate.

My process was different. I had complete design screens and a clear interaction logic — but not every state was drawn yet. I gave Cursor my research insights, my design rationale, and my existing screens, then asked it to build the interactions I hadn't implemented yet. It did.

I had a working, testable prototype before I'd finished designing every state.

This isn't about moving fast for its own sake. It's about what becomes possible when the gap between "I have an idea" and "I can test this idea" collapses. The questions you can ask change. The feedback you can get changes.

What this requires

None of this works without upstream clarity.

The reason the Cursor output was usable rather than generic is that the input was specific. Research insights, not vague briefs. Real design rationale, not "make it look good." A complete persona, not a stock user archetype. Designed screens as the foundation, not a rough sketch.

AI amplifies what you bring to it. If you bring noise, you get faster noise. If you bring precision, you get leverage.

The honest version

I'm not arguing that AI replaces domain expertise. I'm arguing that it changes who can acquire enough of it to do good work.

A year ago, designing a credible prototype for a specialized coaching tool — as a non-expert, solo, without an engineer — would have required either cutting corners or significantly more time. Neither produces the best outcome.

The constraint hasn't disappeared. It's just moved. Now the bottleneck is the quality of your thinking, not the hours in your calendar.


Vanessa Chang is a Product Designer at Perflection AI. She builds AI-native products at the intersection of coaching, learning science, and human-AI interaction.


Are you interested in working with me?

Let's build something that works —
and works well.

Open to Relocate

Pittsburgh, PA

Copyright © 2026 Vanessa Chang. All Rights Reserved.
