The Feedback Loop
Test, give feedback, iterate.
You've talked to Claude. Something got built. Maybe it even works.
But "works" and "right" are different things. The first version is almost never exactly what you wanted. That's not failure. That's the process.
This module is about the conversation after the first draft - how to test, how to give feedback, and how to iterate without spiraling into perfectionism hell.
The Mindset Shift
Here's the thing most people get wrong: they think iteration means something went wrong.
It doesn't. Iteration IS the process. Professional developers don't write perfect code on the first try. They write something, test it, fix it, test it again. Over and over. That's not amateur hour - that's literally how software gets made.
So when Claude builds something and it's not quite right, you haven't failed. You've completed step one. Now you're on step two: making it better.
The real skill: Building apps with AI isn't about getting it right the first time. It's about being able to clearly articulate what's wrong so the next version is better. That's the skill you're developing in this module. Not coding. Communicating.
How to Actually Test
Before you can give feedback, you need to know what's working and what isn't. Here's how to test like someone who knows what they're doing.
- Click everything. Every button. Every link. Every dropdown. If it looks clickable, click it. See what happens. Does it do what you expected?
- Try to break it. Enter weird data. Leave fields blank. Put numbers where text should go. Submit the form twice. What happens when someone uses it "wrong"?
- Use it like a real person would. Pretend you're your actual user. Go through the whole flow. Is it confusing anywhere? Do you have to think too hard?
- Check it on your phone. If it's supposed to work on mobile, actually look at it on your phone. Not just "it loads" - actually use it. Can you tap the buttons? Can you read the text? Note: if you're still testing on localhost, you'll have to wait until after deployment to test on your phone - then open the deployed URL in your phone's browser and use it for real.
- Show someone else. Have someone who doesn't know what it's supposed to do try to use it. Watch them. Don't explain anything. Where do they get stuck?
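The "try to break it" step doesn't have to be manual - you can ask Claude to write a tiny script that throws weird inputs at your app's logic. A minimal sketch, assuming a hypothetical `validate_signup` function standing in for whatever your app actually checks:

```python
# Hypothetical form validator - a stand-in for whatever your app checks.
def validate_signup(name, age):
    """Return an error message, or None if the input is acceptable."""
    if not isinstance(name, str) or not name.strip():
        return "Name is required."
    if not isinstance(age, int) or age < 0 or age > 130:
        return "Age must be a whole number between 0 and 130."
    return None

# "Try to break it": blank fields, weird data, numbers where text should go.
weird_inputs = [
    ("", 30),            # blank name
    ("   ", 30),         # whitespace-only name
    ("Ada", -5),         # negative age
    ("Ada", "thirty"),   # text where a number should go
]

for name, age in weird_inputs:
    print(repr((name, age)), "->", validate_signup(name, age))
```

If every weird input produces a clear error message instead of a crash or silent acceptance, that part of the app passes the "break it" test.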
The testing checklist:
- Does it load?
- Do the buttons work?
- Does it look right?
- Does it do what it's supposed to do?
- Would my actual user understand how to use it?
- Would my actual user WANT to use it?

If you can answer yes to all of these, you're in good shape. If not, you know what to fix.
The Language of Useful Feedback
The difference between feedback that helps and feedback that doesn't is specificity. Vague feedback makes Claude guess. Specific feedback makes Claude fix.
Here's what vague vs. specific looks like:
| Vague | Specific |
|---|---|
| "It doesn't look right." | "The button is too small to tap on mobile, and the text is hard to read against the background color." |
| "This isn't what I wanted." | "I wanted a list view, but this is showing cards. Can you change it to a simple list with one item per line?" |
| "It's not working." | "When I click the submit button, nothing happens. I expected it to show a confirmation message." |
| "Make it better." | "The spacing feels cramped. Can you add more padding between each section?" |
| "I don't like it." | "The colors feel too corporate. Can we try something warmer - maybe oranges and creams instead of blues and grays?" |
Notice the pattern? Good feedback includes WHAT is wrong and, when possible, what you want INSTEAD.
Feedback Formulas
When you're stuck, use these templates. Fill in the blanks. They force you to be specific.
When something looks wrong:
The [specific element] looks [problem]. I want it to look [desired state] instead.
When something behaves wrong:
When I [action], [what happens]. I expected [what should happen] instead.
When something is missing:
I don't see [missing thing]. Can you add [what you need] to [where it should go]?
When something is unnecessary:
I don't need [thing that's there]. Can you remove it?
When it's close but not quite:
This is almost right. The part that's not working is [specific part]. Can you change just that to [what you want]?
When you can't articulate it:
Something feels off about [area] but I can't pinpoint it. Can you show me 2-3 variations and I'll pick the one that feels right?
Common Scenarios
Here's what to say in the situations you'll actually encounter:
It works, but it's ugly
"The functionality is right, but the visual design needs work. Can you make it look more modern/clean/professional? Specifically, [mention colors, spacing, fonts, or layout that bother you]."
It looks good, but it doesn't work
"The design is great, but when I [specific action], [what goes wrong]. Can you fix the functionality without changing the appearance?"
It's too complicated
"This has too many steps/options/features. Can we simplify it to just [the core thing]? We can add more later, but I want the basic version first."
It's missing something important
"This is good, but it's missing [specific feature]. Before we go further, can you add [what you need]?"
I changed my mind about what I want
"I know I asked for X, but now I'm thinking Y would work better. Can we pivot? Here's what I'm now imagining: [new direction]."
(This is allowed. This is normal. This is how building things works.)
The Perfectionism Trap
Here's where a lot of people get stuck: they keep iterating forever because it's never quite right.
There\'s a difference between "not done" and "not perfect." You need to know which one you're dealing with.
Not done (keep going):
- It doesn't do the core thing it's supposed to do.
- It breaks when used normally.
- A real user couldn't figure out how to use it.
- It's missing something essential to function.
- It looks so bad that no one would trust it.

Not perfect (maybe stop):
- It works, but you can imagine it being better.
- The colors aren't exactly what you pictured.
- There's a nice-to-have feature you haven't added yet.
- Someone might find a minor issue if they really tried.
- You've been tweaking for more than an hour and the changes are getting smaller.

If it's "not done," keep going. If it's "not perfect," consider stopping.
You can always come back later. Version 1 doesn't have to be the final version. Ship something that works, learn from real usage, then improve. That's how every successful product in the world was built.
The question to ask yourself: If someone used this thing RIGHT NOW, would it solve their problem? If yes, it might be done enough. If no, what's the ONE thing that would make the answer yes? Do that thing. Then ask again.
When to Start Over vs. Keep Fixing
Sometimes the path forward is to keep iterating. Sometimes it's faster to start fresh. Here's how to know:
Keep iterating when:
- The core structure is right, just needs adjustments
- You're making progress with each round of feedback
- The issues are cosmetic or minor functionality
- Claude understands what you want and is getting closer
- You've already invested several rounds of iteration and it's 70%+ of the way there
Start fresh when:
- The fundamental approach is wrong, not just the details
- You've gone in circles for more than 30 minutes
- Claude seems confused and keeps misunderstanding
- You've realized your original request was wrong
- The conversation has gotten so long Claude is losing context
How to start fresh well: Don't just copy your original prompt into a new chat. Take what you learned from the failed attempt and write a BETTER prompt. Be more specific about what you want. Mention what you DON'T want (based on what went wrong). The failed attempt taught you something - use it.
Testing Worksheet
Use this for each round of testing. Fill it out, then turn it into feedback for Claude.
What works:
List everything that's working correctly: _______________________________________________
What's broken:
List anything that doesn't work or causes errors: _______________________________________________
What's not quite right:
List things that work but aren't what you wanted: _______________________________________________
Iteration Tracker
Track your rounds of feedback to see your progress (and catch yourself if you're spiraling).
| Round | What I Asked For | Result |
|---|---|---|
| 1 | | |
| 2 | | |
| 3 | | |
| 4 | | |
| 5 | | |
| 6 | | |
Check yourself: If you've filled in all 6 rows and you're still not happy, stop. Either start a fresh conversation with a clearer prompt, or accept that version 1 is good enough and move on. More than 6 rounds of iteration on the same thing is usually a sign you're either spiraling or your original request was unclear.
Module 4 Complete
You know how to test what Claude builds.
You know how to give feedback that actually helps.
You know when to keep going and when to stop.
You're not chasing perfect. You're chasing done.
What's next - Module 5, "Making It Real": this is where we move from artifacts to actual deployed apps. You'll set up Claude Code, create a GitHub account, and put your app on the internet where real people can use it. This is where it stops being a prototype and starts being a product.
Done beats perfect. Always.