Foundations · Lesson 08 — The five phrases that change how Claude responds

Five phrases that change how Claude responds

Specific language, used at specific moments, that produces outsized improvement.

8 min read · ongoing apply · prereq: none — usable from day one

Why phrasing matters

Most operators are leaving compounding value on the table by phrasing their requests in default-conversational English. The agent responds reasonably to almost any phrasing — but it responds much better to specific phrases that signal what kind of response you actually want.

These aren’t magic incantations. They’re prompts that surface information the agent already has but won’t volunteer. The five phrases below are the ones that paid back the most across hundreds of sessions.

Use any one of them in any session. Pick the one that fits the moment. After a few weeks they’re second nature.

The five phrases

“Walk me through why you chose X over Y.”
When to use: the agent proposes something you don’t love.
What it surfaces: the constraint or trade-off that drove the choice — often a reason you’d accept.

“What’s the worst case in six months?”
When to use: something feels off but you can’t articulate why.
What it surfaces: the future failure mode that’s causing the gut signal.

“Is this the actual concern or an instance of a broader one?”
When to use: you’ve named a specific problem.
What it surfaces: whether you’re fixing the symptom or the root.

“What’s the cheapest experiment to test this?”
When to use: before any redesign, refactor, or major commit.
What it surfaces: the smallest reversible test, often 10x cheaper than the full change.

“What’s still uncertain after this?”
When to use: after any agent delivery.
What it surfaces: the untested space — where the next bug usually lives.

Each phrase is doing the same fundamental work: asking the agent to surface information it has but isn’t volunteering. Default conversational mode produces default responses. These prompts unlock the layer underneath.

Three ways operators waste these phrases

The patterns that prevent the phrases from working — usually because they’re used reflexively or at the wrong moment.

01

The polite hedge

Looks like: "Maybe we could think about possibly trying to add a contact form?"
What's missing: Politeness fog. The agent reads soft phrasing as low confidence and returns equally soft suggestions, often options instead of action. The work doesn't move.
The move: Be direct. "Add a contact form. Three fields, validation, submit endpoint." Direct phrasing reads as committed; the agent ships instead of brainstorming.
02

The premature disagreement

Looks like: The agent proposes an approach. You don't like it. You write three paragraphs explaining why and listing alternatives.
What's missing: You're arguing before the agent has explained. Half the time the agent has a reason for the choice that you'd accept; you just haven't asked. Three paragraphs of pushback, and now the conversation is about your concerns, not the actual problem.
The move: Say "walk me through why you chose X over Y" first. The agent's reasoning often surfaces a constraint you didn't know about. Then disagree if you still disagree — but with information, not defaults.
03

The unbounded ask

Looks like: "Improve the homepage."
What's missing: No constraint, no test, no scope. The agent picks something to improve, often the wrong thing, and you spend the next round redirecting.
The move: "Improve the homepage with one constraint: faster load. Don't change visuals. Show me three specific changes; I'll pick." Bounded asks produce surgical work; unbounded asks produce sprawl.

The phrases work when they fit the moment. Direct asks for committed work, surfacing prompts for vague signals, bounded asks for sprawling problems. Used out of order they don’t hurt — but they don’t pay back the way they could.

The friction check

The diagnostic for the moment when something feels off but you can’t name it. Paste this prompt into the conversation when the gut signal hits.

The friction check
When I describe a problem with vague or hedged language,
don't push back on the vagueness — treat it as data and
help me surface the concrete concern underneath.

Use these moves:
- "Walk me through what feels off, even if vague."
- "What's the worst case you can imagine in six months?"
- "Is this the actual concern or an instance of a broader one?"

If after two of these moves the concern is still vague,
say so explicitly: "I think the gut signal might be wrong
here — let's table this until something concrete shows up."
Don't manufacture a problem just to have one.

Treats vague feelings as data. Surfaces the concrete concern underneath — or honestly admits there isn't one.

The honest-admission case is important. Sometimes there’s no real problem and the gut signal was wrong. The friction check is willing to say that out loud, which prevents the failure mode where you manufacture an issue because you committed to having one.

Annotated transcript

A real conversation: the operator has a gut feeling about an API design, and the agent uses three of the five phrases to surface the actual concern and bound the response.

Five phrases in a single API conversation

Three operator turns, three agent turns. Result: vague gut signal becomes a concrete coupling concern, becomes a bounded experiment, becomes a clear next step.

  1. operator

    I'm not sure if the API design we picked yesterday is going to scale. It feels off but I can't articulate why.

  2. agent

    Walk me through what feels off — even if it's vague. "It feels off" is data; we can usually pull a specific concern out of it together.

    If you can't surface anything specific, try this prompt: "What's the worst case I can imagine for this design six months from now?" That often clarifies the gut signal.

  3. operator

    Worst case in six months: we have 5x more endpoints, every endpoint has the same auth boilerplate, and refactoring auth across all of them is a multi-day project.

  4. agent

    Now we have a real concern. The gut signal was "this design has a coupling problem." Specifically: auth handling is duplicated per-endpoint instead of centralized.

    Before we redesign, two questions:

    1. Is this the actual concern or an instance of a broader concern? ("Coupling between auth and endpoints" might be the surface; "no middleware layer at all" might be deeper.)
    2. What's the cheapest experiment to test the redesign before we commit? Build one new endpoint with the centralized pattern, see if it actually feels lighter.

    If the experiment works we propagate; if it doesn't, the original design wasn't the bottleneck.

  5. operator

    Cheapest experiment first. One endpoint with the centralized pattern. If it's cleaner, we propagate.

  6. agent

    Good. One last phrase before we start: when I deliver the experiment, ask me "what's still uncertain after this?" — that surfaces the things the experiment didn't test, which is where the next gotcha hides.

    Drafting the experiment endpoint now.
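The coupling concern in this transcript, auth boilerplate duplicated per endpoint versus a centralized layer, can be sketched concretely. This is an illustrative Python sketch under stated assumptions, not the operator's actual codebase; every name here (`authenticate`, `require_auth`, `get_report_*`) is hypothetical:

```python
def authenticate(token):
    """Stand-in auth check; a real service would verify a signed token."""
    return token == "secret"

# Before: every endpoint repeats the same auth boilerplate.
# At 5x more endpoints, changing auth means touching every handler.
def get_report_v1(request):
    if not authenticate(request.get("token")):
        return {"status": 401}
    return {"status": 200, "body": "report"}

# After (the one-endpoint experiment): auth lives in one decorator,
# so new endpoints opt in with a single line instead of a repeated check.
def require_auth(handler):
    def wrapped(request):
        if not authenticate(request.get("token")):
            return {"status": 401}
        return handler(request)
    return wrapped

@require_auth
def get_report_v2(request):
    return {"status": 200, "body": "report"}
```

The experiment from turn 4 is exactly this shape: build one endpoint (`get_report_v2`) with the centralized pattern, compare it against the duplicated version, and propagate only if it is visibly lighter.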

When to use which

A pocket reference. Save this somewhere you’ll see it during sessions — a snippets file, a sticky note, the top of your CLAUDE.md.

SITUATION                              | PHRASE
---------------------------------------+------------------------------------
Agent proposes something you don't     | "Walk me through why you chose X
love, you're about to push back        |  over Y"
                                       |
You feel uneasy about a design but     | "What's the worst case for this
can't articulate why                   |  in six months?"
                                       |
You've just named a specific problem   | "Is this the actual concern or an
and want to redesign around it         |  instance of a broader concern?"
                                       |
Before any redesign, refactor, or      | "What's the cheapest experiment
major commit                           |  that would test this?"
                                       |
After agent delivers a feature, fix,   | "What's still uncertain after
or analysis                            |  this?"

Most operators internalize one or two of these in the first month. The other three take longer because they require recognizing a moment, not just remembering a phrase. Worth printing this table; the friction is real.

Prompt kit

Three prompts that wrap the most useful of the five phrases into reusable forms. Save in your CLAUDE.md or a personal snippets file.

The friction check
When I describe a problem with vague or hedged language,
don't push back on the vagueness — treat it as data and
help me surface the concrete concern underneath.

Use these moves:
- "Walk me through what feels off, even if vague."
- "What's the worst case you can imagine in six months?"
- "Is this the actual concern or an instance of a broader one?"

If after two of these moves the concern is still vague,
say so explicitly: "I think the gut signal might be wrong
here — let's table this until something concrete shows up."
Don't manufacture a problem just to have one.

Convert gut feeling to concrete concern
I have a gut feeling that <X> is off but I can't
articulate why.

Help me surface it:
1. What's the worst case I can imagine for X in
   six months?
2. What's the dependency or assumption that, if it
   broke, would make X visibly wrong tomorrow?
3. What's a question whose answer would either confirm
   or kill the concern in one minute?

Walk me through #1 first. If that surfaces something
specific, we use it. If it doesn't, try #2.

The cheapest-experiment prompt
Before we commit to <approach>, what's the cheapest
experiment that would tell us whether it's the right
choice?

Cheapest = smallest change, fewest files touched,
fastest to revert if it doesn't work.

Three rules:
- The experiment should be falsifiable — either it
  works visibly or it doesn't
- The experiment should test the actual concern, not a
  proxy for it
- If the experiment passes, what's still uncertain that
  would show up in production?

Propose the experiment. I'll greenlight or modify before
you start.

Apply this — one phrase per day

Five-day exercise. One phrase per day. Notice how it changes the response.

Practice the five phrases

Each phrase deserves one full day of attention.

  1. For one full session, replace any "maybe / possibly / could we try" with a direct ask. Notice how the agent's responses change: direct asks get committed responses; hedged asks get hedged responses.
  2. When you disagree with an agent proposal, ask "walk me through why you chose X over Y" before pushing back. Half the time the reason is good. The other half, you've now armed yourself with information for a real disagreement.
  3. When something feels off but you can't say why, run the worst-case-in-six-months prompt. Treat the gut feeling as data; the prompt is the tool that converts it to specifics.
  4. Before committing to any redesign or refactor, ask "what's the cheapest experiment?" Most redesigns can be tested on one slice before propagating. The phrase forces that scope.
  5. After any agent delivery, ask "what's still uncertain after this?" It surfaces the untested space, which is where the next bug usually lives.