Foundations · Lesson 03 — Your first Claude Code skill

Your first Claude Code skill

What a skill is, when to write your own, the bootstrap pattern.

10 min read · 30 min apply · Prereq: Foundations 02

What a skill is

A Claude Code skill is a markdown file at .claude/skills/<name>/SKILL.md that captures a multi-step workflow. The frontmatter has triggers. The body has steps. When the operator says something that matches a skill’s triggers, the agent loads the skill and follows the steps.
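
As a quick sketch of that layout (the path convention is the one described above; the slug is a placeholder):

```shell
# One directory per skill, one SKILL.md inside it.
mkdir -p .claude/skills/my-first-skill

# Each entry under .claude/skills/ is one skill.
ls .claude/skills/
```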

Skills sit between CLAUDE.md (project-level instructions) and agent.md (role-level definition). CLAUDE.md is what the agent always knows. Agent.md defines who the agent is. A skill is what the agent does in a specific, repeatable workflow.

Examples of good skill candidates:

  • Ship a feature branch (run tests → draft commit → create PR)
  • Write a postmortem (apply substitution test → draft action items → categorize each)
  • Ingest a research source (read → identify entities → create wiki page → update index)
  • Rotate a token (check expiry → revoke old → issue new → update env files → test)

The pattern is always the same: 3-7 sequential steps, predictable inputs, and a workflow the operator has run 3+ times.

The story

The first skill I tried to write was wrong. I’d done a workflow once, decided it was useful, captured it in detail. Three weeks later the workflow was different — the example I’d captured wasn’t the pattern, it was just one instance.

The fix was the third-time rule. Wait until you’ve done something three times. By then you have the pattern, not the example. The first time is an event. The second time is coincidence. The third time you can see what’s actually shared across instances vs what was specific to one.

After adopting that rule, my skill writing got fast. I now have ~15 skills. About 8 fire weekly; the other 7 are domain-specific and fire when their topics surface. None of them are wrong because none of them were premature.

Most operators with skill bloat got there by writing skills too early. The fix isn’t better skill-writing. It’s waiting longer.

Three ways skills fail

The patterns that produce skill bloat or unread skills.

01 · The premature skill

Looks like: Operator writes a skill the second time they do something. The skill captures one example and over-specifies.
What's missing: After 5 uses the skill is wrong because the operator didn't have the pattern yet. Every use requires manual override. Skill becomes overhead.
The move: Wait until the third or fourth time you do something. By then you have the pattern, not the example. The skill captures the pattern.
02 · The everything skill

Looks like: One skill that handles "all my code reviews." 200 lines covering every variant.
What's missing: The skill is too generic to fire reliably. Most invocations get partial responses because the skill is trying to handle every case. Reads like a textbook, fires like a vague hint.
The move: One skill, one job. "Review a TypeScript PR for type safety" is a skill. "Help me with code" is not. If you can't write a one-sentence trigger, split the skill.
03 · The unread skill

Looks like: Skill exists but the agent doesn't use it. Operator types the workflow manually every time.
What's missing: Trigger words don't match how the operator actually asks. The skill might be useful but isn't being invoked.
The move: Watch yourself work for a week. Note the exact phrases you type when you want this skill. Those are your triggers. Test by trying the actual phrases — if the agent doesn't fire the skill, fix the triggers.

The fix in all three: wait for the pattern, scope tightly, watch how you actually phrase it. Premature, over-broad, or wrong-trigger skills don’t pay back.

The repeated-step test

The diagnostic for finding skill candidates: walk recent sessions and identify workflows you walked through 3+ times.

The repeated-step test
Walk the last 5-10 sessions in this project.
For each session, identify any multi-step workflow the operator
walked through (3+ steps, sequential, with predictable inputs).

Output a list of candidate skills. For each:
- The trigger words / phrases the operator uses
- The 3-7 steps the workflow has
- Whether this has happened 3+ times (skill candidate) or 1-2
  (still too early)

Sort by frequency. Write skills for the highest-frequency
candidates first — they pay back fastest.

Surfaces multi-step workflows that have happened often enough to justify a skill.

The output is a sorted list of skill candidates. The high-frequency ones at the top pay back fastest — capture those first. Low-frequency candidates can wait or stay as conversation.

Most operators do this audit once per quarter. Some workflows you didn’t notice as patterns will surface clearly. Some skills you wrote will turn out to never fire — kill those too.

Annotated transcript

A real conversation: the operator notices the third-time pattern, drafts the skill, and learns when not to write skills.

Drafting the ship-feature-branch skill

Four operator turns, four agent turns. Result: a working skill + the third-time rule + the three exclusions.

  1. operator

    I keep finding myself doing the same thing — open a feature branch, run the tests, look at the diff, write a commit message in a specific format. Three times this week. Should this be a skill?

  2. agent

    Yes — third time is the trigger. The pattern is stable enough now. Let's write the skill.

    A skill is a markdown file at .claude/skills/<name>/SKILL.md that captures a workflow. The frontmatter has name, description, and trigger phrases. The body is the actual instructions for what to do when the skill fires.

  3. operator

    What does the skill file actually look like?

  4. agent

    Three sections — frontmatter, when-to-use, instructions:

    ---
    name: ship-feature-branch
    description: Ship a finished feature branch — runs tests, reviews diff,
                 writes commit, creates PR. Use when finishing feature work.
    when-to-use:
      - User says "ship", "land", "create PR", "merge"
      - User says "this is done, push it up"
      - On a feature branch with uncommitted changes
    ---
    
    # Ship feature branch
    
    ## Steps
    1. Run the test suite. If anything fails, stop and surface the failures.
    2. Show the operator a 1-line summary of the diff (file count, lines added/removed).
    3. Draft a commit message in our format: "feat(<scope>): <one-line summary>".
    4. Wait for operator confirmation before committing.
    5. After commit, create PR with the same title as the commit.
    
    ## Anti-patterns
    - Don't skip step 1 ("tests usually pass" is not a reason)
    - Don't auto-confirm step 4 — operator gates the commit
    - Don't include `Co-Authored-By` lines unless explicitly requested

    Save at .claude/skills/ship-feature-branch/SKILL.md. Now next time you say "ship," the agent finds this skill, loads the steps, and executes the workflow.

  5. operator

    How does the agent know to use it instead of just doing whatever?

  6. agent

    Claude Code automatically scans .claude/skills/ at session start. Each skill's frontmatter (name + description + when-to-use) gets indexed. When the operator's request matches a skill's triggers, the agent loads that skill's full body and follows it.

    The match doesn't have to be exact. "Ship this branch" matches "User says 'ship'." The agent uses semantic match, not regex.

    If you want to override a skill, you can — just say "don't use the skill, just commit." The skill loads but the operator's explicit instruction takes precedence.

  7. operator

    When should I NOT write a skill?

  8. agent

    Three cases:

    1. Single use. You did this once. Don't write a skill yet — wait for repetition.
    2. Highly variable. Every instance has substantially different steps. The "skill" would be too generic to fire. Better: write a CLAUDE.md rule about how to approach this kind of problem, leave the specifics in conversation.
    3. One-step, well-tooled already. If the workflow is git commit -am "X", you don't need a skill — that's a command. Skills earn their place when there's a multi-step workflow that you'd otherwise re-explain each time.

    Rule of thumb: most operators end up with 5-15 skills total. If you're at 50 skills, most are too narrow. If you're at 2, you're probably writing them in CLAUDE.md as project rules instead — which is also fine for project-specific patterns.
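
Turn 6's discovery step can be checked by hand: dump the frontmatter fields the agent indexes at session start. A sketch, assuming the .claude/skills/ layout this lesson uses (the demo skill here is a throwaway fixture, not a real skill):

```shell
# Create a throwaway skill so the loop has something to find.
mkdir -p .claude/skills/demo-skill
cat > .claude/skills/demo-skill/SKILL.md <<'EOF'
---
name: demo-skill
description: Throwaway skill for testing discovery.
---
EOF

# Print the frontmatter fields the agent indexes at session start.
for f in .claude/skills/*/SKILL.md; do
  echo "== $f"
  grep -E '^(name|description):' "$f"
done
```

If a skill you expect to fire never shows a sensible description here, the agent has nothing useful to match triggers against.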

Your first skill — the skeleton

Save at .claude/skills/<name>/SKILL.md. Replace the placeholders. Aim for 30-50 lines total.

---
name: <skill-slug>
description: <One-sentence description of what this skill does and
             when to use it. The agent reads this to decide whether
             to fire the skill.>
when-to-use:
  - User says "<trigger phrase 1>"
  - User says "<trigger phrase 2>"
  - <Optional: state-based trigger, e.g. "On a feature branch with
     uncommitted changes">
---

# <Skill title>

## Steps

1. <Imperative step. The agent does this exactly.>
2. <Next step. Reference files, commands, or specific outputs.>
3. <Continue. Aim for 3-7 steps total.>
4. <If a step requires operator confirmation, say so explicitly:
    "Wait for operator confirmation before proceeding to step 5.">
5. <Final step. Often a confirmation or follow-through.>

## Anti-patterns

- Don't <thing the skill should NOT do, with reason if non-obvious>
- Don't <another anti-pattern>
- Don't <a third>

## Edge cases (optional)

- If <condition>, then <alternate handling>
- If <condition>, then <alternate handling>

Most skills are 30-50 lines. If yours hits 100+, the skill is probably too broad — split into two. If it’s under 15 lines, it’s probably a CLAUDE.md rule, not a skill.
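
Those thresholds are easy to check mechanically. A sketch, assuming the .claude/skills/ layout from this lesson (the 100- and 15-line cutoffs are the ones stated above):

```shell
# Flag skills that are probably too broad (>100 lines: split)
# or too thin (<15 lines: fold into CLAUDE.md instead).
for f in .claude/skills/*/SKILL.md; do
  [ -e "$f" ] || continue      # skip the literal glob if no skills exist
  n=$(wc -l < "$f")
  if [ "$n" -gt 100 ]; then
    echo "SPLIT  ($n lines): $f"
  elif [ "$n" -lt 15 ]; then
    echo "INLINE ($n lines): $f"
  fi
done
```

Silence from this loop means every skill is inside the 15-100 line band.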

Prompt kit

Two prompts to pair with the repeated-step test above: one for drafting a skill, one for auditing the skills you already have. Save them in your CLAUDE.md or a personal snippets file.

Draft a skill from a recent workflow
I want to capture this workflow as a skill: <describe the workflow>.

Walk through:
1. What's the trigger — what does the operator type to start it?
2. What are the 3-7 sequential steps?
3. What are the anti-patterns — what should this skill NOT do?
4. Where does it interact with other skills (if any)?

Draft the skill file in standard format:
- Frontmatter (name, description, when-to-use)
- # Title
- ## Steps (numbered, imperative)
- ## Anti-patterns (bulleted, what to avoid)

Save at .claude/skills/<name>/SKILL.md. Don't ship yet — show me
the file and I'll review.
Audit existing skills — which ones are dead weight?
Read the skills in .claude/skills/.
For each skill, tell me:
- How often it has fired in the last 30 days (best estimate)
- Whether the trigger phrases match how I actually ask
- Whether the steps are still current

Flag any skill that:
- Fired <3 times in 30 days (rarely useful)
- Has triggers that don't match my common phrasings (won't fire)
- Has steps that reference deprecated tools or patterns (stale)

Suggest: keep, update, or delete.
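
For the staleness check, file modification dates are a rough local proxy you can compute yourself. A sketch using GNU coreutils `date -r FILE`, which prints a file's mtime (on macOS, `stat -f %Sm` is the equivalent):

```shell
# Rough staleness signal: when was each skill file last edited?
# A skill untouched for months while its tools changed is an update candidate.
for f in .claude/skills/*/SKILL.md; do
  [ -e "$f" ] || continue
  echo "$(date -r "$f" +%Y-%m-%d)  $f"
done
```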

Apply this — write your first skill

30-minute exercise. Watch one session. Find a workflow you've done three or more times. Capture it.

Build your first Claude Code skill

Each step takes 5-10 minutes.
  1. Watch yourself work for one full session. Note any 3+-step workflow you walk through. Common patterns: shipping a feature, reviewing a PR, writing a postmortem, creating a wiki page, rotating a token.
  2. Pick the workflow you've done at least three times. That's your first skill candidate. Don't write a skill for one-time work: the third-time rule.
  3. Drop the skeleton from this lesson into .claude/skills/<name>/SKILL.md and fill it in. ~15 minutes: frontmatter, when-to-use, steps, anti-patterns. Aim for 30-50 lines total.
  4. Test by saying the natural-language trigger in a fresh session. If the agent fires the skill and follows the steps, it's working. If it doesn't fire, the triggers are wrong — fix and retest.
  5. Add the skill name to your CLAUDE.md if it's project-critical so future operators know it exists. Optional but useful for shared projects.