Foundations · Lesson 06 — Asking Claude to make a report

Asking Claude to make a report

Inputs → research → structured output. The reporting loop in plain language.

10 min read · 20 min apply · prereq: Foundations 05 (writing with AI)

Reports are not documents

A document presents what you already know. A report establishes what was unknown — by going to sources, synthesizing them, and recommending an action. The work the agent does in between is what separates a useful report from a confident-sounding essay.

If the agent never opened a browser, it didn’t write a report. It synthesized from training data, which is months stale, often wrong on numbers, and produces the polished-but-ungrounded output that gives AI reports their bad reputation.

The discipline: pin the methodology before the research starts, pin the output shape so the agent synthesizes instead of dumping, and pin the confidence flags so you can tell what to trust.
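
In practice the three pins fit in a few lines at the top of the ask. A sketch, with wording to adapt per topic:

  Methodology: WebSearch for current state, 3+ sources per
  major claim, quote the source on every number.
  Output shape: 3 findings + 1 recommendation paragraph,
  no bare URL lists.
  Confidence: flag each finding high / medium / low and
  name the weakest claim.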

The reporting loop

Five stages, in order. Skipping any stage produces a different failure mode (see the patterns below).

1. Brief: topic + situation + methodology pinned. Skipped → generic landscape report.
2. Research: agent runs WebSearch, pulls sources, caps at scope. Skipped → stale synthesis from training data.
3. Synthesis: sources combined into findings, not listed raw. Skipped → wall of links / bibliography.
4. Recommendation: reasoning trace + flip condition. Skipped → unsigned opinion in formal clothing.
5. Confidence: per-finding flag + caveats. Skipped → overclaimed certainty across the board.

The most-skipped stage is 5 (confidence). Operators read the report, agree with the recommendation, and never check what the agent was unsure about. The unflagged uncertainty is where the bad decisions hide.
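
A per-finding flag can be as small as a bracket plus one line of justification. An illustrative shape, not a required format:

  Finding 2: <one-paragraph finding> [confidence: medium]
  Why medium: only 2 recent sources, and they disagree on
  one number.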

Three ways AI reports fail

The patterns that produce confident-sounding reports you can't actually act on.

01 · The opinion report

Looks like: You ask for a competitive landscape and the agent writes 800 words of analysis with zero links, zero quotes, zero numbers.
What's missing: No methodology was specified, so the agent did the cheapest thing — synthesized from training data. That synthesis is months stale and you can't audit any single claim.
The move: Pin the methodology before the ask. "Use WebSearch for current state. Pull at least three sources per claim. Quote the source on every number. If you can't find a source, hedge or omit." The agent follows this if you say it explicitly.
02 · The wall of links

Looks like: The agent runs 30 searches and dumps every result into a list. You're back where you started — drowning in tabs.
What's missing: No synthesis was requested. The agent treated "report" as "research" and stopped at the data-gathering step. You wanted a conclusion; you got a bibliography.
The move: Specify the output shape in the brief. "Three findings, each one paragraph, each tied to 2-3 cited sources. One recommendation paragraph at the end. No bare URL lists." The shape forces the synthesis.
03 · The unsigned recommendation

Looks like: The agent writes "we should pursue option B" without showing what changed its mind, what would change it back, or what it's uncertain about.
What's missing: No reasoning trace. The recommendation is asserted, not earned. You can't tell whether to trust it because you can't see how it was made.
The move: Require the reasoning trace as part of the output. "Show the 2-3 considerations that drove the recommendation. Show what would flip your answer. Flag your confidence level." Reports without this are just opinions in formal clothing.
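
The trace itself can be four lines at the end of the report. An illustrative shape, with hypothetical content:

  Recommendation: option B.
  Drivers: lower upfront cost; vendor already in our stack.
  Would flip if: option A ships the missing integration
  this quarter.
  Confidence: medium (pricing is sourced; the roadmap
  claim is vendor-stated).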

The fix in all three: methodology pinned, output shape pinned, reasoning trace required. Without these the agent defaults to opinion-prose; with them it produces a report you can audit.

The methodology check

The diagnostic for reports. Paste this prompt before any research ask; it makes the agent restate the methodology before searching.

The methodology check
Before you start researching, restate the methodology
back to me in this shape:

  Sources: <how many you'll pull, of what kind>
  Synthesis rule: <how you'll combine sources into findings>
  Output shape: <findings count, format, citation rule>
  Scope cap: <when you stop searching>
  Confidence flag: <how you'll express uncertainty per finding>

Wait for my confirmation before searching. If you can't
research it within these constraints, say so — don't start
and improvise.

Forces the methodology to be concrete and confirmable before any source is touched.

Common correction at this stage: scope cap. Operators who don’t bound the search say “use as many sources as needed,” the agent uses 40, and the report becomes unreviewable. 8-12 sources is the right size for most reports. Past 12 you’re writing a literature review, not a report.
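
One way to phrase the cap inside the brief, with numbers you'd tune to the topic:

  Scope cap: 8-12 sources total. Stop at the cap even if a
  claim is still unsourced; flag it low-confidence instead
  of searching further.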

Annotated transcript

A real conversation: the operator asks for an SBA-lending report, the agent forces the methodology pin and the situation context, then self-grades its confidence at the end.

Briefing a small-business lending report

Three operator turns, three agent turns. Result: a usable report with cited sources, per-finding confidence, and a flip condition.

  1. operator

    I need a report on the current state of small-business lending for indoor entertainment venues — what programs exist, typical terms, what's worth pursuing.

  2. agent

    Before I run, three pins:

    1. Methodology — I'll use WebSearch for current programs, pull at least three sources per major claim (program name, terms, eligibility). I won't synthesize from training data alone — too stale for lending terms.
    2. Output shape — three findings (one per major program type), each with a paragraph + cited sources. One recommendation paragraph naming the 1-2 best fits for your situation. Confidence flag on each finding.
    3. Scope cap — 8-12 sources total, not 40. Past 12, returns diminish and the report stops being a report.

    One thing I need from you: your situation in three lines. Time horizon, capital needed, current banking relationships. Without these, the recommendation is ungrounded.

  3. operator

    Time horizon: 12-18 months to deployment. Capital needed: ~$1.4M, mostly for tenant improvements + equipment. Current banking: small regional bank, relationship is good but no SBA experience there.

  4. agent

    Tight enough. One callout: "~$1.4M, mostly TI + equipment" lands you squarely in SBA 7(a) territory and probably out of microloan / 504 territory. I'll weight the report accordingly — 7(a) gets the deepest treatment, 504 gets a comparison paragraph, microloans get a sentence saying why they don't apply.

    Researching now. ~5 minutes. I'll report back with the 8-12 sources I used so you can spot-check.

  5. operator

    Go.

  6. agent

    [~5 minutes later]

    Report delivered. Structure:
    - 3 findings (7(a), 504, USDA B&I) with cited sources
    - 1 recommendation paragraph: 7(a) primary, 504 backup
    - Confidence flag per finding (high / medium / low)
    - 11 sources used; list at bottom

    My self-grade: confidence on 7(a) is high (5 sources, current as of Q1). Confidence on USDA B&I is medium (only 2 current sources; eligibility rules changed last year and I couldn't fully confirm). The recommendation is well-supported, but the USDA fallback is worth re-checking before you act on it.

    What would change my recommendation: if your bank has zero SBA throughput, 504 might beat 7(a) on closing speed. Worth asking them this week.

Report brief template — copy and paste

Save in a snippets file. Paste before any report ask. Fill in the topic and your situation; the methodology block is reusable as-is.

Report brief: <one-line topic>

MY SITUATION (so the report is grounded, not generic)
-----------------------------------------------------
- <time horizon, scope, key numbers>
- <current state — what I have, what I don't>
- <constraints — what's off the table, what's required>

METHODOLOGY
-----------
- Use WebSearch for current state — topic is
   time-sensitive, training data is stale
- At least 3 sources per major claim
- Cap at 8-12 sources total
- Quote the source on every number / specific claim
- If a claim is helpful but unsourced, hedge or remove

OUTPUT SHAPE
------------
- 3 findings (one paragraph each)
- 1 recommendation paragraph
- Per-finding confidence flag (high / medium / low)
- One sentence: "what would change my recommendation"
- Source list at the bottom (URL + 1-line description)

RESTATE METHODOLOGY before searching, then research,
then deliver. After delivery, self-grade confidence
and flag the weakest source / claim.

The flip-condition sentence does most of the work. A report that says "recommend X, would flip if Y becomes true" gives you a concrete trigger for revisiting it later. Reports without it go stale silently.
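
Filled in, it is one line. A hypothetical example in the spirit of the transcript above:

  Recommend 7(a) as primary. Would flip to 504 if the bank
  confirms zero SBA 7(a) closings in the past 12 months.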

Prompt kit

Three prompts for briefing, running, and auditing reports. Save in your CLAUDE.md or a personal snippets file.

The methodology check
Before you start researching, restate the methodology
back to me in this shape:

  Sources: <how many you'll pull, of what kind>
  Synthesis rule: <how you'll combine sources into findings>
  Output shape: <findings count, format, citation rule>
  Scope cap: <when you stop searching>
  Confidence flag: <how you'll express uncertainty per finding>

Wait for my confirmation before searching. If you can't
research it within these constraints, say so — don't start
and improvise.
Source-pinned report brief
I need a report on: <topic>

My situation (so the report is grounded, not generic):
- <one line>
- <one line>
- <one line>

Methodology:
- Use WebSearch for current state — don't synthesize from
   training alone, the topic is time-sensitive
- Pull at least 3 sources per major claim
- Cap at 8-12 sources total — past that, returns diminish
- Quote the source on every number / quoted claim
- If a claim is helpful but unsourced, hedge or remove

Output shape:
- 3 findings (one paragraph each)
- 1 recommendation paragraph
- Confidence flag per finding (high / medium / low)
- One sentence: "what would change my recommendation"
- Source list at the bottom

Restate the methodology before researching so I can
confirm or correct.
Audit a finished report
Here's the report: <paste>

Audit it on five axes, scoring each 1-5:
1. Sources — how many cited, how recent, how reputable?
2. Source-to-claim ratio — every number / specific claim
   tied to a source? List any unsourced.
3. Synthesis — do the findings combine sources or just
   restate them?
4. Recommendation — is the reasoning visible? Could you
   re-derive it from the findings?
5. Confidence — does the report flag where it's weak?

For any axis below 4, propose the smallest revision.
Don't redo the report — surgical only.

Apply this — your next report

20-minute exercise. One real question. Pinned methodology. Read the confidence flags carefully.

Your first pinned report

Each step takes 3-5 minutes.
  1. Pick a real question you've been Googling and not finishing. Vendor comparison, regulation check, market scan, competitor landscape — anything where you've opened 12 tabs and made no progress.
  2. Write your situation in three lines; the report can't be useful without it. Time horizon, scope, current state. Don't skip this — generic reports come from generic situations. A sample three-liner follows this list.
  3. Pin the methodology before the agent searches, and confirm the restatement. Use the first prompt in the kit. Most operators skip this; the few who do it consistently get usable reports.
  4. Read the agent's confidence flags carefully; low-confidence findings are leads, not facts. If everything is flagged high-confidence, push back — the agent is overclaiming. Real reports have variation.
  5. Save the brief as a template; the next report on a similar topic reuses 80% of the structure. After ~5 reports you'll have a personal library, and each new one takes half the time of the first.
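
For step 2, a sample three-line situation (details hypothetical):

  Time horizon: decision by end of Q3.
  Current state: two vendors shortlisted, no trials run yet.
  Constraints: under $500/month, must support SSO.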