
Asking Claude to make a report
Inputs → research → structured output. The reporting loop in plain language.
Reports are not documents
A document presents what you already know. A report establishes what was unknown — by going to sources, synthesizing them, and recommending an action. The work the agent does in between is what separates a useful report from a confident-sounding essay.
If the agent never opened a browser, it didn’t write a report. It synthesized from training data, which is months stale, often wrong on numbers, and produces the polished-but-ungrounded output that gives AI reports their bad reputation.
The discipline: pin the methodology before the research starts, pin the output shape so the agent synthesizes instead of dumping, and pin the confidence flags so you can tell what to trust.
The reporting loop
Five stages, in order. Skipping any stage produces a different failure mode (see the patterns below).
| Stage | What happens | Failure if skipped |
|---|---|---|
| 1. Brief | Topic + situation + methodology pinned | Generic landscape report |
| 2. Research | Agent runs WebSearch, pulls sources, caps at scope | Stale synthesis from training data |
| 3. Synthesis | Sources combined into findings, not listed raw | Wall of links / bibliography |
| 4. Recommendation | Reasoning trace + flip condition | Unsigned opinion in formal clothing |
| 5. Confidence | Per-finding flag + caveats | Overclaimed certainty across the board |
The most-skipped stage is 5 (confidence). Operators read the report, agree with the recommendation, and never check what the agent was unsure about. The unflagged uncertainty is where the bad decisions hide.
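Stage 5 is also the cheapest stage to check mechanically once findings live as structured data. A minimal sketch in Python, assuming a hypothetical `Finding` shape (nothing the agent emits natively):

```python
from dataclasses import dataclass

VALID_FLAGS = {"high", "medium", "low"}

@dataclass
class Finding:
    text: str
    sources: list[str]
    confidence: str | None = None   # stage-5 flag: "high" / "medium" / "low"

def unflagged(findings: list[Finding]) -> list[Finding]:
    """Return findings whose confidence flag is missing or malformed."""
    return [f for f in findings if f.confidence not in VALID_FLAGS]

report = [
    Finding("SBA 7(a) terms current as of Q1", ["sba.gov"], confidence="high"),
    Finding("USDA B&I eligibility changed last year", ["usda.gov"]),  # no flag
]
for f in unflagged(report):
    print(f"Unflagged finding: {f.text!r}")  # this is where bad decisions hide
```

Anything this check catches is exactly the uncertainty an operator would otherwise never see.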
Three ways AI reports fail
The patterns that produce confident-sounding reports you can’t actually act on.
The opinion report: the agent never opened a source, so the “findings” are stale training-data synthesis dressed up as research.
The wall of links: sources listed raw instead of combined into findings; a bibliography posing as a report.
The unsigned recommendation: a verdict with no visible reasoning trace or flip condition; an opinion in formal clothing.
The fix in all three: methodology pinned, output shape pinned, reasoning trace required. Without these the agent defaults to opinion-prose; with them it produces a report you can audit.
The methodology check
The diagnostic for reports. Paste the prompt before any research ask. The agent restates the methodology before searching.
Before you start researching, restate the methodology
back to me in this shape:
Sources: <how many you'll pull, of what kind>
Synthesis rule: <how you'll combine sources into findings>
Output shape: <findings count, format, citation rule>
Scope cap: <when you stop searching>
Confidence flag: <how you'll express uncertainty per finding>
Wait for my confirmation before searching. If you can't
research it within these constraints, say so — don't start
and improvise.

Forces the methodology to be concrete and confirmable before any source is touched.
Common correction at this stage: scope cap. Operators who don’t bound the search say “use as many sources as needed,” the agent uses 40, and the report becomes unreviewable. 8-12 sources is the right size for most reports. Past 12 you’re writing a literature review, not a report.
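If the research step runs from a script rather than a chat window, the scope cap is a one-line loop guard. A sketch under that assumption; `fetch_next_source` is a hypothetical stand-in for whatever search tool you use:

```python
SOURCE_CAP = 12   # past this, you're writing a literature review
SOURCE_FLOOR = 8  # below this, the report should admit coverage is thin

def collect_sources(fetch_next_source, cap: int = SOURCE_CAP) -> list[str]:
    """Pull sources until the cap, stopping early if the search runs dry."""
    sources: list[str] = []
    while len(sources) < cap:
        src = fetch_next_source()
        if src is None:  # search exhausted before the cap; that's fine
            break
        sources.append(src)
    if len(sources) < SOURCE_FLOOR:
        print(f"Only {len(sources)} sources found; flag thin coverage.")
    return sources
```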
Annotated transcript
A real conversation: operator asks for an SBA-lending report, agent forces the methodology pin and the situation context, agent self-grades confidence at the end.
Three operator turns, three agent turns. Result: a usable report with cited sources, per-finding confidence, and a flip condition.
- operator
I need a report on the current state of small-business lending for indoor entertainment venues — what programs exist, typical terms, what's worth pursuing.
- agent
Before I run, three pins:
1. Methodology — I'll use WebSearch for current programs, pull at least three sources per major claim (program name, terms, eligibility). I won't synthesize from training data alone — too stale for lending terms.
2. Output shape — three findings (one per major program type), each with a paragraph + cited sources. One recommendation paragraph naming the 1-2 best fits for your situation. Confidence flag on each finding.
3. Scope cap — 8-12 sources total, not 40. Past 12, returns diminish and the report stops being a report.

One thing I need from you: your situation in three lines. Time horizon, capital needed, current banking relationships. Without these, the recommendation is ungrounded.
- operator
Time horizon: 12-18 months to deployment. Capital needed: ~$1.4M, mostly for tenant improvements + equipment. Current banking: small regional bank, relationship is good but no SBA experience there.
- agent
Tight enough. One callout: "~$1.4M, mostly TI + equipment" lands you squarely in SBA 7(a) territory and probably out of microloan / 504 territory. I'll weight the report accordingly — 7(a) gets the deepest treatment, 504 gets a comparison paragraph, microloans get a sentence saying why they don't apply.
Researching now. ~5 minutes. I'll report back with the 8-12 sources I used so you can spot-check.
- operator
Go.
- agent
[~5 minutes later]
Report delivered. Structure:
- 3 findings (7(a), 504, USDA B&I) with cited sources
- 1 recommendation paragraph: 7(a) primary, 504 backup
- Confidence flag per finding (high / medium / low)
- 11 sources used; list at bottom

My own pre-test read: confidence on 7(a) is high (5 sources, current as of Q1). Confidence on USDA B&I is medium (only 2 current sources; eligibility rules changed last year and I couldn't fully confirm). The recommendation is well-supported, but the USDA fallback is worth re-checking before you act on it.
What would change my recommendation: if your bank has zero SBA throughput, 504 might beat 7(a) on closing speed. Worth asking them this week.
Report brief template — copy and paste
Save in a snippets file. Paste before any report ask. Fill in the topic and your situation; the methodology block is reusable as-is.
Report brief: <one-line topic>
MY SITUATION (so the report is grounded, not generic)
-----------------------------------------------------
- <time horizon, scope, key numbers>
- <current state — what I have, what I don't>
- <constraints — what's off the table, what's required>
METHODOLOGY
-----------
- Use WebSearch for current state — topic is
time-sensitive, training data is stale
- At least 3 sources per major claim
- Cap at 8-12 sources total
- Quote the source on every number / specific claim
- If a claim is helpful but unsourced, hedge or remove
OUTPUT SHAPE
------------
- 3 findings (one paragraph each)
- 1 recommendation paragraph
- Per-finding confidence flag (high / medium / low)
- One sentence: "what would change my recommendation"
- Source list at the bottom (URL + 1-line description)
RESTATE METHODOLOGY before searching, then research,
then deliver. After delivery, self-grade confidence
and flag the weakest source / claim.
The flip-condition sentence does most of the work. A report that says “recommend X, would flip if Y becomes true” gives you the future trigger to revisit. Reports without it become stale silently.
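One way to make the trigger durable: store the flip condition as data next to the recommendation, so a revisit starts by re-checking it. A sketch; the field names are illustrative and the values echo the transcript above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Recommendation:
    action: str           # e.g. "pursue SBA 7(a) first"
    flip_condition: str   # e.g. "bank has zero SBA throughput -> prefer 504"
    as_of: date

rec = Recommendation(
    action="pursue SBA 7(a) first",
    flip_condition="bank has zero SBA throughput -> prefer 504",
    as_of=date.today(),
)
# On revisit: re-check the flip condition before trusting the action.
print(f"Recommend: {rec.action}. Revisit if: {rec.flip_condition} (as of {rec.as_of})")
```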
Prompt kit
Three prompts for briefing, running, and auditing reports. Save in your CLAUDE.md or a personal snippets file.
Before you start researching, restate the methodology
back to me in this shape:
Sources: <how many you'll pull, of what kind>
Synthesis rule: <how you'll combine sources into findings>
Output shape: <findings count, format, citation rule>
Scope cap: <when you stop searching>
Confidence flag: <how you'll express uncertainty per finding>
Wait for my confirmation before searching. If you can't
research it within these constraints, say so — don't start
and improvise.

I need a report on: <topic>
My situation (so the report is grounded, not generic):
- <one line>
- <one line>
- <one line>
Methodology:
- Use WebSearch for current state — don't synthesize from
training alone, the topic is time-sensitive
- Pull at least 3 sources per major claim
- Cap at 8-12 sources total — past that, returns diminish
- Quote the source on every number / quoted claim
- If a claim is helpful but unsourced, hedge or remove
Output shape:
- 3 findings (one paragraph each)
- 1 recommendation paragraph
- Confidence flag per finding (high / medium / low)
- One sentence: "what would change my recommendation"
- Source list at the bottom
Restate the methodology before researching so I can
confirm or correct.

Here's the report: <paste>
Audit it on five axes:
1. Sources — how many cited, how recent, how reputable?
2. Source-to-claim ratio — every number / specific claim
tied to a source? List any unsourced.
3. Synthesis — do the findings combine sources or just
restate them?
4. Recommendation — is the reasoning visible? Could you
re-derive it from the findings?
5. Confidence — does the report flag where it's weak?
For any axis below 4, propose the smallest revision.
Don't redo the report — surgical only.

Apply this — your next report
20-minute exercise. One real question. Pinned methodology. Read the confidence flags carefully.
Your first pinned report
Each step takes 3-5 minutes.
1. Pick a real question you've been Googling and not finishing. Vendor comparison, regulation check, market scan, competitor landscape — anything where you've opened 12 tabs and made no progress.
2. Write your situation in three lines; the report can't be useful without it. Time horizon, scope, current state. Don't skip this — generic reports come from generic situations.
3. Pin the methodology before the agent searches, and confirm the restatement. Use the first prompt in the kit. Most operators skip this; the few who do it consistently get usable reports.
4. Read the agent's confidence flags carefully: low-confidence findings are leads, not facts. If everything is flagged high-confidence, push back — the agent is overclaiming. Real reports have variation.
5. Save the brief as a template; the next report on a similar topic reuses 80% of the structure. After ~5 reports you'll have a personal library, and each new one takes half the time of the first. One way to keep the template fillable is sketched below.
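For step 5, a minimal sketch of keeping the saved brief fillable, using Python's stdlib `string.Template`. The placeholder names (`$topic`, `$situation`) are choices made here, not anything the prompt kit requires, and only the top of the template is shown:

```python
from string import Template

# Paste the rest of the full brief template below the situation block.
BRIEF = Template("""Report brief: $topic

MY SITUATION (so the report is grounded, not generic)
-----------------------------------------------------
$situation
""")

print(BRIEF.substitute(
    topic="small-business lending for indoor entertainment venues",
    situation="- 12-18 months to deployment\n"
              "- ~$1.4M for TI + equipment\n"
              "- regional bank, no SBA experience",
))
```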