
Compliance + audit trail for AI work
The audit-trail format regulators and lenders will thank you for. Anchored to the MHG SBA 7(a) loan process — a real in-flight financing.
Why AI work needs a paper trail
Mile High Golf is in the middle of an SBA 7(a) loan process for the Denver NC flagship site at 7521 Eastern Medical Dr. The loan has not closed. MHG is preparing the application package — financial projections, site analysis, business plan, market research — with significant AI assistance from the SUMMIT agent. Curtis Woodie and Olivia Merlock are doing the venue ops work. Sarah Cooley at Ascent RE is advising on the site.
When the underwriter reviews this package, they will ask: How did you arrive at these numbers? Who reviewed this analysis? What are the assumptions behind this projection? If the answers exist only in an agent session that was never saved, or in a session transcript that can't be cleanly presented to a lender, the loan review stalls.
AI assistance creates a compliance gap that human-only work doesn’t have. A human financial analyst who builds a projection owns that projection — their name is on it, their methodology is documentable, their review of the inputs is assumed. An agent that builds a projection is not an analyst. The operator who submits that projection to a lender needs to be the human who can answer for it. The audit trail is what makes that possible.
The same discipline applies across the TruPath portfolio: QC provisional patent application (patent-sensitive, attorney review required), Crave Athletics NDA (already signed, CIPHER confirms status), investor materials (CEO review required before any distribution). Anywhere AI-assisted work goes to an external party with scrutiny rights, the audit trail is the governance mechanism.
What regulators and lenders actually look at
SBA 7(a) underwriters are not looking for AI disclosures (the SBA doesn’t currently require them). They are looking for the same things they’ve always looked at: reproducibility, accuracy, and the human who stands behind the numbers.
| What they check | What they’re actually asking | What you need |
|---|---|---|
| Projection reproducibility | Can you show us how you got from inputs to these numbers? | Named assumption set file + reproduction test |
| Assumption defensibility | Are the inputs to your projection reasonable for this market? | Comparable venue data, cited sources, human domain review (Curtis/Olivia on ops assumptions) |
| Human accountability | Who reviewed and stands behind this document? | Review record with name, date, attestation |
| Source traceability | Where did the market data come from? | Cited sources in the document; not “agent research” |
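The "named assumption set file + reproduction test" in the first row can be made concrete. Here is a minimal sketch in Python, assuming the assumptions live in a versioned JSON file and the projection is a simple revenue model; every filename, field name, and figure below is hypothetical, not taken from the actual MHG package:

```python
import json

# Hypothetical assumption set for a venue revenue projection.
# In practice this file is versioned alongside the document it backs.
ASSUMPTIONS = {
    "monthly_visits": 2400,   # hypothetical: comparable-venue traffic
    "avg_spend_usd": 38.50,   # hypothetical: reviewed by ops on domain grounds
    "months": 12,
}

def project_annual_revenue(a: dict) -> float:
    """Recompute the projection from the named assumption set."""
    return a["monthly_visits"] * a["avg_spend_usd"] * a["months"]

def reproduction_test(assumptions: dict, submitted_figure: float) -> bool:
    """The underwriter's question as a check: do the named inputs
    actually produce the number in the submitted document?"""
    return abs(project_annual_revenue(assumptions) - submitted_figure) < 0.01

# Persist the assumption set so the test can be rerun later
# by a reviewer who never saw the original agent session.
with open("assumptions_v1.json", "w") as f:
    json.dump(ASSUMPTIONS, f, indent=2)

with open("assumptions_v1.json") as f:
    loaded = json.load(f)

print(reproduction_test(loaded, 1_108_800.00))  # 2400 * 38.50 * 12 -> True
```

The point is not the model, which here is deliberately trivial; it is that the inputs have a name, live in a file, and the path from inputs to the submitted number can be rerun by someone who never saw the agent session.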
The audit trail doesn’t need to disclose AI assistance. It needs to demonstrate human accountability. The “AI-assisted, human-reviewed” standard satisfies all four checks: the human reviewer can reproduce the projection from its inputs, defend the assumptions, and has their name on the attestation.
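The review record behind the "human accountability" row can be as simple as a structured attestation stored next to the document it covers. A sketch, with hypothetical filenames and a hypothetical reviewer (no names here come from the actual loan package):

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ReviewRecord:
    """One human attestation per AI-assisted document."""
    document: str
    reviewer: str          # the named human who stands behind the numbers
    review_date: str
    attestation: str
    assumption_file: str   # ties the review to the reproducible inputs

# All values hypothetical, for illustration only.
record = ReviewRecord(
    document="financial_projections_v3.xlsx",
    reviewer="Jane Doe",
    review_date=date(2025, 1, 15).isoformat(),
    attestation=("I reviewed the inputs, reproduced the projection from "
                 "the named assumption set, and stand behind the figures."),
    assumption_file="assumptions_v1.json",
)

print(json.dumps(asdict(record), indent=2))
```

Note that the record points at the assumption file rather than at a session transcript: the artifact a lender sees is the human's attestation plus reproducible inputs, which satisfies the four checks without requiring an AI disclosure.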
The rest of Expert · Lesson 16 is for subscribers.