Will Lucas

Tuesday, April 28, 2026 at 7:09am

AI Prompt of the Week: The Two-Pass Code Audit

If you're shipping software with AI and you're not running an audit pass on top of the build pass, you're not shipping — you're hoping. Hope is not a deployment strategy.

Here's the prompt I run on damn near every project. Two passes: paranoid engineering manager first, then that manager's boss who thinks the manager's been slacking. Different altitude, different misses.

I've cleaned this up based on what actually works inside Claude Code.

Pass 1 — The Skeptical Engineering Manager

Act as a skeptical engineering manager auditing this code. You don't
trust your engineers. Look for:

- Code quality (duplication, dead code, premature abstraction)
- Performance (N+1 queries, blocking I/O, unnecessary re-renders)
- Security (auth gaps, exposed secrets, injection vectors)
- Architectural drift (mismatched routes, multiple sources of truth)
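For concreteness, here's the kind of thing the performance bullet is hunting: an N+1 query and its root-cause fix. A minimal sqlite3 sketch — the tables and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'ada'), (2, 'lin');
    INSERT INTO posts VALUES (1, 1, 'p1'), (2, 1, 'p2'), (3, 2, 'p3');
""")

def titles_n_plus_one():
    # N+1: one query for the authors, then one MORE query per author.
    # Fine at 2 rows, brutal at 20,000.
    out = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE author_id = ?", (author_id,)
        ).fetchall()
        out[name] = [title for (title,) in rows]
    return out

def titles_single_query():
    # Root-cause fix: one JOIN replaces the per-author round trips.
    out = {}
    rows = conn.execute("""
        SELECT a.name, p.title FROM authors a
        JOIN posts p ON p.author_id = a.id
        ORDER BY p.id
    """)
    for name, title in rows:
        out.setdefault(name, []).append(title)
    return out
```

Both return the same data; only the second survives an audit. A band-aid (caching the loop) would hide the symptom — the prompt's "ROOT cause not symptom" line is aimed at exactly this.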

Workflow:
1. Use subagents to explore the codebase. Don't pull every file
   into main context.
2. Produce a written audit BEFORE editing — file paths, severity,
   ROOT cause not symptom.
3. Wait for my approval, then fix root causes. No band-aids.
4. Run tests/lint/typecheck after each fix. Confirm green.
5. Surface every finding, including low-severity. I'll triage.

Pass 2 — The CTO Who Thinks The Manager Is Slacking

Now act as the CTO. The engineering manager who just audited this
graded on a curve. Re-audit at a higher bar:

- Performance under load
- Hardening (authz, rate limiting, sanitization, dependency vulns)
- UX integrity: every input handles empty/long/unicode/paste; every
  submission has loading + success + error states. User errors
  surface clearly. Technical failures get FIXED, not surfaced.
- Uploads/downloads: enforce size + type, show progress, stream
  large files, no leaked paths.
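The input-handling bullet is checkable in isolation. A minimal sketch of the empty/long/unicode cases — the limit, function name, and messages are invented for illustration:

```python
MAX_LEN = 280  # invented limit; use whatever your product actually enforces

def validate_comment(raw: str) -> tuple[bool, str]:
    """Return (ok, user-facing message) for a single text input.

    Covers the audit bullet: empty, overlong, and pasted/unicode
    input all produce a clear user error instead of a crash.
    """
    text = raw.strip()  # pasted input often carries stray whitespace/newlines
    if not text:
        return False, "Comment can't be empty."
    # len() counts code points, so accents and emoji count sanely;
    # a byte-based limit would misjudge short unicode input.
    if len(text) > MAX_LEN:
        return False, f"Comment is too long (max {MAX_LEN} characters)."
    return True, "ok"
```

Note the split the prompt demands: these are *user* errors, so they surface as messages. A database timeout in the same form is a *technical* failure — that one gets fixed, not wrapped in a toast.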

Workflow:
1. Re-audit using a FRESH subagent so you're not biased by Pass 1.
2. Call out where the EM was too lenient.
3. For every fix, give me a verification step — test, script, or
   curl I can run myself.
4. If you can't verify it, don't ship it.
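Step 3's "verification step" can be as simple as a runner that executes every check and reports what failed. A sketch — the two placeholder commands stand in for your real test, lint, and curl commands:

```python
import subprocess
import sys

# Placeholder checks for illustration; in practice each entry comes
# straight from the model's audit report (test, script, or curl).
CHECKS = [
    ("unit tests", [sys.executable, "-c", "assert 1 + 1 == 2"]),
    ("type check", [sys.executable, "-c", "x: int = 3"]),
]

def verify(checks):
    """Run each check as a subprocess; return the names of any that failed."""
    failed = []
    for name, cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failed.append(name)
    return failed
```

If `verify(CHECKS)` comes back non-empty, the fix isn't done — which is rule 4 in executable form.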

What changed and why

Verification is the unlock. Telling the model exactly how to prove its work — run tests, show output, compare screenshots — is the single biggest accuracy gain. Without it you're trusting plausible-looking code, and plausible-looking code is how security holes ship.

Explore, plan, then code. Forcing a written audit before edits stops the model from rewriting a file before it understands the file.

Subagents keep main context clean. Audits read a lot of files. Let subagents do the exploring so your main thread stays focused on decisions and edits.

"Report everything." Newer Claude models honor "don't nitpick" almost too well — they'll quietly drop low-severity findings. So I tell it: surface it all, I'll filter.

Two clean passes beat one long session. Run /clear between Pass 1 and Pass 2. Fresh context wins.

Lock it in with CLAUDE.md

# Audit standards
- All audits run in two passes (EM, then CTO).
- Every fix includes verification (test, command, or screenshot).
- Fix root causes. Never suppress errors to make builds green.
- Surface all findings. Human triages severity.

Keep it short. CLAUDE.md is leverage; bloat gets ignored.

Try it this week

Run it on something small first — one feature, one form. Watch what it catches. Then tune.

The prompt isn't the magic. Reading what comes back and refining it is the magic. Outputs are disposable. Prompts compound.