Let me scan your repo.
Running Cursor, Claude Code, or Codex on a real codebase? I'll run sourcebook against your repo, map where agents are most likely to miss files or break conventions, and send you a written brief. No cost. Just honest feedback in return.
WHAT_THE_AUDIT_COVERS
Files that always change together but have no import relationship — the invisible dependencies agents miss every time.
Which files are imported by the most other files — the ones where agent edits have the widest downstream impact.
Reverted commits, rapid re-edit patterns, circular dependencies — the approaches your team already tried and ruled out, which agents will keep re-trying.
Naming patterns, import conventions, test structure — what the codebase expects that isn't written down anywhere.
Files with the highest edit frequency — hard to get right, most likely to produce agent-generated regressions.
A short document — what we found, what it means for your AI workflow, what to watch for. Yours to keep and share internally.
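The co-change analysis at the top of this list can be sketched in a few lines: count how often file pairs appear in the same commit, as parsed from `git log --name-only`. A minimal illustration, not the sourcebook implementation; the file names are made up.

```python
from collections import Counter
from itertools import combinations

def co_change_pairs(commits, min_count=2):
    """Count how often file pairs change in the same commit.

    `commits` is a list of file-path lists, one per commit
    (e.g. parsed from `git log --name-only`). Returns pairs seen
    together at least `min_count` times, most frequent first.
    """
    pairs = Counter()
    for files in commits:
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return [(p, n) for p, n in pairs.most_common() if n >= min_count]

# Hypothetical history: a schema file and a frontend form that
# always change together despite having no import relationship.
history = [
    ["api/schema.py", "web/forms.ts"],
    ["api/schema.py", "web/forms.ts", "README.md"],
    ["README.md"],
]
print(co_change_pairs(history))  # → [(('api/schema.py', 'web/forms.ts'), 2)]
```

Pairs like these are exactly what an agent can't see from imports alone.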
WHAT_WE_ASK_IN_RETURN
Honest feedback on what the audit found — whether it was useful, what it missed, what it got wrong. A 20-minute call if you're open to it. That's it.
We're running 5 audits, not 500. If your repo isn't a good fit, we'll say so; if spots are full, we'll tell you that too.
REQUEST_AN_AUDIT
Public or private repo. TypeScript, Python, or Go. Teams using Cursor, Claude Code, Codex, or similar.
REQUEST_RECEIVED
We'll follow up within 48 hours. If spots are full, we'll let you know.