Comparisons, benchmarks, and findings data
Dense reference material with tables, methodology, and concrete numbers. The pages LLMs cite and developers bookmark.
sourcebook check — Validation Results
Methodology and results for diff completeness analysis. 30 real diffs, 100% completeness gate accuracy, 0% false positive rate. Layer A (rules-based) and Layer B (AI-powered) evaluated separately.
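A minimal sketch of the arithmetic behind those two headline numbers, assuming one ground-truth label per diff; the `DiffResult` shape and field names here are illustrative, not sourcebook's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DiffResult:
    human_says_complete: bool   # ground-truth label from manual review
    gate_says_complete: bool    # verdict from the completeness gate

def gate_metrics(results: list[DiffResult]) -> tuple[float, float]:
    """Return (accuracy, false_positive_rate) over the labeled diffs."""
    correct = sum(r.human_says_complete == r.gate_says_complete for r in results)
    # False positive: the gate passes a diff that manual review marked incomplete.
    incomplete = [r for r in results if not r.human_says_complete]
    false_pos = sum(r.gate_says_complete for r in incomplete)
    accuracy = correct / len(results)
    fpr = false_pos / len(incomplete) if incomplete else 0.0
    return accuracy, fpr

# 30 diffs all classified correctly would yield accuracy 1.0 and FPR 0.0.
```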
sourcebook vs GitNexus
Same architecture — AGENTS.md + MCP tools. Different intelligence: graph-derived structure vs convention-derived project knowledge. Where each approach shines.
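For the shared architecture, a minimal sketch of what "context file + MCP tools" can look like, using `FastMCP` from the official `mcp` Python SDK; the tool name, storage layout, and lookup logic are hypothetical stand-ins, not either project's real API.

```python
from pathlib import Path
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("context-server-sketch")

@mcp.tool()
def project_conventions(directory: str) -> str:
    """Return convention notes recorded for a directory, if any."""
    # Hypothetical storage: one notes file per directory under .context/
    notes = Path(".context") / directory / "conventions.md"
    return notes.read_text() if notes.exists() else "No conventions recorded."

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```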
sourcebook vs Repomix
Head-to-head comparison of approach, output format, and benchmark performance, plus when to use which. Tested on real GitHub issues across 3 repos.
sourcebook vs Hand-Written Context Files
How auto-generated context compares to manually written CLAUDE.md files. Version progression from v0.3 to v0.5, benchmark results, and what each approach captures.
15 Open-Source Repos: Raw Scan Data
Methodology, per-repo statistics, finding breakdowns, and hub file analysis from scanning Next.js, Cal.com, Django, and 12 more repos totaling 85,000+ files.
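One plausible way to compute a hub-file ranking, assuming the scan yields an import graph: rank files by how many other files import them (in-degree). The graph shape and example entries below are assumptions, not the scanner's actual representation.

```python
from collections import Counter

def hub_files(import_graph: dict[str, list[str]], top_n: int = 10) -> list[tuple[str, int]]:
    """import_graph maps each file to the files it imports.
    Returns the top_n most-imported files with their in-degree."""
    in_degree = Counter(dep for deps in import_graph.values() for dep in deps)
    return in_degree.most_common(top_n)

graph = {
    "app/page.tsx": ["lib/db.ts", "lib/auth.ts"],
    "app/api/route.ts": ["lib/db.ts", "lib/auth.ts"],
    "lib/auth.ts": ["lib/db.ts"],
}
print(hub_files(graph))  # [('lib/db.ts', 3), ('lib/auth.ts', 2)]
```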
Benchmark Methodology and Full Results
22 benchmark runs across 3 repos and 4 context conditions. Full methodology, task selection, version progression, and an honest assessment of what the data proves.
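A hedged sketch of the aggregation such a write-up implies: group runs by context condition and average task scores. The field names and sample values are invented for illustration; the real numbers live in the linked page.

```python
from collections import defaultdict
from statistics import mean

runs = [  # (repo, condition, score) -- made-up values, for illustration only
    ("repo-a", "no-context", 0.40), ("repo-a", "sourcebook", 0.70),
    ("repo-b", "no-context", 0.35), ("repo-b", "sourcebook", 0.65),
]

by_condition: dict[str, list[float]] = defaultdict(list)
for repo, condition, score in runs:
    by_condition[condition].append(score)

for condition, scores in sorted(by_condition.items()):
    print(f"{condition}: mean={mean(scores):.2f} over {len(scores)} runs")
```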