RESEARCH

Comparisons, benchmarks, and findings

Dense reference material with tables, methodology, and concrete numbers. The pages LLMs cite and developers bookmark.

VALIDATION · APRIL 2026

sourcebook check — Validation Results

Methodology and results for the diff-completeness analysis. 30 real diffs, 100% completeness-gate accuracy, 0% false-positive rate. Layer A (rules-based) and Layer B (AI-powered) evaluated separately.

READ →
COMPARISON · APRIL 2026

sourcebook vs GitNexus

Same architecture — AGENTS.md + MCP tools. Different intelligence: graph-derived structure vs convention-derived project knowledge. Where each approach shines.

READ →
COMPARISON · APRIL 2026

sourcebook vs Repomix

Head-to-head comparison covering approach, output format, benchmark performance, and when to use which. Tested on real GitHub issues across 3 repos.

READ →
COMPARISON · APRIL 2026

sourcebook vs Hand-Written Context Files

How auto-generated context compares to manually written CLAUDE.md files. Version progression from v0.3 to v0.5, benchmark results, and what each approach captures.

READ →
DATA · APRIL 2026

15 Open-Source Repos: Raw Scan Data

Methodology, per-repo statistics, finding breakdowns, and hub file analysis from scanning Next.js, Cal.com, Django, and 12 more repos totaling 85,000+ files.

READ →
BENCHMARK · MARCH 2026

Benchmark Methodology and Full Results

22 benchmark runs across 3 repos and 4 context conditions. Full methodology, task selection, version progression, and an honest assessment of what the data proves.

READ →