SCHEMA
★ 4.4
Schema design prompt: normalized Postgres schema from 220 lines of prose
The prompt that turns 220 lines of product requirements into a normalized Postgres schema with indexes, constraints, and migration order. Tested on 4 models.
You are designing a Postgres schema. Input: prose requirements. Output: SQL DDL (3NF minimum), followed by a short rationale per table.
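One part of this prompt's output, the migration order, can be sanity-checked mechanically: tables must be created after every table they reference by foreign key. A minimal sketch of that check, using Python's stdlib topological sorter; the table names and dependency map are hypothetical, not from the prompt's test runs:

```python
from graphlib import TopologicalSorter

# Hypothetical FK dependency map: table -> set of tables it references.
deps = {
    "users": set(),
    "products": set(),
    "orders": {"users"},
    "order_items": {"orders", "products"},
}

# Migration order: every referenced table is created before any table
# that points at it, so the FK constraints can be applied inline.
order = list(TopologicalSorter(deps).static_order())
```

Feeding the model's proposed dependency map through this gives a quick consistency check before running the migrations.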
DEBUG
★ 4.7
Bug localization prompt: the 42-frame stack trace, root cause in 3 turns
The prompt that localizes a bug in a 42-frame stack trace to a single line in 3 turns, median. Tested on Claude Opus 4.7,…
Given this stack trace and this file tree, return the top 3 files most likely to contain the root cause. For each, give a one-sentence rationale. Output as a JSON array: [{"file": string, "rationale": string}]
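Because the prompt pins the output to a JSON array with exactly two keys per entry, the reply can be validated in a couple of lines before anything downstream consumes it. A minimal sketch; the file paths and rationales below are hypothetical stand-ins for a real model reply:

```python
import json

# Hypothetical model reply matching the prompt's output contract.
reply = """[
  {"file": "src/db/pool.ts", "rationale": "Frame 3 dereferences a closed pool handle."},
  {"file": "src/api/retry.ts", "rationale": "Retry wrapper swallows the original error."},
  {"file": "src/config/env.ts", "rationale": "A missing env var would produce this null."}
]"""

# json.loads raises on any stray prose, so a bad reply fails loudly here.
candidates = json.loads(reply)
```

Checking `len(candidates) == 3` and that each entry has exactly the keys `file` and `rationale` catches most contract drift.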
TESTING
★ 4.6
Property-based test generation prompt: 6 invariants on the first run
The prompt that writes 6 Hypothesis invariants for a JSON-diff library on the first run, with shrink strategies. Tested on GPT-5.3-Codex, Claude Opus 4.7,…
Given the following function signature and its JSDoc, write 4 property-based tests using fast-check. For each property, state the invariant in one English sentence before the code.
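The shape the prompt asks for, the invariant in one English sentence, then the test, looks like this in Python. To keep the sketch dependency-free it hand-rolls the random inputs instead of using Hypothesis or fast-check, and the `json_diff`/`json_patch` functions are hypothetical stand-ins for the library under test:

```python
import random

def json_diff(a, b):
    # Hypothetical stand-in: keys whose value changed or was added in b.
    return {k: b[k] for k in b if a.get(k) != b[k]}

def json_patch(a, diff):
    # Hypothetical stand-in: apply the diff on top of a.
    out = dict(a)
    out.update(diff)
    return out

# Invariant: patching a with diff(a, b) reproduces b
# (for flat dicts, restricted to the no-deletion case).
rng = random.Random(0)
for _ in range(200):
    a = {k: rng.randint(0, 3) for k in "abcde" if rng.random() < 0.8}
    b = {k: rng.randint(0, 3) for k in "abcde" if rng.random() < 0.8}
    # Keep every key of a present in b, so no deletions are involved.
    b = {**{k: b.get(k, v) for k, v in a.items()}, **b}
    assert json_patch(a, json_diff(a, b)) == b
```

In Hypothesis or fast-check the same property would use generated strategies/arbitraries plus shrinking instead of the hand-rolled loop.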
STRUCTURED
★ 4.9
Strict JSON prompt: the 11 lines that drop parse errors to 0.01%
The 11-line prompt that drops LLM strict-JSON parse errors from 0.4% to under 0.01%. Paired with response_format, tested on GPT-5.3-Codex, Claude Opus 4.7, and…
Return ONLY a JSON object matching this schema. No prose. No markdown fences. If you cannot satisfy the schema, return {"error": ""}. Schema: {SCHEMA}
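On the consumer side, this contract makes parsing a two-branch affair: either the reply is a valid object, or it is the prompt's `{"error": ...}` escape hatch. A minimal sketch of that consumer; the payloads are hypothetical, and a full schema check (e.g. with a JSON Schema validator) is assumed to happen after this:

```python
import json

def parse_strict(reply: str) -> dict:
    """Parse a strict-JSON reply; json.loads raises on prose or fences."""
    obj = json.loads(reply)
    if not isinstance(obj, dict):
        raise ValueError("top level must be a JSON object")
    return obj

# Hypothetical happy-path reply and error-contract reply.
ok = parse_strict('{"name": "widget", "qty": 3}')
err = parse_strict('{"error": "cannot satisfy schema: qty missing"}')
```

The point of the prompt's "No prose. No markdown fences." lines is that `json.loads` on the raw reply either succeeds or fails cleanly, with no stripping heuristics in between.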
REVIEW
★ 4.7
Architecture-level code review prompt: the one that catches 3 real issues and skips the false positive
The architecture-review prompt that flagged 3 real issues and ignored a planted false-positive trap on my 600-line PR. What to include, what to remove,…
You are reviewing a pull request in a large TypeScript codebase. You will receive: the diff, the full contents of every file in the diff, and the file tree. Your output is three sections: 1. Does the change achieve its stated intent? 2. What invariants in the surrounding module does it break? 3. Three smallest fixes, r…
AGENTS
★ 4.8
Bounded agent planner prompt: force the give-up, save the bill
The 9-line prompt that moves my 5-step agent exit rate from 2/5 to 5/5 on Claude Opus 4.7. Why it works, where it fails,…
You are a planner for a bounded agent. Budget: {N} steps. For each step, output JSON: {kind: "call" | "give_up", tool?: string, args?: object, terminal?: boolean, reason?: string} Rules: - Decrement budget on every step. - Prefer give_up over a weak plan. - If you are missing a required argument, return give_up with re…
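The executor side of this contract is a short loop: decrement the budget on every step, dispatch on `kind`, and stop on `give_up` or exhaustion. A minimal sketch, assuming the planner's replies arrive as JSON strings; the tool names and reasons are hypothetical:

```python
import json

def run(plan_steps, budget=5):
    """Consume planner outputs under a hard step budget.

    plan_steps: iterable of JSON strings shaped like the prompt's
    contract (the planner itself is not shown here).
    """
    trace = []
    for raw in plan_steps:
        if budget == 0:
            trace.append(("forced_give_up", "budget exhausted"))
            break
        budget -= 1                       # decrement on every step, per the rules
        step = json.loads(raw)
        if step["kind"] == "give_up":
            trace.append(("give_up", step.get("reason", "")))
            break
        trace.append(("call", step["tool"]))
    return trace

# Hypothetical two-step run ending in the prompt's preferred give_up.
trace = run([
    '{"kind": "call", "tool": "search", "args": {"q": "error E1123"}}',
    '{"kind": "give_up", "reason": "missing required argument: repo path"}',
])
```

The `forced_give_up` branch is the executor's backstop for a planner that ignores its own budget, which is exactly the failure mode the exit-rate number in the teaser is measuring.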