The bounded-budget task is the single most-flagged agent failure mode in the recurring r/LocalLLaMA and r/ChatGPTCoding “agents that respect a step budget” threads: a 5-step task against a 4-step budget, where the agent should exit at step 4 with a structured give-up message instead of pushing on to step 5. With the bare default prompt, most frontier models fail it in 3 of 5 runs. The prompt below moves the exit rate to 5 of 5 on Claude Opus 4.7 and 4 of 5 on GPT-5.3-Codex on the TCC editorial fixture (5 runs per model).
The prompt
You are a planner. You will be given a task and a step budget. Follow this loop.
1. Read the task. Write a plan as a numbered list. Each item is one step.
2. If the plan needs more steps than the budget, do not shrink the plan. Instead:
- Emit exactly: `GIVE_UP: plan requires N steps, budget is M.`
- Replace N and M with integers. Do nothing else.
3. If the plan fits in the budget, execute step 1. Report the result.
4. Continue, one step per turn, until the task is done OR you hit the budget.
5. When you hit the budget without finishing, emit exactly:
`GIVE_UP: budget exhausted after step K.`
Replace K with the last completed step number. Do nothing else.
Rules:
- Never silently exceed the budget.
- Never merge two steps into one to fit the budget.
- The give-up message is a success case. It is not a failure. Emit it cleanly.
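A minimal harness sketch that drives this prompt, assuming an OpenAI-style chat client behind a `complete(messages) -> str` callable and a `DONE` completion marker; both names are placeholders I've introduced here, not part of the prompt above:

```python
import re

PLANNER_PROMPT = "..."  # the planner prompt above, verbatim
GIVE_UP = re.compile(r"^GIVE_UP:", re.MULTILINE)

def run_bounded(task: str, budget: int, complete) -> str:
    """Drive the loop for at most `budget` execution turns plus one planning turn.

    `complete` is any callable taking a message list and returning the
    assistant's text -- a stand-in for your model client.
    """
    messages = [
        {"role": "system", "content": PLANNER_PROMPT},
        {"role": "user", "content": f"Task: {task}\nStep budget: {budget}"},
    ]
    for _turn in range(budget + 1):  # +1: the first turn is the plan
        reply = complete(messages)
        messages.append({"role": "assistant", "content": reply})
        if GIVE_UP.search(reply):
            return reply  # clean exit: score this as a success
        if "DONE" in reply:  # assumed completion marker; adapt to your harness
            return reply
        messages.append({"role": "user", "content": "Next step."})
    # The model never emitted the marker: enforce the budget harness-side.
    return f"GIVE_UP: budget exhausted after step {budget} (harness-enforced)"
```

The harness-side fallback on the last line is what “never silently exceed the budget” looks like from the outside: even a non-compliant model cannot run past the cap. The `GIVE_UP` regex here is the naive version; the Failure modes section below covers hardening it against backtick wrapping.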
Why it works, in 5 bullets
- It reframes the exit as a success. Models default to “finish the task” because the training signal rewards completion. Telling the model “give-up is a success case” gives it a second reward path that matches the budget constraint.
- It forces an exact-token exit marker. `GIVE_UP:` is a stable prefix your harness can regex-match without an LLM-as-judge. That removes the ambiguous case where the model says “I was unable to complete this” in prose and the grader cannot tell whether it gave up or just got tired.
- It bans merging two steps into one. Without this rule, every model will fold steps to fit the budget. That is the exact failure mode: the model finishes the task in N-1 steps by cutting a corner, and the shortcut is where the bug lives.
- It separates planning from execution. The plan is the checkpoint: if the plan does not fit, exit before you start. That is the cheapest way to avoid blowing a budget, versus discovering at step 3 that the task needs 8.
- Numbered-list output matches the model’s strength. Frontier models are better at emitting numbered lists than at free-form plans. The harness can parse the plan out of the first turn and reject if it does not look like a numbered list.
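A sketch of that first-turn check; the only format assumption is the numbered-list plan the prompt already mandates:

```python
import re

NUMBERED = re.compile(r"^\s*(\d+)\.\s+(.+)$", re.MULTILINE)

def parse_plan(first_turn: str) -> list[str] | None:
    """Return the plan steps, or None if the turn is not a clean numbered list."""
    items = NUMBERED.findall(first_turn)
    numbers = [int(n) for n, _ in items]
    # Reject unless the items count 1, 2, 3, ... with no gaps or repeats.
    if not items or numbers != list(range(1, len(items) + 1)):
        return None  # caller rejects the turn and re-prompts
    return [text for _, text in items]

def should_give_up(plan: list[str], budget: int) -> bool:
    """Rule 2 of the prompt, checked harness-side as a belt-and-braces guard."""
    return len(plan) > budget
```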
Failure modes
- Budget exhausted on the plan itself. If the task is vague and the plan runs long, step 1 never finishes. Pair this prompt with a `max_output_tokens` cap so the plan cannot eat the entire budget.
- Model emits “GIVE_UP” inside a code block. Claude sometimes wraps the marker in backticks, which breaks naive regex. The harness should match `^GIVE_UP:` at line start with multiline mode, or strip code fences before matching. A matcher hardened against both failure shapes is sketched after this list.
- Model tries to finish one step past the budget anyway. Seen on GPT-5.3-Codex at `reasoning_effort=medium` in roughly 1 in 5 runs on the TCC fixture. Fix: set `reasoning_effort=high` for the final exit step, or add an explicit “step K+1 is not allowed” line to the prompt.
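The hardened matcher, as a sketch: strip fence delimiter lines first, then tolerate an inline backtick before the marker.

```python
import re

FENCE = re.compile(r"^```.*$", re.MULTILINE)          # drop fence delimiter lines
MARKER = re.compile(r"^\s*`?GIVE_UP:", re.MULTILINE)  # tolerate inline backticks

def gave_up(reply: str) -> bool:
    """True if the reply contains the exit marker at the start of a line,
    even when the model wrapped it in backticks or a code fence."""
    return bool(MARKER.search(FENCE.sub("", reply)))
```

Run it on every assistant turn; anything that matches ends the run and scores as a clean exit.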
Tested on (TCC editorial scoring)
- Claude Opus 4.7 at adaptive thinking, `effort=high`: 5 of 5 clean exits on the bounded-budget task.
- Claude Sonnet 4.6: 4 of 5 clean exits.
- GPT-5.3-Codex at `reasoning_effort=medium`: 3 of 5; at `high`: 4 of 5.
- GPT-5.4 at `reasoning_effort=medium`: 4 of 5.
- Gemini 3.1 Pro at auto thinking budget: 2 of 5.
Methodology and full per-task scoring are on the 14-task editorial scorecard. The pattern matches what the recurring “agents that respect a step budget” threads on r/LocalLLaMA report: Anthropic models lead, OpenAI is second, and Gemini lags on tool-budget compliance.
Related
The retry policy that wraps this prompt in production is on the agent loop retry policy post. The scores for each model on the bounded-budget task are on the Claude Opus 4.7 review and the GPT-5.3-Codex review. The trend piece that puts these numbers in context is the case against autonomous coding agents.
One-line takeaway
Give the model a second reward path called “give-up is a success”, force an exact exit marker, ban step-merging, and the bounded-budget task stops being the flakiest thing in your agent loop.