When and why React useEffect needs a cleanup function, the 4 patterns that cover 95% of cases, plus what changed in React 18 Strict Mode (effects run twice, in development only).
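A minimal sketch of the cleanup pattern that teaser refers to, assuming the classic subscribe/unsubscribe case; the `useWindowWidth` hook and the resize listener are illustrative, not taken from the post:

```tsx
import { useEffect, useState } from "react";

// Illustrative hook: subscribe on mount, unsubscribe in cleanup.
function useWindowWidth(): number {
  const [width, setWidth] = useState(window.innerWidth);

  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener("resize", onResize);

    // The cleanup function: React runs it before every re-run of the effect
    // and on unmount. In React 18 Strict Mode (development only), React
    // mounts, cleans up, and remounts once, so a correct cleanup makes the
    // extra run harmless instead of leaking a second listener.
    return () => window.removeEventListener("resize", onResize);
  }, []);

  return width;
}
```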
Vendor 1M-context benchmark numbers keep landing 30+ points above what I measure on my production RAG task. The three reasons the benchmarks lie, and what I trust instead.
Cursor 3 shipped parallel Composer 2 agents and a background agent on April 2, 2026. Two tests moved in my pipeline, four did not. The 90-second summary with numbers.
The RAG parameter defaults that moved my top-1 accuracy from 74% to 91% in 2026. Chunk size, overlap, rerank, hybrid BM25, and the 2 flags people forget.
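For orientation, a hypothetical TypeScript shape for the knobs that teaser names; every field name and value below is a placeholder for illustration, not the post's recommended defaults:

```ts
// Hypothetical config shape for the retrieval knobs the post covers.
interface RetrievalConfig {
  chunkSizeTokens: number;    // size of each indexed chunk
  chunkOverlapTokens: number; // tokens shared between adjacent chunks
  hybridBm25Weight: number;   // 0 = dense-only, 1 = BM25-only
  rerankTopK: number;         // candidates passed to the reranker
}

const config: RetrievalConfig = {
  chunkSizeTokens: 512,
  chunkOverlapTokens: 64,
  hybridBm25Weight: 0.3,
  rerankTopK: 50,
};

// Hybrid retrieval as commonly implemented: a weighted blend of the
// normalized BM25 score and the dense cosine similarity.
function hybridScore(bm25: number, dense: number, w: number): number {
  return w * bm25 + (1 - w) * dense;
}
```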
The 18 Cursor 3 keyboard shortcuts and 6 settings that changed since 2.x. Composer, parallel agents, tab-complete, and the bindings they moved.
The 7 settings that move Claude Opus 4.7 tool-call reliability from 94% to 99.2%. Adaptive thinking, tool_choice, disable_parallel_tool_use, stop_sequences, and the sampling params you must now omit.
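Of the parameters that teaser lists, tool_choice, disable_parallel_tool_use, and stop_sequences are real Messages API fields today; the model id and thinking budget below are placeholders, and "adaptive thinking" is presumably the post's term for the thinking config sketched here under that assumption:

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const response = await client.messages.create({
  model: "claude-opus-4-7", // placeholder id for the model the post benchmarks
  max_tokens: 8192,
  // Force sequential tool calls: at most one tool_use block per turn.
  tool_choice: { type: "auto", disable_parallel_tool_use: true },
  stop_sequences: ["</result>"], // illustrative sentinel, not from the post
  // With thinking enabled, omit temperature/top_p entirely; the API does
  // not accept custom sampling params in this mode.
  thinking: { type: "enabled", budget_tokens: 4096 },
  tools: [
    {
      name: "get_weather",
      description: "Get the current weather for a city.",
      input_schema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  ],
  messages: [{ role: "user", content: "What's the weather in Oslo?" }],
});
```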
GPT-5.4 API parameters, defaults, and the 3 that break your pipeline if you do not set them. Strict JSON, reasoning_effort, tool_choice, and the cost line to watch.
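A sketch of the strict-JSON and reasoning_effort parameters that teaser names, using the OpenAI TypeScript SDK; the model id comes from the post and the schema is a made-up example. tool_choice works the same way here when you also pass tools (e.g. `tool_choice: "required"` forces a call):

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: "gpt-5.4", // the post's model id; substitute whatever you run
  reasoning_effort: "medium", // "low" | "medium" | "high" on reasoning models
  // Strict structured output: the model must emit JSON matching this schema.
  // strict mode requires additionalProperties: false and all keys required.
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "extraction",
      strict: true,
      schema: {
        type: "object",
        properties: { title: { type: "string" }, year: { type: "integer" } },
        required: ["title", "year"],
        additionalProperties: false,
      },
    },
  },
  messages: [{ role: "user", content: "Extract title and year: Dune (1965)." }],
});

const parsed = JSON.parse(completion.choices[0].message.content ?? "{}");
```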
Gemini 3.1 Pro scored 7.8 on refactoring and 7.9 on structured output at $0.21 per task. The domains where cheap wins and where you need to route traffic elsewhere.
Windsurf 2.0 with Cascade 2 scored 7.9 on refactoring and 8.1 on test-gen on a 14-task suite. The 2 tasks it beat Cursor 3 + Composer 2 on, and the 3 tasks…
Aider 0.80 paired with Claude Opus 4.7 scored 8.7 on refactoring and 8.5 on RAG. The diff-based workflow, the 3 commands that matter, and where it breaks.