Review the `git diff origin/main` output for the issues listed below. Be specific — cite `file:line` and suggest fixes. Skip anything that's fine. Only flag real problems.
Two-pass review:
All findings are actioned via Fix-First Review: obvious mechanical fixes are applied automatically; genuinely ambiguous issues are batched into a single user question.
Output format:
Pre-Landing Review: N issues (X critical, Y informational)
**AUTO-FIXED:**
- [file:line] Problem → fix applied
**NEEDS INPUT:**
- [file:line] Problem description
Recommended fix: suggested fix
If no issues found: Pre-Landing Review: No issues found.
Be terse. For each issue: one line describing the problem, one line with the fix. No preamble, no summaries, no "looks good overall."
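Filled in with hypothetical findings (file paths and line numbers are illustrative only), the format above looks like:

```
Pre-Landing Review: 2 issues (1 critical, 1 informational)

**AUTO-FIXED:**
- [app/views/runs/_row.html.erb:14] Inline <style> block re-parsed every render → moved to stylesheet

**NEEDS INPUT:**
- [app/models/run.rb:52] Status transition read-then-write can double-apply under concurrent updates
  Recommended fix: guard the UPDATE with WHERE old_status = ? and check affected rows
```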
**SQL & Data Safety:**
- String interpolation into SQL not guarded by `.to_i`/`.to_f` — use parameterized queries (Rails: `sanitize_sql_array`/Arel; Node: prepared statements; Python: parameterized queries)

**Race Conditions & Concurrency:**
- Non-atomic `WHERE` + `update_all` sequences
- `where(hash:).first` then `save!` without handling concurrent insert
- `WHERE old_status = ?` then `UPDATE SET new_status` — concurrent updates can skip or double-apply transitions

**LLM Output Trust Boundary:**
- LLM-extracted values not validated (`EMAIL_REGEXP`, `URI.parse`, `.strip`) before persisting
- URLs fetched without a hostname check (`urllib.parse.urlparse` → check hostname against blocklist before `requests.get`/`httpx.get`)
- `subprocess.run()` / `subprocess.call()` / `subprocess.Popen()` with `shell=True` AND f-string/`.format()` interpolation in the command string — use argument arrays instead
- `os.system()` with variable interpolation — replace with `subprocess.run()` using argument arrays
- `eval()` / `exec()` on LLM-generated code without sandboxing

**Enum & Value Completeness:** When the diff introduces a new enum value, status string, tier name, or type constant, check:
- `%w[]` lists containing sibling values (e.g., if adding "revise" to tiers, find every `%w[quick lfg mega]` and verify "revise" is included where needed)
- `case`/`if-elsif` chains — if existing code branches on the enum, does the new value fall through to a wrong default?
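A minimal Python sketch of the fall-through failure mode, using the tier names from the example above (the function and the token limits are made up):

```python
TIERS = ["quick", "lfg", "mega", "revise"]  # "revise" is the newly added value

def max_tokens(tier: str) -> int:
    # This chain predates "revise": the new value matches no branch
    # and silently falls through to the default below.
    if tier == "quick":
        return 1_000
    elif tier in ("lfg", "mega"):
        return 8_000
    return 500  # "revise" lands here — almost certainly not intended

print(max_tokens("revise"))  # → 500
```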
To do this: use Grep to find all references to the sibling values (e.g., grep for "lfg" or "mega" to find all tier consumers). Read each match. This step requires reading code OUTSIDE the diff.
- `subprocess.run()`, `open()`, `requests.get()` inside `async def` endpoints — blocks the event loop. Use `asyncio.to_thread()`, `aiofiles`, or `httpx.AsyncClient` instead
- `time.sleep()` inside async functions — use `asyncio.sleep()`
- Synchronous calls in async code without `run_in_executor()` wrapping
- Query-builder calls (`.select()`, `.eq()`, `.gte()`, `.order()`) not verified against the actual DB schema — wrong column names silently return empty results or throw swallowed errors
- `.get()` calls on query results that don't use the column name that was actually selected
- New tools missing from the `tool_classes`/`tools` array
- `.expects(:something).never` missing when a code path should explicitly NOT call an external service
- `rand()` / `Random.rand` for security-sensitive values — use `SecureRandom` instead
- Plain comparison (`==`) on secrets or tokens — vulnerable to timing attacks
- Values not normalized with `.to_s` or equivalent before serialization — `{ cores: 8 }` vs `{ cores: "8" }` produce different hashes
- `<style>` blocks in partials (re-parsed every render)
- `Array#find` in a loop instead of an `index_by` hash
- `.select{}` filtering on DB results that could be a `WHERE` clause (unless intentionally avoiding leading-wildcard `LIKE`)
- `dependencies` entries in package.json that are known-heavy: moment.js (→ date-fns, 330KB→22KB), lodash full (→ lodash-es or per-function imports), jquery, core-js full polyfill
- Images without `loading="lazy"` or explicit `width`/`height` attributes (causes layout shift / CLS)
- `<script>` tags without `async`/`defer`
- `@import` in stylesheets (blocks parallel loading — use bundler imports instead)
- `useEffect` with a fetch that depends on another fetch result (request waterfall — combine or parallelize)
- `require()` calls in ESM codebases

DO NOT flag:
- `import()` calls (code splitting — these are good)

**Distribution & CI/CD Pipeline** (`.github/workflows/`):
- Verify build tool versions match project requirements, artifact names/paths are correct, and secrets use `${{ secrets.X }}` not hardcoded values
- `v1.2.3` vs `1.2.3` — version strings must match across VERSION file, git tags, and publish scripts
- `gh release delete` before `gh release create`

DO NOT flag:
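The `shell=True` plus string-interpolation hazard flagged above can be sketched in a few lines (the payload string is illustrative; imagine it came from an LLM or a user):

```python
import subprocess

payload = "hello; echo INJECTED"

# BAD: shell=True plus f-string interpolation — the ';' is parsed by the
# shell, so the payload runs a second command.
unsafe = subprocess.run(f"echo {payload}", shell=True,
                        capture_output=True, text=True).stdout

# GOOD: argument array — the payload is one literal argv entry and is
# never parsed by a shell.
safe = subprocess.run(["echo", payload],
                      capture_output=True, text=True).stdout

print(unsafe.splitlines())  # the injected echo executed
print(safe.strip())         # the payload stayed inert text
```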
CRITICAL (highest severity):
├─ SQL & Data Safety
├─ Race Conditions & Concurrency
├─ LLM Output Trust Boundary
└─ Enum & Value Completeness

INFORMATIONAL (lower severity):
├─ Conditional Side Effects
├─ Magic Numbers & String Coupling
├─ Dead Code & Consistency
├─ LLM Prompt Issues
├─ Test Gaps
├─ Completeness Gaps
├─ Crypto & Entropy
├─ Time Window Safety
├─ Type Coercion at Boundaries
├─ View/Frontend
├─ Performance & Bundle Impact
└─ Distribution & CI/CD Pipeline
All findings are actioned via Fix-First Review. Severity determines
presentation order and classification of AUTO-FIX vs ASK — critical
findings lean toward ASK (they're riskier), informational findings
lean toward AUTO-FIX (they're more mechanical).
This heuristic is referenced by both /review and /ship. It determines whether
the agent auto-fixes a finding or asks the user.
AUTO-FIX (agent fixes without asking):
├─ Dead code / unused variables
├─ N+1 queries (missing eager loading)
├─ Stale comments contradicting code
├─ Magic numbers → named constants
├─ Missing LLM output validation
├─ Version/path mismatches
├─ Variables assigned but never read
└─ Inline styles, O(n*m) view lookups

ASK (needs human judgment):
├─ Security (auth, XSS, injection)
├─ Race conditions
├─ Design decisions
├─ Large fixes (>20 lines)
├─ Enum completeness
├─ Removing functionality
└─ Anything changing user-visible behavior
Rule of thumb: If the fix is mechanical and a senior engineer would apply it without discussion, it's AUTO-FIX. If reasonable engineers could disagree about the fix, it's ASK.
Critical findings default toward ASK (they're inherently riskier). Informational findings default toward AUTO-FIX (they're more mechanical).
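One way to sketch this routing heuristic (the function name, category strings, and the 20-line threshold encoding are hypothetical; they are not part of the command itself):

```python
# Categories drawn from the AUTO-FIX and ASK columns above.
AUTO_FIX_CATEGORIES = {
    "dead_code", "n_plus_one", "stale_comment", "magic_number",
    "llm_output_validation", "version_mismatch", "unused_variable",
    "inline_style",
}
ASK_CATEGORIES = {
    "security", "race_condition", "design", "enum_completeness",
    "removed_functionality", "user_visible_behavior",
}

def route(category: str, fix_lines: int) -> str:
    """Return 'AUTO-FIX' or 'ASK' for a single finding."""
    if fix_lines > 20:              # large fixes always need human judgment
        return "ASK"
    if category in ASK_CATEGORIES:
        return "ASK"
    if category in AUTO_FIX_CATEGORIES:
        return "AUTO-FIX"
    return "ASK"                    # default to asking when unsure

print(route("magic_number", 3))  # mechanical and small
print(route("security", 2))      # needs judgment even when tiny
```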
Dead Code & Consistency examples:
- A `present?` check redundant with `length > 20`
- `.reject` on an element that's never in the array