---
name: review
version: 1.0.0
description: |
  Pre-landing PR review. Analyzes the diff against the base branch for SQL safety,
  LLM trust boundary violations, conditional side effects, and other structural
  issues. Use when asked to "review this PR", "code review", "pre-landing review",
  or "check my diff".
allowed-tools:
---
```bash
# Check for a newer gstack version (user-level install first, then repo-local)
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true

# Track concurrent sessions: mark this shell, count recent markers,
# and prune markers older than two hours
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true

# Read the contributor-mode flag and the current branch
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"

# Has the user seen the Completeness Principle intro yet?
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
```
If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.claude/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.
If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the Boil the Lake principle — always do the complete
thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
Only run open if the user says yes. Always run touch to mark as seen. This only happens once.
ALWAYS follow this structure for every AskUserQuestion call:

- State the current branch (use the _BRANCH value printed by the preamble — NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
- RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers the happy path but skips some edges, 3 = a shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
- Options A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)
- Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
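A hypothetical example of this structure (the branch, finding, and estimates are all invented for illustration):

On feature/post-status (the _BRANCH value from the preamble), finishing the review fixes.
RECOMMENDATION: Choose A because it closes the race condition completely (see Completeness Principle).
A) Fix and add a regression test (Completeness: 9/10; human: ~4 hours / CC: ~15 min)
B) Patch only the UPDATE statement (Completeness: 6/10; human: ~1 hour / CC: ~5 min)
C) Skip for now (Completeness: 3/10)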
Per-skill instructions may add additional formatting rules on top of this baseline.
AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:
| Task type | Human team | CC+gstack | Compression |
|---|---|---|---|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |
Anti-patterns — DON'T do this:
If _CONTRIB is true: you are in contributor mode. You're a gstack user who also helps make it better.
At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!
Calibration — this is the bar: for example, `$B js "await fetch(...)"` used to fail with `SyntaxError: await is only valid in async functions` because gstack didn't wrap expressions in an async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Ignore anything less consequential than this.
NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.
To file: write ~/.gstack/contributor-logs/{slug}.md with all sections below (do not truncate — include every section through the Date/Version footer):
# {Title}
Hey gstack team — ran into this while using /{skill-name}:
**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}
## Steps to reproduce
1. {step}
## Raw output
{paste the actual error or unexpected output here}
## What would make this a 10
{one sentence: what gstack should have done differently}
**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}
Slug: lowercase, hyphens, max 60 chars (e.g. browse-js-no-await). Skip if file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell user: "Filed gstack field report: {title}"
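A minimal sketch of the filing guardrails, reusing the example slug from above; the title is hypothetical, and the printf stands in for writing the full template:

```bash
report=~/.gstack/contributor-logs/browse-js-no-await.md
if [ ! -f "$report" ]; then            # skip if this slug was already filed
  mkdir -p ~/.gstack/contributor-logs
  # Write the full template here: every section through the Date/Version footer
  printf '# Browse: js fails on top-level await\n' > "$report"
  echo "Filed gstack field report: Browse: js fails on top-level await"
fi
```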
Determine which branch this PR targets. Use the result as "the base branch" in all subsequent steps.
Check if a PR already exists for this branch:
gh pr view --json baseRefName -q .baseRefName
If this succeeds, use the printed branch name as the base branch.
If no PR exists (command fails), detect the repo's default branch:
gh repo view --json defaultBranchRef -q .defaultBranchRef.name
If both commands fail, fall back to main.
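Taken together, a minimal sketch of the fallback chain (BASE is a placeholder name for the detected branch):

```bash
BASE=$(gh pr view --json baseRefName -q .baseRefName 2>/dev/null) ||
  BASE=$(gh repo view --json defaultBranchRef -q .defaultBranchRef.name 2>/dev/null) ||
  BASE=main
echo "Base branch: $BASE"
```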
Print the detected base branch name. In every subsequent git diff, git log,
git fetch, git merge, and gh pr create command, substitute the detected
branch name wherever the instructions say "the base branch."
You are running the /review workflow. Analyze the current branch's diff against the base branch for structural issues that tests don't catch.
1. Run `git branch --show-current` to get the current branch.
2. Run `git fetch origin <base> --quiet && git diff origin/<base> --stat` to check whether there's a diff. If there is no diff, output the same message and stop (see the sketch after this list).
3. Read `.claude/skills/review/checklist.md`.
If the file cannot be read, STOP and report the error. Do not proceed without the checklist.
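A minimal sketch of the early exit in step 2, assuming $BASE holds the detected base branch (the no-diff message is a placeholder):

```bash
git fetch origin "$BASE" --quiet
if git diff --quiet "origin/$BASE"; then
  echo "No changes against origin/$BASE; nothing to review."
  exit 0
fi
```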
Read .claude/skills/review/greptile-triage.md and follow the fetch, filter, classify, and escalation detection steps.
If no PR exists, gh fails, API returns an error, or there are zero Greptile comments: Skip this step silently. Greptile integration is additive — the review works without it.
If Greptile comments are found: Store the classifications (VALID & ACTIONABLE, VALID BUT ALREADY FIXED, FALSE POSITIVE, SUPPRESSED) — you will need them in Step 5.
Fetch the latest base branch to avoid false positives from stale local state:
git fetch origin <base> --quiet
Run git diff origin/<base> to get the full diff. This includes both committed and uncommitted changes against the latest base branch.
Apply the checklist against the diff in two passes: a within-diff pass for most categories, and a cross-file pass for Enum & Value Completeness, which requires reading code OUTSIDE the diff. When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check whether the new value is handled. This is the one category where within-diff review is insufficient.
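For example (all names hypothetical), if the diff adds a new status 'archived' alongside existing siblings 'draft' and 'published':

```bash
# Every file that references a sibling value...
grep -rl -e "'draft'" -e "'published'" app/ | while read -r f; do
  # ...should also handle the new value; flag the ones that don't
  grep -q "'archived'" "$f" || echo "UNHANDLED: $f"
done
```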
Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.
Check if the diff touches frontend files using gstack-diff-scope:
eval $(~/.claude/skills/gstack/bin/gstack-diff-scope <base> 2>/dev/null)
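gstack-diff-scope is assumed to print shell assignments (e.g. SCOPE_FRONTEND=true) that the eval imports into the session; a sketch of consuming the result:

```bash
if [ "${SCOPE_FRONTEND:-false}" = "true" ]; then
  echo "Frontend files in diff: running design review"
fi
```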
If SCOPE_FRONTEND=false: Skip design review silently. No output.
If SCOPE_FRONTEND=true:
Check for DESIGN.md. If DESIGN.md or design-system.md exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.
Read .claude/skills/review/design-checklist.md. If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."
Read each changed frontend file (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.
Apply the design checklist against the changed files. For each item: if it's a mechanical CSS fix (e.g. `outline: none`, `!important`, `font-size < 16px`), classify it as AUTO-FIX; classify everything else as ASK.
Include findings in the review output under a "Design Review" header, following the output format in the checklist. Design findings merge with code review findings into the same Fix-First flow.
Log the result for the Review Readiness Dashboard:
```bash
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
mkdir -p ~/.gstack/projects/$SLUG
echo '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
```
Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings or "issues_found", N = total findings, M = auto-fixed count.
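A filled-in example of the resulting log line (all values are illustrative):

```bash
echo '{"skill":"design-review-lite","timestamp":"2025-06-01T14:03:22Z","status":"issues_found","findings":3,"auto_fixed":2}' \
  >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
```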
Include any design findings alongside the findings from Step 4. They follow the same Fix-First flow in Step 5 — AUTO-FIX for mechanical CSS fixes, ASK for everything else.
Every finding gets action — not just critical ones.
Output a summary header: Pre-Landing Review: N issues (X critical, Y informational)
For each finding, classify as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational findings lean toward AUTO-FIX.
Apply each fix directly. For each one, output a one-line summary:
[AUTO-FIXED] [file:line] Problem → what you did
If there are ASK items remaining, present them in ONE AskUserQuestion:
Example format:
I auto-fixed 5 issues. 2 need your input:
1. [CRITICAL] app/models/post.rb:42 — Race condition in status transition
Fix: Add `WHERE status = 'draft'` to the UPDATE
→ A) Fix B) Skip
2. [INFORMATIONAL] app/services/generator.rb:88 — LLM output not type-checked before DB write
Fix: Add JSON schema validation
→ A) Fix B) Skip
RECOMMENDATION: Fix both — #1 is a real race condition, #2 prevents silent data corruption.
If 3 or fewer ASK items, you may use individual AskUserQuestion calls instead of batching.
Apply fixes for items where the user chose "Fix." Output what was fixed.
If no ASK items exist (everything was AUTO-FIX), skip the question entirely.
After outputting your own findings, if Greptile comments were classified in Step 2.5:
Include a Greptile summary in your output header: + N Greptile comments (X valid, Y fixed, Z FP)
Before replying to any comment, run the Escalation Detection algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.
VALID & ACTIONABLE comments: These are included in your findings and follow the Fix-First flow — auto-fixed if mechanical, batched into ASK if not, with options A: Fix it now, B: Acknowledge, C: False positive. If the user chooses A (fix), reply using the Fix reply template from greptile-triage.md (include inline diff + explanation). If the user chooses C (false positive), reply using the False Positive reply template (include evidence + suggested re-rank) and save to both the per-project and global greptile-history.
FALSE POSITIVE comments: Present each one via AskUserQuestion. If the user chooses A, reply using the False Positive reply template from greptile-triage.md (include evidence + suggested re-rank) and save to both the per-project and global greptile-history.
VALID BUT ALREADY FIXED comments: Reply using the Already Fixed reply template from greptile-triage.md — no AskUserQuestion needed.
SUPPRESSED comments: Skip silently — these are known false positives from previous triage.
Read TODOS.md in the repository root, if it exists, and cross-reference the PR against open TODOs.
If TODOS.md doesn't exist, skip this step silently.
Cross-reference the diff against documentation files. For each .md file in the repo root (README.md, ARCHITECTURE.md, CONTRIBUTING.md, CLAUDE.md, etc.): if the diff makes that file's contents stale, flag it with a suggestion to run /document-release. This is informational only — never critical. The fix action is /document-release.
If no documentation files exist, skip this step silently.
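A sketch of the enumeration and the silent skip (file names from the list above):

```bash
found=false
for doc in ./*.md; do
  [ -e "$doc" ] || continue            # unexpanded glob: no docs in the root
  found=true
  echo "Cross-referencing $doc against the diff"
done
"$found" || exit 0                     # no documentation files: skip silently
```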