name: review
preamble-tier: 4
version: 1.0.0
description: |
  Pre-landing PR review. Analyzes diff against the base branch for SQL safety,
  LLM trust boundary violations, conditional side effects, and other structural issues.
  Use when asked to "review this PR", "code review", "pre-landing review", or "check my diff".
  Proactively suggest when the user is about to merge or land code changes.
allowed-tools:
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_PROACTIVE_PROMPTED=$([ -f ~/.gstack/.proactive-prompted ] && echo "yes" || echo "no")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
echo "PROACTIVE: $_PROACTIVE"
echo "PROACTIVE_PROMPTED: $_PROACTIVE_PROMPTED"
source <(~/.claude/skills/gstack/bin/gstack-repo-mode 2>/dev/null) || true
REPO_MODE=${REPO_MODE:-unknown}
echo "REPO_MODE: $REPO_MODE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.claude/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
mkdir -p ~/.gstack/analytics
echo '{"skill":"review","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
# zsh-compatible: use find instead of glob to avoid NOMATCH error
for _PF in $(find ~/.gstack/analytics -maxdepth 1 -name '.pending-*' 2>/dev/null); do [ -f "$_PF" ] && ~/.claude/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true; break; done
If PROACTIVE is "false", do not proactively suggest gstack skills AND do not
auto-invoke skills based on conversation context. Only run skills the user explicitly
types (e.g., /qa, /ship). If you would have auto-invoked a skill, instead briefly say:
"I think /skillname might help here — want me to run it?" and wait for confirmation.
The user opted out of proactive behavior.
If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.claude/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.
If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the Boil the Lake principle — always do the complete
thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
Only run open if the user says yes. Always run touch to mark as seen. This only happens once.
If TEL_PROMPTED is no AND LAKE_INTRO is yes: After the lake intro is handled,
ask the user about telemetry. Use AskUserQuestion:
Help gstack get better! Community mode shares usage data (which skills you use, how long they take, crash info) with a stable device ID so we can track trends and fix bugs faster. No code, file paths, or repo names are ever sent. Change anytime with
gstack-config set telemetry off.
Options:
If A: run ~/.claude/skills/gstack/bin/gstack-config set telemetry community
If B: ask a follow-up AskUserQuestion:
How about anonymous mode? We just learn that someone used gstack — no unique ID, no way to connect sessions. Just a counter that helps us know if anyone's out there.
Options:
If B→A: run ~/.claude/skills/gstack/bin/gstack-config set telemetry anonymous
If B→B: run ~/.claude/skills/gstack/bin/gstack-config set telemetry off
Always run:
touch ~/.gstack/.telemetry-prompted
This only happens once. If TEL_PROMPTED is yes, skip this entirely.
If PROACTIVE_PROMPTED is no AND TEL_PROMPTED is yes: After telemetry is handled,
ask the user about proactive behavior. Use AskUserQuestion:
gstack can proactively figure out when you might need a skill while you work — like suggesting /qa when you say "does this work?" or /investigate when you hit a bug. We recommend keeping this on — it speeds up every part of your workflow.
Options:
If A: run ~/.claude/skills/gstack/bin/gstack-config set proactive true
If B: run ~/.claude/skills/gstack/bin/gstack-config set proactive false
Always run:
touch ~/.gstack/.proactive-prompted
This only happens once. If PROACTIVE_PROMPTED is yes, skip this entirely.
ALWAYS follow this structure for every AskUserQuestion call:
CONTEXT: Where the user is — the current branch (use the _BRANCH value printed by the preamble, NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
OPTIONS: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)
Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
Per-skill instructions may add additional formatting rules on top of this baseline.
AI makes completeness near-free. Always recommend the complete option over shortcuts — the delta is minutes with CC+gstack. A "lake" (100% coverage, all edge cases) is boilable; an "ocean" (full rewrite, multi-quarter migration) is not. Boil lakes, flag oceans.
Effort reference — always show both scales:
| Task type | Human team | CC+gstack | Compression |
|---|---|---|---|
| Boilerplate | 2 days | 15 min | ~100x |
| Tests | 1 day | 15 min | ~50x |
| Feature | 1 week | 30 min | ~30x |
| Bug fix | 4 hours | 15 min | ~20x |
Include Completeness: X/10 for each option (10=all edge cases, 7=happy path, 3=shortcut).
REPO_MODE controls how to handle issues outside your branch:
- solo — You own everything. Investigate and offer to fix proactively.
- collaborative / unknown — Flag via AskUserQuestion, don't fix (may be someone else's).
Always flag anything that looks wrong — one sentence: what you noticed and its impact.
Before building anything unfamiliar, search first. See ~/.claude/skills/gstack/ETHOS.md.
Eureka: When first-principles reasoning contradicts conventional wisdom, name it and log:
jq -n --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --arg skill "SKILL_NAME" --arg branch "$(git branch --show-current 2>/dev/null)" --arg insight "ONE_LINE_SUMMARY" '{ts:$ts,skill:$skill,branch:$branch,insight:$insight}' >> ~/.gstack/analytics/eureka.jsonl 2>/dev/null || true
If _CONTRIB is true: you are in contributor mode. At the end of each major workflow step, rate your gstack experience 0-10. If not a 10 and there's an actionable bug or improvement — file a field report.
File only: gstack tooling bugs where the input was reasonable but gstack failed. Skip: user app bugs, network errors, auth failures on user's site.
To file: write ~/.gstack/contributor-logs/{slug}.md:
# {Title}
**What I tried:** {action} | **What happened:** {result} | **Rating:** {0-10}
## Repro
1. {step}
## What would make this a 10
{one sentence}
**Date:** {YYYY-MM-DD} | **Version:** {version} | **Skill:** /{skill}
Slug: lowercase hyphens, max 60 chars. Skip if exists. Max 3/session. File inline, don't stop.
When completing a skill workflow, report status using one of:
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
After the skill workflow completes (success, error, or abort), log the telemetry event.
Determine the skill name from the name: field in this file's YAML frontmatter.
Determine the outcome from the workflow result (success if completed normally, error
if it failed, abort if the user interrupted).
PLAN MODE EXCEPTION — ALWAYS RUN: This command writes telemetry to
~/.gstack/analytics/ (user config directory, not project files). The skill
preamble already writes to the same directory — this is the same pattern.
Skipping this command loses session duration and outcome data.
Run this bash:
_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
~/.claude/skills/gstack/bin/gstack-telemetry-log \
--skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
--used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &
Replace SKILL_NAME with the actual skill name from frontmatter, OUTCOME with
success/error/abort, and USED_BROWSE with true/false based on whether $B was used.
If you cannot determine the outcome, use "unknown". This runs in the background and
never blocks the user.
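As an illustration, here is the final call with the placeholders filled for a successful /review run (the "false" for --used-browse is just an example value):
```bash
~/.claude/skills/gstack/bin/gstack-telemetry-log \
  --skill "review" --duration "$_TEL_DUR" --outcome "success" \
  --used-browse "false" --session-id "$_SESSION_ID" 2>/dev/null &
```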
When you are in plan mode and about to call ExitPlanMode:
Read the persisted review state so you can build the ## GSTACK REVIEW REPORT section:
```bash
~/.claude/skills/gstack/bin/gstack-review-read
```
Then write a ## GSTACK REVIEW REPORT section to the end of the plan file:
If the output contains review entries (the lines before ---CONFIG---): format the
standard report table with runs/status/findings per skill, same format as the review
skills use. If the output is NO_REVIEWS or empty: write this placeholder table:
```markdown
| Review | Trigger | Why | Runs | Status | Findings |
|---|---|---|---|---|---|
| CEO Review | `/plan-ceo-review` | Scope & strategy | 0 | — | — |
| Codex Review | `/codex review` | Independent 2nd opinion | 0 | — | — |
| Eng Review | `/plan-eng-review` | Architecture & tests (required) | 0 | — | — |
| Design Review | `/plan-design-review` | UI/UX gaps | 0 | — | — |
VERDICT: NO REVIEWS YET — run `/autoplan` for full review pipeline, or individual reviews above. ```
PLAN MODE EXCEPTION — ALWAYS RUN: This writes to the plan file, which is the one file you are allowed to edit in plan mode. The plan file review report is part of the plan's living status.
First, detect the git hosting platform from the remote URL:
git remote get-url origin 2>/dev/null
If gh auth status 2>/dev/null succeeds → platform is GitHub (covers GitHub Enterprise).
If glab auth status 2>/dev/null succeeds → platform is GitLab (covers self-hosted).
Then determine which branch this PR/MR targets, or the repo's default branch if no PR/MR exists. Use the result as "the base branch" in all subsequent steps.
If GitHub:
- gh pr view --json baseRefName -q .baseRefName — if it succeeds, use it
- gh repo view --json defaultBranchRef -q .defaultBranchRef.name — if it succeeds, use it
If GitLab:
- glab mr view -F json 2>/dev/null and extract the target_branch field — if it succeeds, use it
- glab repo view -F json 2>/dev/null and extract the default_branch field — if it succeeds, use it
Git-native fallback (if unknown platform, or the CLI commands fail):
- git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's|refs/remotes/origin/||'
- git rev-parse --verify origin/main 2>/dev/null → use main
- git rev-parse --verify origin/master 2>/dev/null → use master
If all fail, fall back to main.
Print the detected base branch name. In every subsequent git diff, git log,
git fetch, git merge, and PR/MR creation command, substitute the detected
branch name wherever the instructions say "the base branch" or <default>.
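As a rough illustration, the git-native fallback above can be collapsed into a single shell block (a sketch only — the gh/glab paths are still preferred when those CLIs authenticate):
```bash
# Sketch: git-native base-branch fallback (no gh/glab available)
BASE=$(git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null | sed 's|refs/remotes/origin/||')
[ -z "$BASE" ] && git rev-parse --verify origin/main >/dev/null 2>&1 && BASE=main
[ -z "$BASE" ] && git rev-parse --verify origin/master >/dev/null 2>&1 && BASE=master
BASE=${BASE:-main}
echo "BASE BRANCH: $BASE"
```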
You are running the /review workflow. Analyze the current branch's diff against the base branch for structural issues that tests don't catch.
- Run git branch --show-current to get the current branch.
- Run git fetch origin <base> --quiet && git diff origin/<base> --stat to check if there's a diff. If there is no diff, output the same message and stop.
Before reviewing code quality, check: did they build what was requested — nothing more, nothing less?
Read TODOS.md (if it exists). Read the PR description (gh pr view --json body --jq .body 2>/dev/null || true).
Read commit messages (git log origin/<base>..HEAD --oneline).
- If no PR exists: rely on commit messages and TODOS.md for stated intent — this is the common case since /review runs before /ship creates the PR.
- Run git diff origin/<base>...HEAD --stat and compare the files changed against the stated intent.
Conversation context (primary): Check if there is an active plan file in this conversation. The host agent's system messages include plan file paths when in plan mode. If found, use it directly — this is the most reliable signal.
Content-based search (fallback): If no plan file is referenced in conversation context, search by content:
BRANCH=$(git branch --show-current 2>/dev/null | tr '/' '-')
REPO=$(basename "$(git rev-parse --show-toplevel 2>/dev/null)")
# Search common plan file locations
for PLAN_DIR in "$HOME/.claude/plans" "$HOME/.codex/plans" ".gstack/plans"; do
[ -d "$PLAN_DIR" ] || continue
PLAN=$(ls -t "$PLAN_DIR"/*.md 2>/dev/null | xargs grep -l "$BRANCH" 2>/dev/null | head -1)
[ -z "$PLAN" ] && PLAN=$(ls -t "$PLAN_DIR"/*.md 2>/dev/null | xargs grep -l "$REPO" 2>/dev/null | head -1)
[ -z "$PLAN" ] && PLAN=$(find "$PLAN_DIR" -name '*.md' -mmin -1440 -maxdepth 1 2>/dev/null | xargs ls -t 2>/dev/null | head -1)
[ -n "$PLAN" ] && break
done
[ -n "$PLAN" ] && echo "PLAN_FILE: $PLAN" || echo "NO_PLAN_FILE"
Error handling:
Read the plan file. Extract every actionable item — anything that describes work to be done. Look for:
- Checkbox items: - [ ] ... or - [x] ...
Ignore:
- Narrative sections (## Context, ## Background, ## Problem)
- The review report section (## GSTACK REVIEW REPORT)
Cap: Extract at most 50 items. If the plan has more, note: "Showing top 50 of N plan items — full list in plan file."
No items found: If the plan contains no extractable actionable items, skip with: "Plan file contains no actionable items — skipping completion audit."
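A minimal extraction sketch, assuming $PLAN was set by the search above (the pattern only covers checkbox items; other actionable formats still need a manual read of the plan):
```bash
# Pull checkbox items from the plan file, capped at 50
grep -nE '^[[:space:]]*- \[[ x]\]' "$PLAN" | head -50
```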
For each item, note:
Run git diff origin/<base>...HEAD and git log origin/<base>..HEAD --oneline to understand what was implemented.
For each extracted plan item, check the diff and classify:
Be conservative with DONE — require clear evidence in the diff. A file being touched is not enough; the specific functionality described must be present. Be generous with CHANGED — if the goal is met by different means, that counts as addressed.
PLAN COMPLETION AUDIT
═══════════════════════════════
Plan: {plan file path}
## Implementation Items
[DONE] Create UserService — src/services/user_service.rb (+142 lines)
[PARTIAL] Add validation — model validates but missing controller checks
[NOT DONE] Add caching layer — no cache-related changes in diff
[CHANGED] "Redis queue" → implemented with Sidekiq instead
## Test Items
[DONE] Unit tests for UserService — test/services/user_service_test.rb
[NOT DONE] E2E test for signup flow
## Migration Items
[DONE] Create users table — db/migrate/20240315_create_users.rb
─────────────────────────────────
COMPLETION: 4/7 DONE, 1 PARTIAL, 1 NOT DONE, 1 CHANGED
─────────────────────────────────
The plan completion results augment the existing Scope Drift Detection. If a plan file is found:
This is INFORMATIONAL — does not block the review (consistent with existing scope drift behavior).
Update the scope drift output to include plan file context:
Scope Check: [CLEAN / DRIFT DETECTED / REQUIREMENTS MISSING]
Intent: <from plan file — 1-line summary>
Plan: <plan file path>
Delivered: <1-line summary of what the diff actually does>
Plan items: N DONE, M PARTIAL, K NOT DONE
[If NOT DONE: list each missing item]
[If scope creep: list each out-of-scope change not in the plan]
No plan file found: Fall back to existing scope drift behavior (check TODOS.md and PR description only).
Evaluate with skepticism (incorporating plan completion results if available):
SCOPE CREEP detection:
MISSING REQUIREMENTS detection:
Output (before the main review begins):
Scope Check: [CLEAN / DRIFT DETECTED / REQUIREMENTS MISSING]
Intent: <1-line summary of what was requested>
Delivered: <1-line summary of what the diff actually does>
[If drift: list each out-of-scope change]
[If missing: list each unaddressed requirement]
This is INFORMATIONAL — does not block the review. Proceed to Step 2.
Read .claude/skills/review/checklist.md.
If the file cannot be read, STOP and report the error. Do not proceed without the checklist.
Read .claude/skills/review/greptile-triage.md and follow the fetch, filter, classify, and escalation detection steps.
If no PR exists, gh fails, API returns an error, or there are zero Greptile comments: Skip this step silently. Greptile integration is additive — the review works without it.
If Greptile comments are found: Store the classifications (VALID & ACTIONABLE, VALID BUT ALREADY FIXED, FALSE POSITIVE, SUPPRESSED) — you will need them in Step 5.
Fetch the latest base branch to avoid false positives from stale local state:
git fetch origin <base> --quiet
Run git diff origin/<base> to get the full diff. This includes both committed and uncommitted changes against the latest base branch.
Apply the checklist against the diff in two passes:
Enum & Value Completeness requires reading code OUTSIDE the diff. When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.
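For example, a hedged sketch of that outside-the-diff search (the value name and file glob are hypothetical):
```bash
# Hypothetical: the diff adds a new enum value "tier_enterprise" next to an existing "tier_pro".
# Find every file that already handles a sibling value, then Read each hit
# to confirm the new value is handled there too.
grep -rln "tier_pro" --include='*.rb' app/ lib/ 2>/dev/null
```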
Search-before-recommending: When recommending a fix pattern (especially for concurrency, caching, auth, or framework-specific behavior):
Takes seconds, prevents recommending outdated patterns. If WebSearch is unavailable, note it and proceed with in-distribution knowledge.
Follow the output format specified in the checklist. Respect the suppressions — do NOT flag items listed in the "DO NOT flag" section.
Check if the diff touches frontend files using gstack-diff-scope:
source <(~/.claude/skills/gstack/bin/gstack-diff-scope <base> 2>/dev/null)
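After sourcing, the script is expected to export SCOPE_FRONTEND (other variables may also be set); as a quick sanity check you can echo it:
```bash
echo "SCOPE_FRONTEND: ${SCOPE_FRONTEND:-unknown}"
```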
If SCOPE_FRONTEND=false: Skip design review silently. No output.
If SCOPE_FRONTEND=true:
Check for DESIGN.md. If DESIGN.md or design-system.md exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.
Read .claude/skills/review/design-checklist.md. If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."
Read each changed frontend file (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.
Apply the design checklist against the changed files. For each item:
- If the fix is a mechanical CSS change (e.g., outline: none, !important, font-size < 16px): classify as AUTO-FIX; otherwise classify as ASK.
Include findings in the review output under a "Design Review" header, following the output format in the checklist. Design findings merge with code review findings into the same Fix-First flow.
Log the result for the Review Readiness Dashboard:
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M,"commit":"COMMIT"}'
Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings or "issues_found", N = total findings, M = auto-fixed count, COMMIT = output of git rev-parse --short HEAD.
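A filled-in illustration of that log call (the counts are placeholders; the shell substitutions mirror the pattern used for the adversarial-review log later in this file):
```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"design-review-lite","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"issues_found","findings":3,"auto_fixed":1,"commit":"'"$(git rev-parse --short HEAD)"'"}'
```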
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
If Codex is available, run a lightweight design check on the diff:
TMPERR_DRL=$(mktemp /tmp/codex-drl-XXXXXXXX)
codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -C "$(git rev-parse --show-toplevel)" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_DRL"
Use a 5-minute timeout (timeout: 300000). After the command completes, read stderr:
cat "$TMPERR_DRL" && rm -f "$TMPERR_DRL"
Error handling: All errors are non-blocking. On auth failure, timeout, or empty response — skip with a brief note and continue.
Present Codex output under a CODEX (design): header, merged with the checklist findings above.
Include any design findings alongside the findings from Step 4. They follow the same Fix-First flow in Step 5 — AUTO-FIX for mechanical CSS fixes, ASK for everything else.
100% coverage is the goal. Evaluate every codepath changed in the diff and identify test gaps. Gaps become INFORMATIONAL findings that follow the Fix-First flow.
Before analyzing coverage, detect the project's test framework:
- Look for a ## Testing section (with the test command and framework name) in the project's documentation. If found, use that as the authoritative source.
Otherwise, detect heuristically:
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
{ [ -f requirements.txt ] || [ -f pyproject.toml ]; } && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* cypress.config.* .rspec pytest.ini phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
Step 1. Trace every codepath changed using git diff origin/<base>...HEAD:
Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution:
This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test.
Step 2. Map user flows, interactions, and error states:
Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through:
Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else.
Step 3. Check each branch against existing tests:
Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it:
- processPayment() → look for billing.test.ts, billing.spec.ts, test/billing_test.rb
- A helperFn() that has its own branches → those branches need tests too
Quality scoring rubric:
When checking each branch, also determine whether a unit test or E2E/integration test is the right tool:
RECOMMEND E2E (mark as [→E2E] in the diagram):
RECOMMEND EVAL (mark as [→EVAL] in the diagram):
STICK WITH UNIT TESTS:
IRON RULE: When the coverage audit identifies a REGRESSION — code that previously worked but the diff broke — a regression test is written immediately. No AskUserQuestion. No skipping. Regressions are the highest-priority test because they prove something broke.
A regression is when:
When uncertain whether a change is a regression, err on the side of writing the test.
Format: commit as test: regression test for {what broke}
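For instance (the file name and message here are purely illustrative):
```bash
git add test/services/billing_refund_test.rb
git commit -m "test: regression test for partial refund rounding"
```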
Step 4. Output ASCII coverage diagram:
Include BOTH code paths and user flows in the same diagram. Mark E2E-worthy and eval-worthy paths:
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
│
├── processPayment()
│ ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
│ ├── [GAP] Network timeout — NO TEST
│ └── [GAP] Invalid currency — NO TEST
│
└── refundPayment()
├── [★★ TESTED] Full refund — billing.test.ts:89
└── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101
USER FLOW COVERAGE
===========================
[+] Payment checkout flow
│
├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
├── [GAP] [→E2E] Double-click submit — needs E2E, not just unit
├── [GAP] Navigate away during payment — unit test sufficient
└── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40
[+] Error states
│
├── [★★ TESTED] Card declined message — billing.test.ts:58
├── [GAP] Network timeout UX (what does user see?) — NO TEST
└── [GAP] Empty cart submission — NO TEST
[+] LLM integration
│
└── [GAP] [→EVAL] Prompt template change — needs eval test
─────────────────────────────────
COVERAGE: 5/13 paths tested (38%)
Code paths: 3/5 (60%)
User flows: 2/8 (25%)
QUALITY: ★★★: 2 ★★: 2 ★: 1
GAPS: 8 paths need tests (2 need E2E, 1 needs eval)
─────────────────────────────────
Fast path: if all paths are covered → output "Step 4.75: All new code paths have test coverage ✓" and continue.
Step 5. Generate tests for gaps (Fix-First):
If test framework is detected and gaps were identified:
- Commit the generated tests as test: coverage for {feature}
If no test framework is detected → include gaps as INFORMATIONAL findings only, no generation.
Diff is test-only changes: Skip Step 4.75 entirely: "No new application code paths to audit."
After producing the coverage diagram, check the coverage percentage. Read CLAUDE.md for a ## Test Coverage section with a Minimum: field. If not found, use default: 60%.
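One possible way to pull the threshold, assuming the section carries a "Minimum: NN%" line (the exact format in CLAUDE.md may differ):
```bash
# Read the minimum coverage threshold from CLAUDE.md, defaulting to 60
MIN_COV=$(grep -A5 '^## Test Coverage' CLAUDE.md 2>/dev/null \
  | grep -oE 'Minimum:[[:space:]]*[0-9]+' | grep -oE '[0-9]+' | head -1)
MIN_COV=${MIN_COV:-60}   # default when no section or no Minimum: field
echo "COVERAGE MINIMUM: ${MIN_COV}%"
```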
If coverage is below the minimum threshold, output a prominent warning before the regular review findings:
⚠️ COVERAGE WARNING: AI-assessed coverage is {X}%. {N} code paths untested.
Consider writing tests before running /ship.
This is INFORMATIONAL — does not block /review. But it makes low coverage visible early so the developer can address it before reaching the /ship coverage gate.
If coverage percentage cannot be determined, skip the warning silently.
This step subsumes the "Test Gaps" category from Pass 2 — do not duplicate findings between the checklist Test Gaps item and this coverage diagram. Include any coverage gaps alongside the findings from Step 4 and Step 4.5. They follow the same Fix-First flow — gaps are INFORMATIONAL findings.
Every finding gets action — not just critical ones.
Output a summary header: Pre-Landing Review: N issues (X critical, Y informational)
For each finding, classify as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational findings lean toward AUTO-FIX.
Apply each fix directly. For each one, output a one-line summary:
[AUTO-FIXED] [file:line] Problem → what you did
If there are ASK items remaining, present them in ONE AskUserQuestion:
Example format:
I auto-fixed 5 issues. 2 need your input:
1. [CRITICAL] app/models/post.rb:42 — Race condition in status transition
Fix: Add `WHERE status = 'draft'` to the UPDATE
→ A) Fix B) Skip
2. [INFORMATIONAL] app/services/generator.rb:88 — LLM output not type-checked before DB write
Fix: Add JSON schema validation
→ A) Fix B) Skip
RECOMMENDATION: Fix both — #1 is a real race condition, #2 prevents silent data corruption.
If 3 or fewer ASK items, you may use individual AskUserQuestion calls instead of batching.
Apply fixes for items where the user chose "Fix." Output what was fixed.
If no ASK items exist (everything was AUTO-FIX), skip the question entirely.
Before producing the final review output:
Rationalization prevention: "This looks fine" is not a finding. Either cite evidence it IS fine, or flag it as unverified.
After outputting your own findings, if Greptile comments were classified in Step 2.5:
Include a Greptile summary in your output header: + N Greptile comments (X valid, Y fixed, Z FP)
Before replying to any comment, run the Escalation Detection algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.
VALID & ACTIONABLE comments: These are included in your findings — they follow the Fix-First flow (auto-fixed if mechanical; batched into ASK if not, with options A: Fix it now, B: Acknowledge, C: False positive). If the user chooses A (fix), reply using the Fix reply template from greptile-triage.md (include inline diff + explanation). If the user chooses C (false positive), reply using the False Positive reply template (include evidence + suggested re-rank), save to both per-project and global greptile-history.
FALSE POSITIVE comments: Present each one via AskUserQuestion:
If the user chooses A, reply using the False Positive reply template from greptile-triage.md (include evidence + suggested re-rank), save to both per-project and global greptile-history.
VALID BUT ALREADY FIXED comments: Reply using the Already Fixed reply template from greptile-triage.md — no AskUserQuestion needed:
SUPPRESSED comments: Skip silently — these are known false positives from previous triage.
Read TODOS.md in the repository root (if it exists). Cross-reference the PR against open TODOs:
If TODOS.md doesn't exist, skip this step silently.
Cross-reference the diff against documentation files. For each .md file in the repo root (README.md, ARCHITECTURE.md, CONTRIBUTING.md, CLAUDE.md, etc.):
/document-release."This is informational only — never critical. The fix action is /document-release.
If no documentation files exist, skip this step silently.
Adversarial review thoroughness scales automatically based on diff size. No configuration needed.
Detect diff size and tool availability:
DIFF_INS=$(git diff origin/<base> --stat | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo "0")
DIFF_DEL=$(git diff origin/<base> --stat | tail -1 | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo "0")
DIFF_TOTAL=$((DIFF_INS + DIFF_DEL))
which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE"
# Respect old opt-out
OLD_CFG=$(~/.claude/skills/gstack/bin/gstack-config get codex_reviews 2>/dev/null || true)
echo "DIFF_SIZE: $DIFF_TOTAL"
echo "OLD_CFG: ${OLD_CFG:-not_set}"
If OLD_CFG is disabled: skip this step silently. Continue to the next step.
User override: If the user explicitly requested a specific tier (e.g., "run all passes", "paranoid review", "full adversarial", "do all 4 passes", "thorough review"), honor that request regardless of diff size. Jump to the matching tier section.
Auto-select tier based on diff size:
Claude's structured review already ran. Now add a cross-model adversarial challenge.
If Codex is available: run the Codex adversarial challenge. If Codex is NOT available: fall back to the Claude adversarial subagent instead.
Codex adversarial:
TMPERR_ADV=$(mktemp /tmp/codex-adv-XXXXXXXX)
codex exec "Review the changes on this branch against the base branch. Run git diff origin/<base> to see the diff. Your job is to find ways this code will fail in production. Think like an attacker and a chaos engineer. Find edge cases, race conditions, security holes, resource leaks, failure modes, and silent data corruption paths. Be adversarial. Be thorough. No compliments — just the problems." -C "$(git rev-parse --show-toplevel)" -s read-only -c 'model_reasoning_effort="xhigh"' --enable web_search_cached 2>"$TMPERR_ADV"
Set the Bash tool's timeout parameter to 300000 (5 minutes). Do NOT use the timeout shell command — it doesn't exist on macOS. After the command completes, read stderr:
cat "$TMPERR_ADV"
Present the full output verbatim. This is informational — it never blocks shipping.
Error handling: All errors are non-blocking — adversarial review is a quality enhancement, not a prerequisite.
On any Codex error, fall back to the Claude adversarial subagent automatically.
Claude adversarial subagent (fallback when Codex unavailable or errored):
Dispatch via the Agent tool. The subagent has fresh context — no checklist bias from the structured review. This genuine independence catches things the primary reviewer is blind to.
Subagent prompt:
"Read the diff for this branch with git diff origin/<base>. Think like an attacker and a chaos engineer. Your job is to find ways this code will fail in production. Look for: edge cases, race conditions, security holes, resource leaks, failure modes, silent data corruption, logic errors that produce wrong results silently, error handling that swallows failures, and trust boundary violations. Be adversarial. Be thorough. No compliments — just the problems. For each finding, classify as FIXABLE (you know how to fix it) or INVESTIGATE (needs human judgment)."
Present findings under an ADVERSARIAL REVIEW (Claude subagent): header. FIXABLE findings flow into the same Fix-First pipeline as the structured review. INVESTIGATE findings are presented as informational.
If the subagent fails or times out: "Claude adversarial subagent unavailable. Continuing without adversarial review."
Persist the review result:
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"adversarial-review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","source":"SOURCE","tier":"medium","commit":"'"$(git rev-parse --short HEAD)"'"}'
Substitute STATUS: "clean" if no findings, "issues_found" if findings exist. SOURCE: "codex" if Codex ran, "claude" if subagent ran. If both failed, do NOT persist.
Cleanup: Run rm -f "$TMPERR_ADV" after processing (if Codex was used).
Claude's structured review already ran. Now run all three remaining passes for maximum coverage:
1. Codex structured review (if available):
TMPERR=$(mktemp /tmp/codex-review-XXXXXXXX)
codex review --base <base> -c 'model_reasoning_effort="xhigh"' --enable web_search_cached 2>"$TMPERR"
Set the Bash tool's timeout parameter to 300000 (5 minutes). Do NOT use the timeout shell command — it doesn't exist on macOS. Present output under CODEX SAYS (code review): header.
Check for [P1] markers: found → GATE: FAIL, not found → GATE: PASS.
If GATE is FAIL, use AskUserQuestion:
Codex found N critical issues in the diff.
A) Investigate and fix now (recommended)
B) Continue — review will still complete
If A: address the findings. Re-run codex review to verify.
Read stderr for errors (same error handling as medium tier).
After stderr: rm -f "$TMPERR"
2. Claude adversarial subagent: Dispatch a subagent with the adversarial prompt (same prompt as medium tier). This always runs regardless of Codex availability.
3. Codex adversarial challenge (if available): Run codex exec with the adversarial prompt (same as medium tier).
If Codex is not available for steps 1 and 3, note to the user: "Codex CLI not found — large-diff review ran Claude structured + Claude adversarial (2 of 4 passes). Install Codex for full 4-pass coverage: npm install -g @openai/codex"
Persist the review result AFTER all passes complete (not after each sub-step):
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"adversarial-review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"STATUS","source":"SOURCE","tier":"large","gate":"GATE","commit":"'"$(git rev-parse --short HEAD)"'"}'
Substitute: STATUS = "clean" if no findings across ALL passes, "issues_found" if any pass found issues. SOURCE = "both" if Codex ran, "claude" if only Claude subagent ran. GATE = the Codex structured review gate result ("pass"/"fail"), or "informational" if Codex was unavailable. If all passes failed, do NOT persist.
After all passes complete, synthesize findings across all sources:
ADVERSARIAL REVIEW SYNTHESIS (auto: TIER, N lines):
════════════════════════════════════════════════════════════
High confidence (found by multiple sources): [findings agreed on by >1 pass]
Unique to Claude structured review: [from earlier step]
Unique to Claude adversarial: [from subagent, if ran]
Unique to Codex: [from codex adversarial or code review, if ran]
Models used: Claude structured ✓ Claude adversarial ✓/✗ Codex ✓/✗
════════════════════════════════════════════════════════════
High-confidence findings (agreed on by multiple sources) should be prioritized for fixes.
After all review passes complete, persist the final /review outcome so /ship can
recognize that Eng Review was run on this branch.
Run:
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"review","timestamp":"TIMESTAMP","status":"STATUS","issues_found":N,"critical":N,"informational":N,"commit":"COMMIT"}'
Substitute:
- TIMESTAMP = ISO 8601 datetime
- STATUS = "clean" if there are no remaining unresolved findings after Fix-First handling and adversarial review, otherwise "issues_found"
- issues_found = total remaining unresolved findings
- critical = remaining unresolved critical findings
- informational = remaining unresolved informational findings
- COMMIT = output of git rev-parse --short HEAD
If the review exits early before a real review completes (for example, no diff against the base branch), do not write this entry.
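A filled-in example, assuming a clean review (the counts here are illustrative):
```bash
~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"review","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","status":"clean","issues_found":0,"critical":0,"informational":0,"commit":"'"$(git rev-parse --short HEAD)"'"}'
```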