
---
name: plan-eng-review
description: |
  Eng manager-mode plan review. Lock in the execution plan — architecture, data flow, diagrams, edge cases, test coverage, performance. Walks through issues interactively with opinionated recommendations. Use when asked to "review the architecture", "engineering review", or "lock in the plan". Proactively suggest when the user has a plan or design doc and is about to start coding — to catch architecture issues before implementation.
---

# Preamble (run first)

# Check for an update (global install first, then repo-local fallback)
_UPD=$(~/.codex/skills/gstack/bin/gstack-update-check 2>/dev/null || .agents/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
# Register this session; prune sessions idle for over two hours
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
# Read config and repo state
_CONTRIB=$(~/.codex/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_PROACTIVE=$(~/.codex/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
echo "PROACTIVE: $_PROACTIVE"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_TEL=$(~/.codex/skills/gstack/bin/gstack-config get telemetry 2>/dev/null || true)
_TEL_PROMPTED=$([ -f ~/.gstack/.telemetry-prompted ] && echo "yes" || echo "no")
_TEL_START=$(date +%s)
_SESSION_ID="$$-$(date +%s)"
echo "TELEMETRY: ${_TEL:-off}"
echo "TEL_PROMPTED: $_TEL_PROMPTED"
# Append a usage record to the local analytics log
mkdir -p ~/.gstack/analytics
echo '{"skill":"plan-eng-review","ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","repo":"'$(basename "$(git rev-parse --show-toplevel 2>/dev/null)" 2>/dev/null || echo "unknown")'"}' >> ~/.gstack/analytics/skill-usage.jsonl 2>/dev/null || true
# Finalize any telemetry event left pending by a previous session
for _PF in ~/.gstack/analytics/.pending-*; do [ -f "$_PF" ] && ~/.codex/skills/gstack/bin/gstack-telemetry-log --event-type skill_run --skill _pending_finalize --outcome unknown --session-id "$_SESSION_ID" 2>/dev/null || true; break; done

If PROACTIVE is "false", do not proactively suggest gstack skills — only invoke them when the user explicitly asks. The user opted out of proactive suggestions.

If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.codex/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.

If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle. Tell the user: "gstack follows the Boil the Lake principle — always do the complete thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean" Then offer to open the essay in their default browser:

open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen

Only run open if the user says yes. Always run touch to mark as seen. This only happens once.

If TEL_PROMPTED is no AND LAKE_INTRO is yes: After the lake intro is handled, ask the user about telemetry. Use AskUserQuestion:

Help gstack get better! Community mode shares usage data (which skills you use, how long they take, crash info) with a stable device ID so we can track trends and fix bugs faster. No code, file paths, or repo names are ever sent. Change anytime with gstack-config set telemetry off.

Options:

  • A) Help gstack get better! (recommended)
  • B) No thanks

If A: run ~/.codex/skills/gstack/bin/gstack-config set telemetry community

If B: ask a follow-up AskUserQuestion:

How about anonymous mode? We just learn that someone used gstack — no unique ID, no way to connect sessions. Just a counter that helps us know if anyone's out there.

Options:

  • A) Sure, anonymous is fine
  • B) No thanks, fully off

If B→A: run ~/.codex/skills/gstack/bin/gstack-config set telemetry anonymous
If B→B: run ~/.codex/skills/gstack/bin/gstack-config set telemetry off

Always run:

touch ~/.gstack/.telemetry-prompted

This only happens once. If TEL_PROMPTED is yes, skip this entirely.

# AskUserQuestion Format

ALWAYS follow this structure for every AskUserQuestion call:

  1. Re-ground: State the project, the current branch (use the _BRANCH value printed by the preamble — NOT any branch from conversation history or gitStatus), and the current plan/task. (1-2 sentences)
  2. Simplify: Explain the problem in plain English a smart 16-year-old could follow. No raw function names, no internal jargon, no implementation details. Use concrete examples and analogies. Say what it DOES, not what it's called.
  3. Recommend: RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
  4. Options: Lettered options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
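
For calibration, a hypothetical example of the structure (the project, branch, numbers, and options are invented for illustration):

  1. Re-ground: "We're in acme-app on feature/csv-export, planning the CSV export feature."
  2. Simplify: "If two people click Export at the same time, both exports write to the same temp file, so one person silently gets the other's data."
  3. Recommend: "RECOMMENDATION: Choose A because it eliminates the collision entirely rather than making it rarer. Completeness: A 9/10, B 6/10."
  4. Options: "A) Give each export its own uniquely named file (human: ~2 hours / CC: ~10 min) B) Add a lock so exports wait for each other (human: ~1 hour / CC: ~10 min)"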

Per-skill instructions may add additional formatting rules on top of this baseline.

# Completeness Principle — Boil the Lake

AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:

  • If Option A is the complete implementation (full parity, all edge cases, 100% coverage) and Option B is a shortcut that saves modest effort — always recommend A. The delta between 80 lines and 150 lines is meaningless with CC+gstack. "Good enough" is the wrong instinct when "complete" costs minutes more.
  • Lake vs. ocean: A "lake" is boilable — 100% test coverage for a module, full feature implementation, handling all edge cases, complete error paths. An "ocean" is not — rewriting an entire system from scratch, adding features to dependencies you don't control, multi-quarter platform migrations. Recommend boiling lakes. Flag oceans as out of scope.
  • When estimating effort, always show both scales: human team time and CC+gstack time. The compression ratio varies by task type — use this reference:
| Task type | Human team | CC+gstack | Compression |
|---|---|---|---|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |
  • This principle applies to test coverage, error handling, documentation, edge cases, and feature completeness. Don't skip the last 10% to "save time" — with AI, that 10% costs seconds.

Anti-patterns — DON'T do this:

  • BAD: "Choose B — it covers 90% of the value with less code." (If A is only 70 lines more, choose A.)
  • BAD: "We can skip edge case handling to save time." (Edge case handling costs minutes with CC.)
  • BAD: "Let's defer test coverage to a follow-up PR." (Tests are the cheapest lake to boil.)
  • BAD: Quoting only human-team effort: "This would take 2 weeks." (Say: "2 weeks human / ~1 hour CC.")

# Search Before Building

Before building infrastructure, unfamiliar patterns, or anything the runtime might have a built-in — search first. Read ~/.codex/skills/gstack/ETHOS.md for the full philosophy.

Three layers of knowledge:

  • Layer 1 (tried and true — in distribution). Don't reinvent the wheel. But the cost of checking is near-zero, and once in a while, questioning the tried-and-true is where brilliance occurs.
  • Layer 2 (new and popular — search for these). But scrutinize: humans are subject to mania. Search results are inputs to your thinking, not answers.
  • Layer 3 (first principles — prize these above all). Original observations derived from reasoning about the specific problem. The most valuable of all.

Eureka moment: When first-principles reasoning reveals conventional wisdom is wrong, name it: "EUREKA: Everyone does X because [assumption]. But [evidence] shows this is wrong. Y is better because [reasoning]."

Log eureka moments:

jq -n --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" --arg skill "SKILL_NAME" --arg branch "$(git branch --show-current 2>/dev/null)" --arg insight "ONE_LINE_SUMMARY" '{ts:$ts,skill:$skill,branch:$branch,insight:$insight}' >> ~/.gstack/analytics/eureka.jsonl 2>/dev/null || true

Replace SKILL_NAME and ONE_LINE_SUMMARY. Runs inline — don't stop the workflow.

WebSearch fallback: If WebSearch is unavailable, skip the search step and note: "Search unavailable — proceeding with in-distribution knowledge only."

# Contributor Mode

If _CONTRIB is true: you are in contributor mode. You're a gstack user who also helps make it better.

At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!

Calibration — this is the bar: For example, $B js "await fetch(...)" used to fail with SyntaxError: await is only valid in async functions because gstack didn't wrap expressions in async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.

NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.

To file: write ~/.gstack/contributor-logs/{slug}.md with all sections below (do not truncate — include every section through the Date/Version footer):

# {Title}

Hey gstack team — ran into this while using /{skill-name}:

**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}

## Steps to reproduce
1. {step}

## Raw output

{paste the actual error or unexpected output here}


## What would make this a 10
{one sentence: what gstack should have done differently}

**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}

Slug: lowercase, hyphens, max 60 chars (e.g. browse-js-no-await). Skip if file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell user: "Filed gstack field report: {title}"
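
A minimal bash sketch of the filing guards (the slug and title here are hypothetical):

SLUG="browse-js-no-await"                      # hypothetical slug
REPORT=~/.gstack/contributor-logs/"$SLUG".md
mkdir -p ~/.gstack/contributor-logs
if [ ! -f "$REPORT" ]; then
  # write the full report template above to "$REPORT", then:
  echo "Filed gstack field report: Browse JS: no await"
fi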

# Completion Status Protocol

When completing a skill workflow, report status using one of:

  • DONE — All steps completed successfully. Evidence provided for each claim.
  • DONE_WITH_CONCERNS — Completed, but with issues the user should know about. List each concern.
  • BLOCKED — Cannot proceed. State what is blocking and what was tried.
  • NEEDS_CONTEXT — Missing information required to continue. State exactly what you need.

# Escalation

It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."

Bad work is worse than no work. You will not be penalized for escalating.

  • If you have attempted a task 3 times without success, STOP and escalate.
  • If you are uncertain about a security-sensitive change, STOP and escalate.
  • If the scope of work exceeds what you can verify, STOP and escalate.

Escalation format:

STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]

# Telemetry (run last)

After the skill workflow completes (success, error, or abort), log the telemetry event. Determine the skill name from the name: field in this file's YAML frontmatter. Determine the outcome from the workflow result (success if completed normally, error if it failed, abort if the user interrupted).

PLAN MODE EXCEPTION — ALWAYS RUN: This command writes telemetry to ~/.gstack/analytics/ (user config directory, not project files). The skill preamble already writes to the same directory — this is the same pattern. Skipping this command loses session duration and outcome data.

Run this bash:

_TEL_END=$(date +%s)
_TEL_DUR=$(( _TEL_END - _TEL_START ))
rm -f ~/.gstack/analytics/.pending-"$_SESSION_ID" 2>/dev/null || true
~/.codex/skills/gstack/bin/gstack-telemetry-log \
  --skill "SKILL_NAME" --duration "$_TEL_DUR" --outcome "OUTCOME" \
  --used-browse "USED_BROWSE" --session-id "$_SESSION_ID" 2>/dev/null &

Replace SKILL_NAME with the actual skill name from frontmatter, OUTCOME with success/error/abort, and USED_BROWSE with true/false based on whether $B was used. If you cannot determine the outcome, use "unknown". This runs in the background and never blocks the user.
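
For example, a successful run of this skill that used $B might log (the outcome and browse flag are illustrative; the variables come from the block above):

~/.codex/skills/gstack/bin/gstack-telemetry-log \
  --skill "plan-eng-review" --duration "$_TEL_DUR" --outcome "success" \
  --used-browse "true" --session-id "$_SESSION_ID" 2>/dev/null &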

# Plan Review Mode

Review this plan thoroughly before making any code changes. For every issue or recommendation, explain the concrete tradeoffs, give me an opinionated recommendation, and ask for my input before assuming a direction.

# Priority hierarchy

If you are running low on context or the user asks you to compress: Step 0 > Test diagram > Opinionated recommendations > Everything else. Never skip Step 0 or the test diagram.

# My engineering preferences (use these to guide your recommendations):

  • DRY is important—flag repetition aggressively.
  • Well-tested code is non-negotiable; I'd rather have too many tests than too few.
  • I want code that's "engineered enough" — not under-engineered (fragile, hacky) and not over-engineered (premature abstraction, unnecessary complexity).
  • I err on the side of handling more edge cases, not fewer; thoughtfulness > speed.
  • Bias toward explicit over clever.
  • Minimal diff: achieve the goal with the fewest new abstractions and files touched.

# Cognitive Patterns — How Great Eng Managers Think

These are not additional checklist items. They are the instincts that experienced engineering leaders develop over years — the pattern recognition that separates "reviewed the code" from "caught the landmine." Apply them throughout your review.

  1. State diagnosis — Teams exist in four states: falling behind, treading water, repaying debt, innovating. Each demands a different intervention (Larson, An Elegant Puzzle).
  2. Blast radius instinct — Every decision evaluated through "what's the worst case and how many systems/people does it affect?"
  3. Boring by default — "Every company gets about three innovation tokens." Everything else should be proven technology (McKinley, Choose Boring Technology).
  4. Incremental over revolutionary — Strangler fig, not big bang. Canary, not global rollout. Refactor, not rewrite (Fowler).
  5. Systems over heroes — Design for tired humans at 3am, not your best engineer on their best day.
  6. Reversibility preference — Feature flags, A/B tests, incremental rollouts. Make the cost of being wrong low.
  7. Failure is information — Blameless postmortems, error budgets, chaos engineering. Incidents are learning opportunities, not blame events (Allspaw, Google SRE).
  8. Org structure IS architecture — Conway's Law in practice. Design both intentionally (Skelton/Pais, Team Topologies).
  9. DX is product quality — Slow CI, bad local dev, painful deploys → worse software, higher attrition. Developer experience is a leading indicator.
  10. Essential vs accidental complexity — Before adding anything: "Is this solving a real problem or one we created?" (Brooks, No Silver Bullet).
  11. Two-week smell test — If a competent engineer can't ship a small feature in two weeks, you have an onboarding problem disguised as architecture.
  12. Glue work awareness — Recognize invisible coordination work. Value it, but don't let people get stuck doing only glue (Reilly, The Staff Engineer's Path).
  13. Make the change easy, then make the easy change — Refactor first, implement second. Never structural + behavioral changes simultaneously (Beck).
  14. Own your code in production — No wall between dev and ops. "The DevOps movement is ending because there are only engineers who write code and own it in production" (Majors).
  15. Error budgets over uptime targets — SLO of 99.9% = 0.1% downtime budget to spend on shipping. Reliability is resource allocation (Google SRE).

When evaluating architecture, think "boring by default." When reviewing tests, think "systems over heroes." When assessing complexity, ask Brooks's question. When a plan introduces new infrastructure, check whether it's spending an innovation token wisely.

# Documentation and diagrams:

  • I value ASCII art diagrams highly — for data flow, state machines, dependency graphs, processing pipelines, and decision trees. Use them liberally in plans and design docs.
  • For particularly complex designs or behaviors, embed ASCII diagrams directly in code comments in the appropriate places: Models (data relationships, state transitions), Controllers (request flow), Concerns (mixin behavior), Services (processing pipelines), and Tests (what's being set up and why) when the test structure is non-obvious. See the example after this list.
  • Diagram maintenance is part of the change. When modifying code that has ASCII diagrams in comments nearby, review whether those diagrams are still accurate. Update them as part of the same commit. Stale diagrams are worse than no diagrams — they actively mislead. Flag any stale diagrams you encounter during review even if they're outside the immediate scope of the change.
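
For example, an inline diagram in a model comment might look like this (the lifecycle shown is invented for illustration):

# Export lifecycle (illustrative):
#
#   queued --> running --> done
#                 |
#                 +--> failed --> retried (max 3)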

# BEFORE YOU START:

## Design Doc Check

# Prefer the newest design doc for this branch; fall back to the newest for the repo
SLUG=$(~/.codex/skills/gstack/browse/bin/remote-slug 2>/dev/null || basename "$(git rev-parse --show-toplevel 2>/dev/null || pwd)")
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null | tr '/' '-' || echo 'no-branch')
DESIGN=$(ls -t ~/.gstack/projects/$SLUG/*-$BRANCH-design-*.md 2>/dev/null | head -1)
[ -z "$DESIGN" ] && DESIGN=$(ls -t ~/.gstack/projects/$SLUG/*-design-*.md 2>/dev/null | head -1)
[ -n "$DESIGN" ] && echo "Design doc found: $DESIGN" || echo "No design doc found"

If a design doc exists, read it. Use it as the source of truth for the problem statement, constraints, and chosen approach. If it has a Supersedes: field, note that this is a revised design — check the prior version for context on what changed and why.

## Prerequisite Skill Offer

When the design doc check above prints "No design doc found," offer the prerequisite skill before proceeding.

Say to the user via AskUserQuestion:

"No design doc found for this branch. /office-hours produces a structured problem statement, premise challenge, and explored alternatives — it gives this review much sharper input to work with. Takes about 10 minutes. The design doc is per-feature, not per-product — it captures the thinking behind this specific change."

Options:

  • A) Run /office-hours first (in another window, then come back)
  • B) Skip — proceed with standard review

If they skip: "No worries — standard review. If you ever want sharper input, try /office-hours first next time." Then proceed normally. Do not re-offer later in the session.

## Step 0: Scope Challenge

Before reviewing anything, answer these questions:

  1. What existing code already partially or fully solves each sub-problem? Can we capture outputs from existing flows rather than building parallel ones?

  2. What is the minimum set of changes that achieves the stated goal? Flag any work that could be deferred without blocking the core objective. Be ruthless about scope creep.

  3. Complexity check: If the plan touches more than 8 files or introduces more than 2 new classes/services, treat that as a smell and challenge whether the same goal can be achieved with fewer moving parts.

  4. Search check: For each architectural pattern, infrastructure component, or concurrency approach the plan introduces:

    • Does the runtime/framework have a built-in? Search: "{framework} {pattern} built-in"
    • Is the chosen approach current best practice? Search: "{pattern} best practice {current year}"
    • Are there known footguns? Search: "{framework} {pattern} pitfalls"

    If WebSearch is unavailable, skip this check and note: "Search unavailable — proceeding with in-distribution knowledge only."

    If the plan rolls a custom solution where a built-in exists, flag it as a scope reduction opportunity. Annotate recommendations with [Layer 1], [Layer 2], [Layer 3], or [EUREKA] (see preamble's Search Before Building section). If you find a eureka moment — a reason the standard approach is wrong for this case — present it as an architectural insight.

  5. TODOS cross-reference: Read TODOS.md if it exists. Are any deferred items blocking this plan? Can any deferred items be bundled into this PR without expanding scope? Does this plan create new work that should be captured as a TODO?

  6. Completeness check: Is the plan doing the complete version or a shortcut? With AI-assisted coding, the cost of completeness (100% test coverage, full edge case handling, complete error paths) is 10-100x cheaper than with a human team. If the plan proposes a shortcut that saves human-hours but only saves minutes with CC+gstack, recommend the complete version. Boil the lake.

If the complexity check triggers (more than 8 files or more than 2 new classes/services), proactively recommend scope reduction via AskUserQuestion — explain what's overbuilt, propose a minimal version that achieves the core goal, and ask whether to reduce or proceed as-is. If the complexity check does not trigger, present your Step 0 findings and proceed directly to Section 1.

## Step 0.5: Codex plan review (optional)

Check if the Codex CLI is available: which codex 2>/dev/null

If available, after presenting Step 0 findings, use AskUserQuestion:

Want an independent Codex (OpenAI) review of this plan before the detailed review?
A) Yes — let Codex critique the plan independently
B) No — proceed with the Claude review only

If the user chooses A: tell Codex to read the plan file itself (avoids ARG_MAX limits for large plans):

codex exec "You are a brutally honest technical reviewer. Read the plan file at <plan-file-path> and review it for: logical gaps and unstated assumptions, missing error handling or edge cases, overcomplexity (is there a simpler approach?), feasibility risks (what could go wrong?), and missing dependencies or sequencing issues. Be direct. Be terse. No compliments. Just the problems." -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached

Replace <plan-file-path> with the actual path to the plan file detected earlier. Codex has filesystem access in read-only mode and will read the file itself.

Present the full output under a CODEX SAYS (plan review): header. Note any concerns that should inform the subsequent engineering review sections.

If Codex is not available, skip silently.

Always work through the full interactive review: one section at a time (Architecture → Code Quality → Tests → Performance) with at most 8 top issues per section.

Critical: Once the user accepts or rejects a scope reduction recommendation, commit fully. Do not re-argue for smaller scope during later review sections. Do not silently reduce scope or skip planned components.

# Review Sections (after scope is agreed)

## 1. Architecture review

Evaluate:

  • Overall system design and component boundaries.
  • Dependency graph and coupling concerns.
  • Data flow patterns and potential bottlenecks.
  • Scaling characteristics and single points of failure.
  • Security architecture (auth, data access, API boundaries).
  • Whether key flows deserve ASCII diagrams in the plan or in code comments.
  • For each new codepath or integration point, describe one realistic production failure scenario and whether the plan accounts for it.

STOP. For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

## 2. Code quality review

Evaluate:

  • Code organization and module structure.
  • DRY violations—be aggressive here.
  • Error handling patterns and missing edge cases (call these out explicitly).
  • Technical debt hotspots.
  • Areas that are over-engineered or under-engineered relative to my preferences.
  • Existing ASCII diagrams in touched files — are they still accurate after this change?

STOP. For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

## 3. Test review

Make a diagram of all new UX, new data flow, new codepaths, and new branching (if statements or divergent outcomes). For each, note what is new about the features discussed in this branch and plan. Then, for each new item in the diagram, make sure there is a corresponding test.
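
A sketch of what the diagram can look like (the feature, codepaths, and test names are hypothetical):

New in this branch: CSV export

  [Export button] --> POST /exports --> ExportService#run
                                          |-- happy path ............ test: export_service_test (NEW)
                                          |-- empty dataset ......... test: MISSING (add one)
                                          +-- third-party timeout ... test: export_timeout_test (NEW)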

For LLM/prompt changes: check the "Prompt/LLM changes" file patterns listed in CLAUDE.md. If this plan touches ANY of those patterns, state which eval suites must be run, which cases should be added, and what baselines to compare against. Then use AskUserQuestion to confirm the eval scope with the user.

STOP. For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

## Test Plan Artifact

After producing the test diagram, write a test plan artifact to the project directory so /qa and /qa-only can consume it as primary test input (replacing the lossy git-diff heuristic):

source <(~/.codex/skills/gstack/bin/gstack-slug 2>/dev/null) && mkdir -p ~/.gstack/projects/$SLUG
USER=$(whoami)
DATETIME=$(date +%Y%m%d-%H%M%S)

Write to ~/.gstack/projects/{slug}/{user}-{branch}-test-plan-{datetime}.md:

# Test Plan
Generated by /plan-eng-review on {date}
Branch: {branch}
Repo: {owner/repo}

## Affected Pages/Routes
- {URL path} — {what to test and why}

## Key Interactions to Verify
- {interaction description} on {page}

## Edge Cases
- {edge case} on {page}

## Critical Paths
- {end-to-end flow that must work}

This file is consumed by /qa and /qa-only as primary test input. Include only the information that helps a QA tester know what to test and where — not implementation details.

## 4. Performance review

Evaluate:

  • N+1 queries and database access patterns.
  • Memory-usage concerns.
  • Caching opportunities.
  • Slow or high-complexity code paths.

STOP. For each issue found in this section, call AskUserQuestion individually. One issue per call. Present options, state your recommendation, explain WHY. Do NOT batch multiple issues into one AskUserQuestion. Only proceed to the next section after ALL issues in this section are resolved.

# CRITICAL RULE — How to ask questions

Follow the AskUserQuestion format from the Preamble above. Additional rules for plan reviews:

  • One issue = one AskUserQuestion call. Never combine multiple issues into one question.
  • Describe the problem concretely, with file and line references.
  • Present 2-3 options, including "do nothing" where that's reasonable.
  • For each option, specify in one line: effort (human: ~X / CC: ~Y), risk, and maintenance burden. If the complete option is only marginally more effort than the shortcut with CC, recommend the complete option. See the example after this list.
  • Map the reasoning to my engineering preferences above. One sentence connecting your recommendation to a specific preference (DRY, explicit > clever, minimal diff, etc.).
  • Label with issue NUMBER + option LETTER (e.g., "3A", "3B").
  • Escape hatch: If a section has no issues, say so and move on. If an issue has an obvious fix with no real alternatives, state what you'll do and move on — don't waste a question on it. Only use AskUserQuestion when there is a genuine decision with meaningful tradeoffs.
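
For example, a well-formed option line might read (the issue and numbers are hypothetical):

3A) Extract the duplicated validation into a shared concern (human: ~3 hours / CC: ~15 min; low risk; lowers maintenance burden). RECOMMENDED: directly serves the DRY preference.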

# Required outputs

## "NOT in scope" section

Every plan review MUST produce a "NOT in scope" section listing work that was considered and explicitly deferred, with a one-line rationale for each item.

## "What already exists" section

List existing code/flows that already partially solve sub-problems in this plan, and whether the plan reuses them or unnecessarily rebuilds them.

## TODOS.md updates

After all review sections are complete, present each potential TODO as its own individual AskUserQuestion. Never batch TODOs — one per question. Never silently skip this step. Follow the format in .agents/skills/gstack/review/TODOS-format.md.

For each TODO, describe:

  • What: One-line description of the work.
  • Why: The concrete problem it solves or value it unlocks.
  • Pros: What you gain by doing this work.
  • Cons: Cost, complexity, or risks of doing it.
  • Context: Enough detail that someone picking this up in 3 months understands the motivation, the current state, and where to start.
  • Depends on / blocked by: Any prerequisites or ordering constraints.

Then present options:

  • A) Add to TODOS.md
  • B) Skip — not valuable enough
  • C) Build it now in this PR instead of deferring

Do NOT just append vague bullet points. A TODO without context is worse than no TODO — it creates false confidence that the idea was captured while actually losing the reasoning.

## Diagrams

The plan itself should use ASCII diagrams for any non-trivial data flow, state machine, or processing pipeline. Additionally, identify which files in the implementation should get inline ASCII diagram comments — particularly Models with complex state transitions, Services with multi-step pipelines, and Concerns with non-obvious mixin behavior.

## Failure modes

For each new codepath identified in the test review diagram, list one realistic way it could fail in production (timeout, nil reference, race condition, stale data, etc.) and whether:

  1. A test covers that failure
  2. Error handling exists for it
  3. The user would see a clear error or a silent failure

If any failure mode has no test AND no error handling AND would be silent, flag it as a critical gap.
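
For example, one entry might read (the codepath and failure are hypothetical):

Codepath: POST /exports -> ExportService#run
Failure: third-party API timeout
1. Test: none   2. Error handling: none   3. Visibility: silent
Verdict: CRITICAL GAP (no test, no handling, silent failure)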

## Completion summary

At the end of the review, fill in and display this summary so the user can see all findings at a glance:

  • Step 0: Scope Challenge — ___ (scope accepted as-is / scope reduced per recommendation)
  • Architecture Review: ___ issues found
  • Code Quality Review: ___ issues found
  • Test Review: diagram produced, ___ gaps identified
  • Performance Review: ___ issues found
  • NOT in scope: written
  • What already exists: written
  • TODOS.md updates: ___ items proposed to user
  • Failure modes: ___ critical gaps flagged
  • Lake Score: X/Y recommendations chose complete option

# Retrospective learning

Check the git log for this branch. If there are prior commits suggesting a previous review cycle (e.g., review-driven refactors, reverted changes), note what was changed and whether the current plan touches the same areas. Be more aggressive reviewing areas that were previously problematic.

# Formatting rules

  • NUMBER issues (1, 2, 3...) and LETTERS for options (A, B, C...).
  • Label with NUMBER + LETTER (e.g., "3A", "3B").
  • One sentence max per option; the user should be able to pick in under 5 seconds.
  • After each review section, pause and ask for feedback before moving on.

# Review Log

After producing the Completion Summary above, persist the review result.

PLAN MODE EXCEPTION — ALWAYS RUN: This command writes review metadata to ~/.gstack/ (user config directory, not project files). The skill preamble already writes to ~/.gstack/sessions/ and ~/.gstack/analytics/ — this is the same pattern. The review dashboard depends on this data. Skipping this command breaks the review readiness dashboard in /ship.

~/.codex/skills/gstack/bin/gstack-review-log '{"skill":"plan-eng-review","timestamp":"TIMESTAMP","status":"STATUS","unresolved":N,"critical_gaps":N,"mode":"MODE","commit":"COMMIT"}'

Substitute values from the Completion Summary (a filled example follows this list):

  • TIMESTAMP: current ISO 8601 datetime
  • STATUS: "clean" if 0 unresolved decisions AND 0 critical gaps; otherwise "issues_open"
  • unresolved: the number of decisions listed under "Unresolved decisions" at the end of the review
  • critical_gaps: number from "Failure modes: ___ critical gaps flagged"
  • MODE: FULL_REVIEW / SCOPE_REDUCED
  • COMMIT: output of git rev-parse --short HEAD
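
Filled in, a clean review of this skill might log (the timestamp and commit are illustrative):

~/.codex/skills/gstack/bin/gstack-review-log '{"skill":"plan-eng-review","timestamp":"2026-03-16T15:04:00Z","status":"clean","unresolved":0,"critical_gaps":0,"mode":"FULL_REVIEW","commit":"a1b2c3d"}'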

# Review Readiness Dashboard

After completing the review, read the review log and config to display the dashboard.

~/.codex/skills/gstack/bin/gstack-review-read

Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review, design-review-lite, codex-review). Ignore entries with timestamps older than 7 days. For Design Review, show whichever is more recent between plan-design-review (full visual audit) and design-review-lite (code-level check). Append "(FULL)" or "(LITE)" to the status to distinguish. Display:

+=====================================================================+
|                     REVIEW READINESS DASHBOARD                      |
+=====================================================================+
| Review          | Runs | Last Run            | Status    | Required |
|-----------------|------|---------------------|-----------|----------|
| Eng Review      |  1   | 2026-03-16 15:00    | CLEAR     | YES      |
| CEO Review      |  0   | —                   | —         | no       |
| Design Review   |  0   | —                   | —         | no       |
| Codex Review    |  0   | —                   | —         | no       |
+---------------------------------------------------------------------+
| VERDICT: CLEARED — Eng Review passed                                 |
+=====================================================================+

Review tiers:

  • Eng Review (required by default): The only review that gates shipping. Covers architecture, code quality, tests, performance. Can be disabled globally with `gstack-config set skip_eng_review true` (the "don't bother me" setting).
  • CEO Review (optional): Use your judgment. Recommend it for big product/business changes, new user-facing features, or scope decisions. Skip for bug fixes, refactors, infra, and cleanup.
  • Design Review (optional): Use your judgment. Recommend it for UI/UX changes. Skip for backend-only, infra, or prompt-only changes.
  • Codex Review (enabled by default when Codex CLI is installed): Independent review + adversarial challenge from OpenAI Codex CLI. Shows pass/fail gate. Runs automatically when enabled — configure with `gstack-config set codex_reviews enabled|disabled`.

Verdict logic:

  • CLEARED: Eng Review has >= 1 entry within 7 days with status "clean" (or `skip_eng_review` is `true`)
  • NOT CLEARED: Eng Review missing, stale (>7 days), or has open issues
  • CEO, Design, and Codex reviews are shown for context but never block shipping
  • If `skip_eng_review` config is `true`, Eng Review shows "SKIPPED (global)" and verdict is CLEARED

Staleness detection: After displaying the dashboard, check if any existing reviews may be stale:

  • Parse the `---HEAD---` section from the bash output to get the current HEAD commit hash
  • For each review entry that has a `commit` field: compare it against the current HEAD. If different, count elapsed commits: `git rev-list --count STORED_COMMIT..HEAD`. Display: "Note: {skill} review from {date} may be stale — {N} commits since review" (see the sketch after this list)
  • For entries without a `commit` field (legacy entries): display "Note: {skill} review from {date} has no commit tracking — consider re-running for accurate staleness detection"
  • If all reviews match the current HEAD, do not display any staleness notes
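
A minimal bash sketch of that per-entry comparison (the stored commit and date are hypothetical values parsed from a review entry):

HEAD=$(git rev-parse --short HEAD)
STORED="a1b2c3d"   # hypothetical commit field from a review entry
if [ "$STORED" != "$HEAD" ]; then
  N=$(git rev-list --count "$STORED"..HEAD 2>/dev/null || echo "?")
  echo "Note: plan-eng-review review from 2026-03-16 may be stale — $N commits since review"
fi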

# Next Steps — Review Chaining

After displaying the Review Readiness Dashboard, check if additional reviews would be valuable. Read the dashboard output to see which reviews have already been run and whether they are stale.

Suggest /plan-design-review if UI changes exist and no design review has been run — detect from the test diagram, architecture review, or any section that touched frontend components, CSS, views, or user-facing interaction flows. If an existing design review's commit hash shows it predates significant changes found in this eng review, note that it may be stale.

Mention /plan-ceo-review if this is a significant product change and no CEO review exists — this is a soft suggestion, not a push. CEO review is optional. Only mention it if the plan introduces new user-facing features, changes product direction, or expands scope substantially.

Note staleness of existing CEO or design reviews if this eng review found assumptions that contradict them, or if the commit hash shows significant drift.

If no additional reviews are needed (or skip_eng_review is true in the dashboard config, meaning this eng review was optional): state "All relevant reviews complete. Run /ship when ready."

Use AskUserQuestion with only the applicable options:

  • A) Run /plan-design-review (only if UI scope detected and no design review exists)
  • B) Run /plan-ceo-review (only if significant product change and no CEO review exists)
  • C) Ready to implement — run /ship when done

# Unresolved decisions

If the user does not respond to an AskUserQuestion or interrupts to move on, note which decisions were left unresolved. At the end of the review, list these as "Unresolved decisions that may bite you later" — never silently default to an option.