From a4a181ca92028359403d9f92f6f527db48244ee6 Mon Sep 17 00:00:00 2001
From: Garry Tan
Date: Mon, 30 Mar 2026 22:07:50 -0600
Subject: [PATCH] =?UTF-8?q?feat:=20Review=20Army=20=E2=80=94=20parallel=20?=
 =?UTF-8?q?specialist=20reviewers=20for=20/review=20(v0.14.4.0)=20(#692)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* feat: extend gstack-diff-scope with SCOPE_MIGRATIONS, SCOPE_API, SCOPE_AUTH

Three new scope signals for Review Army specialist activation:
- SCOPE_MIGRATIONS: db/migrate/, prisma/migrations/, alembic/, *.sql
- SCOPE_API: *controller*, *route*, *endpoint*, *.graphql, openapi.*
- SCOPE_AUTH: *auth*, *session*, *jwt*, *oauth*, *permission*, *role*

* feat: add 7 specialist checklist files for Review Army

- testing.md (always-on): coverage gaps, flaky patterns, security enforcement
- maintainability.md (always-on): dead code, DRY, stale comments
- security.md (conditional): OWASP deep analysis, auth bypass, injection
- performance.md (conditional): N+1 queries, bundle impact, complexity
- data-migration.md (conditional): reversibility, lock duration, backfill
- api-contract.md (conditional): breaking changes, versioning, error format
- red-team.md (conditional): adversarial analysis, cross-cutting concerns

All use a standard header with JSON output schema and NO FINDINGS fallback.

* feat: Review Army resolver — parallel specialist dispatch + merge

New resolver in review-army.ts generates template prose for:
- Stack detection and specialist selection
- Parallel Agent tool dispatch with learning-informed prompts
- JSON finding collection, fingerprint dedup, consensus highlighting
- PR quality score computation
- Red Team conditional dispatch

Registered as REVIEW_ARMY in resolvers/index.ts.

* refactor: restructure /review template for Review Army

- Replace Steps 4-4.75 with CRITICAL pass + {{REVIEW_ARMY}}
- Remove {{DESIGN_REVIEW_LITE}} and {{TEST_COVERAGE_AUDIT_REVIEW}}
  (subsumed into Design and Testing specialists respectively)
- Extract specialist-covered categories from checklist.md
- Keep CRITICAL + uncovered INFORMATIONAL in main agent pass

* test: Review Army — 14 diff-scope tests + 7 E2E tests

- test/diff-scope.test.ts: 14 tests for all 9 scope signals
- test/skill-e2e-review-army.test.ts: 7 E2E tests
  Gate: migration safety, N+1 detection, delivery audit, quality score, JSON findings
  Periodic: red team, consensus
- Updated gen-skill-docs tests for new review structure
- Added touchfile entries and tier classifications

* docs: update SELF_LEARNING_V0.md with Release 2 status + Release 2.5

Mark Release 2 (Review Army) as in-progress. Add Release 2.5 for deferred
expansions (E1 adaptive gating, E3 test stubs, E5 cross-review dedup, E7
specialist tracking).
* chore: bump version and changelog (v0.14.4.0)

Co-Authored-By: Claude Opus 4.6

---------

Co-authored-by: Claude Opus 4.6
---
 CHANGELOG.md                            |  20 +
 VERSION                                 |   2 +-
 bin/gstack-diff-scope                   |  19 +
 docs/designs/SELF_LEARNING_V0.md        |  34 +-
 package.json                            |   2 +-
 review/SKILL.md                         | 380 ++++
 review/SKILL.md.tmpl                    |  22 +-
 review/checklist.md                     |  64 +--
 review/specialists/api-contract.md      |  48 ++
 review/specialists/data-migration.md    |  47 ++
 review/specialists/maintainability.md   |  45 ++
 review/specialists/performance.md       |  51 +++
 review/specialists/red-team.md          |  44 ++
 review/specialists/security.md          |  60 +++
 review/specialists/testing.md           |  45 ++
 scripts/resolvers/index.ts              |   2 +
 scripts/resolvers/review-army.ts        | 190 ++++
 scripts/resolvers/review.ts             |  63 ++-
 test/diff-scope.test.ts                 | 165 +++++
 test/fixtures/review-army-migration.sql |   5 +
 test/fixtures/review-army-n-plus-one.rb |  12 +
 test/gen-skill-docs.test.ts             |  76 +++-
 test/helpers/touchfiles.ts              |  18 +
 test/skill-e2e-review-army.test.ts      | 562 ++++++++++++++++++++++++
 24 files changed, 1667 insertions(+), 309 deletions(-)
 create mode 100644 review/specialists/api-contract.md
 create mode 100644 review/specialists/data-migration.md
 create mode 100644 review/specialists/maintainability.md
 create mode 100644 review/specialists/performance.md
 create mode 100644 review/specialists/red-team.md
 create mode 100644 review/specialists/security.md
 create mode 100644 review/specialists/testing.md
 create mode 100644 scripts/resolvers/review-army.ts
 create mode 100644 test/diff-scope.test.ts
 create mode 100644 test/fixtures/review-army-migration.sql
 create mode 100644 test/fixtures/review-army-n-plus-one.rb
 create mode 100644 test/skill-e2e-review-army.test.ts

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 356ae97edb231a0babe6f041cc9eb6e8b16a65b1..7c75826f505afe901ab902462415ed757f5436d7 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,25 @@
 # Changelog
 
+## [0.14.4.0] - 2026-03-31 — Review Army: Parallel Specialist Reviewers
+
+Every `/review` now dispatches specialist subagents in parallel. Instead of one agent applying one giant checklist, you get focused reviewers for testing gaps, maintainability, security, performance, data migrations, API contracts, and adversarial red-teaming. Each specialist reads the diff independently with fresh context, outputs structured JSON findings, and the main agent merges, deduplicates, and boosts confidence when multiple specialists flag the same issue. Small diffs (<50 lines) skip specialists entirely for speed. Large diffs (200+ lines) activate the Red Team for adversarial analysis on top.
+
+### Added
+
+- **7 specialist reviewers** running in parallel via Agent tool subagents. Always-on: Testing + Maintainability. Conditional: Security (auth scope), Performance (backend/frontend), Data Migration (migration files), API Contract (controllers/routes), Red Team (large diffs or critical findings).
+- **JSON finding schema.** Specialists output structured JSON objects with severity, confidence, path, line, category, fix, and fingerprint fields. Reliable parsing, no more pipe-delimited text.
+- **Fingerprint-based dedup.** When two specialists flag the same file:line:category, the finding gets boosted confidence and a "MULTI-SPECIALIST CONFIRMED" marker.
+- **PR Quality Score.** Every review computes a 0-10 quality score: `max(0, 10 - (critical * 2 + informational * 0.5))`. Logged to review history for trending via `/retro`.
+- **3 new diff-scope signals.** `gstack-diff-scope` now detects SCOPE_MIGRATIONS, SCOPE_API, and SCOPE_AUTH to activate the right specialists.
+- **Learning-informed specialist prompts.** Each specialist gets past learnings for its domain injected into the prompt, so reviews get smarter over time.
+- **14 new diff-scope tests** covering all 9 scope signals including the 3 new ones.
+- **7 new E2E tests** (5 gate, 2 periodic) covering migration safety, N+1 detection, delivery audit, quality score, JSON schema compliance, red team activation, and multi-specialist consensus.
+
+### Changed
+
+- **Review checklist refactored.** Categories now covered by specialists (test gaps, dead code, magic numbers, performance, crypto) removed from the main checklist. The main agent keeps the CRITICAL pass plus the uncovered INFORMATIONAL categories.
+- **Delivery Integrity enhanced.** The existing plan completion audit now investigates WHY items are missing (not just that they're missing) and logs plan-file discrepancies as learnings. Commit-message inference is informational only, never persisted.
+
 ## [0.14.3.0] - 2026-03-31 — Always-On Adversarial Review + Scope Drift + Plan Mode Design Tools
 
 Every code review now runs adversarial analysis from both Claude and Codex, regardless of diff size. A 5-line auth change gets the same cross-model scrutiny as a 500-line feature. The old "skip adversarial for small diffs" heuristic is gone... diff size was never a good proxy for risk.

diff --git a/VERSION b/VERSION
index 574c7c4a637364d8de234b04f893841fcfbe8afe..af309671808afa22f9e03f59f534c7e721a8b993 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-0.14.3.0
+0.14.4.0

diff --git a/bin/gstack-diff-scope b/bin/gstack-diff-scope
index f656732d2074c1f5660726bb29e5559d692ff120..2cff90c70f60baee4c83af5938976b08a91b55ae 100755
--- a/bin/gstack-diff-scope
+++ b/bin/gstack-diff-scope
@@ -16,6 +16,9 @@ if [ -z "$FILES" ]; then
   echo "SCOPE_TESTS=false"
   echo "SCOPE_DOCS=false"
   echo "SCOPE_CONFIG=false"
+  echo "SCOPE_MIGRATIONS=false"
+  echo "SCOPE_API=false"
+  echo "SCOPE_AUTH=false"
   exit 0
 fi
 
@@ -25,6 +28,9 @@ PROMPTS=false
 TESTS=false
 DOCS=false
 CONFIG=false
+MIGRATIONS=false
+API=false
+AUTH=false
 
 while IFS= read -r f; do
   case "$f" in
@@ -57,6 +63,16 @@ while IFS= read -r f; do
     .github/*) CONFIG=true ;;
     requirements.txt|pyproject.toml|go.mod|Cargo.toml|composer.json) CONFIG=true ;;
 
+    # Migrations: database migration files
+    db/migrate/*|*/migrations/*|alembic/*|prisma/migrations/*) MIGRATIONS=true ;;
+
+    # API: routes, controllers, endpoints, GraphQL/OpenAPI schemas
+    *controller*|*route*|*endpoint*|*/api/*) API=true ;;
+    *.graphql|*.gql|openapi.*|swagger.*) API=true ;;
+
+    # Auth: authentication, authorization, sessions, permissions
+    *auth*|*session*|*jwt*|*oauth*|*permission*|*role*) AUTH=true ;;
+
     # Backend: everything else that's code (excluding views/components already matched)
     *.rb|*.py|*.go|*.rs|*.java|*.php|*.ex|*.exs) BACKEND=true ;;
     *.ts|*.js) BACKEND=true ;;  # Non-component TS/JS is backend
@@ -69,3 +85,6 @@ echo "SCOPE_PROMPTS=$PROMPTS"
 echo "SCOPE_TESTS=$TESTS"
 echo "SCOPE_DOCS=$DOCS"
 echo "SCOPE_CONFIG=$CONFIG"
+echo "SCOPE_MIGRATIONS=$MIGRATIONS"
+echo "SCOPE_API=$API"
+echo "SCOPE_AUTH=$AUTH"

diff --git a/docs/designs/SELF_LEARNING_V0.md b/docs/designs/SELF_LEARNING_V0.md
index 60171849d229c912f82339751aa8b1490534a129..1d99e0122fd4567ea9d9567cada813da2c67d706 100644
--- a/docs/designs/SELF_LEARNING_V0.md
+++ b/docs/designs/SELF_LEARNING_V0.md
@@ -91,11 +91,35 @@ gstack-review-log pattern.
 
 **Headline:** 10 specialist reviewers on every PR.
What ships: -- Parallel review agents: always-on (correctness, testing, maintainability) + - conditional (security, performance, API, data-migrations, reliability) + - stack-specific (Rails, TypeScript, Python, frontend-races) -- Red team reviewer activated for large diffs and high-risk domains -- Structured findings with confidence scores + merge/dedup across agents +- 7 parallel specialist subagents: always-on (testing, maintainability) + + conditional (security, performance, data-migration, API contract, design) + + red team (large diffs / critical findings) +- JSON-structured findings with confidence scores + fingerprint dedup across agents +- PR quality score (0-10) logged per review + /retro trending (E2) +- Learning-informed specialist prompts — past pitfalls injected per domain (E4) +- Multi-specialist consensus highlighting — confirmed findings get boosted (E6) +- Enhanced Delivery Integrity via PLAN_COMPLETION_AUDIT — investigation depth, + commit message fallback, plan-file learnings logging +- Checklist refactored: CRITICAL categories stay in main pass, specialist + categories extracted to focused checklists in review/specialists/ + +### Release 2.5: "Review Army Expansions" (v0.15.x) + +**Headline:** Ship after R2 proves stable. Check in on how the core loop is performing. + +Pre-check: review R2 quality metrics (PR quality scores, specialist hit rates, +false positive rates, E2E test stability). If core loop has issues, fix those first. + +What ships: +- E1: Adaptive specialist gating — auto-skip specialists with 0-finding track record. + Store per-project hit rates via gstack-learnings-log. User can force with --security etc. +- E3: Test stub generation — each specialist outputs TEST_STUB alongside findings. + Framework detected from project (Jest/Vitest/RSpec/pytest/Go test). + Flows into Fix-First: AUTO-FIX applies fix + creates test file. +- E5: Cross-review finding dedup — read gstack-review-read for prior review entries. + Suppress findings matching a prior user-skipped finding. +- E7: Specialist performance tracking — log per-specialist metrics via gstack-review-log. + /retro integration: "Top finding specialist: Performance (7 findings)." ### Release 3: "Smart Ceremony" (v0.16) diff --git a/package.json b/package.json index d55bf63023de282b0aac3e38835c342b9facd475..84fa575db5e74e3fe0d1c9806454ee7a89593ddd 100644 --- a/package.json +++ b/package.json @@ -1,6 +1,6 @@ { "name": "gstack", - "version": "0.14.2.0", + "version": "0.14.4.0", "description": "Garry's Stack — Claude Code skills + fast headless browser. One repo, one install, entire AI engineering workflow.", "license": "MIT", "type": "module", diff --git a/review/SKILL.md b/review/SKILL.md index 3216c7475b711a3170d90c1c197641dbe5087274..177080ebe7b87892988c1b351c8924eecc51e038 100644 --- a/review/SKILL.md +++ b/review/SKILL.md @@ -608,14 +608,69 @@ COMPLETION: 4/7 DONE, 1 PARTIAL, 1 NOT DONE, 1 CHANGED ───────────────────────────────── ``` +### Fallback Intent Sources (when no plan file found) + +When no plan file is detected, use these secondary intent sources: + +1. **Commit messages:** Run `git log origin/..HEAD --oneline`. Use judgment to extract real intent: + - Commits with actionable verbs ("add", "implement", "fix", "create", "remove", "update") are intent signals + - Skip noise: "WIP", "tmp", "squash", "merge", "chore", "typo", "fixup" + - Extract the intent behind the commit, not the literal message +2. **TODOS.md:** If it exists, check for items related to this branch or recent dates +3. 
**PR description:** Run `gh pr view --json body -q .body 2>/dev/null` for intent context + +**With fallback sources:** Apply the same Cross-Reference classification (DONE/PARTIAL/NOT DONE/CHANGED) using best-effort matching. Note that fallback-sourced items are lower confidence than plan-file items. + +### Investigation Depth + +For each PARTIAL or NOT DONE item, investigate WHY: + +1. Check `git log origin/..HEAD --oneline` for commits that suggest the work was started, attempted, or reverted +2. Read the relevant code to understand what was built instead +3. Determine the likely reason from this list: + - **Scope cut** — evidence of intentional removal (revert commit, removed TODO) + - **Context exhaustion** — work started but stopped mid-way (partial implementation, no follow-up commits) + - **Misunderstood requirement** — something was built but it doesn't match what the plan described + - **Blocked by dependency** — plan item depends on something that isn't available + - **Genuinely forgotten** — no evidence of any attempt + +Output for each discrepancy: +``` +DISCREPANCY: {PARTIAL|NOT_DONE} | {plan item} | {what was actually delivered} +INVESTIGATION: {likely reason with evidence from git log / code} +IMPACT: {HIGH|MEDIUM|LOW} — {what breaks or degrades if this stays undelivered} +``` + +### Learnings Logging (plan-file discrepancies only) + +**Only for discrepancies sourced from plan files** (not commit messages or TODOS.md), log a learning so future sessions know this pattern occurred: + +```bash +~/.claude/skills/gstack/bin/gstack-learnings-log '{ + "type": "pitfall", + "key": "plan-delivery-gap-KEBAB_SUMMARY", + "insight": "Planned X but delivered Y because Z", + "confidence": 8, + "source": "observed", + "files": ["PLAN_FILE_PATH"] +}' +``` + +Replace KEBAB_SUMMARY with a kebab-case summary of the gap, and fill in the actual values. + +**Do NOT log learnings from commit-message-derived or TODOS.md-derived discrepancies.** These are informational in the review output but too noisy for durable memory. + ### Integration with Scope Drift Detection The plan completion results augment the existing Scope Drift Detection. If a plan file is found: - **NOT DONE items** become additional evidence for **MISSING REQUIREMENTS** in the scope drift report. - **Items in the diff that don't match any plan item** become evidence for **SCOPE CREEP** detection. +- **HIGH-impact discrepancies** trigger AskUserQuestion: + - Show the investigation findings + - Options: A) Stop and implement missing items, B) Ship anyway + create P1 TODOs, C) Intentionally dropped -This is **INFORMATIONAL** — does not block the review (consistent with existing scope drift behavior). +This is **INFORMATIONAL** unless HIGH-impact discrepancies are found (then it gates via AskUserQuestion). Update the scope drift output to include plan file context: @@ -625,11 +680,11 @@ Intent: Plan: Delivered: <1-line summary of what the diff actually does> Plan items: N DONE, M PARTIAL, K NOT DONE -[If NOT DONE: list each missing item] +[If NOT DONE: list each missing item with investigation] [If scope creep: list each out-of-scope change not in the plan] ``` -**No plan file found:** Fall back to existing scope drift behavior (check TODOS.md and PR description only). +**No plan file found:** Use commit messages and TODOS.md as fallback sources (see above). If no intent sources at all, skip with: "No intent sources detected — skipping completion audit." 
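+For illustration, the Investigation Depth output above might be filled in like this (the plan item, commit message, and evidence are hypothetical):
+
+```
+DISCREPANCY: NOT_DONE | "Add rate limiting to webhook endpoint" | No rate-limiting code anywhere in the diff
+INVESTIGATION: Likely blocked by dependency — git log shows "wip: webhook limiter (needs redis config)" with no follow-up commits
+IMPACT: HIGH — webhook endpoint ships without abuse protection
+```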
## Step 2: Read the checklist

@@ -699,12 +754,12 @@ matches a past learning, display:
 
 This makes the compounding visible. The user should see that gstack is getting
 smarter on their codebase over time.
 
-## Step 4: Two-pass review
+## Step 4: Critical pass (core review)
 
-Apply the checklist against the diff in two passes:
+Apply the CRITICAL categories from the checklist against the diff:
+SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Shell Injection, Enum & Value Completeness.
 
-1. **Pass 1 (CRITICAL):** SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Enum & Value Completeness
-2. **Pass 2 (INFORMATIONAL):** Conditional Side Effects, Magic Numbers & String Coupling, Dead Code & Consistency, LLM Prompt Issues, Test Gaps, View/Frontend, Performance & Bundle Impact
+Also apply the remaining INFORMATIONAL categories that are still in the checklist (Async/Sync Mixing, Column/Field Name Safety, Dead Code & Consistency (version/changelog items only), LLM Prompt Issues, Type Coercion, View/Frontend, Time Window Safety, Completeness Gaps, Distribution & CI/CD).
 
 **Enum & Value Completeness requires reading code OUTSIDE the diff.** When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient.
 
@@ -744,258 +799,167 @@ higher confidence.
 
 ---
 
-## Step 4.5: Design Review (conditional)
+## Step 4.5: Review Army — Specialist Dispatch
 
-## Design Review (conditional, diff-scoped)
-
-Check if the diff touches frontend files using `gstack-diff-scope`:
+### Detect stack and scope
 
 ```bash
-source <(~/.claude/skills/gstack/bin/gstack-diff-scope 2>/dev/null)
+source <(~/.claude/skills/gstack/bin/gstack-diff-scope 2>/dev/null) || true
+# Detect stack for specialist context
+STACK=""
+[ -f Gemfile ] && STACK="${STACK}ruby "
+[ -f package.json ] && STACK="${STACK}node "
+{ [ -f requirements.txt ] || [ -f pyproject.toml ]; } && STACK="${STACK}python "
+[ -f go.mod ] && STACK="${STACK}go "
+[ -f Cargo.toml ] && STACK="${STACK}rust "
+echo "STACK: ${STACK:-unknown}"
+DIFF_LINES=$(git diff origin/ --stat | tail -1 | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo "0")
+echo "DIFF_LINES: $DIFF_LINES"
 ```
 
-**If `SCOPE_FRONTEND=false`:** Skip design review silently. No output.
-
-**If `SCOPE_FRONTEND=true`:**
-
-1. **Check for DESIGN.md.** If `DESIGN.md` or `design-system.md` exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.
-
-2. **Read `.claude/skills/review/design-checklist.md`.** If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."
-
-3. **Read each changed frontend file** (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.
+### Select specialists
 
-4. **Apply the design checklist** against the changed files. For each item:
-   - **[HIGH] mechanical CSS fix** (`outline: none`, `!important`, `font-size < 16px`): classify as AUTO-FIX
-   - **[HIGH/MEDIUM] design judgment needed**: classify as ASK
-   - **[LOW] intent-based detection**: present as "Possible — verify visually or run /design-review"
+Based on the scope signals above, select which specialists to dispatch.
 
-5. **Include findings** in the review output under a "Design Review" header, following the output format in the checklist.
Design findings merge with code review findings into the same Fix-First flow. +**Always-on (dispatch on every review with 50+ changed lines):** +1. **Testing** — read `~/.claude/skills/gstack/review/specialists/testing.md` +2. **Maintainability** — read `~/.claude/skills/gstack/review/specialists/maintainability.md` -6. **Log the result** for the Review Readiness Dashboard: +**If DIFF_LINES < 50:** Skip all specialists. Print: "Small diff ($DIFF_LINES lines) — specialists skipped." Continue to Step 5. -```bash -~/.claude/skills/gstack/bin/gstack-review-log '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M,"commit":"COMMIT"}' -``` - -Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings or "issues_found", N = total findings, M = auto-fixed count, COMMIT = output of `git rev-parse --short HEAD`. - -7. **Codex design voice** (optional, automatic if available): +**Conditional (dispatch if the matching scope signal is true):** +3. **Security** — if SCOPE_AUTH=true, OR if SCOPE_BACKEND=true AND DIFF_LINES > 100. Read `~/.claude/skills/gstack/review/specialists/security.md` +4. **Performance** — if SCOPE_BACKEND=true OR SCOPE_FRONTEND=true. Read `~/.claude/skills/gstack/review/specialists/performance.md` +5. **Data Migration** — if SCOPE_MIGRATIONS=true. Read `~/.claude/skills/gstack/review/specialists/data-migration.md` +6. **API Contract** — if SCOPE_API=true. Read `~/.claude/skills/gstack/review/specialists/api-contract.md` +7. **Design** — if SCOPE_FRONTEND=true. Use the existing design review checklist at `~/.claude/skills/gstack/review/design-checklist.md` -```bash -which codex 2>/dev/null && echo "CODEX_AVAILABLE" || echo "CODEX_NOT_AVAILABLE" -``` - -If Codex is available, run a lightweight design check on the diff: - -```bash -TMPERR_DRL=$(mktemp /tmp/codex-drl-XXXXXXXX) -_REPO_ROOT=$(git rev-parse --show-toplevel) || { echo "ERROR: not in a git repo" >&2; exit 1; } -codex exec "Review the git diff on this branch. Run 7 litmus checks (YES/NO each): 1. Brand/product unmistakable in first screen? 2. One strong visual anchor present? 3. Page understandable by scanning headlines only? 4. Each section has one job? 5. Are cards actually necessary? 6. Does motion improve hierarchy or atmosphere? 7. Would design feel premium with all decorative shadows removed? Flag any hard rejections: 1. Generic SaaS card grid as first impression 2. Beautiful image with weak brand 3. Strong headline with no clear action 4. Busy imagery behind text 5. Sections repeating same mood statement 6. Carousel with no narrative purpose 7. App UI made of stacked cards instead of layout 5 most important design findings only. Reference file:line." -C "$_REPO_ROOT" -s read-only -c 'model_reasoning_effort="high"' --enable web_search_cached 2>"$TMPERR_DRL" -``` - -Use a 5-minute timeout (`timeout: 300000`). After the command completes, read stderr: -```bash -cat "$TMPERR_DRL" && rm -f "$TMPERR_DRL" -``` - -**Error handling:** All errors are non-blocking. On auth failure, timeout, or empty response — skip with a brief note and continue. - -Present Codex output under a `CODEX (design):` header, merged with the checklist findings above. - -Include any design findings alongside the findings from Step 4. They follow the same Fix-First flow in Step 5 — AUTO-FIX for mechanical CSS fixes, ASK for everything else. +Note which specialists were selected and which were skipped. Print the selection: +"Dispatching N specialists: [names]. 
Skipped: [names] (scope not detected)." --- -## Step 4.75: Test Coverage Diagram +### Dispatch specialists in parallel -100% coverage is the goal. Evaluate every codepath changed in the diff and identify test gaps. Gaps become INFORMATIONAL findings that follow the Fix-First flow. +For each selected specialist, launch an independent subagent via the Agent tool. +**Launch ALL selected specialists in a single message** (multiple Agent tool calls) +so they run in parallel. Each subagent has fresh context — no prior review bias. -### Test Framework Detection +**Each specialist subagent prompt:** -Before analyzing coverage, detect the project's test framework: +Construct the prompt for each specialist. The prompt includes: -1. **Read CLAUDE.md** — look for a `## Testing` section with test command and framework name. If found, use that as the authoritative source. -2. **If CLAUDE.md has no testing section, auto-detect:** +1. The specialist's checklist content (you already read the file above) +2. Stack context: "This is a {STACK} project." +3. Past learnings for this domain (if any exist): ```bash -setopt +o nomatch 2>/dev/null || true # zsh compat -# Detect project runtime -[ -f Gemfile ] && echo "RUNTIME:ruby" -[ -f package.json ] && echo "RUNTIME:node" -[ -f requirements.txt ] || [ -f pyproject.toml ] && echo "RUNTIME:python" -[ -f go.mod ] && echo "RUNTIME:go" -[ -f Cargo.toml ] && echo "RUNTIME:rust" -# Check for existing test infrastructure -ls jest.config.* vitest.config.* playwright.config.* cypress.config.* .rspec pytest.ini phpunit.xml 2>/dev/null -ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null +~/.claude/skills/gstack/bin/gstack-learnings-search --type pitfall --query "{specialist domain}" --limit 5 2>/dev/null || true ``` -3. **If no framework detected:** still produce the coverage diagram, but skip test generation. - -**Step 1. Trace every codepath changed** using `git diff origin/...HEAD`: - -Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution: +If learnings are found, include them: "Past learnings for this domain: {learnings}" -1. **Read the diff.** For each changed file, read the full file (not just the diff hunk) to understand context. -2. **Trace data flow.** Starting from each entry point (route handler, exported function, event listener, component render), follow the data through every branch: - - Where does input come from? (request params, props, database, API call) - - What transforms it? (validation, mapping, computation) - - Where does it go? (database write, API response, rendered output, side effect) - - What can go wrong at each step? (null/undefined, invalid input, network failure, empty collection) -3. **Diagram the execution.** For each changed file, draw an ASCII diagram showing: - - Every function/method that was added or modified - - Every conditional branch (if/else, switch, ternary, guard clause, early return) - - Every error path (try/catch, rescue, error boundary, fallback) - - Every call to another function (trace into it — does IT have untested branches?) - - Every edge: what happens with null input? Empty array? Invalid type? +4. Instructions: -This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test. +"You are a specialist code reviewer. Read the checklist below, then run +`git diff origin/` to get the full diff. Apply the checklist against the diff. 
-**Step 2. Map user flows, interactions, and error states:** +For each finding, output a JSON object on its own line: +{\"severity\":\"CRITICAL|INFORMATIONAL\",\"confidence\":N,\"path\":\"file\",\"line\":N,\"category\":\"category\",\"summary\":\"description\",\"fix\":\"recommended fix\",\"fingerprint\":\"path:line:category\",\"specialist\":\"name\"} -Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through: +Required fields: severity, confidence, path, category, summary, specialist. +Optional: line, fix, fingerprint, evidence. -- **User flows:** What sequence of actions does a user take that touches this code? Map the full journey (e.g., "user clicks 'Pay' → form validates → API call → success/failure screen"). Each step in the journey needs a test. -- **Interaction edge cases:** What happens when the user does something unexpected? - - Double-click/rapid resubmit - - Navigate away mid-operation (back button, close tab, click another link) - - Submit with stale data (page sat open for 30 minutes, session expired) - - Slow connection (API takes 10 seconds — what does the user see?) - - Concurrent actions (two tabs, same form) -- **Error states the user can see:** For every error the code handles, what does the user actually experience? - - Is there a clear error message or a silent failure? - - Can the user recover (retry, go back, fix input) or are they stuck? - - What happens with no network? With a 500 from the API? With invalid data from the server? -- **Empty/zero/boundary states:** What does the UI show with zero results? With 10,000 results? With a single character input? With maximum-length input? +If no findings: output `NO FINDINGS` and nothing else. +Do not output anything else — no preamble, no summary, no commentary. -Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else. +Stack context: {STACK} +Past learnings: {learnings or 'none'} -**Step 3. Check each branch against existing tests:** +CHECKLIST: +{checklist content}" -Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it: -- Function `processPayment()` → look for `billing.test.ts`, `billing.spec.ts`, `test/billing_test.rb` -- An if/else → look for tests covering BOTH the true AND false path -- An error handler → look for a test that triggers that specific error condition -- A call to `helperFn()` that has its own branches → those branches need tests too -- A user flow → look for an integration or E2E test that walks through the journey -- An interaction edge case → look for a test that simulates the unexpected action +**Subagent configuration:** +- Use `subagent_type: "general-purpose"` +- Do NOT use `run_in_background` — all specialists must complete before merge +- If any specialist subagent fails or times out, log the failure and continue with results from successful specialists. Specialists are additive — partial results are better than no results. 
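+For illustration only, a single finding line from the Performance specialist might look like the following — the file, line number, and fix are invented for the example; only the field names come from the schema above:
+
+```json
+{"severity":"INFORMATIONAL","confidence":7,"path":"app/models/order.rb","line":42,"category":"n-plus-one","summary":"items loaded per-order inside a loop","fix":"Use includes(:items) where the orders are loaded","fingerprint":"app/models/order.rb:42:n-plus-one","specialist":"performance"}
+```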
-Quality scoring rubric:
-- ★★★ Tests behavior with edge cases AND error paths
-- ★★ Tests correct behavior, happy path only
-- ★ Smoke test / existence check / trivial assertion (e.g., "it renders", "it doesn't throw")
-
-### E2E Test Decision Matrix
-
-When checking each branch, also determine whether a unit test or E2E/integration test is the right tool:
-
-**RECOMMEND E2E (mark as [→E2E] in the diagram):**
-- Common user flow spanning 3+ components/services (e.g., signup → verify email → first login)
-- Integration point where mocking hides real failures (e.g., API → queue → worker → DB)
-- Auth/payment/data-destruction flows — too important to trust unit tests alone
-
-**RECOMMEND EVAL (mark as [→EVAL] in the diagram):**
-- Critical LLM call that needs a quality eval (e.g., prompt change → test output still meets quality bar)
-- Changes to prompt templates, system instructions, or tool definitions
-
-**STICK WITH UNIT TESTS:**
-- Pure function with clear inputs/outputs
-- Internal helper with no side effects
-- Edge case of a single function (null input, empty array)
-- Obscure/rare flow that isn't customer-facing
+---
-### REGRESSION RULE (mandatory)
+### Step 4.6: Collect and merge findings
-**IRON RULE:** When the coverage audit identifies a REGRESSION — code that previously worked but the diff broke — a regression test is written immediately. No AskUserQuestion. No skipping. Regressions are the highest-priority test because they prove something broke.
+After all specialist subagents complete, collect their outputs.
-A regression is when:
-- The diff modifies existing behavior (not new code)
-- The existing test suite (if any) doesn't cover the changed path
-- The change introduces a new failure mode for existing callers
+**Parse findings:**
+For each specialist's output:
+1. If output is "NO FINDINGS" — skip, this specialist found nothing
+2. Otherwise, parse each line as a JSON object. Skip lines that are not valid JSON.
+3. Collect all parsed findings into a single list, tagged with their specialist name.
-When uncertain whether a change is a regression, err on the side of writing the test.
+**Fingerprint and deduplicate:**
+For each finding, compute its fingerprint:
+- If `fingerprint` field is present, use it
+- Otherwise: `{path}:{line}:{category}` (if line is present) or `{path}:{category}`
-Format: commit as `test: regression test for {what broke}`
+Group findings by fingerprint. For findings sharing the same fingerprint:
+- Keep the finding with the highest confidence score
+- Tag it: "MULTI-SPECIALIST CONFIRMED ({specialist1} + {specialist2})"
+- Boost confidence by +1 (cap at 10)
+- Note the confirming specialists in the output
-**Step 4. Output ASCII coverage diagram:**
+**Apply confidence gates:**
+- Confidence 7+: show normally in the findings output
+- Confidence 5-6: show with caveat "Medium confidence — verify this is actually an issue"
+- Confidence 3-4: move to appendix (suppress from main findings)
+- Confidence 1-2: suppress entirely
-Include BOTH code paths and user flows in the same diagram. Mark E2E-worthy and eval-worthy paths:
+**Compute PR Quality Score:**
+After merging, compute the quality score:
+`quality_score = max(0, 10 - (critical_count * 2 + informational_count * 0.5))`
+The `max()` floors the score at 0 (the formula cannot exceed 10). Log the score in the review result at the end.
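+To make the merge semantics concrete, here is a minimal TypeScript sketch of the dedup, consensus boost, and scoring described above. It is illustrative only — the names and types are assumptions, not the actual `scripts/resolvers/review-army.ts` implementation (which generates template prose rather than executing this logic):
+
+```typescript
+// Illustrative sketch — assumes findings were already parsed from specialist JSON output.
+interface Finding {
+  severity: "CRITICAL" | "INFORMATIONAL";
+  confidence: number; // 1-10
+  path: string;
+  line?: number;
+  category: string;
+  summary: string;
+  fix?: string;
+  fingerprint?: string;
+  specialist: string;
+  confirmedBy?: string[]; // filled in by the merge step
+}
+
+// Fallback fingerprint when the specialist omitted one.
+function fingerprintOf(f: Finding): string {
+  if (f.fingerprint) return f.fingerprint;
+  return f.line != null ? `${f.path}:${f.line}:${f.category}` : `${f.path}:${f.category}`;
+}
+
+function mergeFindings(all: Finding[]): Finding[] {
+  const groups = new Map<string, Finding[]>();
+  for (const f of all) {
+    const key = fingerprintOf(f);
+    groups.set(key, [...(groups.get(key) ?? []), f]);
+  }
+  const merged: Finding[] = [];
+  for (const group of groups.values()) {
+    // Keep the highest-confidence finding in each fingerprint group.
+    const best = group.reduce((a, b) => (b.confidence > a.confidence ? b : a));
+    if (group.length > 1) {
+      // Multi-specialist consensus: boost confidence by +1, capped at 10.
+      best.confidence = Math.min(10, best.confidence + 1);
+      best.confirmedBy = group.map((f) => f.specialist);
+    }
+    merged.push(best);
+  }
+  // Confidence gate: 1-2 suppressed entirely (3-4 still surface in the appendix).
+  return merged.filter((f) => f.confidence >= 3);
+}
+
+function qualityScore(findings: Finding[]): number {
+  const critical = findings.filter((f) => f.severity === "CRITICAL").length;
+  const informational = findings.length - critical;
+  // Quality score per the formula above, floored at 0.
+  return Math.max(0, 10 - (critical * 2 + informational * 0.5));
+}
+```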
-``` -CODE PATH COVERAGE -=========================== -[+] src/services/billing.ts - │ - ├── processPayment() - │ ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42 - │ ├── [GAP] Network timeout — NO TEST - │ └── [GAP] Invalid currency — NO TEST - │ - └── refundPayment() - ├── [★★ TESTED] Full refund — billing.test.ts:89 - └── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101 - -USER FLOW COVERAGE -=========================== -[+] Payment checkout flow - │ - ├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15 - ├── [GAP] [→E2E] Double-click submit — needs E2E, not just unit - ├── [GAP] Navigate away during payment — unit test sufficient - └── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40 - -[+] Error states - │ - ├── [★★ TESTED] Card declined message — billing.test.ts:58 - ├── [GAP] Network timeout UX (what does user see?) — NO TEST - └── [GAP] Empty cart submission — NO TEST - -[+] LLM integration - │ - └── [GAP] [→EVAL] Prompt template change — needs eval test +**Output merged findings:** +Present the merged findings in the same format as the current review: -───────────────────────────────── -COVERAGE: 5/13 paths tested (38%) - Code paths: 3/5 (60%) - User flows: 2/8 (25%) -QUALITY: ★★★: 2 ★★: 2 ★: 1 -GAPS: 8 paths need tests (2 need E2E, 1 needs eval) -───────────────────────────────── ``` +SPECIALIST REVIEW: N findings (X critical, Y informational) from Z specialists -**Fast path:** All paths covered → "Step 4.75: All new code paths have test coverage ✓" Continue. +[For each finding, in order: CRITICAL first, then INFORMATIONAL, sorted by confidence descending] +[SEVERITY] (confidence: N/10, specialist: name) path:line — summary + Fix: recommended fix + [If MULTI-SPECIALIST CONFIRMED: show confirmation note] -**Step 5. Generate tests for gaps (Fix-First):** - -If test framework is detected and gaps were identified: -- Classify each gap as AUTO-FIX or ASK per the Fix-First Heuristic: - - **AUTO-FIX:** Simple unit tests for pure functions, edge cases of existing tested functions - - **ASK:** E2E tests, tests requiring new test infrastructure, tests for ambiguous behavior -- For AUTO-FIX gaps: generate the test, run it, commit as `test: coverage for {feature}` -- For ASK gaps: include in the Fix-First batch question with the other review findings -- For paths marked [→E2E]: always ASK (E2E tests are higher-effort and need user confirmation) -- For paths marked [→EVAL]: always ASK (eval tests need user confirmation on quality criteria) +PR Quality Score: X/10 +``` -If no test framework detected → include gaps as INFORMATIONAL findings only, no generation. +These findings flow into Step 5 Fix-First alongside the CRITICAL pass findings from Step 4. +The Fix-First heuristic applies identically — specialist findings follow the same AUTO-FIX vs ASK classification. -**Diff is test-only changes:** Skip Step 4.75 entirely: "No new application code paths to audit." +--- -### Coverage Warning +### Red Team dispatch (conditional) -After producing the coverage diagram, check the coverage percentage. Read CLAUDE.md for a `## Test Coverage` section with a `Minimum:` field. If not found, use default: 60%. +**Activation:** Only if DIFF_LINES > 200 OR any specialist produced a CRITICAL finding. -If coverage is below the minimum threshold, output a prominent warning **before** the regular review findings: +If activated, dispatch one more subagent via the Agent tool (foreground, not background). 
-``` -⚠️ COVERAGE WARNING: AI-assessed coverage is {X}%. {N} code paths untested. -Consider writing tests before running /ship. -``` +The Red Team subagent receives: +1. The red-team checklist from `~/.claude/skills/gstack/review/specialists/red-team.md` +2. The merged specialist findings from Step 4.6 (so it knows what was already caught) +3. The git diff command -This is INFORMATIONAL — does not block /review. But it makes low coverage visible early so the developer can address it before reaching the /ship coverage gate. +Prompt: "You are a red team reviewer. The code has already been reviewed by N specialists +who found the following issues: {merged findings summary}. Your job is to find what they +MISSED. Read the checklist, run `git diff origin/`, and look for gaps. +Output findings as JSON objects (same schema as the specialists). Focus on cross-cutting +concerns, integration boundary issues, and failure modes that specialist checklists +don't cover." -If coverage percentage cannot be determined, skip the warning silently. +If the Red Team finds additional issues, merge them into the findings list before +Step 5 Fix-First. Red Team findings are tagged with `"specialist":"red-team"`. -This step subsumes the "Test Gaps" category from Pass 2 — do not duplicate findings between the checklist Test Gaps item and this coverage diagram. Include any coverage gaps alongside the findings from Step 4 and Step 4.5. They follow the same Fix-First flow — gaps are INFORMATIONAL findings. +If the Red Team returns NO FINDINGS, note: "Red Team review: no additional issues found." +If the Red Team subagent fails or times out, skip silently and continue. --- diff --git a/review/SKILL.md.tmpl b/review/SKILL.md.tmpl index 7fb881d67873bc47963ce25ca6574c5d86c93b70..fec5b568d8aa265a56b4d4b5a0c5533892abebfc 100644 --- a/review/SKILL.md.tmpl +++ b/review/SKILL.md.tmpl @@ -73,12 +73,12 @@ Run `git diff origin/` to get the full diff. This includes both committed {{LEARNINGS_SEARCH}} -## Step 4: Two-pass review +## Step 4: Critical pass (core review) -Apply the checklist against the diff in two passes: +Apply the CRITICAL categories from the checklist against the diff: +SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Shell Injection, Enum & Value Completeness. -1. **Pass 1 (CRITICAL):** SQL & Data Safety, Race Conditions & Concurrency, LLM Output Trust Boundary, Enum & Value Completeness -2. **Pass 2 (INFORMATIONAL):** Conditional Side Effects, Magic Numbers & String Coupling, Dead Code & Consistency, LLM Prompt Issues, Test Gaps, View/Frontend, Performance & Bundle Impact +Also apply the remaining INFORMATIONAL categories that are still in the checklist (Async/Sync Mixing, Column/Field Name Safety, LLM Prompt Issues, Type Coercion, View/Frontend, Time Window Safety, Completeness Gaps, Distribution & CI/CD). **Enum & Value Completeness requires reading code OUTSIDE the diff.** When the diff introduces a new enum value, status, tier, or type constant, use Grep to find all files that reference sibling values, then Read those files to check if the new value is handled. This is the one category where within-diff review is insufficient. @@ -95,19 +95,7 @@ Follow the output format specified in the checklist. Respect the suppressions --- -## Step 4.5: Design Review (conditional) - -{{DESIGN_REVIEW_LITE}} - -Include any design findings alongside the findings from Step 4. They follow the same Fix-First flow in Step 5 — AUTO-FIX for mechanical CSS fixes, ASK for everything else. 
- ---- - -## Step 4.75: Test Coverage Diagram - -{{TEST_COVERAGE_AUDIT_REVIEW}} - -This step subsumes the "Test Gaps" category from Pass 2 — do not duplicate findings between the checklist Test Gaps item and this coverage diagram. Include any coverage gaps alongside the findings from Step 4 and Step 4.5. They follow the same Fix-First flow — gaps are INFORMATIONAL findings. +{{REVIEW_ARMY}} --- diff --git a/review/checklist.md b/review/checklist.md index cfedcf81f3d7fd453fb348c12762ff436d7c73e9..16aa111bb00fe128320ab66a99714361f2a49232 100644 --- a/review/checklist.md +++ b/review/checklist.md @@ -5,8 +5,9 @@ Review the `git diff origin/main` output for the issues listed below. Be specific — cite `file:line` and suggest fixes. Skip anything that's fine. Only flag real problems. **Two-pass review:** -- **Pass 1 (CRITICAL):** Run SQL & Data Safety and LLM Output Trust Boundary first. Highest severity. -- **Pass 2 (INFORMATIONAL):** Run all remaining categories. Lower severity but still actioned. +- **Pass 1 (CRITICAL):** Run SQL & Data Safety, Race Conditions, LLM Output Trust Boundary, Shell Injection, and Enum Completeness first. Highest severity. +- **Pass 2 (INFORMATIONAL):** Run remaining categories below. Lower severity but still actioned. +- **Specialist categories (handled by parallel subagents, NOT this checklist):** Test Gaps, Dead Code, Magic Numbers, Conditional Side Effects, Performance & Bundle Impact, Crypto & Entropy. See `review/specialists/` for these. All findings get action via Fix-First Review: obvious mechanical fixes are applied automatically, genuinely ambiguous issues are batched into a single user question. @@ -76,42 +77,21 @@ To do this: use Grep to find all references to the sibling values (e.g., grep fo - Check `.get()` calls on query results use the column name that was actually selected - Cross-reference with schema documentation when available -#### Conditional Side Effects -- Code paths that branch on a condition but forget to apply a side effect on one branch. Example: item promoted to verified but URL only attached when a secondary condition is true — the other branch promotes without the URL, creating an inconsistent record. -- Log messages that claim an action happened but the action was conditionally skipped. The log should reflect what actually occurred. - -#### Magic Numbers & String Coupling -- Bare numeric literals used in multiple files — should be named constants documented together -- Error message strings used as query filters elsewhere (grep for the string — is anything matching on it?) - -#### Dead Code & Consistency -- Variables assigned but never read +#### Dead Code & Consistency (version/changelog only — other items handled by maintainability specialist) - Version mismatch between PR title and VERSION/CHANGELOG files - CHANGELOG entries that describe changes inaccurately (e.g., "changed from X to Y" when X never existed) -- Comments/docstrings that describe old behavior after the code changed #### LLM Prompt Issues - 0-indexed lists in prompts (LLMs reliably return 1-indexed) - Prompt text listing available tools/capabilities that don't match what's actually wired up in the `tool_classes`/`tools` array - Word/token limits stated in multiple places that could drift -#### Test Gaps -- Negative-path tests that assert type/status but not the side effects (URL attached? field populated? callback fired?) 
-- Assertions on string content without checking format (e.g., asserting title present but not URL format) -- `.expects(:something).never` missing when a code path should explicitly NOT call an external service -- Security enforcement features (blocking, rate limiting, auth) without integration tests verifying the enforcement path works end-to-end - #### Completeness Gaps - Shortcut implementations where the complete version would cost <30 minutes CC time (e.g., partial enum handling, incomplete error paths, missing edge cases that are straightforward to add) - Options presented with only human-team effort estimates — should show both human and CC+gstack time - Test coverage gaps where adding the missing tests is a "lake" not an "ocean" (e.g., missing negative-path tests, missing edge case tests that mirror happy-path structure) - Features implemented at 80-90% when 100% is achievable with modest additional code -#### Crypto & Entropy -- Truncation of data instead of hashing (last N chars instead of SHA-256) — less entropy, easier collisions -- `rand()` / `Random.rand` for security-sensitive values — use `SecureRandom` instead -- Non-constant-time comparisons (`==`) on secrets or tokens — vulnerable to timing attacks - #### Time Window Safety - Date-key lookups that assume "today" covers 24h — report at 8am PT only sees midnight→8am under today's key - Mismatched time windows between related features — one uses hourly buckets, another uses daily keys for the same data @@ -125,23 +105,6 @@ To do this: use Grep to find all references to the sibling values (e.g., grep fo - O(n*m) lookups in views (`Array#find` in a loop instead of `index_by` hash) - Ruby-side `.select{}` filtering on DB results that could be a `WHERE` clause (unless intentionally avoiding leading-wildcard `LIKE`) -#### Performance & Bundle Impact -- New `dependencies` entries in package.json that are known-heavy: moment.js (→ date-fns, 330KB→22KB), lodash full (→ lodash-es or per-function imports), jquery, core-js full polyfill -- Significant lockfile growth (many new transitive dependencies from a single addition) -- Images added without `loading="lazy"` or explicit width/height attributes (causes layout shift / CLS) -- Large static assets committed to repo (>500KB per file) -- Synchronous `