feat: CI evals on Ubicloud — 12 parallel runners + Docker image (v0.11.10.0) (#360)

* feat: enable within-file E2E test concurrency for 3x faster runs

  Switch all E2E tests from serial test() to testConcurrentIfSelected() so tests within each file run in parallel. Wall clock drops from ~18 min to ~6 min (limited by the longest single test, not the sequential sum). The concurrent helper was already built in e2e-helpers.ts but never wired up. Each test runs in its own describe block with its own beforeAll/tmpdir — no shared-state conflicts.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add CI eval workflow on Ubicloud runners

  Single-job GitHub Actions workflow that runs E2E evals on every PR using Ubicloud runners ($0.006/run — 10x cheaper than GitHub standard). Uses EVALS_CONCURRENCY=40 with the new within-file concurrency for ~6 min wall clock. Downloads the previous eval artifact from main for comparison, uploads results, and posts a PR comment with pass/fail + cost.

  Ubicloud setup required: connect the GitHub repo via the ubicloud.com dashboard, then add ANTHROPIC_API_KEY, OPENAI_API_KEY, and GEMINI_API_KEY as repo secrets.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.11.6.0)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: optimize CI eval PR comment — aggregate all suites, update-not-duplicate

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: parallelize CI evals — 12 runners (1 per suite) for ~3min wall clock

  Matrix strategy spins up 12 ubicloud-standard-2 runners simultaneously, one per test file. A separate report job aggregates all artifacts into a single PR comment. Bun dependency cache cuts install from ~30s to ~3s.

  Runner cost: ~$0.048 (from $0.024) — negligible vs the $3–4 in API costs. Wall clock: ~3–4 min (from ~8 min).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add Docker CI image with pre-baked toolchain + deps

  Dockerfile.ci pre-installs bun, node, the claude CLI, the gh CLI, and node_modules so eval runners skip all setup. The image rebuilds weekly and on lockfile/Dockerfile changes via ci-image.yml.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: parallelize CI evals — 12 runners (1 per suite) for ~3min wall clock

  Switch the eval workflow to the Docker container image with the pre-baked toolchain. Each of the 12 matrix runners pulls the image, hardlinks cached node_modules, builds browse, and runs one test suite. Setup drops from ~70s to ~19s per runner. Wall clock is dominated by the slowest individual test, not the sequential sum.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: self-bootstrapping CI — build Docker image inline, cache by content hash

  Move the Docker image build into the evals workflow as a dependency job. The image tag is keyed on a hash of Dockerfile + lockfile + package.json — it only rebuilds when those change. This eliminates the chicken-and-egg problem where the image must exist before the first PR run.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: bun.lockb → bun.lock + auth before manifest check

  This project uses bun.lock (text format), not bun.lockb (binary). Also move the Docker login before the manifest inspect so GHCR auth works.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: bun.lock is gitignored — use package.json only for Docker cache

  bun.lock is in .gitignore so it doesn't exist after checkout. The Dockerfile and workflows now use package.json only for deps caching.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
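  As a rough illustration of the content-keyed image tag described in the self-bootstrapping commit above (a sketch only: the real computation happens in the workflow, and the tag format shown here is an assumption):

  ```ts
  // Sketch only; the real tag is computed in the workflow. The tag format and
  // input list are assumptions (bun.lock is gitignored, so it cannot feed the key).
  import { createHash } from "node:crypto";
  import { readFileSync } from "node:fs";

  const inputs = ["Dockerfile.ci", "package.json"];
  const hash = createHash("sha256");
  for (const file of inputs) hash.update(readFileSync(file));

  // Rebuild the image only when this tag has no manifest in the registry yet.
  const tag = hash.digest("hex").slice(0, 12);
  console.log(`ci image tag: ${tag}`);
  ```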
* fix: symlink node_modules instead of hardlink (cross-device)

  Docker image layers and workspace are on different filesystems, so cp -al (hardlink) fails. Use ln -s (symlink) instead — zero copy overhead.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* debug: add claude CLI smoke test step to diagnose exit_code_1

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ci: retrigger eval workflow

* ci: add workflow_dispatch trigger for manual runs

* debug: more verbose claude CLI diagnostics

* fix: run eval container as non-root — claude CLI rejects --dangerously-skip-permissions as root

  Claude Code CLI blocks --dangerously-skip-permissions when running as uid=0 for security. Add a 'runner' user to the Docker image and set --user runner on the container.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: install bun to /usr/local so non-root runner user can access it

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: unset CI/GITHUB_ACTIONS env vars for eval runs

  Claude CLI routing behavior changes when CI=true — it skips skill invocation and uses Bash directly. Unsetting these markers makes Claude behave like a local environment for consistent eval results.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert: remove CI env unset — didn't fix routing

  Unsetting CI/GITHUB_ACTIONS didn't improve routing test results (still 1/11 in container). The issue is model behavior in containerized environments, not env vars. Routing tests will be tracked as a known CI gap.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: copy CLAUDE.md into routing test tmpDirs for skill context

  In containerized CI, Claude lacks the project context (CLAUDE.md) that guides routing decisions locally. Without it, Claude answers directly with Bash/Agent instead of invoking specific skills. Copying CLAUDE.md gives Claude the same context it has locally.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: routing tests use createRoutingWorkDir with full project context

  Routing tests now copy CLAUDE.md, README.md, package.json, ETHOS.md, and all SKILL.md files into each test tmpDir. This gives Claude the same project context it has locally, which is needed for correct skill routing decisions in containerized CI environments.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
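  A minimal sketch of the helper's shape (the real createRoutingWorkDir() lives in the test helpers; the signature and exact copy logic here are assumptions):

  ```ts
  // Sketch only; signature and copy list are assumptions.
  import { copyFileSync, existsSync, mkdtempSync } from "node:fs";
  import { tmpdir } from "node:os";
  import { join } from "node:path";

  export function createRoutingWorkDir(projectRoot: string): string {
    const workDir = mkdtempSync(join(tmpdir(), "routing-"));
    // Copy the project context Claude sees locally into the test tmpDir.
    for (const doc of ["CLAUDE.md", "README.md", "package.json", "ETHOS.md"]) {
      const src = join(projectRoot, doc);
      if (existsSync(src)) copyFileSync(src, join(workDir, doc));
    }
    // SKILL.md files are installed separately; see the skill-install sketch below.
    return workDir;
  }
  ```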
* fix: install skills at top-level .claude/skills/ for CI discovery

  Claude Code discovers project skills from .claude/skills/<name>/SKILL.md at the top level only. Nesting under .claude/skills/gstack/<name>/ caused Claude to see only one "gstack" skill instead of individual skills like /ship, /qa, /review. This explains 10/11 routing failures in CI — Claude invoked "gstack" or used Bash directly instead of routing to specific skills.

  Also adds workflow_dispatch trigger and --user runner container option.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: bump version and changelog (v0.11.10.0)

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: CI report needs checkout + routing needs user-level skill install

  Two fixes:

  1. Report job: add actions/checkout so `gh pr comment` has git context. Also add pull-requests:write permission for comment posting.
  2. Routing tests: install skills to BOTH project-level (.claude/skills/) AND user-level (~/.claude/skills/) since Claude Code discovers from both locations. In CI containers, $HOME differs from workdir.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
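For illustration, the dual-location skill install described in the last two routing fixes of this PR could look roughly like the sketch below (the helper shape and skill source paths are assumptions; the discovery paths come from the commits above):

```ts
// Sketch only; helper names are illustrative.
import { copyFileSync, mkdirSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function installSkill(root: string, name: string, skillMdSrc: string): void {
  // Claude Code discovers skills at <root>/.claude/skills/<name>/SKILL.md,
  // top level only, not nested under .claude/skills/gstack/<name>/.
  const dest = join(root, ".claude", "skills", name);
  mkdirSync(dest, { recursive: true });
  copyFileSync(skillMdSrc, join(dest, "SKILL.md"));
}

export function installSkills(workDir: string, skills: Record<string, string>): void {
  for (const [name, skillMdSrc] of Object.entries(skills)) {
    installSkill(workDir, name, skillMdSrc);   // project-level
    installSkill(homedir(), name, skillMdSrc); // user-level: in CI, $HOME differs from workdir
  }
}
```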
feat: /land-and-deploy, /canary, /benchmark + perf review (v0.7.0) (#183)

* feat: add /canary, /benchmark, /land-and-deploy skills (v0.7.0)

  Three new skills that close the deploy loop:

  - /canary: standalone post-deploy monitoring with the browse daemon
  - /benchmark: performance regression detection with Web Vitals
  - /land-and-deploy: merge the PR, wait for the deploy, canary-verify production

  Incorporates patterns from community PR #151.

  Co-Authored-By: HMAKT99 <HMAKT99@users.noreply.github.com>
  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add Performance & Bundle Impact category to review checklist

  New Pass 2 (INFORMATIONAL) category catching heavy dependencies (moment.js, full lodash imports), missing lazy loading, synchronous scripts, CSS @import blocking, fetch waterfalls, and tree-shaking breaks. Both /review and /ship automatically pick this up via checklist.md.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add {{DEPLOY_BOOTSTRAP}} resolver + deployed row in dashboard

  - New generateDeployBootstrap() resolver auto-detects the deploy platform (Vercel, Netlify, Fly.io, GH Actions, etc.), production URL, and merge method. Persists to CLAUDE.md like the test bootstrap.
  - The Review Readiness Dashboard now shows a "Deployed" row from /land-and-deploy JSONL entries (informational, never gates shipping).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: mark 3 TODOs completed, bump v0.7.0, update CHANGELOG

  Superseded by /land-and-deploy:

  - /merge skill — review-gated PR merge
  - Deploy-verify skill
  - Post-deploy verification (ship + browse)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: /setup-deploy skill + platform-specific deploy verification

  - New /setup-deploy skill: interactive guided setup for deploy configuration. Detects Fly.io, Render, Vercel, Netlify, Heroku, Railway, GitHub Actions, and custom deploy scripts. Writes config to CLAUDE.md with a custom hooks section for non-standard setups.
  - Enhanced deploy bootstrap: platform-specific URL resolution (fly.toml app → {app}.fly.dev, render.yaml → {service}.onrender.com, etc.), deploy status commands (fly status, heroku releases), and a custom deploy hooks section in CLAUDE.md for manual/scripted deploys.
  - Platform-specific deploy verification in /land-and-deploy Step 6: Strategy A (GitHub Actions polling), Strategy B (platform CLI: fly/render/heroku), Strategy C (auto-deploy: vercel/netlify), Strategy D (custom hooks from CLAUDE.md).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
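  A minimal sketch of the platform-specific URL resolution described above (the parsing is deliberately naive, and the function name is illustrative, not the shipped resolver):

  ```ts
  // Sketch only; real detection lives in the deploy bootstrap.
  import { existsSync, readFileSync } from "node:fs";
  import { join } from "node:path";

  export function resolveProductionUrl(root = "."): string | null {
    const flyToml = join(root, "fly.toml");
    if (existsSync(flyToml)) {
      // fly.toml:  app = "my-app"   →  https://my-app.fly.dev
      const app = readFileSync(flyToml, "utf8").match(/^app\s*=\s*"([^"]+)"/m)?.[1];
      if (app) return `https://${app}.fly.dev`;
    }
    const renderYaml = join(root, "render.yaml");
    if (existsSync(renderYaml)) {
      // render.yaml:  name: my-service   →  https://my-service.onrender.com
      const service = readFileSync(renderYaml, "utf8").match(/name:\s*(\S+)/)?.[1];
      if (service) return `https://${service}.onrender.com`;
    }
    // Vercel/Netlify auto-deploys and custom hooks fall back to the URL persisted in CLAUDE.md.
    return null;
  }
  ```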
* test: E2E + LLM-judge evals for deploy skills

  - 4 E2E tests: land-and-deploy (Fly.io detection + deploy report), canary (monitoring report structure), benchmark (perf report schema), setup-deploy (platform detection → CLAUDE.md config)
  - 4 LLM-judge evals: workflow quality for all 4 new skills
  - Touchfile entries for diff-based test selection (E2E + LLM-judge)
  - 460 free tests pass, 0 fail

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: harden E2E tests — server lifecycle, timeouts, preamble budget, skip flaky

  Cross-cutting fixes:

  - Pre-seed ~/.gstack/.completeness-intro-seen and ~/.gstack/.telemetry-prompted so preamble doesn't burn 3-7 turns on lake intro + telemetry in every test
  - Each describe block creates its own test server instance instead of sharing a global that dies between suites

  Test fixes (5 tests):

  - /qa quick: own server instance + preamble skip
  - /review SQL injection: timeout 90→180s, maxTurns 15→20, added assertion that review output actually mentions SQL injection
  - /review design-lite: maxTurns 25→35 + preamble skip (now detects 7/7)
  - ship-base-branch: both timeouts 90→150/180s + preamble skip
  - plan-eng artifact: clean stale state in beforeAll, maxTurns 20→25

  Skipped (4 flaky/redundant tests):

  - contributor-mode: tests prompt compliance, not skill functionality
  - design-consultation-research: WebSearch-dependent, redundant with core
  - design-consultation-preview: redundant with core test
  - /qa bootstrap: too ambitious (65 turns, installs vitest)

  Also: preamble skip added to qa-only, qa-fix-loop, design-consultation-core, and design-consultation-existing prompts. Updated touchfiles entries and touchfiles.test.ts. Added honest comment to codex-review-findings.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: redesign 6 skipped/todo E2E tests + add test.concurrent support

  Redesigned tests (previously skipped/todo):

  - contributor-mode: pre-fail approach, 5 turns/30s (was 10 turns/90s)
  - design-consultation-research: WebSearch-only, 8 turns/90s (was 45/480s)
  - design-consultation-preview: preview HTML only, 8 turns/90s (was 30/480s)
  - qa-bootstrap: bootstrap-only, 12 turns/90s (was 65/420s)
  - /ship workflow: local bare remote, 15 turns/120s (was test.todo)
  - /setup-browser-cookies: browser detection smoke, 5 turns/45s (was test.todo)

  Added testConcurrentIfSelected() helper for future parallelization. Updated touchfiles entries for all 6 re-enabled tests.

  Target: 0 skip, 0 todo, 0 fail across all E2E tests.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: relax contributor-mode assertions — test structure not exact phrasing

* perf: enable test.concurrent for 31 independent E2E tests

  Convert 18 skill-e2e, 11 routing, and 2 codex tests from sequential to test.concurrent. Only design-consultation tests (4) remain sequential due to shared designDir state. Expected ~6x speedup on Teams high-burst.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add --concurrent flag to bun test + convert remaining 4 sequential tests

  bun's test.concurrent only works within a describe block, not across describe blocks. Adding --concurrent to the CLI command makes ALL tests concurrent regardless of describe boundaries. Also converted the 4 design-consultation tests to concurrent (each already independent).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
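  For reference, a sketch of how a testConcurrentIfSelected()-style wrapper (added earlier in this PR) can combine diff-based selection with test.concurrent; the real helper lives in test/helpers/e2e-helpers.ts, and the selection mechanism shown here (an env var listing suites) is an assumption:

  ```ts
  // Sketch only; signature and selection mechanism are assumptions.
  import { test } from "bun:test";

  const selected = (process.env.SELECTED_SUITES ?? "").split(",").filter(Boolean);

  export function testConcurrentIfSelected(
    suite: string,
    name: string,
    fn: () => Promise<void>,
    timeoutMs?: number,
  ): void {
    // Honor diff-based selection: skip suites the diff didn't touch.
    if (selected.length > 0 && !selected.includes(suite)) {
      test.skip(name, fn);
      return;
    }
    // Register as concurrent so tests within one file overlap instead of running serially.
    test.concurrent(name, fn, timeoutMs);
  }
  ```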
* perf: split monolithic E2E test into 8 parallel files

  Split test/skill-e2e.test.ts (3442 lines) into 8 category files:

  - skill-e2e-browse.test.ts (7 tests)
  - skill-e2e-review.test.ts (7 tests)
  - skill-e2e-qa-bugs.test.ts (3 tests)
  - skill-e2e-qa-workflow.test.ts (4 tests)
  - skill-e2e-plan.test.ts (6 tests)
  - skill-e2e-design.test.ts (7 tests)
  - skill-e2e-workflow.test.ts (6 tests)
  - skill-e2e-deploy.test.ts (4 tests)

  Bun runs each file in its own worker = 10 parallel workers (8 split + routing + codex). Expected: 78 min → ~12 min. Extracted shared helpers to test/helpers/e2e-helpers.ts.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: bump default E2E concurrency to 15

* perf: add model pinning infrastructure + rate-limit telemetry to E2E runner

  Default E2E model changed from Opus to Sonnet (5x faster, 5x cheaper). Session runner now accepts `model` option with EVALS_MODEL env var override. Added timing telemetry (first_response_ms, max_inter_turn_ms) and wall_clock_ms to eval-store for diagnosing rate-limit impact. Added EVALS_FAST test filtering.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve 3 E2E test failures — tmpdir race, wasted turns, brittle assertions

  - plan-design-review-plan-mode: give each test its own tmpdir to eliminate race condition where concurrent tests pollute each other's working directory.
  - ship-local-workflow: inline ship workflow steps in prompt instead of having agent read 700+ line SKILL.md (was wasting 6 of 15 turns on file I/O).
  - design-consultation-core: replace exact section name matching with fuzzy synonym-based matching (e.g. "Colors" matches "Color", "Type System" matches "Typography"). All 7 sections still required, LLM judge still hard fail.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* perf: pin quality tests to Opus, add --retry 2 and test:e2e:fast tier

  ~10 quality-sensitive tests (planted-bug detection, design quality judge, strategic review, retro analysis) explicitly pinned to Opus. ~30 structure tests default to Sonnet for 5x speed improvement. Added --retry 2 to all E2E scripts for flaky test resilience. Added test:e2e:fast script that excludes 8 slowest tests for quick feedback.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: mark E2E model pinning TODO as shipped

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add SKILL.md merge conflict directive to CLAUDE.md

  When resolving merge conflicts on generated SKILL.md files, always merge the .tmpl templates first, then regenerate — never accept either side's generated output directly.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add DEPLOY_BOOTSTRAP resolver to gen-skill-docs

  The land-and-deploy template referenced {{DEPLOY_BOOTSTRAP}} but no resolver existed, causing gen-skill-docs to fail. Added generateDeployBootstrap() that generates the deploy config detection bash block (check CLAUDE.md for persisted config, auto-detect platform from config files, detect deploy workflows).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: regenerate SKILL.md files after DEPLOY_BOOTSTRAP fix

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
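  A rough sketch of the kind of detection block generateDeployBootstrap() emits (the exact bash and the CLAUDE.md markers shown here are illustrative assumptions, not the shipped output):

  ```ts
  // Sketch only; the real resolver lives in gen-skill-docs.
  export function generateDeployBootstrap(): string {
    return `
  # Deploy config bootstrap
  # 1. Prefer config already persisted to CLAUDE.md.
  grep -A5 "## Deploy" CLAUDE.md 2>/dev/null || true
  # 2. Otherwise auto-detect the platform from config files.
  if   [ -f fly.toml ];     then echo "platform: fly.io"
  elif [ -f render.yaml ];  then echo "platform: render"
  elif [ -f vercel.json ];  then echo "platform: vercel"
  elif [ -f netlify.toml ]; then echo "platform: netlify"
  fi
  # 3. Detect deploy workflows.
  ls .github/workflows/*deploy* 2>/dev/null || true
  `.trim();
  }
  ```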
* fix: move prompt temp file outside workingDirectory to prevent race condition

  The .prompt-tmp file was written inside workingDirectory, which gets deleted by afterAll cleanup. With --concurrent --retry, afterAll can interleave with retries, causing "No such file or directory" crashes at 0s (seen in review-design-lite and office-hours-spec-review).

  Fix: write the prompt file to os.tmpdir() with a unique suffix so it survives directory cleanup. Also convert review-design-lite from describeE2E to describeIfSelected for proper diff-based test selection.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add --retry 2 --concurrent flags to test:evals scripts for consistency

  test:evals and test:evals:all were missing the retry and concurrency flags that test:e2e already had, causing inconsistent behavior between the two script families.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: HMAKT99 <HMAKT99@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
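For illustration, the prompt temp-file fix above amounts to something like the following (the helper name and suffix scheme are assumptions; the key point is that os.tmpdir() survives the workingDirectory cleanup):

```ts
// Sketch only; helper name is illustrative.
import { randomUUID } from "node:crypto";
import { writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

export function writePromptFile(prompt: string): string {
  // Unique suffix so concurrent tests and --retry attempts never collide.
  const promptPath = join(tmpdir(), `prompt-${randomUUID()}.txt`);
  writeFileSync(promptPath, prompt);
  return promptPath;
}
```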