name: ship
version: 1.0.0
description: |
  Ship workflow: detect + merge base branch, run tests, review diff, bump VERSION,
  update CHANGELOG, commit, push, create PR. Use when asked to "ship", "deploy",
  "push to main", "create a PR", or "merge and push". Proactively suggest when the
  user says code is ready or asks about deploying.
allowed-tools:
_UPD=$(~/.claude/skills/gstack/bin/gstack-update-check 2>/dev/null || .claude/skills/gstack/bin/gstack-update-check 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD" || true
mkdir -p ~/.gstack/sessions
touch ~/.gstack/sessions/"$PPID"
_SESSIONS=$(find ~/.gstack/sessions -mmin -120 -type f 2>/dev/null | wc -l | tr -d ' ')
find ~/.gstack/sessions -mmin +120 -type f -delete 2>/dev/null || true
_CONTRIB=$(~/.claude/skills/gstack/bin/gstack-config get gstack_contributor 2>/dev/null || true)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
echo "BRANCH: $_BRANCH"
_LAKE_SEEN=$([ -f ~/.gstack/.completeness-intro-seen ] && echo "yes" || echo "no")
echo "LAKE_INTRO: $_LAKE_SEEN"
_PROACTIVE=$(~/.claude/skills/gstack/bin/gstack-config get proactive 2>/dev/null || echo "true")
echo "PROACTIVE: $_PROACTIVE"
If PROACTIVE is "false", do not proactively suggest gstack skills — only invoke
them when the user explicitly asks. The user opted out of proactive suggestions.
If output shows UPGRADE_AVAILABLE <old> <new>: read ~/.claude/skills/gstack/gstack-upgrade/SKILL.md and follow the "Inline upgrade flow" (auto-upgrade if configured, otherwise AskUserQuestion with 4 options, write snooze state if declined). If JUST_UPGRADED <from> <to>: tell user "Running gstack v{to} (just updated!)" and continue.
If LAKE_INTRO is no: Before continuing, introduce the Completeness Principle.
Tell the user: "gstack follows the Boil the Lake principle — always do the complete
thing when AI makes the marginal cost near-zero. Read more: https://garryslist.org/posts/boil-the-ocean"
Then offer to open the essay in their default browser:
open https://garryslist.org/posts/boil-the-ocean
touch ~/.gstack/.completeness-intro-seen
Only run open if the user says yes. Always run touch to mark as seen. This only happens once.
ALWAYS follow this structure for every AskUserQuestion call:
- State the current branch (use the _BRANCH value printed by the preamble — NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
- RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle).
- Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work. If both options are 8+, pick the higher; if one is ≤5, flag it.
- Options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)
- Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
Per-skill instructions may add additional formatting rules on top of this baseline.
AI-assisted coding makes the marginal cost of completeness near-zero. When you present options:
| Task type | Human team | CC+gstack | Compression |
|---|---|---|---|
| Boilerplate / scaffolding | 2 days | 15 min | ~100x |
| Test writing | 1 day | 15 min | ~50x |
| Feature implementation | 1 week | 30 min | ~30x |
| Bug fix + regression test | 4 hours | 15 min | ~20x |
| Architecture / design | 2 days | 4 hours | ~5x |
| Research / exploration | 1 day | 3 hours | ~3x |
Anti-patterns — DON'T do this:
If _CONTRIB is true: you are in contributor mode. You're a gstack user who also helps make it better.
At the end of each major workflow step (not after every single command), reflect on the gstack tooling you used. Rate your experience 0 to 10. If it wasn't a 10, think about why. If there is an obvious, actionable bug OR an insightful, interesting thing that could have been done better by gstack code or skill markdown — file a field report. Maybe our contributor will help make us better!
Calibration — this is the bar: For example, $B js "await fetch(...)" used to fail with SyntaxError: await is only valid in async functions because gstack didn't wrap expressions in async context. Small, but the input was reasonable and gstack should have handled it — that's the kind of thing worth filing. Things less consequential than this, ignore.
NOT worth filing: user's app bugs, network errors to user's URL, auth failures on user's site, user's own JS logic bugs.
To file: write ~/.gstack/contributor-logs/{slug}.md with all sections below (do not truncate — include every section through the Date/Version footer):
# {Title}
Hey gstack team — ran into this while using /{skill-name}:
**What I was trying to do:** {what the user/agent was attempting}
**What happened instead:** {what actually happened}
**My rating:** {0-10} — {one sentence on why it wasn't a 10}
## Steps to reproduce
1. {step}
## Raw output
{paste the actual error or unexpected output here}
## What would make this a 10
{one sentence: what gstack should have done differently}
**Date:** {YYYY-MM-DD} | **Version:** {gstack version} | **Skill:** /{skill}
Slug: lowercase, hyphens, max 60 chars (e.g. browse-js-no-await). Skip if file already exists. Max 3 reports per session. File inline and continue — don't stop the workflow. Tell user: "Filed gstack field report: {title}"
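A minimal sketch of the filing step, assuming the slug, title, and /browse skill name are hypothetical examples modeled on the calibration case above:

```bash
# Illustrative sketch only — slug, title, and skill name are hypothetical
mkdir -p ~/.gstack/contributor-logs
slug="browse-js-no-await"
report=~/.gstack/contributor-logs/"$slug".md
if [ ! -f "$report" ]; then            # skip if a report with this slug was already filed
  cat > "$report" <<'EOF'
# Browse JS: await not wrapped in async context
Hey gstack team — ran into this while using /browse:
...remaining sections from the template above, through the Date/Version footer...
EOF
  echo "Filed gstack field report: Browse JS: await not wrapped in async context"
fi
```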
When completing a skill workflow, report status using one of:
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
Determine which branch this PR targets. Use the result as "the base branch" in all subsequent steps.
Check if a PR already exists for this branch:
gh pr view --json baseRefName -q .baseRefName
If this succeeds, use the printed branch name as the base branch.
If no PR exists (command fails), detect the repo's default branch:
gh repo view --json defaultBranchRef -q .defaultBranchRef.name
If both commands fail, fall back to main.
Print the detected base branch name. In every subsequent git diff, git log,
git fetch, git merge, and gh pr create command, substitute the detected
branch name wherever the instructions say "the base branch."
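A sketch of that detection order as a single command sequence (assumes the gh CLI is installed and authenticated):

```bash
# PR base branch → repo default branch → main, in that fallback order
BASE=$(gh pr view --json baseRefName -q .baseRefName 2>/dev/null) \
  || BASE=$(gh repo view --json defaultBranchRef -q .defaultBranchRef.name 2>/dev/null) \
  || BASE=main
echo "BASE BRANCH: $BASE"
```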
You are running the /ship workflow. This is a non-interactive, fully automated workflow. Do NOT ask for confirmation at any step. The user said /ship which means DO IT. Run straight through and output the PR URL at the end.
Only stop for:
Never stop for:
Check the current branch. If on the base branch or the repo's default branch, abort: "You're on the base branch. Ship from a feature branch."
Run git status (never use -uall). Uncommitted changes are always included — no need to ask.
Run git diff <base>...HEAD --stat and git log <base>..HEAD --oneline to understand what's being shipped.
Check review readiness:
After completing the review, read the review log and config to display the dashboard.
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
cat ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
echo "---CONFIG---"
~/.claude/skills/gstack/bin/gstack-config get skip_eng_review 2>/dev/null || echo "false"
Parse the output. Find the most recent entry for each skill (plan-ceo-review, plan-eng-review, plan-design-review, design-review-lite). Ignore entries with timestamps older than 7 days. For Design Review, show whichever is more recent between plan-design-review (full visual audit) and design-review-lite (code-level check). Append "(FULL)" or "(LITE)" to the status to distinguish. Display:
+====================================================================+
| REVIEW READINESS DASHBOARD |
+====================================================================+
| Review | Runs | Last Run | Status | Required |
|-----------------|------|---------------------|-----------|----------|
| Eng Review | 1 | 2026-03-16 15:00 | CLEAR | YES |
| CEO Review | 0 | — | — | no |
| Design Review | 0 | — | — | no |
+--------------------------------------------------------------------+
| VERDICT: CLEARED — Eng Review passed |
+====================================================================+
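One way to pull the most recent entry per skill out of the reviews log before rendering the dashboard — a minimal sketch assuming jq is available; the 7-day cutoff and FULL/LITE labeling still need to be applied on top:

```bash
# Latest entry per skill from the JSONL review log (ISO timestamps sort lexically)
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
jq -s 'group_by(.skill) | map(max_by(.timestamp))' \
  ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_REVIEWS"
```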
Review tiers:
Verdict logic:
If the Eng Review is NOT "CLEAR":
Check for a prior override on this branch:
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
grep '"skill":"ship-review-override"' ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl 2>/dev/null || echo "NO_OVERRIDE"
If an override exists, display the dashboard and note "Review gate previously accepted — continuing." Do NOT ask again.
If no override exists, use AskUserQuestion:
Run eval $(~/.claude/skills/gstack/bin/gstack-diff-scope <base> 2>/dev/null). If SCOPE_FRONTEND=true and no design review (plan-design-review or design-review-lite) exists in the dashboard, mention: "Design Review not run — this PR changes frontend code. The lite design check will run automatically in Step 3.5, but consider running /design-review for a full visual audit post-implementation." Still never block.
If the user chooses A or C, persist the decision so future /ship runs on this branch skip the gate:
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
echo '{"skill":"ship-review-override","timestamp":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'","decision":"USER_CHOICE"}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
Substitute USER_CHOICE with "ship_anyway" or "not_relevant".
Fetch and merge the base branch into the feature branch so tests run against the merged state:
git fetch origin <base> && git merge origin/<base> --no-edit
If there are merge conflicts: Try to auto-resolve if they are simple (VERSION, schema.rb, CHANGELOG ordering). If conflicts are complex or ambiguous, STOP and show them.
If already up to date: Continue silently.
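For the merge-conflict case above, one possible auto-resolution sketch (an illustration, not necessarily what gstack does elsewhere):

```bash
# List conflicted files; only resolve the trivially safe ones
git diff --name-only --diff-filter=U
# VERSION is recomputed in Step 5 anyway, so keeping our side is safe:
git checkout --ours VERSION && git add VERSION
# Anything beyond the simple cases: stop and show the conflicts instead of guessing
```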
Detect existing test framework and project runtime:
# Detect project runtime
[ -f Gemfile ] && echo "RUNTIME:ruby"
[ -f package.json ] && echo "RUNTIME:node"
{ [ -f requirements.txt ] || [ -f pyproject.toml ]; } && echo "RUNTIME:python"
[ -f go.mod ] && echo "RUNTIME:go"
[ -f Cargo.toml ] && echo "RUNTIME:rust"
[ -f composer.json ] && echo "RUNTIME:php"
[ -f mix.exs ] && echo "RUNTIME:elixir"
# Detect sub-frameworks
[ -f Gemfile ] && grep -q "rails" Gemfile 2>/dev/null && echo "FRAMEWORK:rails"
[ -f package.json ] && grep -q '"next"' package.json 2>/dev/null && echo "FRAMEWORK:nextjs"
# Check for existing test infrastructure
ls jest.config.* vitest.config.* playwright.config.* .rspec pytest.ini pyproject.toml phpunit.xml 2>/dev/null
ls -d test/ tests/ spec/ __tests__/ cypress/ e2e/ 2>/dev/null
# Check opt-out marker
[ -f .gstack/no-test-bootstrap ] && echo "BOOTSTRAP_DECLINED"
If test framework detected (config files or test directories found): Print "Test framework detected: {name} ({N} existing tests). Skipping bootstrap." Read 2-3 existing test files to learn conventions (naming, imports, assertion style, setup patterns). Store conventions as prose context for use in Phase 8e.5 or Step 3.4. Skip the rest of bootstrap.
If BOOTSTRAP_DECLINED appears: Print "Test bootstrap previously declined — skipping." Skip the rest of bootstrap.
If NO runtime detected (no config files found): Use AskUserQuestion:
"I couldn't detect your project's language. What runtime are you using?"
Options: A) Node.js/TypeScript B) Ruby/Rails C) Python D) Go E) Rust F) PHP G) Elixir H) This project doesn't need tests.
If user picks H → write .gstack/no-test-bootstrap and continue without tests.
If runtime detected but no test framework — bootstrap:
Use WebSearch to find current best practices for the detected runtime:
"[runtime] best test framework 2025 2026""[framework A] vs [framework B] comparison"If WebSearch is unavailable, use this built-in knowledge table:
| Runtime | Primary recommendation | Alternative |
|---|---|---|
| Ruby/Rails | minitest + fixtures + capybara | rspec + factory_bot + shoulda-matchers |
| Node.js | vitest + @testing-library | jest + @testing-library |
| Next.js | vitest + @testing-library/react + playwright | jest + cypress |
| Python | pytest + pytest-cov | unittest |
| Go | stdlib testing + testify | stdlib only |
| Rust | cargo test (built-in) + mockall | — |
| PHP | phpunit + mockery | pest |
| Elixir | ExUnit (built-in) + ex_machina | — |
Use AskUserQuestion: "I detected this is a [Runtime/Framework] project with no test framework. I researched current best practices. Here are the options: A) [Primary] — [rationale]. Includes: [packages]. Supports: unit, integration, smoke, e2e B) [Alternative] — [rationale]. Includes: [packages] C) Skip — don't set up testing right now RECOMMENDATION: Choose A because [reason based on project context]"
If user picks C → write .gstack/no-test-bootstrap. Tell user: "If you change your mind later, delete .gstack/no-test-bootstrap and re-run." Continue without tests.
If multiple runtimes detected (monorepo) → ask which runtime to set up first, with option to do both sequentially.
If package installation fails → debug once. If still failing → revert with git checkout -- package.json package-lock.json (or equivalent for the runtime). Warn user and continue without tests.
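As a concrete example of the bootstrap step, here is a sketch for the Node.js primary recommendation only (assumes npm 7+ and an existing package.json; other runtimes differ):

```bash
# Install the recommended framework, wire up the test script, and sanity-check it
npm install --save-dev vitest @testing-library/dom
npm pkg set scripts.test="vitest run"
mkdir -p __tests__
npm run test   # should pass (or report "no test files found") before any tests are generated
```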
Generate 3-5 real tests for existing code:
- Target the most frequently changed files: git log --since=30.days --name-only --format="" | sort | uniq -c | sort -rn | head -10
- Avoid trivial assertions like expect(x).toBeDefined() — test what the code DOES.
- Never import secrets, API keys, or credentials in test files. Use environment variables or test fixtures.
# Run the full test suite to confirm everything works
{detected test command}
If tests fail → debug once. If still failing → revert all bootstrap changes and warn user.
# Check CI provider
ls -d .github/ 2>/dev/null && echo "CI:github"
ls .gitlab-ci.yml .circleci/ bitrise.yml 2>/dev/null
If .github/ exists (or no CI detected — default to GitHub Actions):
Create .github/workflows/test.yml with:
- runs-on: ubuntu-latest
If non-GitHub CI detected → skip CI generation with note: "Detected {provider} — CI pipeline generation supports GitHub Actions only. Add test step to your existing pipeline manually."
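For the GitHub Actions case, a minimal illustrative workflow might look like the following — a Node/vitest example only; swap the setup and test steps for the detected runtime:

```bash
mkdir -p .github/workflows
cat > .github/workflows/test.yml <<'EOF'
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
EOF
```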
First check: If TESTING.md already exists → read it and update/append rather than overwriting. Never destroy existing content.
Write TESTING.md with:
First check: If CLAUDE.md already has a ## Testing section → skip. Don't duplicate.
Append a ## Testing section:
git status --porcelain
Only commit if there are changes. Stage all bootstrap files (config, test directory, TESTING.md, CLAUDE.md, .github/workflows/test.yml if created):
git commit -m "chore: bootstrap test framework ({framework name})"
Do NOT run RAILS_ENV=test bin/rails db:migrate — bin/test-lane already calls
db:test:prepare internally, which loads the schema into the correct lane database.
Running bare test migrations without INSTANCE hits an orphan DB and corrupts structure.sql.
Run both test suites in parallel:
bin/test-lane 2>&1 | tee /tmp/ship_tests.txt &
npm run test 2>&1 | tee /tmp/ship_vitest.txt &
wait
After both complete, read the output files and check pass/fail.
If any test fails: Show the failures and STOP. Do not proceed.
If all pass: Continue silently — just note the counts briefly.
Evals are mandatory when prompt-related files change. Skip this step entirely if no prompt files are in the diff.
1. Check if the diff touches prompt-related files:
git diff origin/<base> --name-only
Match against these patterns (from CLAUDE.md):
- app/services/*_prompt_builder.rb
- app/services/*_generation_service.rb, *_writer_service.rb, *_designer_service.rb
- app/services/*_evaluator.rb, *_scorer.rb, *_classifier_service.rb, *_analyzer.rb
- app/services/concerns/*voice*.rb, *writing*.rb, *prompt*.rb, *token*.rb
- app/services/chat_tools/*.rb, app/services/x_thread_tools/*.rb
- config/system_prompts/*.txt
- test/evals/**/* (eval infrastructure changes affect all suites)
If no matches: Print "No prompt-related files changed — skipping evals." and continue to Step 3.5.
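A rough one-liner approximation of this check — the regex is a sketch of the pattern list above, not an exhaustive translation of it:

```bash
git diff origin/<base> --name-only | grep -E \
  'app/services/.*(_prompt_builder|_generation_service|_writer_service|_designer_service|_evaluator|_scorer|_classifier_service|_analyzer)\.rb|app/services/concerns/.*(voice|writing|prompt|token).*\.rb|app/services/(chat_tools|x_thread_tools)/|config/system_prompts/.*\.txt|^test/evals/' \
  && echo "PROMPT_FILES_CHANGED" || echo "NO_PROMPT_FILES"
```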
2. Identify affected eval suites:
Each eval runner (test/evals/*_eval_runner.rb) declares PROMPT_SOURCE_FILES listing which source files affect it. Grep these to find which suites match the changed files:
grep -l "changed_file_basename" test/evals/*_eval_runner.rb
Map runner → test file: post_generation_eval_runner.rb → post_generation_eval_test.rb.
Special cases:
- test/evals/judges/*.rb, test/evals/support/*.rb, or test/evals/fixtures/ affect ALL suites that use those judges/support files. Check imports in the eval test files to determine which.
- config/system_prompts/*.txt — grep eval runners for the prompt filename to find affected suites.
3. Run affected suites at EVAL_JUDGE_TIER=full:
/ship is a pre-merge gate, so always use full tier (Sonnet structural + Opus persona judges).
EVAL_JUDGE_TIER=full EVAL_VERBOSE=1 bin/test-lane --eval test/evals/<suite>_eval_test.rb 2>&1 | tee /tmp/ship_evals.txt
If multiple suites need to run, run them sequentially (each needs a test lane). If the first suite fails, stop immediately — don't burn API cost on remaining suites.
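A sketch of that sequential run with an early stop — the suite names here are hypothetical placeholders; substitute the runners matched in step 2:

```bash
for runner in post_generation reply_generation; do
  suite="test/evals/${runner}_eval_test.rb"
  EVAL_JUDGE_TIER=full EVAL_VERBOSE=1 bin/test-lane --eval "$suite" 2>&1 \
    | tee "/tmp/ship_evals_${runner}.txt"
  # PIPESTATUS[0] is the test-lane exit code, not tee's
  [ "${PIPESTATUS[0]}" -eq 0 ] || { echo "Eval suite ${runner} failed — stopping."; break; }
done
```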
4. Check results:
5. Save eval output — include eval results and cost dashboard in the PR body (Step 8).
Tier reference (for context — /ship always uses full):
| Tier | When | Speed (cached) | Cost |
|---|---|---|---|
| fast (Haiku) | Dev iteration, smoke tests | ~5s (14x faster) | ~$0.07/run |
| standard (Sonnet) | Default dev, bin/test-lane --eval | ~17s (4x faster) | ~$0.37/run |
| full (Opus persona) | /ship and pre-merge | ~72s (baseline) | ~$1.27/run |
100% coverage is the goal — every untested path is a path where bugs hide and vibe coding becomes yolo coding. Evaluate what was ACTUALLY coded (from the diff), not what was planned.
0. Before/after test count:
# Count test files before any generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
Store this number for the PR body.
1. Trace every codepath changed using git diff origin/<base>...HEAD:
Read every changed file. For each one, trace how data flows through the code — don't just list functions, actually follow the execution:
This is the critical step — you're building a map of every line of code that can execute differently based on input. Every branch in this diagram needs a test.
2. Map user flows, interactions, and error states:
Code coverage isn't enough — you need to cover how real users interact with the changed code. For each changed feature, think through:
Add these to your diagram alongside the code branches. A user flow with no test is just as much a gap as an untested if/else.
3. Check each branch against existing tests:
Go through your diagram branch by branch — both code paths AND user flows. For each one, search for a test that exercises it:
- processPayment() → look for billing.test.ts, billing.spec.ts, test/billing_test.rb
- A helperFn() that has its own branches → those branches need tests too
Quality scoring rubric:
4. Output ASCII coverage diagram:
Include BOTH code paths and user flows in the same diagram:
CODE PATH COVERAGE
===========================
[+] src/services/billing.ts
│
├── processPayment()
│ ├── [★★★ TESTED] Happy path + card declined + timeout — billing.test.ts:42
│ ├── [GAP] Network timeout — NO TEST
│ └── [GAP] Invalid currency — NO TEST
│
└── refundPayment()
├── [★★ TESTED] Full refund — billing.test.ts:89
└── [★ TESTED] Partial refund (checks non-throw only) — billing.test.ts:101
USER FLOW COVERAGE
===========================
[+] Payment checkout flow
│
├── [★★★ TESTED] Complete purchase — checkout.e2e.ts:15
├── [GAP] Double-click submit — NO TEST
├── [GAP] Navigate away during payment — NO TEST
└── [★ TESTED] Form validation errors (checks render only) — checkout.test.ts:40
[+] Error states
│
├── [★★ TESTED] Card declined message — billing.test.ts:58
├── [GAP] Network timeout UX (what does user see?) — NO TEST
└── [GAP] Empty cart submission — NO TEST
─────────────────────────────────
COVERAGE: 5/12 paths tested (42%)
Code paths: 3/5 (60%)
User flows: 2/7 (29%)
QUALITY: ★★★: 2 ★★: 2 ★: 1
GAPS: 7 paths need tests
─────────────────────────────────
Fast path: All paths covered → "Step 3.4: All new code paths have test coverage ✓" Continue.
5. Generate tests for uncovered paths:
If test framework detected (or bootstrapped in Step 2.5):
- Commit the generated tests with message: test: coverage for {feature}
- Caps: 30 code paths max, 20 tests generated max (code + user flow combined), 2-min per-test exploration cap.
If no test framework AND user declined bootstrap → diagram only, no generation. Note: "Test generation skipped — no test framework configured."
Diff is test-only changes: Skip Step 3.4 entirely: "No new application code paths to audit."
6. After-count and coverage summary:
# Count test files after generation
find . -name '*.test.*' -o -name '*.spec.*' -o -name '*_test.*' -o -name '*_spec.*' | grep -v node_modules | wc -l
For PR body: Tests: {before} → {after} (+{delta} new)
Coverage line: Test Coverage Audit: N new code paths. M covered (X%). K tests generated, J committed.
Review the diff for structural issues that tests don't catch.
Read .claude/skills/review/checklist.md. If the file cannot be read, STOP and report the error.
Run git diff origin/<base> to get the full diff (scoped to feature changes against the freshly-fetched base branch).
Apply the review checklist in two passes:
Check if the diff touches frontend files using gstack-diff-scope:
eval $(~/.claude/skills/gstack/bin/gstack-diff-scope <base> 2>/dev/null)
If SCOPE_FRONTEND=false: Skip design review silently. No output.
If SCOPE_FRONTEND=true:
Check for DESIGN.md. If DESIGN.md or design-system.md exists in the repo root, read it. All design findings are calibrated against it — patterns blessed in DESIGN.md are not flagged. If not found, use universal design principles.
Read .claude/skills/review/design-checklist.md. If the file cannot be read, skip design review with a note: "Design checklist not found — skipping design review."
Read each changed frontend file (full file, not just diff hunks). Frontend files are identified by the patterns listed in the checklist.
Apply the design checklist against the changed files. For each item:
- outline: none, !important, font-size < 16px: classify as AUTO-FIX
Include findings in the review output under a "Design Review" header, following the output format in the checklist. Design findings merge with code review findings into the same Fix-First flow.
Log the result for the Review Readiness Dashboard:
eval $(~/.claude/skills/gstack/bin/gstack-slug 2>/dev/null)
mkdir -p ~/.gstack/projects/$SLUG
echo '{"skill":"design-review-lite","timestamp":"TIMESTAMP","status":"STATUS","findings":N,"auto_fixed":M}' >> ~/.gstack/projects/$SLUG/$BRANCH-reviews.jsonl
Substitute: TIMESTAMP = ISO 8601 datetime, STATUS = "clean" if 0 findings or "issues_found", N = total findings, M = auto-fixed count.
Include any design findings alongside the code review findings. They follow the same Fix-First flow below.
Classify each finding as AUTO-FIX or ASK per the Fix-First Heuristic in checklist.md. Critical findings lean toward ASK; informational lean toward AUTO-FIX.
Auto-fix all AUTO-FIX items. Apply each fix. Output one line per fix:
[AUTO-FIXED] [file:line] Problem → what you did
If ASK items remain, present them in ONE AskUserQuestion:
After all fixes (auto + user-approved):
git add <fixed-files> && git commit -m "fix: pre-landing review fixes"), then STOP and tell the user to run /ship again to re-test.Output summary: Pre-Landing Review: N issues — M auto-fixed, K asked (J fixed, L skipped)
If no issues found: Pre-Landing Review: No issues found.
Save the review output — it goes into the PR body in Step 8.
Read .claude/skills/review/greptile-triage.md and follow the fetch, filter, classify, and escalation detection steps.
If no PR exists, gh fails, API returns an error, or there are zero Greptile comments: Skip this step silently. Continue to Step 4.
If Greptile comments are found:
Include a Greptile summary in your output: + N Greptile comments (X valid, Y fixed, Z FP)
Before replying to any comment, run the Escalation Detection algorithm from greptile-triage.md to determine whether to use Tier 1 (friendly) or Tier 2 (firm) reply templates.
For each classified comment:
VALID & ACTIONABLE: Use AskUserQuestion with:
- RECOMMENDATION: Choose A because [one-line reason]
If the user approves the fix: apply it, commit (git add <fixed-files> && git commit -m "fix: address Greptile review — <brief description>"), reply using the Fix reply template from greptile-triage.md (include inline diff + explanation), and save to both per-project and global greptile-history (type: fix).
VALID BUT ALREADY FIXED: Reply using the Already Fixed reply template from greptile-triage.md — no AskUserQuestion needed:
FALSE POSITIVE: Use AskUserQuestion:
SUPPRESSED: Skip silently — these are known false positives from previous triage.
After all comments are resolved: If any fixes were applied, the tests from Step 3 are now stale. Re-run tests (Step 3) before continuing to Step 4. If no fixes were applied, continue to Step 4.
Read the current VERSION file (4-digit format: MAJOR.MINOR.PATCH.MICRO)
Auto-decide the bump level based on the diff:
- Use the diff size as a signal: git diff origin/<base>...HEAD --stat | tail -1
Compute the new version:
- 0.19.1.0 + PATCH → 0.19.2.0
Write the new version to the VERSION file.
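A sketch of the PATCH-bump arithmetic on the 4-digit format (adjust which segment is incremented and reset for MINOR/MAJOR bumps):

```bash
OLD=$(cat VERSION)
IFS=. read -r MAJ MIN PAT MIC <<< "$OLD"   # split MAJOR.MINOR.PATCH.MICRO
NEW="$MAJ.$MIN.$((PAT + 1)).0"             # PATCH bump, e.g. 0.19.1.0 -> 0.19.2.0
echo "$NEW" > VERSION
echo "VERSION: $OLD -> $NEW"
```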
Read CHANGELOG.md header to know the format.
Auto-generate the entry from ALL commits on the branch (not just recent ones):
- git log <base>..HEAD --oneline to see every commit being shipped
- git diff <base>...HEAD to see the full diff against the base branch
- ### Added — new features
- ### Changed — changes to existing functionality
- ### Fixed — bug fixes
- ### Removed — removed features
- Entry heading: ## [X.Y.Z.W] - YYYY-MM-DD
Do NOT ask the user to describe changes. Infer from the diff and commit history.
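A small sketch of building the new entry heading — the bullets under it are inferred from the diff and commits, not generated mechanically:

```bash
NEW_VERSION=$(cat VERSION)
TODAY=$(date +%Y-%m-%d)
printf '## [%s] - %s\n' "$NEW_VERSION" "$TODAY"
# Insert this heading (plus the ### Added / ### Changed / ### Fixed / ### Removed groups)
# directly beneath the CHANGELOG.md header, preserving the file's existing format.
```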
Cross-reference the project's TODOS.md against the changes being shipped. Mark completed items automatically; prompt only if the file is missing or disorganized.
Read .claude/skills/review/TODOS-format.md for the canonical format reference.
1. Check if TODOS.md exists in the repository root.
If TODOS.md does not exist: Use AskUserQuestion:
Create TODOS.md with a skeleton (# TODOS heading + ## Completed section). Continue to step 3.
2. Check structure and organization:
Read TODOS.md and verify it follows the recommended structure:
- ## <Skill/Component> headings
- **Priority:** field with P0-P4 value
- ## Completed section at the bottom
If disorganized (missing priority fields, no component groupings, no Completed section): Use AskUserQuestion:
3. Detect completed TODOs:
This step is fully automatic — no user interaction.
Use the diff and commit history already gathered in earlier steps:
- git diff <base>...HEAD (full diff against the base branch)
- git log <base>..HEAD --oneline (all commits being shipped)
For each TODO item, check if the changes in this PR complete it by:
Be conservative: Only mark a TODO as completed if there is clear evidence in the diff. If uncertain, leave it alone.
4. Move completed items to the ## Completed section at the bottom. Append: **Completed:** vX.Y.Z (YYYY-MM-DD)
5. Output summary:
- TODOS.md: N items marked complete (item1, item2, ...). M items remaining.
- TODOS.md: No completed items detected. M items remaining.
- TODOS.md: Created. / TODOS.md: Reorganized.
6. Defensive: If TODOS.md cannot be written (permission error, disk full), warn the user and continue. Never stop the ship workflow for a TODOS failure.
Save this summary — it goes into the PR body in Step 8.
Goal: Create small, logical commits that work well with git bisect and help LLMs understand what changed.
Analyze the diff and group changes into logical commits. Each commit should represent one coherent change — not one file, but one logical unit.
Commit ordering (earlier commits first):
Rules for splitting:
Each commit must be independently valid — no broken imports, no references to code that doesn't exist yet. Order commits so dependencies come first.
Compose each commit message:
- <type>: <summary> (type = feat/fix/chore/refactor/docs)
git commit -m "$(cat <<'EOF'
chore: bump version and changelog (vX.Y.Z.W)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
EOF
)"
IRON LAW: NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE.
Before pushing, re-verify if code changed during Steps 4-6:
Test verification: If ANY code changed after Step 3's test run (fixes from review findings, CHANGELOG edits don't count), re-run the test suite. Paste fresh output. Stale output from Step 3 is NOT acceptable.
Build verification: If the project has a build step, run it. Paste output.
Rationalization prevention:
If tests fail here: STOP. Do not push. Fix the issue and return to Step 3.
Claiming work is complete without verification is dishonesty, not efficiency.
Push to the remote with upstream tracking:
git push -u origin <branch-name>
Create a pull request using gh:
gh pr create --base <base> --title "<type>: <summary>" --body "$(cat <<'EOF'
## Summary
<bullet points from CHANGELOG>
## Test Coverage
<coverage diagram from Step 3.4, or "All new code paths have test coverage.">
<If Step 3.4 ran: "Tests: {before} → {after} (+{delta} new)">
## Pre-Landing Review
<findings from Step 3.5 code review, or "No issues found.">
## Design Review
<If design review ran: "Design Review (lite): N findings — M auto-fixed, K skipped. AI Slop: clean/N issues.">
<If no frontend files changed: "No frontend files changed — design review skipped.">
## Eval Results
<If evals ran: suite names, pass/fail counts, cost dashboard summary. If skipped: "No prompt-related files changed — evals skipped.">
## Greptile Review
<If Greptile comments were found: bullet list with [FIXED] / [FALSE POSITIVE] / [ALREADY FIXED] tag + one-line summary per comment>
<If no Greptile comments found: "No Greptile comments.">
<If no PR existed during Step 3.75: omit this section entirely>
## TODOS
<If items marked complete: bullet list of completed items with version>
<If no items completed: "No TODO items completed in this PR.">
<If TODOS.md created or reorganized: note that>
<If TODOS.md doesn't exist and user skipped: omit this section>
## Test plan
- [x] All Rails tests pass (N runs, 0 failures)
- [x] All Vitest tests pass (N tests)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
EOF
)"
Output the PR URL — this should be the final output the user sees.
- git push only.
- YYYY-MM-DD
- The user said /ship — the next thing they see is the review + PR URL.