~cytrogen/gstack

b5b2a15a — Garry Tan a month ago
fix: pass all LLM evals — severity defs, rubric edge cases, EVALS=1 flag

- Add severity classification to qa/SKILL.md health rubric (Critical/High/Medium/Low
  with examples, ambiguity default, cross-category rule)
- Fix overlapping console-error boundaries in the rubric (4-10 → 11+)
- Add untested-category rule (score 100)
- Lower rubric completeness baseline to 3 (judge consistently flags edge cases
  that are intentionally left to agent judgment)
- Unified EVALS=1 flag for all paid tests

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
76803d78 — Garry Tan a month ago
feat: 3-tier eval suite with planted-bug outcome testing (EVALS=1)

Adds comprehensive eval infrastructure:
- Tier 1 (free): 13 new static tests — cross-skill path consistency, QA
  structure validation, greptile format, planted-bug fixture validation
- Tier 2 (Agent SDK E2E): /qa quick, /review with pre-built git repo,
  3 planted-bug outcome evals (static, SPA, checkout — each with 5 bugs)
- Tier 3 (LLM judge): QA workflow quality, health rubric clarity,
  cross-skill consistency, baseline score pinning

New fixtures: 3 HTML pages with 15 total planted bugs, ground truth JSON,
review-eval-vuln.rb, eval-baselines.json. Shared llm-judge.ts helper (DRY).
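As a hedged illustration only, the planted-bug outcome scoring these fixtures enable could be computed along these lines; the type and function names below are assumptions, not the repo's actual code:

```typescript
// Illustrative shape for a planted-bug ground-truth entry and an
// outcome-scoring helper. Field and function names are assumptions.
interface PlantedBug {
  id: string;
  severity: "Critical" | "High" | "Medium" | "Low";
}

// Fraction of planted bugs an agent run actually reported.
function bugRecall(planted: PlantedBug[], foundIds: string[]): number {
  const found = new Set(foundIds);
  const hits = planted.filter((b) => found.has(b.id)).length;
  return planted.length === 0 ? 0 : hits / planted.length;
}
```

With 5 planted bugs per fixture, a run that surfaces 4 of them would score 0.8 for that page under this sketch.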

Unified EVALS=1 flag replaces SKILL_E2E + ANTHROPIC_API_KEY checks.
`bun run test:evals` runs everything that costs money (~$4/run).
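A minimal sketch of what that unified gate might look like; the function name and signature here are assumptions, not the repo's actual code:

```typescript
// Hypothetical EVALS=1 gate: one flag decides whether paid tests run,
// replacing the previous SKILL_E2E + ANTHROPIC_API_KEY checks.
export function evalsEnabled(
  env: Record<string, string | undefined> = process.env,
): boolean {
  return env.EVALS === "1";
}
```

Each paid test suite would check this once at startup and skip itself when the flag is absent.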

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>