Mirror of https://github.com/obra/superpowers.git (synced 2026-05-14 13:09:05 +08:00)
Compare commits
28 Commits (codex/supe... ... codex/sup-...)

SHA1: ad2db13001, 3d6dc90c6d, a152bb3932, 3dfb376268, 491df7360c, 9088f563e7, d4cf61b4c8, 7f02ccd91b, 35e42a16ce, 58082d04f8, 3dc0ea6876, 0bf37499b4, f7c5312265, f5175fb31a, 45c7dc2cce, 39d29a6c28, f1d2005de3, c0a65f1b4d, f10cddac0d, 371f41596b, 6f0adebe96, fd5b53cb85, be0357f98a, 3b412a3836, 2e46e9590d, 58f821314d, 81472cc9e6, b4363df1b9
@@ -19,7 +19,5 @@
    "workflows"
  ],
  "skills": "./skills/",
  "agents": "./agents/",
  "commands": "./commands/",
  "hooks": "./hooks/hooks-cursor.json"
}
.gitignore (vendored): 6 lines changed
@@ -5,3 +5,9 @@
node_modules/
inspo
triage/

# Eval harness — drill ships its own gitignore at evals/.gitignore;
# these are belt-and-suspenders entries for tools that don't recurse.
evals/results/
evals/.venv/
evals/.env
.gitmodules (vendored, new file): 3 lines
@@ -0,0 +1,3 @@
[submodule "evals"]
	path = evals
	url = git@github.com:prime-radiant-inc/superpowers-evals.git
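The submodule wiring above can be rehearsed end-to-end with local repositories. A minimal sketch, assuming `git` is available; all repository names and paths here are demo stand-ins, not the real superpowers layout:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_CONFIG_GLOBAL=/dev/null GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# Stand-in for the evals repo, plus a superproject that embeds it
git init -q evals-src && git -C evals-src commit -q --allow-empty -m 'eval harness'
git init -q superpowers && cd superpowers
# Local-path submodule transport is disabled by default in modern git; allow it for the demo
git -c protocol.file.allow=always submodule --quiet add "$tmp/evals-src" evals
git commit -q -m 'add evals submodule'

# A fresh clone starts with an empty evals/ until the submodule is initialized,
# hence the documented `git submodule update --init evals` step
cd "$tmp" && git clone -q superpowers fresh && cd fresh
git -c protocol.file.allow=always submodule update --init evals
git -C evals rev-parse --is-inside-work-tree
```

The final command only succeeds once the submodule checkout exists, which is why the docs below tell contributors to run the `update --init` step right after cloning.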
.pre-commit-config.yaml (new file): 21 lines
@@ -0,0 +1,21 @@
repos:
  - repo: local
    hooks:
      - id: evals-ruff-check
        name: evals ruff check
        entry: uv --project evals run ruff check
        language: system
        files: ^evals/.*\.py$

      - id: evals-ruff-format-check
        name: evals ruff format --check
        entry: uv --project evals run ruff format --check
        language: system
        files: ^evals/.*\.py$

      - id: evals-ty-check
        name: evals ty check
        entry: uv --directory evals run ty check
        language: system
        pass_filenames: false
        files: ^evals/.*\.py$
@@ -94,6 +94,10 @@ Skills are not prose — they are code that shapes agent behavior. If you modify
- Show before/after eval results in your PR
- Do not modify carefully-tuned content (Red Flags tables, rationalization lists, "human partner" language) without evidence the change is an improvement

## Eval harness

Skill-behavior evals live in the `evals/` submodule — after cloning, run `git submodule update --init evals`, then see `evals/README.md`. Drill (the harness) drives real tmux sessions of Claude Code / Codex / Gemini CLI and judges skill compliance with an LLM verifier. Plugin-infrastructure tests still live at `tests/`.

## Understand the Project Before Contributing

Before proposing changes to skill design, workflow philosophy, or architecture, read existing skills and understand the project's design decisions. Superpowers has its own tested philosophy about skill design, agent behavior shaping, and terminology (e.g., "your human partner" is deliberate, not interchangeable with "the user"). Changes that rewrite the project's voice or restructure its approach without understanding why it exists will be rejected.
@@ -25,7 +25,7 @@ If Superpowers has helped you do stuff that makes money and you are so inclined,

Thanks!

- Jesse
\- Jesse

## Installation
@@ -214,6 +214,8 @@ The general contribution process for Superpowers is below. Keep in mind that we
4. Follow the `writing-skills` skill for creating and testing new and modified skills
5. Submit a PR, being sure to fill in the pull request template.

Skill-behavior tests use the eval harness submodule at `evals/`. After cloning this repo, run `git submodule update --init evals`, then see `evals/README.md` for setup. Plugin-infrastructure tests live at `tests/` and run via the relevant `run-*.sh` or `npm test`.

See `skills/writing-skills/SKILL.md` for the complete guide.

## Updating
@@ -50,6 +50,8 @@ New `sync-to-codex-plugin` script mirrors superpowers into the OpenAI Codex plug
- **Single source of truth** — the persona/checklist that previously lived in both `agents/code-reviewer.md` and the skill's placeholder template (and drifted independently) is now one file.
- **`subagent-driven-development` follows suit** — its `code-quality-reviewer-prompt.md` now dispatches `Task (general-purpose)` instead of the named agent.
- **Behavioral test added** — `tests/claude-code/test-requesting-code-review.sh` plants real bugs (SQL injection, plaintext password handling, credential logging) into a tiny project and asserts the dispatched reviewer flags every planted issue at Critical/Important severity and refuses to approve the diff.

> Note: `tests/claude-code/test-requesting-code-review.sh` and `tests/claude-code/test-document-review-system.sh` (mentioned later in this document) were lifted into drill scenarios on 2026-05-06 and removed from `tests/`. See `evals/scenarios/code-review-catches-planted-bugs.yaml` and `evals/scenarios/spec-reviewer-catches-planted-flaws.yaml`. The references above and below are preserved as dated artifacts of the work this section describes.

- **Codex and Copilot workaround docs trimmed** — the "Named agent dispatch" sections in `references/codex-tools.md` and `references/copilot-tools.md` documented how to flatten a named agent into a generic dispatch. With no named agents shipping, the workaround is unnecessary; both sections were dropped.

### Subagent-Driven Development
@@ -555,6 +555,8 @@ Should show exactly 6 files changed (5 skill files + 1 test file). No other file
If test runner exists:
```bash
# Run skill-triggering tests
# Note: tests/skill-triggering/ was lifted into drill scenarios on 2026-05-06.
# See evals/scenarios/triggering-*.yaml. The reference below is a dated artifact.
./tests/skill-triggering/run-all.sh 2>/dev/null || echo "Skill triggering tests not available in this environment"

# Run SDD integration test
```
@@ -275,23 +275,16 @@ If no native tool is available, create a worktree manually using git.

Follow this priority order:

1. **Check existing directories:**
1. **Check your instructions for a worktree directory preference.** If specified, use it without asking.

2. **Check existing project-local directories:**
   ```bash
   ls -d .worktrees 2>/dev/null   # Preferred (hidden)
   ls -d worktrees 2>/dev/null    # Alternative
   ```
   If found, use that directory. If both exist, `.worktrees` wins.

2. **Check for existing global directory:**
   ```bash
   project=$(basename "$(git rev-parse --show-toplevel)")
   ls -d ~/.config/superpowers/worktrees/$project 2>/dev/null
   ```
   If found, use it (backward compatibility with legacy global path).

3. **Check your instructions for a worktree directory preference.** If specified, use it without asking.

4. **Default to `.worktrees/`.**
3. **Default to `.worktrees/`.**

#### Safety Verification (project-local directories only)
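The rewritten priority order (instructions first, then existing project-local directories, then the default) condenses into a small helper. A sketch, assuming the instruction-file preference has already been read into `WORKTREE_DIR_PREF` — that variable name is illustrative, not part of the skill:

```shell
choose_worktree_dir() {
  # 1. An explicit preference from the agent instructions wins outright
  if [ -n "${WORKTREE_DIR_PREF:-}" ]; then
    printf '%s\n' "$WORKTREE_DIR_PREF"
    return
  fi
  # 2. Existing project-local directories; .worktrees beats worktrees when both exist
  if [ -d .worktrees ]; then printf '.worktrees\n'; return; fi
  if [ -d worktrees ];  then printf 'worktrees\n';  return; fi
  # 3. Default
  printf '.worktrees\n'
}
```

With no preference and only `worktrees/` present this prints `worktrees`; with both directories present, `.worktrees` wins, matching the tie-break rule above.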
@@ -305,16 +298,11 @@ git check-ignore -q .worktrees 2>/dev/null || git check-ignore -q worktrees 2>/d

**Why critical:** Prevents accidentally committing worktree contents to repository.

Global directories (`~/.config/superpowers/worktrees/`) need no verification.

#### Create the Worktree

```bash
project=$(basename "$(git rev-parse --show-toplevel)")

# Determine path based on chosen location
# For project-local: path="$LOCATION/$BRANCH_NAME"
# For global: path="~/.config/superpowers/worktrees/$project/$BRANCH_NAME"
path="$LOCATION/$BRANCH_NAME"

git worktree add "$path" -b "$BRANCH_NAME"
cd "$path"
```
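The safety check and the create step compose cleanly in a throwaway repo. A sketch under demo values (`demo`, `feature-x`, and the identity settings are stand-ins):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_CONFIG_GLOBAL=/dev/null GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q demo && cd demo
git commit -q --allow-empty -m 'init'

# Safety verification: the chosen directory must be ignored before use
echo '.worktrees/' >> .gitignore
git add .gitignore && git commit -q -m 'ignore worktree dir'
mkdir -p .worktrees
git check-ignore -q .worktrees      # exits 0: safe to create worktrees here

# Create the worktree exactly as the skill describes
BRANCH_NAME=feature-x
LOCATION=.worktrees
path="$LOCATION/$BRANCH_NAME"
git worktree add "$path" -b "$BRANCH_NAME" >/dev/null 2>&1
git -C "$path" rev-parse --abbrev-ref HEAD
```

If `.gitignore` lacked the entry, `git check-ignore -q` would exit nonzero and (under `set -e`) abort before any worktree was created, which is the "add to .gitignore + commit" branch of the edge-case table.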
@@ -387,7 +375,6 @@ Ready to implement <feature-name>
| `worktrees/` exists | Use it (verify ignored) |
| Both exist | Use `.worktrees/` |
| Neither exists | Check instruction file, then default `.worktrees/` |
| Global path exists | Use it (backward compat) |
| Directory not ignored | Add to .gitignore + commit |
| Permission error on create | Sandbox fallback, work in place |
| Tests fail during baseline | Report failures + ask |
@@ -464,7 +451,7 @@ git commit -m "feat: rewrite using-git-worktrees with detect-and-defer (PRI-974)
Step 0: GIT_DIR != GIT_COMMON detection (skip if already isolated)
Step 0 consent: opt-in prompt before creating worktree (#991)
Step 1a: native tool preference (short, first, declarative)
Step 1b: git worktree fallback with hooks symlink and legacy path compat
Step 1b: git worktree fallback with project-local directory policy
Submodule guard prevents false detection
Platform-neutral instruction file references (#1049)"
```
@@ -663,7 +650,7 @@ WORKTREE_PATH=$(git rev-parse --show-toplevel)

**If `GIT_DIR == GIT_COMMON`:** Normal repo, no worktree to clean up. Done.

**If worktree path is under `.worktrees/` or `~/.config/superpowers/worktrees/`:** Superpowers created this worktree — we own cleanup.
**If worktree path is under `.worktrees/` or `worktrees/`:** Superpowers created this worktree — we own cleanup.

```bash
MAIN_ROOT=$(git -C "$(git rev-parse --git-common-dir)/.." rev-parse --show-toplevel)
```
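The `GIT_DIR`/`GIT_COMMON` comparison and the `MAIN_ROOT` resolution above can be exercised in a scratch repo; a sketch with demo names:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
export GIT_CONFIG_GLOBAL=/dev/null GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q demo && cd demo
git commit -q --allow-empty -m 'init'
git worktree add .worktrees/wt -b wt >/dev/null 2>&1

cd .worktrees/wt
# In a linked worktree the per-worktree git dir differs from the shared common dir
GIT_DIR_PATH=$(git rev-parse --absolute-git-dir)
GIT_COMMON=$(git rev-parse --git-common-dir)
[ "$GIT_DIR_PATH" != "$GIT_COMMON" ] && echo 'inside a linked worktree'

# Resolve the main repo root from inside the worktree, as in the block above
MAIN_ROOT=$(git -C "$(git rev-parse --git-common-dir)/.." rev-parse --show-toplevel)
basename "$MAIN_ROOT"
```

From the main checkout the two paths are equal and the comparison correctly reports "no worktree to clean up".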
@@ -707,7 +694,7 @@ git worktree prune  # Self-healing: clean up any stale registrations

**Cleaning up harness-owned worktrees**
- **Problem:** Removing a worktree the harness created causes phantom state
- **Fix:** Only clean up worktrees under `.worktrees/` or `~/.config/superpowers/worktrees/`
- **Fix:** Only clean up worktrees under `.worktrees/` or `worktrees/`

**No confirmation for discard**
- **Problem:** Accidentally delete work
docs/superpowers/plans/2026-05-06-lift-drill-into-evals.md (new file): 1374 lines. File diff suppressed because it is too large.
@@ -46,7 +46,7 @@ The skill describes the goal ("ensure work happens in an isolated workspace") an

### Provenance-based ownership

Whoever creates the worktree owns its cleanup. If the harness created it, superpowers doesn't touch it. If superpowers created it (via git fallback), superpowers cleans it up. The heuristic: if the worktree lives under `.worktrees/` or `~/.config/superpowers/worktrees/`, superpowers owns it. Anything else (`.claude/worktrees/`, `~/.codex/worktrees/`, `.gemini/worktrees/`) belongs to the harness.
Whoever creates the worktree owns its cleanup. If the harness created it, superpowers doesn't touch it. If superpowers created it (via git fallback), superpowers cleans it up. The heuristic: if the worktree lives under `.worktrees/` or `worktrees/`, superpowers owns it. Anything else (`.claude/worktrees/`, `~/.codex/worktrees/`, `.gemini/worktrees/`, or old user-global Superpowers paths) belongs to the harness or user and is left alone.

## Design
@@ -110,12 +110,11 @@ File splitting (Step 1b in a separate skill) was tested and proven unnecessary.
When no native tool is available, create a worktree manually.

**Directory selection** (priority order):
1. Check for existing `.worktrees/` or `worktrees/` directory — if found, use it. If both exist, `.worktrees/` wins.
2. Check for existing `~/.config/superpowers/worktrees/<project>/` directory — if found, use it (backward compatibility with legacy global path).
3. Check the project's agent instruction file (CLAUDE.md, GEMINI.md, AGENTS.md, .cursorrules, or equivalent) for a worktree directory preference.
4. Default to `.worktrees/`.
1. Check the project's agent instruction file (CLAUDE.md, GEMINI.md, AGENTS.md, .cursorrules, or equivalent) for a worktree directory preference.
2. Check for existing `.worktrees/` or `worktrees/` directory — if found, use it. If both exist, `.worktrees/` wins.
3. Default to `.worktrees/`.

No interactive directory selection prompt. The global path (`~/.config/superpowers/worktrees/`) is no longer offered as a choice to new users, but existing worktrees at that location are detected and used for backward compatibility.
No interactive directory selection prompt. Old user-global Superpowers worktree paths are not detected or offered; new manual worktrees are project-local unless the user explicitly specifies another location.

**Safety verification** (project-local directories only):
@@ -232,7 +231,7 @@ if GIT_DIR == GIT_COMMON:
    # Normal repo, no worktree to clean up
    done

if worktree path is under .worktrees/ or ~/.config/superpowers/worktrees/:
if worktree path is under .worktrees/ or worktrees/:
    # Superpowers created it — we own cleanup
    cd to main repo root  # Bug #238 fix
    git worktree remove <path>
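The cleanup guard in the pseudocode above amounts to a prefix check against the main repo root. A sketch of the new project-local policy (the function name and argument convention are illustrative):

```shell
# Returns 0 when superpowers owns the worktree (and therefore its cleanup)
owns_worktree() {
  wt_path=$1    # absolute worktree path
  main_root=$2  # absolute main repo root
  case "$wt_path" in
    "$main_root"/.worktrees/*|"$main_root"/worktrees/*) return 0 ;;
    # Anything else (.claude/worktrees/, ~/.codex/worktrees/, ...) is harness- or
    # user-owned: hands off
    *) return 1 ;;
  esac
}
```

Anchoring the match at `$main_root` matters: a bare `*/worktrees/*` pattern would also match `.claude/worktrees/`, which the design explicitly assigns to the harness.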
@@ -318,7 +317,7 @@ As of 2026-04-06, Claude Code is the only harness with an agent-callable mid-ses

### Provenance heuristic

The "`.worktrees/` or `~/.config/superpowers/worktrees/` = ours, anything else = hands off" heuristic works for every current harness. If a future harness adopts `.worktrees/` as its convention, we'd have a false positive (superpowers tries to clean up a harness-owned worktree). Similarly, if a user manually runs `git worktree add .worktrees/experiment` without superpowers, we'd incorrectly claim ownership. Both are low risk — every harness uses branded paths, and manual `.worktrees/` creation is unlikely — but worth noting.
The "`.worktrees/` or `worktrees/` = ours, anything else = hands off" heuristic works for every current harness. If a future harness adopts one of those project-local directories as its convention, we'd have a false positive (superpowers tries to clean up a harness-owned worktree). Similarly, if a user manually runs `git worktree add .worktrees/experiment` without superpowers, we'd incorrectly claim ownership. Both are low risk — every harness uses branded paths, and manual `.worktrees/` creation is unlikely — but worth noting.

### Detached HEAD finishing
@@ -0,0 +1,247 @@
|
||||
# Lift drill into superpowers as `evals/` — design
|
||||
|
||||
## Background
|
||||
|
||||
Drill is a Python skill-compliance benchmark that lives in its own repo at `obra/drill`. It drives real tmux sessions, runs an LLM actor as a simulated user, runs an LLM verifier on the resulting transcript, and reports pass/fail per scenario. It supports Claude Code, Codex, Gemini CLI, and (per recent commits) OpenCode and Copilot CLI.
|
||||
|
||||
Drill is already the *de facto* eval harness for superpowers. The PRI-1397 commit series in the drill repo lifted ~22 superpowers bash tests into drill scenarios, and the most recent superpowers commit (`a2292c5`) explicitly removed a redundant bash test with the message *"replaced by drill behavioral coverage"*. Migration momentum exists; this spec completes it.
|
||||
|
||||
This work moves drill into superpowers under `evals/`, deletes the redundant bash tests after per-file verification of drill scenario coverage, and updates docs so contributors land on the new structure.
|
||||
|
||||
## Goals

1. `evals/` is the canonical eval harness in superpowers — full drill source, scenarios, fixtures, prompts, backend configs, and tests.
2. Bash tests in `superpowers/tests/` that have been individually verified as 100% covered by drill scenarios are deleted; the rest are preserved.
3. The split between `tests/` (plugin infrastructure: bash + node + python integration tests) and `evals/` (LLM behavior with actor + verifier) is meaningful and documented.
4. Top-level docs (`README.md`, `CLAUDE.md`, `docs/testing.md`) point contributors at the right place.
5. The standalone `obra/drill` repo continues to exist (this PR does not touch it) and gets archived as a separate manual step after this PR merges.

## Non-goals

- **CI integration.** Manual-only here. The natural follow-up is "tiered": fast subset on every PR, full sweep nightly + on-demand. That requires API budget decisions, GitHub Actions secrets, and a runner image with `tmux` + `node` + `python` + `claude` / `codex` / `gemini` CLIs installed. Out of scope.
- **Scenario co-location with skills.** Scenarios stay centralized at `evals/scenarios/`. If we later decide each skill should own its scenarios, that's a path-find-and-rename operation; the YAML format does not change.
- **Renaming the internal Python package** (`drill` → `evals`). The directory is `evals/` (user-facing); the Python package keeps its `drill` name to keep the diff small. A short note in `evals/README.md` explains.
- **Drill repo archival.** This PR does not touch `obra/drill`. After merge, the drill repo is archived manually (read-only on GitHub, README pointer to `obra/superpowers/evals/`).
- **Lifting `tests/claude-code/analyze-token-usage.py` into `evals/bin/`.** Useful utility, not test code. Can move later; not required by this PR.
## Branching

Branch off `dev` as `f/evals-lift`. This work is independent of the open `f/cross-platform` PR — no shared file changes besides possibly `README.md`, which is small enough to resolve at merge time if it conflicts.

## Architecture after the move
```
superpowers/
  evals/                        ← NEW (full drill copy)
    pyproject.toml              (Python 3.11, uv-managed)
    uv.lock
    .gitignore                  (drill's own; results/, .venv/, .env)
    README.md                   (was drill's README; install instructions updated)
    CLAUDE.md                   (was drill's CLAUDE.md; paths updated)
    docs/
      design.md                 (drill's design — preserved verbatim, cross-linked from this spec)
      manual-testing.md
      pressure-and-red-testing.md
    drill/                      (Python package; name kept; cli, engine, actor, verifier, etc.)
    backends/                   (claude-*.yaml, codex.yaml, gemini.yaml)
    scenarios/                  (32+ YAML scenarios)
    setup_helpers/              (15 Python helpers; create_base_repo, sdd_*, spec_*, worktree, etc.)
    fixtures/                   (template-repo, sdd-go-fractals, sdd-svelte-todo)
    prompts/                    (actor.md, verifier.md)
    bin/                        (assertion helper scripts: tool-called, tool-count, etc.)
    tests/                      (drill's own pytest suite)

  tests/                        ← bash tests preserved by default
    brainstorm-server/          ← KEEP (node tests for brainstorm-server JS code)
    opencode/                   ← KEEP (plugin loading tests)
    codex-plugin-sync/          ← KEEP (sync verification)
    claude-code/                ← MOSTLY KEEP — see deletion gate
    explicit-skill-requests/    ← KEEP unless verified replaced
    skill-triggering/           ← KEEP unless verified replaced
    subagent-driven-dev/        ← KEEP unless verified replaced

  docs/
    testing.md                  ← UPDATED (split into "Plugin tests" + "Skill behavior evals")
    superpowers/
      specs/
        2026-05-06-lift-drill-into-evals-design.md  ← THIS SPEC

  README.md                     ← small Contributing-section pointer to evals/
  CLAUDE.md                     ← one-line "Eval harness lives at evals/" pointer
```
The `tests/` and `evals/` directories serve clearly distinct roles after this PR:

- **`tests/`** — does the plugin's non-LLM code work? Unit and integration tests for the brainstorm-server JS code, OpenCode plugin loading, codex-plugin-sync sync verification. Bash + node + python.
- **`evals/`** — do agents behave correctly on real LLM sessions? Drill scenarios with actor + verifier. Python-only, runs real tmux sessions.
## Deletion gate (per bash test)

A bash test is deleted *only if* a drill scenario verifiably covers every assertion it makes. The implementation plan documents this verification per file: read the bash test, list its checks, find the drill scenario, confirm each check has a matching `verify.assertions` or `verify.criteria` entry. If even one check is missing, the option is to either extend the drill scenario or keep the bash test. Default keeps it.

**Tentative coverage map** (commit-message-based; needs per-file verification before any deletion):

| Bash test | Claimed drill replacement | Coverage status |
|-----------|---------------------------|-----------------|
| `tests/skill-triggering/prompts/*` (6 prompt files) | `triggering-*.yaml` (6 scenarios) | candidate — verify per-prompt before deleting |
| `tests/skill-triggering/run-test.sh`, `run-all.sh` | n/a (runners, not tests) | **keep** — runner scripts |
| `tests/explicit-skill-requests/prompts/please-use-brainstorming.txt` | needs verification — drill has no obvious counterpart yet | likely **keep** unless drill scenario added |
| `tests/explicit-skill-requests/prompts/use-systematic-debugging.txt` | needs verification — drill has no obvious counterpart | likely **keep** unless drill scenario added |
| `tests/explicit-skill-requests/run-claude-describes-sdd.sh` | partially → `mid-conversation-skill-invocation.yaml` | candidate — verify per-script |
| `tests/explicit-skill-requests/run-haiku-test.sh` | no drill scenario covers Haiku-specific behavior | **keep** |
| `tests/explicit-skill-requests/run-multiturn-test.sh`, `run-extended-multiturn-test.sh` | no drill scenario covers multi-turn build-up | **keep** unless drill scenarios added |
| `tests/explicit-skill-requests/run-test.sh`, `run-all.sh` | n/a (runners) | **keep** |
| `tests/subagent-driven-dev/go-fractals/`, `tests/subagent-driven-dev/svelte-todo/` | `sdd-go-fractals.yaml`, `sdd-svelte-todo.yaml` | candidate — verify before deleting (these include real assertions about test suites passing) |
| `tests/claude-code/test-document-review-system.sh` | `spec-reviewer-catches-planted-flaws.yaml` | candidate — verify before deleting |
| `tests/claude-code/test-requesting-code-review.sh` | `code-review-catches-planted-bugs.yaml` | candidate — verify before deleting |
| `tests/claude-code/test-subagent-driven-development-integration.sh` | `sdd-rejects-extra-features.yaml` (YAGNI subset) | **partial** — bash test also asserts ≥3 commits / `npm test` passes / runs `analyze-token-usage.py`. Drill scenario asserts forbidden-exports + reviewer-as-gate. Mostly disjoint — almost certainly **keep + extend drill scenario**. |
| `tests/claude-code/test-subagent-driven-development.sh` | meta/documentation test (asks agent to *describe* SDD); no drill scenario covers description tests | **keep** unless drill scenario added |
| `tests/claude-code/test-worktree-native-preference.sh` | `worktree-creation-under-pressure.yaml` | candidate — verify before deleting |
| `tests/claude-code/test-helpers.sh`, `run-skill-tests.sh`, `analyze-token-usage.py` | n/a (utilities, not tests) | **keep** — libraries/tools |
## Verification protocol (subagent-gated)

Every change in the implementation plan gets cross-checked by an independent subagent before commit.

| Change category | Subagent verification |
|----------------|----------------------|
| Each bash-test deletion | Dispatch a subagent with: (a) the bash test file content, (b) the candidate drill scenario YAML, (c) the prompt: *"List every assertion the bash test makes. List every verify entry in the drill scenario. For each bash assertion, find a matching drill check or report it as unmatched. Output a per-assertion table."* The subagent's output is the gate — only delete if every bash assertion has a match. |
| Initial `evals/` copy | Subagent verifies: (a) drill SHA being copied is recorded in the lift commit message so provenance is auditable; (b) **per-file SHA-256 checksum** matches drill repo for every file (not just file count); (c) excluded paths (`.git/`, `.venv/`, `results/`, `.env`, `__pycache__/`, `*.egg-info/`, any `.private-journal/`) are absent from `evals/`; (d) all backend YAMLs reference paths that exist post-move; (e) `pyproject.toml`, `uv.lock`, `.gitignore` are intact. |
| Drill's own pytest suite | Subagent runs `cd evals && uv run pytest` after the path-default change. Drill ships its own pytest suite at `evals/tests/` including `test_backend.py` which exercises `SUPERPOWERS_ROOT` env-var behavior — these tests must update to match the helper and continue to pass. |
| Reference scrubbing after deletion | Subagent greps the entire superpowers tree (excluding `node_modules/`, `.venv/`, and `evals/`) for references to deleted bash test paths. Search targets: `docs/`, `docs/superpowers/plans/`, `RELEASE-NOTES.md`, `CLAUDE.md`, `GEMINI.md`, `AGENTS.md`, `README.md`, `.github/`, `scripts/`, `.opencode/INSTALL.md`, `.codex-plugin/INSTALL.md`, `lefthook.yml`. Any hit is either updated or surfaces a missed dependency. |
| Path defaults change (`SUPERPOWERS_ROOT` default) | Subagent runs at least one cheap drill scenario after the path changes (e.g., `triggering-test-driven-development`) and confirms it still passes. Real validation, not just code review. |
| Final pre-PR adversarial review | Two subagents in parallel, "5 points to whoever finds the most legitimate issues" framing — same protocol used on the cross-platform PR. Verify both source code and behavior. |

Each subagent task gets its own bullet in the implementation plan with explicit inputs and pass criteria. The subagent's output is summarized in the relevant commit message ("Subagent verification: …") so the trail is auditable.
## Concrete path/config edits

**Verified prior to writing this spec.** `drill/cli.py` defines `PROJECT_ROOT = Path(__file__).parent.parent`. After the move, `cli.py` lives at `evals/drill/cli.py`, so `PROJECT_ROOT` resolves to `evals/` and `PROJECT_ROOT.parent` resolves to the superpowers repo root. That's the value `SUPERPOWERS_ROOT` should take by default.

**YAML substitution audit.** Only the four `claude*.yaml` backend configs interpolate `${SUPERPOWERS_ROOT}` into `args` (for the `--plugin-dir` flag); `codex.yaml` and `gemini.yaml` only list `SUPERPOWERS_ROOT` in `required_env` (consumed by `engine.py:233` / `setup.py:25`'s `os.environ["SUPERPOWERS_ROOT"]` lookups in pre/post-run hooks). The helper's `os.environ` mutation covers both code paths.

| File | Current | After |
|------|---------|-------|
| `drill/cli.py` | `load_dotenv(PROJECT_ROOT / ".env")` at module import; nothing about `SUPERPOWERS_ROOT` | After `load_dotenv`, call new helper `_set_superpowers_root_default()` that sets `os.environ["SUPERPOWERS_ROOT"]` to `str(PROJECT_ROOT.parent)` if and only if not already set. Order: `load_dotenv` → set default → click group definitions. |
| `drill/engine.py:233`, `drill/setup.py:25` | Direct `os.environ["SUPERPOWERS_ROOT"]` access (KeyError if unset) | Unchanged. The CLI startup hook guarantees the env var is set by the time the engine/setup execute. |
| `backends/claude*.yaml` (5 files) | `${SUPERPOWERS_ROOT}` substituted in `args` for `--plugin-dir` | Unchanged. YAML substitution reads `os.environ` at backend-load time, which is after CLI startup. |
| `backends/codex.yaml`, `backends/gemini.yaml` | `SUPERPOWERS_ROOT` in `required_env` only | Drop from `required_env` (the helper supplies it). `claude*.yaml` keep `required_env` for backward compat (env var works as override). |
| `evals/tests/test_backend.py` | Tests assert `SUPERPOWERS_ROOT` is in `required_env` lists, plus path-resolution tests | Update tests to match the new contract: helper-supplied default, env override still works, `required_env` no longer required for codex/gemini. |
| `evals/README.md` | "export SUPERPOWERS_ROOT=/path/to/superpowers" | Drop the export line; note that the env var auto-defaults to the parent of `evals/`; mention the only required setup is `ANTHROPIC_API_KEY` (or `OPENAI_API_KEY` / Gemini auth). |
| `evals/CLAUDE.md` | Same | Same |
| `evals/.gitignore` | drill's existing patterns (`results/`, `.venv/`, `__pycache__/`, `.env`, `*.pyc`, `*.egg-info/`, `dist/`, `build/`, `.claude/`) | Copied verbatim. Patterns are relative to file location, so they apply correctly under `evals/`. |
| `evals/lefthook.yml` | drill ships `lefthook.yml` defining `pre-commit: uv run ruff check && uv run ty check` | Move to `evals/lefthook.yml`. Either (a) install lefthook at the superpowers root and have it federate to `evals/lefthook.yml`, or (b) document that contributors run `cd evals && lefthook run pre-commit` manually. **Decision in implementation: option (b) for simplicity** — superpowers' top-level workflow doesn't change. |

`.env` placement: keep `evals/.env` (gitignored). Contributors source it from there or set `ANTHROPIC_API_KEY` in their shell environment.

**Top-level superpowers files needing small additions:**

- `superpowers/.gitignore`: add `evals/results/`, `evals/.venv/`, `evals/.env` (belt-and-suspenders; evals/.gitignore already covers these locally).
- `superpowers/CLAUDE.md`: add a one-line pointer "Eval harness lives at `evals/` — see `evals/README.md`" so agents discover it.
- `superpowers/docs/testing.md`: split into "## Plugin tests" (existing tests/ content, with the deleted-test references trimmed) and "## Skill behavior evals" (one-paragraph summary + pointer to `evals/`).
- `superpowers/README.md`: add a single line in the Contributing section pointing at `evals/` for skill-behavior testing.
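The helper's set-only-if-unset contract in the table maps directly onto standard shell parameter-expansion semantics; a shell rendering of the same rule (the paths are placeholders standing in for `PROJECT_ROOT.parent`):

```shell
# Default applies only when the variable is unset or empty
unset SUPERPOWERS_ROOT
: "${SUPERPOWERS_ROOT:=/path/to/superpowers}"   # helper default: parent of evals/
echo "$SUPERPOWERS_ROOT"                        # prints /path/to/superpowers

# An explicit value set by the user survives as an override
SUPERPOWERS_ROOT=/custom/checkout
: "${SUPERPOWERS_ROOT:=/path/to/superpowers}"
echo "$SUPERPOWERS_ROOT"                        # prints /custom/checkout
```

This is why `engine.py` and `setup.py` can keep their direct `os.environ["SUPERPOWERS_ROOT"]` lookups unchanged: by the time they run, the variable is guaranteed to exist, defaulted or overridden.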
## Migration ordering
|
||||
|
||||
Each step is a separate commit (or small group of commits). Step 2 is the biggest single commit (the verbatim drill copy); subsequent steps are small and atomic.
|
||||
|
||||
```
1. Branch off `dev` (f/evals-lift)

2. Copy drill repo into evals/ (single commit, easy to revert)
   ├─ Record drill SHA at copy time → commit message
   ├─ Use `rsync -a --exclude=.git --exclude=.venv --exclude=results
   │     --exclude=.env --exclude=__pycache__ --exclude='*.egg-info'
   │     --exclude=.private-journal /path/to/drill/ evals/`
   │   (rsync chosen over `cp -r` for explicit excludes; verify that
   │     `find evals -name '.git' -type d` returns nothing)
   ├─ Subagent gate: per-file SHA-256 checksum matches the drill repo for every
   │     non-excluded file; excluded paths absent from evals/
   └─ Smoke check: `cd evals && uv sync` succeeds (proves install only;
         not a behavioral test)

3. Update path defaults
   ├─ Add a _set_superpowers_root_default() helper to drill/cli.py
   ├─ Wire it after load_dotenv, before the click group definition
   ├─ Update evals/README.md and evals/CLAUDE.md (drop the SUPERPOWERS_ROOT install step)
   ├─ Drop SUPERPOWERS_ROOT from required_env in codex.yaml/gemini.yaml
   │     (keep it in claude*.yaml as an override)
   └─ Update evals/tests/test_backend.py to match the new contract

4. Validate from the new location (TWO checks)
   ├─ Run drill's own pytest: `cd evals && uv run pytest` — must pass
   └─ Run a cheap drill scenario: `cd evals && uv run drill run
         triggering-test-driven-development -b claude` — must pass.
       Real behavioral validation, not just code review.

5. Bash test deletion phase — per-file with subagent gate
   For each file in the candidate-deletion list:
   a. Subagent compares bash test assertions vs the drill scenario verify block
   b. Pass criterion: every bash assertion has a matching drill check
   c. If pass → delete the bash test file (one commit per file or per
      coherent group)
   d. If fail → either extend the drill scenario (separate commit + verify) or
      keep the bash test (no commit)

6. Stale-reference scrub
   ├─ Subagent greps the superpowers tree (excluding node_modules/, .venv/,
   │     evals/) for deleted file paths
   ├─ Search targets: docs/, docs/superpowers/plans/, RELEASE-NOTES.md,
   │     CLAUDE.md, GEMINI.md, AGENTS.md, README.md, .github/, scripts/,
   │     .opencode/INSTALL.md, .codex-plugin/INSTALL.md, lefthook.yml
   ├─ Update active references (e.g., docs/testing.md, README.md install)
   └─ Historical references in docs/superpowers/plans/*.md and
         RELEASE-NOTES.md are PRESERVED with a brief annotation
         ("(test removed; behavior covered by drill scenario X)") rather
         than rewritten — these are dated artifacts, not living docs.

7. Top-level docs
   ├─ docs/testing.md split
   ├─ CLAUDE.md pointer
   └─ README.md Contributing section

8. Re-run smoke checks (regression gate)
   ├─ `cd evals && uv run pytest`
   └─ `cd evals && uv run drill run triggering-test-driven-development -b claude`

9. Final adversarial review
   └─ Two parallel subagents, full diff, "5 points to whoever finds the
       most legitimate issues" framing. Address findings before push.

10. Push branch + open PR against dev
    └─ PR description includes: drill SHA pinned at copy, archival action
        item ("after merge: archive obra/drill, add README pointer to
        obra/superpowers/evals/"), per-deleted-file coverage receipts.
```
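The step-2 subagent gate can be sketched as a small shell function. This is a sketch, not drill's actual gate: `verify_copy` and its argument paths are illustrative, and the exclude list mirrors the rsync flags above.

```shell
#!/usr/bin/env bash
# Sketch of the step-2 gate: every non-excluded file in the drill checkout
# must have an identical SHA-256 at the same relative path under evals/.

# Portable checksum: prefer sha256sum (Linux), fall back to shasum (macOS).
sum_of() {
  (sha256sum "$1" 2>/dev/null || shasum -a 256 "$1" 2>/dev/null) | awk '{print $1}'
}

verify_copy() {
  local src="$1" dest="$2" fail=0 rel src_sum dst_sum
  while IFS= read -r -d '' f; do
    rel="${f#"$src"/}"
    src_sum=$(sum_of "$f")
    # A missing destination file yields an empty checksum -> reported as a mismatch
    dst_sum=$(sum_of "$dest/$rel")
    if [ "$src_sum" != "$dst_sum" ]; then
      echo "MISMATCH: $rel"
      fail=1
    fi
  done < <(find "$src" \
    \( -name .git -o -name .venv -o -name results -o -name __pycache__ \
       -o -name '*.egg-info' -o -name .private-journal \) -prune \
    -o -type f ! -name .env -print0)
  return "$fail"
}
```

`verify_copy /path/to/drill evals` lists every differing path and exits non-zero; a silent run means the copy is byte-identical for all non-excluded files.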

## Verification (post-implementation)

The implementation plan must show:

- All non-excluded drill source files present at `evals/` after step 2 (subagent **per-file SHA-256 checksum diff** vs `obra/drill@<recorded-sha>`).
- Excluded paths (`.git/`, `.venv/`, `results/`, `.env`, `__pycache__/`, `*.egg-info/`, `.private-journal/`) absent from `evals/`.
- The step-2 commit message records the drill source SHA.
- `cd evals && uv sync` succeeds without `SUPERPOWERS_ROOT` set.
- `cd evals && uv run pytest` passes (drill's own pytest suite).
- `cd evals && uv run drill list` returns the same scenario count as the standalone drill repo at the recorded SHA.
- `cd evals && uv run drill run triggering-test-driven-development -b claude` passes (proves the path defaults work end-to-end).
- For each deleted bash test: a subagent verification table in the commit message showing every assertion mapped to a drill check.
- Grep for deleted file paths returns zero hits across living superpowers docs (post step 6); historical refs in `docs/superpowers/plans/*.md` and `RELEASE-NOTES.md` are annotated, not rewritten.
- `docs/testing.md` has both "Plugin tests" and "Skill behavior evals" sections.
- The drill repo's history is untouched; `obra/drill` is unaffected by this PR.
- The PR description names the action item to archive `obra/drill` after merge.
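The zero-stale-references check can start from a helper like this. It is a sketch: `scrub_refs` is a hypothetical function (not a repo script), and the example file name below is illustrative; the exclude list matches the step-6 search constraints.

```shell
# List references to deleted test files under a root, skipping vendored
# and harness directories. Exits non-zero when nothing matches.
scrub_refs() {
  local root="$1"; shift
  grep -rn \
    --exclude-dir=node_modules --exclude-dir=.venv --exclude-dir=evals \
    "$@" -- "$root"
}
```

For example, `scrub_refs . -e test-document-review.sh` should print nothing (and exit non-zero) once the scrub is complete.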

## Open questions

None. All clarifying decisions have been made:

| Question | Decision |
|----------|----------|
| Where does drill live in superpowers? | `evals/` (rename from drill); standalone repo archived as a separate step |
| Fate of redundant bash tests? | Delete per-file with subagent verification of coverage; default is to keep |
| Scenarios layout? | Centralized at `evals/scenarios/` |
| Python toolchain placement? | Self-contained at `evals/` |
| CI integration? | Manual-only this PR; documented future path |
| Migration mechanics? | Plain copy; the drill repo's history is preserved in the archived repo, not in-tree |
| Internal Python package name? | Keep as `drill` (the directory is `evals/`) |
| Branching strategy? | Independent off `dev` (not stacked on `f/cross-platform`) |
docs/testing.md (313 lines changed)

@@ -1,303 +1,34 @@

# Testing Superpowers Skills
# Testing Superpowers

This document describes how to test Superpowers skills, particularly the integration tests for complex skills like `subagent-driven-development`.
Superpowers has two distinct kinds of tests, each in its own directory:

## Overview
- **`tests/`** — does the plugin's non-LLM code work? Bash + node + python integration tests for brainstorm-server JS, OpenCode plugin loading, codex-plugin sync, and analysis utilities.
- **`evals/`** — do agents behave correctly on real LLM sessions? Python harness driving real tmux sessions of Claude Code / Codex / Gemini CLI, with an LLM actor and verifier judging skill compliance.

Testing skills that involve subagents, workflows, and complex interactions requires running actual Claude Code sessions in headless mode and verifying their behavior through session transcripts.
## Plugin tests

## Test Structure
Live in `tests/`. Currently:

```
tests/
├── claude-code/
│   ├── test-helpers.sh                                   # Shared test utilities
│   ├── test-subagent-driven-development-integration.sh
│   ├── analyze-token-usage.py                            # Token analysis tool
│   └── run-skill-tests.sh                                # Test runner (if exists)
```
- `tests/brainstorm-server/` — node test suite for the brainstorm server JS code.
- `tests/opencode/` — bash tests for OpenCode plugin loading, bootstrap caching, and tool registration.
- `tests/codex-plugin-sync/` — bash sync verification.
- `tests/claude-code/test-helpers.sh`, `analyze-token-usage.py` — utilities used by the remaining bash tests.
- `tests/claude-code/test-subagent-driven-development.sh` — agent-can-describe-SDD test (no drill counterpart; tests description-recall, not behavior).
- `tests/claude-code/test-subagent-driven-development-integration.sh` — extended SDD integration with token analysis (drill covers the YAGNI subset; bash adds commit-count, TodoWrite, and token telemetry assertions).
- `tests/claude-code/test-worktree-native-preference.sh` — RED-GREEN-REFACTOR validation for the worktree skill (drill covers the PRESSURE phase; bash also covers RED/GREEN baselines).
- `tests/explicit-skill-requests/` — Haiku-specific, multi-turn, and skill-name-prompted tests not covered by drill.

## Running Tests
Run plugin tests via the relevant directory's `run-*.sh` or `npm test`.

### Integration Tests
## Skill behavior evals

Integration tests execute real Claude Code sessions with actual skills:
Live in `evals/`. Drill is the harness; scenarios live at `evals/scenarios/*.yaml`. See `evals/README.md` for setup. Quick start:

```bash
# Run the subagent-driven-development integration test
cd tests/claude-code
./test-subagent-driven-development-integration.sh
cd evals
uv sync --extra dev
export ANTHROPIC_API_KEY=sk-...
uv run drill run triggering-test-driven-development -b claude
```
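After the path-defaults change, `SUPERPOWERS_ROOT` no longer needs to be exported: it defaults to the repository root containing `evals/`, and an explicit value still wins. The resolution order, sketched in shell (the real helper lives in `drill/cli.py`; `default_superpowers_root` here is purely illustrative):

```shell
# Resolution-order sketch: an explicit SUPERPOWERS_ROOT wins; otherwise
# the parent of the evals/ directory is used.
default_superpowers_root() {
  local evals_dir="$1"   # path to the evals/ directory
  if [ -n "${SUPERPOWERS_ROOT:-}" ]; then
    printf '%s\n' "$SUPERPOWERS_ROOT"
  else
    (cd "$evals_dir/.." && pwd)
  fi
}
```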

**Note:** Integration tests can take 10-30 minutes as they execute real implementation plans with multiple subagents.

### Requirements

- Must run from the **superpowers plugin directory** (not from temp directories)
- Claude Code must be installed and available as the `claude` command
- Local dev marketplace must be enabled: `"superpowers@superpowers-dev": true` in `~/.claude/settings.json`

## Integration Test: subagent-driven-development

### What It Tests

The integration test verifies that the `subagent-driven-development` skill correctly:

1. **Plan Loading**: Reads the plan once at the beginning
2. **Full Task Text**: Provides complete task descriptions to subagents (doesn't make them read files)
3. **Self-Review**: Ensures subagents perform self-review before reporting
4. **Review Order**: Runs spec compliance review before code quality review
5. **Review Loops**: Uses review loops when issues are found
6. **Independent Verification**: Spec reviewer reads code independently, doesn't trust implementer reports

### How It Works

1. **Setup**: Creates a temporary Node.js project with a minimal implementation plan
2. **Execution**: Runs Claude Code in headless mode with the skill
3. **Verification**: Parses the session transcript (`.jsonl` file) to verify:
   - Skill tool was invoked
   - Subagents were dispatched (Task tool)
   - TodoWrite was used for tracking
   - Implementation files were created
   - Tests pass
   - Git commits show proper workflow
4. **Token Analysis**: Shows token usage breakdown by subagent

### Test Output

```
========================================
 Integration Test: subagent-driven-development
========================================

Test project: /tmp/tmp.xyz123

=== Verification Tests ===

Test 1: Skill tool invoked...
  [PASS] subagent-driven-development skill was invoked

Test 2: Subagents dispatched...
  [PASS] 7 subagents dispatched

Test 3: Task tracking...
  [PASS] TodoWrite used 5 time(s)

Test 6: Implementation verification...
  [PASS] src/math.js created
  [PASS] add function exists
  [PASS] multiply function exists
  [PASS] test/math.test.js created
  [PASS] Tests pass

Test 7: Git commit history...
  [PASS] Multiple commits created (3 total)

Test 8: No extra features added...
  [PASS] No extra features added

=========================================
 Token Usage Analysis
=========================================

Usage Breakdown:
----------------------------------------------------------------------------------------------------
Agent     Description                                     Msgs   Input   Output       Cache    Cost
----------------------------------------------------------------------------------------------------
main      Main session (coordinator)                        34      27    3,996   1,213,703  $ 4.09
3380c209  implementing Task 1: Create Add Function           1       2      787      24,989  $ 0.09
34b00fde  implementing Task 2: Create Multiply Function      1       4      644      25,114  $ 0.09
3801a732  reviewing whether an implementation matches...     1       5      703      25,742  $ 0.09
4c142934  doing a final code review...                       1       6      854      25,319  $ 0.09
5f017a42  a code reviewer. Review Task 2...                  1       6      504      22,949  $ 0.08
a6b7fbe4  a code reviewer. Review Task 1...                  1       6      515      22,534  $ 0.08
f15837c0  reviewing whether an implementation matches...     1       6      416      22,485  $ 0.07
----------------------------------------------------------------------------------------------------

TOTALS:
  Total messages: 41
  Input tokens: 62
  Output tokens: 8,419
  Cache creation tokens: 132,742
  Cache read tokens: 1,382,835

  Total input (incl cache): 1,515,639
  Total tokens: 1,524,058

  Estimated cost: $4.67
  (at $3/$15 per M tokens for input/output)

========================================
 Test Summary
========================================

STATUS: PASSED
```

## Token Analysis Tool

### Usage

Analyze token usage from any Claude Code session:

```bash
python3 tests/claude-code/analyze-token-usage.py ~/.claude/projects/<project-dir>/<session-id>.jsonl
```

### Finding Session Files

Session transcripts are stored in `~/.claude/projects/` with the working directory path encoded:

```bash
# Example for /Users/yourname/Documents/GitHub/superpowers/superpowers
SESSION_DIR="$HOME/.claude/projects/-Users-yourname-Documents-GitHub-superpowers-superpowers"

# Find recent sessions
ls -lt "$SESSION_DIR"/*.jsonl | head -5
```

### What It Shows

- **Main session usage**: Token usage by the coordinator (you or the main Claude instance)
- **Per-subagent breakdown**: Each Task invocation with:
  - Agent ID
  - Description (extracted from the prompt)
  - Message count
  - Input/output tokens
  - Cache usage
  - Estimated cost
- **Totals**: Overall token usage and cost estimate

### Understanding the Output

- **High cache reads**: Good - means prompt caching is working
- **High input tokens on main**: Expected - the coordinator has full context
- **Similar costs per subagent**: Expected - each gets similar task complexity
- **Cost per task**: Typical range is $0.05-$0.15 per subagent depending on the task

## Troubleshooting

### Skills Not Loading

**Problem**: Skill not found when running headless tests

**Solutions**:
1. Ensure you're running FROM the superpowers directory: `cd /path/to/superpowers && tests/...`
2. Check that `~/.claude/settings.json` has `"superpowers@superpowers-dev": true` in `enabledPlugins`
3. Verify the skill exists in the `skills/` directory

### Permission Errors

**Problem**: Claude is blocked from writing files or accessing directories

**Solutions**:
1. Use the `--permission-mode bypassPermissions` flag
2. Use `--add-dir /path/to/temp/dir` to grant access to test directories
3. Check file permissions on test directories

### Test Timeouts

**Problem**: Test takes too long and times out

**Solutions**:
1. Increase the timeout: `timeout 1800 claude ...` (30 minutes)
2. Check for infinite loops in skill logic
3. Review subagent task complexity

### Session File Not Found

**Problem**: Can't find the session transcript after a test run

**Solutions**:
1. Check the correct project directory in `~/.claude/projects/`
2. Use `find ~/.claude/projects -name "*.jsonl" -mmin -60` to find recent sessions
3. Verify the test actually ran (check for errors in test output)

## Writing New Integration Tests

### Template

```bash
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"

# Create test project
TEST_PROJECT=$(create_test_project)
trap "cleanup_test_project $TEST_PROJECT" EXIT

# Set up test files...
cd "$TEST_PROJECT"

# Run Claude with skill
PROMPT="Your test prompt here"
cd "$SCRIPT_DIR/../.." && timeout 1800 claude -p "$PROMPT" \
  --allowed-tools=all \
  --add-dir "$TEST_PROJECT" \
  --permission-mode bypassPermissions \
  2>&1 | tee output.txt

# Find and analyze session. Claude encodes the working directory into the
# project dir name by replacing every "/" with "-" (keeping the leading dash).
WORKING_DIR_ESCAPED=$(cd "$SCRIPT_DIR/../.." && pwd | sed 's|/|-|g')
SESSION_DIR="$HOME/.claude/projects/$WORKING_DIR_ESCAPED"
SESSION_FILE=$(find "$SESSION_DIR" -name "*.jsonl" -type f -mmin -60 | sort -r | head -1)

# Verify behavior by parsing the session transcript
if grep -q '"name":"Skill".*"skill":"your-skill-name"' "$SESSION_FILE"; then
  echo "[PASS] Skill was invoked"
fi

# Show token analysis
python3 "$SCRIPT_DIR/analyze-token-usage.py" "$SESSION_FILE"
```

### Best Practices

1. **Always clean up**: Use a trap to clean up temp directories
2. **Parse transcripts**: Don't grep user-facing output - parse the `.jsonl` session file
3. **Grant permissions**: Use `--permission-mode bypassPermissions` and `--add-dir`
4. **Run from plugin dir**: Skills only load when running from the superpowers directory
5. **Show token usage**: Always include token analysis for cost visibility
6. **Test real behavior**: Verify actual files created, tests passing, commits made

## Session Transcript Format

Session transcripts are JSONL (JSON Lines) files where each line is a JSON object representing a message or tool result.

### Key Fields

```json
{
  "type": "assistant",
  "message": {
    "content": [...],
    "usage": {
      "input_tokens": 27,
      "output_tokens": 3996,
      "cache_read_input_tokens": 1213703
    }
  }
}
```

### Tool Results

```json
{
  "type": "user",
  "toolUseResult": {
    "agentId": "3380c209",
    "usage": {
      "input_tokens": 2,
      "output_tokens": 787,
      "cache_read_input_tokens": 24989
    },
    "prompt": "You are implementing Task 1...",
    "content": [{"type": "text", "text": "..."}]
  }
}
```

The `agentId` field links to subagent sessions, and the `usage` field contains token usage for that specific subagent invocation.
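A minimal way to pull these fields out of a transcript (assuming the `toolUseResult` shape shown above; requires `jq`, and `subagent_usage` is an illustrative helper, not part of the repo):

```shell
# Print one tab-separated row per subagent invocation:
# agentId, input tokens, output tokens, cache-read tokens.
# Missing usage fields default to 0 via jq's // operator.
subagent_usage() {
  jq -r 'select(.toolUseResult.agentId?) |
    [.toolUseResult.agentId,
     .toolUseResult.usage.input_tokens // 0,
     .toolUseResult.usage.output_tokens // 0,
     .toolUseResult.usage.cache_read_input_tokens // 0] | @tsv' "$1"
}
```

`subagent_usage "$SESSION_FILE"` then feeds directly into `sort` or `awk` for quick per-agent totals.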

Drill scenarios are slow (3-30+ minutes each) and run real LLM sessions. They are not part of CI today; the natural follow-up is a tiered model (fast subset on PR, full sweep nightly + on-demand).

evals (submodule, 1 line changed)

Submodule evals added at f7ac1941d5
@@ -69,6 +69,7 @@ EXCLUDES=(
# Directories not shipped by canonical Codex plugins
"/commands/"
"/docs/"
"/evals/"
"/hooks/"
"/lib/"
"/scripts/"

@@ -180,7 +180,7 @@ WORKTREE_PATH=$(git rev-parse --show-toplevel)

**If `GIT_DIR == GIT_COMMON`:** Normal repo, no worktree to clean up. Done.

**If worktree path is under `.worktrees/`, `worktrees/`, or `~/.config/superpowers/worktrees/`:** Superpowers created this worktree — we own cleanup.
**If worktree path is under `.worktrees/` or `worktrees/`:** Superpowers created this worktree — we own cleanup.

```bash
MAIN_ROOT=$(git -C "$(git rev-parse --git-common-dir)/.." rev-parse --show-toplevel)

@@ -224,7 +224,7 @@ git worktree prune  # Self-healing: clean up any stale registrations

**Cleaning up harness-owned worktrees**
- **Problem:** Removing a worktree the harness created causes phantom state
- **Fix:** Only clean up worktrees under `.worktrees/`, `worktrees/`, or `~/.config/superpowers/worktrees/`
- **Fix:** Only clean up worktrees under `.worktrees/` or `worktrees/`

**No confirmation for discard**
- **Problem:** Accidentally delete work

@@ -126,7 +126,7 @@ Push back when:
- Reference working tests/code
- Involve your human partner if architectural

**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"
**If you're uncomfortable pushing back out loud:** Name that tension, then tell your partner about the issue you've seen. They'll appreciate your honesty.

## Acknowledging Correct Feedback

@@ -356,7 +356,7 @@ Never fix bugs without a test.

## Testing Anti-Patterns

When adding mocks or test utilities, read @testing-anti-patterns.md to avoid common pitfalls:
When adding mocks or test utilities, read [testing-anti-patterns.md](testing-anti-patterns.md) to avoid common pitfalls:
- Testing mock behavior instead of real behavior
- Adding test-only methods to production classes
- Mocking without understanding dependencies

@@ -30,7 +30,7 @@ BRANCH=$(git branch --show-current)
git rev-parse --show-superproject-working-tree 2>/dev/null
```

**If `GIT_DIR != GIT_COMMON` (and not a submodule):** You are already in a linked worktree. Skip to Step 3 (Project Setup). Do NOT create another worktree.
**If `GIT_DIR != GIT_COMMON` (and not a submodule):** You are already in a linked worktree. Skip to Step 2 (Project Setup). Do NOT create another worktree.

Report with branch state:
- On a branch: "Already in isolated workspace at `<path>` on branch `<name>`."

@@ -42,7 +42,7 @@ Has the user already indicated their worktree preference in your instructions? I

> "Would you like me to set up an isolated worktree? It protects your current branch from changes."

Honor any existing declared preference without asking. If the user declines consent, work in place and skip to Step 3.
Honor any existing declared preference without asking. If the user declines consent, work in place and skip to Step 2.

## Step 1: Create Isolated Workspace

@@ -50,7 +50,7 @@ Honor any existing declared preference without asking. If the user declines cons

### 1a. Native Worktree Tools (preferred)

The user has asked for an isolated workspace (Step 0 consent). Do you already have a way to create a worktree? It might be a tool with a name like `EnterWorktree`, `WorktreeCreate`, a `/worktree` command, or a `--worktree` flag. If you do, use it and skip to Step 3.
The user has asked for an isolated workspace (Step 0 consent). Do you already have a way to create a worktree? It might be a tool with a name like `EnterWorktree`, `WorktreeCreate`, a `/worktree` command, or a `--worktree` flag. If you do, use it and skip to Step 2.

Native tools handle directory placement, branch creation, and cleanup automatically. Using `git worktree add` when you have a native tool creates phantom state your harness can't see or manage.

@@ -73,14 +73,7 @@ Follow this priority order. Explicit user preference always beats observed files
```
If found, use it. If both exist, `.worktrees` wins.

3. **Check for an existing global directory:**
   ```bash
   project=$(basename "$(git rev-parse --show-toplevel)")
   ls -d ~/.config/superpowers/worktrees/$project 2>/dev/null
   ```
   If found, use it (backward compatibility with legacy global path).

4. **If there is no other guidance available**, default to `.worktrees/` at the project root.
3. **If there is no other guidance available**, default to `.worktrees/` at the project root.

#### Safety Verification (project-local directories only)

@@ -94,16 +87,11 @@ git check-ignore -q .worktrees 2>/dev/null || git check-ignore -q worktrees 2>/d

**Why critical:** Prevents accidentally committing worktree contents to repository.

Global directories (`~/.config/superpowers/worktrees/`) need no verification.

#### Create the Worktree

```bash
project=$(basename "$(git rev-parse --show-toplevel)")

# Determine path based on chosen location
# For project-local: path="$LOCATION/$BRANCH_NAME"
# For global: path="~/.config/superpowers/worktrees/$project/$BRANCH_NAME"
path="$LOCATION/$BRANCH_NAME"

git worktree add "$path" -b "$BRANCH_NAME"
cd "$path"
@@ -111,7 +99,7 @@ cd "$path"

**Sandbox fallback:** If `git worktree add` fails with a permission error (sandbox denial), tell the user the sandbox blocked worktree creation and you're working in the current directory instead. Then run setup and baseline tests in place.

## Step 3: Project Setup
## Step 2: Project Setup

Auto-detect and run appropriate setup:

@@ -130,7 +118,7 @@ if [ -f pyproject.toml ]; then poetry install; fi
if [ -f go.mod ]; then go mod download; fi
```

## Step 4: Verify Clean Baseline
## Step 3: Verify Clean Baseline

Run tests to ensure workspace starts clean:

@@ -163,7 +151,6 @@ Ready to implement <feature-name>
| `worktrees/` exists | Use it (verify ignored) |
| Both exist | Use `.worktrees/` |
| Neither exists | Check instruction file, then default `.worktrees/` |
| Global path exists | Use it (backward compat) |
| Directory not ignored | Add to .gitignore + commit |
| Permission error on create | Sandbox fallback, work in place |
| Tests fail during baseline | Report failures + ask |
@@ -189,7 +176,7 @@ Ready to implement <feature-name>
### Assuming directory location

- **Problem:** Creates inconsistency, violates project conventions
- **Fix:** Follow priority: existing > global legacy > instruction file > default
- **Fix:** Follow priority: explicit instructions > existing project-local directory > default

### Proceeding with failing tests

@@ -209,7 +196,7 @@ Ready to implement <feature-name>
**Always:**
- Run Step 0 detection first
- Prefer native tools over git fallback
- Follow directory priority: existing > global legacy > instruction file > default
- Follow directory priority: explicit instructions > existing project-local directory > default
- Verify directory is ignored for project-local
- Auto-detect and run project setup
- Verify clean test baseline

@@ -553,7 +553,7 @@ Run same scenarios WITH skill. Agent should now comply.

Agent found new rationalization? Add explicit counter. Re-test until bulletproof.

**Testing methodology:** See @testing-skills-with-subagents.md for the complete testing methodology:
**Testing methodology:** See [testing-skills-with-subagents.md](testing-skills-with-subagents.md) for the complete testing methodology:
- How to write pressure scenarios
- Pressure types (time, sunk cost, authority, exhaustion)
- Plugging holes systematically

@@ -115,17 +115,12 @@ Full workflow execution test (~10-30 minutes):
- Subagents follow the skill correctly
- Final code is functional and tested

#### test-requesting-code-review.sh
Behavioral test for the code reviewer subagent (~5 minutes):
- Builds a tiny project with a baseline commit
- Adds a second commit that plants two real bugs (SQL injection, plaintext password handling)
- Dispatches the code reviewer via the requesting-code-review skill
- Verifies the reviewer flags the planted bugs at Critical/Important severity and refuses to approve

**What it tests:**
- The skill actually dispatches a working code reviewer subagent
- The reviewer template produces reviewers that catch obvious security bugs
- The reviewer is not sycophantic — it does not approve a diff with planted Critical issues
#### test-worktree-native-preference.sh
RED-GREEN-REFACTOR validation for the using-git-worktrees skill (~5 minutes):
- RED: skill without Step 1a — agent should use `git worktree add`
- GREEN: skill with Step 1a — agent should use the native EnterWorktree tool
- PRESSURE: same as GREEN under urgency framing with pre-existing `.worktrees/`
- Drill scenario `worktree-creation-under-pressure.yaml` covers the PRESSURE phase only

## Adding New Tests

@@ -25,7 +25,7 @@ fi
# Parse command line arguments
VERBOSE=false
SPECIFIC_TEST=""
TIMEOUT=300  # Default 5 minute timeout per test
TIMEOUT=600  # Default 10 minute timeout per test
RUN_INTEGRATION=false

while [[ $# -gt 0 ]]; do
@@ -73,13 +73,13 @@ done

# List of skill tests to run (fast unit tests)
tests=(
  "test-worktree-path-policy.sh"
  "test-subagent-driven-development.sh"
)

# Integration tests (slow, full execution)
integration_tests=(
  "test-subagent-driven-development-integration.sh"
  "test-requesting-code-review.sh"
)

# Add integration tests if requested

@@ -1,177 +0,0 @@
#!/usr/bin/env bash
# Integration Test: Document Review System
# Actually runs spec/plan review and verifies reviewers catch issues
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"

echo "========================================"
echo " Integration Test: Document Review System"
echo "========================================"
echo ""
echo "This test verifies the document review system by:"
echo "  1. Creating a spec with intentional errors"
echo "  2. Running the spec document reviewer"
echo "  3. Verifying the reviewer catches the errors"
echo ""

# Create test project
TEST_PROJECT=$(create_test_project)
echo "Test project: $TEST_PROJECT"

# Trap to cleanup
trap "cleanup_test_project $TEST_PROJECT" EXIT

cd "$TEST_PROJECT"

# Create directory structure
mkdir -p docs/superpowers/specs

# Create a spec document WITH INTENTIONAL ERRORS for the reviewer to catch
cat > docs/superpowers/specs/test-feature-design.md <<'EOF'
# Test Feature Design

## Overview

This is a test feature that does something useful.

## Requirements

1. The feature should work correctly
2. It should be fast
3. TODO: Add more requirements here

## Architecture

The feature will use a simple architecture with:
- A frontend component
- A backend service
- Error handling will be specified later once we understand the failure modes better

## Data Flow

Data flows from the frontend to the backend.

## Testing Strategy

Tests will be written to cover the main functionality.
EOF

# Initialize git repo
git init --quiet
git config user.email "test@test.com"
git config user.name "Test User"
git add .
git commit -m "Initial commit with test spec" --quiet

echo ""
echo "Created test spec with intentional errors:"
echo "  - TODO placeholder in Requirements section"
echo "  - 'specified later' deferral in Architecture section"
echo ""
echo "Running spec document reviewer..."
echo ""

# Run Claude to review the spec
OUTPUT_FILE="$TEST_PROJECT/claude-output.txt"

PROMPT="You are testing the spec document reviewer.

Read the spec-document-reviewer-prompt.md template in skills/brainstorming/ to understand the review format.

Then review the spec at $TEST_PROJECT/docs/superpowers/specs/test-feature-design.md using the criteria from that template.

Look for:
- TODOs, placeholders, 'TBD', incomplete sections
- Sections saying 'to be defined later' or 'will spec when X is done'
- Sections noticeably less detailed than others

Output your review in the format specified in the template."

echo "================================================================================"
cd "$SCRIPT_DIR/../.." && timeout 120 claude -p "$PROMPT" --permission-mode bypassPermissions 2>&1 | tee "$OUTPUT_FILE" || {
  echo ""
  echo "================================================================================"
  echo "EXECUTION FAILED (exit code: $?)"
  exit 1
}
echo "================================================================================"

echo ""
echo "Analyzing reviewer output..."
echo ""

# Verification tests
FAILED=0

echo "=== Verification Tests ==="
echo ""

# Test 1: Reviewer found the TODO
echo "Test 1: Reviewer found TODO..."
if grep -qi "TODO" "$OUTPUT_FILE" && grep -qi "requirements\|Requirements" "$OUTPUT_FILE"; then
  echo "  [PASS] Reviewer identified TODO in Requirements section"
else
  echo "  [FAIL] Reviewer did not identify TODO"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 2: Reviewer found the "specified later" deferral
echo "Test 2: Reviewer found 'specified later' deferral..."
if grep -qi "specified later\|later\|defer\|incomplete\|error handling" "$OUTPUT_FILE"; then
  echo "  [PASS] Reviewer identified deferred content"
else
  echo "  [FAIL] Reviewer did not identify deferred content"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 3: Reviewer output includes Issues section
echo "Test 3: Review output format..."
if grep -qi "issues\|Issues" "$OUTPUT_FILE"; then
  echo "  [PASS] Review includes Issues section"
else
  echo "  [FAIL] Review missing Issues section"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 4: Reviewer did NOT approve (found issues)
echo "Test 4: Reviewer verdict..."
if grep -qi "Issues Found\|❌\|not approved\|issues found" "$OUTPUT_FILE"; then
|
||||
echo " [PASS] Reviewer correctly found issues (not approved)"
|
||||
elif grep -qi "Approved\|✅" "$OUTPUT_FILE" && ! grep -qi "Issues Found\|❌" "$OUTPUT_FILE"; then
|
||||
echo " [FAIL] Reviewer incorrectly approved spec with errors"
|
||||
FAILED=$((FAILED + 1))
|
||||
else
|
||||
echo " [PASS] Reviewer identified problems (ambiguous format but found issues)"
|
||||
fi
|
||||
echo ""
|
||||
|
||||
# Summary
|
||||
echo "========================================"
|
||||
echo " Test Summary"
|
||||
echo "========================================"
|
||||
echo ""
|
||||
|
||||
if [ $FAILED -eq 0 ]; then
|
||||
echo "STATUS: PASSED"
|
||||
echo "All verification tests passed!"
|
||||
echo ""
|
||||
echo "The spec document reviewer correctly:"
|
||||
echo " ✓ Found TODO placeholder"
|
||||
echo " ✓ Found 'specified later' deferral"
|
||||
echo " ✓ Produced properly formatted review"
|
||||
echo " ✓ Did not approve spec with errors"
|
||||
exit 0
|
||||
else
|
||||
echo "STATUS: FAILED"
|
||||
echo "Failed $FAILED verification tests"
|
||||
echo ""
|
||||
echo "Output saved to: $OUTPUT_FILE"
|
||||
echo ""
|
||||
echo "Review the output to see what went wrong."
|
||||
exit 1
|
||||
fi
|
||||
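The verification tests above lean on one grep idiom worth making explicit: in basic regular expressions (grep's default), alternation is written `\|`, while a bare `|` only matches a literal pipe, and `-i` already covers case variants like `issues` vs `Issues`. A minimal standalone check:

```shell
#!/usr/bin/env sh
# BRE alternation as used by the verification greps: \| alternates,
# a bare | is a literal character, and -i makes case branches redundant.
out="Issues Found: 2"

echo "$out" | grep -qi "issues found\|not approved" && echo "matched"
echo "$out" | grep -qi "issues" && echo "also matched"
```

Switching to `grep -E` would allow the more familiar unescaped `|` alternation, at the cost of every pattern becoming an extended regex.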
@@ -9,14 +9,14 @@ run_claude() {
   local allowed_tools="${3:-}"
   local output_file=$(mktemp)

-  # Build command
-  local cmd="claude -p \"$prompt\""
+  # Build command as an argv array so timeout wraps claude directly.
+  local cmd=(claude -p "$prompt")
   if [ -n "$allowed_tools" ]; then
-    cmd="$cmd --allowed-tools=$allowed_tools"
+    cmd+=(--allowed-tools="$allowed_tools")
   fi

   # Run Claude in headless mode with timeout
-  if timeout "$timeout" bash -c "$cmd" > "$output_file" 2>&1; then
+  if timeout "$timeout" "${cmd[@]}" > "$output_file" 2>&1; then
     cat "$output_file"
     rm -f "$output_file"
     return 0
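The hunk above swaps a flat command string for an argv array. The payoff is quoting: with `bash -c "$cmd"`, any quotes or spaces inside `$prompt` get re-parsed by the inner shell, whereas array elements survive verbatim and `timeout(1)` wraps the real process rather than an intermediate bash. A sketch of the pattern, with `printf` standing in for `claude`:

```shell
#!/usr/bin/env bash
# Argv-array command building, as adopted by the hunk above.
# printf stands in for claude -p here; the prompt value is illustrative.
prompt='say "hello world"'

cmd=(printf '%s\n' "$prompt")
# Optional flags append as separate elements, quoting intact,
# e.g. cmd+=(--allowed-tools="$tools")

timeout 5 "${cmd[@]}"   # prints: say "hello world"
```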
@@ -1,214 +0,0 @@
#!/usr/bin/env bash
# Integration Test: requesting-code-review skill
# Verifies the code reviewer dispatched via the skill catches a planted bug
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"

echo "========================================"
echo " Integration Test: requesting-code-review"
echo "========================================"
echo ""
echo "This test verifies the code reviewer subagent by:"
echo " 1. Setting up a tiny project with a baseline commit"
echo " 2. Adding a second commit that plants an obvious bug"
echo " 3. Dispatching the code reviewer via the requesting-code-review skill"
echo " 4. Verifying the reviewer flags the planted bug as Critical/Important"
echo ""

TEST_PROJECT=$(create_test_project)
echo "Test project: $TEST_PROJECT"
trap "cleanup_test_project $TEST_PROJECT" EXIT

cd "$TEST_PROJECT"

# Baseline: a small "safe" implementation
mkdir -p src
cat > src/db.js <<'EOF'
import { Database } from "./database-driver.js";

const db = new Database();

export async function findUserByEmail(email) {
  if (typeof email !== "string" || !email) {
    throw new Error("email required");
  }
  return db.query(
    "SELECT id, email, created_at FROM users WHERE email = ?",
    [email],
  );
}
EOF

cat > package.json <<'EOF'
{ "name": "test-codereview", "version": "1.0.0", "type": "module" }
EOF

git init --quiet
git config user.email "test@test.com"
git config user.name "Test User"
git add .
git commit -m "Initial: parameterized findUserByEmail" --quiet
BASE_SHA=$(git rev-parse HEAD)

# Second commit: plant two real bugs
# 1. SQL injection — switch from parameterized to string concatenation
# 2. Logs the user's password hash on every successful login
cat > src/db.js <<'EOF'
import { Database } from "./database-driver.js";

const db = new Database();

export async function findUserByEmail(email) {
  return db.query(
    "SELECT id, email, password_hash, created_at FROM users WHERE email = '" + email + "'",
  );
}

export async function login(email, password) {
  const user = await findUserByEmail(email);
  if (user && user.password_hash === hash(password)) {
    console.log("login success", { email, password_hash: user.password_hash });
    return user;
  }
  return null;
}

function hash(s) { return s; }
EOF

git add .
git commit -m "Refactor user lookup, add login" --quiet
HEAD_SHA=$(git rev-parse HEAD)

echo ""
echo "Planted bugs in $BASE_SHA..$HEAD_SHA:"
echo " - SQL injection (string concat instead of parameterized query)"
echo " - Password hash logged in plaintext on every successful login"
echo " - hash() is the identity function (passwords stored & compared in plaintext)"
echo ""

OUTPUT_FILE="$TEST_PROJECT/claude-output.txt"

PROMPT="I just finished a refactor. The change is between commits $BASE_SHA and $HEAD_SHA on the current branch.

Use the superpowers:requesting-code-review skill to review these changes before I merge. Follow the skill exactly: dispatch the code reviewer subagent with the template, give the subagent the SHA range, and report back what it found.

Print the reviewer's full output."

# Run claude from inside the test project so its session JSONL lands in a
# project-specific directory under ~/.claude/projects/, isolated from any
# other concurrent claude sessions.
echo "Running Claude (plugin-dir: $PLUGIN_DIR, cwd: $TEST_PROJECT)..."
echo "================================================================================"
cd "$TEST_PROJECT" && timeout 600 claude -p "$PROMPT" \
  --plugin-dir "$PLUGIN_DIR" \
  --permission-mode bypassPermissions 2>&1 | tee "$OUTPUT_FILE" || {
  rc=$?
  echo ""
  echo "================================================================================"
  echo "EXECUTION FAILED (exit code: $rc)"
  exit 1
}
echo "================================================================================"

echo ""
echo "Analyzing reviewer output..."
echo ""

# Find the session transcript. Because we ran claude from $TEST_PROJECT (a
# unique tmp dir), its sessions live in their own ~/.claude/projects/ folder.
# Resolve the real path (macOS mktemp returns /var/... but claude normalizes
# it to /private/var/...) and replicate claude's normalization (every
# non-alphanumeric char becomes `-`).
TEST_PROJECT_REAL=$(cd "$TEST_PROJECT" && pwd -P)
SESSION_DIR="$HOME/.claude/projects/$(echo "$TEST_PROJECT_REAL" | sed 's|[^a-zA-Z0-9]|-|g')"
# `|| true` prevents pipefail killing the script if ls gets SIGPIPE'd by head.
SESSION_FILE=$(ls -t "$SESSION_DIR"/*.jsonl 2>/dev/null | head -1 || true)

FAILED=0

echo "=== Verification Tests ==="
echo ""

# Test 1: Skill was actually invoked, and a subagent was actually dispatched
echo "Test 1: requesting-code-review skill invoked + reviewer subagent dispatched..."
if [ -z "$SESSION_FILE" ] || [ ! -f "$SESSION_FILE" ]; then
  echo " [FAIL] Could not locate session transcript in $SESSION_DIR"
  FAILED=$((FAILED + 1))
elif ! grep -q '"skill":"superpowers:requesting-code-review"' "$SESSION_FILE"; then
  echo " [FAIL] requesting-code-review skill was not invoked"
  echo " Session: $SESSION_FILE"
  FAILED=$((FAILED + 1))
elif ! grep -q '"name":"Agent"' "$SESSION_FILE"; then
  echo " [FAIL] Skill ran but no subagent was dispatched"
  FAILED=$((FAILED + 1))
else
  echo " [PASS] Skill invoked and subagent dispatched"
fi
echo ""

# Test 2: Reviewer caught the SQL injection
echo "Test 2: SQL injection flagged..."
if grep -qiE "sql injection|injection|string concat|parameterize|prepared statement|sanitiz" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer flagged the SQL injection vector"
else
  echo " [FAIL] Reviewer missed the SQL injection — most obvious planted bug"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 3: Reviewer caught the credential / password issue (either logging or no real hashing)
echo "Test 3: Credential handling issue flagged..."
if grep -qiE "password|credential|secret|plaintext|log.*hash|hash.*log|sensitive" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer flagged a credential / password handling issue"
else
  echo " [FAIL] Reviewer missed the password/credential issues"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 4: Reviewer marked at least one issue as Critical or Important (not just Minor)
echo "Test 4: Severity classification..."
if grep -qiE "critical|important|severe|high.*risk|security" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer classified findings at Critical/Important severity"
else
  echo " [FAIL] Reviewer did not classify findings as Critical or Important"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 5: Reviewer did NOT approve the diff for merge
echo "Test 5: Reviewer verdict..."
# A correct reviewer says No or "With fixes". A broken/sycophantic reviewer says Yes/Ready.
if grep -qiE "ready to merge.*yes|approved.*for merge|^\s*yes\s*$|safe to merge" "$OUTPUT_FILE" \
  && ! grep -qiE "ready to merge.*no|with fixes|do not merge|not ready|block.*merge" "$OUTPUT_FILE"; then
  echo " [FAIL] Reviewer approved a diff with planted Critical bugs"
  FAILED=$((FAILED + 1))
else
  echo " [PASS] Reviewer did not approve the diff"
fi
echo ""

echo "========================================"
echo " Test Summary"
echo "========================================"
echo ""

if [ $FAILED -eq 0 ]; then
  echo "STATUS: PASSED"
  echo "The code reviewer correctly:"
  echo " ✓ Was dispatched via the requesting-code-review skill"
  echo " ✓ Flagged the SQL injection"
  echo " ✓ Flagged the credential handling issues"
  echo " ✓ Classified findings at Critical/Important severity"
  echo " ✓ Did not approve the diff for merge"
  exit 0
else
  echo "STATUS: FAILED"
  echo "Failed $FAILED verification tests"
  echo ""
  echo "Output saved to: $OUTPUT_FILE"
  exit 1
fi
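The session-transcript lookup in the deleted test above hinges on one transformation: turning the project's resolved path into the slug claude uses for its per-project session directory, by replacing every non-alphanumeric character with `-`. The `sed` call is copied from the test; the sample path below is illustrative:

```shell
#!/usr/bin/env sh
# Standalone check of the path normalization replicated by the test:
# every non-alphanumeric character of the project path becomes '-'.
project="/private/var/folders/ab/test.tmp"
slug=$(printf '%s\n' "$project" | sed 's|[^a-zA-Z0-9]|-|g')
echo "$slug"   # -private-var-folders-ab-test-tmp
```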
@@ -1,6 +1,17 @@
 #!/usr/bin/env bash
 # Integration Test: subagent-driven-development workflow
 # Actually executes a plan and verifies the new workflow behaviors
+#
+# Drill coverage: evals/scenarios/sdd-rejects-extra-features.yaml covers the
+# YAGNI enforcement subset (forbidden exports + reviewer-as-gate semantics)
+# and is stricter on that axis. This bash test additionally asserts:
+# - >=3 git commits (initial + per-task commits, exercising SDD's
+#   commit-per-task workflow shape)
+# - >=2 Agent/Task subagent dispatches (drill only asserts >=1)
+# - TodoWrite usage (drill makes no assertion)
+# - test/math.test.js exists (drill relies on `npm test` succeeding)
+# - analyze-token-usage.py token-budget telemetry
+# Kept until those assertions are added to drill or explicitly retired.
 set -euo pipefail

 SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
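The comment block above lists a ">=3 git commits" assertion without showing it. One plausible shape for such a check, sketched here with illustrative helper names (`count_commits` and `assert_min_commits` are ours, not functions from the actual test):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the ">=3 git commits" assertion described above.
# Helper names are illustrative; the real test's implementation may differ.
set -euo pipefail

count_commits() {
  git -C "$1" rev-list --count HEAD
}

assert_min_commits() {
  local repo="$1" min="$2" n
  n=$(count_commits "$repo")
  if [ "$n" -ge "$min" ]; then
    echo "[PASS] $n commits (>= $min)"
  else
    echo "[FAIL] only $n commits (< $min)"
    return 1
  fi
}
```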
@@ -1,18 +1,26 @@
 #!/usr/bin/env bash
 # Test: subagent-driven-development skill
 # Verifies that the skill is loaded and follows correct workflow
+#
+# No drill coverage: this test asks the agent to *describe* SDD (string-
+# matches its verbal explanation against expected keywords like
+# "self-review", "skeptical", "worktree", "Step 1", "loop"). Drill scenarios
+# test behavior (real subagent dispatch, plan-following, review loops),
+# not description-recall. Kept by design.
 set -euo pipefail

 SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
 source "$SCRIPT_DIR/test-helpers.sh"

+CLAUDE_PROMPT_TIMEOUT="${CLAUDE_PROMPT_TIMEOUT:-90}"
+
 echo "=== Test: subagent-driven-development skill ==="
 echo ""

 # Test 1: Verify skill can be loaded
 echo "Test 1: Skill loading..."

-output=$(run_claude "What is the subagent-driven-development skill? Describe its key steps briefly." 30)
+output=$(run_claude "What is the subagent-driven-development skill? Describe its key steps briefly." "$CLAUDE_PROMPT_TIMEOUT")

 if assert_contains "$output" "subagent-driven-development\|Subagent-Driven Development\|Subagent Driven" "Skill is recognized"; then
   : # pass
@@ -31,9 +39,11 @@ echo ""
 # Test 2: Verify skill describes correct workflow order
 echo "Test 2: Workflow ordering..."

-output=$(run_claude "In the subagent-driven-development skill, what comes first: spec compliance review or code quality review? Be specific about the order." 30)
+output=$(run_claude "In the subagent-driven-development skill, what comes first: spec compliance review or code quality review? Answer using exactly this structure:
+First: <review type>
+Second: <review type>" "$CLAUDE_PROMPT_TIMEOUT")

-if assert_order "$output" "spec.*compliance" "code.*quality" "Spec compliance before code quality"; then
+if assert_order "$output" "First:.*spec.*compliance" "Second:.*code.*quality" "Spec compliance before code quality"; then
   : # pass
 else
   exit 1
@@ -44,15 +54,17 @@ echo ""
 # Test 3: Verify self-review is mentioned
 echo "Test 3: Self-review requirement..."

-output=$(run_claude "Does the subagent-driven-development skill require implementers to do self-review? What should they check?" 30)
+output=$(run_claude "Does the subagent-driven-development skill require implementers to self-review before handoff, and can self-review replace the external reviews? Answer using exactly this structure:
+Self-review required: <yes or no>
+Self-review replaces external review: <yes or no>" "$CLAUDE_PROMPT_TIMEOUT")

-if assert_contains "$output" "self-review\|self review" "Mentions self-review"; then
+if assert_contains "$output" "Self-review required:.*yes" "Mentions self-review"; then
   : # pass
 else
   exit 1
 fi

-if assert_contains "$output" "completeness\|Completeness" "Checks completeness"; then
+if assert_contains "$output" "Self-review replaces external review:.*no" "Self-review does not replace external review"; then
   : # pass
 else
   exit 1
@@ -63,7 +75,7 @@ echo ""
 # Test 4: Verify plan is read once
 echo "Test 4: Plan reading efficiency..."

-output=$(run_claude "In subagent-driven-development, how many times should the controller read the plan file? When does this happen?" 30)
+output=$(run_claude "In subagent-driven-development, how many times should the controller read the plan file? When does this happen?" "$CLAUDE_PROMPT_TIMEOUT")

 if assert_contains "$output" "once\|one time\|single" "Read plan once"; then
   : # pass
@@ -82,7 +94,7 @@ echo ""
 # Test 5: Verify spec compliance reviewer is skeptical
 echo "Test 5: Spec compliance reviewer mindset..."

-output=$(run_claude "What is the spec compliance reviewer's attitude toward the implementer's report in subagent-driven-development?" 30)
+output=$(run_claude "What is the spec compliance reviewer's attitude toward the implementer's report in subagent-driven-development?" "$CLAUDE_PROMPT_TIMEOUT")

 if assert_contains "$output" "not trust\|don't trust\|skeptical\|verify.*independently\|suspiciously" "Reviewer is skeptical"; then
   : # pass
@@ -101,7 +113,7 @@ echo ""
 # Test 6: Verify review loops
 echo "Test 6: Review loop requirements..."

-output=$(run_claude "In subagent-driven-development, what happens if a reviewer finds issues? Is it a one-time review or a loop?" 30)
+output=$(run_claude "In subagent-driven-development, what happens if a reviewer finds issues? Is it a one-time review or a loop?" "$CLAUDE_PROMPT_TIMEOUT")

 if assert_contains "$output" "loop\|again\|repeat\|until.*approved\|until.*compliant" "Review loops mentioned"; then
   : # pass
@@ -120,7 +132,9 @@ echo ""
 # Test 7: Verify full task text is provided
 echo "Test 7: Task context provision..."

-output=$(run_claude "In subagent-driven-development, how does the controller provide task information to the implementer subagent? Does it make them read a file or provide it directly?" 30)
+output=$(run_claude "In subagent-driven-development, how does the controller provide task information to the implementer subagent? Answer using exactly this structure:
+Controller provides: <directly or by file>
+Implementer must read plan file: <yes or no>" "$CLAUDE_PROMPT_TIMEOUT")

 if assert_contains "$output" "provide.*directly\|full.*text\|paste\|include.*prompt" "Provides text directly"; then
   : # pass
@@ -128,7 +142,7 @@ else
   exit 1
 fi

-if assert_not_contains "$output" "read.*file\|open.*file" "Doesn't make subagent read file"; then
+if assert_contains "$output" "Implementer must read plan file:.*no" "Doesn't make subagent read file"; then
   : # pass
 else
   exit 1
@@ -139,7 +153,7 @@ echo ""
 # Test 8: Verify worktree requirement
 echo "Test 8: Worktree requirement..."

-output=$(run_claude "What workflow skills are required before using subagent-driven-development? List any prerequisites or required skills." 30)
+output=$(run_claude "What workflow skills are required before using subagent-driven-development? List any prerequisites or required skills." "$CLAUDE_PROMPT_TIMEOUT")

 if assert_contains "$output" "using-git-worktrees\|worktree" "Mentions worktree requirement"; then
   : # pass
@@ -152,7 +166,7 @@ echo ""
 # Test 9: Verify main branch warning
 echo "Test 9: Main branch red flag..."

-output=$(run_claude "In subagent-driven-development, is it okay to start implementation directly on the main branch?" 30)
+output=$(run_claude "In subagent-driven-development, is it okay to start implementation directly on the main branch?" "$CLAUDE_PROMPT_TIMEOUT")

 if assert_contains "$output" "worktree\|feature.*branch\|not.*main\|never.*main\|avoid.*main\|don't.*main\|consent\|permission" "Warns against main branch"; then
   : # pass
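The tests above call `assert_order` from test-helpers.sh, whose definition is not shown in this diff. A minimal sketch of what such a helper has to do — pass when the first pattern matches at an earlier byte offset than the second (this is our reconstruction, not the real helper):

```shell
#!/usr/bin/env bash
# Minimal assert_order-style helper (illustrative; the real one lives in
# test-helpers.sh): pass when $first matches before $second in $haystack.
assert_order() {
  local haystack="$1" first="$2" second="$3" label="$4"
  local a b
  a=$(printf '%s\n' "$haystack" | grep -obE "$first"  | head -1 | cut -d: -f1)
  b=$(printf '%s\n' "$haystack" | grep -obE "$second" | head -1 | cut -d: -f1)
  if [ -n "$a" ] && [ -n "$b" ] && [ "$a" -lt "$b" ]; then
    echo "[PASS] $label"
  else
    echo "[FAIL] $label"
    return 1
  fi
}
```

`grep -ob` prints `offset:match` pairs, so comparing the first offsets of the two patterns is enough to decide ordering.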
@@ -2,6 +2,11 @@
 # Test: Does the agent prefer native worktree tools (EnterWorktree) over git worktree add?
 # Framework: RED-GREEN-REFACTOR per testing-skills-with-subagents.md
+#
+# Drill coverage: evals/scenarios/worktree-creation-under-pressure.yaml lifts
+# only the PRESSURE phase (existing .worktrees/ + urgency framing). The RED
+# and GREEN baselines below are not covered by drill — kept here so the
+# RED-GREEN-REFACTOR validation remains rerunnable end-to-end.
 #
 # RED: Skill without Step 1a (no native tool preference). Agent should use git worktree add.
 # GREEN: Skill with Step 1a (explicit tool naming + consent bridge). Agent should use EnterWorktree.
 # PRESSURE: Same as GREEN but under time pressure with existing .worktrees/ dir.
69 tests/claude-code/test-worktree-path-policy.sh (Executable file)
@@ -0,0 +1,69 @@
#!/usr/bin/env bash
# Regression check: Superpowers should not route new worktrees through the old
# global worktree directory.

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"

USING_SKILL="$REPO_ROOT/skills/using-git-worktrees/SKILL.md"
FINISHING_SKILL="$REPO_ROOT/skills/finishing-a-development-branch/SKILL.md"
ROTOTILL_SPEC="$REPO_ROOT/docs/superpowers/specs/2026-04-06-worktree-rototill-design.md"
ROTOTILL_PLAN="$REPO_ROOT/docs/superpowers/plans/2026-04-06-worktree-rototill.md"

failures=0

assert_contains() {
  local file="$1"
  local pattern="$2"
  local label="$3"

  if grep -Fq "$pattern" "$file"; then
    echo " [PASS] $label"
  else
    echo " [FAIL] $label"
    echo "   Expected to find: $pattern"
    echo "   In file: $file"
    failures=$((failures + 1))
  fi
}

assert_not_contains() {
  local file="$1"
  local pattern="$2"
  local label="$3"

  if grep -Fq "$pattern" "$file"; then
    echo " [FAIL] $label"
    echo "   Did not expect to find: $pattern"
    echo "   In file: $file"
    failures=$((failures + 1))
  else
    echo " [PASS] $label"
  fi
}

echo "=== Worktree Path Policy Test ==="
echo ""

assert_not_contains "$USING_SKILL" "~/.config/superpowers/worktrees" "using-git-worktrees does not mention old global path"
assert_not_contains "$USING_SKILL" "global legacy" "using-git-worktrees does not use unclear global legacy shorthand"
assert_not_contains "$USING_SKILL" "Global path" "using-git-worktrees has no global path quick-reference row"
assert_contains "$USING_SKILL" 'default to `.worktrees/` at the project root' "using-git-worktrees defaults new manual worktrees to .worktrees/"

assert_not_contains "$FINISHING_SKILL" "~/.config/superpowers/worktrees" "finishing-a-development-branch does not treat old global path as owned"
assert_contains "$FINISHING_SKILL" '`.worktrees/` or `worktrees/`' "finishing-a-development-branch keeps project-local cleanup ownership"

assert_not_contains "$ROTOTILL_SPEC" "~/.config/superpowers/worktrees" "rototill spec does not preserve old global path policy"
assert_not_contains "$ROTOTILL_PLAN" "~/.config/superpowers/worktrees" "rototill plan does not preserve old global path policy"
assert_not_contains "$ROTOTILL_PLAN" "legacy path compat" "rototill plan does not advertise legacy path compatibility"

echo ""

if [ "$failures" -gt 0 ]; then
  echo "STATUS: FAILED ($failures failures)"
  exit 1
fi

echo "STATUS: PASSED"
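The helpers in the new test above pass `-F` to grep for a reason worth spelling out: several expected patterns contain regex metacharacters (the dots in `.worktrees/`, for instance), and `-F` makes grep match the pattern as a literal fixed string instead of interpreting it. A small demonstration, with an illustrative fixture file:

```shell
#!/usr/bin/env sh
# Fixed-string matching as used by assert_contains/assert_not_contains.
# The fixture line is copied from the pattern the test checks for.
f=$(mktemp)
printf 'default to `.worktrees/` at the project root\n' > "$f"

grep -Fq 'default to `.worktrees/` at the project root' "$f" && echo "literal match"

# Without -F, '.' matches any character, so the pattern can over-match:
printf 'Xworktrees/\n' | grep -q '.worktrees/' && echo "regex over-matches"
rm -f "$f"
```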
@@ -177,6 +177,7 @@ write_upstream_fixture() {
     "$repo/.codex-plugin" \
     "$repo/.private-journal" \
     "$repo/assets" \
+    "$repo/evals/drill" \
     "$repo/scripts" \
     "$repo/skills/example"

@@ -215,6 +216,7 @@ EOF
 EOF

 printf 'png fixture\n' > "$repo/assets/app-icon.png"
+printf 'eval harness fixture\n' > "$repo/evals/drill/README.md"

 cat > "$repo/skills/example/SKILL.md" <<'EOF'
 # Example Skill
@@ -233,6 +235,7 @@ EOF
     .gitignore \
     assets/app-icon.png \
     assets/superpowers-small.svg \
+    evals/drill/README.md \
     package.json \
     scripts/sync-to-codex-plugin.sh \
     skills/example/SKILL.md
@@ -542,6 +545,7 @@ main() {
   assert_contains "$preview_section" ".private-journal/keep.txt" "Preview includes tracked ignored file"
   assert_not_contains "$preview_section" ".private-journal/leak.txt" "Preview excludes ignored untracked file"
   assert_not_contains "$preview_section" "ignored-cache/" "Preview excludes pure ignored directories"
+  assert_not_contains "$preview_section" "evals/" "Preview excludes eval harness"
   assert_not_contains "$preview_output" "Overlay file (.codex-plugin/plugin.json) will be regenerated" "Preview omits overlay regeneration note"
   assert_not_contains "$preview_output" "Assets (superpowers-small.svg, app-icon.png) will be seeded from" "Preview omits assets seeding note"
   assert_contains "$preview_section" "skills/example/SKILL.md" "Preview reflects dirty tracked destination file"
@@ -1,100 +0,0 @@
#!/usr/bin/env bash
# Test where Claude explicitly describes subagent-driven-development before user requests it
# This mimics the original failure scenario

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"

TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/explicit-skill-requests/claude-describes"
mkdir -p "$OUTPUT_DIR"

PROJECT_DIR="$OUTPUT_DIR/project"
mkdir -p "$PROJECT_DIR/docs/superpowers/plans"

echo "=== Test: Claude Describes SDD First ==="
echo "Output dir: $OUTPUT_DIR"
echo ""

cd "$PROJECT_DIR"

# Create a plan
cat > "$PROJECT_DIR/docs/superpowers/plans/auth-system.md" << 'EOF'
# Auth System Implementation Plan

## Task 1: Add User Model
Create user model with email and password fields.

## Task 2: Add Auth Routes
Create login and register endpoints.

## Task 3: Add JWT Middleware
Protect routes with JWT validation.
EOF

# Turn 1: Have Claude describe execution options including SDD
echo ">>> Turn 1: Ask Claude to describe execution options..."
claude -p "I have a plan at docs/superpowers/plans/auth-system.md. Tell me about my options for executing it, including what subagent-driven-development means and how it works." \
  --model haiku \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --max-turns 3 \
  --output-format stream-json \
  > "$OUTPUT_DIR/turn1.json" 2>&1 || true
echo "Done."

# Turn 2: THE CRITICAL TEST - now that Claude has explained it
echo ">>> Turn 2: Request subagent-driven-development..."
FINAL_LOG="$OUTPUT_DIR/turn2.json"
claude -p "subagent-driven-development, please" \
  --continue \
  --model haiku \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --max-turns 2 \
  --output-format stream-json \
  > "$FINAL_LOG" 2>&1 || true
echo "Done."
echo ""

echo "=== Results ==="

# Check Turn 1 to see if Claude described SDD
echo "Turn 1 - Claude's description of options (excerpt):"
grep '"type":"assistant"' "$OUTPUT_DIR/turn1.json" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 800 || echo " (could not extract)"
echo ""
echo "---"
echo ""

# Check final turn
SKILL_PATTERN='"skill":"([^"]*:)?subagent-driven-development"'
if grep -q '"name":"Skill"' "$FINAL_LOG" && grep -qE "$SKILL_PATTERN" "$FINAL_LOG"; then
  echo "PASS: Skill was triggered after Claude described it"
  TRIGGERED=true
else
  echo "FAIL: Skill was NOT triggered (Claude may have thought it already knew)"
  TRIGGERED=false

  echo ""
  echo "Tools invoked in final turn:"
  grep '"type":"tool_use"' "$FINAL_LOG" | grep -o '"name":"[^"]*"' | sort -u | head -10 || echo " (none)"

  echo ""
  echo "Final turn response:"
  grep '"type":"assistant"' "$FINAL_LOG" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 800 || echo " (could not extract)"
fi

echo ""
echo "Skills triggered in final turn:"
grep -o '"skill":"[^"]*"' "$FINAL_LOG" 2>/dev/null | sort -u || echo " (none)"

echo ""
echo "Logs in: $OUTPUT_DIR"

if [ "$TRIGGERED" = "true" ]; then
  exit 0
else
  exit 1
fi
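The deleted script's pass/fail verdict rests on `SKILL_PATTERN`, whose optional `([^"]*:)?` group accepts both a namespaced skill name (`superpowers:subagent-driven-development`) and a bare one. A standalone check of that regex against illustrative JSON fragments (shaped after the script's greps, not real transcript output):

```shell
#!/usr/bin/env sh
# The optional ([^"]*:)? group matches an arbitrary namespace prefix
# ending in ':', or nothing at all. Sample lines are illustrative.
SKILL_PATTERN='"skill":"([^"]*:)?subagent-driven-development"'

printf '%s\n' '{"skill":"superpowers:subagent-driven-development"}' \
  | grep -qE "$SKILL_PATTERN" && echo "namespaced matches"
printf '%s\n' '{"skill":"subagent-driven-development"}' \
  | grep -qE "$SKILL_PATTERN" && echo "bare matches"
```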
@@ -1,8 +0,0 @@
|
||||
I have 4 independent test failures happening in different modules:
|
||||
|
||||
1. tests/auth/login.test.ts - "should redirect after login" is failing
|
||||
2. tests/api/users.test.ts - "should return user list" returns 500
|
||||
3. tests/components/Button.test.tsx - snapshot mismatch
|
||||
4. tests/utils/date.test.ts - timezone handling broken
|
||||
|
||||
These are unrelated issues in different parts of the codebase. Can you investigate all of them?
|
||||
@@ -1 +0,0 @@
|
||||
I have a plan document at docs/superpowers/plans/2024-01-15-auth-system.md that needs to be executed. Please implement it.
|
||||
@@ -1,3 +0,0 @@
|
||||
I just finished implementing the user authentication feature. All the code is committed. Can you review the changes before I merge to main?
|
||||
|
||||
The commits are between abc123 and def456.
|
||||
@@ -1,11 +0,0 @@
The tests are failing with this error:

```
FAIL src/utils/parser.test.ts
  ● Parser › should handle nested objects
    TypeError: Cannot read property 'value' of undefined
      at parse (src/utils/parser.ts:42:18)
      at Object.<anonymous> (src/utils/parser.test.ts:28:20)
```

Can you figure out what's going wrong and fix it?
@@ -1,7 +0,0 @@
I need to add a new feature to validate email addresses. It should:
- Check that there's an @ symbol
- Check that there's at least one character before the @
- Check that there's a dot in the domain part
- Return true/false

Can you implement this?
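The checks this prompt asks for translate directly into a few string tests — a minimal sketch of what a passing implementation could look like (illustrative only; the prompt deliberately leaves the design to the agent):

```python
def is_valid_email(address: str) -> bool:
    """Validate an email per the prompt's three rules: an '@' symbol,
    at least one character before it, and a dot in the domain part."""
    at = address.find("@")
    if at < 1:
        return False  # no '@', or nothing before it
    return "." in address[at + 1:]
```

A real validator would be stricter, but these three rules are all the prompt requires.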
@@ -1,10 +0,0 @@
Here's the spec for our new authentication system:

Requirements:
- Users can register with email/password
- Users can log in and receive a JWT token
- Protected routes require valid JWT
- Tokens expire after 24 hours
- Support password reset via email

We need to implement this. There are multiple steps involved - user model, auth routes, middleware, email service integration.
@@ -1,60 +0,0 @@
#!/usr/bin/env bash
# Run all skill triggering tests
# Usage: ./run-all.sh

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROMPTS_DIR="$SCRIPT_DIR/prompts"

SKILLS=(
  "systematic-debugging"
  "test-driven-development"
  "writing-plans"
  "dispatching-parallel-agents"
  "executing-plans"
  "requesting-code-review"
)

echo "=== Running Skill Triggering Tests ==="
echo ""

PASSED=0
FAILED=0
RESULTS=()

for skill in "${SKILLS[@]}"; do
  prompt_file="$PROMPTS_DIR/${skill}.txt"

  if [ ! -f "$prompt_file" ]; then
    echo "⚠️  SKIP: No prompt file for $skill"
    continue
  fi

  echo "Testing: $skill"

  if "$SCRIPT_DIR/run-test.sh" "$skill" "$prompt_file" 3 2>&1 | tee "/tmp/skill-test-$skill.log"; then
    PASSED=$((PASSED + 1))
    RESULTS+=("✅ $skill")
  else
    FAILED=$((FAILED + 1))
    RESULTS+=("❌ $skill")
  fi

  echo ""
  echo "---"
  echo ""
done

echo ""
echo "=== Summary ==="
for result in "${RESULTS[@]}"; do
  echo "  $result"
done
echo ""
echo "Passed: $PASSED"
echo "Failed: $FAILED"

if [ $FAILED -gt 0 ]; then
  exit 1
fi
@@ -1,88 +0,0 @@
#!/usr/bin/env bash
# Test skill triggering with naive prompts
# Usage: ./run-test.sh <skill-name> <prompt-file> [max-turns]
#
# Tests whether Claude triggers a skill based on a natural prompt
# (without explicitly mentioning the skill)

set -e

SKILL_NAME="$1"
PROMPT_FILE="$2"
MAX_TURNS="${3:-3}"

if [ -z "$SKILL_NAME" ] || [ -z "$PROMPT_FILE" ]; then
  echo "Usage: $0 <skill-name> <prompt-file> [max-turns]"
  echo "Example: $0 systematic-debugging ./test-prompts/debugging.txt"
  exit 1
fi

# Get the directory where this script lives (should be tests/skill-triggering)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get the superpowers plugin root (two levels up from tests/skill-triggering)
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"

TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/skill-triggering/${SKILL_NAME}"
mkdir -p "$OUTPUT_DIR"

# Read prompt from file
PROMPT=$(cat "$PROMPT_FILE")

echo "=== Skill Triggering Test ==="
echo "Skill: $SKILL_NAME"
echo "Prompt file: $PROMPT_FILE"
echo "Max turns: $MAX_TURNS"
echo "Output dir: $OUTPUT_DIR"
echo ""

# Copy prompt for reference
cp "$PROMPT_FILE" "$OUTPUT_DIR/prompt.txt"

# Run Claude
LOG_FILE="$OUTPUT_DIR/claude-output.json"
cd "$OUTPUT_DIR"

echo "Plugin dir: $PLUGIN_DIR"
echo "Running claude -p with naive prompt..."
timeout 300 claude -p "$PROMPT" \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --max-turns "$MAX_TURNS" \
  --output-format stream-json \
  > "$LOG_FILE" 2>&1 || true

echo ""
echo "=== Results ==="

# Check if skill was triggered (look for Skill tool invocation)
# In stream-json, tool invocations have "name":"Skill" (not "tool":"Skill")
# Match either "skill":"skillname" or "skill":"namespace:skillname"
SKILL_PATTERN='"skill":"([^"]*:)?'"${SKILL_NAME}"'"'
if grep -q '"name":"Skill"' "$LOG_FILE" && grep -qE "$SKILL_PATTERN" "$LOG_FILE"; then
  echo "✅ PASS: Skill '$SKILL_NAME' was triggered"
  TRIGGERED=true
else
  echo "❌ FAIL: Skill '$SKILL_NAME' was NOT triggered"
  TRIGGERED=false
fi

# Show what skills WERE triggered
echo ""
echo "Skills triggered in this run:"
grep -o '"skill":"[^"]*"' "$LOG_FILE" 2>/dev/null | sort -u || echo "  (none)"

# Show first assistant message
echo ""
echo "First assistant response (truncated):"
grep '"type":"assistant"' "$LOG_FILE" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 500 || echo "  (could not extract)"

echo ""
echo "Full log: $LOG_FILE"
echo "Timestamp: $TIMESTAMP"

if [ "$TRIGGERED" = "true" ]; then
  exit 0
else
  exit 1
fi
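The grep-based detection in the script above can be mirrored structurally in Python — a minimal sketch, assuming each log line is a stream-json event with tool-use blocks under `message.content` (the exact event shape is an assumption, not documented here):

```python
import json
import re

def skill_triggered(log_lines, skill_name):
    """Return True if a Skill tool invocation matching skill_name appears.

    Accepts both "skillname" and "namespace:skillname", mirroring the
    SKILL_PATTERN regex used by the shell script.
    """
    pattern = re.compile(r"^([^:]+:)?" + re.escape(skill_name) + r"$")
    for line in log_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # non-JSON noise in the log
        if not isinstance(event, dict):
            continue
        content = event.get("message", {}).get("content", [])
        if not isinstance(content, list):
            continue
        for block in content:
            if (isinstance(block, dict)
                    and block.get("type") == "tool_use"
                    and block.get("name") == "Skill"):
                skill = block.get("input", {}).get("skill", "")
                if pattern.match(skill):
                    return True
    return False
```

Unlike the two independent greps, this ties the skill name to the same tool-use block that invoked it.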
@@ -1,81 +0,0 @@
# Go Fractals CLI - Design

## Overview

A command-line tool that generates ASCII art fractals. Supports two fractal types with configurable output.

## Usage

```bash
# Sierpinski triangle
fractals sierpinski --size 32 --depth 5

# Mandelbrot set
fractals mandelbrot --width 80 --height 24 --iterations 100

# Custom character
fractals sierpinski --size 16 --char '#'

# Help
fractals --help
fractals sierpinski --help
```

## Commands

### `sierpinski`

Generates a Sierpinski triangle using recursive subdivision.

Flags:
- `--size` (default: 32) - Width of the triangle base in characters
- `--depth` (default: 5) - Recursion depth
- `--char` (default: '*') - Character to use for filled points

Output: Triangle printed to stdout, one line per row.

### `mandelbrot`

Renders the Mandelbrot set as ASCII art. Maps iteration count to characters.

Flags:
- `--width` (default: 80) - Output width in characters
- `--height` (default: 24) - Output height in characters
- `--iterations` (default: 100) - Maximum iterations for escape calculation
- `--char` (default: gradient) - Single character, or omit for gradient " .:-=+*#%@"

Output: Rectangle printed to stdout.
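The escape-time-to-character mapping described above can be sketched per point (a Python illustration of the algorithm the Go `Render` function would implement, not the project's code):

```python
def escape_time_char(re_c, im_c, max_iter, gradient=" .:-=+*#%@"):
    """Iterate z -> z^2 + c for c = re_c + i*im_c and map the escape
    iteration onto the character gradient."""
    z_re, z_im = 0.0, 0.0
    for i in range(max_iter):
        if z_re * z_re + z_im * z_im > 4.0:  # |z| > 2: the point escaped
            return gradient[i * (len(gradient) - 1) // max_iter]
        z_re, z_im = (z_re * z_re - z_im * z_im + re_c,
                      2.0 * z_re * z_im + im_c)
    return gradient[-1]  # never escaped: treat as inside the set
```

Points inside the set land on the densest character ('@'), fast escapers on the sparsest (space), which is what makes the rendered rectangle recognizable.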
## Architecture

```
cmd/
  fractals/
    main.go           # Entry point, CLI setup
internal/
  sierpinski/
    sierpinski.go     # Algorithm
    sierpinski_test.go
  mandelbrot/
    mandelbrot.go     # Algorithm
    mandelbrot_test.go
  cli/
    root.go           # Root command, help
    sierpinski.go     # Sierpinski subcommand
    mandelbrot.go     # Mandelbrot subcommand
```

## Dependencies

- Go 1.21+
- `github.com/spf13/cobra` for CLI

## Acceptance Criteria

1. `fractals --help` shows usage
2. `fractals sierpinski` outputs a recognizable triangle
3. `fractals mandelbrot` outputs a recognizable Mandelbrot set
4. `--size`, `--width`, `--height`, `--depth`, `--iterations` flags work
5. `--char` customizes output character
6. Invalid inputs produce clear error messages
7. All tests pass
@@ -1,172 +0,0 @@
# Go Fractals CLI - Implementation Plan

Execute this plan using the `superpowers:subagent-driven-development` skill.

## Context

Building a CLI tool that generates ASCII fractals. See `design.md` for full specification.

## Tasks

### Task 1: Project Setup

Create the Go module and directory structure.

**Do:**
- Initialize `go.mod` with module name `github.com/superpowers-test/fractals`
- Create directory structure: `cmd/fractals/`, `internal/sierpinski/`, `internal/mandelbrot/`, `internal/cli/`
- Create minimal `cmd/fractals/main.go` that prints "fractals cli"
- Add `github.com/spf13/cobra` dependency

**Verify:**
- `go build ./cmd/fractals` succeeds
- `./fractals` prints "fractals cli"

---

### Task 2: CLI Framework with Help

Set up Cobra root command with help output.

**Do:**
- Create `internal/cli/root.go` with root command
- Configure help text showing available subcommands
- Wire root command into `main.go`

**Verify:**
- `./fractals --help` shows usage with "sierpinski" and "mandelbrot" listed as available commands
- `./fractals` (no args) shows help

---

### Task 3: Sierpinski Algorithm

Implement the Sierpinski triangle generation algorithm.

**Do:**
- Create `internal/sierpinski/sierpinski.go`
- Implement `Generate(size, depth int, char rune) []string` that returns lines of the triangle
- Use recursive midpoint subdivision algorithm
- Create `internal/sierpinski/sierpinski_test.go` with tests:
  - Small triangle (size=4, depth=2) matches expected output
  - Size=1 returns single character
  - Depth=0 returns filled triangle

**Verify:**
- `go test ./internal/sierpinski/...` passes
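For orientation, the expected shape can be sketched in a few lines of Python — note this uses the bitwise "Pascal's triangle mod 2" shortcut rather than the recursive subdivision Task 3 specifies, so treat it as an illustration of the output, not the Go implementation:

```python
def sierpinski_rows(size, char="*"):
    """Right-angled Sierpinski triangle: cell (x, y) is filled
    iff x and y share no set bits (x & y == 0)."""
    return [
        "".join(char if (x & y) == 0 else " " for x in range(size - y))
        for y in range(size)
    ]
```

Printing the rows of `sierpinski_rows(16)` shows the self-similar hole pattern the acceptance criteria call "a recognizable triangle".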
---

### Task 4: Sierpinski CLI Integration

Wire the Sierpinski algorithm to a CLI subcommand.

**Do:**
- Create `internal/cli/sierpinski.go` with `sierpinski` subcommand
- Add flags: `--size` (default 32), `--depth` (default 5), `--char` (default '*')
- Call `sierpinski.Generate()` and print result to stdout

**Verify:**
- `./fractals sierpinski` outputs a triangle
- `./fractals sierpinski --size 16 --depth 3` outputs smaller triangle
- `./fractals sierpinski --help` shows flag documentation

---

### Task 5: Mandelbrot Algorithm

Implement the Mandelbrot set ASCII renderer.

**Do:**
- Create `internal/mandelbrot/mandelbrot.go`
- Implement `Render(width, height, maxIter int, char string) []string`
- Map complex plane region (-2.5 to 1.0 real, -1.0 to 1.0 imaginary) to output dimensions
- Map iteration count to character gradient " .:-=+*#%@" (or single char if provided)
- Create `internal/mandelbrot/mandelbrot_test.go` with tests:
  - Output dimensions match requested width/height
  - Known point inside set (0,0) maps to max-iteration character
  - Known point outside set (2,0) maps to low-iteration character

**Verify:**
- `go test ./internal/mandelbrot/...` passes

---

### Task 6: Mandelbrot CLI Integration

Wire the Mandelbrot algorithm to a CLI subcommand.

**Do:**
- Create `internal/cli/mandelbrot.go` with `mandelbrot` subcommand
- Add flags: `--width` (default 80), `--height` (default 24), `--iterations` (default 100), `--char` (default "")
- Call `mandelbrot.Render()` and print result to stdout

**Verify:**
- `./fractals mandelbrot` outputs recognizable Mandelbrot set
- `./fractals mandelbrot --width 40 --height 12` outputs smaller version
- `./fractals mandelbrot --help` shows flag documentation

---

### Task 7: Character Set Configuration

Ensure `--char` flag works consistently across both commands.

**Do:**
- Verify Sierpinski `--char` flag passes character to algorithm
- For Mandelbrot, `--char` should use single character instead of gradient
- Add tests for custom character output

**Verify:**
- `./fractals sierpinski --char '#'` uses '#' character
- `./fractals mandelbrot --char '.'` uses '.' for all filled points
- Tests pass

---

### Task 8: Input Validation and Error Handling

Add validation for invalid inputs.

**Do:**
- Sierpinski: size must be > 0, depth must be >= 0
- Mandelbrot: width/height must be > 0, iterations must be > 0
- Return clear error messages for invalid inputs
- Add tests for error cases

**Verify:**
- `./fractals sierpinski --size 0` prints error, exits non-zero
- `./fractals mandelbrot --width -1` prints error, exits non-zero
- Error messages are clear and helpful

---

### Task 9: Integration Tests

Add integration tests that invoke the CLI.

**Do:**
- Create `cmd/fractals/main_test.go` or `test/integration_test.go`
- Test full CLI invocation for both commands
- Verify output format and exit codes
- Test error cases return non-zero exit

**Verify:**
- `go test ./...` passes all tests including integration tests

---

### Task 10: README

Document usage and examples.

**Do:**
- Create `README.md` with:
  - Project description
  - Installation: `go install ./cmd/fractals`
  - Usage examples for both commands
  - Example output (small samples)

**Verify:**
- README accurately describes the tool
- Examples in README actually work
@@ -1,45 +0,0 @@
#!/usr/bin/env bash
# Scaffold the Go Fractals test project
# Usage: ./scaffold.sh /path/to/target/directory

set -e

TARGET_DIR="${1:?Usage: $0 <target-directory>}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"

# Create target directory
mkdir -p "$TARGET_DIR"
cd "$TARGET_DIR"

# Initialize git repo
git init

# Copy design and plan
cp "$SCRIPT_DIR/design.md" .
cp "$SCRIPT_DIR/plan.md" .

# Create .claude settings to allow reads/writes in this directory
mkdir -p .claude
cat > .claude/settings.local.json << 'SETTINGS'
{
  "permissions": {
    "allow": [
      "Read(**)",
      "Edit(**)",
      "Write(**)",
      "Bash(go:*)",
      "Bash(mkdir:*)",
      "Bash(git:*)"
    ]
  }
}
SETTINGS

# Create initial commit
git add .
git commit -m "Initial project setup with design and plan"

echo "Scaffolded Go Fractals project at: $TARGET_DIR"
echo ""
echo "To run the test:"
echo "  claude -p \"Execute this plan using superpowers:subagent-driven-development. Plan: $TARGET_DIR/plan.md\" --plugin-dir /path/to/superpowers"
@@ -1,106 +0,0 @@
#!/usr/bin/env bash
# Run a subagent-driven-development test
# Usage: ./run-test.sh <test-name> [--plugin-dir <path>]
#
# Example:
#   ./run-test.sh go-fractals
#   ./run-test.sh svelte-todo --plugin-dir /path/to/superpowers

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
TEST_NAME="${1:?Usage: $0 <test-name> [--plugin-dir <path>]}"
shift

# Parse optional arguments
PLUGIN_DIR=""
while [[ $# -gt 0 ]]; do
  case $1 in
    --plugin-dir)
      PLUGIN_DIR="$2"
      shift 2
      ;;
    *)
      echo "Unknown option: $1"
      exit 1
      ;;
  esac
done

# Default plugin dir to parent of tests directory
if [[ -z "$PLUGIN_DIR" ]]; then
  PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
fi

# Verify test exists
TEST_DIR="$SCRIPT_DIR/$TEST_NAME"
if [[ ! -d "$TEST_DIR" ]]; then
  echo "Error: Test '$TEST_NAME' not found at $TEST_DIR"
  echo "Available tests:"
  ls -1 "$SCRIPT_DIR" | grep -v '\.sh$' | grep -v '\.md$'
  exit 1
fi

# Create timestamped output directory
TIMESTAMP=$(date +%s)
OUTPUT_BASE="/tmp/superpowers-tests/$TIMESTAMP/subagent-driven-development"
OUTPUT_DIR="$OUTPUT_BASE/$TEST_NAME"
mkdir -p "$OUTPUT_DIR"

echo "=== Subagent-Driven Development Test ==="
echo "Test: $TEST_NAME"
echo "Output: $OUTPUT_DIR"
echo "Plugin: $PLUGIN_DIR"
echo ""

# Scaffold the project
echo ">>> Scaffolding project..."
"$TEST_DIR/scaffold.sh" "$OUTPUT_DIR/project"
echo ""

# Prepare the prompt
PLAN_PATH="$OUTPUT_DIR/project/plan.md"
PROMPT="Execute this plan using superpowers:subagent-driven-development. The plan is at: $PLAN_PATH"

# Run Claude with JSON output for token tracking
LOG_FILE="$OUTPUT_DIR/claude-output.json"
echo ">>> Running Claude..."
echo "Prompt: $PROMPT"
echo "Log file: $LOG_FILE"
echo ""

# Run claude and capture output
# Using stream-json to get token usage stats
# --dangerously-skip-permissions for automated testing (subagents don't inherit parent settings)
cd "$OUTPUT_DIR/project"
claude -p "$PROMPT" \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --output-format stream-json \
  --verbose \
  > "$LOG_FILE" 2>&1 || true

# Extract final stats
echo ""
echo ">>> Test complete"
echo "Project directory: $OUTPUT_DIR/project"
echo "Claude log: $LOG_FILE"
echo ""

# Show token usage if available
if command -v jq &> /dev/null; then
  echo ">>> Token usage:"
  # Extract usage from the last message with usage info
  jq -s '[.[] | select(.type == "result")] | last | .usage' "$LOG_FILE" 2>/dev/null || echo "(could not parse usage)"
  echo ""
fi

echo ">>> Next steps:"
echo "1. Review the project: cd $OUTPUT_DIR/project"
echo "2. Review Claude's log: less $LOG_FILE"
echo "3. Check if tests pass:"
if [[ "$TEST_NAME" == "go-fractals" ]]; then
  echo "   cd $OUTPUT_DIR/project && go test ./..."
elif [[ "$TEST_NAME" == "svelte-todo" ]]; then
  echo "   cd $OUTPUT_DIR/project && npm test && npx playwright test"
fi
@@ -1,70 +0,0 @@
# Svelte Todo List - Design

## Overview

A simple todo list application built with Svelte. Supports creating, completing, and deleting todos with localStorage persistence.

## Features

- Add new todos
- Mark todos as complete/incomplete
- Delete todos
- Filter by: All / Active / Completed
- Clear all completed todos
- Persist to localStorage
- Show count of remaining items

## User Interface

```
┌─────────────────────────────────────────┐
│ Svelte Todos                            │
├─────────────────────────────────────────┤
│ [________________________]  [Add]       │
├─────────────────────────────────────────┤
│ [ ] Buy groceries                   [x] │
│ [✓] Walk the dog                    [x] │
│ [ ] Write code                      [x] │
├─────────────────────────────────────────┤
│ 2 items left                            │
│ [All] [Active] [Completed]  [Clear ✓]   │
└─────────────────────────────────────────┘
```

## Components

```
src/
  App.svelte           # Main app, state management
  lib/
    TodoInput.svelte   # Text input + Add button
    TodoList.svelte    # List container
    TodoItem.svelte    # Single todo with checkbox, text, delete
    FilterBar.svelte   # Filter buttons + clear completed
    store.ts           # Svelte store for todos
    storage.ts         # localStorage persistence
```

## Data Model

```typescript
interface Todo {
  id: string;        // UUID
  text: string;      // Todo text
  completed: boolean;
}

type Filter = 'all' | 'active' | 'completed';
```

## Acceptance Criteria

1. Can add a todo by typing and pressing Enter or clicking Add
2. Can toggle todo completion by clicking checkbox
3. Can delete a todo by clicking X button
4. Filter buttons show correct subset of todos
5. "X items left" shows count of incomplete todos
6. "Clear completed" removes all completed todos
7. Todos persist across page refresh (localStorage)
8. Empty state shows helpful message
9. All tests pass
@@ -1,222 +0,0 @@
# Svelte Todo List - Implementation Plan

Execute this plan using the `superpowers:subagent-driven-development` skill.

## Context

Building a todo list app with Svelte. See `design.md` for full specification.

## Tasks

### Task 1: Project Setup

Create the Svelte project with Vite.

**Do:**
- Run `npm create vite@latest . -- --template svelte-ts`
- Install dependencies with `npm install`
- Verify dev server works
- Clean up default Vite template content from App.svelte

**Verify:**
- `npm run dev` starts server
- App shows minimal "Svelte Todos" heading
- `npm run build` succeeds

---

### Task 2: Todo Store

Create the Svelte store for todo state management.

**Do:**
- Create `src/lib/store.ts`
- Define `Todo` interface with id, text, completed
- Create writable store with initial empty array
- Export functions: `addTodo(text)`, `toggleTodo(id)`, `deleteTodo(id)`, `clearCompleted()`
- Create `src/lib/store.test.ts` with tests for each function

**Verify:**
- Tests pass: `npm run test` (install vitest if needed)
||||
---
|
||||
|
||||
### Task 3: localStorage Persistence
|
||||
|
||||
Add persistence layer for todos.
|
||||
|
||||
**Do:**
|
||||
- Create `src/lib/storage.ts`
|
||||
- Implement `loadTodos(): Todo[]` and `saveTodos(todos: Todo[])`
|
||||
- Handle JSON parse errors gracefully (return empty array)
|
||||
- Integrate with store: load on init, save on change
|
||||
- Add tests for load/save/error handling
|
||||
|
||||
**Verify:**
|
||||
- Tests pass
|
||||
- Manual test: add todo, refresh page, todo persists
|
||||
|
||||
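The "handle JSON parse errors gracefully" requirement reduces to a tolerant load path — sketched in Python (the project's `loadTodos` would do the same with `localStorage.getItem` and `JSON.parse`):

```python
import json

def load_todos(raw):
    """Parse stored todos; any missing or corrupt value yields an
    empty list rather than an exception."""
    if raw is None:
        return []  # nothing stored yet
    try:
        todos = json.loads(raw)
    except json.JSONDecodeError:
        return []  # corrupt storage: start fresh
    return todos if isinstance(todos, list) else []
```

The final `isinstance` guard matters: valid JSON that isn't a list (e.g. a stray string) would otherwise crash everything that iterates the todos.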
---

### Task 4: TodoInput Component

Create the input component for adding todos.

**Do:**
- Create `src/lib/TodoInput.svelte`
- Text input bound to local state
- Add button calls `addTodo()` and clears input
- Enter key also submits
- Disable Add button when input is empty
- Add component tests

**Verify:**
- Tests pass
- Component renders input and button

---

### Task 5: TodoItem Component

Create the single todo item component.

**Do:**
- Create `src/lib/TodoItem.svelte`
- Props: `todo: Todo`
- Checkbox toggles completion (calls `toggleTodo`)
- Text with strikethrough when completed
- Delete button (X) calls `deleteTodo`
- Add component tests

**Verify:**
- Tests pass
- Component renders checkbox, text, delete button

---

### Task 6: TodoList Component

Create the list container component.

**Do:**
- Create `src/lib/TodoList.svelte`
- Props: `todos: Todo[]`
- Renders TodoItem for each todo
- Shows "No todos yet" when empty
- Add component tests

**Verify:**
- Tests pass
- Component renders list of TodoItems

---

### Task 7: FilterBar Component

Create the filter and status bar component.

**Do:**
- Create `src/lib/FilterBar.svelte`
- Props: `todos: Todo[]`, `filter: Filter`, `onFilterChange: (f: Filter) => void`
- Show count: "X items left" (incomplete count)
- Three filter buttons: All, Active, Completed
- Active filter is visually highlighted
- "Clear completed" button (hidden when no completed todos)
- Add component tests

**Verify:**
- Tests pass
- Component renders count, filters, clear button

---

### Task 8: App Integration

Wire all components together in App.svelte.

**Do:**
- Import all components and store
- Add filter state (default: 'all')
- Compute filtered todos based on filter state
- Render: heading, TodoInput, TodoList, FilterBar
- Pass appropriate props to each component

**Verify:**
- App renders all components
- Adding todos works
- Toggling works
- Deleting works

---

### Task 9: Filter Functionality

Ensure filtering works end-to-end.

**Do:**
- Verify filter buttons change displayed todos
- 'all' shows all todos
- 'active' shows only incomplete todos
- 'completed' shows only completed todos
- Clear completed removes completed todos and resets filter if needed
- Add integration tests

**Verify:**
- Filter tests pass
- Manual verification of all filter states

---

### Task 10: Styling and Polish

Add CSS styling for usability.

**Do:**
- Style the app to match the design mockup
- Completed todos have strikethrough and muted color
- Active filter button is highlighted
- Input has focus styles
- Delete button appears on hover (or always on mobile)
- Responsive layout

**Verify:**
- App is visually usable
- Styles don't break functionality

---

### Task 11: End-to-End Tests

Add Playwright tests for full user flows.

**Do:**
- Install Playwright: `npm init playwright@latest`
- Create `tests/todo.spec.ts`
- Test flows:
  - Add a todo
  - Complete a todo
  - Delete a todo
  - Filter todos
  - Clear completed
  - Persistence (add, reload, verify)

**Verify:**
- `npx playwright test` passes

---

### Task 12: README

Document the project.

**Do:**
- Create `README.md` with:
  - Project description
  - Setup: `npm install`
  - Development: `npm run dev`
  - Testing: `npm test` and `npx playwright test`
  - Build: `npm run build`

**Verify:**
- README accurately describes the project
- Instructions work
@@ -1,46 +0,0 @@
#!/usr/bin/env bash
# Scaffold the Svelte Todo test project
# Usage: ./scaffold.sh /path/to/target/directory

set -e

TARGET_DIR="${1:?Usage: $0 <target-directory>}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"

# Create target directory
mkdir -p "$TARGET_DIR"
cd "$TARGET_DIR"

# Initialize git repo
git init

# Copy design and plan
cp "$SCRIPT_DIR/design.md" .
cp "$SCRIPT_DIR/plan.md" .

# Create .claude settings to allow reads/writes in this directory
mkdir -p .claude
cat > .claude/settings.local.json << 'SETTINGS'
{
  "permissions": {
    "allow": [
      "Read(**)",
      "Edit(**)",
      "Write(**)",
      "Bash(npm:*)",
      "Bash(npx:*)",
      "Bash(mkdir:*)",
      "Bash(git:*)"
    ]
  }
}
SETTINGS

# Create initial commit
git add .
git commit -m "Initial project setup with design and plan"

echo "Scaffolded Svelte Todo project at: $TARGET_DIR"
echo ""
echo "To run the test:"
echo "  claude -p \"Execute this plan using superpowers:subagent-driven-development. Plan: $TARGET_DIR/plan.md\" --plugin-dir /path/to/superpowers"