mirror of https://github.com/obra/superpowers.git
synced 2026-05-15 13:39:05 +08:00

Compare commits — 34 commits (codex/supe...f/cross-pl)
| Author | SHA1 | Date |
|---|---|---|
| | 2f5c75d91c | |
| | 211618a254 | |
| | 72b5f19b60 | |
| | f1db631021 | |
| | 2d4447b1e0 | |
| | 72b59619ad | |
| | d25618db58 | |
| | 3d6dc90c6d | |
| | a152bb3932 | |
| | 3dfb376268 | |
| | 491df7360c | |
| | 9088f563e7 | |
| | d4cf61b4c8 | |
| | 7f02ccd91b | |
| | 35e42a16ce | |
| | 58082d04f8 | |
| | 3dc0ea6876 | |
| | 0bf37499b4 | |
| | f7c5312265 | |
| | f5175fb31a | |
| | 45c7dc2cce | |
| | 39d29a6c28 | |
| | f1d2005de3 | |
| | c0a65f1b4d | |
| | f10cddac0d | |
| | 371f41596b | |
| | 6f0adebe96 | |
| | fd5b53cb85 | |
| | be0357f98a | |
| | 3b412a3836 | |
| | 2e46e9590d | |
| | 58f821314d | |
| | 81472cc9e6 | |
| | b4363df1b9 | |
@@ -19,7 +19,5 @@
    "workflows"
  ],
  "skills": "./skills/",
  "agents": "./agents/",
  "commands": "./commands/",
  "hooks": "./hooks/hooks-cursor.json"
}
6 .gitignore vendored
@@ -5,3 +5,9 @@
 node_modules/
 inspo
 triage/
+
+# Eval harness — drill ships its own gitignore at evals/.gitignore;
+# these are belt-and-suspenders entries for tools that don't recurse.
+evals/results/
+evals/.venv/
+evals/.env
3 .gitmodules vendored Normal file
@@ -0,0 +1,3 @@
[submodule "evals"]
	path = evals
	url = git@github.com:prime-radiant-inc/superpowers-evals.git
@@ -45,7 +45,7 @@ Use OpenCode's native `skill` tool:

 ```
 use skill tool to list skills
-use skill tool to load superpowers/brainstorming
+use skill tool to load brainstorming
 ```

 ## Updating
@@ -98,11 +98,16 @@ Then use the installed package path in `opencode.json`:

 ### Tool mapping

-When skills reference Claude Code tools:
-- `TodoWrite` → `todowrite`
-- `Task` with subagents → `@mention` syntax
-- `Skill` tool → OpenCode's native `skill` tool
-- File operations → your native tools
+Skills speak in actions ("create a todo", "dispatch a subagent", "read a file"). On OpenCode these resolve to:
+
+- "Create a todo" / "mark complete in todo list" → `todowrite`
+- `Subagent (general-purpose):` template → `task` tool with `subagent_type: "general"` (or `"explore"` for codebase exploration)
+- "Invoke a skill" → OpenCode's native `skill` tool
+- "Read a file" → `read`
+- "Create a file" / "edit a file" / "delete a file" → `apply_patch`
+- "Run a shell command" → `bash`
+- "Search file contents" / "find files by name" → `grep`, `glob`
+- "Fetch a URL" → `webfetch`
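For illustration only — not part of the plugin — the action-to-tool list above can be phrased as a lookup table. The tool names come from the list; the helper and its structure are hypothetical:

```javascript
// Hypothetical sketch: the action→tool mapping above as a lookup table.
// Keys are action phrases from the list; values are OpenCode tool names.
const ACTION_TO_TOOL = {
  "create a todo": "todowrite",
  "invoke a skill": "skill",
  "read a file": "read",
  "create a file": "apply_patch",
  "edit a file": "apply_patch",
  "delete a file": "apply_patch",
  "run a shell command": "bash",
  "search file contents": "grep",
  "find files by name": "glob",
  "fetch a url": "webfetch",
};

function resolveAction(action) {
  // Return the OpenCode tool for an action phrase, or null if unmapped.
  return ACTION_TO_TOOL[action.toLowerCase()] ?? null;
}
```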
## Getting Help
@@ -1,7 +1,7 @@
 /**
  * Superpowers plugin for OpenCode.ai
  *
- * Injects superpowers bootstrap context via system prompt transform.
+ * Injects superpowers bootstrap context via message transform.
  * Auto-registers skills directory via config hook (no symlinks needed).
  */
@@ -74,11 +74,15 @@ export const SuperpowersPlugin = async ({ client, directory }) => {
 const { content } = extractAndStripFrontmatter(fullContent);

 const toolMapping = `**Tool Mapping for OpenCode:**
-When skills reference tools you don't have, substitute OpenCode equivalents:
-- \`TodoWrite\` → \`todowrite\`
-- \`Task\` tool with subagents → Use OpenCode's subagent system (@mention)
-- \`Skill\` tool → OpenCode's native \`skill\` tool
-- \`Read\`, \`Write\`, \`Edit\`, \`Bash\` → Your native tools
+When skills request actions, substitute OpenCode equivalents:
+- Create or update todos → \`todowrite\`
+- \`Subagent (general-purpose):\` → \`task\` with \`subagent_type: "general"\`
+- Invoke a skill → OpenCode's native \`skill\` tool
+- Read files → \`read\`
+- Create, edit, or delete files → \`apply_patch\`
+- Run shell commands → \`bash\`
+- Search files → \`grep\`, \`glob\`
+- Fetch a URL → \`webfetch\`

 Use OpenCode's native \`skill\` tool to list and load skills.`;
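A hedged sketch of what such a message transform might do — the hook name (`experimental.chat.messages.transform`) comes from this repo's docs, but the message shape and function name below are assumptions for illustration, not the plugin's actual code:

```javascript
// Hypothetical sketch (not the plugin's actual code): prepend bootstrap
// context to a conversation, the way a messages-transform hook might.
// Assumes messages are plain { role, content } objects.
function injectBootstrap(messages, bootstrapText) {
  // Don't inject twice if the transform runs on every turn.
  const already = messages.some(
    (m) => m.role === "system" && m.content.includes(bootstrapText)
  );
  if (already) return messages;
  return [{ role: "system", content: bootstrapText }, ...messages];
}
```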
21 .pre-commit-config.yaml Normal file
@@ -0,0 +1,21 @@
repos:
  - repo: local
    hooks:
      - id: evals-ruff-check
        name: evals ruff check
        entry: uv --project evals run ruff check
        language: system
        files: ^evals/.*\.py$

      - id: evals-ruff-format-check
        name: evals ruff format --check
        entry: uv --project evals run ruff format --check
        language: system
        files: ^evals/.*\.py$

      - id: evals-ty-check
        name: evals ty check
        entry: uv --directory evals run ty check
        language: system
        pass_filenames: false
        files: ^evals/.*\.py$
@@ -94,6 +94,10 @@ Skills are not prose — they are code that shapes agent behavior. If you modify
 - Show before/after eval results in your PR
 - Do not modify carefully-tuned content (Red Flags tables, rationalization lists, "human partner" language) without evidence the change is an improvement

+## Eval harness
+
+Skill-behavior evals live in the `evals/` submodule — after cloning, run `git submodule update --init evals`, then see `evals/README.md`. Drill (the harness) drives real tmux sessions of Claude Code / Codex / Gemini CLI and judges skill compliance with an LLM verifier. Plugin-infrastructure tests still live at `tests/`.
+
 ## Understand the Project Before Contributing

 Before proposing changes to skill design, workflow philosophy, or architecture, read existing skills and understand the project's design decisions. Superpowers has its own tested philosophy about skill design, agent behavior shaping, and terminology (e.g., "your human partner" is deliberate, not interchangeable with "the user"). Changes that rewrite the project's voice or restructure its approach without understanding why it exists will be rejected.
64 README.md
@@ -4,7 +4,7 @@ Superpowers is a complete software development methodology for your coding agent

 ## Quickstart

-Give your agent Superpowers: [Claude Code](#claude-code), [Codex CLI](#codex-cli), [Codex App](#codex-app), [Factory Droid](#factory-droid), [Gemini CLI](#gemini-cli), [OpenCode](#opencode), [Cursor](#cursor), [GitHub Copilot CLI](#github-copilot-cli).
+Give your agent Superpowers: [Claude Code](#claude-code), [Codex App](#codex-app), [Codex CLI](#codex-cli), [Cursor](#cursor), [Factory Droid](#factory-droid), [Gemini CLI](#gemini-cli), [GitHub Copilot CLI](#github-copilot-cli), [OpenCode](#opencode).

 ## How it works
@@ -14,7 +14,7 @@ Once it's teased a spec out of the conversation, it shows it to you in chunks sh

 After you've signed off on the design, your agent puts together an implementation plan that's clear enough for an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing to follow. It emphasizes true red/green TDD, YAGNI (You Aren't Gonna Need It), and DRY.

-Next up, once you say "go", it launches a *subagent-driven-development* process, having agents work through each engineering task, inspecting and reviewing their work, and continuing forward. It's not uncommon for Claude to be able to work autonomously for a couple hours at a time without deviating from the plan you put together.
+Next up, once you say "go", it launches a *subagent-driven-development* process, having agents work through each engineering task, inspecting and reviewing their work, and continuing forward. It's not uncommon for your agent to work autonomously for a couple hours at a time without deviating from the plan you put together.

 There's a bunch more to it, but that's the core of the system. And because the skills trigger automatically, you don't need to do anything special. Your coding agent just has Superpowers.
@@ -25,7 +25,7 @@ If Superpowers has helped you do stuff that makes money and you are so inclined,

 Thanks!

-- Jesse
+\- Jesse

 ## Installation
@@ -60,6 +60,14 @@ The Superpowers marketplace provides Superpowers and some other related plugins
 /plugin install superpowers@superpowers-marketplace
 ```

+### Codex App
+
+Superpowers is available via the [official Codex plugin marketplace](https://github.com/openai/plugins).
+
+- In the Codex app, click on Plugins in the sidebar.
+- You should see `Superpowers` in the Coding section.
+- Click the `+` next to Superpowers and follow the prompts.
+
 ### Codex CLI

 Superpowers is available via the [official Codex plugin marketplace](https://github.com/openai/plugins).
@@ -78,13 +86,15 @@ Superpowers is available via the [official Codex plugin marketplace](https://git

 - Select `Install Plugin`.

-### Codex App
+### Cursor

-Superpowers is available via the [official Codex plugin marketplace](https://github.com/openai/plugins).
+- In Cursor Agent chat, install from marketplace:

-- In the Codex app, click on Plugins in the sidebar.
-- You should see `Superpowers` in the Coding section.
-- Click the `+` next to Superpowers and follow the prompts.
+```text
+/add-plugin superpowers
+```
+
+- Or search for "superpowers" in the plugin marketplace.

 ### Factory Droid
@@ -114,29 +124,6 @@ Superpowers is available via the [official Codex plugin marketplace](https://git
 gemini extensions update superpowers
 ```

-### OpenCode
-
-OpenCode uses its own plugin install; install Superpowers separately even if you
-already use it in another harness.
-
-- Tell OpenCode:
-
-```
-Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md
-```
-
-- Detailed docs: [docs/README.opencode.md](docs/README.opencode.md)
-
-### Cursor
-
-- In Cursor Agent chat, install from marketplace:
-
-```text
-/add-plugin superpowers
-```
-
-- Or search for "superpowers" in the plugin marketplace.

 ### GitHub Copilot CLI

 - Register the marketplace:
@@ -151,6 +138,19 @@ already use it in another harness.
 copilot plugin install superpowers@superpowers-marketplace
 ```

+### OpenCode
+
+OpenCode uses its own plugin install; install Superpowers separately even if you
+already use it in another harness.
+
+- Tell OpenCode:
+
+```
+Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md
+```
+
+- Detailed docs: [docs/README.opencode.md](docs/README.opencode.md)
+
 ## The Basic Workflow

 1. **brainstorming** - Activates before writing code. Refines rough ideas through questions, explores alternatives, presents design in sections for validation. Saves design document.
@@ -214,6 +214,8 @@ The general contribution process for Superpowers is below. Keep in mind that we
 4. Follow the `writing-skills` skill for creating and testing new and modified skills
 5. Submit a PR, being sure to fill in the pull request template.

+Skill-behavior tests use the eval harness submodule at `evals/`. After cloning this repo, run `git submodule update --init evals`, then see `evals/README.md` for setup. Plugin-infrastructure tests live at `tests/` and run via the relevant `run-*.sh` or `npm test`.
+
 See `skills/writing-skills/SKILL.md` for the complete guide.

 ## Updating
@@ -50,6 +50,8 @@ New `sync-to-codex-plugin` script mirrors superpowers into the OpenAI Codex plug
 - **Single source of truth** — the persona/checklist that previously lived in both `agents/code-reviewer.md` and the skill's placeholder template (and drifted independently) is now one file.
 - **`subagent-driven-development` follows suit** — its `code-quality-reviewer-prompt.md` now dispatches `Task (general-purpose)` instead of the named agent.
 - **Behavioral test added** — `tests/claude-code/test-requesting-code-review.sh` plants real bugs (SQL injection, plaintext password handling, credential logging) into a tiny project and asserts the dispatched reviewer flags every planted issue at Critical/Important severity and refuses to approve the diff.
+
+> Note: `tests/claude-code/test-requesting-code-review.sh` and `tests/claude-code/test-document-review-system.sh` (mentioned later in this document) were lifted into drill scenarios on 2026-05-06 and removed from `tests/`. See `evals/scenarios/code-review-catches-planted-bugs.yaml` and `evals/scenarios/spec-reviewer-catches-planted-flaws.yaml`. The references above and below are preserved as dated artifacts of the work this section describes.
 - **Codex and Copilot workaround docs trimmed** — the "Named agent dispatch" sections in `references/codex-tools.md` and `references/copilot-tools.md` documented how to flatten a named agent into a generic dispatch. With no named agents shipping, the workaround is unnecessary; both sections were dropped.

 ### Subagent-Driven Development
@@ -50,7 +50,7 @@ use skill tool to list skills
 ### Loading a Skill

 ```
-use skill tool to load superpowers/brainstorming
+use skill tool to load brainstorming
 ```

 ### Personal Skills
@@ -99,17 +99,23 @@ To pin a specific version, use a branch or tag:

 The plugin does two things:

-1. **Injects bootstrap context** via the `experimental.chat.system.transform` hook, adding superpowers awareness to every conversation.
+1. **Injects bootstrap context** via the `experimental.chat.messages.transform` hook, adding superpowers awareness to every conversation.
 2. **Registers the skills directory** via the `config` hook, so OpenCode discovers all superpowers skills without symlinks or manual config.

 ### Tool Mapping

-Skills written for Claude Code are automatically adapted for OpenCode:
+Skills speak in actions rather than naming any one runtime's tools. On OpenCode these resolve to:

-- `TodoWrite` → `todowrite`
-- `Task` with subagents → OpenCode's `@mention` system
-- `Skill` tool → OpenCode's native `skill` tool
-- File operations → Native OpenCode tools
+- "Create a todo" / "mark complete in todo list" → `todowrite`
+- `Subagent (general-purpose):` template → OpenCode's `task` tool with `subagent_type: "general"` (or `"explore"` for codebase exploration)
+- "Invoke a skill" → OpenCode's native `skill` tool
+- "Read a file" → `read`
+- "Create a file" / "edit a file" / "delete a file" → `apply_patch`
+- "Run a shell command" → `bash`
+- "Search file contents" / "find files by name" → `grep`, `glob`
+- "Fetch a URL" → `webfetch`
+
+(Verified against the installed OpenCode CLI's tool inventory.)

 ## Troubleshooting
@@ -147,7 +153,7 @@ Then use the installed package path in `opencode.json`:

 ### Bootstrap not appearing

-1. Check OpenCode version supports `experimental.chat.system.transform` hook
+1. Check OpenCode version supports `experimental.chat.messages.transform` hook
 2. Restart OpenCode after config changes

 ## Getting Help
@@ -555,6 +555,8 @@ Should show exactly 6 files changed (5 skill files + 1 test file). No other file
 If test runner exists:
 ```bash
 # Run skill-triggering tests
+# Note: tests/skill-triggering/ was lifted into drill scenarios on 2026-05-06.
+# See evals/scenarios/triggering-*.yaml. The reference below is a dated artifact.
 ./tests/skill-triggering/run-all.sh 2>/dev/null || echo "Skill triggering tests not available in this environment"

 # Run SDD integration test
1374 docs/superpowers/plans/2026-05-06-lift-drill-into-evals.md Normal file
File diff suppressed because it is too large
@@ -0,0 +1,77 @@
# Platform-neutral config-file references — Phase B design

## Background

Phase A (see `2026-05-05-platform-neutral-prose-design.md`) replaced generic third-person "Claude" prose with agent-neutral forms. This phase tackles the next category: references to the per-platform instruction file (CLAUDE.md, AGENTS.md, GEMINI.md) inside skills.

The plugin runs on multiple harnesses, and each one reads its own instruction file. Where a skill names CLAUDE.md as if it were the only file, that's a Claude-Code-centric assumption that doesn't hold on Codex / Gemini CLI / OpenCode.

## In scope

Two specific lines in active skills:

1. **`skills/writing-skills/SKILL.md:58`** — `Project-specific conventions (put in CLAUDE.md)`
2. **`skills/receiving-code-review/SKILL.md:30`** — `"You're absolutely right!" (explicit CLAUDE.md violation)`

## Out of scope

- **`skills/using-superpowers/SKILL.md:22, 26`** — instruction-priority list. The list already names all three (CLAUDE.md, GEMINI.md, AGENTS.md) inclusively, which is correct: the section is making a real claim about *what counts as user instruction* on a multi-platform plugin. No change needed.
- **Historical / example artifacts**:
  - `skills/systematic-debugging/CREATION-LOG.md` — attribution path (`~/.claude/CLAUDE.md`) is a historical fact.
  - `skills/writing-skills/examples/CLAUDE_MD_TESTING.md` — the entire file is a worked example testing CLAUDE.md content variants. The filename, body, and the reference from `testing-skills-with-subagents.md` all stay; normalizing them defeats the example.
- **Platform-tooling references** — Phase D candidates:
  - `skills/using-superpowers/SKILL.md:40` (Gemini CLI tool mapping note about GEMINI.md)
  - `skills/using-superpowers/references/gemini-tools.md` (`save_memory` persists to GEMINI.md)

## Substitution rules

Two distinct calls, one per in-scope line.

### Rule 1: "where to put project-specific conventions"

`writing-skills/SKILL.md:58`:

- **Before:** `Project-specific conventions (put in CLAUDE.md)`
- **After:** `Project-specific conventions (put in your instructions file)`

Use a generic phrase rather than picking one filename. Different harnesses read different files (CLAUDE.md, AGENTS.md, GEMINI.md, etc.) and the skill should not assume one. The platform-tools reference docs (`references/{codex,copilot,gemini}-tools.md`) are the right place to name each platform's preferred file.

### Rule 2: the "(explicit CLAUDE.md violation)" parenthetical

`receiving-code-review/SKILL.md:30`:

- **Before:** `"You're absolutely right!" (explicit CLAUDE.md violation)`
- **After:** `"You're absolutely right!" (explicit instruction-file violation)`

The parenthetical is doing real work — it signals this phrase isn't just stylistically bad, it actively violates rules many users put in their instruction files. "Instruction file" is the natural cross-platform term covering AGENTS.md / CLAUDE.md / GEMINI.md collectively, and keeps the original signal without picking one filename or softening to "common".

## Commit plan

Atomic commits, in order:

1. **`writing-skills/SKILL.md`** — CLAUDE.md → "your instructions file" in the "where to put project conventions" line
2. **`receiving-code-review/SKILL.md`** — CLAUDE.md → instruction-file in the violation parenthetical
3. **Platform-tools reference docs** — add the preferred per-platform instructions filename (CLAUDE.md, AGENTS.md, GEMINI.md, etc.) to each `references/{codex,copilot,gemini}-tools.md` so readers can resolve "your instructions file" to a real filename.

Each commit message names "Phase B" and the slice.

## Verification

After each commit:

- Read the surrounding paragraph to confirm grammar and meaning still parse.
- `grep -n "CLAUDE\.md" <touched-file>` — no remaining hits in active prose (carve-outs already documented).

After both skill-file commits:

- `grep -rn "CLAUDE\.md" skills/` should return only the documented carve-outs (CREATION-LOG, CLAUDE_MD_TESTING and its inbound reference, the priority list in using-superpowers).

## Non-goals

- Do not touch the priority list ordering in `using-superpowers/SKILL.md`. Reordering CLAUDE.md / GEMINI.md / AGENTS.md is an aesthetic change, not a substitution, and out of scope here.
- Do not rename `examples/CLAUDE_MD_TESTING.md` or change its content.
- Do not modify Gemini-CLI-specific tooling references (Phase D candidates).

## Implementation note

Phase B as written here covered three commits and the three non-Claude-Code platform-tools refs. Implementation went one step further: a fourth ref, `references/claude-code-tools.md`, was added in commit `8505703` for symmetry, so Claude Code's instructions-file conventions and tool-name list live alongside the others rather than implicitly in the surrounding skill prose. That addition wasn't anticipated in this spec but is consistent with its intent.
@@ -0,0 +1,94 @@
# Platform-neutral prose — Phase A design

## Background

Superpowers ships to multiple agent runtimes (Claude Code, Codex, Cursor, OpenCode, Copilot CLI, Gemini CLI). Skill content and supporting docs were written first for Claude Code and use "Claude" in places where any runtime's agent applies. OpenAI's vendored fork (openai/plugins#217) attempted a wholesale rewrite that was actively wrong in places — rewriting historical attribution paths, model names, and platform-specific install instructions — and we want to avoid that mistake while still removing platform-centric prose where it is genuinely incidental.

The full effort is broken into phases by reference category. **This spec covers Phase A only:** generic third-person prose mentioning "Claude" in non-platform-specific contexts. Later phases (config-file references, marketing copy, tool-name references) are out of scope here and will get their own specs.

## In scope

Generic prose mentions of "Claude" in:

- `skills/*/SKILL.md` and supporting `.md` files in active skill directories
- `skills/writing-skills/anthropic-best-practices.md`
- `README.md` (only where the mention is generic prose, not platform marketing)

Plus one coined-term rename: **Claude Search Optimization (CSO) → Skill Discovery Optimization (SDO)** in `skills/writing-skills/SKILL.md`.

## Out of scope

- **Platform/runtime statements** — "In Claude Code:", install instructions, tool-mapping references. (Phase D candidate.)
- **Config-file references** — CLAUDE.md, AGENTS.md, GEMINI.md priority lists and "where to put project conventions" callouts. (Phase B.)
- **Tool-name references** — `Skill`, `Bash`, `Read`, `Task`, `TodoWrite`. Skills are written in Claude Code's tool vocabulary; the existing `references/{codex,copilot,gemini}-tools.md` files map them. (At the time this spec was written, the plan was to defer or skip these. Phase E ended up doing them — replacing tool names with action language across active skills and unifying the platform-tools refs around the same vocabulary.)
- **Marketing copy** in README — "Superpowers for Claude Code", platform-named install sections. (Phase C.)
- **Historical artifacts** — `docs/plans/*.md`, `docs/superpowers/specs/*.md`, `CREATION-LOG.md`. These are dated, point-in-time documents; rewriting them rewrites history.
- **Model identifiers** — Claude Haiku / Sonnet / Opus. These are real product names.
- **Filename / URL references** — `CLAUDE.md`, `claude.com`, `claude-plugin/`, paths under `~/.claude/`.
- **`anthropic-best-practices.md` filename** — the file remains named after its source even though we rewrite the prose inside it.

## Replacement style

Use a mix that reads naturally in English:

- **Second person — "your agent"** when addressing the skill author about *their* runtime
  - "your agent reads the description"
- **Third person — "the agent" / "agents" / "an agent"** when describing system behavior generically
  - "Future agents find your skills"
  - "Use words an agent would search for"
  - "Agents read SKILL.md only when the skill becomes relevant"

Pick whichever fits the surrounding sentence; do not force consistency at the cost of awkward phrasing. Pluralize when natural ("future agents", "agents read") rather than always saying "the agent".

### Carve-outs that stay as "Claude"

- Model names: Claude Haiku, Claude Sonnet, Claude Opus
- Filenames and URLs: `CLAUDE.md`, `claude.com`, `~/.claude/`
- Branded platform name "Claude Code" wherever it refers to the runtime as such (handled in later phases)
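For illustration only — no such script ships with the repo — the carve-out logic can be sketched as a filter that counts candidate "Claude" mentions while skipping the protected forms. The protected patterns come from the carve-out list; the helper itself is hypothetical:

```javascript
// Hypothetical helper (not part of the repo): count generic "Claude"
// mentions on a line while skipping the documented carve-outs.
const PROTECTED =
  /CLAUDE\.md|claude\.com|~\/\.claude\/|Claude Code|Claude (Haiku|Sonnet|Opus)/g;

function genericClaudeMentions(line) {
  // Strip protected forms first, then count what's left.
  const stripped = line.replace(PROTECTED, "");
  return (stripped.match(/\bClaude\b/g) ?? []).length;
}
```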
### Coined-term rename

- **Claude Search Optimization (CSO) → Skill Discovery Optimization (SDO)**
- Appears in `skills/writing-skills/SKILL.md` as a section heading and in nearby prose. Rename the heading, the acronym, and any in-file cross-references.

## Files affected

Approximate counts based on a `grep` filtered to exclude carve-outs:

| File | Generic-prose mentions |
|------|------------------------|
| `skills/writing-skills/SKILL.md` | ~12 (includes CSO heading + body) |
| `skills/writing-skills/anthropic-best-practices.md` | ~30 |
| `skills/writing-skills/examples/CLAUDE_MD_TESTING.md` | ~1 — filename stays (it's a CLAUDE.md test artifact); the "Variant C: Claude.AI Emphatic Style" heading also stays (it's a label naming a specific style) |
| `README.md` | ~1 |

Final list confirmed during implementation by re-running the filtered grep.

## Commit plan

Four atomic commits, in order:

1. **Rename CSO → SDO** in `skills/writing-skills/SKILL.md`. Mechanical, isolated, easy to revert if we change our minds about the term.
2. **Active skills prose** — generic "Claude" → "agent" forms across `skills/*/SKILL.md` and supporting `.md`, excluding `anthropic-best-practices.md`.
3. **`anthropic-best-practices.md` prose** — same substitution rules. Separate commit because this file is a vendored adaptation of an external doc; isolating the change makes future reconciliation with upstream easier to read.
4. **README.md prose** *(only if any generic-prose mentions remain after filtering)*. Skipped if empty.

Each commit message names the phase ("Phase A") and the slice ("rename CSO to SDO", "agent prose in active skills", etc.) so the series is self-documenting.

## Verification

After each commit:

- `grep -rn "Claude" <touched-paths>` — every remaining hit must fall into a documented carve-out (model name, filename, URL, "Claude Code" platform name, historical artifact).
- Read the touched file end-to-end — substitutions should not have broken sentence flow, pronoun agreement, or list parallelism.
- No tests to run; this is prose-only.

After the final commit:

- Skim each modified skill in a live session to confirm nothing reads awkwardly.

## Non-goals

- Do not change behavior, structure, headings (other than CSO→SDO), examples, code blocks, or YAML frontmatter.
- Do not introduce new sections, callouts, or compatibility notes.
- Do not "improve" prose beyond the substitution while editing.
@@ -0,0 +1,47 @@
# Platform-neutral README ordering — Phase C design

## Background

Phases A and B (see `2026-05-05-platform-neutral-prose-design.md` and `2026-05-05-platform-neutral-config-refs-design.md`) already neutralized generic Claude prose and config-file references in the README. The remaining platform-leaning signal is layout: the README's two platform listings put Claude Code first and aren't strictly alphabetical elsewhere.

This phase fixes the ordering. No prose changes.

## In scope

1. **Quickstart platform list** (`README.md:7`) — the inline link list of supported harnesses
2. **Installation section ordering** (`README.md:35–152`) — the per-harness install sub-sections

## Out of scope

- Prose, marketplace names, plugin IDs, URLs — all factually correct as-is.
- Visual weight of the Claude Code section (which has two sub-sections — official Anthropic marketplace and Superpowers marketplace). Both are real install paths; collapsing them would hide accurate info.
- Section headings and content within each install block — only the ordering of the blocks changes.

## Substitution

Both listings reorder to strict alphabetical:

| Old order | New order |
|-----------|-----------|
| Claude Code | Claude Code |
| Codex CLI | Codex App |
| Codex App | Codex CLI |
| Factory Droid | Cursor |
| Gemini CLI | Factory Droid |
| OpenCode | Gemini CLI |
| Cursor | GitHub Copilot CLI |
| GitHub Copilot CLI | OpenCode |

Three moves: Codex App swaps with Codex CLI; Cursor moves up three slots; OpenCode drops to last, which lifts GitHub Copilot CLI up one.

Claude Code remains first by alphabetical chance (`Cl…` precedes `Co…`).
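As a quick illustrative check (not shipped with the repo), the new ordering from the table above is strictly alphabetical:

```javascript
// Illustrative check (not part of the repo): the reordered README listing
// is strictly alphabetical under plain lexicographic sort.
const newOrder = [
  "Claude Code",
  "Codex App",
  "Codex CLI",
  "Cursor",
  "Factory Droid",
  "Gemini CLI",
  "GitHub Copilot CLI",
  "OpenCode",
];
if (JSON.stringify([...newOrder].sort()) !== JSON.stringify(newOrder)) {
  throw new Error("README sections out of order");
}
```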
## Commit plan

One atomic commit covering both listings, since changing one without the other would create inconsistency between the quickstart and the installation section.

## Verification

- Quickstart anchors (`#claude-code`, `#codex-app`, etc.) still resolve to existing `### …` headings — no headings renamed.
- Each install sub-section's body is byte-identical pre/post; only positions changed.
- `git diff README.md` shows section moves only, no content edits.
@@ -0,0 +1,247 @@

# Lift drill into superpowers as `evals/` — design

## Background

Drill is a Python skill-compliance benchmark that lives in its own repo at `obra/drill`. It drives real tmux sessions, runs an LLM actor as a simulated user, runs an LLM verifier on the resulting transcript, and reports pass/fail per scenario. It supports Claude Code, Codex, Gemini CLI, and (per recent commits) OpenCode and Copilot CLI.

Drill is already the *de facto* eval harness for superpowers. The PRI-1397 commit series in the drill repo lifted ~22 superpowers bash tests into drill scenarios, and the most recent superpowers commit (`a2292c5`) explicitly removed a redundant bash test with the message *"replaced by drill behavioral coverage"*. Migration momentum exists; this spec completes it.

This work moves drill into superpowers under `evals/`, deletes the redundant bash tests after per-file verification of drill scenario coverage, and updates docs so contributors land on the new structure.

## Goals

1. `evals/` is the canonical eval harness in superpowers — full drill source, scenarios, fixtures, prompts, backend configs, and tests.
2. Bash tests in `superpowers/tests/` that have been individually verified as 100% covered by drill scenarios are deleted; the rest are preserved.
3. The split between `tests/` (plugin infrastructure: bash + node + python integration tests) and `evals/` (LLM behavior with actor + verifier) is meaningful and documented.
4. Top-level docs (`README.md`, `CLAUDE.md`, `docs/testing.md`) point contributors at the right place.
5. The standalone `obra/drill` repo continues to exist (this PR does not touch it) and gets archived as a separate manual step after this PR merges.

## Non-goals

- **CI integration.** Manual-only here. The natural follow-up is "tiered": fast subset on every PR, full sweep nightly + on-demand. That requires API budget decisions, GitHub Actions secrets, and a runner image with `tmux` + `node` + `python` + `claude` / `codex` / `gemini` CLIs installed. Out of scope.
- **Scenario co-location with skills.** Scenarios stay centralized at `evals/scenarios/`. If we later decide each skill should own its scenarios, that's a path-find-and-rename operation; the YAML format does not change.
- **Renaming the internal Python package** (`drill` → `evals`). The directory is `evals/` (user-facing); the Python package keeps its `drill` name to keep the diff small. A short note in `evals/README.md` explains.
- **Drill repo archival.** This PR does not touch `obra/drill`. After merge, the drill repo is archived manually (read-only on GitHub, README pointer to `obra/superpowers/evals/`).
- **Lifting `tests/claude-code/analyze-token-usage.py` into `evals/bin/`.** Useful utility, not test code. Can move later; not required by this PR.

## Branching

Branch off `dev` as `f/evals-lift`. This work is independent of the open `f/cross-platform` PR — no shared file changes besides possibly `README.md`, which is small enough to resolve at merge time if it conflicts.

## Architecture after the move

```
superpowers/
  evals/                      ← NEW (full drill copy)
    pyproject.toml            (Python 3.11, uv-managed)
    uv.lock
    .gitignore                (drill's own; results/, .venv/, .env)
    README.md                 (was drill's README; install instructions updated)
    CLAUDE.md                 (was drill's CLAUDE.md; paths updated)
    docs/
      design.md               (drill's design — preserved verbatim, cross-linked from this spec)
      manual-testing.md
      pressure-and-red-testing.md
    drill/                    (Python package; name kept; cli, engine, actor, verifier, etc.)
    backends/                 (claude-*.yaml, codex.yaml, gemini.yaml)
    scenarios/                (32+ YAML scenarios)
    setup_helpers/            (15 Python helpers; create_base_repo, sdd_*, spec_*, worktree, etc.)
    fixtures/                 (template-repo, sdd-go-fractals, sdd-svelte-todo)
    prompts/                  (actor.md, verifier.md)
    bin/                      (assertion helper scripts: tool-called, tool-count, etc.)
    tests/                    (drill's own pytest suite)

  tests/                      ← bash tests preserved by default
    brainstorm-server/        ← KEEP (node tests for brainstorm-server JS code)
    opencode/                 ← KEEP (plugin loading tests)
    codex-plugin-sync/        ← KEEP (sync verification)
    claude-code/              ← MOSTLY KEEP — see deletion gate
    explicit-skill-requests/  ← KEEP unless verified replaced
    skill-triggering/         ← KEEP unless verified replaced
    subagent-driven-dev/      ← KEEP unless verified replaced

  docs/
    testing.md                ← UPDATED (split into "Plugin tests" + "Skill behavior evals")
  superpowers/
    specs/
      2026-05-06-lift-drill-into-evals-design.md  ← THIS SPEC

  README.md                   ← small Contributing-section pointer to evals/
  CLAUDE.md                   ← one-line "Eval harness lives at evals/" pointer
```

The `tests/` and `evals/` directories serve clearly distinct roles after this PR:

- **`tests/`** — does the plugin's non-LLM code work? Unit and integration tests for the brainstorm-server JS code, OpenCode plugin loading, codex-plugin-sync sync verification. Bash + node + python.
- **`evals/`** — do agents behave correctly on real LLM sessions? Drill scenarios with actor + verifier. Python-only, runs real tmux sessions.

## Deletion gate (per bash test)

A bash test is deleted *only if* a drill scenario verifiably covers every assertion it makes. The implementation plan documents this verification per file: read the bash test, list its checks, find the drill scenario, confirm each check has a matching `verify.assertions` or `verify.criteria` entry. If even one check is missing, the options are to extend the drill scenario or to keep the bash test. The default is to keep it.

**Tentative coverage map** (commit-message-based; needs per-file verification before any deletion):

| Bash test | Claimed drill replacement | Coverage status |
|-----------|---------------------------|-----------------|
| `tests/skill-triggering/prompts/*` (6 prompt files) | `triggering-*.yaml` (6 scenarios) | candidate — verify per-prompt before deleting |
| `tests/skill-triggering/run-test.sh`, `run-all.sh` | n/a (runners, not tests) | **keep** — runner scripts |
| `tests/explicit-skill-requests/prompts/please-use-brainstorming.txt` | needs verification — drill has no obvious counterpart yet | likely **keep** unless drill scenario added |
| `tests/explicit-skill-requests/prompts/use-systematic-debugging.txt` | needs verification — drill has no obvious counterpart | likely **keep** unless drill scenario added |
| `tests/explicit-skill-requests/run-claude-describes-sdd.sh` | partially → `mid-conversation-skill-invocation.yaml` | candidate — verify per-script |
| `tests/explicit-skill-requests/run-haiku-test.sh` | no drill scenario covers Haiku-specific behavior | **keep** |
| `tests/explicit-skill-requests/run-multiturn-test.sh`, `run-extended-multiturn-test.sh` | no drill scenario covers multi-turn build-up | **keep** unless drill scenarios added |
| `tests/explicit-skill-requests/run-test.sh`, `run-all.sh` | n/a (runners) | **keep** |
| `tests/subagent-driven-dev/go-fractals/`, `tests/subagent-driven-dev/svelte-todo/` | `sdd-go-fractals.yaml`, `sdd-svelte-todo.yaml` | candidate — verify before deleting (these include real assertions about test suites passing) |
| `tests/claude-code/test-document-review-system.sh` | `spec-reviewer-catches-planted-flaws.yaml` | candidate — verify before deleting |
| `tests/claude-code/test-requesting-code-review.sh` | `code-review-catches-planted-bugs.yaml` | candidate — verify before deleting |
| `tests/claude-code/test-subagent-driven-development-integration.sh` | `sdd-rejects-extra-features.yaml` (YAGNI subset) | **partial** — bash test also asserts ≥3 commits / `npm test` passes / runs `analyze-token-usage.py`. Drill scenario asserts forbidden-exports + reviewer-as-gate. Mostly disjoint — almost certainly **keep + extend drill scenario**. |
| `tests/claude-code/test-subagent-driven-development.sh` | meta/documentation test (asks agent to *describe* SDD); no drill scenario covers description tests | **keep** unless drill scenario added |
| `tests/claude-code/test-worktree-native-preference.sh` | `worktree-creation-under-pressure.yaml` | candidate — verify before deleting |
| `tests/claude-code/test-helpers.sh`, `run-skill-tests.sh`, `analyze-token-usage.py` | n/a (utilities, not tests) | **keep** — libraries/tools |

## Verification protocol (subagent-gated)

Every change in the implementation plan gets cross-checked by an independent subagent before commit.

| Change category | Subagent verification |
|----------------|----------------------|
| Each bash-test deletion | Dispatch a subagent with: (a) the bash test file content, (b) the candidate drill scenario YAML, (c) the prompt: *"List every assertion the bash test makes. List every verify entry in the drill scenario. For each bash assertion, find a matching drill check or report it as unmatched. Output a per-assertion table."* The subagent's output is the gate — only delete if every bash assertion has a match. |
| Initial `evals/` copy | Subagent verifies: (a) the drill SHA being copied is recorded in the lift commit message so provenance is auditable; (b) a **per-file SHA-256 checksum** matches the drill repo for every file (not just file count); (c) excluded paths (`.git/`, `.venv/`, `results/`, `.env`, `__pycache__/`, `*.egg-info/`, any `.private-journal/`) are absent from `evals/`; (d) all backend YAMLs reference paths that exist post-move; (e) `pyproject.toml`, `uv.lock`, `.gitignore` are intact. |
| Drill's own pytest suite | Subagent runs `cd evals && uv run pytest` after the path-default change. Drill ships its own pytest suite at `evals/tests/`, including `test_backend.py`, which exercises `SUPERPOWERS_ROOT` env-var behavior — these tests must be updated to match the helper and continue to pass. |
| Reference scrubbing after deletion | Subagent greps the entire superpowers tree (excluding `node_modules/`, `.venv/`, and `evals/`) for references to deleted bash test paths. Search targets: `docs/`, `docs/superpowers/plans/`, `RELEASE-NOTES.md`, `CLAUDE.md`, `GEMINI.md`, `AGENTS.md`, `README.md`, `.github/`, `scripts/`, `.opencode/INSTALL.md`, `.codex-plugin/INSTALL.md`, `lefthook.yml`. Any hit is either updated or surfaces a missed dependency. |
| Path defaults change (`SUPERPOWERS_ROOT` default) | Subagent runs at least one cheap drill scenario after the path changes (e.g., `triggering-test-driven-development`) and confirms it still passes. Real validation, not just code review. |
| Final pre-PR adversarial review | Two subagents in parallel, with the "5 points to whoever finds the most legitimate issues" framing — the same protocol used on the cross-platform PR. Verify both source code and behavior. |

Each subagent task gets its own bullet in the implementation plan with explicit inputs and pass criteria. The subagent's output is summarized in the relevant commit message ("Subagent verification: …") so the trail is auditable.

## Concrete path/config edits

**Verified prior to writing this spec.** `drill/cli.py` defines `PROJECT_ROOT = Path(__file__).parent.parent`. After the move, `cli.py` lives at `evals/drill/cli.py`, so `PROJECT_ROOT` resolves to `evals/` and `PROJECT_ROOT.parent` resolves to the superpowers repo root. That's the value `SUPERPOWERS_ROOT` should take by default.

**YAML substitution audit.** Only the four `claude*.yaml` backend configs interpolate `${SUPERPOWERS_ROOT}` into `args` (for the `--plugin-dir` flag); `codex.yaml` and `gemini.yaml` only list `SUPERPOWERS_ROOT` in `required_env` (consumed by the `os.environ["SUPERPOWERS_ROOT"]` lookups at `engine.py:233` / `setup.py:25` in pre/post-run hooks). The helper's `os.environ` mutation covers both code paths.

| File | Current | After |
|------|---------|-------|
| `drill/cli.py` | `load_dotenv(PROJECT_ROOT / ".env")` at module import; nothing about `SUPERPOWERS_ROOT` | After `load_dotenv`, call a new helper `_set_superpowers_root_default()` that sets `os.environ["SUPERPOWERS_ROOT"]` to `str(PROJECT_ROOT.parent)` if and only if it is not already set. Order: `load_dotenv` → set default → click group definitions. |
| `drill/engine.py:233`, `drill/setup.py:25` | Direct `os.environ["SUPERPOWERS_ROOT"]` access (KeyError if unset) | Unchanged. The CLI startup hook guarantees the env var is set by the time the engine/setup execute. |
| `backends/claude*.yaml` (5 files) | `${SUPERPOWERS_ROOT}` substituted in `args` for `--plugin-dir` | Unchanged. YAML substitution reads `os.environ` at backend-load time, which is after CLI startup. |
| `backends/codex.yaml`, `backends/gemini.yaml` | `SUPERPOWERS_ROOT` in `required_env` only | Drop from `required_env` (the helper supplies it). `claude*.yaml` keep `required_env` for backward compat (the env var works as an override). |
| `evals/tests/test_backend.py` | Tests assert `SUPERPOWERS_ROOT` is in `required_env` lists, plus path-resolution tests | Update tests to match the new contract: helper-supplied default, env override still works, `required_env` no longer required for codex/gemini. |
| `evals/README.md` | "export SUPERPOWERS_ROOT=/path/to/superpowers" | Drop the export line; note that the env var auto-defaults to the parent of `evals/`; mention that the only required setup is `ANTHROPIC_API_KEY` (or `OPENAI_API_KEY` / Gemini auth). |
| `evals/CLAUDE.md` | Same | Same |
| `evals/.gitignore` | drill's existing patterns (`results/`, `.venv/`, `__pycache__/`, `.env`, `*.pyc`, `*.egg-info/`, `dist/`, `build/`, `.claude/`) | Copied verbatim. Patterns are relative to the file's location, so they apply correctly under `evals/`. |
| `evals/lefthook.yml` | drill ships `lefthook.yml` defining `pre-commit: uv run ruff check && uv run ty check` | Move to `evals/lefthook.yml`. Either (a) install lefthook at the superpowers root and have it federate to `evals/lefthook.yml`, or (b) document that contributors run `cd evals && lefthook run pre-commit` manually. **Decision in implementation: option (b) for simplicity** — superpowers' top-level workflow doesn't change. |

`.env` placement: keep `evals/.env` (gitignored). Contributors source it from there or set `ANTHROPIC_API_KEY` in their shell environment.

**Top-level superpowers files needing small additions:**

- `superpowers/.gitignore`: add `evals/results/`, `evals/.venv/`, `evals/.env` (belt-and-suspenders; `evals/.gitignore` already covers these locally).
- `superpowers/CLAUDE.md`: add a one-line pointer "Eval harness lives at `evals/` — see `evals/README.md`" so agents discover it.
- `superpowers/docs/testing.md`: split into "## Plugin tests" (existing `tests/` content, with the deleted-test references trimmed) and "## Skill behavior evals" (one-paragraph summary + pointer to `evals/`).
- `superpowers/README.md`: add a single line in the Contributing section pointing at `evals/` for skill-behavior testing.

## Migration ordering

Each step is a separate commit (or small group of commits). Step 2 is the biggest single commit (the verbatim drill copy); subsequent steps are small and atomic.

```
1. Branch off `dev` (f/evals-lift)

2. Copy drill repo into evals/ (single commit, easy to revert)
   ├─ Record drill SHA at copy time → commit message
   ├─ Use `rsync -a --exclude=.git --exclude=.venv --exclude=results
   │       --exclude=.env --exclude=__pycache__ --exclude='*.egg-info'
   │       --exclude=.private-journal /path/to/drill/ evals/`
   │  (rsync chosen over `cp -r` for explicit excludes; verify that
   │  `find evals -name '.git' -type d` returns nothing)
   ├─ Subagent gate: per-file SHA-256 checksum matches drill repo for every
   │  non-excluded file; excluded paths absent from evals/
   └─ Smoke check: `cd evals && uv sync` succeeds (proves install only;
      not a behavioral test)

3. Update path defaults
   ├─ Add _set_superpowers_root_default() helper to drill/cli.py
   ├─ Wire it after load_dotenv, before click group definition
   ├─ Update evals/README.md and evals/CLAUDE.md (drop SUPERPOWERS_ROOT install step)
   ├─ Drop SUPERPOWERS_ROOT from required_env in codex.yaml/gemini.yaml
   │  (keep in claude*.yaml as override)
   └─ Update evals/tests/test_backend.py to match new contract

4. Validate from new location (TWO checks)
   ├─ Run drill's own pytest: `cd evals && uv run pytest` — must pass
   └─ Run cheap drill scenario: `cd evals && uv run drill run
      triggering-test-driven-development -b claude` — must pass.
      Real behavioral validation, not just code review.

5. Bash test deletion phase — per-file with subagent gate
   For each file in the candidate-deletion list:
   a. Subagent compares bash test assertions vs drill scenario verify block
   b. Pass criterion: every bash assertion has a matching drill check
   c. If pass → delete the bash test file (one commit per file or per
      coherent group)
   d. If fail → either extend drill scenario (separate commit + verify) or
      keep the bash test (no commit)

6. Stale-reference scrub
   ├─ Subagent greps the superpowers tree (excluding node_modules/, .venv/,
   │  evals/) for deleted file paths
   ├─ Search targets: docs/, docs/superpowers/plans/, RELEASE-NOTES.md,
   │  CLAUDE.md, GEMINI.md, AGENTS.md, README.md, .github/, scripts/,
   │  .opencode/INSTALL.md, .codex-plugin/INSTALL.md, lefthook.yml
   ├─ Update active references (e.g., docs/testing.md, README.md install)
   └─ Historical references in docs/superpowers/plans/*.md and
      RELEASE-NOTES.md are PRESERVED with a brief annotation
      ("(test removed; behavior covered by drill scenario X)") rather
      than rewritten — these are dated artifacts, not living docs.

7. Top-level docs
   ├─ docs/testing.md split
   ├─ CLAUDE.md pointer
   └─ README.md Contributing section

8. Re-run smoke checks (regression gate)
   ├─ `cd evals && uv run pytest`
   └─ `cd evals && uv run drill run triggering-test-driven-development -b claude`

9. Final adversarial review
   └─ Two parallel subagents, full diff, "5 points to whoever finds the
      most legitimate issues" framing. Address findings before push.

10. Push branch + open PR against dev
    └─ PR description includes: drill SHA pinned at copy, archival action
       item ("after merge: archive obra/drill, add README pointer to
       obra/superpowers/evals/"), per-deleted-file coverage receipts.
```
## Verification (post-implementation)

The implementation plan must show:

- All non-excluded drill source files present at `evals/` after step 2 (subagent **per-file SHA-256 checksum diff** vs `obra/drill@<recorded-sha>`).
- Excluded paths (`.git/`, `.venv/`, `results/`, `.env`, `__pycache__/`, `*.egg-info/`, `.private-journal/`) absent from `evals/`.
- The step-2 commit message records the drill source SHA.
- `cd evals && uv sync` succeeds without `SUPERPOWERS_ROOT` set.
- `cd evals && uv run pytest` passes (drill's own pytest suite).
- `cd evals && uv run drill list` returns the same scenario count as the standalone drill repo at the recorded SHA.
- `cd evals && uv run drill run triggering-test-driven-development -b claude` passes (proves path defaults work end-to-end).
- For each deleted bash test: a subagent verification table in the commit message showing every assertion mapped to a drill check.
- Grep for deleted file paths returns zero hits across living superpowers docs (post step 6); historical refs in `docs/superpowers/plans/*.md` and `RELEASE-NOTES.md` are annotated, not rewritten.
- `docs/testing.md` has both "Plugin tests" and "Skill behavior evals" sections.
- The drill repo's history is untouched; `obra/drill` is unaffected by this PR.
- The PR description names the action item to archive `obra/drill` after merge.
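
The stale-reference grep can likewise be scripted; a sketch where `scrub` is an illustrative name and the deleted paths are supplied by the deletion phase:

```shell
#!/usr/bin/env bash
set -u

# scrub ROOT PATH...  — print every reference to each deleted PATH under
# ROOT, skipping vendored and eval dirs; returns 0 only when the tree is clean.
scrub() {
  local root="$1"; shift
  local dirty=0
  for p in "$@"; do
    if grep -rn --exclude-dir=node_modules --exclude-dir=.venv \
            --exclude-dir=evals --exclude-dir=.git -- "$p" "$root"; then
      dirty=1
    fi
  done
  return "$dirty"
}
```

Each printed hit is either a living doc to update or a missed dependency on the deleted test.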
## Open questions

None. All clarifying decisions have been made:

| Question | Decision |
|----------|----------|
| Where does drill live in superpowers? | `evals/` (renamed from drill); standalone repo archived as a separate step |
| Fate of redundant bash tests? | Delete per-file with subagent verification of coverage; default keep |
| Scenarios layout? | Centralized at `evals/scenarios/` |
| Python toolchain placement? | Self-contained at `evals/` |
| CI integration? | Manual-only this PR; documented future path |
| Migration mechanics? | Plain copy; drill repo's history preserved in the archived repo, not in-tree |
| Internal Python package name? | Keep as `drill` (directory is `evals/`) |
| Branching strategy? | Independent off `dev` (not stacked on `f/cross-platform`) |

docs/testing.md
@@ -1,303 +1,34 @@

# Testing Superpowers Skills
# Testing Superpowers

This document describes how to test Superpowers skills, particularly the integration tests for complex skills like `subagent-driven-development`.
Superpowers has two distinct kinds of tests, each in its own directory:

## Overview

- **`tests/`** — does the plugin's non-LLM code work? Bash + node + python integration tests for brainstorm-server JS, OpenCode plugin loading, codex-plugin sync, and analysis utilities.
- **`evals/`** — do agents behave correctly on real LLM sessions? Python harness driving real tmux sessions of Claude Code / Codex / Gemini CLI, with an LLM actor and verifier judging skill compliance.

Testing skills that involve subagents, workflows, and complex interactions requires running actual Claude Code sessions in headless mode and verifying their behavior through session transcripts.

## Plugin tests

## Test Structure

Live in `tests/`. Currently:

```
tests/
├── claude-code/
│   ├── test-helpers.sh                                  # Shared test utilities
│   ├── test-subagent-driven-development-integration.sh
│   ├── analyze-token-usage.py                           # Token analysis tool
│   └── run-skill-tests.sh                               # Test runner (if exists)
```

- `tests/brainstorm-server/` — node test suite for the brainstorm server JS code.
- `tests/opencode/` — bash tests for OpenCode plugin loading, bootstrap caching, and tool registration.
- `tests/codex-plugin-sync/` — bash sync verification.
- `tests/claude-code/test-helpers.sh`, `analyze-token-usage.py` — utilities used by remaining bash tests.
- `tests/claude-code/test-subagent-driven-development.sh` — agent-can-describe-SDD test (no drill counterpart; tests description-recall, not behavior).
- `tests/claude-code/test-subagent-driven-development-integration.sh` — extended SDD integration with token analysis (drill covers the YAGNI subset; bash adds commit-count, Claude Code task-tracking, and token telemetry assertions).
- `tests/claude-code/test-worktree-native-preference.sh` — RED-GREEN-REFACTOR validation for the worktree skill (drill covers the PRESSURE phase; bash also covers RED/GREEN baselines).
- `tests/explicit-skill-requests/` — Haiku-specific, multi-turn, and skill-name-prompted tests not covered by drill.

## Running Tests

Run plugin tests via the relevant directory's `run-*.sh` or `npm test`.

### Integration Tests

## Skill behavior evals

Integration tests execute real Claude Code sessions with actual skills:
Live in `evals/`. Drill is the harness; scenarios live at `evals/scenarios/*.yaml`. See `evals/README.md` for setup. Quick start:

```bash
# Run the subagent-driven-development integration test
cd tests/claude-code
./test-subagent-driven-development-integration.sh
cd evals
uv sync --extra dev
export ANTHROPIC_API_KEY=sk-...
uv run drill run triggering-test-driven-development -b claude
```

**Note:** Integration tests can take 10-30 minutes as they execute real implementation plans with multiple subagents.

### Requirements

- Must run from the **superpowers plugin directory** (not from temp directories)
- Claude Code must be installed and available as the `claude` command
- Local dev marketplace must be enabled: `"superpowers@superpowers-dev": true` in `~/.claude/settings.json`

## Integration Test: subagent-driven-development

### What It Tests

The integration test verifies the `subagent-driven-development` skill correctly:

1. **Plan Loading**: Reads the plan once at the beginning
2. **Full Task Text**: Provides complete task descriptions to subagents (doesn't make them read files)
3. **Self-Review**: Ensures subagents perform self-review before reporting
4. **Review Order**: Runs spec compliance review before code quality review
5. **Review Loops**: Uses review loops when issues are found
6. **Independent Verification**: Spec reviewer reads code independently, doesn't trust implementer reports

### How It Works

1. **Setup**: Creates a temporary Node.js project with a minimal implementation plan
2. **Execution**: Runs Claude Code in headless mode with the skill
3. **Verification**: Parses the session transcript (`.jsonl` file) to verify:
   - Skill tool was invoked
   - Subagents were dispatched (Task tool)
   - TodoWrite was used for tracking
   - Implementation files were created
   - Tests pass
   - Git commits show proper workflow
4. **Token Analysis**: Shows token usage breakdown by subagent

### Test Output

```
========================================
Integration Test: subagent-driven-development
========================================

Test project: /tmp/tmp.xyz123

=== Verification Tests ===

Test 1: Skill tool invoked...
[PASS] subagent-driven-development skill was invoked

Test 2: Subagents dispatched...
[PASS] 7 subagents dispatched

Test 3: Task tracking...
[PASS] TodoWrite used 5 time(s)

Test 6: Implementation verification...
[PASS] src/math.js created
[PASS] add function exists
[PASS] multiply function exists
[PASS] test/math.test.js created
[PASS] Tests pass

Test 7: Git commit history...
[PASS] Multiple commits created (3 total)

Test 8: No extra features added...
[PASS] No extra features added

=========================================
Token Usage Analysis
=========================================

Usage Breakdown:
----------------------------------------------------------------------------------------------------
Agent      Description                                     Msgs   Input   Output       Cache    Cost
----------------------------------------------------------------------------------------------------
main       Main session (coordinator)                        34      27    3,996   1,213,703  $ 4.09
3380c209   implementing Task 1: Create Add Function           1       2      787      24,989  $ 0.09
34b00fde   implementing Task 2: Create Multiply Function      1       4      644      25,114  $ 0.09
3801a732   reviewing whether an implementation matches...     1       5      703      25,742  $ 0.09
4c142934   doing a final code review...                       1       6      854      25,319  $ 0.09
5f017a42   a code reviewer. Review Task 2...                  1       6      504      22,949  $ 0.08
a6b7fbe4   a code reviewer. Review Task 1...                  1       6      515      22,534  $ 0.08
f15837c0   reviewing whether an implementation matches...     1       6      416      22,485  $ 0.07
----------------------------------------------------------------------------------------------------

TOTALS:
  Total messages:           41
  Input tokens:             62
  Output tokens:            8,419
  Cache creation tokens:    132,742
  Cache read tokens:        1,382,835

  Total input (incl cache): 1,515,639
  Total tokens:             1,524,058

  Estimated cost: $4.67
  (at $3/$15 per M tokens for input/output)

========================================
Test Summary
========================================

STATUS: PASSED
```


## Token Analysis Tool

### Usage

Analyze token usage from any Claude Code session:

```bash
python3 tests/claude-code/analyze-token-usage.py ~/.claude/projects/<project-dir>/<session-id>.jsonl
```

### Finding Session Files

Session transcripts are stored in `~/.claude/projects/` with the working directory path encoded:

```bash
# Example for /Users/yourname/Documents/GitHub/superpowers/superpowers
SESSION_DIR="$HOME/.claude/projects/-Users-yourname-Documents-GitHub-superpowers-superpowers"

# Find recent sessions
ls -lt "$SESSION_DIR"/*.jsonl | head -5
```

### What It Shows

- **Main session usage**: Token usage by the coordinator (you or the main Claude instance)
- **Per-subagent breakdown**: Each Task invocation with:
  - Agent ID
  - Description (extracted from the prompt)
  - Message count
  - Input/output tokens
  - Cache usage
  - Estimated cost
- **Totals**: Overall token usage and cost estimate

### Understanding the Output

- **High cache reads**: Good - means prompt caching is working
- **High input tokens on main**: Expected - the coordinator has full context
- **Similar costs per subagent**: Expected - each gets similar task complexity
- **Cost per task**: Typical range is $0.05-$0.15 per subagent depending on the task

## Troubleshooting

### Skills Not Loading

**Problem**: Skill not found when running headless tests

**Solutions**:
1. Ensure you're running FROM the superpowers directory: `cd /path/to/superpowers && tests/...`
2. Check that `~/.claude/settings.json` has `"superpowers@superpowers-dev": true` in `enabledPlugins`
3. Verify the skill exists in the `skills/` directory

### Permission Errors

**Problem**: Claude blocked from writing files or accessing directories

**Solutions**:
1. Use the `--permission-mode bypassPermissions` flag
2. Use `--add-dir /path/to/temp/dir` to grant access to test directories
3. Check file permissions on test directories

### Test Timeouts

**Problem**: Test takes too long and times out

**Solutions**:
1. Increase the timeout: `timeout 1800 claude ...` (30 minutes)
2. Check for infinite loops in skill logic
3. Review subagent task complexity

### Session File Not Found

**Problem**: Can't find the session transcript after a test run

**Solutions**:
1. Check the correct project directory in `~/.claude/projects/`
2. Use `find ~/.claude/projects -name "*.jsonl" -mmin -60` to find recent sessions
3. Verify the test actually ran (check for errors in test output)

## Writing New Integration Tests

### Template

```bash
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"

# Create test project
TEST_PROJECT=$(create_test_project)
trap "cleanup_test_project $TEST_PROJECT" EXIT

# Set up test files...
cd "$TEST_PROJECT"

# Run Claude with skill
PROMPT="Your test prompt here"
cd "$SCRIPT_DIR/../.." && timeout 1800 claude -p "$PROMPT" \
  --allowed-tools=all \
  --add-dir "$TEST_PROJECT" \
  --permission-mode bypassPermissions \
  2>&1 | tee output.txt

# Find and analyze session. Project dirs encode the cwd with / replaced
# by - (leading dash retained), so use the normalized pwd after the cd above.
WORKING_DIR_ESCAPED=$(pwd | sed 's/\//-/g')
SESSION_DIR="$HOME/.claude/projects/$WORKING_DIR_ESCAPED"
SESSION_FILE=$(find "$SESSION_DIR" -name "*.jsonl" -type f -mmin -60 | sort -r | head -1)

# Verify behavior by parsing session transcript
if grep -q '"name":"Skill".*"skill":"your-skill-name"' "$SESSION_FILE"; then
  echo "[PASS] Skill was invoked"
fi

# Show token analysis
python3 "$SCRIPT_DIR/analyze-token-usage.py" "$SESSION_FILE"
```

### Best Practices

1. **Always cleanup**: Use trap to cleanup temp directories
2. **Parse transcripts**: Don't grep user-facing output - parse the `.jsonl` session file
3. **Grant permissions**: Use `--permission-mode bypassPermissions` and `--add-dir`
4. **Run from plugin dir**: Skills only load when running from the superpowers directory
5. **Show token usage**: Always include token analysis for cost visibility
6. **Test real behavior**: Verify actual files created, tests passing, commits made
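The "parse transcripts" practice above can be sketched in Python. This is a minimal checker, not the project's actual harness: it assumes Skill invocations appear as `tool_use` content blocks with `name` set to `Skill` inside assistant records (the grep pattern in the template suggests this shape, but field names may vary across Claude Code versions), and `skill_invoked` is a hypothetical helper name.

```python
import json
import sys

def skill_invoked(session_file, skill_name):
    """Return True if the transcript shows a Skill tool call naming skill_name.

    Assumed layout (see Session Transcript Format): assistant records carry
    message.content blocks; tool calls are blocks with type "tool_use" and
    name "Skill", with the skill name somewhere in the "input" object.
    """
    with open(session_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial/corrupt lines rather than failing the check
            if record.get("type") != "assistant":
                continue
            for block in record.get("message", {}).get("content", []):
                if (isinstance(block, dict)
                        and block.get("type") == "tool_use"
                        and block.get("name") == "Skill"
                        and skill_name in json.dumps(block.get("input", {}))):
                    return True
    return False

if __name__ == "__main__" and len(sys.argv) >= 3:
    ok = skill_invoked(sys.argv[1], sys.argv[2])
    print("[PASS] Skill was invoked" if ok else "[FAIL] Skill was not invoked")
    sys.exit(0 if ok else 1)
```

Unlike grepping the user-facing output, this survives reformatting of the CLI's progress display, because it reads the structured transcript directly.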
## Session Transcript Format

Session transcripts are JSONL (JSON Lines) files where each line is a JSON object representing a message or tool result.

### Key Fields

```json
{
  "type": "assistant",
  "message": {
    "content": [...],
    "usage": {
      "input_tokens": 27,
      "output_tokens": 3996,
      "cache_read_input_tokens": 1213703
    }
  }
}
```

### Tool Results

```json
{
  "type": "user",
  "toolUseResult": {
    "agentId": "3380c209",
    "usage": {
      "input_tokens": 2,
      "output_tokens": 787,
      "cache_read_input_tokens": 24989
    },
    "prompt": "You are implementing Task 1...",
    "content": [{"type": "text", "text": "..."}]
  }
}
```

The `agentId` field links to subagent sessions, and the `usage` field contains token usage for that specific subagent invocation.
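Those two fields are enough to total per-subagent costs. A minimal sketch, assuming only the fields shown above (`toolUseResult.agentId` and `toolUseResult.usage`); `subagent_token_usage` is an illustrative helper, not part of `analyze-token-usage.py`.

```python
import json
from collections import defaultdict

def subagent_token_usage(session_file):
    """Sum per-subagent token usage from a session transcript.

    Assumed layout: records may carry a "toolUseResult" object with an
    "agentId" string and a "usage" block of integer counters.
    """
    totals = defaultdict(lambda: {"input_tokens": 0,
                                  "output_tokens": 0,
                                  "cache_read_input_tokens": 0})
    with open(session_file) as f:
        for line in f:
            if not line.strip():
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # tolerate partial lines from an interrupted run
            result = record.get("toolUseResult")
            if not isinstance(result, dict):
                continue
            agent_id = result.get("agentId")
            usage = result.get("usage", {})
            if agent_id and usage:
                for field in totals[agent_id]:
                    totals[agent_id][field] += usage.get(field, 0)
    return dict(totals)
```

Keying on `agentId` means repeated tool results from the same subagent accumulate into one row, which matches how the token-analysis report groups costs.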
Drill scenarios are slow (3-30+ minutes each) and run real LLM sessions. They are not part of CI today; the natural follow-up is a tiered model (fast subset on PR, full sweep nightly + on-demand).
1
evals
Submodule
Submodule evals added at f7ac1941d5
@@ -69,6 +69,7 @@ EXCLUDES=(
  # Directories not shipped by canonical Codex plugins
  "/commands/"
  "/docs/"
  "/evals/"
  "/hooks/"
  "/lib/"
  "/scripts/"
@@ -13,7 +13,7 @@
 * - Scrollable main content area
 * - CSS helpers for common UI patterns
 *
 * Content is injected via placeholder comment in #claude-content.
 * Content is injected via placeholder comment in #frame-content.
 */

* { box-sizing: border-box; margin: 0; padding: 0; }
@@ -77,7 +77,7 @@
.header .status::before { content: ''; width: 6px; height: 6px; background: var(--success); border-radius: 50%; }

.main { flex: 1; overflow-y: auto; }
#claude-content { padding: 2rem; min-height: 100%; }
#frame-content { padding: 2rem; min-height: 100%; }

.indicator-bar {
  background: var(--bg-secondary);
@@ -201,7 +201,7 @@
</div>

<div class="main">
  <div id="claude-content">
  <div id="frame-content">
    <!-- CONTENT -->
  </div>
</div>
@@ -7,7 +7,7 @@ Use this template when dispatching a spec document reviewer subagent.
**Dispatch after:** Spec document is written to docs/superpowers/specs/

```
Task tool (general-purpose):
Subagent (general-purpose):
  description: "Review spec document"
  prompt: |
    You are a spec document reviewer. Verify this spec is complete and ready for planning.
@@ -49,20 +49,13 @@ Save `screen_dir` and `state_dir` from the response. Tell user to open the URL.

**Launching the server by platform:**

**Claude Code (macOS / Linux):**
**Claude Code:**
```bash
# Default mode works — the script backgrounds the server itself
# Default mode works — the script backgrounds the server itself.
scripts/start-server.sh --project-dir /path/to/project
```

**Claude Code (Windows):**
```bash
# Windows auto-detects and uses foreground mode, which blocks the tool call.
# Use run_in_background: true on the Bash tool call so the server survives
# across conversation turns.
scripts/start-server.sh --project-dir /path/to/project
```
When calling this via the Bash tool, set `run_in_background: true`. Then read `$STATE_DIR/server-info` on the next turn to get the URL and port.
On Windows, the script auto-detects and switches to foreground mode (which blocks the tool call). Use `run_in_background: true` on the Bash tool call so the server survives across conversation turns, then read `$STATE_DIR/server-info` on the next turn to get the URL and port.

**Codex:**
```bash
@@ -78,6 +71,14 @@ scripts/start-server.sh --project-dir /path/to/project
scripts/start-server.sh --project-dir /path/to/project --foreground
```

**Copilot CLI:**
```bash
# Use --foreground and start the server via the bash tool with mode: "async"
# so the process survives across turns. Capture the returned shellId for
# read_bash / stop_bash if you need to interact with it later.
scripts/start-server.sh --project-dir /path/to/project --foreground
```

**Other environments:** The server must keep running in the background across conversation turns. If your environment reaps detached processes, use `--foreground` and launch the command with your platform's background execution mechanism.

If the URL is unreachable from your browser (common in remote/containerized setups), bind a non-loopback host:
@@ -97,7 +98,7 @@ Use `--url-host` to control what hostname is printed in the returned URL JSON.
- Before each write, check that `$STATE_DIR/server-info` exists. If it doesn't (or `$STATE_DIR/server-stopped` exists), the server has shut down — restart it with `start-server.sh` before continuing. The server auto-exits after 30 minutes of inactivity.
- Use semantic filenames: `platform.html`, `visual-style.html`, `layout.html`
- **Never reuse filenames** — each screen gets a fresh file
- Use Write tool — **never use cat/heredoc** (dumps noise into terminal)
- Use your file-creation tool — **never use cat/heredoc** (dumps noise into terminal)
- Server automatically serves the newest file

2. **Tell user what to expect and end your turn:**
@@ -65,14 +65,17 @@ Each agent gets:

### 3. Dispatch in Parallel

```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
Issue all three subagent dispatches in the same response — they run in parallel:

```text
Subagent (general-purpose): "Fix agent-tool-abort.test.ts failures"
Subagent (general-purpose): "Fix batch-completion-behavior.test.ts failures"
Subagent (general-purpose): "Fix tool-approval-race-conditions.test.ts failures"
# All three run concurrently.
```

Multiple dispatch calls in one response = parallel execution. One per response = sequential.

### 4. Review and Integrate

When agents return:
@@ -11,7 +11,7 @@ Load plan, review critically, execute all tasks, report when complete.

**Announce at start:** "I'm using the executing-plans skill to implement this plan."

**Note:** Tell your human partner that Superpowers works much better with access to subagents. The quality of its work will be significantly higher if run on a platform with subagent support (such as Claude Code or Codex). If subagents are available, use superpowers:subagent-driven-development instead of this skill.
**Note:** Tell your human partner that Superpowers works much better with access to subagents. The quality of its work will be significantly higher if run on a platform with subagent support (Claude Code, Codex CLI, Codex App, Copilot CLI, and Gemini CLI all qualify; see the per-platform tool refs in `../using-superpowers/references/`). If subagents are available, use superpowers:subagent-driven-development instead of this skill.

## The Process

@@ -19,7 +19,7 @@ Load plan, review critically, execute all tasks, report when complete.
1. Read plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: Raise them with your human partner before starting
4. If no concerns: Create TodoWrite and proceed
4. If no concerns: Create todos for the plan items and proceed

### Step 2: Execute Tasks
@@ -27,7 +27,7 @@ WHEN receiving code review feedback:
## Forbidden Responses

**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "You're absolutely right!" (explicit instruction-file violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)

@@ -126,7 +126,7 @@ Push back when:
- Reference working tests/code
- Involve your human partner if architectural

**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"
**If you're uncomfortable pushing back out loud:** Name that tension, then tell your partner about the issue you've seen. They'll appreciate your honesty.

## Acknowledging Correct Feedback
@@ -31,7 +31,7 @@ HEAD_SHA=$(git rev-parse HEAD)

**2. Dispatch code reviewer subagent:**

Use Task tool with `general-purpose` type, fill template at `code-reviewer.md`
Dispatch a `general-purpose` subagent, filling the template at [code-reviewer.md](code-reviewer.md)

**Placeholders:**
- `{DESCRIPTION}` - Brief summary of what you built
@@ -100,4 +100,4 @@ You: [Fix progress indicators]
- Show code/tests that prove it works
- Request clarification

See template at: requesting-code-review/code-reviewer.md
See template at: [code-reviewer.md](code-reviewer.md)
@@ -5,7 +5,7 @@ Use this template when dispatching a code reviewer subagent.
**Purpose:** Review completed work against requirements and code quality standards before it cascades into more work.

```
Task tool (general-purpose):
Subagent (general-purpose):
  description: "Review code changes"
  prompt: |
    You are a Senior Code Reviewer with expertise in software architecture,
@@ -57,15 +57,15 @@ digraph process {
"Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [shape=box];
"Code quality reviewer subagent approves?" [shape=diamond];
"Implementer subagent fixes quality issues" [shape=box];
"Mark task complete in TodoWrite" [shape=box];
"Mark task complete in todo list" [shape=box];
}

"Read plan, extract all tasks with full text, note context, create TodoWrite" [shape=box];
"Read plan, extract all tasks with full text, note context, create todos" [shape=box];
"More tasks remain?" [shape=diamond];
"Dispatch final code reviewer subagent for entire implementation" [shape=box];
"Use superpowers:finishing-a-development-branch" [shape=box style=filled fillcolor=lightgreen];

"Read plan, extract all tasks with full text, note context, create TodoWrite" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Read plan, extract all tasks with full text, note context, create todos" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Dispatch implementer subagent (./implementer-prompt.md)" -> "Implementer subagent asks questions?";
"Implementer subagent asks questions?" -> "Answer questions, provide context" [label="yes"];
"Answer questions, provide context" -> "Dispatch implementer subagent (./implementer-prompt.md)";
@@ -78,8 +78,8 @@ digraph process {
"Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" -> "Code quality reviewer subagent approves?";
"Code quality reviewer subagent approves?" -> "Implementer subagent fixes quality issues" [label="no"];
"Implementer subagent fixes quality issues" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="re-review"];
"Code quality reviewer subagent approves?" -> "Mark task complete in TodoWrite" [label="yes"];
"Mark task complete in TodoWrite" -> "More tasks remain?";
"Code quality reviewer subagent approves?" -> "Mark task complete in todo list" [label="yes"];
"Mark task complete in todo list" -> "More tasks remain?";
"More tasks remain?" -> "Dispatch implementer subagent (./implementer-prompt.md)" [label="yes"];
"More tasks remain?" -> "Dispatch final code reviewer subagent for entire implementation" [label="no"];
"Dispatch final code reviewer subagent for entire implementation" -> "Use superpowers:finishing-a-development-branch";
@@ -121,9 +121,9 @@ Implementer subagents report one of four statuses. Handle each appropriately:

## Prompt Templates

- `./implementer-prompt.md` - Dispatch implementer subagent
- `./spec-reviewer-prompt.md` - Dispatch spec compliance reviewer subagent
- `./code-quality-reviewer-prompt.md` - Dispatch code quality reviewer subagent
- [implementer-prompt.md](implementer-prompt.md) - Dispatch implementer subagent
- [spec-reviewer-prompt.md](spec-reviewer-prompt.md) - Dispatch spec compliance reviewer subagent
- [code-quality-reviewer-prompt.md](code-quality-reviewer-prompt.md) - Dispatch code quality reviewer subagent

## Example Workflow

@@ -132,7 +132,7 @@ You: I'm using Subagent-Driven Development to execute this plan.

[Read plan file once: docs/superpowers/plans/feature-plan.md]
[Extract all 5 tasks with full text and context]
[Create TodoWrite with all tasks]
[Create todos for all tasks]

Task 1: Hook installation script
@@ -7,8 +7,8 @@ Use this template when dispatching a code quality reviewer subagent.
**Only dispatch after spec compliance review passes.**

```
Task tool (general-purpose):
  Use template at requesting-code-review/code-reviewer.md
Subagent (general-purpose):
  Use template at ../requesting-code-review/code-reviewer.md

  DESCRIPTION: [task summary, from implementer's report]
  PLAN_OR_REQUIREMENTS: Task N from [plan-file]
@@ -3,7 +3,7 @@
Use this template when dispatching an implementer subagent.

```
Task tool (general-purpose):
Subagent (general-purpose):
  description: "Implement Task N: [task name]"
  prompt: |
    You are implementing Task N: [task name]
@@ -5,7 +5,7 @@ Use this template when dispatching a spec compliance reviewer subagent.
**Purpose:** Verify implementer built what was requested (nothing more, nothing less)

```
Task tool (general-purpose):
Subagent (general-purpose):
  description: "Review spec compliance for Task N"
  prompt: |
    You are reviewing whether an implementation matches its specification.
@@ -356,7 +356,7 @@ Never fix bugs without a test.

## Testing Anti-Patterns

When adding mocks or test utilities, read @testing-anti-patterns.md to avoid common pitfalls:
When adding mocks or test utilities, read [testing-anti-patterns.md](testing-anti-patterns.md) to avoid common pitfalls:
- Testing mock behavior instead of real behavior
- Adding test-only methods to production classes
- Mocking without understanding dependencies
@@ -30,7 +30,7 @@ BRANCH=$(git branch --show-current)
git rev-parse --show-superproject-working-tree 2>/dev/null
```

**If `GIT_DIR != GIT_COMMON` (and not a submodule):** You are already in a linked worktree. Skip to Step 3 (Project Setup). Do NOT create another worktree.
**If `GIT_DIR != GIT_COMMON` (and not a submodule):** You are already in a linked worktree. Skip to Step 2 (Project Setup). Do NOT create another worktree.

Report with branch state:
- On a branch: "Already in isolated workspace at `<path>` on branch `<name>`."
@@ -42,7 +42,7 @@ Has the user already indicated their worktree preference in your instructions? I

> "Would you like me to set up an isolated worktree? It protects your current branch from changes."

Honor any existing declared preference without asking. If the user declines consent, work in place and skip to Step 3.
Honor any existing declared preference without asking. If the user declines consent, work in place and skip to Step 2.

## Step 1: Create Isolated Workspace

@@ -50,7 +50,7 @@ Honor any existing declared preference without asking. If the user declines cons

### 1a. Native Worktree Tools (preferred)

The user has asked for an isolated workspace (Step 0 consent). Do you already have a way to create a worktree? It might be a tool with a name like `EnterWorktree`, `WorktreeCreate`, a `/worktree` command, or a `--worktree` flag. If you do, use it and skip to Step 3.
The user has asked for an isolated workspace (Step 0 consent). Do you already have a way to create a worktree? It might be a tool with a name like `EnterWorktree`, `WorktreeCreate`, a `/worktree` command, or a `--worktree` flag. If you do, use it and skip to Step 2.

Native tools handle directory placement, branch creation, and cleanup automatically. Using `git worktree add` when you have a native tool creates phantom state your harness can't see or manage.

@@ -99,7 +99,7 @@ cd "$path"

**Sandbox fallback:** If `git worktree add` fails with a permission error (sandbox denial), tell the user the sandbox blocked worktree creation and you're working in the current directory instead. Then run setup and baseline tests in place.

## Step 3: Project Setup
## Step 2: Project Setup

Auto-detect and run appropriate setup:

@@ -118,7 +118,7 @@ if [ -f pyproject.toml ]; then poetry install; fi
if [ -f go.mod ]; then go mod download; fi
```

## Step 4: Verify Clean Baseline
## Step 3: Verify Clean Baseline

Run tests to ensure workspace starts clean:
@@ -1,6 +1,6 @@
---
name: using-superpowers
description: Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions
description: Use when starting any conversation - establishes how to find and use skills, requiring skill invocation before ANY response including clarifying questions
---

<SUBAGENT-STOP>
@@ -27,9 +27,13 @@ If CLAUDE.md, GEMINI.md, or AGENTS.md says "don't use TDD" and a skill says "alw

## How to Access Skills

**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you—follow it directly. Never use the Read tool on skill files.
**Never read skill files manually with file tools** — always use your platform's skill-loading mechanism so the skill is properly activated.

**In Copilot CLI:** Use the `skill` tool. Skills are auto-discovered from installed plugins. The `skill` tool works the same as Claude Code's `Skill` tool.
**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you — follow it directly.

**In Codex:** Skills load natively. Follow the instructions presented when a skill activates.

**In Copilot CLI:** Use the `skill` tool. Skills are auto-discovered from installed plugins.

**In Gemini CLI:** Skills activate via the `activate_skill` tool. Gemini loads skill metadata at session start and activates the full content on demand.

@@ -37,7 +41,7 @@ If CLAUDE.md, GEMINI.md, or AGENTS.md says "don't use TDD" and a skill says "alw

## Platform Adaptation

Skills use Claude Code tool names. Non-CC platforms: see `references/copilot-tools.md` (Copilot CLI), `references/codex-tools.md` (Codex) for tool equivalents. Gemini CLI users get the tool mapping loaded automatically via GEMINI.md.
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file") rather than naming any one runtime's tools. For per-platform tool equivalents and instructions-file conventions, see [claude-code-tools.md](references/claude-code-tools.md), [codex-tools.md](references/codex-tools.md), [copilot-tools.md](references/copilot-tools.md), and [gemini-tools.md](references/gemini-tools.md). Gemini CLI users get the tool mapping loaded automatically via GEMINI.md.

# Using Skills
@@ -48,30 +52,30 @@ Skills use Claude Code tool names. Non-CC platforms: see `references/copilot-too
```dot
digraph skill_flow {
"User message received" [shape=doublecircle];
"About to EnterPlanMode?" [shape=doublecircle];
"About to enter plan mode?" [shape=doublecircle];
"Already brainstormed?" [shape=diamond];
"Invoke brainstorming skill" [shape=box];
"Might any skill apply?" [shape=diamond];
"Invoke Skill tool" [shape=box];
"Invoke the skill" [shape=box];
"Announce: 'Using [skill] to [purpose]'" [shape=box];
"Has checklist?" [shape=diamond];
"Create TodoWrite todo per item" [shape=box];
"Create a todo per item" [shape=box];
"Follow skill exactly" [shape=box];
"Respond (including clarifications)" [shape=doublecircle];

"About to EnterPlanMode?" -> "Already brainstormed?";
"About to enter plan mode?" -> "Already brainstormed?";
"Already brainstormed?" -> "Invoke brainstorming skill" [label="no"];
"Already brainstormed?" -> "Might any skill apply?" [label="yes"];
"Invoke brainstorming skill" -> "Might any skill apply?";

"User message received" -> "Might any skill apply?";
"Might any skill apply?" -> "Invoke Skill tool" [label="yes, even 1%"];
"Might any skill apply?" -> "Invoke the skill" [label="yes, even 1%"];
"Might any skill apply?" -> "Respond (including clarifications)" [label="definitely not"];
"Invoke Skill tool" -> "Announce: 'Using [skill] to [purpose]'";
"Invoke the skill" -> "Announce: 'Using [skill] to [purpose]'";
"Announce: 'Using [skill] to [purpose]'" -> "Has checklist?";
"Has checklist?" -> "Create TodoWrite todo per item" [label="yes"];
"Has checklist?" -> "Create a todo per item" [label="yes"];
"Has checklist?" -> "Follow skill exactly" [label="no"];
"Create TodoWrite todo per item" -> "Follow skill exactly";
"Create a todo per item" -> "Follow skill exactly";
}
```
50
skills/using-superpowers/references/claude-code-tools.md
Normal file
@@ -0,0 +1,50 @@
# Claude Code Tool Mapping

Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Claude Code these resolve to the tools below.

## Tools

| Action skills request | Claude Code tool |
|----------------------|------------------|
| Read a file | `Read` |
| Create a new file | `Write` |
| Edit a file | `Edit` |
| Run a shell command | `Bash` |
| Search file contents | `Grep` |
| Find files by name | `Glob` |
| Fetch a URL | `WebFetch` |
| Search the web | `WebSearch` |
| Invoke a skill | `Skill` |
| Dispatch a subagent (`Subagent (general-purpose):` template) | `Agent` (older releases named this `Task`) |
| Multiple parallel dispatches | Multiple `Agent` calls in one response |
| Task tracking ("create a todo", "mark complete") | `TaskCreate`, `TaskUpdate`, `TaskList`, `TaskGet`; `TodoWrite` in `claude -p` / Agent SDK unless `CLAUDE_CODE_ENABLE_TASKS=1` is set |
| Background-process / subagent lifecycle (read output, cancel) | `TaskOutput`, `TaskStop` — these are distinct from the todo tools above and apply to running shells, agents, and remote sessions |

## Instructions file

When a skill mentions "your instructions file", on Claude Code this is **`CLAUDE.md`**. Claude Code walks up the directory tree from the current working directory and concatenates every `CLAUDE.md` and `CLAUDE.local.md` it finds along the way. Standard locations:

| Scope | Location |
|-------|----------|
| Project (team-shared) | `./CLAUDE.md` or `./.claude/CLAUDE.md` |
| User global | `~/.claude/CLAUDE.md` |
| Local-private (gitignored) | `./CLAUDE.local.md` |
| Managed policy (org-wide) | `/Library/Application Support/ClaudeCode/CLAUDE.md` (macOS), `/etc/claude-code/CLAUDE.md` (Linux/WSL), `C:\Program Files\ClaudeCode\CLAUDE.md` (Windows) |

CLAUDE.md files can pull in additional content with `@path/to/file` imports (relative or absolute, max five hops deep). Subdirectory `CLAUDE.md` files are also discovered automatically and loaded on-demand when Claude Code reads files in those subdirectories.

Claude Code does **not** read `AGENTS.md` directly. If a project already maintains `AGENTS.md` for other agents, import it from `CLAUDE.md` so both runtimes share the same instructions:

```markdown
@AGENTS.md

## Claude Code

(Claude-Code-specific instructions go here.)
```

For path-scoped rules and larger-project organization, see `.claude/rules/` (rules can be scoped to specific files via `paths` frontmatter and load on demand).

## Personal skills directory

User-level skills live at **`~/.claude/skills/`**. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter) plus any supporting files. Claude Code does not currently recognize the cross-runtime `~/.agents/skills/` path that Codex, Copilot CLI, and Gemini CLI read; if you're relying on cross-runtime support in the future, verify against the [official skills docs](https://code.claude.com/docs/en/skills).
@@ -1,17 +1,30 @@
# Codex Tool Mapping

Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Codex these resolve to the tools below.

| Skill references | Codex equivalent |
|-----------------|------------------|
| `Task` tool (dispatch subagent) | `spawn_agent` (see [Subagent dispatch requires multi-agent support](#subagent-dispatch-requires-multi-agent-support)) |
| Multiple `Task` calls (parallel) | Multiple `spawn_agent` calls |
| Task returns result | `wait_agent` |
| Task completes automatically | `close_agent` to free slot |
| `TodoWrite` (task tracking) | `update_plan` |
| `Skill` tool (invoke a skill) | Skills load natively — just follow the instructions |
| `Read`, `Write`, `Edit` (files) | Use your native file tools |
| `Bash` (run commands) | Use your native shell tools |
| Action skills request | Codex equivalent |
|----------------------|------------------|
| Read a file | `shell` (e.g., `cat`, `head`, `tail`) — Codex reads files via shell |
| Create / edit / delete a file | `apply_patch` (structured diff for create, update, delete) |
| Run a shell command | `shell` |
| Search file contents | `shell` (e.g., `grep`, `rg`) |
| Find files by name | `shell` (e.g., `find`, `ls`) |
| Fetch a URL | `shell` with `curl` / `wget` — Codex has no native fetch tool |
| Search the web | `web_search` (enabled by default; configurable in `config.toml` via the top-level `web_search` setting — `live`, `cached`, or `disabled`) |
| Invoke a skill | Skills load natively — just follow the instructions |
| Dispatch a subagent (`Subagent (general-purpose):` template) | `spawn_agent` (see [Subagent dispatch requires multi-agent support](#subagent-dispatch-requires-multi-agent-support)) |
| Multiple parallel dispatches | Multiple `spawn_agent` calls in one response |
| Wait for subagent result | `wait_agent` |
| Free up subagent slot when done | `close_agent` |
| Task tracking ("create a todo", "mark complete") | `update_plan` |

## Instructions file

When a skill mentions "your instructions file", on Codex this is **`AGENTS.md`** at the project root. Codex also reads `~/.codex/AGENTS.md` for global context, and an `AGENTS.override.md` (in the project tree or `~/.codex/`) takes precedence when present. Codex walks from the project root down to the current working directory, concatenating `AGENTS.md` files it finds along the way, up to `project_doc_max_bytes` (32 KiB by default).

## Personal skills directory

User-level skills live at **`$CODEX_HOME/skills/`** (default `~/.codex/skills/`). Codex also reads the cross-runtime path **`~/.agents/skills/`** (shared with Copilot CLI and Gemini CLI). When both directories exist at the same scope, Codex loads them both as separate skill catalogs — Codex's docs don't currently document a precedence between them. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter).

## Subagent dispatch requires multi-agent support
@@ -1,31 +1,38 @@
# Copilot CLI Tool Mapping

-Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
+Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Copilot CLI these resolve to the tools below.

-| Skill references | Copilot CLI equivalent |
-|-----------------|----------------------|
-| `Read` (file reading) | `view` |
-| `Write` (file creation) | `create` |
-| `Edit` (file editing) | `edit` |
-| `Bash` (run commands) | `bash` |
-| `Grep` (search file content) | `grep` |
-| `Glob` (search files by name) | `glob` |
-| `Skill` tool (invoke a skill) | `skill` |
-| `WebFetch` | `web_fetch` |
-| `Task` tool (dispatch subagent) | `task` with `agent_type: "general-purpose"` or `"explore"` |
-| Multiple `Task` calls (parallel) | Multiple `task` calls |
-| Task status/output | `read_agent`, `list_agents` |
-| `TodoWrite` (task tracking) | `sql` with built-in `todos` table |
-| `WebSearch` | No equivalent — use `web_fetch` with a search engine URL |
-| `EnterPlanMode` / `ExitPlanMode` | No equivalent — stay in the main session |
+| Action skills request | Copilot CLI equivalent |
+|----------------------|----------------------|
+| Read a file | `view` |
+| Create / edit / delete a file | `apply_patch` (Copilot CLI has no separate create/edit/write tools) |
+| Run a shell command | `bash` |
+| Search file contents | `rg` (ripgrep; Copilot CLI does not expose a `grep` tool) |
+| Find files by name | `glob` |
+| Fetch a URL | `web_fetch` |
+| Search the web | `web_search` |
+| Invoke a skill | `skill` |
+| Dispatch a subagent (`Subagent (general-purpose):` template) | `task` with `agent_type: "general-purpose"` (other accepted types: `explore`, `task`, `code-review`, `research`, `configure-copilot`) |
+| Multiple parallel dispatches | Multiple `task` calls in one response |
+| Subagent status/output/control | `read_agent`, `list_agents`, `write_agent` |
+| Task tracking ("create a todo", "mark complete") | `update_todo` |
+| Enter / exit plan mode | No equivalent — stay in the main session |

## Instructions file

When a skill mentions "your instructions file", on Copilot CLI this is **`AGENTS.md`** at the repository root. If both `AGENTS.md` and `.github/copilot-instructions.md` are present, Copilot reads both.

## Personal skills directory

User-level skills live at **`~/.copilot/skills/`**. Copilot CLI also recognizes the cross-runtime alias **`~/.agents/skills/`**, which is shared with Codex and Gemini CLI. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter).

## Async shell sessions

-Copilot CLI supports persistent async shell sessions, which have no direct Claude Code equivalent:
+Copilot CLI supports persistent async shell sessions:

| Tool | Purpose |
|------|---------|
-| `bash` with `async: true` | Start a long-running command in the background |
+| `bash` with `mode: "async"` (and optionally `detach: true`) | Start a long-running command in the background; returns a `shellId` |
| `write_bash` | Send input to a running async session |
| `read_bash` | Read output from an async session |
| `stop_bash` | Terminate an async session |
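To make the async-session lifecycle concrete, a long-running build might flow like the sketch below. This is illustrative only: `mode`, `detach`, and `shellId` come from the table above, while the `command` and `input` parameter names are assumptions.

```
bash       { command: "npm run build", mode: "async" }   # starts a session, returns a shellId
read_bash  { shellId: "<id>" }                           # poll accumulated output
write_bash { shellId: "<id>", input: "y\n" }             # answer an interactive prompt
stop_bash  { shellId: "<id>" }                           # terminate when finished
```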
@@ -1,51 +1,63 @@
# Gemini CLI Tool Mapping

-Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
+Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Gemini CLI these resolve to the tools below.

-| Skill references | Gemini CLI equivalent |
-|-----------------|----------------------|
-| `Read` (file reading) | `read_file` |
-| `Write` (file creation) | `write_file` |
-| `Edit` (file editing) | `replace` |
-| `Bash` (run commands) | `run_shell_command` |
-| `Grep` (search file content) | `grep_search` |
-| `Glob` (search files by name) | `glob` |
-| `TodoWrite` (task tracking) | `write_todos` |
-| `Skill` tool (invoke a skill) | `activate_skill` |
-| `WebSearch` | `google_web_search` |
-| `WebFetch` | `web_fetch` |
-| `Task` tool (dispatch subagent) | `@agent-name` (see [Subagent support](#subagent-support)) |
+| Action skills request | Gemini CLI equivalent |
+|----------------------|----------------------|
+| Read a file | `read_file` |
+| Read multiple files at once | `read_many_files` |
+| Create a new file | `write_file` |
+| Edit a file | `replace` |
+| Run a shell command | `run_shell_command` |
+| Search file contents | `grep_search` |
+| Find files by name | `glob` |
+| List files and subdirectories | `list_directory` |
+| Fetch a URL | `web_fetch` |
+| Search the web | `google_web_search` |
+| Invoke a skill | `activate_skill` |
+| Dispatch a subagent (`Subagent (general-purpose):` template) | `invoke_agent` with `agent_name: "generalist"` (invocable via `@generalist` chat syntax — see [Subagent support](#subagent-support)) |
+| Multiple parallel dispatches | Multiple `invoke_agent` calls in the same response |
+| Task tracking ("create a todo", "mark complete") | `write_todos` (statuses: pending, in_progress, completed, cancelled, blocked) |

## Instructions file

When a skill mentions "your instructions file", on Gemini CLI this is **`GEMINI.md`**. Gemini CLI loads `GEMINI.md` hierarchically: global at `~/.gemini/GEMINI.md`, project-level files in workspace directories and their ancestors, and sub-directory `GEMINI.md` files when a tool accesses files in those directories.

## Personal skills directory

User-level skills live at **`~/.gemini/skills/`**, with **`~/.agents/skills/`** as a cross-runtime alias (shared with Codex and Copilot CLI). When both directories exist at the same scope, `.agents/skills/` takes precedence. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter).

## Subagent support

-Gemini CLI supports subagents natively via the `@` syntax. Use the built-in `@generalist` agent to dispatch any task — it has access to all tools and follows the prompt you provide.
+Gemini CLI dispatches subagents through the `invoke_agent` tool, which takes `agent_name` and `prompt` parameters. The same dispatch is also surfaced as a chat-syntax shortcut: typing `@generalist <prompt>` is equivalent to calling `invoke_agent` with `agent_name: "generalist"`. Built-in agent names include `generalist`, `cli_help`, `codebase_investigator`, and (with browser tooling enabled) `browser_agent`.

-When a skill says to dispatch a named agent type, use `@generalist` with the full prompt from the skill's prompt template:
+Skills dispatch with `Subagent (general-purpose):` and either reference a prompt-template file (e.g., `superpowers:subagent-driven-development`'s `./implementer-prompt.md`) or supply an inline prompt. On Gemini CLI:

-| Skill instruction | Gemini CLI equivalent |
-|-------------------|----------------------|
-| `Task tool (superpowers:implementer)` | `@generalist` with the filled `implementer-prompt.md` template |
-| `Task tool (superpowers:spec-reviewer)` | `@generalist` with the filled `spec-reviewer-prompt.md` template |
-| `Task tool (superpowers:code-reviewer)` | `@code-reviewer` (bundled agent) or `@generalist` with the filled review prompt |
-| `Task tool (superpowers:code-quality-reviewer)` | `@generalist` with the filled `code-quality-reviewer-prompt.md` template |
-| `Task tool (general-purpose)` with inline prompt | `@generalist` with your inline prompt |
+| Skill dispatch form | Gemini CLI equivalent |
+|---------------------|----------------------|
+| References a `*-prompt.md` template (implementer, spec-reviewer, code-quality-reviewer, code-reviewer, etc.) | Fill the template, then `invoke_agent` with `agent_name: "generalist"` and the filled prompt |
+| References `superpowers:requesting-code-review`'s `./code-reviewer.md` | `invoke_agent` with `agent_name: "generalist"` and the filled review template |
+| Inline prompt (no template referenced) | `invoke_agent` with `agent_name: "generalist"` and your inline prompt |

### Prompt filling

-Skills provide prompt templates with placeholders like `{WHAT_WAS_IMPLEMENTED}` or `[FULL TEXT of task]`. Fill all placeholders and pass the complete prompt as the message to `@generalist`. The prompt template itself contains the agent's role, review criteria, and expected output format — `@generalist` will follow it.
+Skills provide prompt templates with placeholders like `{WHAT_WAS_IMPLEMENTED}` or `[FULL TEXT of task]`. Fill all placeholders before passing the complete prompt to `invoke_agent`. The prompt template itself contains the agent's role, review criteria, and expected output format — the subagent will follow it.

### Parallel dispatch

-Gemini CLI supports parallel subagent dispatch. When a skill asks you to dispatch multiple independent subagent tasks in parallel, request all of those `@generalist` or named subagent tasks together in the same prompt. Keep dependent tasks sequential, but do not serialize independent subagent tasks just to preserve a simpler history.
+Gemini CLI supports parallel subagent dispatch. Issue multiple `invoke_agent` calls in the same response (or multiple `@generalist` invocations in one prompt) to run independent subagent work in parallel. Keep dependent tasks sequential, but do not serialize independent subagent tasks just to preserve a simpler history.

## Additional Gemini CLI tools

-These tools are available in Gemini CLI but have no Claude Code equivalent:
+These tools are unique to Gemini CLI:

| Tool | Purpose |
|------|---------|
-| `list_directory` | List files and subdirectories |
-| `save_memory` | Persist facts to GEMINI.md across sessions |
-| `ask_user` | Request structured input from the user |
-| `tracker_create_task` | Rich task management (create, update, list, visualize) |
-| `enter_plan_mode` / `exit_plan_mode` | Switch to read-only research mode before making changes |
+| `save_memory` (legacy) | Persist facts across sessions when `experimental.memoryV2 = false` |
+| `get_internal_docs` | Look up Gemini CLI's bundled documentation |
+| `ask_user` | Pose structured questions to the user (text / single-select / multi-select) |
+| `enter_plan_mode` / `exit_plan_mode` | Switch into and out of read-only plan mode |
+| `update_topic` | Update the current conversation's topic / strategic-intent metadata |
+| `complete_task` | Signal that a Gemini subagent has completed and return its result to the parent agent |
+| `tracker_create_task`, `tracker_update_task`, `tracker_get_task`, `tracker_list_tasks`, `tracker_add_dependency`, `tracker_visualize` | Rich task tracker with dependency and visualization support |
+| `read_mcp_resource`, `list_mcp_resources` | MCP resource access |
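Putting the pieces together, a filled implementer dispatch on Gemini CLI might look like the sketch below; the bracketed prompt text is a placeholder for the filled template, not literal syntax.

```
invoke_agent {
  agent_name: "generalist",
  prompt: "[full text of the filled implementer-prompt.md template]"
}
```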
@@ -7,7 +7,7 @@ Use this template when dispatching a plan document reviewer subagent.
**Dispatch after:** The complete plan is written.

```
-Task tool (general-purpose):
+Subagent (general-purpose):
description: "Review plan document"
prompt: |
You are a plan document reviewer. Verify this plan is complete and ready for implementation.
@@ -9,7 +9,7 @@ description: Use when creating new skills, editing existing skills, or verifying

**Writing skills IS Test-Driven Development applied to process documentation.**

-**Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.agents/skills/` for Codex)**
+**Personal skills live in your runtime's skills directory** — see [claude-code-tools.md](../using-superpowers/references/claude-code-tools.md), [codex-tools.md](../using-superpowers/references/codex-tools.md), [copilot-tools.md](../using-superpowers/references/copilot-tools.md), or [gemini-tools.md](../using-superpowers/references/gemini-tools.md) for the path on your runtime. Codex, Copilot CLI, and Gemini CLI all also recognize `~/.agents/skills/` as a cross-runtime alias.

You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
@@ -21,7 +21,7 @@ You write test cases (pressure scenarios with subagents), watch them fail (basel

## What is a Skill?

-A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
+A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future agents find and apply effective approaches.

**Skills are:** Reusable techniques, patterns, tools, reference guides
@@ -55,7 +55,7 @@ The entire skill creation process follows RED-GREEN-REFACTOR.
**Don't create for:**
- One-off solutions
- Standard practices well-documented elsewhere
-- Project-specific conventions (put in CLAUDE.md)
+- Project-specific conventions (put in your instructions file)
- Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)

## Skill Types
@@ -99,7 +99,7 @@ skills/
- `description`: Third-person, describes ONLY when to use (NOT what it does)
- Start with "Use when..." to focus on triggering conditions
- Include specific symptoms, situations, and contexts
-- **NEVER summarize the skill's process or workflow** (see CSO section for why)
+- **NEVER summarize the skill's process or workflow** (see SDO section for why)
- Keep under 500 characters if possible

```markdown
@@ -137,13 +137,13 @@ Concrete results
```

-## Claude Search Optimization (CSO)
+## Skill Discovery Optimization (SDO)

-**Critical for discovery:** Future Claude needs to FIND your skill
+**Critical for discovery:** Future agents need to FIND your skill

### 1. Rich Description Field

-**Purpose:** Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
+**Purpose:** Your agent reads the description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"

**Format:** Start with "Use when..." to focus on triggering conditions
@@ -151,14 +151,14 @@ Concrete results

The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.

-**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).
+**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, an agent may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused an agent to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).

-When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.
+When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), the agent correctly read the flowchart and followed the two-stage review process.

-**The trap:** Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.
+**The trap:** Descriptions that summarize workflow create a shortcut agents will take. The skill body becomes documentation agents skip.

```yaml
-# ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
+# ❌ BAD: Summarizes workflow - agents may follow this instead of reading skill
description: Use when executing plans - dispatches subagent per task with code review between tasks

# ❌ BAD: Too much process detail
@@ -198,7 +198,7 @@ description: Use when using React Router and handling authentication redirects

### 2. Keyword Coverage

-Use words Claude would search for:
+Use words an agent would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
@@ -275,7 +275,7 @@ wc -w skills/path/SKILL.md
- `creating-skills`, `testing-skills`, `debugging-with-logs`
- Active, describes the action you're taking

-### 4. Cross-Referencing Other Skills
+### 5. Cross-Referencing Other Skills

**When writing documentation that references other skills:**
@@ -313,7 +313,7 @@ digraph when_flowchart {
- Linear instructions → Numbered lists
- Labels without semantic meaning (step1, helper2)

-See @graphviz-conventions.dot for graphviz style rules.
+See `graphviz-conventions.dot` in this directory for graphviz style rules.

**Visualizing for your human partner:** Use `render-graphs.js` in this directory to render a skill's flowcharts to SVG:
```bash
@@ -522,7 +522,7 @@ Make it easy for agents to self-check when rationalizing:
**All of these mean: Delete code. Start over with TDD.**
```

-### Update CSO for Violation Symptoms
+### Update SDO for Violation Symptoms

Add to description: symptoms of when you're ABOUT to violate the rule:
@@ -553,7 +553,7 @@ Run same scenarios WITH skill. Agent should now comply.

Agent found new rationalization? Add explicit counter. Re-test until bulletproof.

-**Testing methodology:** See @testing-skills-with-subagents.md for the complete testing methodology:
+**Testing methodology:** See [testing-skills-with-subagents.md](testing-skills-with-subagents.md) for the complete testing methodology:
- How to write pressure scenarios
- Pressure types (time, sunk cost, authority, exhaustion)
- Plugging holes systematically
@@ -595,7 +595,7 @@ Deploying untested skills = deploying untested code. It's a violation of quality

## Skill Creation Checklist (TDD Adapted)

-**IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.**
+**IMPORTANT: Create a todo for EACH checklist item below.**

**RED Phase - Write Failing Test:**
- [ ] Create pressure scenarios (3+ combined pressures for discipline skills)
@@ -634,9 +634,10 @@ Deploying untested skills = deploying untested code. It's a violation of quality

## Discovery Workflow

-How future Claude finds your skill:
+How future agents find your skill:

1. **Encounters problem** ("tests are flaky")
2. **Searches skills** (greps descriptions, browses categories)
3. **Finds SKILL** (description matches)
4. **Scans overview** (is this relevant?)
5. **Reads patterns** (quick reference table)
@@ -1,30 +1,30 @@
# Skill authoring best practices

-> Learn how to write effective Skills that Claude can discover and use successfully.
+> Learn how to write effective Skills that agents can discover and use successfully.

-Good Skills are concise, well-structured, and tested with real usage. This guide provides practical authoring decisions to help you write Skills that Claude can discover and use effectively.
+Good Skills are concise, well-structured, and tested with real usage. This guide provides practical authoring decisions to help you write Skills that agents can discover and use effectively.

-For conceptual background on how Skills work, see the [Skills overview](/en/docs/agents-and-tools/agent-skills/overview).
+For conceptual background on how Skills work, see the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview).

## Core principles

### Concise is key

-The [context window](https://platform.claude.com/docs/en/build-with-claude/context-windows) is a public good. Your Skill shares the context window with everything else Claude needs to know, including:
+The [context window](https://platform.claude.com/docs/en/build-with-claude/context-windows) is a public good. Your Skill shares the context window with everything else your agent needs to know, including:

* The system prompt
* Conversation history
* Other Skills' metadata
* Your actual request

-Not every token in your Skill has an immediate cost. At startup, only the metadata (name and description) from all Skills is pre-loaded. Claude reads SKILL.md only when the Skill becomes relevant, and reads additional files only as needed. However, being concise in SKILL.md still matters: once Claude loads it, every token competes with conversation history and other context.
+Not every token in your Skill has an immediate cost. At startup, only the metadata (name and description) from all Skills is pre-loaded. Agents read SKILL.md only when the Skill becomes relevant, and read additional files only as needed. However, being concise in SKILL.md still matters: once an agent loads it, every token competes with conversation history and other context.

-**Default assumption**: Claude is already very smart
+**Default assumption**: Agents are already very smart

-Only add context Claude doesn't already have. Challenge each piece of information:
+Only add context agents don't already have. Challenge each piece of information:

-* "Does Claude really need this explanation?"
-* "Can I assume Claude knows this?"
+* "Does the agent really need this explanation?"
+* "Can I assume the agent knows this?"
* "Does this paragraph justify its token cost?"

**Good example: Concise** (approximately 50 tokens):
@@ -54,7 +54,7 @@ recommend pdfplumber because it's easy to use and handles most cases well.
First, you'll need to install it using pip. Then you can use the code below...
```

-The concise version assumes Claude knows what PDFs are and how libraries work.
+The concise version assumes the agent knows what PDFs are and how libraries work.

### Set appropriate degrees of freedom
@@ -124,10 +124,10 @@ python scripts/migrate.py --verify --backup
Do not modify the command or add additional flags.
````

-**Analogy**: Think of Claude as a robot exploring a path:
+**Analogy**: Think of the agent as a robot exploring a path:

* **Narrow bridge with cliffs on both sides**: There's only one safe way forward. Provide specific guardrails and exact instructions (low freedom). Example: database migrations that must run in exact sequence.
-* **Open field with no hazards**: Many paths lead to success. Give general direction and trust Claude to find the best route (high freedom). Example: code reviews where context determines the best approach.
+* **Open field with no hazards**: Many paths lead to success. Give general direction and trust the agent to find the best route (high freedom). Example: code reviews where context determines the best approach.

### Test with all models you plan to use
@@ -149,7 +149,7 @@ What works perfectly for Opus might need more detail for Haiku. If you plan to u
* `name` - Human-readable name of the Skill (64 characters maximum)
* `description` - One-line description of what the Skill does and when to use it (1024 characters maximum)

-For complete Skill structure details, see the [Skills overview](/en/docs/agents-and-tools/agent-skills/overview#skill-structure).
+For complete Skill structure details, see the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#skill-structure).
</Note>

### Naming conventions
@@ -196,7 +196,7 @@ The `description` field enables Skill discovery and should include both what the

**Be specific and include key terms**. Include both what the Skill does and specific triggers/contexts for when to use it.

-Each Skill has exactly one description field. The description is critical for skill selection: Claude uses it to choose the right Skill from potentially 100+ available Skills. Your description must provide enough detail for Claude to know when to select this Skill, while the rest of SKILL.md provides the implementation details.
+Each Skill has exactly one description field. The description is critical for skill selection: agents use it to choose the right Skill from potentially 100+ available Skills. Your description must provide enough detail for an agent to know when to select this Skill, while the rest of SKILL.md provides the implementation details.

Effective examples:
@@ -234,7 +234,7 @@ description: Does stuff with files

### Progressive disclosure patterns

-SKILL.md serves as an overview that points Claude to detailed materials as needed, like a table of contents in an onboarding guide. For an explanation of how progressive disclosure works, see [How Skills work](/en/docs/agents-and-tools/agent-skills/overview#how-skills-work) in the overview.
+SKILL.md serves as an overview that points agents to detailed materials as needed, like a table of contents in an onboarding guide. For an explanation of how progressive disclosure works, see [How Skills work](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#how-skills-work) in the overview.

**Practical guidance:**
@@ -248,7 +248,7 @@ A basic Skill starts with just a SKILL.md file containing metadata and instructi

<img src="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=87782ff239b297d9a9e8e1b72ed72db9" alt="Simple SKILL.md file showing YAML frontmatter and markdown body" data-og-width="2048" width="2048" data-og-height="1153" height="1153" data-path="images/agent-skills-simple-file.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=280&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=c61cc33b6f5855809907f7fda94cd80e 280w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=560&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=90d2c0c1c76b36e8d485f49e0810dbfd 560w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=840&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=ad17d231ac7b0bea7e5b4d58fb4aeabb 840w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=1100&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=f5d0a7a3c668435bb0aee9a3a8f8c329 1100w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=1650&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=0e927c1af9de5799cfe557d12249f6e6 1650w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=2500&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=46bbb1a51dd4c8202a470ac8c80a893d 2500w" />

-As your Skill grows, you can bundle additional content that Claude loads only when needed:
+As your Skill grows, you can bundle additional content that agents load only when needed:

<img src="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=a5e0aa41e3d53985a7e3e43668a33ea3" alt="Bundling additional reference files like reference.md and forms.md." data-og-width="2048" width="2048" data-og-height="1327" height="1327" data-path="images/agent-skills-bundling-content.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=280&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=f8a0e73783e99b4a643d79eac86b70a2 280w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=560&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=dc510a2a9d3f14359416b706f067904a 560w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=840&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=82cd6286c966303f7dd914c28170e385 840w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=1100&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=56f3be36c77e4fe4b523df209a6824c6 1100w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=1650&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=d22b5161b2075656417d56f41a74f3dd 1650w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=2500&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=3dd4bdd6850ffcc96c6c45fcb0acd6eb 2500w" />
@@ -292,11 +292,11 @@ with pdfplumber.open("file.pdf") as pdf:
**Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
````

-Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
+Agents load FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.

#### Pattern 2: Domain-specific organization

-For Skills with multiple domains, organize content by domain to avoid loading irrelevant context. When a user asks about sales metrics, Claude only needs to read sales-related schemas, not finance or marketing data. This keeps token usage low and context focused.
+For Skills with multiple domains, organize content by domain to avoid loading irrelevant context. When a user asks about sales metrics, the agent only needs to read sales-related schemas, not finance or marketing data. This keeps token usage low and context focused.

```
bigquery-skill/
@@ -348,13 +348,13 @@ For simple edits, modify the XML directly.
**For OOXML details**: See [OOXML.md](OOXML.md)
```

-Claude reads REDLINING.md or OOXML.md only when the user needs those features.
+Agents read REDLINING.md or OOXML.md only when the user needs those features.

### Avoid deeply nested references

-Claude may partially read files when they're referenced from other referenced files. When encountering nested references, Claude might use commands like `head -100` to preview content rather than reading entire files, resulting in incomplete information.
+Agents may partially read files when they're referenced from other referenced files. When encountering nested references, an agent might use commands like `head -100` to preview content rather than reading entire files, resulting in incomplete information.

-**Keep references one level deep from SKILL.md**. All reference files should link directly from SKILL.md to ensure Claude reads complete files when needed.
+**Keep references one level deep from SKILL.md**. All reference files should link directly from SKILL.md to ensure agents read complete files when needed.

**Bad example: Too deep**:
@@ -382,7 +382,7 @@ Here's the actual information...
|
||||
|
||||
### Structure longer reference files with table of contents
|
||||
|
||||
For reference files longer than 100 lines, include a table of contents at the top. This ensures Claude can see the full scope of available information even when previewing with partial reads.
|
||||
For reference files longer than 100 lines, include a table of contents at the top. This ensures agents can see the full scope of available information even when previewing with partial reads.
|
||||
|
||||
**Example**:
|
||||
|
||||
@@ -403,7 +403,7 @@ For reference files longer than 100 lines, include a table of contents at the to
|
||||
...
|
||||
```
|
||||
|
||||
Claude can then read the complete file or jump to specific sections as needed.
|
||||
Agents can then read the complete file or jump to specific sections as needed.
|
||||
|
||||
For details on how this filesystem-based architecture enables progressive disclosure, see the [Runtime environment](#runtime-environment) section in the Advanced section below.
|
||||
|
||||
@@ -411,7 +411,7 @@ For details on how this filesystem-based architecture enables progressive disclo

### Use workflows for complex tasks

Break complex operations into clear, sequential steps. For particularly complex workflows, provide a checklist that Claude can copy into its response and check off as it progresses.
Break complex operations into clear, sequential steps. For particularly complex workflows, provide a checklist that the agent can copy into its response and check off as it progresses.

**Example 1: Research synthesis workflow** (for Skills without code):

@@ -498,7 +498,7 @@ Run: `python scripts/verify_output.py output.pdf`
If verification fails, return to Step 2.
````

Clear steps prevent Claude from skipping critical validation. The checklist helps both Claude and you track progress through multi-step workflows.
Clear steps prevent agents from skipping critical validation. The checklist helps both you and the agent track progress through multi-step workflows.

### Implement feedback loops

@@ -524,7 +524,7 @@ This pattern greatly improves output quality.
5. Finalize and save the document
```

This shows the validation loop pattern using reference documents instead of scripts. The "validator" is STYLE\_GUIDE.md, and Claude performs the check by reading and comparing.
This shows the validation loop pattern using reference documents instead of scripts. The "validator" is STYLE\_GUIDE.md, and the agent performs the check by reading and comparing.

**Example 2: Document editing process** (for Skills with code):

@@ -593,7 +593,7 @@ Choose one term and use it throughout the Skill:
* Mix "field", "box", "element", "control"
* Mix "extract", "pull", "get", "retrieve"

Consistency helps Claude understand and follow instructions.
Consistency helps agents understand and follow instructions.

## Common patterns
@@ -688,11 +688,11 @@ chore: update dependencies and refactor error handling
Follow this style: type(scope): brief description, then detailed explanation.
````

Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.
Examples help agents understand the desired style and level of detail more clearly than descriptions alone.

### Conditional workflow pattern

Guide Claude through decision points:
Guide agents through decision points:

```markdown theme={null}
## Document modification workflow
@@ -715,7 +715,7 @@ Guide Claude through decision points:
```

<Tip>
If workflows become large or complicated with many steps, consider pushing them into separate files and tell Claude to read the appropriate file based on the task at hand.
If workflows become large or complicated with many steps, consider pushing them into separate files and tell the agent to read the appropriate file based on the task at hand.
</Tip>

## Evaluation and iteration
@@ -726,9 +726,9 @@ Guide Claude through decision points:

**Evaluation-driven development:**

1. **Identify gaps**: Run Claude on representative tasks without a Skill. Document specific failures or missing context
1. **Identify gaps**: Run your agent on representative tasks without a Skill. Document specific failures or missing context
2. **Create evaluations**: Build three scenarios that test these gaps
3. **Establish baseline**: Measure Claude's performance without the Skill
3. **Establish baseline**: Measure the agent's performance without the Skill
4. **Write minimal instructions**: Create just enough content to address the gaps and pass evaluations
5. **Iterate**: Execute evaluations, compare against baseline, and refine
@@ -753,51 +753,51 @@ This approach ensures you're solving actual problems rather than anticipating re
This example demonstrates a data-driven evaluation with a simple testing rubric. We do not currently provide a built-in way to run these evaluations. Users can create their own evaluation system. Evaluations are your source of truth for measuring Skill effectiveness.
</Note>

### Develop Skills iteratively with Claude
### Develop Skills iteratively with the agent

The most effective Skill development process involves Claude itself. Work with one instance of Claude ("Claude A") to create a Skill that will be used by other instances ("Claude B"). Claude A helps you design and refine instructions, while Claude B tests them in real tasks. This works because Claude models understand both how to write effective agent instructions and what information agents need.
The most effective Skill development process involves the agent itself. Work with one instance ("Agent A") to create a Skill that will be used by other instances ("Agent B"). Agent A helps you design and refine instructions, while Agent B tests them in real tasks. This works because the underlying models understand both how to write effective agent instructions and what information agents need.

**Creating a new Skill:**

1. **Complete a task without a Skill**: Work through a problem with Claude A using normal prompting. As you work, you'll naturally provide context, explain preferences, and share procedural knowledge. Notice what information you repeatedly provide.
1. **Complete a task without a Skill**: Work through a problem with Agent A using normal prompting. As you work, you'll naturally provide context, explain preferences, and share procedural knowledge. Notice what information you repeatedly provide.

2. **Identify the reusable pattern**: After completing the task, identify what context you provided that would be useful for similar future tasks.

**Example**: If you worked through a BigQuery analysis, you might have provided table names, field definitions, filtering rules (like "always exclude test accounts"), and common query patterns.

3. **Ask Claude A to create a Skill**: "Create a Skill that captures this BigQuery analysis pattern we just used. Include the table schemas, naming conventions, and the rule about filtering test accounts."
3. **Ask Agent A to create a Skill**: "Create a Skill that captures this BigQuery analysis pattern we just used. Include the table schemas, naming conventions, and the rule about filtering test accounts."

<Tip>
Claude models understand the Skill format and structure natively. You don't need special system prompts or a "writing skills" skill to get Claude to help create Skills. Simply ask Claude to create a Skill and it will generate properly structured SKILL.md content with appropriate frontmatter and body content.
Modern agents understand the Skill format and structure natively. You don't need special system prompts or a "writing skills" skill to get help creating Skills. Simply ask the agent to create a Skill and it will generate properly structured SKILL.md content with appropriate frontmatter and body content.
</Tip>

4. **Review for conciseness**: Check that Claude A hasn't added unnecessary explanations. Ask: "Remove the explanation about what win rate means - Claude already knows that."
4. **Review for conciseness**: Check that Agent A hasn't added unnecessary explanations. Ask: "Remove the explanation about what win rate means - the agent already knows that."

5. **Improve information architecture**: Ask Claude A to organize the content more effectively. For example: "Organize this so the table schema is in a separate reference file. We might add more tables later."
5. **Improve information architecture**: Ask Agent A to organize the content more effectively. For example: "Organize this so the table schema is in a separate reference file. We might add more tables later."

6. **Test on similar tasks**: Use the Skill with Claude B (a fresh instance with the Skill loaded) on related use cases. Observe whether Claude B finds the right information, applies rules correctly, and handles the task successfully.
6. **Test on similar tasks**: Use the Skill with Agent B (a fresh instance with the Skill loaded) on related use cases. Observe whether Agent B finds the right information, applies rules correctly, and handles the task successfully.

7. **Iterate based on observation**: If Claude B struggles or misses something, return to Claude A with specifics: "When Claude used this Skill, it forgot to filter by date for Q4. Should we add a section about date filtering patterns?"
7. **Iterate based on observation**: If Agent B struggles or misses something, return to Agent A with specifics: "When the agent used this Skill, it forgot to filter by date for Q4. Should we add a section about date filtering patterns?"
**Iterating on existing Skills:**

The same hierarchical pattern continues when improving Skills. You alternate between:

* **Working with Claude A** (the expert who helps refine the Skill)
* **Testing with Claude B** (the agent using the Skill to perform real work)
* **Observing Claude B's behavior** and bringing insights back to Claude A
* **Working with Agent A** (the expert who helps refine the Skill)
* **Testing with Agent B** (the agent using the Skill to perform real work)
* **Observing Agent B's behavior** and bringing insights back to Agent A

1. **Use the Skill in real workflows**: Give Claude B (with the Skill loaded) actual tasks, not test scenarios
1. **Use the Skill in real workflows**: Give Agent B (with the Skill loaded) actual tasks, not test scenarios

2. **Observe Claude B's behavior**: Note where it struggles, succeeds, or makes unexpected choices
2. **Observe Agent B's behavior**: Note where it struggles, succeeds, or makes unexpected choices

**Example observation**: "When I asked Claude B for a regional sales report, it wrote the query but forgot to filter out test accounts, even though the Skill mentions this rule."
**Example observation**: "When I asked Agent B for a regional sales report, it wrote the query but forgot to filter out test accounts, even though the Skill mentions this rule."

3. **Return to Claude A for improvements**: Share the current SKILL.md and describe what you observed. Ask: "I noticed Claude B forgot to filter test accounts when I asked for a regional report. The Skill mentions filtering, but maybe it's not prominent enough?"
3. **Return to Agent A for improvements**: Share the current SKILL.md and describe what you observed. Ask: "I noticed Agent B forgot to filter test accounts when I asked for a regional report. The Skill mentions filtering, but maybe it's not prominent enough?"

4. **Review Claude A's suggestions**: Claude A might suggest reorganizing to make rules more prominent, using stronger language like "MUST filter" instead of "always filter", or restructuring the workflow section.
4. **Review Agent A's suggestions**: Agent A might suggest reorganizing to make rules more prominent, using stronger language like "MUST filter" instead of "always filter", or restructuring the workflow section.

5. **Apply and test changes**: Update the Skill with Claude A's refinements, then test again with Claude B on similar requests
5. **Apply and test changes**: Update the Skill with Agent A's refinements, then test again with Agent B on similar requests

6. **Repeat based on usage**: Continue this observe-refine-test cycle as you encounter new scenarios. Each iteration improves the Skill based on real agent behavior, not assumptions.
@@ -807,18 +807,18 @@ The same hierarchical pattern continues when improving Skills. You alternate bet
2. Ask: Does the Skill activate when expected? Are instructions clear? What's missing?
3. Incorporate feedback to address blind spots in your own usage patterns

**Why this approach works**: Claude A understands agent needs, you provide domain expertise, Claude B reveals gaps through real usage, and iterative refinement improves Skills based on observed behavior rather than assumptions.
**Why this approach works**: Agent A understands agent needs, you provide domain expertise, Agent B reveals gaps through real usage, and iterative refinement improves Skills based on observed behavior rather than assumptions.

### Observe how Claude navigates Skills
### Observe how agents navigate Skills

As you iterate on Skills, pay attention to how Claude actually uses them in practice. Watch for:
As you iterate on Skills, pay attention to how agents actually use them in practice. Watch for:

* **Unexpected exploration paths**: Does Claude read files in an order you didn't anticipate? This might indicate your structure isn't as intuitive as you thought
* **Missed connections**: Does Claude fail to follow references to important files? Your links might need to be more explicit or prominent
* **Overreliance on certain sections**: If Claude repeatedly reads the same file, consider whether that content should be in the main SKILL.md instead
* **Ignored content**: If Claude never accesses a bundled file, it might be unnecessary or poorly signaled in the main instructions
* **Unexpected exploration paths**: Does the agent read files in an order you didn't anticipate? This might indicate your structure isn't as intuitive as you thought
* **Missed connections**: Does the agent fail to follow references to important files? Your links might need to be more explicit or prominent
* **Overreliance on certain sections**: If the agent repeatedly reads the same file, consider whether that content should be in the main SKILL.md instead
* **Ignored content**: If the agent never accesses a bundled file, it might be unnecessary or poorly signaled in the main instructions

Iterate based on these observations rather than assumptions. The 'name' and 'description' in your Skill's metadata are particularly critical. Claude uses these when deciding whether to trigger the Skill in response to the current task. Make sure they clearly describe what the Skill does and when it should be used.
Iterate based on these observations rather than assumptions. The 'name' and 'description' in your Skill's metadata are particularly critical. Agents use these when deciding whether to trigger the Skill in response to the current task. Make sure they clearly describe what the Skill does and when it should be used.

## Anti-patterns to avoid
@@ -854,7 +854,7 @@ The sections below focus on Skills that include executable scripts. If your Skil

### Solve, don't punt

When writing scripts for Skills, handle error conditions rather than punting to Claude.
When writing scripts for Skills, handle error conditions rather than punting to the agent.

**Good example: Handle errors explicitly**:

@@ -876,15 +876,15 @@ def process_file(path):
    return ''
```

**Bad example: Punt to Claude**:
**Bad example: Punt to the agent**:

```python theme={null}
def process_file(path):
    # Just fail and let Claude figure it out
    # Just fail and let the agent figure it out
    return open(path).read()
```

Configuration parameters should also be justified and documented to avoid "voodoo constants" (Ousterhout's law). If you don't know the right value, how will Claude determine it?
Configuration parameters should also be justified and documented to avoid "voodoo constants" (Ousterhout's law). If you don't know the right value, how will the agent determine it?

**Good example: Self-documenting**:
@@ -907,7 +907,7 @@ RETRIES = 5 # Why 5?

### Provide utility scripts

Even if Claude could write a script, pre-made scripts offer advantages:
Even if your agent could write a script, pre-made scripts offer advantages:

**Benefits of utility scripts**:

@@ -918,9 +918,9 @@ Even if Claude could write a script, pre-made scripts offer advantages:

<img src="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=4bbc45f2c2e0bee9f2f0d5da669bad00" alt="Bundling executable scripts alongside instruction files" data-og-width="2048" width="2048" data-og-height="1154" height="1154" data-path="images/agent-skills-executable-scripts.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=280&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=9a04e6535a8467bfeea492e517de389f 280w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=560&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=e49333ad90141af17c0d7651cca7216b 560w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=840&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=954265a5df52223d6572b6214168c428 840w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=1100&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=2ff7a2d8f2a83ee8af132b29f10150fd 1100w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=1650&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=48ab96245e04077f4d15e9170e081cfb 1650w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=2500&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=0301a6c8b3ee879497cc5b5483177c90 2500w" />

The diagram above shows how executable scripts work alongside instruction files. The instruction file (forms.md) references the script, and Claude can execute it without loading its contents into context.
The diagram above shows how executable scripts work alongside instruction files. The instruction file (forms.md) references the script, and the agent can execute it without loading its contents into context.

**Important distinction**: Make clear in your instructions whether Claude should:
**Important distinction**: Make clear in your instructions whether the agent should:

* **Execute the script** (most common): "Run `analyze_form.py` to extract fields"
* **Read it as reference** (for complex logic): "See `analyze_form.py` for the field extraction algorithm"
@@ -962,7 +962,7 @@ python scripts/fill_form.py input.pdf fields.json output.pdf

### Use visual analysis

When inputs can be rendered as images, have Claude analyze them:
When inputs can be rendered as images, have the agent analyze them:

````markdown theme={null}
## Form layout analysis
@@ -973,20 +973,20 @@ When inputs can be rendered as images, have Claude analyze them:
```

2. Analyze each page image to identify form fields
3. Claude can see field locations and types visually
3. The agent can see field locations and types visually
````

<Note>
In this example, you'd need to write the `pdf_to_images.py` script.
</Note>

Claude's vision capabilities help understand layouts and structures.
Agent vision capabilities help understand layouts and structures.

### Create verifiable intermediate outputs

When Claude performs complex, open-ended tasks, it can make mistakes. The "plan-validate-execute" pattern catches errors early by having Claude first create a plan in a structured format, then validate that plan with a script before executing it.
When agents perform complex, open-ended tasks, they can make mistakes. The "plan-validate-execute" pattern catches errors early by having the agent first create a plan in a structured format, then validate that plan with a script before executing it.

**Example**: Imagine asking Claude to update 50 form fields in a PDF based on a spreadsheet. Without validation, Claude might reference non-existent fields, create conflicting values, miss required fields, or apply updates incorrectly.
**Example**: Imagine asking the agent to update 50 form fields in a PDF based on a spreadsheet. Without validation, it might reference non-existent fields, create conflicting values, miss required fields, or apply updates incorrectly.

**Solution**: Use the workflow pattern shown above (PDF form filling), but add an intermediate `changes.json` file that gets validated before applying changes. The workflow becomes: analyze → **create plan file** → **validate plan** → execute → verify.
@@ -994,12 +994,12 @@ When Claude performs complex, open-ended tasks, it can make mistakes. The "plan-

* **Catches errors early**: Validation finds problems before changes are applied
* **Machine-verifiable**: Scripts provide objective verification
* **Reversible planning**: Claude can iterate on the plan without touching originals
* **Reversible planning**: The agent can iterate on the plan without touching originals
* **Clear debugging**: Error messages point to specific problems

**When to use**: Batch operations, destructive changes, complex validation rules, high-stakes operations.

**Implementation tip**: Make validation scripts verbose with specific error messages like "Field 'signature\_date' not found. Available fields: customer\_name, order\_total, signature\_date\_signed" to help Claude fix issues.
**Implementation tip**: Make validation scripts verbose with specific error messages like "Field 'signature\_date' not found. Available fields: customer\_name, order\_total, signature\_date\_signed" to help the agent fix issues.
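The verbose-validator idea above can be sketched as a small function. This is an illustrative sketch only, not code from the repository being diffed; the field names and the shape of the `changes.json` plan are assumptions:

```python
# Hypothetical plan validator for the PDF form-filling workflow.
# Field names and the changes.json structure are illustrative assumptions.

def validate_plan(planned_fields, available_fields):
    """Return a list of verbose error messages; an empty list means the plan is valid."""
    errors = []
    for field in planned_fields:
        if field not in available_fields:
            # Verbose message: name the missing field AND list what is available,
            # so the agent can correct the plan without guessing.
            errors.append(
                f"Field '{field}' not found. "
                f"Available fields: {', '.join(sorted(available_fields))}"
            )
    return errors

errors = validate_plan(
    ["customer_name", "signature_date"],  # fields named in the hypothetical changes.json
    {"customer_name", "order_total", "signature_date_signed"},
)
# errors == ["Field 'signature_date' not found. Available fields: customer_name, order_total, signature_date_signed"]
```

The payoff is the error text itself: listing the available fields turns a dead-end failure into something the agent can repair in one step.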
### Package dependencies

@@ -1008,32 +1008,32 @@ Skills run in the code execution environment with platform-specific limitations:
* **claude.ai**: Can install packages from npm and PyPI and pull from GitHub repositories
* **Anthropic API**: Has no network access and no runtime package installation

List required packages in your SKILL.md and verify they're available in the [code execution tool documentation](/en/docs/agents-and-tools/tool-use/code-execution-tool).
List required packages in your SKILL.md and verify they're available in the [code execution tool documentation](https://platform.claude.com/docs/en/agents-and-tools/tool-use/code-execution-tool).
### Runtime environment

Skills run in a code execution environment with filesystem access, bash commands, and code execution capabilities. For the conceptual explanation of this architecture, see [The Skills architecture](/en/docs/agents-and-tools/agent-skills/overview#the-skills-architecture) in the overview.
Skills run in a code execution environment with filesystem access, bash commands, and code execution capabilities. For the conceptual explanation of this architecture, see [The Skills architecture](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#the-skills-architecture) in the overview.

**How this affects your authoring:**

**How Claude accesses Skills:**
**How agents access Skills:**

1. **Metadata pre-loaded**: At startup, the name and description from all Skills' YAML frontmatter are loaded into the system prompt
2. **Files read on-demand**: Claude uses bash Read tools to access SKILL.md and other files from the filesystem when needed
2. **Files read on-demand**: Agents use their file-reading tools to access SKILL.md and other files from the filesystem when needed
3. **Scripts executed efficiently**: Utility scripts can be executed via bash without loading their full contents into context. Only the script's output consumes tokens
4. **No context penalty for large files**: Reference files, data, or documentation don't consume context tokens until actually read

* **File paths matter**: Claude navigates your skill directory like a filesystem. Use forward slashes (`reference/guide.md`), not backslashes
* **File paths matter**: Agents navigate your skill directory like a filesystem. Use forward slashes (`reference/guide.md`), not backslashes
* **Name files descriptively**: Use names that indicate content: `form_validation_rules.md`, not `doc2.md`
* **Organize for discovery**: Structure directories by domain or feature
  * Good: `reference/finance.md`, `reference/sales.md`
  * Bad: `docs/file1.md`, `docs/file2.md`
* **Bundle comprehensive resources**: Include complete API docs, extensive examples, large datasets; no context penalty until accessed
* **Prefer scripts for deterministic operations**: Write `validate_form.py` rather than asking Claude to generate validation code
* **Prefer scripts for deterministic operations**: Write `validate_form.py` rather than asking the agent to generate validation code
* **Make execution intent clear**:
  * "Run `analyze_form.py` to extract fields" (execute)
  * "See `analyze_form.py` for the extraction algorithm" (read as reference)
* **Test file access patterns**: Verify Claude can navigate your directory structure by testing with real requests
* **Test file access patterns**: Verify the agent can navigate your directory structure by testing with real requests

**Example:**
@@ -1046,9 +1046,9 @@ bigquery-skill/
└── product.md (usage analytics)
```

When the user asks about revenue, Claude reads SKILL.md, sees the reference to `reference/finance.md`, and invokes bash to read just that file. The sales.md and product.md files remain on the filesystem, consuming zero context tokens until needed. This filesystem-based model is what enables progressive disclosure. Claude can navigate and selectively load exactly what each task requires.
When the user asks about revenue, the agent reads SKILL.md, sees the reference to `reference/finance.md`, and invokes bash to read just that file. The sales.md and product.md files remain on the filesystem, consuming zero context tokens until needed. This filesystem-based model is what enables progressive disclosure. Agents can navigate and selectively load exactly what each task requires.

For complete details on the technical architecture, see [How Skills work](/en/docs/agents-and-tools/agent-skills/overview#how-skills-work) in the Skills overview.
For complete details on the technical architecture, see [How Skills work](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#how-skills-work) in the Skills overview.

### MCP tool references

@@ -1068,7 +1068,7 @@ Where:
* `BigQuery` and `GitHub` are MCP server names
* `bigquery_schema` and `create_issue` are the tool names within those servers

Without the server prefix, Claude may fail to locate the tool, especially when multiple MCP servers are available.
Without the server prefix, agents may fail to locate the tool, especially when multiple MCP servers are available.
### Avoid assuming tools are installed
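One defensive pattern for this: probe for a dependency before using it, and fail with an actionable message. A minimal sketch, assuming only the standard library's `importlib.util`; the `pypdf` name is just an example of a third-party package a Skill might need:

```python
import importlib.util

def require(package, hint=None):
    """Fail fast with an actionable message when a dependency is missing."""
    if importlib.util.find_spec(package) is None:
        # Tell the agent exactly how to recover instead of letting an
        # ImportError surface somewhere deep in the script.
        raise SystemExit(
            f"Missing dependency '{package}'. Install it with: pip install {hint or package}"
        )

require("json")  # stdlib module, always present: passes silently
# require("pypdf")  # hypothetical third-party example: would exit with install instructions if absent
```

On the Anthropic API, where runtime installation isn't available, the same check at least turns a confusing traceback into a clear statement of what the Skill needs.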
@@ -1092,11 +1092,11 @@ reader = PdfReader("file.pdf")

### YAML frontmatter requirements

The SKILL.md frontmatter requires `name` (64 characters max) and `description` (1024 characters max) fields. See the [Skills overview](/en/docs/agents-and-tools/agent-skills/overview#skill-structure) for complete structure details.
The SKILL.md frontmatter requires `name` (64 characters max) and `description` (1024 characters max) fields. See the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#skill-structure) for complete structure details.

### Token budgets

Keep SKILL.md body under 500 lines for optimal performance. If your content exceeds this, split it into separate files using the progressive disclosure patterns described earlier. For architectural details, see the [Skills overview](/en/docs/agents-and-tools/agent-skills/overview#how-skills-work).
Keep SKILL.md body under 500 lines for optimal performance. If your content exceeds this, split it into separate files using the progressive disclosure patterns described earlier. For architectural details, see the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#how-skills-work).
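The frontmatter limits and the 500-line guidance can both be checked mechanically before sharing a Skill. A rough sketch (the naive frontmatter parsing is an assumption; a real check would use a YAML parser):

```python
def check_skill(skill_md):
    """Return a list of warnings for SKILL.md frontmatter and size limits."""
    warnings = []
    lines = skill_md.splitlines()
    # Naive frontmatter scan: read "key: value" pairs between the --- markers.
    meta = {}
    if lines and lines[0] == "---":
        for line in lines[1:]:
            if line == "---":
                break
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    if len(meta.get("name", "")) > 64:
        warnings.append("name exceeds 64 characters")
    if len(meta.get("description", "")) > 1024:
        warnings.append("description exceeds 1024 characters")
    # Rough line budget: counts the whole file, frontmatter included.
    if len(lines) > 500:
        warnings.append(f"SKILL.md is {len(lines)} lines; aim for under 500")
    return warnings

ok = "---\nname: pdf-forms\ndescription: Fill PDF forms\n---\nUse the bundled scripts."
check_skill(ok)  # returns []
```

A check like this fits naturally in the pre-sharing checklist below.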
## Checklist for effective Skills

@@ -1117,7 +1117,7 @@ Before sharing a Skill, verify:

### Code and scripts

* [ ] Scripts solve problems rather than punt to Claude
* [ ] Scripts solve problems rather than punt to the agent
* [ ] Error handling is explicit and helpful
* [ ] No "voodoo constants" (all values justified)
* [ ] Required packages listed in instructions and verified as available

@@ -1136,15 +1136,15 @@ Before sharing a Skill, verify:
## Next steps

<CardGroup cols={2}>
<Card title="Get started with Agent Skills" icon="rocket" href="/en/docs/agents-and-tools/agent-skills/quickstart">
<Card title="Get started with Agent Skills" icon="rocket" href="https://platform.claude.com/docs/en/agents-and-tools/agent-skills/quickstart">
Create your first Skill
</Card>

<Card title="Use Skills in Claude Code" icon="terminal" href="/en/docs/claude-code/skills">
<Card title="Use Skills in Claude Code" icon="terminal" href="https://code.claude.com/docs/en/skills">
Create and manage Skills in Claude Code
</Card>

<Card title="Use Skills with the API" icon="code" href="/en/api/skills-guide">
<Card title="Use Skills with the API" icon="code" href="https://platform.claude.com/docs/en/build-with-claude/skills-guide">
Upload and use Skills programmatically
</Card>
</CardGroup>
@@ -33,7 +33,7 @@ LLMs respond to the same persuasion principles as humans. Understanding this psy
**How it works in skills:**
- Require announcements: "Announce skill usage"
- Force explicit choices: "Choose A, B, or C"
- Use tracking: TodoWrite for checklists
- Use tracking: todos for checklists

**When to use:**
- Ensuring skills are actually followed
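
The commitment devices listed above can be combined in a few imperative lines of skill text. A minimal sketch (hypothetical skill wording, not taken from any shipped skill):

```markdown
## Before starting

Announce: "I'm using the [skill-name] skill."

Choose exactly one mode and state it: A (quick fix), B (full refactor), or C (defer).

Add every checklist step to your todo list before executing the first one.
```

Each line forces an explicit, trackable commitment before any work begins.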
@@ -80,8 +80,8 @@ LLMs respond to the same persuasion principles as humans. Understanding this psy

**Example:**
```markdown
✅ Checklists without TodoWrite tracking = steps get skipped. Every time.
❌ Some people find TodoWrite helpful for checklists.
✅ Checklists without todo tracking = steps get skipped. Every time.
❌ Some people find a todo list helpful for checklists.
```

### 5. Unity

@@ -406,7 +406,7 @@ async function runTests() {
    assert(template.includes('indicator-bar'), 'Should have indicator bar');
    assert(template.includes('indicator-text'), 'Should have indicator text');
    assert(template.includes('<!-- CONTENT -->'), 'Should have content placeholder');
    assert(template.includes('claude-content'), 'Should have content container');
    assert(template.includes('frame-content'), 'Should have content container');
    return Promise.resolve();
  });

@@ -115,17 +115,12 @@ Full workflow execution test (~10-30 minutes):
- Subagents follow the skill correctly
- Final code is functional and tested

#### test-requesting-code-review.sh
Behavioral test for the code reviewer subagent (~5 minutes):
- Builds a tiny project with a baseline commit
- Adds a second commit that plants two real bugs (SQL injection, plaintext password handling)
- Dispatches the code reviewer via the requesting-code-review skill
- Verifies the reviewer flags the planted bugs at Critical/Important severity and refuses to approve

**What it tests:**
- The skill actually dispatches a working code reviewer subagent
- The reviewer template produces reviewers that catch obvious security bugs
- The reviewer is not sycophantic — it does not approve a diff with planted Critical issues
#### test-worktree-native-preference.sh
RED-GREEN-REFACTOR validation for the using-git-worktrees skill (~5 minutes):
- RED: skill without Step 1a — agent should use `git worktree add`
- GREEN: skill with Step 1a — agent should use the native EnterWorktree tool
- PRESSURE: same as GREEN under urgency framing with pre-existing `.worktrees/`
- Drill scenario `worktree-creation-under-pressure.yaml` covers the PRESSURE phase only

## Adding New Tests
@@ -80,7 +80,6 @@ tests=(
# Integration tests (slow, full execution)
integration_tests=(
  "test-subagent-driven-development-integration.sh"
  "test-requesting-code-review.sh"
)

# Add integration tests if requested
@@ -1,177 +0,0 @@
#!/usr/bin/env bash
# Integration Test: Document Review System
# Actually runs spec/plan review and verifies reviewers catch issues
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"

echo "========================================"
echo " Integration Test: Document Review System"
echo "========================================"
echo ""
echo "This test verifies the document review system by:"
echo " 1. Creating a spec with intentional errors"
echo " 2. Running the spec document reviewer"
echo " 3. Verifying the reviewer catches the errors"
echo ""

# Create test project
TEST_PROJECT=$(create_test_project)
echo "Test project: $TEST_PROJECT"

# Trap to cleanup
trap "cleanup_test_project $TEST_PROJECT" EXIT

cd "$TEST_PROJECT"

# Create directory structure
mkdir -p docs/superpowers/specs

# Create a spec document WITH INTENTIONAL ERRORS for the reviewer to catch
cat > docs/superpowers/specs/test-feature-design.md <<'EOF'
# Test Feature Design

## Overview

This is a test feature that does something useful.

## Requirements

1. The feature should work correctly
2. It should be fast
3. TODO: Add more requirements here

## Architecture

The feature will use a simple architecture with:
- A frontend component
- A backend service
- Error handling will be specified later once we understand the failure modes better

## Data Flow

Data flows from the frontend to the backend.

## Testing Strategy

Tests will be written to cover the main functionality.
EOF

# Initialize git repo
git init --quiet
git config user.email "test@test.com"
git config user.name "Test User"
git add .
git commit -m "Initial commit with test spec" --quiet

echo ""
echo "Created test spec with intentional errors:"
echo " - TODO placeholder in Requirements section"
echo " - 'specified later' deferral in Architecture section"
echo ""
echo "Running spec document reviewer..."
echo ""

# Run Claude to review the spec
OUTPUT_FILE="$TEST_PROJECT/claude-output.txt"

PROMPT="You are testing the spec document reviewer.

Read the spec-document-reviewer-prompt.md template in skills/brainstorming/ to understand the review format.

Then review the spec at $TEST_PROJECT/docs/superpowers/specs/test-feature-design.md using the criteria from that template.

Look for:
- TODOs, placeholders, 'TBD', incomplete sections
- Sections saying 'to be defined later' or 'will spec when X is done'
- Sections noticeably less detailed than others

Output your review in the format specified in the template."

echo "================================================================================"
cd "$SCRIPT_DIR/../.." && timeout 120 claude -p "$PROMPT" --permission-mode bypassPermissions 2>&1 | tee "$OUTPUT_FILE" || {
  echo ""
  echo "================================================================================"
  echo "EXECUTION FAILED (exit code: $?)"
  exit 1
}
echo "================================================================================"

echo ""
echo "Analyzing reviewer output..."
echo ""

# Verification tests
FAILED=0

echo "=== Verification Tests ==="
echo ""

# Test 1: Reviewer found the TODO
echo "Test 1: Reviewer found TODO..."
if grep -qi "TODO" "$OUTPUT_FILE" && grep -qi "requirements\|Requirements" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer identified TODO in Requirements section"
else
  echo " [FAIL] Reviewer did not identify TODO"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 2: Reviewer found the "specified later" deferral
echo "Test 2: Reviewer found 'specified later' deferral..."
if grep -qi "specified later\|later\|defer\|incomplete\|error handling" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer identified deferred content"
else
  echo " [FAIL] Reviewer did not identify deferred content"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 3: Reviewer output includes Issues section
echo "Test 3: Review output format..."
if grep -qi "issues\|Issues" "$OUTPUT_FILE"; then
  echo " [PASS] Review includes Issues section"
else
  echo " [FAIL] Review missing Issues section"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 4: Reviewer did NOT approve (found issues)
echo "Test 4: Reviewer verdict..."
if grep -qi "Issues Found\|❌\|not approved\|issues found" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer correctly found issues (not approved)"
elif grep -qi "Approved\|✅" "$OUTPUT_FILE" && ! grep -qi "Issues Found\|❌" "$OUTPUT_FILE"; then
  echo " [FAIL] Reviewer incorrectly approved spec with errors"
  FAILED=$((FAILED + 1))
else
  echo " [PASS] Reviewer identified problems (ambiguous format but found issues)"
fi
echo ""

# Summary
echo "========================================"
echo " Test Summary"
echo "========================================"
echo ""

if [ $FAILED -eq 0 ]; then
  echo "STATUS: PASSED"
  echo "All verification tests passed!"
  echo ""
  echo "The spec document reviewer correctly:"
  echo " ✓ Found TODO placeholder"
  echo " ✓ Found 'specified later' deferral"
  echo " ✓ Produced properly formatted review"
  echo " ✓ Did not approve spec with errors"
  exit 0
else
  echo "STATUS: FAILED"
  echo "Failed $FAILED verification tests"
  echo ""
  echo "Output saved to: $OUTPUT_FILE"
  echo ""
  echo "Review the output to see what went wrong."
  exit 1
fi
@@ -1,214 +0,0 @@
#!/usr/bin/env bash
# Integration Test: requesting-code-review skill
# Verifies the code reviewer dispatched via the skill catches a planted bug
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
source "$SCRIPT_DIR/test-helpers.sh"

echo "========================================"
echo " Integration Test: requesting-code-review"
echo "========================================"
echo ""
echo "This test verifies the code reviewer subagent by:"
echo " 1. Setting up a tiny project with a baseline commit"
echo " 2. Adding a second commit that plants an obvious bug"
echo " 3. Dispatching the code reviewer via the requesting-code-review skill"
echo " 4. Verifying the reviewer flags the planted bug as Critical/Important"
echo ""

TEST_PROJECT=$(create_test_project)
echo "Test project: $TEST_PROJECT"
trap "cleanup_test_project $TEST_PROJECT" EXIT

cd "$TEST_PROJECT"

# Baseline: a small "safe" implementation
mkdir -p src
cat > src/db.js <<'EOF'
import { Database } from "./database-driver.js";

const db = new Database();

export async function findUserByEmail(email) {
  if (typeof email !== "string" || !email) {
    throw new Error("email required");
  }
  return db.query(
    "SELECT id, email, created_at FROM users WHERE email = ?",
    [email],
  );
}
EOF

cat > package.json <<'EOF'
{ "name": "test-codereview", "version": "1.0.0", "type": "module" }
EOF

git init --quiet
git config user.email "test@test.com"
git config user.name "Test User"
git add .
git commit -m "Initial: parameterized findUserByEmail" --quiet
BASE_SHA=$(git rev-parse HEAD)

# Second commit: plant two real bugs
# 1. SQL injection — switch from parameterized to string concatenation
# 2. Logs the user's password hash on every successful login
cat > src/db.js <<'EOF'
import { Database } from "./database-driver.js";

const db = new Database();

export async function findUserByEmail(email) {
  return db.query(
    "SELECT id, email, password_hash, created_at FROM users WHERE email = '" + email + "'",
  );
}

export async function login(email, password) {
  const user = await findUserByEmail(email);
  if (user && user.password_hash === hash(password)) {
    console.log("login success", { email, password_hash: user.password_hash });
    return user;
  }
  return null;
}

function hash(s) { return s; }
EOF

git add .
git commit -m "Refactor user lookup, add login" --quiet
HEAD_SHA=$(git rev-parse HEAD)

echo ""
echo "Planted bugs in $BASE_SHA..$HEAD_SHA:"
echo " - SQL injection (string concat instead of parameterized query)"
echo " - Password hash logged in plaintext on every successful login"
echo " - hash() is the identity function (passwords stored & compared in plaintext)"
echo ""

OUTPUT_FILE="$TEST_PROJECT/claude-output.txt"

PROMPT="I just finished a refactor. The change is between commits $BASE_SHA and $HEAD_SHA on the current branch.

Use the superpowers:requesting-code-review skill to review these changes before I merge. Follow the skill exactly: dispatch the code reviewer subagent with the template, give the subagent the SHA range, and report back what it found.

Print the reviewer's full output."

# Run claude from inside the test project so its session JSONL lands in a
# project-specific directory under ~/.claude/projects/, isolated from any
# other concurrent claude sessions.
echo "Running Claude (plugin-dir: $PLUGIN_DIR, cwd: $TEST_PROJECT)..."
echo "================================================================================"
cd "$TEST_PROJECT" && timeout 600 claude -p "$PROMPT" \
  --plugin-dir "$PLUGIN_DIR" \
  --permission-mode bypassPermissions 2>&1 | tee "$OUTPUT_FILE" || {
  echo ""
  echo "================================================================================"
  echo "EXECUTION FAILED (exit code: $?)"
  exit 1
}
echo "================================================================================"

echo ""
echo "Analyzing reviewer output..."
echo ""

# Find the session transcript. Because we ran claude from $TEST_PROJECT (a
# unique tmp dir), its sessions live in their own ~/.claude/projects/ folder.
# Resolve the real path (macOS mktemp returns /var/... but claude normalizes
# it to /private/var/...) and replicate claude's normalization (every
# non-alphanumeric char becomes `-`).
TEST_PROJECT_REAL=$(cd "$TEST_PROJECT" && pwd -P)
SESSION_DIR="$HOME/.claude/projects/$(echo "$TEST_PROJECT_REAL" | sed 's|[^a-zA-Z0-9]|-|g')"
# `|| true` prevents pipefail killing the script if ls gets SIGPIPE'd by head.
SESSION_FILE=$(ls -t "$SESSION_DIR"/*.jsonl 2>/dev/null | head -1 || true)

FAILED=0

echo "=== Verification Tests ==="
echo ""

# Test 1: Skill was actually invoked, and a subagent was actually dispatched
echo "Test 1: requesting-code-review skill invoked + reviewer subagent dispatched..."
if [ -z "$SESSION_FILE" ] || [ ! -f "$SESSION_FILE" ]; then
  echo " [FAIL] Could not locate session transcript in $SESSION_DIR"
  FAILED=$((FAILED + 1))
elif ! grep -q '"skill":"superpowers:requesting-code-review"' "$SESSION_FILE"; then
  echo " [FAIL] requesting-code-review skill was not invoked"
  echo " Session: $SESSION_FILE"
  FAILED=$((FAILED + 1))
elif ! grep -q '"name":"Agent"' "$SESSION_FILE"; then
  echo " [FAIL] Skill ran but no subagent was dispatched"
  FAILED=$((FAILED + 1))
else
  echo " [PASS] Skill invoked and subagent dispatched"
fi
echo ""

# Test 2: Reviewer caught the SQL injection
echo "Test 2: SQL injection flagged..."
if grep -qiE "sql injection|injection|string concat|parameterize|prepared statement|sanitiz" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer flagged the SQL injection vector"
else
  echo " [FAIL] Reviewer missed the SQL injection — most obvious planted bug"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 3: Reviewer caught the credential / password issue (either logging or no real hashing)
echo "Test 3: Credential handling issue flagged..."
if grep -qiE "password|credential|secret|plaintext|log.*hash|hash.*log|sensitive" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer flagged a credential / password handling issue"
else
  echo " [FAIL] Reviewer missed the password/credential issues"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 4: Reviewer marked at least one issue as Critical or Important (not just Minor)
echo "Test 4: Severity classification..."
if grep -qiE "critical|important|severe|high.*risk|security" "$OUTPUT_FILE"; then
  echo " [PASS] Reviewer classified findings at Critical/Important severity"
else
  echo " [FAIL] Reviewer did not classify findings as Critical or Important"
  FAILED=$((FAILED + 1))
fi
echo ""

# Test 5: Reviewer did NOT approve the diff for merge
echo "Test 5: Reviewer verdict..."
# A correct reviewer says No or "With fixes". A broken/sycophantic reviewer says Yes/Ready.
if grep -qiE "ready to merge.*yes|approved.*for merge|^\s*yes\s*$|safe to merge" "$OUTPUT_FILE" \
   && ! grep -qiE "ready to merge.*no|with fixes|do not merge|not ready|block.*merge" "$OUTPUT_FILE"; then
  echo " [FAIL] Reviewer approved a diff with planted Critical bugs"
  FAILED=$((FAILED + 1))
else
  echo " [PASS] Reviewer did not approve the diff"
fi
echo ""

echo "========================================"
echo " Test Summary"
echo "========================================"
echo ""

if [ $FAILED -eq 0 ]; then
  echo "STATUS: PASSED"
  echo "The code reviewer correctly:"
  echo " ✓ Was dispatched via the requesting-code-review skill"
  echo " ✓ Flagged the SQL injection"
  echo " ✓ Flagged the credential handling issues"
  echo " ✓ Classified findings at Critical/Important severity"
  echo " ✓ Did not approve the diff for merge"
  exit 0
else
  echo "STATUS: FAILED"
  echo "Failed $FAILED verification tests"
  echo ""
  echo "Output saved to: $OUTPUT_FILE"
  exit 1
fi
@@ -1,6 +1,17 @@
#!/usr/bin/env bash
# Integration Test: subagent-driven-development workflow
# Actually executes a plan and verifies the new workflow behaviors
#
# Drill coverage: evals/scenarios/sdd-rejects-extra-features.yaml covers the
# YAGNI enforcement subset (forbidden exports + reviewer-as-gate semantics)
# and is stricter on that axis. This bash test additionally asserts:
# - >=3 git commits (initial + per-task commits, exercising SDD's
#   commit-per-task workflow shape)
# - >=2 Claude Code subagent dispatches via Agent or Task (drill only asserts >=1)
# - Claude Code task-tracking tool usage (drill makes no assertion)
# - test/math.test.js exists (drill relies on `npm test` succeeding)
# - analyze-token-usage.py token-budget telemetry
# Kept until those assertions are added to drill or explicitly retired.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
@@ -213,13 +224,13 @@ else
fi
echo ""

# Test 3: TodoWrite was used for tracking
# Test 3: Claude Code task-tracking tool was used
echo "Test 3: Task tracking..."
todo_count=$(grep -c '"name":"TodoWrite"' "$SESSION_FILE" || echo "0")
todo_count=$(grep -cE '"name":"(TodoWrite|TaskCreate|TaskUpdate|TaskList|TaskGet)"' "$SESSION_FILE" || echo "0")
if [ "$todo_count" -ge 1 ]; then
  echo " [PASS] TodoWrite used $todo_count time(s) for task tracking"
  echo " [PASS] Task tracking used $todo_count time(s)"
else
  echo " [FAIL] TodoWrite not used"
  echo " [FAIL] No Claude Code task-tracking tool used"
  FAILED=$((FAILED + 1))
fi
echo ""
@@ -1,6 +1,12 @@
#!/usr/bin/env bash
# Test: subagent-driven-development skill
# Verifies that the skill is loaded and follows correct workflow
#
# No drill coverage: this test asks the agent to *describe* SDD (string-
# matches its verbal explanation against expected keywords like
# "self-review", "skeptical", "worktree", "Step 1", "loop"). Drill scenarios
# test behavior (real subagent dispatch, plan-following, review loops),
# not description-recall. Kept by design.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
@@ -2,6 +2,11 @@
# Test: Does the agent prefer native worktree tools (EnterWorktree) over git worktree add?
# Framework: RED-GREEN-REFACTOR per testing-skills-with-subagents.md
#
# Drill coverage: evals/scenarios/worktree-creation-under-pressure.yaml lifts
# only the PRESSURE phase (existing .worktrees/ + urgency framing). The RED
# and GREEN baselines below are not covered by drill — kept here so the
# RED-GREEN-REFACTOR validation remains rerunnable end-to-end.
#
# RED: Skill without Step 1a (no native tool preference). Agent should use git worktree add.
# GREEN: Skill with Step 1a (explicit tool naming + consent bridge). Agent should use EnterWorktree.
# PRESSURE: Same as GREEN but under time pressure with existing .worktrees/ dir.
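
Each phase's pass/fail check reduces to grepping the session transcript for tool-use events. A minimal self-contained sketch of that check (the transcript path and its one-line fixture are hypothetical, mimicking Claude Code's stream-json `tool_use` shape; the real test inspects an actual session JSONL):

```shell
# Hypothetical single-event transcript: the agent shelled out to git.
cat > /tmp/worktree-transcript.jsonl <<'EOF'
{"type":"tool_use","name":"Bash","input":{"command":"git worktree add .worktrees/fix-123"}}
EOF

# RED-phase assertion: without Step 1a, expect a Bash call running `git worktree add`.
if grep -q '"name":"Bash"' /tmp/worktree-transcript.jsonl \
   && grep -q 'git worktree add' /tmp/worktree-transcript.jsonl; then
  echo "RED: agent used git worktree add"
fi

# GREEN-phase assertion: with Step 1a, expect the native tool instead.
if grep -q '"name":"EnterWorktree"' /tmp/worktree-transcript.jsonl; then
  echo "GREEN: agent used EnterWorktree"
fi
```

With this fixture only the RED line prints; a GREEN-phase transcript would contain an `EnterWorktree` event and trigger the second branch instead.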

@@ -177,6 +177,7 @@ write_upstream_fixture() {
    "$repo/.codex-plugin" \
    "$repo/.private-journal" \
    "$repo/assets" \
    "$repo/evals/drill" \
    "$repo/scripts" \
    "$repo/skills/example"

@@ -215,6 +216,7 @@
EOF

  printf 'png fixture\n' > "$repo/assets/app-icon.png"
  printf 'eval harness fixture\n' > "$repo/evals/drill/README.md"

  cat > "$repo/skills/example/SKILL.md" <<'EOF'
# Example Skill
@@ -233,6 +235,7 @@ EOF
    .gitignore \
    assets/app-icon.png \
    assets/superpowers-small.svg \
    evals/drill/README.md \
    package.json \
    scripts/sync-to-codex-plugin.sh \
    skills/example/SKILL.md
@@ -542,6 +545,7 @@ main() {
  assert_contains "$preview_section" ".private-journal/keep.txt" "Preview includes tracked ignored file"
  assert_not_contains "$preview_section" ".private-journal/leak.txt" "Preview excludes ignored untracked file"
  assert_not_contains "$preview_section" "ignored-cache/" "Preview excludes pure ignored directories"
  assert_not_contains "$preview_section" "evals/" "Preview excludes eval harness"
  assert_not_contains "$preview_output" "Overlay file (.codex-plugin/plugin.json) will be regenerated" "Preview omits overlay regeneration note"
  assert_not_contains "$preview_output" "Assets (superpowers-small.svg, app-icon.png) will be seeded from" "Preview omits assets seeding note"
  assert_contains "$preview_section" "skills/example/SKILL.md" "Preview reflects dirty tracked destination file"
@@ -1,100 +0,0 @@
#!/usr/bin/env bash
# Test where Claude explicitly describes subagent-driven-development before user requests it
# This mimics the original failure scenario

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"

TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/explicit-skill-requests/claude-describes"
mkdir -p "$OUTPUT_DIR"

PROJECT_DIR="$OUTPUT_DIR/project"
mkdir -p "$PROJECT_DIR/docs/superpowers/plans"

echo "=== Test: Claude Describes SDD First ==="
echo "Output dir: $OUTPUT_DIR"
echo ""

cd "$PROJECT_DIR"

# Create a plan
cat > "$PROJECT_DIR/docs/superpowers/plans/auth-system.md" << 'EOF'
# Auth System Implementation Plan

## Task 1: Add User Model
Create user model with email and password fields.

## Task 2: Add Auth Routes
Create login and register endpoints.

## Task 3: Add JWT Middleware
Protect routes with JWT validation.
EOF

# Turn 1: Have Claude describe execution options including SDD
echo ">>> Turn 1: Ask Claude to describe execution options..."
claude -p "I have a plan at docs/superpowers/plans/auth-system.md. Tell me about my options for executing it, including what subagent-driven-development means and how it works." \
  --model haiku \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --max-turns 3 \
  --output-format stream-json \
  > "$OUTPUT_DIR/turn1.json" 2>&1 || true
echo "Done."

# Turn 2: THE CRITICAL TEST - now that Claude has explained it
echo ">>> Turn 2: Request subagent-driven-development..."
FINAL_LOG="$OUTPUT_DIR/turn2.json"
claude -p "subagent-driven-development, please" \
  --continue \
  --model haiku \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --max-turns 2 \
  --output-format stream-json \
  > "$FINAL_LOG" 2>&1 || true
echo "Done."
echo ""

echo "=== Results ==="

# Check Turn 1 to see if Claude described SDD
echo "Turn 1 - Claude's description of options (excerpt):"
grep '"type":"assistant"' "$OUTPUT_DIR/turn1.json" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 800 || echo " (could not extract)"
echo ""
echo "---"
echo ""

# Check final turn
SKILL_PATTERN='"skill":"([^"]*:)?subagent-driven-development"'
if grep -q '"name":"Skill"' "$FINAL_LOG" && grep -qE "$SKILL_PATTERN" "$FINAL_LOG"; then
  echo "PASS: Skill was triggered after Claude described it"
  TRIGGERED=true
else
  echo "FAIL: Skill was NOT triggered (Claude may have thought it already knew)"
  TRIGGERED=false

  echo ""
  echo "Tools invoked in final turn:"
  grep '"type":"tool_use"' "$FINAL_LOG" | grep -o '"name":"[^"]*"' | sort -u | head -10 || echo " (none)"

  echo ""
  echo "Final turn response:"
  grep '"type":"assistant"' "$FINAL_LOG" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 800 || echo " (could not extract)"
fi

echo ""
echo "Skills triggered in final turn:"
grep -o '"skill":"[^"]*"' "$FINAL_LOG" 2>/dev/null | sort -u || echo " (none)"

echo ""
echo "Logs in: $OUTPUT_DIR"

if [ "$TRIGGERED" = "true" ]; then
  exit 0
else
  exit 1
fi
@@ -109,7 +109,7 @@ if [ -n "$FIRST_SKILL_LINE" ]; then
  PREMATURE_TOOLS=$(head -n "$FIRST_SKILL_LINE" "$TURN3_LOG" | \
    grep '"type":"tool_use"' | \
    grep -v '"name":"Skill"' | \
    grep -v '"name":"TodoWrite"' || true)
    grep -vE '"name":"(TodoWrite|TaskCreate|TaskUpdate|TaskList|TaskGet)"' || true)
  if [ -n "$PREMATURE_TOOLS" ]; then
    echo "WARNING: Tools invoked BEFORE Skill tool in Turn 3:"
    echo "$PREMATURE_TOOLS" | head -5
@@ -103,11 +103,11 @@ echo "Checking for premature action..."
FIRST_SKILL_LINE=$(grep -n '"name":"Skill"' "$LOG_FILE" | head -1 | cut -d: -f1)
if [ -n "$FIRST_SKILL_LINE" ]; then
  # Check if any non-Skill, non-system tools were invoked before the first Skill invocation
  # Filter out system messages, TodoWrite (planning is ok), and other non-action tools
  # Filter out task tracking tools (planning is ok) and other non-action tools
  PREMATURE_TOOLS=$(head -n "$FIRST_SKILL_LINE" "$LOG_FILE" | \
    grep '"type":"tool_use"' | \
    grep -v '"name":"Skill"' | \
    grep -v '"name":"TodoWrite"' || true)
    grep -vE '"name":"(TodoWrite|TaskCreate|TaskUpdate|TaskList|TaskGet)"' || true)
  if [ -n "$PREMATURE_TOOLS" ]; then
    echo "WARNING: Tools invoked BEFORE Skill tool:"
    echo "$PREMATURE_TOOLS" | head -5
@@ -44,6 +44,10 @@ const result = {
  scenario,
  firstBootstrapParts: countBootstrapParts(firstOutput),
  secondBootstrapParts: countBootstrapParts(secondOutput),
  staleMentionMapping: bootstrapText(firstOutput).includes('@mention'),
  staleTaskMapping: bootstrapText(firstOutput).includes('`Task` tool with subagents'),
  mapsSubagentToTask: bootstrapText(firstOutput).includes('`task` with `subagent_type: "general"`'),
  mapsMutationToApplyPatch: bootstrapText(firstOutput).includes('`apply_patch`'),
  firstReadCount: afterFirst.readCount,
  secondReadCount: afterSecond.readCount,
  firstExistsCount: afterFirst.existsCount,
@@ -83,6 +87,12 @@ function countBootstrapParts(output) {
  ).length;
}

function bootstrapText(output) {
  return output.messages[0].parts.find(
    (part) => part.type === 'text' && part.text.includes('EXTREMELY_IMPORTANT')
  )?.text || '';
}

function assertPresentBootstrap(result) {
  const failures = [];
  if (result.firstBootstrapParts !== 1) {
@@ -100,6 +110,18 @@ function assertPresentBootstrap(result) {
  if (result.secondExistsCount !== result.firstExistsCount) {
    failures.push(`expected cached second transform to do no additional exists checks, got ${result.secondExistsCount - result.firstExistsCount}`);
  }
  if (result.staleMentionMapping) {
    failures.push('expected OpenCode bootstrap not to teach @mention subagent syntax');
  }
  if (result.staleTaskMapping) {
    failures.push('expected OpenCode bootstrap not to teach stale Task-tool mapping');
  }
  if (!result.mapsSubagentToTask) {
    failures.push('expected OpenCode bootstrap to map general-purpose subagents to task with subagent_type');
  }
  if (!result.mapsMutationToApplyPatch) {
    failures.push('expected OpenCode bootstrap to map file mutation to apply_patch');
  }
  return failures;
}

@@ -1,8 +0,0 @@
|
||||
I have 4 independent test failures happening in different modules:
|
||||
|
||||
1. tests/auth/login.test.ts - "should redirect after login" is failing
|
||||
2. tests/api/users.test.ts - "should return user list" returns 500
|
||||
3. tests/components/Button.test.tsx - snapshot mismatch
|
||||
4. tests/utils/date.test.ts - timezone handling broken
|
||||
|
||||
These are unrelated issues in different parts of the codebase. Can you investigate all of them?
|
||||
@@ -1 +0,0 @@
|
||||
I have a plan document at docs/superpowers/plans/2024-01-15-auth-system.md that needs to be executed. Please implement it.
|
||||
@@ -1,3 +0,0 @@
|
||||
I just finished implementing the user authentication feature. All the code is committed. Can you review the changes before I merge to main?
|
||||
|
||||
The commits are between abc123 and def456.
@@ -1,11 +0,0 @@
The tests are failing with this error:

```
FAIL src/utils/parser.test.ts
  ● Parser › should handle nested objects
    TypeError: Cannot read property 'value' of undefined
      at parse (src/utils/parser.ts:42:18)
      at Object.<anonymous> (src/utils/parser.test.ts:28:20)
```

Can you figure out what's going wrong and fix it?
@@ -1,7 +0,0 @@
I need to add a new feature to validate email addresses. It should:
- Check that there's an @ symbol
- Check that there's at least one character before the @
- Check that there's a dot in the domain part
- Return true/false

Can you implement this?
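The four rules in this prompt translate nearly line-for-line into code. A minimal sketch of the behavior being asked for (the function name `isValidEmail` is illustrative, not part of the prompt, and the check is deliberately no stricter than the listed rules):

```typescript
// Hypothetical validator implementing exactly the prompt's rules, nothing more.
function isValidEmail(email: string): boolean {
  const at = email.indexOf("@");
  if (at < 1) return false;           // '@' must exist, with at least one char before it
  const domain = email.slice(at + 1);
  return domain.includes(".");        // domain part must contain a dot
}
```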
@@ -1,10 +0,0 @@
Here's the spec for our new authentication system:

Requirements:
- Users can register with email/password
- Users can log in and receive a JWT token
- Protected routes require valid JWT
- Tokens expire after 24 hours
- Support password reset via email

We need to implement this. There are multiple steps involved - user model, auth routes, middleware, email service integration.
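Of the requirements above, the 24-hour expiry is pure arithmetic on the token payload. A sketch of just that bookkeeping (field names follow common JWT convention; signing and verification are assumed to happen elsewhere and are out of scope here):

```typescript
// Sketch of expiry bookkeeping only; signing/verification omitted.
interface TokenPayload {
  sub: string; // user id
  iat: number; // issued-at, seconds since epoch
  exp: number; // expiry, seconds since epoch
}

const TOKEN_TTL_SECONDS = 24 * 60 * 60; // "tokens expire after 24 hours"

function issuePayload(sub: string, nowSeconds: number): TokenPayload {
  return { sub, iat: nowSeconds, exp: nowSeconds + TOKEN_TTL_SECONDS };
}

function isExpired(payload: TokenPayload, nowSeconds: number): boolean {
  return nowSeconds >= payload.exp;
}
```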
@@ -1,60 +0,0 @@
#!/usr/bin/env bash
# Run all skill triggering tests
# Usage: ./run-all.sh

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROMPTS_DIR="$SCRIPT_DIR/prompts"

SKILLS=(
  "systematic-debugging"
  "test-driven-development"
  "writing-plans"
  "dispatching-parallel-agents"
  "executing-plans"
  "requesting-code-review"
)

echo "=== Running Skill Triggering Tests ==="
echo ""

PASSED=0
FAILED=0
RESULTS=()

for skill in "${SKILLS[@]}"; do
  prompt_file="$PROMPTS_DIR/${skill}.txt"

  if [ ! -f "$prompt_file" ]; then
    echo "⚠️ SKIP: No prompt file for $skill"
    continue
  fi

  echo "Testing: $skill"

  if "$SCRIPT_DIR/run-test.sh" "$skill" "$prompt_file" 3 2>&1 | tee /tmp/skill-test-$skill.log; then
    PASSED=$((PASSED + 1))
    RESULTS+=("✅ $skill")
  else
    FAILED=$((FAILED + 1))
    RESULTS+=("❌ $skill")
  fi

  echo ""
  echo "---"
  echo ""
done

echo ""
echo "=== Summary ==="
for result in "${RESULTS[@]}"; do
  echo " $result"
done
echo ""
echo "Passed: $PASSED"
echo "Failed: $FAILED"

if [ $FAILED -gt 0 ]; then
  exit 1
fi
@@ -1,88 +0,0 @@
#!/usr/bin/env bash
# Test skill triggering with naive prompts
# Usage: ./run-test.sh <skill-name> <prompt-file>
#
# Tests whether Claude triggers a skill based on a natural prompt
# (without explicitly mentioning the skill)

set -e

SKILL_NAME="$1"
PROMPT_FILE="$2"
MAX_TURNS="${3:-3}"

if [ -z "$SKILL_NAME" ] || [ -z "$PROMPT_FILE" ]; then
  echo "Usage: $0 <skill-name> <prompt-file> [max-turns]"
  echo "Example: $0 systematic-debugging ./test-prompts/debugging.txt"
  exit 1
fi

# Get the directory where this script lives (should be tests/skill-triggering)
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Get the superpowers plugin root (two levels up from tests/skill-triggering)
PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"

TIMESTAMP=$(date +%s)
OUTPUT_DIR="/tmp/superpowers-tests/${TIMESTAMP}/skill-triggering/${SKILL_NAME}"
mkdir -p "$OUTPUT_DIR"

# Read prompt from file
PROMPT=$(cat "$PROMPT_FILE")

echo "=== Skill Triggering Test ==="
echo "Skill: $SKILL_NAME"
echo "Prompt file: $PROMPT_FILE"
echo "Max turns: $MAX_TURNS"
echo "Output dir: $OUTPUT_DIR"
echo ""

# Copy prompt for reference
cp "$PROMPT_FILE" "$OUTPUT_DIR/prompt.txt"

# Run Claude
LOG_FILE="$OUTPUT_DIR/claude-output.json"
cd "$OUTPUT_DIR"

echo "Plugin dir: $PLUGIN_DIR"
echo "Running claude -p with naive prompt..."
timeout 300 claude -p "$PROMPT" \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --max-turns "$MAX_TURNS" \
  --output-format stream-json \
  > "$LOG_FILE" 2>&1 || true

echo ""
echo "=== Results ==="

# Check if skill was triggered (look for Skill tool invocation)
# In stream-json, tool invocations have "name":"Skill" (not "tool":"Skill")
# Match either "skill":"skillname" or "skill":"namespace:skillname"
SKILL_PATTERN='"skill":"([^"]*:)?'"${SKILL_NAME}"'"'
if grep -q '"name":"Skill"' "$LOG_FILE" && grep -qE "$SKILL_PATTERN" "$LOG_FILE"; then
  echo "✅ PASS: Skill '$SKILL_NAME' was triggered"
  TRIGGERED=true
else
  echo "❌ FAIL: Skill '$SKILL_NAME' was NOT triggered"
  TRIGGERED=false
fi
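The SKILL_PATTERN logic is the subtle part of this script: the namespace prefix is optional, and the quoting has to survive the shell. The same regex, exercised outside the shell for clarity (this assumes the skill name contains no regex metacharacters beyond `-`, which holds for the skills listed in run-all.sh):

```typescript
// Mirrors the shell pattern: matches "skill":"<name>" or "skill":"<ns>:<name>".
function skillPattern(skillName: string): RegExp {
  return new RegExp(`"skill":"([^"]*:)?${skillName}"`);
}
```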

# Show what skills WERE triggered
echo ""
echo "Skills triggered in this run:"
grep -o '"skill":"[^"]*"' "$LOG_FILE" 2>/dev/null | sort -u || echo " (none)"

# Show first assistant message
echo ""
echo "First assistant response (truncated):"
grep '"type":"assistant"' "$LOG_FILE" | head -1 | jq -r '.message.content[0].text // .message.content' 2>/dev/null | head -c 500 || echo " (could not extract)"

echo ""
echo "Full log: $LOG_FILE"
echo "Timestamp: $TIMESTAMP"

if [ "$TRIGGERED" = "true" ]; then
  exit 0
else
  exit 1
fi
@@ -1,81 +0,0 @@
# Go Fractals CLI - Design

## Overview

A command-line tool that generates ASCII art fractals. Supports two fractal types with configurable output.

## Usage

```bash
# Sierpinski triangle
fractals sierpinski --size 32 --depth 5

# Mandelbrot set
fractals mandelbrot --width 80 --height 24 --iterations 100

# Custom character
fractals sierpinski --size 16 --char '#'

# Help
fractals --help
fractals sierpinski --help
```

## Commands

### `sierpinski`

Generates a Sierpinski triangle using recursive subdivision.

Flags:
- `--size` (default: 32) - Width of the triangle base in characters
- `--depth` (default: 5) - Recursion depth
- `--char` (default: '*') - Character to use for filled points

Output: Triangle printed to stdout, one line per row.
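The design specifies recursive subdivision but leaves the details open. For intuition about the target shape, here is a compact alternative generator (the Pascal's-triangle parity rule, not the subdivision scheme the design asks for — shown only to make the expected output concrete):

```typescript
// Row r has a filled cell at column k exactly when C(r, k) is odd,
// i.e. when (r & k) === k (Lucas' theorem). Rows of a Sierpinski triangle.
function sierpinskiRows(rows: number, char: string = "*"): string[] {
  const out: string[] = [];
  for (let r = 0; r < rows; r++) {
    let line = "";
    for (let k = 0; k <= r; k++) {
      line += (r & k) === k ? char : " ";
    }
    out.push(line);
  }
  return out;
}
```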

### `mandelbrot`

Renders the Mandelbrot set as ASCII art. Maps iteration count to characters.

Flags:
- `--width` (default: 80) - Output width in characters
- `--height` (default: 24) - Output height in characters
- `--iterations` (default: 100) - Maximum iterations for escape calculation
- `--char` (default: gradient) - Single character, or omit for gradient " .:-=+*#%@"

Output: Rectangle printed to stdout.
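The iteration count that drives the gradient comes from the standard escape-time loop. A sketch of that core (the plane-to-pixel mapping and the rendering are left to the implementation):

```typescript
// Escape-time count for c = cr + ci·i: iterate z = z² + c from z = 0
// until |z| > 2 (the point escapes) or maxIter is reached (treated as inside).
function escapeCount(cr: number, ci: number, maxIter: number): number {
  let zr = 0;
  let zi = 0;
  for (let i = 0; i < maxIter; i++) {
    const zr2 = zr * zr;
    const zi2 = zi * zi;
    if (zr2 + zi2 > 4) return i; // |z|² > 4 ⇔ |z| > 2
    const nextZr = zr2 - zi2 + cr;
    zi = 2 * zr * zi + ci;
    zr = nextZr;
  }
  return maxIter;
}
```

Scaling `count / maxIter` into the " .:-=+*#%@" gradient then produces the ASCII shading.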

## Architecture

```
cmd/
  fractals/
    main.go # Entry point, CLI setup
internal/
  sierpinski/
    sierpinski.go # Algorithm
    sierpinski_test.go
  mandelbrot/
    mandelbrot.go # Algorithm
    mandelbrot_test.go
  cli/
    root.go # Root command, help
    sierpinski.go # Sierpinski subcommand
    mandelbrot.go # Mandelbrot subcommand
```

## Dependencies

- Go 1.21+
- `github.com/spf13/cobra` for CLI

## Acceptance Criteria

1. `fractals --help` shows usage
2. `fractals sierpinski` outputs a recognizable triangle
3. `fractals mandelbrot` outputs a recognizable Mandelbrot set
4. `--size`, `--width`, `--height`, `--depth`, `--iterations` flags work
5. `--char` customizes output character
6. Invalid inputs produce clear error messages
7. All tests pass
@@ -1,172 +0,0 @@
# Go Fractals CLI - Implementation Plan

Execute this plan using the `superpowers:subagent-driven-development` skill.

## Context

Building a CLI tool that generates ASCII fractals. See `design.md` for full specification.

## Tasks

### Task 1: Project Setup

Create the Go module and directory structure.

**Do:**
- Initialize `go.mod` with module name `github.com/superpowers-test/fractals`
- Create directory structure: `cmd/fractals/`, `internal/sierpinski/`, `internal/mandelbrot/`, `internal/cli/`
- Create minimal `cmd/fractals/main.go` that prints "fractals cli"
- Add `github.com/spf13/cobra` dependency

**Verify:**
- `go build ./cmd/fractals` succeeds
- `./fractals` prints "fractals cli"

---

### Task 2: CLI Framework with Help

Set up Cobra root command with help output.

**Do:**
- Create `internal/cli/root.go` with root command
- Configure help text showing available subcommands
- Wire root command into `main.go`

**Verify:**
- `./fractals --help` shows usage with "sierpinski" and "mandelbrot" listed as available commands
- `./fractals` (no args) shows help

---

### Task 3: Sierpinski Algorithm

Implement the Sierpinski triangle generation algorithm.

**Do:**
- Create `internal/sierpinski/sierpinski.go`
- Implement `Generate(size, depth int, char rune) []string` that returns lines of the triangle
- Use recursive midpoint subdivision algorithm
- Create `internal/sierpinski/sierpinski_test.go` with tests:
  - Small triangle (size=4, depth=2) matches expected output
  - Size=1 returns single character
  - Depth=0 returns filled triangle

**Verify:**
- `go test ./internal/sierpinski/...` passes

---

### Task 4: Sierpinski CLI Integration

Wire the Sierpinski algorithm to a CLI subcommand.

**Do:**
- Create `internal/cli/sierpinski.go` with `sierpinski` subcommand
- Add flags: `--size` (default 32), `--depth` (default 5), `--char` (default '*')
- Call `sierpinski.Generate()` and print result to stdout

**Verify:**
- `./fractals sierpinski` outputs a triangle
- `./fractals sierpinski --size 16 --depth 3` outputs smaller triangle
- `./fractals sierpinski --help` shows flag documentation

---

### Task 5: Mandelbrot Algorithm

Implement the Mandelbrot set ASCII renderer.

**Do:**
- Create `internal/mandelbrot/mandelbrot.go`
- Implement `Render(width, height, maxIter int, char string) []string`
- Map complex plane region (-2.5 to 1.0 real, -1.0 to 1.0 imaginary) to output dimensions
- Map iteration count to character gradient " .:-=+*#%@" (or single char if provided)
- Create `internal/mandelbrot/mandelbrot_test.go` with tests:
  - Output dimensions match requested width/height
  - Known point inside set (0,0) maps to max-iteration character
  - Known point outside set (2,0) maps to low-iteration character

**Verify:**
- `go test ./internal/mandelbrot/...` passes

---

### Task 6: Mandelbrot CLI Integration

Wire the Mandelbrot algorithm to a CLI subcommand.

**Do:**
- Create `internal/cli/mandelbrot.go` with `mandelbrot` subcommand
- Add flags: `--width` (default 80), `--height` (default 24), `--iterations` (default 100), `--char` (default "")
- Call `mandelbrot.Render()` and print result to stdout

**Verify:**
- `./fractals mandelbrot` outputs recognizable Mandelbrot set
- `./fractals mandelbrot --width 40 --height 12` outputs smaller version
- `./fractals mandelbrot --help` shows flag documentation

---

### Task 7: Character Set Configuration

Ensure `--char` flag works consistently across both commands.

**Do:**
- Verify Sierpinski `--char` flag passes character to algorithm
- For Mandelbrot, `--char` should use single character instead of gradient
- Add tests for custom character output

**Verify:**
- `./fractals sierpinski --char '#'` uses '#' character
- `./fractals mandelbrot --char '.'` uses '.' for all filled points
- Tests pass

---

### Task 8: Input Validation and Error Handling

Add validation for invalid inputs.

**Do:**
- Sierpinski: size must be > 0, depth must be >= 0
- Mandelbrot: width/height must be > 0, iterations must be > 0
- Return clear error messages for invalid inputs
- Add tests for error cases

**Verify:**
- `./fractals sierpinski --size 0` prints error, exits non-zero
- `./fractals mandelbrot --width -1` prints error, exits non-zero
- Error messages are clear and helpful

---

### Task 9: Integration Tests

Add integration tests that invoke the CLI.

**Do:**
- Create `cmd/fractals/main_test.go` or `test/integration_test.go`
- Test full CLI invocation for both commands
- Verify output format and exit codes
- Test error cases return non-zero exit

**Verify:**
- `go test ./...` passes all tests including integration tests

---

### Task 10: README

Document usage and examples.

- Create `README.md` with:
  - Project description
  - Installation: `go install ./cmd/fractals`
  - Usage examples for both commands
  - Example output (small samples)

**Verify:**
- README accurately describes the tool
- Examples in README actually work
@@ -1,45 +0,0 @@
#!/usr/bin/env bash
# Scaffold the Go Fractals test project
# Usage: ./scaffold.sh /path/to/target/directory

set -e

TARGET_DIR="${1:?Usage: $0 <target-directory>}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"

# Create target directory
mkdir -p "$TARGET_DIR"
cd "$TARGET_DIR"

# Initialize git repo
git init

# Copy design and plan
cp "$SCRIPT_DIR/design.md" .
cp "$SCRIPT_DIR/plan.md" .

# Create .claude settings to allow reads/writes in this directory
mkdir -p .claude
cat > .claude/settings.local.json << 'SETTINGS'
{
  "permissions": {
    "allow": [
      "Read(**)",
      "Edit(**)",
      "Write(**)",
      "Bash(go:*)",
      "Bash(mkdir:*)",
      "Bash(git:*)"
    ]
  }
}
SETTINGS

# Create initial commit
git add .
git commit -m "Initial project setup with design and plan"

echo "Scaffolded Go Fractals project at: $TARGET_DIR"
echo ""
echo "To run the test:"
echo " claude -p \"Execute this plan using superpowers:subagent-driven-development. Plan: $TARGET_DIR/plan.md\" --plugin-dir /path/to/superpowers"
@@ -1,106 +0,0 @@
#!/usr/bin/env bash
# Run a subagent-driven-development test
# Usage: ./run-test.sh <test-name> [--plugin-dir <path>]
#
# Example:
# ./run-test.sh go-fractals
# ./run-test.sh svelte-todo --plugin-dir /path/to/superpowers

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
TEST_NAME="${1:?Usage: $0 <test-name> [--plugin-dir <path>]}"
shift

# Parse optional arguments
PLUGIN_DIR=""
while [[ $# -gt 0 ]]; do
  case $1 in
    --plugin-dir)
      PLUGIN_DIR="$2"
      shift 2
      ;;
    *)
      echo "Unknown option: $1"
      exit 1
      ;;
  esac
done

# Default plugin dir to parent of tests directory
if [[ -z "$PLUGIN_DIR" ]]; then
  PLUGIN_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
fi

# Verify test exists
TEST_DIR="$SCRIPT_DIR/$TEST_NAME"
if [[ ! -d "$TEST_DIR" ]]; then
  echo "Error: Test '$TEST_NAME' not found at $TEST_DIR"
  echo "Available tests:"
  ls -1 "$SCRIPT_DIR" | grep -v '\.sh$' | grep -v '\.md$'
  exit 1
fi

# Create timestamped output directory
TIMESTAMP=$(date +%s)
OUTPUT_BASE="/tmp/superpowers-tests/$TIMESTAMP/subagent-driven-development"
OUTPUT_DIR="$OUTPUT_BASE/$TEST_NAME"
mkdir -p "$OUTPUT_DIR"

echo "=== Subagent-Driven Development Test ==="
echo "Test: $TEST_NAME"
echo "Output: $OUTPUT_DIR"
echo "Plugin: $PLUGIN_DIR"
echo ""

# Scaffold the project
echo ">>> Scaffolding project..."
"$TEST_DIR/scaffold.sh" "$OUTPUT_DIR/project"
echo ""

# Prepare the prompt
PLAN_PATH="$OUTPUT_DIR/project/plan.md"
PROMPT="Execute this plan using superpowers:subagent-driven-development. The plan is at: $PLAN_PATH"

# Run Claude with JSON output for token tracking
LOG_FILE="$OUTPUT_DIR/claude-output.json"
echo ">>> Running Claude..."
echo "Prompt: $PROMPT"
echo "Log file: $LOG_FILE"
echo ""

# Run claude and capture output
# Using stream-json to get token usage stats
# --dangerously-skip-permissions for automated testing (subagents don't inherit parent settings)
cd "$OUTPUT_DIR/project"
claude -p "$PROMPT" \
  --plugin-dir "$PLUGIN_DIR" \
  --dangerously-skip-permissions \
  --output-format stream-json \
  --verbose \
  > "$LOG_FILE" 2>&1 || true

# Extract final stats
echo ""
echo ">>> Test complete"
echo "Project directory: $OUTPUT_DIR/project"
echo "Claude log: $LOG_FILE"
echo ""

# Show token usage if available
if command -v jq &> /dev/null; then
  echo ">>> Token usage:"
  # Extract usage from the last message with usage info
  jq -s '[.[] | select(.type == "result")] | last | .usage' "$LOG_FILE" 2>/dev/null || echo "(could not parse usage)"
  echo ""
fi

echo ">>> Next steps:"
echo "1. Review the project: cd $OUTPUT_DIR/project"
echo "2. Review Claude's log: less $LOG_FILE"
echo "3. Check if tests pass:"
if [[ "$TEST_NAME" == "go-fractals" ]]; then
  echo " cd $OUTPUT_DIR/project && go test ./..."
elif [[ "$TEST_NAME" == "svelte-todo" ]]; then
  echo " cd $OUTPUT_DIR/project && npm test && npx playwright test"
fi
@@ -1,70 +0,0 @@
# Svelte Todo List - Design

## Overview

A simple todo list application built with Svelte. Supports creating, completing, and deleting todos with localStorage persistence.

## Features

- Add new todos
- Mark todos as complete/incomplete
- Delete todos
- Filter by: All / Active / Completed
- Clear all completed todos
- Persist to localStorage
- Show count of remaining items

## User Interface

```
┌─────────────────────────────────────────┐
│ Svelte Todos                            │
├─────────────────────────────────────────┤
│ [________________________] [Add]        │
├─────────────────────────────────────────┤
│ [ ] Buy groceries                   [x] │
│ [✓] Walk the dog                    [x] │
│ [ ] Write code                      [x] │
├─────────────────────────────────────────┤
│ 2 items left                            │
│ [All] [Active] [Completed] [Clear ✓]    │
└─────────────────────────────────────────┘
```

## Components

```
src/
  App.svelte # Main app, state management
  lib/
    TodoInput.svelte # Text input + Add button
    TodoList.svelte # List container
    TodoItem.svelte # Single todo with checkbox, text, delete
    FilterBar.svelte # Filter buttons + clear completed
    store.ts # Svelte store for todos
    storage.ts # localStorage persistence
```

## Data Model

```typescript
interface Todo {
  id: string; // UUID
  text: string; // Todo text
  completed: boolean;
}

type Filter = 'all' | 'active' | 'completed';
```

## Acceptance Criteria

1. Can add a todo by typing and pressing Enter or clicking Add
2. Can toggle todo completion by clicking checkbox
3. Can delete a todo by clicking X button
4. Filter buttons show correct subset of todos
5. "X items left" shows count of incomplete todos
6. "Clear completed" removes all completed todos
7. Todos persist across page refresh (localStorage)
8. Empty state shows helpful message
9. All tests pass
@@ -1,222 +0,0 @@
# Svelte Todo List - Implementation Plan

Execute this plan using the `superpowers:subagent-driven-development` skill.

## Context

Building a todo list app with Svelte. See `design.md` for full specification.

## Tasks

### Task 1: Project Setup

Create the Svelte project with Vite.

**Do:**
- Run `npm create vite@latest . -- --template svelte-ts`
- Install dependencies with `npm install`
- Verify dev server works
- Clean up default Vite template content from App.svelte

**Verify:**
- `npm run dev` starts server
- App shows minimal "Svelte Todos" heading
- `npm run build` succeeds

---

### Task 2: Todo Store

Create the Svelte store for todo state management.

**Do:**
- Create `src/lib/store.ts`
- Define `Todo` interface with id, text, completed
- Create writable store with initial empty array
- Export functions: `addTodo(text)`, `toggleTodo(id)`, `deleteTodo(id)`, `clearCompleted()`
- Create `src/lib/store.test.ts` with tests for each function

**Verify:**
- Tests pass: `npm run test` (install vitest if needed)
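The four exported functions are small, pure updates over the todos array. Sketched here framework-free, with the array passed explicitly (an assumption for testability; the task wires the same logic into a Svelte writable store via `update`):

```typescript
interface Todo {
  id: string;
  text: string;
  completed: boolean;
}

// Pure update helpers; a Svelte store would apply these inside update(...).
function addTodo(todos: Todo[], id: string, text: string): Todo[] {
  return [...todos, { id, text, completed: false }];
}

function toggleTodo(todos: Todo[], id: string): Todo[] {
  return todos.map(t => (t.id === id ? { ...t, completed: !t.completed } : t));
}

function deleteTodo(todos: Todo[], id: string): Todo[] {
  return todos.filter(t => t.id !== id);
}

function clearCompleted(todos: Todo[]): Todo[] {
  return todos.filter(t => !t.completed);
}
```

Returning fresh arrays (rather than mutating) is what lets Svelte's reactivity pick up every change.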

---

### Task 3: localStorage Persistence

Add persistence layer for todos.

**Do:**
- Create `src/lib/storage.ts`
- Implement `loadTodos(): Todo[]` and `saveTodos(todos: Todo[])`
- Handle JSON parse errors gracefully (return empty array)
- Integrate with store: load on init, save on change
- Add tests for load/save/error handling

**Verify:**
- Tests pass
- Manual test: add todo, refresh page, todo persists
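The graceful-failure requirement comes down to one try/catch around `JSON.parse`. A sketch with the storage backend injected so it is testable without a browser (an assumption of this sketch; the real module would simply use `window.localStorage`, which satisfies the same interface):

```typescript
interface Todo {
  id: string;
  text: string;
  completed: boolean;
}

// Minimal structural type satisfied by localStorage or any in-memory fake.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "todos";

function loadTodos(store: KVStore): Todo[] {
  try {
    const raw = store.getItem(STORAGE_KEY);
    if (raw === null) return [];
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? parsed : []; // reject non-array junk too
  } catch {
    return []; // corrupt JSON → empty list, per the task
  }
}

function saveTodos(store: KVStore, todos: Todo[]): void {
  store.setItem(STORAGE_KEY, JSON.stringify(todos));
}
```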

---

### Task 4: TodoInput Component

Create the input component for adding todos.

**Do:**
- Create `src/lib/TodoInput.svelte`
- Text input bound to local state
- Add button calls `addTodo()` and clears input
- Enter key also submits
- Disable Add button when input is empty
- Add component tests

**Verify:**
- Tests pass
- Component renders input and button

---

### Task 5: TodoItem Component

Create the single todo item component.

**Do:**
- Create `src/lib/TodoItem.svelte`
- Props: `todo: Todo`
- Checkbox toggles completion (calls `toggleTodo`)
- Text with strikethrough when completed
- Delete button (X) calls `deleteTodo`
- Add component tests

**Verify:**
- Tests pass
- Component renders checkbox, text, delete button

---

### Task 6: TodoList Component

Create the list container component.

**Do:**
- Create `src/lib/TodoList.svelte`
- Props: `todos: Todo[]`
- Renders TodoItem for each todo
- Shows "No todos yet" when empty
- Add component tests

**Verify:**
- Tests pass
- Component renders list of TodoItems

---

### Task 7: FilterBar Component

Create the filter and status bar component.

**Do:**
- Create `src/lib/FilterBar.svelte`
- Props: `todos: Todo[]`, `filter: Filter`, `onFilterChange: (f: Filter) => void`
- Show count: "X items left" (incomplete count)
- Three filter buttons: All, Active, Completed
- Active filter is visually highlighted
- "Clear completed" button (hidden when no completed todos)
- Add component tests

**Verify:**
- Tests pass
- Component renders count, filters, clear button

---

### Task 8: App Integration

Wire all components together in App.svelte.

**Do:**
- Import all components and store
- Add filter state (default: 'all')
- Compute filtered todos based on filter state
- Render: heading, TodoInput, TodoList, FilterBar
- Pass appropriate props to each component

**Verify:**
- App renders all components
- Adding todos works
- Toggling works
- Deleting works

---

### Task 9: Filter Functionality

Ensure filtering works end-to-end.

**Do:**
- Verify filter buttons change displayed todos
  - 'all' shows all todos
  - 'active' shows only incomplete todos
  - 'completed' shows only completed todos
- Clear completed removes completed todos and resets filter if needed
- Add integration tests

**Verify:**
- Filter tests pass
- Manual verification of all filter states
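The three filter states above reduce to one small function over the design's `Filter` type; a sketch of the mapping this task describes (the function name `filterTodos` is illustrative):

```typescript
interface Todo {
  id: string;
  text: string;
  completed: boolean;
}

type Filter = "all" | "active" | "completed";

// 'active' = not yet completed; 'completed' = done; 'all' = everything.
function filterTodos(todos: Todo[], filter: Filter): Todo[] {
  switch (filter) {
    case "active":
      return todos.filter(t => !t.completed);
    case "completed":
      return todos.filter(t => t.completed);
    default:
      return todos;
  }
}
```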

---

### Task 10: Styling and Polish

Add CSS styling for usability.

**Do:**
- Style the app to match the design mockup
- Completed todos have strikethrough and muted color
- Active filter button is highlighted
- Input has focus styles
- Delete button appears on hover (or always on mobile)
- Responsive layout

**Verify:**
- App is visually usable
- Styles don't break functionality

---

### Task 11: End-to-End Tests

Add Playwright tests for full user flows.

**Do:**
- Install Playwright: `npm init playwright@latest`
- Create `tests/todo.spec.ts`
- Test flows:
  - Add a todo
  - Complete a todo
  - Delete a todo
  - Filter todos
  - Clear completed
  - Persistence (add, reload, verify)

**Verify:**
- `npx playwright test` passes

---

### Task 12: README

Document the project.

- Create `README.md` with:
  - Project description
  - Setup: `npm install`
  - Development: `npm run dev`
  - Testing: `npm test` and `npx playwright test`
  - Build: `npm run build`

**Verify:**
- README accurately describes the project
- Instructions work
@@ -1,46 +0,0 @@
#!/usr/bin/env bash
# Scaffold the Svelte Todo test project
# Usage: ./scaffold.sh /path/to/target/directory

set -e

TARGET_DIR="${1:?Usage: $0 <target-directory>}"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"

# Create target directory
mkdir -p "$TARGET_DIR"
cd "$TARGET_DIR"

# Initialize git repo
git init

# Copy design and plan
cp "$SCRIPT_DIR/design.md" .
cp "$SCRIPT_DIR/plan.md" .

# Create .claude settings to allow reads/writes in this directory
mkdir -p .claude
cat > .claude/settings.local.json << 'SETTINGS'
{
  "permissions": {
    "allow": [
      "Read(**)",
      "Edit(**)",
      "Write(**)",
      "Bash(npm:*)",
      "Bash(npx:*)",
      "Bash(mkdir:*)",
      "Bash(git:*)"
    ]
  }
}
SETTINGS

# Create initial commit
git add .
git commit -m "Initial project setup with design and plan"

echo "Scaffolded Svelte Todo project at: $TARGET_DIR"
echo ""
echo "To run the test:"
echo " claude -p \"Execute this plan using superpowers:subagent-driven-development. Plan: $TARGET_DIR/plan.md\" --plugin-dir /path/to/superpowers"