Compare commits

..

11 Commits

Author SHA1 Message Date
Drew Ritter
15d3b46614 Align Pi mapping with action vocabulary 2026-05-13 17:53:35 -07:00
Drew Ritter
c9116738e0 Bump evals submodule for Pi backend 2026-05-13 17:53:31 -07:00
Jesse Vincent
83a00caef9 chore: keep pi extension under .pi 2026-05-13 17:51:47 -07:00
Jesse Vincent
a1e1cd4ece feat: add pi superpowers package extension 2026-05-13 17:51:47 -07:00
Jesse Vincent
dd0db70312 docs: plan pi extension and evals work 2026-05-13 17:50:03 -07:00
Drew Ritter
d4d99117f2 Tighten cross-platform tool references 2026-05-13 17:46:28 -07:00
Jesse Vincent
01034bcf8f Phase E: action-language tool vocabulary
Replace Claude-Code-specific tool names in skill prose, prompt
templates, and OpenCode-facing docs with action-language descriptions
that resolve to each runtime's native tool via the per-platform refs.

Changes by category:

- Prose mentions ("Use TodoWrite to track...", "Use Task tool with
  general-purpose type") → action language ("Track each item as a
  todo", "Dispatch a general-purpose subagent")

- Prompt template headers (6 files): "Task tool (general-purpose):"
  → "Subagent (general-purpose):" — preserves the type information
  without naming Claude Code's specific dispatch tool

- DOT flowchart node labels: "Invoke Skill tool" → "Invoke the
  skill"; "Create TodoWrite todo per item" → "Create a todo per
  item"

- OpenCode INSTALL.md and docs/README.opencode.md: replace the old
  "TodoWrite → todowrite, Task → @mention" mapping (which both
  taught a vocabulary skills no longer use AND was wrong about
  @mention being a real OpenCode syntax) with an action-language
  mapping verified against the installed OpenCode CLI's tool
  inventory.

The platform-tools refs landed in Phase B already document each
runtime's resolution; skills now speak in the actions those refs
map. Tool names that genuinely belong only in the per-platform
dispatch section ("In Claude Code: Use the `Skill` tool") and the
Claude-Code-specific Bash run_in_background flag note in
visual-companion remain — those are intentional carve-outs.
2026-05-13 17:46:28 -07:00
Jesse Vincent
b87a5e4721 Phase D: cross-runtime tweaks (visual-companion, executing-plans, test)
Misc platform/runtime statements and adjacencies that don't fit the
prose, config-ref, README-ordering, or tool-vocabulary buckets:

- visual-companion frame template: rename CSS/HTML id #claude-content
  → #frame-content. The id is purely styling — nothing external
  references it. The brainstorm-server test that asserted the old
  string is updated in lockstep.

- visual-companion launch instructions: add a Copilot CLI section
  alongside Claude Code, Codex, and Gemini CLI; combine the Claude
  Code (macOS / Linux) and (Windows) sections so heading style
  matches the other (non-OS-qualified) platforms.

- visual-companion: "Use Write tool" → "Use your file-creation tool"
  for the cat/heredoc warning. The prohibition is what's load-
  bearing, not the tool name.

- executing-plans/SKILL.md: list all subagent-capable runtimes
  (Claude Code, Codex CLI, Codex App, Copilot CLI, Gemini CLI) and
  point at the per-platform tool refs as the source of truth.

- executing-plans/SKILL.md: relative path "using-superpowers/
  references/" → "../using-superpowers/references/" to resolve
  correctly from the executing-plans/ directory.

No bundled spec doc here — Phase D was scope-extension work that
took place across rounds, with no standalone spec authored.
2026-05-13 17:46:28 -07:00
Jesse Vincent
e47d6f4f85 Phase C: alphabetize README platform listings + spec
Quickstart link list and the per-harness install sub-sections both
reorder to strict alphabetical:

  Claude Code, Codex App, Codex CLI, Cursor, Factory Droid,
  Gemini CLI, GitHub Copilot CLI, OpenCode

Three blocks moved (Codex App swaps with Codex CLI; Cursor moves up
two slots; GitHub Copilot CLI moves up one). Claude Code stays first
by alphabetical chance.

Each install sub-section's content is byte-identical pre/post —
only the positions change. Quickstart anchors verified against the
new heading order.
2026-05-13 17:46:28 -07:00
Jesse Vincent
5c0402736e Phase B: config-file refs + per-platform tool refs + spec
Two structural changes:

1. Generalize CLAUDE.md-specific guidance:
   - "Project-specific conventions (put in CLAUDE.md)" → "(put in
     your instructions file)" in writing-skills/SKILL.md
   - "(explicit CLAUDE.md violation)" → "(explicit instruction-file
     violation)" in receiving-code-review/SKILL.md
   - The instruction-priority list in using-superpowers/SKILL.md
     stays inclusive (CLAUDE.md, GEMINI.md, AGENTS.md) — that's
     load-bearing, not a substitution opportunity.

2. Per-platform tool reference files at skills/using-superpowers/
   references/{claude-code,codex,copilot,gemini}-tools.md. Each ref
   documents:
   - The runtime's preferred instructions file (CLAUDE.md, AGENTS.md,
     GEMINI.md, etc.) and how it loads
   - The runtime's personal-skills directory + cross-runtime
     ~/.agents/skills/ path where applicable
   - Action-language → tool-name mapping table

Tool names and table content reflect the source-verified state from
direct inspection of openai/codex, google-gemini/gemini-cli,
sst/opencode, and the installed @github/copilot package. Filenames
and behaviors are sourced from each runtime's official docs.

Files in this commit also pick up later-phase changes that
accumulated on the same files (using-superpowers/SKILL.md "How to
Access Skills" overhaul, action-language flowchart, refs' final
table content). The bundled spec records original scope.
2026-05-13 17:46:28 -07:00
Jesse Vincent
d0e413b591 Phase A: agent-neutral prose + CSO → SDO + spec
Replace generic third-person "Claude" with "agents" / "your agent"
forms across active skill prose, the README intro, and the vendored
anthropic-best-practices.md reference. Carve-outs preserved:
historical attribution paths, the "Variant C: Claude.AI Emphatic
Style" example label, model identifiers (Haiku/Sonnet/Opus), and the
"In Claude Code:" per-platform skill-dispatch list.

Coined-term rename: "Claude Search Optimization (CSO)" → "Skill
Discovery Optimization (SDO)" in writing-skills/SKILL.md.

Files in this commit also pick up later-phase changes that
accumulated on the same files (dispatching-parallel-agents code-
example transformation, writing-skills numbering and path fixes).
The bundled spec at docs/superpowers/specs/ records the original
scope and the carve-outs.

README.md gets only its prose change here; the alphabetization
lands in Phase C's commit.
2026-05-13 17:46:28 -07:00
40 changed files with 1084 additions and 285 deletions

View File

@@ -45,7 +45,7 @@ Use OpenCode's native `skill` tool:
```
use skill tool to list skills
use skill tool to load superpowers/brainstorming
use skill tool to load brainstorming
```
## Updating
@@ -98,11 +98,16 @@ Then use the installed package path in `opencode.json`:
### Tool mapping
When skills reference Claude Code tools:
- `TodoWrite` → `todowrite`
- `Task` with subagents → `@mention` syntax
- `Skill` tool → OpenCode's native `skill` tool
- File operations → your native tools
Skills speak in actions ("create a todo", "dispatch a subagent", "read a file"). On OpenCode these resolve to:
- "Create a todo" / "mark complete in todo list" → `todowrite`
- `Subagent (general-purpose):` template → `task` tool with `subagent_type: "general"` (or `"explore"` for codebase exploration)
- "Invoke a skill" → OpenCode's native `skill` tool
- "Read a file" → `read`
- "Create a file" / "edit a file" / "delete a file" → `apply_patch`
- "Run a shell command" → `bash`
- "Search file contents" / "find files by name" → `grep`, `glob`
- "Fetch a URL" → `webfetch`
## Getting Help

View File

@@ -1,7 +1,7 @@
/**
* Superpowers plugin for OpenCode.ai
*
* Injects superpowers bootstrap context via system prompt transform.
* Injects superpowers bootstrap context via message transform.
* Auto-registers skills directory via config hook (no symlinks needed).
*/
@@ -74,11 +74,15 @@ export const SuperpowersPlugin = async ({ client, directory }) => {
const { content } = extractAndStripFrontmatter(fullContent);
const toolMapping = `**Tool Mapping for OpenCode:**
When skills reference tools you don't have, substitute OpenCode equivalents:
- \`TodoWrite\` → \`todowrite\`
- \`Task\` tool with subagents → Use OpenCode's subagent system (@mention)
- \`Skill\` tool → OpenCode's native \`skill\` tool
- \`Read\`, \`Write\`, \`Edit\`, \`Bash\` → Your native tools
When skills request actions, substitute OpenCode equivalents:
- Create or update todos → \`todowrite\`
- \`Subagent (general-purpose):\` → \`task\` with \`subagent_type: "general"\`
- Invoke a skill → OpenCode's native \`skill\` tool
- Read files → \`read\`
- Create, edit, or delete files → \`apply_patch\`
- Run shell commands → \`bash\`
- Search files → \`grep\`, \`glob\`
- Fetch a URL → \`webfetch\`
Use OpenCode's native \`skill\` tool to list and load skills.`;

View File

@@ -0,0 +1,121 @@
import { readFileSync } from "node:fs";
import { dirname, resolve } from "node:path";
import { fileURLToPath } from "node:url";
import type { ExtensionAPI } from "@earendil-works/pi-coding-agent";
const EXTREMELY_IMPORTANT_MARKER = "<EXTREMELY_IMPORTANT>";
const BOOTSTRAP_MARKER = "superpowers:using-superpowers bootstrap for pi";
const extensionDir = dirname(fileURLToPath(import.meta.url));
const packageRoot = resolve(extensionDir, "../..");
const skillsDir = resolve(packageRoot, "skills");
const bootstrapSkillPath = resolve(skillsDir, "using-superpowers", "SKILL.md");
let cachedBootstrap: string | null | undefined;
export default function superpowersPiExtension(pi: ExtensionAPI) {
let injectBootstrap = true;
pi.on("resources_discover", async () => ({
skillPaths: [skillsDir],
}));
pi.on("session_start", async () => {
injectBootstrap = true;
});
pi.on("session_compact", async () => {
injectBootstrap = true;
});
pi.on("agent_end", async () => {
injectBootstrap = false;
});
pi.on("context", async (event) => {
if (!injectBootstrap) return;
if (event.messages.some(messageContainsBootstrap)) return;
const bootstrap = getBootstrapContent();
if (!bootstrap) return;
const bootstrapMessage = {
role: "user" as const,
content: [{ type: "text" as const, text: bootstrap }],
timestamp: Date.now(),
};
const insertAt = firstNonCompactionSummaryIndex(event.messages);
return {
messages: [
...event.messages.slice(0, insertAt),
bootstrapMessage,
...event.messages.slice(insertAt),
],
};
});
}
function getBootstrapContent(): string | null {
if (cachedBootstrap !== undefined) return cachedBootstrap;
try {
const skillContent = readFileSync(bootstrapSkillPath, "utf8");
const body = stripFrontmatter(skillContent);
cachedBootstrap = `${EXTREMELY_IMPORTANT_MARKER}
${BOOTSTRAP_MARKER}
You have superpowers.
The using-superpowers skill content is included below and is already loaded for this Pi session. Follow it now. Do not try to load using-superpowers again.
${body}
${piToolMapping()}
</EXTREMELY_IMPORTANT>`;
return cachedBootstrap;
} catch {
cachedBootstrap = null;
return null;
}
}
function stripFrontmatter(content: string): string {
const match = content.match(/^---\n[\s\S]*?\n---\n([\s\S]*)$/);
return (match ? match[1] : content).trim();
}
function piToolMapping(): string {
return `## Pi tool mapping
Pi has native skills but does not expose Claude Code's \`Skill\` tool. When a Superpowers instruction says to invoke a skill, use Pi's native skill system instead: load the relevant \`SKILL.md\` with \`read\` when the skill applies, or let a human invoke \`/skill:name\` explicitly.
Pi's built-in coding tools are lowercase: \`read\`, \`write\`, \`edit\`, \`bash\`, plus optional \`grep\`, \`find\`, and \`ls\`. Use those for the corresponding actions: read a file, create or edit files, run shell commands, search file contents, find files by name, and list directories.
Pi does not ship a standard subagent tool. If a subagent tool such as \`subagent\` from \`pi-subagents\` is available, use it for Superpowers subagent workflows. If no subagent tool is available, do the work in this session or explain the missing capability instead of inventing \`Task\` calls.
Pi does not ship a standard task-list tool. If an installed todo/task tool is available, use it. Otherwise track work in plan files or a repo-local \`TODO.md\` when task tracking is needed. Treat older \`TodoWrite\` references as this task-tracking action.`;
}
function messageContainsBootstrap(message: unknown): boolean {
const content = (message as { content?: unknown }).content;
if (typeof content === "string") return content.includes(BOOTSTRAP_MARKER);
if (!Array.isArray(content)) return false;
return content.some((part) => {
return (
part &&
typeof part === "object" &&
(part as { type?: unknown }).type === "text" &&
typeof (part as { text?: unknown }).text === "string" &&
(part as { text: string }).text.includes(BOOTSTRAP_MARKER)
);
});
}
function firstNonCompactionSummaryIndex(messages: unknown[]): number {
let index = 0;
while ((messages[index] as { role?: unknown } | undefined)?.role === "compactionSummary") {
index += 1;
}
return index;
}

View File

@@ -4,7 +4,7 @@ Superpowers is a complete software development methodology for your coding agent
## Quickstart
Give your agent Superpowers: [Claude Code](#claude-code), [Codex CLI](#codex-cli), [Codex App](#codex-app), [Factory Droid](#factory-droid), [Gemini CLI](#gemini-cli), [OpenCode](#opencode), [Cursor](#cursor), [GitHub Copilot CLI](#github-copilot-cli).
Give your agent Superpowers: [Claude Code](#claude-code), [Codex App](#codex-app), [Codex CLI](#codex-cli), [Cursor](#cursor), [Factory Droid](#factory-droid), [Gemini CLI](#gemini-cli), [GitHub Copilot CLI](#github-copilot-cli), [OpenCode](#opencode), [Pi](#pi).
## How it works
@@ -14,7 +14,7 @@ Once it's teased a spec out of the conversation, it shows it to you in chunks sh
After you've signed off on the design, your agent puts together an implementation plan that's clear enough for an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing to follow. It emphasizes true red/green TDD, YAGNI (You Aren't Gonna Need It), and DRY.
Next up, once you say "go", it launches a *subagent-driven-development* process, having agents work through each engineering task, inspecting and reviewing their work, and continuing forward. It's not uncommon for Claude to be able to work autonomously for a couple hours at a time without deviating from the plan you put together.
Next up, once you say "go", it launches a *subagent-driven-development* process, having agents work through each engineering task, inspecting and reviewing their work, and continuing forward. It's not uncommon for your agent to work autonomously for a couple hours at a time without deviating from the plan you put together.
There's a bunch more to it, but that's the core of the system. And because the skills trigger automatically, you don't need to do anything special. Your coding agent just has Superpowers.
@@ -60,6 +60,14 @@ The Superpowers marketplace provides Superpowers and some other related plugins
/plugin install superpowers@superpowers-marketplace
```
### Codex App
Superpowers is available via the [official Codex plugin marketplace](https://github.com/openai/plugins).
- In the Codex app, click on Plugins in the sidebar.
- You should see `Superpowers` in the Coding section.
- Click the `+` next to Superpowers and follow the prompts.
### Codex CLI
Superpowers is available via the [official Codex plugin marketplace](https://github.com/openai/plugins).
@@ -78,13 +86,15 @@ Superpowers is available via the [official Codex plugin marketplace](https://git
- Select `Install Plugin`.
### Codex App
### Cursor
Superpowers is available via the [official Codex plugin marketplace](https://github.com/openai/plugins).
- In Cursor Agent chat, install from marketplace:
- In the Codex app, click on Plugins in the sidebar.
- You should see `Superpowers` in the Coding section.
- Click the `+` next to Superpowers and follow the prompts.
```text
/add-plugin superpowers
```
- Or search for "superpowers" in the plugin marketplace.
### Factory Droid
@@ -114,29 +124,6 @@ Superpowers is available via the [official Codex plugin marketplace](https://git
gemini extensions update superpowers
```
### OpenCode
OpenCode uses its own plugin install; install Superpowers separately even if you
already use it in another harness.
- Tell OpenCode:
```
Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md
```
- Detailed docs: [docs/README.opencode.md](docs/README.opencode.md)
### Cursor
- In Cursor Agent chat, install from marketplace:
```text
/add-plugin superpowers
```
- Or search for "superpowers" in the plugin marketplace.
### GitHub Copilot CLI
- Register the marketplace:
@@ -151,6 +138,35 @@ already use it in another harness.
copilot plugin install superpowers@superpowers-marketplace
```
### OpenCode
OpenCode uses its own plugin install; install Superpowers separately even if you
already use it in another harness.
- Tell OpenCode:
```
Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md
```
- Detailed docs: [docs/README.opencode.md](docs/README.opencode.md)
### Pi
Install Superpowers as a Pi package from this repository:
```bash
pi install git:github.com/obra/superpowers
```
For local development, run Pi with this checkout loaded as a temporary package:
```bash
pi -e /path/to/superpowers
```
The Pi package loads the Superpowers skills and a small extension that injects the `using-superpowers` bootstrap at session startup and again after compaction. Pi has native skills, so no compatibility `Skill` tool is required. Subagent and task-list tools remain optional Pi companion packages.
## The Basic Workflow
1. **brainstorming** - Activates before writing code. Refines rough ideas through questions, explores alternatives, presents design in sections for validation. Saves design document.

View File

@@ -50,7 +50,7 @@ use skill tool to list skills
### Loading a Skill
```
use skill tool to load superpowers/brainstorming
use skill tool to load brainstorming
```
### Personal Skills
@@ -99,17 +99,23 @@ To pin a specific version, use a branch or tag:
The plugin does two things:
1. **Injects bootstrap context** via the `experimental.chat.system.transform` hook, adding superpowers awareness to every conversation.
1. **Injects bootstrap context** via the `experimental.chat.messages.transform` hook, adding superpowers awareness to every conversation.
2. **Registers the skills directory** via the `config` hook, so OpenCode discovers all superpowers skills without symlinks or manual config.
### Tool Mapping
Skills written for Claude Code are automatically adapted for OpenCode:
Skills speak in actions rather than naming any one runtime's tools. On OpenCode these resolve to:
- `TodoWrite` → `todowrite`
- `Task` with subagents → OpenCode's `@mention` system
- `Skill` tool → OpenCode's native `skill` tool
- File operations → Native OpenCode tools
- "Create a todo" / "mark complete in todo list" → `todowrite`
- `Subagent (general-purpose):` template → OpenCode's `task` tool with `subagent_type: "general"` (or `"explore"` for codebase exploration)
- "Invoke a skill" → OpenCode's native `skill` tool
- "Read a file" → `read`
- "Create a file" / "edit a file" / "delete a file" → `apply_patch`
- "Run a shell command" → `bash`
- "Search file contents" / "find files by name" → `grep`, `glob`
- "Fetch a URL" → `webfetch`
(Verified against the installed OpenCode CLI's tool inventory.)
## Troubleshooting
@@ -147,7 +153,7 @@ Then use the installed package path in `opencode.json`:
### Bootstrap not appearing
1. Check OpenCode version supports `experimental.chat.system.transform` hook
1. Check OpenCode version supports `experimental.chat.messages.transform` hook
2. Restart OpenCode after config changes
## Getting Help

View File

@@ -0,0 +1,143 @@
# Pi Extension and Evals Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Add first-class Pi package support for Superpowers and add Pi as a Drill eval backend.
**Architecture:** The Pi package is declared in the root `package.json` and loads existing `skills/` plus a small Pi extension. The extension injects the `using-superpowers` bootstrap into provider context as a user-role message on session startup and after compaction, with Pi-specific tool mapping. Drill gains a `pi` backend, Pi session-log normalization, and tests.
**Tech Stack:** Pi TypeScript extension API, Node built-in test runner, Drill Python eval harness, pytest.
---
### Task 1: Pi package manifest and extension tests
**Files:**
- Modify: `package.json`
- Create: `tests/pi/test-pi-extension.mjs`
- [ ] **Step 1: Write failing package/extension tests**
Create `tests/pi/test-pi-extension.mjs` with tests that import `extensions/superpowers.ts`, register fake Pi handlers, and assert:
- root `package.json` has `keywords` containing `pi-package`
- root `package.json` has `pi.skills: ["./skills"]`
- root `package.json` has `pi.extensions: ["./extensions/superpowers.ts"]`
- the extension registers `resources_discover`, `session_start`, `session_compact`, `context`, and `agent_end`
- startup `context` injects exactly one user-role bootstrap message
- `agent_end` clears startup injection
- `session_compact` re-enables injection
- the extension does not register `session_before_compact`
- [ ] **Step 2: Run tests and verify RED**
Run: `node --experimental-strip-types --test tests/pi/test-pi-extension.mjs`
Expected: FAIL because `extensions/superpowers.ts` does not exist and `package.json` lacks the `pi` manifest.
- [ ] **Step 3: Implement manifest fields**
Update `package.json` with `description`, `keywords`, `pi.extensions`, and `pi.skills` while preserving existing `name`, `version`, `type`, and `main`.
- [ ] **Step 4: Implement `extensions/superpowers.ts`**
Create a zero-runtime-dependency extension that:
- locates the package root from `import.meta.url`
- reads `skills/using-superpowers/SKILL.md`
- strips YAML frontmatter
- appends Pi-specific tool mapping
- exposes `resources_discover` with the skills path
- marks bootstrap pending on `session_start` and `session_compact`
- injects a user-role bootstrap message in `context`
- inserts post-compact bootstrap after leading `compactionSummary` messages
- clears pending bootstrap on `agent_end`
- [ ] **Step 5: Run tests and verify GREEN**
Run: `node --experimental-strip-types --test tests/pi/test-pi-extension.mjs`
Expected: PASS.
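Task 1's manifest assertions pin down the shape of the `pi` additions. A minimal sketch of the new fields — only what the steps above name is shown, and the package's existing `name`, `version`, `type`, and `main` entries are assumed to stay untouched:

```json
{
  "keywords": ["pi-package"],
  "pi": {
    "skills": ["./skills"],
    "extensions": ["./extensions/superpowers.ts"]
  }
}
```

The `description` field Step 3 mentions is added alongside these; its text is left to the implementer.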
### Task 2: Pi tool mapping reference
**Files:**
- Create: `skills/using-superpowers/references/pi-tools.md`
- Modify: `tests/pi/test-pi-extension.mjs`
- [ ] **Step 1: Write failing test for Pi reference doc**
Add assertions that `skills/using-superpowers/references/pi-tools.md` exists and documents mappings for `Skill`, `Task`, `TodoWrite`, and built-in tool names.
- [ ] **Step 2: Run tests and verify RED**
Run: `node --experimental-strip-types --test tests/pi/test-pi-extension.mjs`
Expected: FAIL because `pi-tools.md` does not exist.
- [ ] **Step 3: Add Pi reference doc**
Create `skills/using-superpowers/references/pi-tools.md` explaining Pi-native skills, optional `pi-subagents`, no canonical todo/tasklist plugin, and built-in lowercase tools.
- [ ] **Step 4: Run tests and verify GREEN**
Run: `node --experimental-strip-types --test tests/pi/test-pi-extension.mjs`
Expected: PASS.
### Task 3: Drill Pi backend and session log normalization
**Files:**
- Create: `evals/backends/pi.yaml`
- Modify: `evals/drill/backend.py`
- Modify: `evals/drill/engine.py`
- Modify: `evals/drill/normalizer.py`
- Modify: `evals/tests/test_backend.py`
- Modify: `evals/tests/test_normalizer.py`
- [ ] **Step 1: Write failing backend/normalizer tests**
Add pytest coverage for:
- `load_backend("pi")` returns `family == "pi"`
- Pi backend command starts with `pi` and includes `-e ${SUPERPOWERS_ROOT}`
- `_resolve_log_dir()` for Pi points under `~/.pi/agent/sessions`
- `filter_pi_logs_by_cwd()` keeps only session files whose header `cwd` matches the scenario workdir
- `normalize_pi_logs()` extracts `toolCall` blocks from Pi assistant session entries and maps built-in lowercase tools to canonical names
- [ ] **Step 2: Run tests and verify RED**
Run: `uv run pytest evals/tests/test_backend.py evals/tests/test_normalizer.py -q`
Expected: FAIL because the Pi backend and normalizer do not exist.
- [ ] **Step 3: Add `evals/backends/pi.yaml`**
Configure the backend to run `pi -e ${SUPERPOWERS_ROOT}`, use permissive TUI readiness, `/quit` shutdown, and Pi session log location.
- [ ] **Step 4: Implement Pi family support**
Update `Backend.family`, `Engine._resolve_log_dir`, `Engine._collect_tool_calls`, and `normalizer.py` with Pi log filtering and normalizing.
- [ ] **Step 5: Run tests and verify GREEN**
Run: `uv run pytest evals/tests/test_backend.py evals/tests/test_normalizer.py -q`
Expected: PASS.
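Step 1's `filter_pi_logs_by_cwd()` contract can be sketched in a few lines. This is a hypothetical implementation, assuming each Pi session log is a JSONL file whose first line is a header object carrying a `cwd` field — the real log format and signature are whatever `evals/drill/normalizer.py` ends up defining:

```python
import json
from pathlib import Path

def filter_pi_logs_by_cwd(session_files, workdir):
    """Keep session logs whose first-line header `cwd` matches workdir.

    Assumes JSONL logs with a JSON header object on line one; unreadable
    or malformed files are skipped rather than raised on.
    """
    kept = []
    for path in session_files:
        try:
            first_line = Path(path).read_text(encoding="utf-8").splitlines()[0]
            header = json.loads(first_line)
        except (OSError, IndexError, json.JSONDecodeError):
            continue  # not a parseable session log; leave it out
        if header.get("cwd") == str(workdir):
            kept.append(path)
    return kept
```

The same header-probing loop generalizes to `normalize_pi_logs()`, which would walk the remaining lines for assistant entries and their `toolCall` blocks.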
### Task 4: Documentation and full verification
**Files:**
- Modify: `README.md`
- Modify: `evals/README.md`
- [ ] **Step 1: Document Pi install and eval backend**
Add Pi to README quickstart/install list and add backend entry/usage to `evals/README.md`.
- [ ] **Step 2: Run verification**
Run:
```bash
node --experimental-strip-types --test tests/pi/test-pi-extension.mjs
uv run pytest evals/tests/test_backend.py evals/tests/test_setup.py evals/tests/test_normalizer.py -q
```
Expected: all tests pass.

View File

@@ -0,0 +1,77 @@
# Platform-neutral config-file references — Phase B design
## Background
Phase A (see `2026-05-05-platform-neutral-prose-design.md`) replaced generic third-person "Claude" prose with agent-neutral forms. This phase tackles the next category: references to the per-platform instruction file (CLAUDE.md, AGENTS.md, GEMINI.md) inside skills.
The plugin runs on multiple harnesses, and each one reads its own instruction file. Where a skill names CLAUDE.md as if it were the only file, that's a Claude-Code-centric assumption that doesn't hold on Codex / Gemini CLI / OpenCode.
## In scope
Two specific lines in active skills:
1. **`skills/writing-skills/SKILL.md:58`** — `Project-specific conventions (put in CLAUDE.md)`
2. **`skills/receiving-code-review/SKILL.md:30`** — `"You're absolutely right!" (explicit CLAUDE.md violation)`
## Out of scope
- **`skills/using-superpowers/SKILL.md:22, 26`** — instruction-priority list. The list already names all three (CLAUDE.md, GEMINI.md, AGENTS.md) inclusively, which is correct: the section is making a real claim about *what counts as user instruction* on a multi-platform plugin. No change needed.
- **Historical / example artifacts**:
- `skills/systematic-debugging/CREATION-LOG.md` — attribution path (`~/.claude/CLAUDE.md`) is a historical fact.
- `skills/writing-skills/examples/CLAUDE_MD_TESTING.md` — the entire file is a worked example testing CLAUDE.md content variants. The filename, body, and the reference from `testing-skills-with-subagents.md` all stay; normalizing them defeats the example.
- **Platform-tooling references** — Phase D candidates:
- `skills/using-superpowers/SKILL.md:40` (Gemini CLI tool mapping note about GEMINI.md)
- `skills/using-superpowers/references/gemini-tools.md` (`save_memory` persists to GEMINI.md)
## Substitution rules
Two distinct calls, one per in-scope line.
### Rule 1: "where to put project-specific conventions"
`writing-skills/SKILL.md:58`:
- **Before:** `Project-specific conventions (put in CLAUDE.md)`
- **After:** `Project-specific conventions (put in your instructions file)`
Use a generic phrase rather than picking one filename. Different harnesses read different files (CLAUDE.md, AGENTS.md, GEMINI.md, etc.) and the skill should not assume one. The platform-tools reference docs (`references/{codex,copilot,gemini}-tools.md`) are the right place to name each platform's preferred file.
### Rule 2: the "(explicit CLAUDE.md violation)" parenthetical
`receiving-code-review/SKILL.md:30`:
- **Before:** `"You're absolutely right!" (explicit CLAUDE.md violation)`
- **After:** `"You're absolutely right!" (explicit instruction-file violation)`
The parenthetical is doing real work — it signals this phrase isn't just stylistically bad, it actively violates rules many users put in their instruction files. "Instruction file" is the natural cross-platform term covering AGENTS.md / CLAUDE.md / GEMINI.md collectively, and keeps the original signal without picking one filename or softening to "common".
## Commit plan
Atomic commits, in order:
1. **`writing-skills/SKILL.md`** — CLAUDE.md → "your instructions file" in the "where to put project conventions" line
2. **`receiving-code-review/SKILL.md`** — CLAUDE.md → instruction-file in the violation parenthetical
3. **Platform-tools reference docs** — add the preferred per-platform instructions filename (CLAUDE.md, AGENTS.md, GEMINI.md, etc.) to each `references/{codex,copilot,gemini}-tools.md` so readers can resolve "your instructions file" to a real filename.
Each commit message names "Phase B" and the slice.
## Verification
After each commit:
- Read the surrounding paragraph to confirm grammar and meaning still parse.
- `grep -n "CLAUDE\.md" <touched-file>` — no remaining hits in active prose (carve-outs already documented).
After both commits:
- `grep -rn "CLAUDE\.md" skills/` should return only the documented carve-outs (CREATION-LOG, CLAUDE_MD_TESTING and its inbound reference, the priority list in using-superpowers).
## Non-goals
- Do not touch the priority list ordering in `using-superpowers/SKILL.md`. Reordering CLAUDE.md / GEMINI.md / AGENTS.md is an aesthetic change, not a substitution, and out of scope here.
- Do not rename `examples/CLAUDE_MD_TESTING.md` or change its content.
- Do not modify Gemini-CLI-specific tooling references (Phase D candidates).
## Implementation note
Phase B as written here covered three commits and the three non-Claude-Code platform-tools refs. Implementation went one step further: a fourth ref, `references/claude-code-tools.md`, was added in commit `8505703` for symmetry, so Claude Code's instructions-file conventions and tool-name list live alongside the others rather than implicitly in the surrounding skill prose. That addition wasn't anticipated in this spec but is consistent with its intent.

View File

@@ -0,0 +1,94 @@
# Platform-neutral prose — Phase A design
## Background
Superpowers ships to multiple agent runtimes (Claude Code, Codex, Cursor, OpenCode, Copilot CLI, Gemini CLI). Skill content and supporting docs were written first for Claude Code and use "Claude" in places where any runtime's agent applies. OpenAI's vendored fork (openai/plugins#217) attempted a wholesale rewrite that was actively wrong in places — rewriting historical attribution paths, model names, and platform-specific install instructions — and we want to avoid that mistake while still removing platform-centric prose where it is genuinely incidental.
The full effort is broken into phases by reference category. **This spec covers Phase A only:** generic third-person prose mentioning "Claude" in non-platform-specific contexts. Later phases (config-file references, marketing copy, tool-name references) are out of scope here and will get their own specs.
## In scope
Generic prose mentions of "Claude" in:
- `skills/*/SKILL.md` and supporting `.md` files in active skill directories
- `skills/writing-skills/anthropic-best-practices.md`
- `README.md` (only where the mention is generic prose, not platform marketing)
Plus one coined-term rename: **Claude Search Optimization (CSO) → Skill Discovery Optimization (SDO)** in `skills/writing-skills/SKILL.md`.
## Out of scope
- **Platform/runtime statements** — "In Claude Code:", install instructions, tool-mapping references. (Phase D candidate.)
- **Config-file references** — CLAUDE.md, AGENTS.md, GEMINI.md priority lists and "where to put project conventions" callouts. (Phase B.)
- **Tool-name references** — `Skill`, `Bash`, `Read`, `Task`, `TodoWrite`. Skills are written in Claude Code's tool vocabulary; the existing `references/{codex,copilot,gemini}-tools.md` files map them. (At the time this spec was written, the plan was to defer or skip these. Phase E ended up doing them — replacing tool names with action language across active skills and unifying the platform-tools refs around the same vocabulary.)
- **Marketing copy** in README — "Superpowers for Claude Code", platform-named install sections. (Phase C.)
- **Historical artifacts** — `docs/plans/*.md`, `docs/superpowers/specs/*.md`, `CREATION-LOG.md`. These are dated, point-in-time documents; rewriting them rewrites history.
- **Model identifiers** — Claude Haiku / Sonnet / Opus. These are real product names.
- **Filename / URL references** — `CLAUDE.md`, `claude.com`, `claude-plugin/`, paths under `~/.claude/`.
- **`anthropic-best-practices.md` filename** — the file remains named after its source even though we rewrite the prose inside it.
## Replacement style
Use a mix that reads naturally in English:
- **Second person — "your agent"** when addressing the skill author about *their* runtime
- "your agent reads the description"
- **Third person — "the agent" / "agents" / "an agent"** when describing system behavior generically
- "Future agents find your skills"
- "Use words an agent would search for"
- "Agents read SKILL.md only when the skill becomes relevant"
Pick whichever fits the surrounding sentence; do not force consistency at the cost of awkward phrasing. Pluralize when natural ("future agents", "agents read") rather than always saying "the agent".
### Carve-outs that stay as "Claude"
- Model names: Claude Haiku, Claude Sonnet, Claude Opus
- Filenames and URLs: `CLAUDE.md`, `claude.com`, `~/.claude/`
- Branded platform name "Claude Code" wherever it refers to the runtime as such (handled in later phases)
### Coined-term rename
- **Claude Search Optimization (CSO) → Skill Discovery Optimization (SDO)**
- Appears in `skills/writing-skills/SKILL.md` as a section heading and in nearby prose. Rename the heading, the acronym, and any in-file cross-references.
## Files affected
Approximate counts based on a `grep` filtered to exclude carve-outs:
| File | Generic-prose mentions |
|------|------------------------|
| `skills/writing-skills/SKILL.md` | ~12 (includes CSO heading + body) |
| `skills/writing-skills/anthropic-best-practices.md` | ~30 |
| `skills/writing-skills/examples/CLAUDE_MD_TESTING.md` | ~1 — filename stays (it's a CLAUDE.md test artifact); the "Variant C: Claude.AI Emphatic Style" heading also stays (it's a label naming a specific style) |
| `README.md` | ~1 |
Final list confirmed during implementation by re-running the filtered grep.
## Commit plan
Four atomic commits, in order:
1. **Rename CSO → SDO** in `skills/writing-skills/SKILL.md`. Mechanical, isolated, easy to revert if we change our minds about the term.
2. **Active skills prose** — generic "Claude" → "agent" forms across `skills/*/SKILL.md` and supporting `.md`, excluding `anthropic-best-practices.md`.
3. **`anthropic-best-practices.md` prose** — same substitution rules. Separate commit because this file is a vendored adaptation of an external doc; isolating the change makes future reconciliation with upstream easier to read.
4. **README.md prose** *(only if any generic-prose mentions remain after filtering)*. Skipped if empty.
Each commit message names the phase ("Phase A") and the slice ("rename CSO to SDO", "agent prose in active skills", etc.) so the series is self-documenting.
## Verification
After each commit:
- `grep -rn "Claude" <touched-paths>` — every remaining hit must fall into a documented carve-out (model name, filename, URL, "Claude Code" platform name, historical artifact).
- Read the touched file end-to-end — substitutions should not have broken sentence flow, pronoun agreement, or list parallelism.
- No tests to run; this is prose-only.
After the final commit:
- Skim each modified skill in a live session to confirm nothing reads awkwardly.
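The filtered grep can be sketched as a pipeline (a sketch only — the paths and the exclusion pattern are illustrative; extend the pattern to cover every documented carve-out):

```shell
# List "Claude" mentions that do NOT match a documented carve-out
# (model names, the "Claude Code" platform name, filenames, URLs).
grep -rn "Claude" skills/ README.md 2>/dev/null \
  | grep -vE 'Claude (Haiku|Sonnet|Opus)|Claude Code|CLAUDE(\.local)?\.md|claude\.com|[.~/]claude/' \
  || echo "clean"
```

Any line this prints is either a missed substitution or a carve-out the exclusion pattern doesn't cover yet.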
## Non-goals
- Do not change behavior, structure, headings (other than CSO→SDO), examples, code blocks, or YAML frontmatter.
- Do not introduce new sections, callouts, or compatibility notes.
- Do not "improve" prose beyond the substitution while editing.

View File

@@ -0,0 +1,47 @@
# Platform-neutral README ordering — Phase C design
## Background
Phases A and B (see `2026-05-05-platform-neutral-prose-design.md` and `2026-05-05-platform-neutral-config-refs-design.md`) already neutralized generic Claude prose and config-file references in the README. The remaining platform-leaning signal is layout: the README's two platform listings put Claude Code first and aren't strictly alphabetical elsewhere.
This phase fixes the ordering. No prose changes.
## In scope
1. **Quickstart platform list** (`README.md:7`) — the inline link list of supported harnesses
2. **Installation section ordering** (`README.md:35-152`) — the per-harness install sub-sections
## Out of scope
- Prose, marketplace names, plugin IDs, URLs — all factually correct as-is.
- Visual weight of the Claude Code section (which has two sub-sections — official Anthropic marketplace and Superpowers marketplace). Both are real install paths; collapsing them would hide accurate info.
- Section headings and content within each install block — only the ordering of the blocks changes.
## Substitution
Both listings are reordered to strict alphabetical order:
| Old order | New order |
|-----------|-----------|
| Claude Code | Claude Code |
| Codex CLI | Codex App |
| Codex App | Codex CLI |
| Factory Droid | Cursor |
| Gemini CLI | Factory Droid |
| OpenCode | Gemini CLI |
| Cursor | GitHub Copilot CLI |
| GitHub Copilot CLI | OpenCode |
Three moves: Codex App swaps with Codex CLI; Cursor moves up three slots; GitHub Copilot CLI moves up one, which pushes OpenCode to last.
Claude Code remains first by alphabetical chance (`Cl…` precedes `Co…`).
## Commit plan
One atomic commit covering both listings, since changing one without the other would create inconsistency between the quickstart and the installation section.
## Verification
- Quickstart anchors (`#claude-code`, `#codex-app`, etc.) still resolve to existing `### …` headings — no headings renamed.
- Each install sub-section's body is byte-identical pre/post; only positions changed.
- `git diff README.md` shows section moves only, no content edits.
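The ordering check can be mechanized (a sketch — the `### ` heading level is assumed; adjust the pattern to the README's actual structure):

```shell
# Extract the install sub-section headings and confirm they are already
# in case-insensitive alphabetical order.
grep '^### ' README.md | sed 's/^### //' > /tmp/headings.txt
if sort -f /tmp/headings.txt | diff -q - /tmp/headings.txt >/dev/null; then
  echo "alphabetical"
else
  echo "out of order"
fi
```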

View File

@@ -14,7 +14,7 @@ Live in `tests/`. Currently:
- `tests/codex-plugin-sync/` — bash sync verification.
- `tests/claude-code/test-helpers.sh`, `analyze-token-usage.py` — utilities used by remaining bash tests.
- `tests/claude-code/test-subagent-driven-development.sh` — agent-can-describe-SDD test (no drill counterpart; tests description-recall, not behavior).
- `tests/claude-code/test-subagent-driven-development-integration.sh` — extended SDD integration with token analysis (drill covers the YAGNI subset; bash adds commit-count, TodoWrite, and token telemetry assertions).
- `tests/claude-code/test-subagent-driven-development-integration.sh` — extended SDD integration with token analysis (drill covers the YAGNI subset; bash adds commit-count, Claude Code task-tracking, and token telemetry assertions).
- `tests/claude-code/test-worktree-native-preference.sh` — RED-GREEN-REFACTOR validation for worktree skill (drill covers the PRESSURE phase; bash also covers RED/GREEN baselines).
- `tests/explicit-skill-requests/` — Haiku-specific, multi-turn, and skill-name-prompted tests not covered by drill.

evals

Submodule evals updated: f88b0e8ab9...29957de826

View File

@@ -1,6 +1,23 @@
{
"name": "superpowers",
"version": "5.1.0",
"description": "Superpowers skills and runtime bootstrap for coding agents",
"type": "module",
"main": ".opencode/plugins/superpowers.js"
"main": ".opencode/plugins/superpowers.js",
"keywords": [
"pi-package",
"skills",
"tdd",
"debugging",
"collaboration",
"workflow"
],
"pi": {
"extensions": [
"./.pi/extensions/superpowers.ts"
],
"skills": [
"./skills"
]
}
}

View File

@@ -13,7 +13,7 @@
* - Scrollable main content area
* - CSS helpers for common UI patterns
*
* Content is injected via placeholder comment in #claude-content.
* Content is injected via placeholder comment in #frame-content.
*/
* { box-sizing: border-box; margin: 0; padding: 0; }
@@ -77,7 +77,7 @@
.header .status::before { content: ''; width: 6px; height: 6px; background: var(--success); border-radius: 50%; }
.main { flex: 1; overflow-y: auto; }
#claude-content { padding: 2rem; min-height: 100%; }
#frame-content { padding: 2rem; min-height: 100%; }
.indicator-bar {
background: var(--bg-secondary);
@@ -201,7 +201,7 @@
</div>
<div class="main">
<div id="claude-content">
<div id="frame-content">
<!-- CONTENT -->
</div>
</div>

View File

@@ -7,7 +7,7 @@ Use this template when dispatching a spec document reviewer subagent.
**Dispatch after:** Spec document is written to docs/superpowers/specs/
```
Task tool (general-purpose):
Subagent (general-purpose):
description: "Review spec document"
prompt: |
You are a spec document reviewer. Verify this spec is complete and ready for planning.

View File

@@ -49,20 +49,13 @@ Save `screen_dir` and `state_dir` from the response. Tell user to open the URL.
**Launching the server by platform:**
**Claude Code (macOS / Linux):**
**Claude Code:**
```bash
# Default mode works — the script backgrounds the server itself
# Default mode works — the script backgrounds the server itself.
scripts/start-server.sh --project-dir /path/to/project
```
**Claude Code (Windows):**
```bash
# Windows auto-detects and uses foreground mode, which blocks the tool call.
# Use run_in_background: true on the Bash tool call so the server survives
# across conversation turns.
scripts/start-server.sh --project-dir /path/to/project
```
When calling this via the Bash tool, set `run_in_background: true`. Then read `$STATE_DIR/server-info` on the next turn to get the URL and port.
On Windows, the script auto-detects and switches to foreground mode (which blocks the tool call). Use `run_in_background: true` on the Bash tool call so the server survives across conversation turns, then read `$STATE_DIR/server-info` on the next turn to get the URL and port.
**Codex:**
```bash
@@ -78,6 +71,14 @@ scripts/start-server.sh --project-dir /path/to/project
scripts/start-server.sh --project-dir /path/to/project --foreground
```
**Copilot CLI:**
```bash
# Use --foreground and start the server via the bash tool with mode: "async"
# so the process survives across turns. Capture the returned shellId for
# read_bash / stop_bash if you need to interact with it later.
scripts/start-server.sh --project-dir /path/to/project --foreground
```
**Other environments:** The server must keep running in the background across conversation turns. If your environment reaps detached processes, use `--foreground` and launch the command with your platform's background execution mechanism.
If the URL is unreachable from your browser (common in remote/containerized setups), bind a non-loopback host:
@@ -97,7 +98,7 @@ Use `--url-host` to control what hostname is printed in the returned URL JSON.
- Before each write, check that `$STATE_DIR/server-info` exists. If it doesn't (or `$STATE_DIR/server-stopped` exists), the server has shut down — restart it with `start-server.sh` before continuing. The server auto-exits after 30 minutes of inactivity.
- Use semantic filenames: `platform.html`, `visual-style.html`, `layout.html`
- **Never reuse filenames** — each screen gets a fresh file
- Use Write tool — **never use cat/heredoc** (dumps noise into terminal)
- Use your file-creation tool — **never use cat/heredoc** (dumps noise into terminal)
- Server automatically serves the newest file
2. **Tell user what to expect and end your turn:**

View File

@@ -65,14 +65,17 @@ Each agent gets:
### 3. Dispatch in Parallel
```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
Issue all three subagent dispatches in the same response — they run in parallel:
```text
Subagent (general-purpose): "Fix agent-tool-abort.test.ts failures"
Subagent (general-purpose): "Fix batch-completion-behavior.test.ts failures"
Subagent (general-purpose): "Fix tool-approval-race-conditions.test.ts failures"
# All three run concurrently.
```
Multiple dispatch calls in one response = parallel execution. One per response = sequential.
### 4. Review and Integrate
When agents return:

View File

@@ -11,7 +11,7 @@ Load plan, review critically, execute all tasks, report when complete.
**Announce at start:** "I'm using the executing-plans skill to implement this plan."
**Note:** Tell your human partner that Superpowers works much better with access to subagents. The quality of its work will be significantly higher if run on a platform with subagent support (such as Claude Code or Codex). If subagents are available, use superpowers:subagent-driven-development instead of this skill.
**Note:** Tell your human partner that Superpowers works much better with access to subagents. The quality of its work will be significantly higher if run on a platform with subagent support (Claude Code, Codex CLI, Codex App, Copilot CLI, and Gemini CLI all qualify; see the per-platform tool refs in `../using-superpowers/references/`). If subagents are available, use superpowers:subagent-driven-development instead of this skill.
## The Process
@@ -19,7 +19,7 @@ Load plan, review critically, execute all tasks, report when complete.
1. Read plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: Raise them with your human partner before starting
4. If no concerns: Create TodoWrite and proceed
4. If no concerns: Create todos for the plan items and proceed
### Step 2: Execute Tasks

View File

@@ -27,7 +27,7 @@ WHEN receiving code review feedback:
## Forbidden Responses
**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "You're absolutely right!" (explicit instruction-file violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)

View File

@@ -31,7 +31,7 @@ HEAD_SHA=$(git rev-parse HEAD)
**2. Dispatch code reviewer subagent:**
Use Task tool with `general-purpose` type, fill template at `code-reviewer.md`
Dispatch a `general-purpose` subagent, filling the template at [code-reviewer.md](code-reviewer.md)
**Placeholders:**
- `{DESCRIPTION}` - Brief summary of what you built
@@ -100,4 +100,4 @@ You: [Fix progress indicators]
- Show code/tests that prove it works
- Request clarification
See template at: requesting-code-review/code-reviewer.md
See template at: [code-reviewer.md](code-reviewer.md)

View File

@@ -5,7 +5,7 @@ Use this template when dispatching a code reviewer subagent.
**Purpose:** Review completed work against requirements and code quality standards before it cascades into more work.
```
Task tool (general-purpose):
Subagent (general-purpose):
description: "Review code changes"
prompt: |
You are a Senior Code Reviewer with expertise in software architecture,

View File

@@ -57,15 +57,15 @@ digraph process {
"Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [shape=box];
"Code quality reviewer subagent approves?" [shape=diamond];
"Implementer subagent fixes quality issues" [shape=box];
"Mark task complete in TodoWrite" [shape=box];
"Mark task complete in todo list" [shape=box];
}
"Read plan, extract all tasks with full text, note context, create TodoWrite" [shape=box];
"Read plan, extract all tasks with full text, note context, create todos" [shape=box];
"More tasks remain?" [shape=diamond];
"Dispatch final code reviewer subagent for entire implementation" [shape=box];
"Use superpowers:finishing-a-development-branch" [shape=box style=filled fillcolor=lightgreen];
"Read plan, extract all tasks with full text, note context, create TodoWrite" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Read plan, extract all tasks with full text, note context, create todos" -> "Dispatch implementer subagent (./implementer-prompt.md)";
"Dispatch implementer subagent (./implementer-prompt.md)" -> "Implementer subagent asks questions?";
"Implementer subagent asks questions?" -> "Answer questions, provide context" [label="yes"];
"Answer questions, provide context" -> "Dispatch implementer subagent (./implementer-prompt.md)";
@@ -78,8 +78,8 @@ digraph process {
"Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" -> "Code quality reviewer subagent approves?";
"Code quality reviewer subagent approves?" -> "Implementer subagent fixes quality issues" [label="no"];
"Implementer subagent fixes quality issues" -> "Dispatch code quality reviewer subagent (./code-quality-reviewer-prompt.md)" [label="re-review"];
"Code quality reviewer subagent approves?" -> "Mark task complete in TodoWrite" [label="yes"];
"Mark task complete in TodoWrite" -> "More tasks remain?";
"Code quality reviewer subagent approves?" -> "Mark task complete in todo list" [label="yes"];
"Mark task complete in todo list" -> "More tasks remain?";
"More tasks remain?" -> "Dispatch implementer subagent (./implementer-prompt.md)" [label="yes"];
"More tasks remain?" -> "Dispatch final code reviewer subagent for entire implementation" [label="no"];
"Dispatch final code reviewer subagent for entire implementation" -> "Use superpowers:finishing-a-development-branch";
@@ -121,9 +121,9 @@ Implementer subagents report one of four statuses. Handle each appropriately:
## Prompt Templates
- `./implementer-prompt.md` - Dispatch implementer subagent
- `./spec-reviewer-prompt.md` - Dispatch spec compliance reviewer subagent
- `./code-quality-reviewer-prompt.md` - Dispatch code quality reviewer subagent
- [implementer-prompt.md](implementer-prompt.md) - Dispatch implementer subagent
- [spec-reviewer-prompt.md](spec-reviewer-prompt.md) - Dispatch spec compliance reviewer subagent
- [code-quality-reviewer-prompt.md](code-quality-reviewer-prompt.md) - Dispatch code quality reviewer subagent
## Example Workflow
@@ -132,7 +132,7 @@ You: I'm using Subagent-Driven Development to execute this plan.
[Read plan file once: docs/superpowers/plans/feature-plan.md]
[Extract all 5 tasks with full text and context]
[Create TodoWrite with all tasks]
[Create todos for all tasks]
Task 1: Hook installation script

View File

@@ -7,8 +7,8 @@ Use this template when dispatching a code quality reviewer subagent.
**Only dispatch after spec compliance review passes.**
```
Task tool (general-purpose):
Use template at requesting-code-review/code-reviewer.md
Subagent (general-purpose):
Use template at ../requesting-code-review/code-reviewer.md
DESCRIPTION: [task summary, from implementer's report]
PLAN_OR_REQUIREMENTS: Task N from [plan-file]

View File

@@ -3,7 +3,7 @@
Use this template when dispatching an implementer subagent.
```
Task tool (general-purpose):
Subagent (general-purpose):
description: "Implement Task N: [task name]"
prompt: |
You are implementing Task N: [task name]

View File

@@ -5,7 +5,7 @@ Use this template when dispatching a spec compliance reviewer subagent.
**Purpose:** Verify implementer built what was requested (nothing more, nothing less)
```
Task tool (general-purpose):
Subagent (general-purpose):
description: "Review spec compliance for Task N"
prompt: |
You are reviewing whether an implementation matches its specification.

View File

@@ -1,6 +1,6 @@
---
name: using-superpowers
description: Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions
description: Use when starting any conversation - establishes how to find and use skills, requiring skill invocation before ANY response including clarifying questions
---
<SUBAGENT-STOP>
@@ -27,9 +27,13 @@ If CLAUDE.md, GEMINI.md, or AGENTS.md says "don't use TDD" and a skill says "alw
## How to Access Skills
**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you—follow it directly. Never use the Read tool on skill files.
**Never read skill files manually with file tools** — always use your platform's skill-loading mechanism so the skill is properly activated.
**In Copilot CLI:** Use the `skill` tool. Skills are auto-discovered from installed plugins. The `skill` tool works the same as Claude Code's `Skill` tool.
**In Claude Code:** Use the `Skill` tool. When you invoke a skill, its content is loaded and presented to you — follow it directly.
**In Codex:** Skills load natively. Follow the instructions presented when a skill activates.
**In Copilot CLI:** Use the `skill` tool. Skills are auto-discovered from installed plugins.
**In Gemini CLI:** Skills activate via the `activate_skill` tool. Gemini loads skill metadata at session start and activates the full content on demand.
@@ -37,7 +41,7 @@ If CLAUDE.md, GEMINI.md, or AGENTS.md says "don't use TDD" and a skill says "alw
## Platform Adaptation
Skills use Claude Code tool names. Non-CC platforms: see `references/copilot-tools.md` (Copilot CLI), `references/codex-tools.md` (Codex) for tool equivalents. Gemini CLI users get the tool mapping loaded automatically via GEMINI.md.
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file") rather than naming any one runtime's tools. For per-platform tool equivalents and instructions-file conventions, see [claude-code-tools.md](references/claude-code-tools.md), [codex-tools.md](references/codex-tools.md), [copilot-tools.md](references/copilot-tools.md), [gemini-tools.md](references/gemini-tools.md), and [pi-tools.md](references/pi-tools.md). Gemini CLI users get the tool mapping loaded automatically via GEMINI.md.
# Using Skills
@@ -48,30 +52,30 @@ Skills use Claude Code tool names. Non-CC platforms: see `references/copilot-too
```dot
digraph skill_flow {
"User message received" [shape=doublecircle];
"About to EnterPlanMode?" [shape=doublecircle];
"About to enter plan mode?" [shape=doublecircle];
"Already brainstormed?" [shape=diamond];
"Invoke brainstorming skill" [shape=box];
"Might any skill apply?" [shape=diamond];
"Invoke Skill tool" [shape=box];
"Invoke the skill" [shape=box];
"Announce: 'Using [skill] to [purpose]'" [shape=box];
"Has checklist?" [shape=diamond];
"Create TodoWrite todo per item" [shape=box];
"Create a todo per item" [shape=box];
"Follow skill exactly" [shape=box];
"Respond (including clarifications)" [shape=doublecircle];
"About to EnterPlanMode?" -> "Already brainstormed?";
"About to enter plan mode?" -> "Already brainstormed?";
"Already brainstormed?" -> "Invoke brainstorming skill" [label="no"];
"Already brainstormed?" -> "Might any skill apply?" [label="yes"];
"Invoke brainstorming skill" -> "Might any skill apply?";
"User message received" -> "Might any skill apply?";
"Might any skill apply?" -> "Invoke Skill tool" [label="yes, even 1%"];
"Might any skill apply?" -> "Invoke the skill" [label="yes, even 1%"];
"Might any skill apply?" -> "Respond (including clarifications)" [label="definitely not"];
"Invoke Skill tool" -> "Announce: 'Using [skill] to [purpose]'";
"Invoke the skill" -> "Announce: 'Using [skill] to [purpose]'";
"Announce: 'Using [skill] to [purpose]'" -> "Has checklist?";
"Has checklist?" -> "Create TodoWrite todo per item" [label="yes"];
"Has checklist?" -> "Create a todo per item" [label="yes"];
"Has checklist?" -> "Follow skill exactly" [label="no"];
"Create TodoWrite todo per item" -> "Follow skill exactly";
"Create a todo per item" -> "Follow skill exactly";
}
```

View File

@@ -0,0 +1,50 @@
# Claude Code Tool Mapping
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Claude Code these resolve to the tools below.
## Tools
| Action skills request | Claude Code tool |
|----------------------|------------------|
| Read a file | `Read` |
| Create a new file | `Write` |
| Edit a file | `Edit` |
| Run a shell command | `Bash` |
| Search file contents | `Grep` |
| Find files by name | `Glob` |
| Fetch a URL | `WebFetch` |
| Search the web | `WebSearch` |
| Invoke a skill | `Skill` |
| Dispatch a subagent (`Subagent (general-purpose):` template) | `Agent` (older releases named this `Task`) |
| Multiple parallel dispatches | Multiple `Agent` calls in one response |
| Task tracking ("create a todo", "mark complete") | `TaskCreate`, `TaskUpdate`, `TaskList`, `TaskGet`; `TodoWrite` in `claude -p` / Agent SDK unless `CLAUDE_CODE_ENABLE_TASKS=1` is set |
| Background-process / subagent lifecycle (read output, cancel) | `TaskOutput`, `TaskStop` — these are distinct from the todo tools above and apply to running shells, agents, and remote sessions |
## Instructions file
When a skill mentions "your instructions file", on Claude Code this is **`CLAUDE.md`**. Claude Code walks up the directory tree from the current working directory and concatenates every `CLAUDE.md` and `CLAUDE.local.md` it finds along the way. Standard locations:
| Scope | Location |
|-------|----------|
| Project (team-shared) | `./CLAUDE.md` or `./.claude/CLAUDE.md` |
| User global | `~/.claude/CLAUDE.md` |
| Local-private (gitignored) | `./CLAUDE.local.md` |
| Managed policy (org-wide) | `/Library/Application Support/ClaudeCode/CLAUDE.md` (macOS), `/etc/claude-code/CLAUDE.md` (Linux/WSL), `C:\Program Files\ClaudeCode\CLAUDE.md` (Windows) |
CLAUDE.md files can pull in additional content with `@path/to/file` imports (relative or absolute, max five hops deep). Subdirectory `CLAUDE.md` files are also discovered automatically and loaded on-demand when Claude Code reads files in those subdirectories.
Claude Code does **not** read `AGENTS.md` directly. If a project already maintains `AGENTS.md` for other agents, import it from `CLAUDE.md` so both runtimes share the same instructions:
```markdown
@AGENTS.md
## Claude Code
(Claude-Code-specific instructions go here.)
```
For path-scoped rules and larger-project organization, see `.claude/rules/` (rules can be scoped to specific files via `paths` frontmatter and load on demand).
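A path-scoped rule file might look like this (a sketch — the `paths` frontmatter key comes from the rules mechanism described above; the file name, glob, and rule text are placeholders):

```markdown
---
paths:
  - "src/**/*.test.ts"
---
Test files under src/ follow the project's TDD conventions: write the
failing test first, and never edit a test just to make it pass.
```

Saved as, for example, `.claude/rules/testing.md`, the rule loads only when matching files are in play.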
## Personal skills directory
User-level skills live at **`~/.claude/skills/`**. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter) plus any supporting files. Claude Code does not currently recognize the cross-runtime `~/.agents/skills/` path that Codex, Copilot CLI, and Gemini CLI read; if you're relying on cross-runtime support in the future, verify against the [official skills docs](https://code.claude.com/docs/en/skills).
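Scaffolding a personal skill is just a directory plus a `SKILL.md` with the two frontmatter fields (a minimal sketch — the skill name and description here are placeholders):

```shell
# Create a minimal personal skill for Claude Code.
mkdir -p ~/.claude/skills/commit-hygiene
printf '%s\n' \
  '---' \
  'name: commit-hygiene' \
  'description: Use when preparing commits - enforces atomic commits and message style' \
  '---' \
  '# Commit Hygiene' \
  > ~/.claude/skills/commit-hygiene/SKILL.md
```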

View File

@@ -1,17 +1,30 @@
# Codex Tool Mapping
Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Codex these resolve to the tools below.
| Skill references | Codex equivalent |
|-----------------|------------------|
| `Task` tool (dispatch subagent) | `spawn_agent` (see [Subagent dispatch requires multi-agent support](#subagent-dispatch-requires-multi-agent-support)) |
| Multiple `Task` calls (parallel) | Multiple `spawn_agent` calls |
| Task returns result | `wait_agent` |
| Task completes automatically | `close_agent` to free slot |
| `TodoWrite` (task tracking) | `update_plan` |
| `Skill` tool (invoke a skill) | Skills load natively — just follow the instructions |
| `Read`, `Write`, `Edit` (files) | Use your native file tools |
| `Bash` (run commands) | Use your native shell tools |
| Action skills request | Codex equivalent |
|----------------------|------------------|
| Read a file | `shell` (e.g., `cat`, `head`, `tail`) — Codex reads files via shell |
| Create / edit / delete a file | `apply_patch` (structured diff for create, update, delete) |
| Run a shell command | `shell` |
| Search file contents | `shell` (e.g., `grep`, `rg`) |
| Find files by name | `shell` (e.g., `find`, `ls`) |
| Fetch a URL | `shell` with `curl` / `wget` — Codex has no native fetch tool |
| Search the web | `web_search` (enabled by default; configurable in `config.toml` via the top-level `web_search` setting — `live`, `cached`, or `disabled`) |
| Invoke a skill | Skills load natively — just follow the instructions |
| Dispatch a subagent (`Subagent (general-purpose):` template) | `spawn_agent` (see [Subagent dispatch requires multi-agent support](#subagent-dispatch-requires-multi-agent-support)) |
| Multiple parallel dispatches | Multiple `spawn_agent` calls in one response |
| Wait for subagent result | `wait_agent` |
| Free up subagent slot when done | `close_agent` |
| Task tracking ("create a todo", "mark complete") | `update_plan` |
## Instructions file
When a skill mentions "your instructions file", on Codex this is **`AGENTS.md`** at the project root. Codex also reads `~/.codex/AGENTS.md` for global context, and an `AGENTS.override.md` (in the project tree or `~/.codex/`) takes precedence when present. Codex walks from the project root down to the current working directory, concatenating `AGENTS.md` files it finds along the way, up to `project_doc_max_bytes` (32 KiB by default).
## Personal skills directory
User-level skills live at **`$CODEX_HOME/skills/`** (default `~/.codex/skills/`). Codex also reads the cross-runtime path **`~/.agents/skills/`** (shared with Copilot CLI and Gemini CLI). When both directories exist at the same scope, Codex loads them both as separate skill catalogs — Codex's docs don't currently document a precedence between them. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter).
## Subagent dispatch requires multi-agent support

View File

@@ -1,31 +1,38 @@
# Copilot CLI Tool Mapping
Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Copilot CLI these resolve to the tools below.
| Skill references | Copilot CLI equivalent |
|-----------------|----------------------|
| `Read` (file reading) | `view` |
| `Write` (file creation) | `create` |
| `Edit` (file editing) | `edit` |
| `Bash` (run commands) | `bash` |
| `Grep` (search file content) | `grep` |
| `Glob` (search files by name) | `glob` |
| `Skill` tool (invoke a skill) | `skill` |
| `WebFetch` | `web_fetch` |
| `Task` tool (dispatch subagent) | `task` with `agent_type: "general-purpose"` or `"explore"` |
| Multiple `Task` calls (parallel) | Multiple `task` calls |
| Task status/output | `read_agent`, `list_agents` |
| `TodoWrite` (task tracking) | `sql` with built-in `todos` table |
| `WebSearch` | No equivalent — use `web_fetch` with a search engine URL |
| `EnterPlanMode` / `ExitPlanMode` | No equivalent — stay in the main session |
| Action skills request | Copilot CLI equivalent |
|----------------------|----------------------|
| Read a file | `view` |
| Create / edit / delete a file | `apply_patch` (Copilot CLI has no separate create/edit/write tools) |
| Run a shell command | `bash` |
| Search file contents | `rg` (ripgrep; Copilot CLI does not expose a `grep` tool) |
| Find files by name | `glob` |
| Fetch a URL | `web_fetch` |
| Search the web | `web_search` |
| Invoke a skill | `skill` |
| Dispatch a subagent (`Subagent (general-purpose):` template) | `task` with `agent_type: "general-purpose"` (other accepted types: `explore`, `task`, `code-review`, `research`, `configure-copilot`) |
| Multiple parallel dispatches | Multiple `task` calls in one response |
| Subagent status/output/control | `read_agent`, `list_agents`, `write_agent` |
| Task tracking ("create a todo", "mark complete") | `update_todo` |
| Enter / exit plan mode | No equivalent — stay in the main session |
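As a hedged sketch of the mapping above, here is what a subagent dispatch might look like as a tool-call payload. The wire format is an assumption; only the tool name (`task`) and the accepted `agent_type` values come from the table:

```python
# Hypothetical Copilot CLI subagent dispatch, expressed as a tool-call
# payload. Exact field names beyond `agent_type` are illustrative.
dispatch = {
    "tool": "task",
    "arguments": {
        "agent_type": "general-purpose",  # or: explore, task, code-review, research, configure-copilot
        "prompt": "Review this plan document for completeness...",  # filled from the skill's prompt template
    },
}

accepted_types = {
    "general-purpose", "explore", "task",
    "code-review", "research", "configure-copilot",
}
assert dispatch["arguments"]["agent_type"] in accepted_types
```

Parallel dispatch, per the table, is simply multiple such `task` calls emitted in one response.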
## Instructions file
When a skill mentions "your instructions file", on Copilot CLI this is **`AGENTS.md`** at the repository root. If both `AGENTS.md` and `.github/copilot-instructions.md` are present, Copilot reads both.
## Personal skills directory
User-level skills live at **`~/.copilot/skills/`**. Copilot CLI also recognizes the cross-runtime alias **`~/.agents/skills/`**, which is shared with Codex and Gemini CLI. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter).
## Async shell sessions
Copilot CLI supports persistent async shell sessions, which have no direct Claude Code equivalent:
Copilot CLI supports persistent async shell sessions:
| Tool | Purpose |
|------|---------|
| `bash` with `async: true` | Start a long-running command in the background |
| `bash` with `mode: "async"` (and optionally `detach: true`) | Start a long-running command in the background; returns a `shellId` |
| `write_bash` | Send input to a running async session |
| `read_bash` | Read output from an async session |
| `stop_bash` | Terminate an async session |
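The async session lifecycle in the table can be sketched as a call sequence. This is illustrative only — the command, the `shellId` placeholder, and the argument names beyond `mode`/`detach` are assumptions:

```python
# Hedged sketch of one async shell session lifecycle on Copilot CLI,
# written as an ordered list of (tool, arguments) calls.
session = [
    ("bash",       {"command": "npm run build --watch", "mode": "async", "detach": True}),
    ("read_bash",  {"shellId": "<returned shellId>"}),                    # poll output
    ("write_bash", {"shellId": "<returned shellId>", "input": "q\n"}),    # send input
    ("stop_bash",  {"shellId": "<returned shellId>"}),                    # terminate when done
]

# The session must start with an async bash call and end with stop_bash.
assert session[0][0] == "bash" and session[0][1]["mode"] == "async"
assert session[-1][0] == "stop_bash"
```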


@@ -1,51 +1,63 @@
# Gemini CLI Tool Mapping
Skills use Claude Code tool names. When you encounter these in a skill, use your platform equivalent:
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Gemini CLI these resolve to the tools below.
| Skill references | Gemini CLI equivalent |
|-----------------|----------------------|
| `Read` (file reading) | `read_file` |
| `Write` (file creation) | `write_file` |
| `Edit` (file editing) | `replace` |
| `Bash` (run commands) | `run_shell_command` |
| `Grep` (search file content) | `grep_search` |
| `Glob` (search files by name) | `glob` |
| `TodoWrite` (task tracking) | `write_todos` |
| `Skill` tool (invoke a skill) | `activate_skill` |
| `WebSearch` | `google_web_search` |
| `WebFetch` | `web_fetch` |
| `Task` tool (dispatch subagent) | `@agent-name` (see [Subagent support](#subagent-support)) |
| Action skills request | Gemini CLI equivalent |
|----------------------|----------------------|
| Read a file | `read_file` |
| Read multiple files at once | `read_many_files` |
| Create a new file | `write_file` |
| Edit a file | `replace` |
| Run a shell command | `run_shell_command` |
| Search file contents | `grep_search` |
| Find files by name | `glob` |
| List files and subdirectories | `list_directory` |
| Fetch a URL | `web_fetch` |
| Search the web | `google_web_search` |
| Invoke a skill | `activate_skill` |
| Dispatch a subagent (`Subagent (general-purpose):` template) | `invoke_agent` with `agent_name: "generalist"` (invocable via `@generalist` chat syntax — see [Subagent support](#subagent-support)) |
| Multiple parallel dispatches | Multiple `invoke_agent` calls in the same response |
| Task tracking ("create a todo", "mark complete") | `write_todos` (statuses: pending, in_progress, completed, cancelled, blocked) |
## Instructions file
When a skill mentions "your instructions file", on Gemini CLI this is **`GEMINI.md`**. Gemini CLI loads `GEMINI.md` hierarchically: global at `~/.gemini/GEMINI.md`, project-level files in workspace directories and their ancestors, and sub-directory `GEMINI.md` files when a tool accesses files in those directories.
## Personal skills directory
User-level skills live at **`~/.gemini/skills/`**, with **`~/.agents/skills/`** as a cross-runtime alias (shared with Codex and Copilot CLI). When both directories exist at the same scope, `.agents/skills/` takes precedence. Each skill is a subdirectory containing a `SKILL.md` (with `name` and `description` frontmatter).
## Subagent support
Gemini CLI supports subagents natively via the `@` syntax. Use the built-in `@generalist` agent to dispatch any task — it has access to all tools and follows the prompt you provide.
Gemini CLI dispatches subagents through the `invoke_agent` tool, which takes `agent_name` and `prompt` parameters. The same dispatch is also surfaced as a chat-syntax shortcut: typing `@generalist <prompt>` is equivalent to calling `invoke_agent` with `agent_name: "generalist"`. Built-in agent names include `generalist`, `cli_help`, `codebase_investigator`, and (with browser tooling enabled) `browser_agent`.
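The equivalence described above can be made concrete with a small sketch. The payload tuple shape is an assumption; the tool name and its `agent_name`/`prompt` parameters come from the text:

```python
# Sketch of the two equivalent dispatch forms on Gemini CLI: the @generalist
# chat shortcut and the underlying invoke_agent tool call.
prompt = "Investigate the flaky test in tests/test_io.py"

chat_form = f"@generalist {prompt}"
tool_form = ("invoke_agent", {"agent_name": "generalist", "prompt": prompt})

# Same prompt reaches the same agent either way.
assert chat_form.split(" ", 1)[1] == tool_form[1]["prompt"]
```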
When a skill says to dispatch a named agent type, use `@generalist` with the full prompt from the skill's prompt template:
Skills dispatch with `Subagent (general-purpose):` and either reference a prompt-template file (e.g., `superpowers:subagent-driven-development`'s `./implementer-prompt.md`) or supply an inline prompt. On Gemini CLI:
| Skill instruction | Gemini CLI equivalent |
|-------------------|----------------------|
| `Task tool (superpowers:implementer)` | `@generalist` with the filled `implementer-prompt.md` template |
| `Task tool (superpowers:spec-reviewer)` | `@generalist` with the filled `spec-reviewer-prompt.md` template |
| `Task tool (superpowers:code-reviewer)` | `@code-reviewer` (bundled agent) or `@generalist` with the filled review prompt |
| `Task tool (superpowers:code-quality-reviewer)` | `@generalist` with the filled `code-quality-reviewer-prompt.md` template |
| `Task tool (general-purpose)` with inline prompt | `@generalist` with your inline prompt |
| Skill dispatch form | Gemini CLI equivalent |
|---------------------|----------------------|
| References a `*-prompt.md` template (implementer, spec-reviewer, code-quality-reviewer, code-reviewer, etc.) | Fill the template, then `invoke_agent` with `agent_name: "generalist"` and the filled prompt |
| References `superpowers:requesting-code-review`'s `./code-reviewer.md` | `invoke_agent` with `agent_name: "generalist"` and the filled review template |
| Inline prompt (no template referenced) | `invoke_agent` with `agent_name: "generalist"` and your inline prompt |
### Prompt filling
Skills provide prompt templates with placeholders like `{WHAT_WAS_IMPLEMENTED}` or `[FULL TEXT of task]`. Fill all placeholders and pass the complete prompt as the message to `@generalist`. The prompt template itself contains the agent's role, review criteria, and expected output format — `@generalist` will follow it.
Skills provide prompt templates with placeholders like `{WHAT_WAS_IMPLEMENTED}` or `[FULL TEXT of task]`. Fill all placeholders before passing the complete prompt to `invoke_agent`. The prompt template itself contains the agent's role, review criteria, and expected output format — the subagent will follow it.
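A minimal sketch of the filling step, assuming a one-placeholder template (real templates ship with each skill, e.g. `implementer-prompt.md`, and carry the full role and output-format text):

```python
# Fill every placeholder before passing the prompt to invoke_agent.
# Template text and placeholder value here are illustrative.
template = (
    "You are a code reviewer.\n"
    "Review: {WHAT_WAS_IMPLEMENTED}\n"
    "Report findings with file:line references."
)
filled = template.replace("{WHAT_WAS_IMPLEMENTED}", "the retry logic in fetch.py")

# No unfilled placeholders may remain at dispatch time.
assert "{" not in filled
```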
### Parallel dispatch
Gemini CLI supports parallel subagent dispatch. When a skill asks you to dispatch multiple independent subagent tasks in parallel, request all of those `@generalist` or named subagent tasks together in the same prompt. Keep dependent tasks sequential, but do not serialize independent subagent tasks just to preserve a simpler history.
Gemini CLI supports parallel subagent dispatch. Issue multiple `invoke_agent` calls in the same response (or multiple `@generalist` invocations in one prompt) to run independent subagent work in parallel. Keep dependent tasks sequential, but do not serialize independent subagent tasks just to preserve a simpler history.
## Additional Gemini CLI tools
These tools are available in Gemini CLI but have no Claude Code equivalent:
These tools are unique to Gemini CLI:
| Tool | Purpose |
|------|---------|
| `list_directory` | List files and subdirectories |
| `save_memory` | Persist facts to GEMINI.md across sessions |
| `ask_user` | Request structured input from the user |
| `tracker_create_task` | Rich task management (create, update, list, visualize) |
| `enter_plan_mode` / `exit_plan_mode` | Switch to read-only research mode before making changes |
| `save_memory` (legacy) | Persist facts across sessions when `experimental.memoryV2 = false` |
| `get_internal_docs` | Look up Gemini CLI's bundled documentation |
| `ask_user` | Pose structured questions to the user (text / single-select / multi-select) |
| `enter_plan_mode` / `exit_plan_mode` | Switch into and out of read-only plan mode |
| `update_topic` | Update the current conversation's topic / strategic-intent metadata |
| `complete_task` | Signal that a Gemini subagent has completed and return its result to the parent agent |
| `tracker_create_task`, `tracker_update_task`, `tracker_get_task`, `tracker_list_tasks`, `tracker_add_dependency`, `tracker_visualize` | Rich task tracker with dependency and visualization support |
| `read_mcp_resource`, `list_mcp_resources` | MCP resource access |


@@ -0,0 +1,28 @@
# Pi Tool Mapping
Skills speak in actions ("dispatch a subagent", "create a todo", "read a file"). On Pi these resolve to the tools below.
| Action skills request | Pi equivalent |
| --- | --- |
| Invoke a skill | Pi native skills: load the relevant `SKILL.md` with `read`, or let the human use `/skill:name` |
| Read a file | `read` |
| Create a file | `write` |
| Edit a file | `edit` |
| Run a shell command | `bash` |
| Search file contents | `grep` when active; otherwise `bash` with `rg`/`grep` |
| Find files by name | `find` or `bash` with shell globs |
| List files and subdirectories | `ls` when active; otherwise `bash` with `ls` |
| Dispatch a subagent (`Subagent (general-purpose):` template) | Use an installed subagent tool such as `subagent` from `pi-subagents` if available |
| Task tracking ("create a todo", "mark complete") | Use an installed todo/task tool if available, otherwise track tasks in the plan or `TODO.md` |
## Skills
Pi discovers skills from configured skill directories and installed Pi packages. A Superpowers Pi package should expose `skills/` through its `pi.skills` manifest entry. Pi does not expose Claude Code's `Skill` tool, but the agent should still follow the Superpowers rule: when a skill applies, load and follow it before responding.
## Subagents
Pi core does not ship a standard subagent tool. The `pi-subagents` package is the recommended optional companion: it provides a `subagent` tool with single-agent, chain, parallel, async, forked-context, and resume/status workflows. If no subagent tool is available, do not fabricate `Task` calls; execute sequentially in the current session or explain that the optional subagent capability is not installed.
## Task lists
Pi core does not ship a standard task-list tool. If a todo/task extension is installed, use its documented tool. Otherwise use Superpowers plan files, checklists in Markdown, or a repo-local `TODO.md` for task tracking. Older Superpowers docs may refer to `TodoWrite`; treat that as the task-tracking action above.
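The `TODO.md` fallback above can be sketched as follows — a hedged, illustrative example of tracking tasks as a Markdown checklist when no todo tool is installed (the task names are hypothetical):

```python
# Render a task list as a Markdown checklist for a repo-local TODO.md,
# marking completed items with [x] and pending items with [ ].
todos = ["Write failing test", "Implement fix", "Run full suite"]
done = {"Write failing test"}

lines = [f"- [{'x' if t in done else ' '}] {t}" for t in todos]
todo_md = "# TODO\n" + "\n".join(lines) + "\n"
```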


@@ -7,7 +7,7 @@ Use this template when dispatching a plan document reviewer subagent.
**Dispatch after:** The complete plan is written.
```
Task tool (general-purpose):
Subagent (general-purpose):
description: "Review plan document"
prompt: |
You are a plan document reviewer. Verify this plan is complete and ready for implementation.


@@ -9,7 +9,7 @@ description: Use when creating new skills, editing existing skills, or verifying
**Writing skills IS Test-Driven Development applied to process documentation.**
**Personal skills live in agent-specific directories (`~/.claude/skills` for Claude Code, `~/.agents/skills/` for Codex)**
**Personal skills live in your runtime's skills directory** — see [claude-code-tools.md](../using-superpowers/references/claude-code-tools.md), [codex-tools.md](../using-superpowers/references/codex-tools.md), [copilot-tools.md](../using-superpowers/references/copilot-tools.md), or [gemini-tools.md](../using-superpowers/references/gemini-tools.md) for the path on your runtime. Codex, Copilot CLI, and Gemini CLI all also recognize `~/.agents/skills/` as a cross-runtime alias.
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
@@ -21,7 +21,7 @@ You write test cases (pressure scenarios with subagents), watch them fail (basel
## What is a Skill?
A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
A **skill** is a reference guide for proven techniques, patterns, or tools. Skills help future agents find and apply effective approaches.
**Skills are:** Reusable techniques, patterns, tools, reference guides
@@ -55,7 +55,7 @@ The entire skill creation process follows RED-GREEN-REFACTOR.
**Don't create for:**
- One-off solutions
- Standard practices well-documented elsewhere
- Project-specific conventions (put in CLAUDE.md)
- Project-specific conventions (put in your instructions file)
- Mechanical constraints (if it's enforceable with regex/validation, automate it—save documentation for judgment calls)
## Skill Types
@@ -99,7 +99,7 @@ skills/
- `description`: Third-person, describes ONLY when to use (NOT what it does)
- Start with "Use when..." to focus on triggering conditions
- Include specific symptoms, situations, and contexts
- **NEVER summarize the skill's process or workflow** (see CSO section for why)
- **NEVER summarize the skill's process or workflow** (see SDO section for why)
- Keep under 500 characters if possible
```markdown
@@ -137,13 +137,13 @@ Concrete results
```
## Claude Search Optimization (CSO)
## Skill Discovery Optimization (SDO)
**Critical for discovery:** Future Claude needs to FIND your skill
**Critical for discovery:** Future agents need to FIND your skill
### 1. Rich Description Field
**Purpose:** Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
**Purpose:** Your agent reads the description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
**Format:** Start with "Use when..." to focus on triggering conditions
@@ -151,14 +151,14 @@ Concrete results
The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.
**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).
**Why this matters:** Testing revealed that when a description summarizes the skill's workflow, an agent may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused an agent to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).
When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.
When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), the agent correctly read the flowchart and followed the two-stage review process.
**The trap:** Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.
**The trap:** Descriptions that summarize workflow create a shortcut agents will take. The skill body becomes documentation agents skip.
```yaml
# ❌ BAD: Summarizes workflow - Claude may follow this instead of reading skill
# ❌ BAD: Summarizes workflow - agents may follow this instead of reading skill
description: Use when executing plans - dispatches subagent per task with code review between tasks
# ❌ BAD: Too much process detail
@@ -198,7 +198,7 @@ description: Use when using React Router and handling authentication redirects
### 2. Keyword Coverage
Use words Claude would search for:
Use words an agent would search for:
- Error messages: "Hook timed out", "ENOTEMPTY", "race condition"
- Symptoms: "flaky", "hanging", "zombie", "pollution"
- Synonyms: "timeout/hang/freeze", "cleanup/teardown/afterEach"
@@ -275,7 +275,7 @@ wc -w skills/path/SKILL.md
- `creating-skills`, `testing-skills`, `debugging-with-logs`
- Active, describes the action you're taking
### 4. Cross-Referencing Other Skills
### 5. Cross-Referencing Other Skills
**When writing documentation that references other skills:**
@@ -313,7 +313,7 @@ digraph when_flowchart {
- Linear instructions → Numbered lists
- Labels without semantic meaning (step1, helper2)
See @graphviz-conventions.dot for graphviz style rules.
See `graphviz-conventions.dot` in this directory for graphviz style rules.
**Visualizing for your human partner:** Use `render-graphs.js` in this directory to render a skill's flowcharts to SVG:
```bash
@@ -522,7 +522,7 @@ Make it easy for agents to self-check when rationalizing:
**All of these mean: Delete code. Start over with TDD.**
```
### Update CSO for Violation Symptoms
### Update SDO for Violation Symptoms
Add to description: symptoms of when you're ABOUT to violate the rule:
@@ -595,7 +595,7 @@ Deploying untested skills = deploying untested code. It's a violation of quality
## Skill Creation Checklist (TDD Adapted)
**IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.**
**IMPORTANT: Create a todo for EACH checklist item below.**
**RED Phase - Write Failing Test:**
- [ ] Create pressure scenarios (3+ combined pressures for discipline skills)
@@ -634,9 +634,10 @@ Deploying untested skills = deploying untested code. It's a violation of quality
## Discovery Workflow
How future Claude finds your skill:
How future agents find your skill:
1. **Encounters problem** ("tests are flaky")
2. **Searches skills** (greps descriptions, browses categories)
3. **Finds SKILL** (description matches)
4. **Scans overview** (is this relevant?)
5. **Reads patterns** (quick reference table)


@@ -1,30 +1,30 @@
# Skill authoring best practices
> Learn how to write effective Skills that Claude can discover and use successfully.
> Learn how to write effective Skills that agents can discover and use successfully.
Good Skills are concise, well-structured, and tested with real usage. This guide provides practical authoring decisions to help you write Skills that Claude can discover and use effectively.
Good Skills are concise, well-structured, and tested with real usage. This guide provides practical authoring decisions to help you write Skills that agents can discover and use effectively.
For conceptual background on how Skills work, see the [Skills overview](/en/docs/agents-and-tools/agent-skills/overview).
For conceptual background on how Skills work, see the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview).
## Core principles
### Concise is key
The [context window](https://platform.claude.com/docs/en/build-with-claude/context-windows) is a public good. Your Skill shares the context window with everything else Claude needs to know, including:
The [context window](https://platform.claude.com/docs/en/build-with-claude/context-windows) is a public good. Your Skill shares the context window with everything else your agent needs to know, including:
* The system prompt
* Conversation history
* Other Skills' metadata
* Your actual request
Not every token in your Skill has an immediate cost. At startup, only the metadata (name and description) from all Skills is pre-loaded. Claude reads SKILL.md only when the Skill becomes relevant, and reads additional files only as needed. However, being concise in SKILL.md still matters: once Claude loads it, every token competes with conversation history and other context.
Not every token in your Skill has an immediate cost. At startup, only the metadata (name and description) from all Skills is pre-loaded. Agents read SKILL.md only when the Skill becomes relevant, and read additional files only as needed. However, being concise in SKILL.md still matters: once an agent loads it, every token competes with conversation history and other context.
**Default assumption**: Claude is already very smart
**Default assumption**: Agents are already very smart
Only add context Claude doesn't already have. Challenge each piece of information:
Only add context agents don't already have. Challenge each piece of information:
* "Does Claude really need this explanation?"
* "Can I assume Claude knows this?"
* "Does the agent really need this explanation?"
* "Can I assume the agent knows this?"
* "Does this paragraph justify its token cost?"
**Good example: Concise** (approximately 50 tokens):
@@ -54,7 +54,7 @@ recommend pdfplumber because it's easy to use and handles most cases well.
First, you'll need to install it using pip. Then you can use the code below...
```
The concise version assumes Claude knows what PDFs are and how libraries work.
The concise version assumes the agent knows what PDFs are and how libraries work.
### Set appropriate degrees of freedom
@@ -124,10 +124,10 @@ python scripts/migrate.py --verify --backup
Do not modify the command or add additional flags.
````
**Analogy**: Think of Claude as a robot exploring a path:
**Analogy**: Think of the agent as a robot exploring a path:
* **Narrow bridge with cliffs on both sides**: There's only one safe way forward. Provide specific guardrails and exact instructions (low freedom). Example: database migrations that must run in exact sequence.
* **Open field with no hazards**: Many paths lead to success. Give general direction and trust Claude to find the best route (high freedom). Example: code reviews where context determines the best approach.
* **Open field with no hazards**: Many paths lead to success. Give general direction and trust the agent to find the best route (high freedom). Example: code reviews where context determines the best approach.
### Test with all models you plan to use
@@ -149,7 +149,7 @@ What works perfectly for Opus might need more detail for Haiku. If you plan to u
* `name` - Human-readable name of the Skill (64 characters maximum)
* `description` - One-line description of what the Skill does and when to use it (1024 characters maximum)
For complete Skill structure details, see the [Skills overview](/en/docs/agents-and-tools/agent-skills/overview#skill-structure).
For complete Skill structure details, see the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#skill-structure).
</Note>
### Naming conventions
@@ -196,7 +196,7 @@ The `description` field enables Skill discovery and should include both what the
**Be specific and include key terms**. Include both what the Skill does and specific triggers/contexts for when to use it.
Each Skill has exactly one description field. The description is critical for skill selection: Claude uses it to choose the right Skill from potentially 100+ available Skills. Your description must provide enough detail for Claude to know when to select this Skill, while the rest of SKILL.md provides the implementation details.
Each Skill has exactly one description field. The description is critical for skill selection: agents use it to choose the right Skill from potentially 100+ available Skills. Your description must provide enough detail for an agent to know when to select this Skill, while the rest of SKILL.md provides the implementation details.
Effective examples:
@@ -234,7 +234,7 @@ description: Does stuff with files
### Progressive disclosure patterns
SKILL.md serves as an overview that points Claude to detailed materials as needed, like a table of contents in an onboarding guide. For an explanation of how progressive disclosure works, see [How Skills work](/en/docs/agents-and-tools/agent-skills/overview#how-skills-work) in the overview.
SKILL.md serves as an overview that points agents to detailed materials as needed, like a table of contents in an onboarding guide. For an explanation of how progressive disclosure works, see [How Skills work](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#how-skills-work) in the overview.
**Practical guidance:**
@@ -248,7 +248,7 @@ A basic Skill starts with just a SKILL.md file containing metadata and instructi
<img src="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=87782ff239b297d9a9e8e1b72ed72db9" alt="Simple SKILL.md file showing YAML frontmatter and markdown body" data-og-width="2048" width="2048" data-og-height="1153" height="1153" data-path="images/agent-skills-simple-file.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=280&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=c61cc33b6f5855809907f7fda94cd80e 280w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=560&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=90d2c0c1c76b36e8d485f49e0810dbfd 560w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=840&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=ad17d231ac7b0bea7e5b4d58fb4aeabb 840w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=1100&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=f5d0a7a3c668435bb0aee9a3a8f8c329 1100w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=1650&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=0e927c1af9de5799cfe557d12249f6e6 1650w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-simple-file.png?w=2500&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=46bbb1a51dd4c8202a470ac8c80a893d 2500w" />
As your Skill grows, you can bundle additional content that Claude loads only when needed:
As your Skill grows, you can bundle additional content that agents load only when needed:
<img src="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=a5e0aa41e3d53985a7e3e43668a33ea3" alt="Bundling additional reference files like reference.md and forms.md." data-og-width="2048" width="2048" data-og-height="1327" height="1327" data-path="images/agent-skills-bundling-content.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=280&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=f8a0e73783e99b4a643d79eac86b70a2 280w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=560&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=dc510a2a9d3f14359416b706f067904a 560w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=840&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=82cd6286c966303f7dd914c28170e385 840w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=1100&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=56f3be36c77e4fe4b523df209a6824c6 1100w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=1650&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=d22b5161b2075656417d56f41a74f3dd 1650w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-bundling-content.png?w=2500&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=3dd4bdd6850ffcc96c6c45fcb0acd6eb 2500w" />
@@ -292,11 +292,11 @@ with pdfplumber.open("file.pdf") as pdf:
**Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
````
Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
Agents load FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
#### Pattern 2: Domain-specific organization
For Skills with multiple domains, organize content by domain to avoid loading irrelevant context. When a user asks about sales metrics, Claude only needs to read sales-related schemas, not finance or marketing data. This keeps token usage low and context focused.
For Skills with multiple domains, organize content by domain to avoid loading irrelevant context. When a user asks about sales metrics, the agent only needs to read sales-related schemas, not finance or marketing data. This keeps token usage low and context focused.
```
bigquery-skill/
@@ -348,13 +348,13 @@ For simple edits, modify the XML directly.
**For OOXML details**: See [OOXML.md](OOXML.md)
```
Claude reads REDLINING.md or OOXML.md only when the user needs those features.
Agents read REDLINING.md or OOXML.md only when the user needs those features.
### Avoid deeply nested references
Claude may partially read files when they're referenced from other referenced files. When encountering nested references, Claude might use commands like `head -100` to preview content rather than reading entire files, resulting in incomplete information.
Agents may partially read files when they're referenced from other referenced files. When encountering nested references, an agent might use commands like `head -100` to preview content rather than reading entire files, resulting in incomplete information.
**Keep references one level deep from SKILL.md**. All reference files should link directly from SKILL.md to ensure Claude reads complete files when needed.
**Keep references one level deep from SKILL.md**. All reference files should link directly from SKILL.md to ensure agents read complete files when needed.
**Bad example: Too deep**:
@@ -382,7 +382,7 @@ Here's the actual information...
### Structure longer reference files with table of contents
For reference files longer than 100 lines, include a table of contents at the top. This ensures Claude can see the full scope of available information even when previewing with partial reads.
For reference files longer than 100 lines, include a table of contents at the top. This ensures agents can see the full scope of available information even when previewing with partial reads.
**Example**:
@@ -403,7 +403,7 @@ For reference files longer than 100 lines, include a table of contents at the to
...
```
Claude can then read the complete file or jump to specific sections as needed.
Agents can then read the complete file or jump to specific sections as needed.
For details on how this filesystem-based architecture enables progressive disclosure, see the [Runtime environment](#runtime-environment) section in the Advanced section below.
@@ -411,7 +411,7 @@ For details on how this filesystem-based architecture enables progressive disclo
### Use workflows for complex tasks
Break complex operations into clear, sequential steps. For particularly complex workflows, provide a checklist that Claude can copy into its response and check off as it progresses.
Break complex operations into clear, sequential steps. For particularly complex workflows, provide a checklist that the agent can copy into its response and check off as it progresses.
**Example 1: Research synthesis workflow** (for Skills without code):
@@ -498,7 +498,7 @@ Run: `python scripts/verify_output.py output.pdf`
If verification fails, return to Step 2.
````
Clear steps prevent Claude from skipping critical validation. The checklist helps both Claude and you track progress through multi-step workflows.
Clear steps prevent agents from skipping critical validation. The checklist helps both you and the agent track progress through multi-step workflows.
### Implement feedback loops
@@ -524,7 +524,7 @@ This pattern greatly improves output quality.
5. Finalize and save the document
```
This shows the validation loop pattern using reference documents instead of scripts. The "validator" is STYLE_GUIDE.md, and Claude performs the check by reading and comparing.
This shows the validation loop pattern using reference documents instead of scripts. The "validator" is STYLE_GUIDE.md, and the agent performs the check by reading and comparing.
**Example 2: Document editing process** (for Skills with code):
* Mix "field", "box", "element", "control"
* Mix "extract", "pull", "get", "retrieve"
Consistency helps agents understand and follow instructions.
## Common patterns
Follow this style: type(scope): brief description, then detailed explanation.
````
Examples help agents understand the desired style and level of detail more clearly than descriptions alone.
### Conditional workflow pattern
Guide agents through decision points:
```markdown theme={null}
## Document modification workflow
```
<Tip>
If workflows become large or complicated with many steps, consider pushing them into separate files and tell the agent to read the appropriate file based on the task at hand.
</Tip>
## Evaluation and iteration
**Evaluation-driven development:**
1. **Identify gaps**: Run your agent on representative tasks without a Skill. Document specific failures or missing context
2. **Create evaluations**: Build three scenarios that test these gaps
3. **Establish baseline**: Measure the agent's performance without the Skill
4. **Write minimal instructions**: Create just enough content to address the gaps and pass evaluations
5. **Iterate**: Execute evaluations, compare against baseline, and refine
This example demonstrates a data-driven evaluation with a simple testing rubric. We do not currently provide a built-in way to run these evaluations. Users can create their own evaluation system. Evaluations are your source of truth for measuring Skill effectiveness.
</Note>
### Develop Skills iteratively with the agent
The most effective Skill development process involves the agent itself. Work with one instance ("Agent A") to create a Skill that will be used by other instances ("Agent B"). Agent A helps you design and refine instructions, while Agent B tests them in real tasks. This works because the underlying models understand both how to write effective agent instructions and what information agents need.
**Creating a new Skill:**
1. **Complete a task without a Skill**: Work through a problem with Agent A using normal prompting. As you work, you'll naturally provide context, explain preferences, and share procedural knowledge. Notice what information you repeatedly provide.
2. **Identify the reusable pattern**: After completing the task, identify what context you provided that would be useful for similar future tasks.
**Example**: If you worked through a BigQuery analysis, you might have provided table names, field definitions, filtering rules (like "always exclude test accounts"), and common query patterns.
3. **Ask Agent A to create a Skill**: "Create a Skill that captures this BigQuery analysis pattern we just used. Include the table schemas, naming conventions, and the rule about filtering test accounts."
<Tip>
Modern agents understand the Skill format and structure natively. You don't need special system prompts or a "writing skills" skill to get help creating Skills. Simply ask the agent to create a Skill and it will generate properly structured SKILL.md content with appropriate frontmatter and body content.
</Tip>
4. **Review for conciseness**: Check that Agent A hasn't added unnecessary explanations. Ask: "Remove the explanation about what win rate means - the agent already knows that."
5. **Improve information architecture**: Ask Agent A to organize the content more effectively. For example: "Organize this so the table schema is in a separate reference file. We might add more tables later."
6. **Test on similar tasks**: Use the Skill with Agent B (a fresh instance with the Skill loaded) on related use cases. Observe whether Agent B finds the right information, applies rules correctly, and handles the task successfully.
7. **Iterate based on observation**: If Agent B struggles or misses something, return to Agent A with specifics: "When the agent used this Skill, it forgot to filter by date for Q4. Should we add a section about date filtering patterns?"
**Iterating on existing Skills:**
The same hierarchical pattern continues when improving Skills. You alternate between:
* **Working with Agent A** (the expert who helps refine the Skill)
* **Testing with Agent B** (the agent using the Skill to perform real work)
* **Observing Agent B's behavior** and bringing insights back to Agent A
1. **Use the Skill in real workflows**: Give Agent B (with the Skill loaded) actual tasks, not test scenarios
2. **Observe Agent B's behavior**: Note where it struggles, succeeds, or makes unexpected choices
**Example observation**: "When I asked Agent B for a regional sales report, it wrote the query but forgot to filter out test accounts, even though the Skill mentions this rule."
3. **Return to Agent A for improvements**: Share the current SKILL.md and describe what you observed. Ask: "I noticed Agent B forgot to filter test accounts when I asked for a regional report. The Skill mentions filtering, but maybe it's not prominent enough?"
4. **Review Agent A's suggestions**: Agent A might suggest reorganizing to make rules more prominent, using stronger language like "MUST filter" instead of "always filter", or restructuring the workflow section.
5. **Apply and test changes**: Update the Skill with Agent A's refinements, then test again with Agent B on similar requests
6. **Repeat based on usage**: Continue this observe-refine-test cycle as you encounter new scenarios. Each iteration improves the Skill based on real agent behavior, not assumptions.
2. Ask: Does the Skill activate when expected? Are instructions clear? What's missing?
3. Incorporate feedback to address blind spots in your own usage patterns
**Why this approach works**: Agent A understands agent needs, you provide domain expertise, Agent B reveals gaps through real usage, and iterative refinement improves Skills based on observed behavior rather than assumptions.
### Observe how agents navigate Skills
As you iterate on Skills, pay attention to how agents actually use them in practice. Watch for:
* **Unexpected exploration paths**: Does the agent read files in an order you didn't anticipate? This might indicate your structure isn't as intuitive as you thought
* **Missed connections**: Does the agent fail to follow references to important files? Your links might need to be more explicit or prominent
* **Overreliance on certain sections**: If the agent repeatedly reads the same file, consider whether that content should be in the main SKILL.md instead
* **Ignored content**: If the agent never accesses a bundled file, it might be unnecessary or poorly signaled in the main instructions
Iterate based on these observations rather than assumptions. The 'name' and 'description' in your Skill's metadata are particularly critical. Agents use these when deciding whether to trigger the Skill in response to the current task. Make sure they clearly describe what the Skill does and when it should be used.
## Anti-patterns to avoid
### Solve, don't punt
When writing scripts for Skills, handle error conditions rather than punting to the agent.
**Good example: Handle errors explicitly**:
return ''
```
**Bad example: Punt to the agent**:
```python theme={null}
def process_file(path):
# Just fail and let the agent figure it out
return open(path).read()
```
Configuration parameters should also be justified and documented to avoid "voodoo constants" (Ousterhout's law). If you don't know the right value, how will the agent determine it?
**Good example: Self-documenting**:
### Provide utility scripts
Even if your agent could write a script, pre-made scripts offer advantages:
**Benefits of utility scripts**:
<img src="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=4bbc45f2c2e0bee9f2f0d5da669bad00" alt="Bundling executable scripts alongside instruction files" data-og-width="2048" width="2048" data-og-height="1154" height="1154" data-path="images/agent-skills-executable-scripts.png" data-optimize="true" data-opv="3" srcset="https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=280&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=9a04e6535a8467bfeea492e517de389f 280w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=560&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=e49333ad90141af17c0d7651cca7216b 560w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=840&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=954265a5df52223d6572b6214168c428 840w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=1100&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=2ff7a2d8f2a83ee8af132b29f10150fd 1100w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=1650&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=48ab96245e04077f4d15e9170e081cfb 1650w, https://mintcdn.com/anthropic-claude-docs/4Bny2bjzuGBK7o00/images/agent-skills-executable-scripts.png?w=2500&fit=max&auto=format&n=4Bny2bjzuGBK7o00&q=85&s=0301a6c8b3ee879497cc5b5483177c90 2500w" />
The diagram above shows how executable scripts work alongside instruction files. The instruction file (forms.md) references the script, and the agent can execute it without loading its contents into context.
**Important distinction**: Make clear in your instructions whether the agent should:
* **Execute the script** (most common): "Run `analyze_form.py` to extract fields"
* **Read it as reference** (for complex logic): "See `analyze_form.py` for the field extraction algorithm"
### Use visual analysis
When inputs can be rendered as images, have the agent analyze them:
````markdown theme={null}
## Form layout analysis
```
2. Analyze each page image to identify form fields
3. The agent can see field locations and types visually
````
<Note>
In this example, you'd need to write the `pdf_to_images.py` script.
</Note>
An agent's vision capabilities help it understand layouts and structures.
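The note above leaves `pdf_to_images.py` to you. Here is one minimal sketch, assuming the Poppler `pdftoppm` CLI is available in the execution environment; the helper names and flags are illustrative, not part of any bundled skill:

```python
# pdf_to_images.py -- illustrative sketch, not a bundled script.
# Assumes the Poppler `pdftoppm` CLI is installed and on PATH.
import subprocess
import sys

def pdftoppm_command(pdf_path: str, out_prefix: str, dpi: int = 150) -> list[str]:
    """Build the pdftoppm invocation that renders each page as a PNG."""
    return ["pdftoppm", "-png", "-r", str(dpi), pdf_path, out_prefix]

def convert(pdf_path: str, out_prefix: str = "page") -> None:
    # Writes page-1.png, page-2.png, ... into the working directory.
    subprocess.run(pdftoppm_command(pdf_path, out_prefix), check=True)

if __name__ == "__main__" and len(sys.argv) > 1:
    convert(sys.argv[1])
```

The agent can then open the generated PNGs with its image-reading tools to inspect field layout.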
### Create verifiable intermediate outputs
When agents perform complex, open-ended tasks, they can make mistakes. The "plan-validate-execute" pattern catches errors early by having the agent first create a plan in a structured format, then validate that plan with a script before executing it.
**Example**: Imagine asking the agent to update 50 form fields in a PDF based on a spreadsheet. Without validation, it might reference non-existent fields, create conflicting values, miss required fields, or apply updates incorrectly.
**Solution**: Use the workflow pattern shown above (PDF form filling), but add an intermediate `changes.json` file that gets validated before applying changes. The workflow becomes: analyze → **create plan file** → **validate plan** → execute → verify.
* **Catches errors early**: Validation finds problems before changes are applied
* **Machine-verifiable**: Scripts provide objective verification
* **Reversible planning**: The agent can iterate on the plan without touching originals
* **Clear debugging**: Error messages point to specific problems
**When to use**: Batch operations, destructive changes, complex validation rules, high-stakes operations.
**Implementation tip**: Make validation scripts verbose with specific error messages like "Field 'signature\_date' not found. Available fields: customer\_name, order\_total, signature\_date\_signed" to help the agent fix issues.
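To make the pattern concrete, here is a minimal sketch of such a validator. The JSON shape (a flat field-to-value map) and the hard-coded field list are illustrative assumptions; a real skill would extract the available fields from the PDF itself:

```python
# validate_changes.py -- illustrative sketch of a plan validator.
import json
import sys

# Illustrative only; a real script would read these from the form.
AVAILABLE_FIELDS = {"customer_name", "order_total", "signature_date_signed"}

def validate(changes: dict) -> list[str]:
    """Return verbose, specific error messages; an empty list means the plan is valid."""
    errors = []
    for field in changes:
        if field not in AVAILABLE_FIELDS:
            errors.append(
                f"Field '{field}' not found. Available fields: "
                + ", ".join(sorted(AVAILABLE_FIELDS))
            )
    return errors

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        problems = validate(json.load(f))
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)
```

Because the script exits non-zero on any problem, the agent gets a clear signal to revise `changes.json` before touching the original PDF.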
### Package dependencies
* **claude.ai**: Can install packages from npm and PyPI and pull from GitHub repositories
* **Anthropic API**: Has no network access and no runtime package installation
List required packages in your SKILL.md and verify they're available in the [code execution tool documentation](https://platform.claude.com/docs/en/agents-and-tools/tool-use/code-execution-tool).
### Runtime environment
Skills run in a code execution environment with filesystem access, bash commands, and code execution capabilities. For the conceptual explanation of this architecture, see [The Skills architecture](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#the-skills-architecture) in the overview.
**How this affects your authoring:**
**How agents access Skills:**
1. **Metadata pre-loaded**: At startup, the name and description from all Skills' YAML frontmatter are loaded into the system prompt
2. **Files read on-demand**: Agents use their file-reading tools to access SKILL.md and other files from the filesystem when needed
3. **Scripts executed efficiently**: Utility scripts can be executed via bash without loading their full contents into context. Only the script's output consumes tokens
4. **No context penalty for large files**: Reference files, data, or documentation don't consume context tokens until actually read
* **File paths matter**: Agents navigate your skill directory like a filesystem. Use forward slashes (`reference/guide.md`), not backslashes
* **Name files descriptively**: Use names that indicate content: `form_validation_rules.md`, not `doc2.md`
* **Organize for discovery**: Structure directories by domain or feature
* Good: `reference/finance.md`, `reference/sales.md`
* Bad: `docs/file1.md`, `docs/file2.md`
* **Bundle comprehensive resources**: Include complete API docs, extensive examples, large datasets; no context penalty until accessed
* **Prefer scripts for deterministic operations**: Write `validate_form.py` rather than asking the agent to generate validation code
* **Make execution intent clear**:
* "Run `analyze_form.py` to extract fields" (execute)
* "See `analyze_form.py` for the extraction algorithm" (read as reference)
* **Test file access patterns**: Verify the agent can navigate your directory structure by testing with real requests
**Example:**
```
bigquery-skill/
├── SKILL.md
└── reference/
    ├── finance.md
    ├── sales.md
    └── product.md (usage analytics)
```
When the user asks about revenue, the agent reads SKILL.md, sees the reference to `reference/finance.md`, and invokes bash to read just that file. The sales.md and product.md files remain on the filesystem, consuming zero context tokens until needed. This filesystem-based model is what enables progressive disclosure. Agents can navigate and selectively load exactly what each task requires.
For complete details on the technical architecture, see [How Skills work](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#how-skills-work) in the Skills overview.
### MCP tool references
* `BigQuery` and `GitHub` are MCP server names
* `bigquery_schema` and `create_issue` are the tool names within those servers
Without the server prefix, agents may fail to locate the tool, especially when multiple MCP servers are available.
### Avoid assuming tools are installed
### YAML frontmatter requirements
The SKILL.md frontmatter requires `name` (64 characters max) and `description` (1024 characters max) fields. See the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#skill-structure) for complete structure details.
### Token budgets
Keep SKILL.md body under 500 lines for optimal performance. If your content exceeds this, split it into separate files using the progressive disclosure patterns described earlier. For architectural details, see the [Skills overview](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/overview#how-skills-work).
## Checklist for effective Skills
### Code and scripts
* [ ] Scripts solve problems rather than punt to the agent
* [ ] Error handling is explicit and helpful
* [ ] No "voodoo constants" (all values justified)
* [ ] Required packages listed in instructions and verified as available
## Next steps
<CardGroup cols={2}>
<Card title="Get started with Agent Skills" icon="rocket" href="https://platform.claude.com/docs/en/agents-and-tools/agent-skills/quickstart">
Create your first Skill
</Card>
<Card title="Use Skills in Claude Code" icon="terminal" href="https://code.claude.com/docs/en/skills">
Create and manage Skills in Claude Code
</Card>
<Card title="Use Skills with the API" icon="code" href="https://platform.claude.com/docs/en/build-with-claude/skills-guide">
Upload and use Skills programmatically
</Card>
</CardGroup>

View File

**How it works in skills:**
- Require announcements: "Announce skill usage"
- Force explicit choices: "Choose A, B, or C"
- Use tracking: todos for checklists
**When to use:**
- Ensuring skills are actually followed
**Example:**
```markdown
✅ Checklists without todo tracking = steps get skipped. Every time.
❌ Some people find a todo list helpful for checklists.
```
### 5. Unity

View File

assert(template.includes('indicator-bar'), 'Should have indicator bar');
assert(template.includes('indicator-text'), 'Should have indicator text');
assert(template.includes('<!-- CONTENT -->'), 'Should have content placeholder');
assert(template.includes('frame-content'), 'Should have content container');
return Promise.resolve();
});

View File

# and is stricter on that axis. This bash test additionally asserts:
# - >=3 git commits (initial + per-task commits, exercising SDD's
# commit-per-task workflow shape)
# - >=2 Claude Code subagent dispatches via Agent or Task (drill only asserts >=1)
# - Claude Code task-tracking tool usage (drill makes no assertion)
# - test/math.test.js exists (drill relies on `npm test` succeeding)
# - analyze-token-usage.py token-budget telemetry
# Kept until those assertions are added to drill or explicitly retired.
fi
echo ""
# Test 3: Claude Code task-tracking tool was used
echo "Test 3: Task tracking..."
todo_count=$(grep -cE '"name":"(TodoWrite|TaskCreate|TaskUpdate|TaskList|TaskGet)"' "$SESSION_FILE" || true)
if [ "$todo_count" -ge 1 ]; then
echo " [PASS] Task tracking used $todo_count time(s)"
else
echo " [FAIL] No Claude Code task-tracking tool used"
FAILED=$((FAILED + 1))
fi
echo ""

View File

PREMATURE_TOOLS=$(head -n "$FIRST_SKILL_LINE" "$TURN3_LOG" | \
grep '"type":"tool_use"' | \
grep -v '"name":"Skill"' | \
grep -vE '"name":"(TodoWrite|TaskCreate|TaskUpdate|TaskList|TaskGet)"' || true)
if [ -n "$PREMATURE_TOOLS" ]; then
echo "WARNING: Tools invoked BEFORE Skill tool in Turn 3:"
echo "$PREMATURE_TOOLS" | head -5

View File

FIRST_SKILL_LINE=$(grep -n '"name":"Skill"' "$LOG_FILE" | head -1 | cut -d: -f1)
if [ -n "$FIRST_SKILL_LINE" ]; then
# Check if any non-Skill, non-system tools were invoked before the first Skill invocation
# Filter out task tracking tools (planning is ok) and other non-action tools
PREMATURE_TOOLS=$(head -n "$FIRST_SKILL_LINE" "$LOG_FILE" | \
grep '"type":"tool_use"' | \
grep -v '"name":"Skill"' | \
grep -vE '"name":"(TodoWrite|TaskCreate|TaskUpdate|TaskList|TaskGet)"' || true)
if [ -n "$PREMATURE_TOOLS" ]; then
echo "WARNING: Tools invoked BEFORE Skill tool:"
echo "$PREMATURE_TOOLS" | head -5

View File

scenario,
firstBootstrapParts: countBootstrapParts(firstOutput),
secondBootstrapParts: countBootstrapParts(secondOutput),
staleMentionMapping: bootstrapText(firstOutput).includes('@mention'),
staleTaskMapping: bootstrapText(firstOutput).includes('`Task` tool with subagents'),
mapsSubagentToTask: bootstrapText(firstOutput).includes('`task` with `subagent_type: "general"`'),
mapsMutationToApplyPatch: bootstrapText(firstOutput).includes('`apply_patch`'),
firstReadCount: afterFirst.readCount,
secondReadCount: afterSecond.readCount,
firstExistsCount: afterFirst.existsCount,
).length;
}
function bootstrapText(output) {
return output.messages[0].parts.find(
(part) => part.type === 'text' && part.text.includes('EXTREMELY_IMPORTANT')
)?.text || '';
}
function assertPresentBootstrap(result) {
const failures = [];
if (result.firstBootstrapParts !== 1) {
if (result.secondExistsCount !== result.firstExistsCount) {
failures.push(`expected cached second transform to do no additional exists checks, got ${result.secondExistsCount - result.firstExistsCount}`);
}
if (result.staleMentionMapping) {
failures.push('expected OpenCode bootstrap not to teach @mention subagent syntax');
}
if (result.staleTaskMapping) {
failures.push('expected OpenCode bootstrap not to teach stale Task-tool mapping');
}
if (!result.mapsSubagentToTask) {
failures.push('expected OpenCode bootstrap to map general-purpose subagents to task with subagent_type');
}
if (!result.mapsMutationToApplyPatch) {
failures.push('expected OpenCode bootstrap to map file mutation to apply_patch');
}
return failures;
}


@@ -0,0 +1,128 @@
import assert from 'node:assert/strict';
import { readFile } from 'node:fs/promises';
import { existsSync } from 'node:fs';
import { dirname, resolve } from 'node:path';
import { fileURLToPath, pathToFileURL } from 'node:url';
import test from 'node:test';
const __dirname = dirname(fileURLToPath(import.meta.url));
const repoRoot = resolve(__dirname, '../..');
const packageJsonPath = resolve(repoRoot, 'package.json');
const extensionPath = resolve(repoRoot, '.pi/extensions/superpowers.ts');
const piToolsPath = resolve(repoRoot, 'skills/using-superpowers/references/pi-tools.md');
async function readPackageJson() {
return JSON.parse(await readFile(packageJsonPath, 'utf8'));
}
async function loadExtension() {
const handlers = new Map();
const pi = {
on(event, handler) {
if (!handlers.has(event)) handlers.set(event, []);
handlers.get(event).push(handler);
},
};
const mod = await import(pathToFileURL(extensionPath).href + `?cachebust=${Date.now()}-${Math.random()}`);
mod.default(pi);
return { handlers };
}
function firstHandler(handlers, event) {
const eventHandlers = handlers.get(event) ?? [];
assert.equal(eventHandlers.length, 1, `expected one ${event} handler`);
return eventHandlers[0];
}
function textOf(message) {
if (typeof message.content === 'string') return message.content;
return message.content
.filter((part) => part.type === 'text')
.map((part) => part.text)
.join('\n');
}
test('package.json declares a pi package with skills and extension resources', async () => {
const pkg = await readPackageJson();
assert.equal(pkg.name, 'superpowers');
assert.ok(pkg.keywords.includes('pi-package'));
assert.deepEqual(pkg.pi.skills, ['./skills']);
assert.deepEqual(pkg.pi.extensions, ['./.pi/extensions/superpowers.ts']);
});
test('extension registers lifecycle hooks without pre-compaction injection', async () => {
const { handlers } = await loadExtension();
for (const event of ['resources_discover', 'session_start', 'session_compact', 'context', 'agent_end']) {
assert.equal((handlers.get(event) ?? []).length, 1, `missing ${event} handler`);
}
assert.equal((handlers.get('session_before_compact') ?? []).length, 0);
});
test('resources_discover contributes the bundled skills directory', async () => {
const { handlers } = await loadExtension();
const discover = firstHandler(handlers, 'resources_discover');
const result = await discover({ type: 'resources_discover', cwd: repoRoot, reason: 'startup' }, {});
assert.deepEqual(result.skillPaths, [resolve(repoRoot, 'skills')]);
});
test('startup context injects the bootstrap as one user message until agent_end', async () => {
const { handlers } = await loadExtension();
const sessionStart = firstHandler(handlers, 'session_start');
const context = firstHandler(handlers, 'context');
const agentEnd = firstHandler(handlers, 'agent_end');
await sessionStart({ type: 'session_start', reason: 'startup' }, {});
const originalMessages = [
{ role: 'user', content: [{ type: 'text', text: 'Let us make a react todo list' }], timestamp: 1 },
];
const result = await context({ type: 'context', messages: originalMessages }, {});
assert.equal(result.messages.length, 2);
assert.equal(result.messages[0].role, 'user');
assert.match(textOf(result.messages[0]), /You have superpowers/);
assert.match(textOf(result.messages[0]), /Pi tool mapping/);
assert.equal(result.messages[1], originalMessages[0]);
const repeatedProviderRequest = await context({ type: 'context', messages: originalMessages }, {});
assert.equal(repeatedProviderRequest.messages.length, 2);
assert.match(textOf(repeatedProviderRequest.messages[0]), /You have superpowers/);
const alreadyInjected = await context({ type: 'context', messages: result.messages }, {});
assert.equal(alreadyInjected, undefined, 'bootstrap should not duplicate when already present');
await agentEnd({ type: 'agent_end', messages: [] }, {});
const afterEnd = await context({ type: 'context', messages: originalMessages }, {});
assert.equal(afterEnd, undefined, 'startup bootstrap should clear after agent_end');
});
test('session_compact injects bootstrap after compaction summaries, not before compaction', async () => {
const { handlers } = await loadExtension();
const sessionCompact = firstHandler(handlers, 'session_compact');
const context = firstHandler(handlers, 'context');
await sessionCompact({ type: 'session_compact', compactionEntry: {}, fromExtension: false }, {});
const summary = { role: 'compactionSummary', summary: 'Prior work summary', tokensBefore: 123, timestamp: 1 };
const user = { role: 'user', content: [{ type: 'text', text: 'Continue' }], timestamp: 2 };
const result = await context({ type: 'context', messages: [summary, user] }, {});
assert.equal(result.messages.length, 3);
assert.equal(result.messages[0], summary);
assert.equal(result.messages[1].role, 'user');
assert.match(textOf(result.messages[1]), /You have superpowers/);
assert.equal(result.messages[2], user);
});
test('pi tools reference documents pi-specific mappings', async () => {
assert.equal(existsSync(piToolsPath), true, 'pi-tools.md should exist');
const text = await readFile(piToolsPath, 'utf8');
for (const expected of ['Skill', 'Task', 'TodoWrite', 'read', 'write', 'edit', 'bash']) {
assert.match(text, new RegExp(expected));
}
});