mirror of https://github.com/obra/superpowers.git — synced 2026-05-13 04:29:04 +08:00
Replace Claude-Code-specific tool names in skill prose, prompt
templates, and OpenCode-facing docs with action-language descriptions
that resolve to each runtime's native tool via the per-platform refs.
Changes by category:
- Prose mentions ("Use TodoWrite to track...", "Use Task tool with
general-purpose type") → action language ("Track each item as a
todo", "Dispatch a general-purpose subagent")
- Prompt template headers (6 files): "Task tool (general-purpose):"
→ "Subagent (general-purpose):" — preserves the type information
without naming Claude Code's specific dispatch tool
- DOT flowchart node labels: "Invoke Skill tool" → "Invoke the
skill"; "Create TodoWrite todo per item" → "Create a todo per
item"
- OpenCode INSTALL.md and docs/README.opencode.md: replace the old
"TodoWrite → todowrite, Task → @mention" mapping (which both
taught a vocabulary skills no longer use AND was wrong about
@mention being a real OpenCode syntax) with an action-language
mapping verified against the installed OpenCode CLI's tool
inventory.
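
The header and DOT-label renames above are literal string swaps. A minimal sketch of applying them mechanically — the mapping is drawn from the quoted phrases in this commit message, but the script itself is illustrative and not part of the repo (the actual edits may well have been made by hand):

```python
# Illustrative sketch only: phrase mapping taken from the commit message.
# The prose rewrites ("Use TodoWrite to track..." -> "Track each item as
# a todo") change sentence structure and are not simple substitutions,
# so only the literal header/label swaps are shown here.
RENAMES = {
    "Task tool (general-purpose):": "Subagent (general-purpose):",
    "Invoke Skill tool": "Invoke the skill",
    "Create TodoWrite todo per item": "Create a todo per item",
}


def to_action_language(text: str) -> str:
    """Replace Claude-Code-specific tool names with action language."""
    for old, new in RENAMES.items():
        text = text.replace(old, new)
    return text
```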
The platform-tools refs that landed in Phase B already document each
runtime's resolution; skills now speak in the actions those refs
map to. Tool names that genuinely belong only in the per-platform
dispatch section ("In Claude Code: Use the `Skill` tool") and the
Claude-Code-specific Bash run_in_background flag note in
visual-companion remain; those are intentional carve-outs.
62 lines · 2.0 KiB · Markdown
# Spec Compliance Reviewer Prompt Template

Use this template when dispatching a spec compliance reviewer subagent.

**Purpose:** Verify implementer built what was requested (nothing more, nothing less)

```
Subagent (general-purpose):
description: "Review spec compliance for Task N"
prompt: |
You are reviewing whether an implementation matches its specification.

## What Was Requested

[FULL TEXT of task requirements]

## What Implementer Claims They Built

[From implementer's report]

## CRITICAL: Do Not Trust the Report

The implementer finished suspiciously quickly. Their report may be incomplete,
inaccurate, or optimistic. You MUST verify everything independently.

**DO NOT:**
- Take their word for what they implemented
- Trust their claims about completeness
- Accept their interpretation of requirements

**DO:**
- Read the actual code they wrote
- Compare actual implementation to requirements line by line
- Check for missing pieces they claimed to implement
- Look for extra features they didn't mention

## Your Job

Read the implementation code and verify:

**Missing requirements:**
- Did they implement everything that was requested?
- Are there requirements they skipped or missed?
- Did they claim something works but didn't actually implement it?

**Extra/unneeded work:**
- Did they build things that weren't requested?
- Did they over-engineer or add unnecessary features?
- Did they add "nice to haves" that weren't in spec?

**Misunderstandings:**
- Did they interpret requirements differently than intended?
- Did they solve the wrong problem?
- Did they implement the right feature but the wrong way?

**Verify by reading code, not by trusting report.**

Report:
- ✅ Spec compliant (if everything matches after code inspection)
- ❌ Issues found: [list specifically what's missing or extra, with file:line references]
```
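
For callers that assemble this prompt programmatically before dispatching, the bracketed placeholders reduce to simple string templating. A minimal sketch, assuming nothing beyond the template text above — the helper name and parameter names are illustrative, not part of the repo:

```python
# Illustrative helper: fill the template's placeholders with real task
# content before dispatching the reviewer subagent. The (abbreviated)
# template body mirrors the one documented above.
REVIEWER_TEMPLATE = """\
Subagent (general-purpose):
description: "Review spec compliance for Task {task_id}"
prompt: |
You are reviewing whether an implementation matches its specification.

## What Was Requested

{requirements}

## What Implementer Claims They Built

{claims}
"""


def build_reviewer_prompt(task_id: str, requirements: str, claims: str) -> str:
    """Substitute the bracketed placeholders with task-specific text."""
    return REVIEWER_TEMPLATE.format(
        task_id=task_id, requirements=requirements, claims=claims
    )
```

Note that the requirements placeholder is meant to carry the full task text, not a summary — the reviewer is explicitly told not to trust the implementer's report, so the spec it compares against must be complete.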