mirror of
https://github.com/obra/superpowers.git
synced 2026-04-25 19:19:06 +08:00
Initial commit: Superpowers plugin v1.0.0
Core skills library as Claude Code plugin:
- Testing skills: TDD, async testing, anti-patterns
- Debugging skills: Systematic debugging, root cause tracing
- Collaboration skills: Brainstorming, planning, code review
- Meta skills: Creating and testing skills

Features:
- SessionStart hook for context injection
- Skills-search tool for discovery
- Commands: /brainstorm, /write-plan, /execute-plan
- Data directory at ~/.superpowers/
25
skills/collaboration/INDEX.md
Normal file
@@ -0,0 +1,25 @@
# Collaboration Skills

Working effectively with other agents and developers.

## Available Skills

- skills/collaboration/brainstorming - Interactive idea refinement using the Socratic method to develop fully-formed designs. Use when your human partner has a new idea to explore. Before writing implementation plans.
- skills/collaboration/writing-plans - Create detailed implementation plans with bite-sized tasks for engineers with zero codebase context. Use after brainstorming/design is complete. Before implementation begins.
- skills/collaboration/using-git-worktrees - Create isolated git worktrees with smart directory selection and safety verification. Use when starting feature implementation in isolation. When brainstorming transitions to code. Before executing plans.
- skills/collaboration/executing-plans - Execute detailed plans in batches with review checkpoints. Use when you have a complete implementation plan to execute. When implementing in a separate session from planning.
- skills/collaboration/subagent-driven-development - Execute a plan by dispatching a fresh subagent per task, with code review between tasks. An alternative to executing-plans when staying in the same session. Use when tasks are independent. When you want fast iteration with review checkpoints.
- skills/collaboration/finishing-a-development-branch - Complete feature development with structured options for merge, PR, or cleanup. Use after completing implementation. When all tests are passing. At the end of executing-plans or subagent-driven-development.
- skills/collaboration/remembering-conversations - Search previous Claude Code conversations for facts, patterns, decisions, and context using semantic or text search. Use when your human partner mentions "we discussed this before". When debugging similar issues. When looking for architectural decisions or code patterns from past work. Before reinventing solutions. When searching for git SHAs or error messages.
- skills/collaboration/dispatching-parallel-agents - Use multiple Claude agents to investigate and fix independent problems concurrently. Use when you have 3+ unrelated failures that can be debugged in parallel.
- skills/collaboration/requesting-code-review - Dispatch a code-reviewer subagent to review the implementation against the plan before proceeding. Use after completing a task. After a major feature. Before merging. When executing plans (after each task).
- skills/collaboration/receiving-code-review - Receive and act on code review feedback with technical rigor, not performative agreement or blind implementation. Use when receiving feedback from your human partner or external reviewers. Before implementing review suggestions. When feedback seems wrong or unclear.
56
skills/collaboration/brainstorming/SKILL.md
Normal file
@@ -0,0 +1,56 @@
---
name: Brainstorming Ideas Into Designs
description: Interactive idea refinement using Socratic method to develop fully-formed designs
when_to_use: When your human partner says "I've got an idea", "Let's make/build/create", "I want to implement/add", "What if we". When starting design for a complex feature. Before writing implementation plans. When an idea needs refinement and exploration. ACTIVATE THIS AUTOMATICALLY when your human partner describes a feature or project idea - don't wait for the /brainstorm command.
version: 2.0.0
---

# Brainstorming Ideas Into Designs

## Overview

Transform rough ideas into fully-formed designs through structured questioning and alternative exploration.

**Core principle:** Ask questions to understand, explore alternatives, present the design incrementally for validation.

**Announce at start:** "I'm using the Brainstorming skill to refine your idea into a design."

## The Process

### Phase 1: Understanding
- Check current project state in the working directory
- Ask ONE question at a time to refine the idea
- Prefer multiple choice when possible
- Gather: purpose, constraints, success criteria

### Phase 2: Exploration
- Propose 2-3 different approaches (reference skills/coding/exploring-alternatives)
- For each: core architecture, trade-offs, complexity assessment
- Ask your human partner which approach resonates

### Phase 3: Design Presentation
- Present in 200-300 word sections
- Cover: architecture, components, data flow, error handling, testing
- Ask after each section: "Does this look right so far?"

### Phase 4: Worktree Setup (for implementation)
When the design is approved and implementation will follow:
- Announce: "I'm using the Using Git Worktrees skill to set up an isolated workspace."
- Switch to skills/collaboration/using-git-worktrees
- Follow that skill's process for directory selection, safety verification, and setup
- Return here when the worktree is ready

### Phase 5: Planning Handoff
Ask: "Ready to create the implementation plan?"

When your human partner confirms (any affirmative response):
- Announce: "I'm using the Writing Plans skill to create the implementation plan."
- Switch to the skills/collaboration/writing-plans skill
- Create the detailed plan in the worktree

## Remember
- One question per message during Phase 1
- Apply YAGNI ruthlessly (reference skills/architecture/reducing-complexity)
- Explore 2-3 alternatives before settling
- Present incrementally, validate as you go
- Announce skill usage at start
184
skills/collaboration/dispatching-parallel-agents/SKILL.md
Normal file
@@ -0,0 +1,184 @@
---
name: Dispatching Parallel Agents
description: Use multiple Claude agents to investigate and fix independent problems concurrently
when_to_use: Multiple unrelated failures that can be investigated independently
version: 1.0.0
languages: all
context: AI-assisted development (Claude Code or similar)
---

# Dispatching Parallel Agents

## Overview

When you have multiple unrelated failures (different test files, different subsystems, different bugs), investigating them sequentially wastes time. Each investigation is independent and can happen in parallel.

**Core principle:** Dispatch one agent per independent problem domain. Let them work concurrently.

## When to Use

```dot
digraph when_to_use {
    "Multiple failures?" [shape=diamond];
    "Are they independent?" [shape=diamond];
    "Single agent investigates all" [shape=box];
    "One agent per problem domain" [shape=box];
    "Can they work in parallel?" [shape=diamond];
    "Sequential agents" [shape=box];
    "Parallel dispatch" [shape=box];

    "Multiple failures?" -> "Are they independent?" [label="yes"];
    "Are they independent?" -> "Single agent investigates all" [label="no - related"];
    "Are they independent?" -> "Can they work in parallel?" [label="yes"];
    "Can they work in parallel?" -> "Parallel dispatch" [label="yes"];
    "Can they work in parallel?" -> "Sequential agents" [label="no - shared state"];
}
```

**Use when:**
- 3+ test files failing with different root causes
- Multiple subsystems broken independently
- Each problem can be understood without context from the others
- No shared state between investigations

**Don't use when:**
- Failures are related (fixing one might fix others)
- You need to understand the full system state
- Agents would interfere with each other

## The Pattern

### 1. Identify Independent Domains

Group failures by what's broken:
- File A tests: Tool approval flow
- File B tests: Batch completion behavior
- File C tests: Abort functionality

Each domain is independent - fixing tool approval doesn't affect abort tests.
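
The grouping step can be sketched as a shell filter over the runner's output. This is a sketch with sample `FAIL` lines inlined (the line format and file paths are illustrative); against a real project you would pipe the runner itself, e.g. `npm test 2>&1`, through the same filter:

```shell
# Count failures per test file to spot independent domains.
# Sample runner output is inlined via printf for demonstration.
printf '%s\n' \
  'FAIL src/agents/agent-tool-abort.test.ts' \
  'FAIL src/agents/batch-completion-behavior.test.ts' \
  'FAIL src/agents/agent-tool-abort.test.ts' |
  grep -oE '[^ ]+\.test\.ts' | sort | uniq -c | sort -rn
```

Each distinct file in the output is a candidate domain for its own agent.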

### 2. Create Focused Agent Tasks

Each agent gets:
- **Specific scope:** One test file or subsystem
- **Clear goal:** Make these tests pass
- **Constraints:** Don't change other code
- **Expected output:** Summary of what you found and fixed

### 3. Dispatch in Parallel

```typescript
// In Claude Code / AI environment
Task("Fix agent-tool-abort.test.ts failures")
Task("Fix batch-completion-behavior.test.ts failures")
Task("Fix tool-approval-race-conditions.test.ts failures")
// All three run concurrently
```

### 4. Review and Integrate

When agents return:
- Read each summary
- Verify fixes don't conflict
- Run the full test suite
- Integrate all changes

## Agent Prompt Structure

Good agent prompts are:
1. **Focused** - One clear problem domain
2. **Self-contained** - All context needed to understand the problem
3. **Specific about output** - What should the agent return?

```markdown
Fix the 3 failing tests in src/agents/agent-tool-abort.test.ts:

1. "should abort tool with partial output capture" - expects 'interrupted at' in message
2. "should handle mixed completed and aborted tools" - fast tool aborted instead of completed
3. "should properly track pendingToolCount" - expects 3 results but gets 0

These are timing/race condition issues. Your task:

1. Read the test file and understand what each test verifies
2. Identify the root cause - timing issues or actual bugs?
3. Fix by:
   - Replacing arbitrary timeouts with event-based waiting
   - Fixing bugs in the abort implementation if found
   - Adjusting test expectations if testing changed behavior

Do NOT just increase timeouts - find the real issue.

Return: Summary of what you found and what you fixed.
```

## Common Mistakes

**❌ Too broad:** "Fix all the tests" - the agent gets lost
**✅ Specific:** "Fix agent-tool-abort.test.ts" - focused scope

**❌ No context:** "Fix the race condition" - the agent doesn't know where
**✅ Context:** Paste the error messages and test names

**❌ No constraints:** The agent might refactor everything
**✅ Constraints:** "Do NOT change production code" or "Fix tests only"

**❌ Vague output:** "Fix it" - you don't know what changed
**✅ Specific:** "Return a summary of the root cause and changes"

## When NOT to Use

**Related failures:** Fixing one might fix others - investigate together first
**Need full context:** Understanding requires seeing the entire system
**Exploratory debugging:** You don't know what's broken yet
**Shared state:** Agents would interfere (editing the same files, using the same resources)

## Real Example from Session

**Scenario:** 6 test failures across 3 files after a major refactoring

**Failures:**
- agent-tool-abort.test.ts: 3 failures (timing issues)
- batch-completion-behavior.test.ts: 2 failures (tools not executing)
- tool-approval-race-conditions.test.ts: 1 failure (execution count = 0)

**Decision:** Independent domains - abort logic is separate from batch completion, which is separate from race conditions

**Dispatch:**
```
Agent 1 → Fix agent-tool-abort.test.ts
Agent 2 → Fix batch-completion-behavior.test.ts
Agent 3 → Fix tool-approval-race-conditions.test.ts
```

**Results:**
- Agent 1: Replaced timeouts with event-based waiting
- Agent 2: Fixed an event structure bug (threadId in the wrong place)
- Agent 3: Added a wait for async tool execution to complete

**Integration:** All fixes independent, no conflicts, full suite green

**Time saved:** 3 problems solved in parallel instead of sequentially

## Key Benefits

1. **Parallelization** - Multiple investigations happen simultaneously
2. **Focus** - Each agent has a narrow scope, less context to track
3. **Independence** - Agents don't interfere with each other
4. **Speed** - 3 problems solved in the time of 1

## Verification

After agents return:
1. **Review each summary** - Understand what changed
2. **Check for conflicts** - Did agents edit the same code?
3. **Run the full suite** - Verify all fixes work together
4. **Spot check** - Agents can make systematic errors

## Real-World Impact

From a debugging session (2025-10-03):
- 6 failures across 3 files
- 3 agents dispatched in parallel
- All investigations completed concurrently
- All fixes integrated successfully
- Zero conflicts between agent changes
59
skills/collaboration/executing-plans/SKILL.md
Normal file
@@ -0,0 +1,59 @@
---
name: Executing Plans
description: Execute detailed plans in batches with review checkpoints
when_to_use: When you have a complete implementation plan to execute. When implementing in a separate session from planning. When your human partner points you to a plan file to implement.
version: 2.0.0
---

# Executing Plans

## Overview

Load the plan, review it critically, execute tasks in batches, and report for review between batches.

**Core principle:** Batch execution with checkpoints for architect review.

**Announce at start:** "I'm using the Executing Plans skill to implement this plan."

## The Process

### Step 1: Load and Review Plan
1. Read the plan file
2. Review critically - identify any questions or concerns about the plan
3. If concerns: raise them with your human partner before starting
4. If no concerns: create a TodoWrite list and proceed

### Step 2: Execute Batch
**Default: first 3 tasks**

For each task:
1. Mark as in_progress
2. Follow each step exactly (the plan has bite-sized steps)
3. Run verifications as specified
4. Mark as completed

### Step 3: Report
When the batch is complete:
- Show what was implemented
- Show verification output
- Say: "Ready for feedback."

### Step 4: Continue
Based on feedback:
- Apply changes if needed
- Execute the next batch
- Repeat until complete

### Step 5: Complete Development

After all tasks are complete and verified:
- Announce: "I'm using the Finishing a Development Branch skill to complete this work."
- Switch to skills/collaboration/finishing-a-development-branch
- Follow that skill to verify tests, present options, and execute the choice

## Remember
- Review the plan critically first
- Follow plan steps exactly
- Don't skip verifications
- Reference skills when the plan says to
- Between batches: just report and wait
202
skills/collaboration/finishing-a-development-branch/SKILL.md
Normal file
@@ -0,0 +1,202 @@
---
name: Finishing a Development Branch
description: Complete feature development with structured options for merge, PR, or cleanup
when_to_use: After completing implementation. When all tests are passing. At the end of executing-plans or subagent-driven-development. When feature work is done.
version: 1.0.0
---

# Finishing a Development Branch

## Overview

Guide completion of development work by presenting clear options and handling the chosen workflow.

**Core principle:** Verify tests → Present options → Execute choice → Clean up.

**Announce at start:** "I'm using the Finishing a Development Branch skill to complete this work."

## The Process

### Step 1: Verify Tests

**Before presenting options, verify tests pass:**

```bash
# Run the project's test suite, e.g.:
npm test        # Node
cargo test      # Rust
pytest          # Python
go test ./...   # Go
```

**If tests fail:**
```
Tests failing (<N> failures). Must fix before completing:

[Show failures]

Cannot proceed with merge/PR until tests pass.
```

Stop. Don't proceed to Step 2.

**If tests pass:** Continue to Step 2.

### Step 2: Determine Base Branch

```bash
# Check which common base branch shares history with HEAD
git merge-base HEAD main 2>/dev/null || git merge-base HEAD master 2>/dev/null
```

Or ask: "This branch split from main - is that correct?"
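
The detection can also be sketched as picking whichever branch name actually exists. A minimal sketch, assuming the default branch is either `main` or `master` (adjust for repos with other defaults):

```shell
# Prefer main if it exists as a ref, otherwise fall back to master.
if git rev-parse --verify --quiet main >/dev/null 2>&1; then
  base=main
else
  base=master
fi
echo "base branch: $base"
```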

### Step 3: Present Options

Present exactly these 4 options:

```
Implementation complete. What would you like to do?

1. Merge back to <base-branch> locally
2. Push and create a Pull Request
3. Keep the branch as-is (I'll handle it later)
4. Discard this work

Which option?
```

**Don't add explanation** - keep the options concise.

### Step 4: Execute Choice

#### Option 1: Merge Locally

```bash
# Switch to base branch
git checkout <base-branch>

# Pull latest
git pull

# Merge feature branch
git merge <feature-branch>

# Verify tests on the merged result
<test command>

# If tests pass (remove the worktree first - Step 5 - since git refuses
# to delete a branch that is still checked out in a worktree)
git branch -d <feature-branch>
```

Then: Cleanup worktree (Step 5)

#### Option 2: Push and Create PR

```bash
# Push branch
git push -u origin <feature-branch>

# Create PR
gh pr create --title "<title>" --body "$(cat <<'EOF'
## Summary
<2-3 bullets of what changed>

## Test Plan
- [ ] <verification steps>
EOF
)"
```

Then: Keep the worktree for addressing review feedback.

#### Option 3: Keep As-Is

Report: "Keeping branch <name>. Worktree preserved at <path>."

**Don't cleanup worktree.**

#### Option 4: Discard

**Confirm first:**
```
This will permanently delete:
- Branch <name>
- All commits: <commit-list>
- Worktree at <path>

Type 'discard' to confirm.
```

Wait for exact confirmation.

If confirmed:
```bash
git checkout <base-branch>
# Remove the worktree first (Step 5) - git refuses to delete a branch
# that is still checked out in a worktree
git branch -D <feature-branch>
```

Then: Cleanup worktree (Step 5)

### Step 5: Cleanup Worktree

**For Options 1 and 4:**

Check if in a worktree:
```bash
git worktree list | grep "$(git branch --show-current)"
```

If yes:
```bash
git worktree remove <worktree-path>
```

**For Options 2 and 3:** Keep the worktree.

## Quick Reference

| Option | Merge | Push | Keep Worktree | Cleanup Branch |
|--------|-------|------|---------------|----------------|
| 1. Merge locally | ✓ | - | - | ✓ |
| 2. Create PR | - | ✓ | ✓ | - |
| 3. Keep as-is | - | - | ✓ | - |
| 4. Discard | - | - | - | ✓ (force) |

## Common Mistakes

**Skipping test verification**
- **Problem:** Merge broken code, create a failing PR
- **Fix:** Always verify tests before offering options

**Open-ended questions**
- **Problem:** "What should I do next?" → ambiguous
- **Fix:** Present exactly 4 structured options

**Automatic worktree cleanup**
- **Problem:** Removing the worktree when it might still be needed (Options 2, 3)
- **Fix:** Only clean up for Options 1 and 4

**No confirmation for discard**
- **Problem:** Accidentally delete work
- **Fix:** Require typed "discard" confirmation

## Red Flags

**Never:**
- Proceed with failing tests
- Merge without verifying tests on the result
- Delete work without confirmation
- Force-push without explicit request

**Always:**
- Verify tests before offering options
- Present exactly 4 options
- Get typed confirmation for Option 4
- Clean up the worktree for Options 1 & 4 only

## Integration

**Called by:**
- skills/collaboration/subagent-driven-development (Step 7)
- skills/collaboration/executing-plans (Step 5)

**Pairs with:**
- skills/collaboration/using-git-worktrees (created the worktree)
211
skills/collaboration/receiving-code-review/SKILL.md
Normal file
@@ -0,0 +1,211 @@
---
name: Code Review Reception
description: Receive and act on code review feedback with technical rigor, not performative agreement or blind implementation
when_to_use: When receiving code review feedback from your human partner or external reviewers. Before implementing review suggestions. When PR comments arrive. When feedback seems wrong or unclear.
version: 1.0.0
---

# Code Review Reception

## Overview

Code review requires technical evaluation, not emotional performance.

**Core principle:** Verify before implementing. Ask before assuming. Technical correctness over social comfort.

## The Response Pattern

```
WHEN receiving code review feedback:

1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate the requirement in your own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each
```

## Forbidden Responses

**NEVER:**
- "You're absolutely right!" (explicit CLAUDE.md violation)
- "Great point!" / "Excellent feedback!" (performative)
- "Let me implement that now" (before verification)

**INSTEAD:**
- Restate the technical requirement
- Ask clarifying questions
- Push back with technical reasoning if wrong
- Just start working (actions > words)

## Handling Unclear Feedback

```
IF any item is unclear:
    STOP - do not implement anything yet
    ASK for clarification on unclear items

WHY: Items may be related. Partial understanding = wrong implementation.
```

**Example:**
```
Your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.

❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."
```

## Source-Specific Handling

### From Your Human Partner
- **Trusted** - implement after understanding
- **Still ask** if the scope is unclear
- **No performative agreement**
- **Skip to action** or technical acknowledgment

### From External Reviewers
```
BEFORE implementing:
1. Check: Technically correct for THIS codebase?
2. Check: Breaks existing functionality?
3. Check: Reason for the current implementation?
4. Check: Works on all platforms/versions?
5. Check: Does the reviewer understand the full context?

IF a suggestion seems wrong:
    Push back with technical reasoning

IF you can't easily verify:
    Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"

IF it conflicts with your human partner's prior decisions:
    Stop and discuss with your human partner first
```

**Your human partner's rule:** "External feedback - be skeptical, but check carefully"

## YAGNI Check for "Professional" Features

```
IF reviewer suggests "implementing properly":
    grep the codebase for actual usage

    IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
    IF used: Then implement properly
```

**Your human partner's rule:** "You and the reviewer both report to me. If we don't need this feature, don't add it."
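
The usage grep above can be sketched as a one-liner. The endpoint name here is hypothetical; substitute the identifier actually under review:

```shell
# Search the source tree for callers of a (hypothetical) endpoint before
# implementing it "properly"; no hits is a YAGNI signal.
grep -rn "exportMetricsCsv" src/ 2>/dev/null || echo "No callers found - candidate for removal (YAGNI)"
```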

## Implementation Order

```
FOR multi-item feedback:
1. Clarify anything unclear FIRST
2. Then implement in this order:
   - Blocking issues (breaks, security)
   - Simple fixes (typos, imports)
   - Complex fixes (refactoring, logic)
3. Test each fix individually
4. Verify no regressions
```

## When to Push Back

Push back when:
- The suggestion breaks existing functionality
- The reviewer lacks full context
- It violates YAGNI (unused feature)
- It is technically incorrect for this stack
- Legacy/compatibility reasons exist
- It conflicts with your human partner's architectural decisions

**How to push back:**
- Use technical reasoning, not defensiveness
- Ask specific questions
- Reference working tests/code
- Involve your human partner if architectural

**Signal if uncomfortable pushing back out loud:** "Strange things are afoot at the Circle K"

## Acknowledging Correct Feedback

When feedback IS correct:
```
✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]

❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression
```

**Why no thanks:** Actions speak. Just fix it. The code itself shows you heard the feedback.

**If you catch yourself about to write "Thanks":** DELETE IT. State the fix instead.

## Gracefully Correcting Your Pushback

If you pushed back and were wrong:
```
✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."

❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining
```

State the correction factually and move on.

## Common Mistakes

| Mistake | Fix |
|---------|-----|
| Performative agreement | State the requirement or just act |
| Blind implementation | Verify against the codebase first |
| Batch without testing | One at a time, test each |
| Assuming the reviewer is right | Check if it breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State the limitation, ask for direction |

## Real Examples

**Performative Agreement (Bad):**
```
Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."
```

**Technical Verification (Good):**
```
Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need the legacy path for backward compat. Current impl has the wrong bundle ID - fix it, or drop pre-13 support?"
```

**YAGNI (Good):**
```
Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped the codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"
```

**Unclear Item (Good):**
```
Your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."
```

## The Bottom Line

**External feedback = suggestions to evaluate, not orders to follow.**

Verify. Question. Then implement.

No performative agreement. Technical rigor always.
329
skills/collaboration/remembering-conversations/DEPLOYMENT.md
Normal file
@@ -0,0 +1,329 @@
# Conversation Search Deployment Guide

Quick reference for deploying and maintaining the conversation indexing system.

## Initial Deployment

```bash
cd ~/.claude/skills/collaboration/remembering-conversations/tool

# 1. Install hook
./install-hook

# 2. Index existing conversations (with parallel summarization)
./index-conversations --cleanup --concurrency 8

# 3. Verify index health
./index-conversations --verify

# 4. Test search
./search-conversations "test query"
```

**Expected results:**
- Hook installed at `~/.claude/hooks/sessionEnd`
- Summaries created for all conversations (50-120 words each)
- Search returns relevant results in <1 second
- No verification errors

**Performance tip:** Use `--concurrency 8` or `--concurrency 16` for 8-16x faster summarization during initial indexing. The hook uses concurrency=1 (safe for background use).

## Ongoing Maintenance

### Automatic (No Action Required)

- The hook runs after every session ends
- New conversations are indexed in the background (<30 sec per conversation)
- Summaries are generated automatically

### Weekly Health Check

```bash
cd ~/.claude/skills/collaboration/remembering-conversations/tool
./index-conversations --verify
```

If issues are found:
```bash
./index-conversations --repair
```
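
One way to make the weekly check routine is a cron entry. This is a hypothetical crontab line, not part of the shipped tooling; the tool path comes from this guide, while the schedule and log location are assumptions to adjust:

```shell
# Hypothetical crontab entry: run the verify pass every Monday at 09:00
# and append output to a log for later inspection.
# 0 9 * * 1 cd ~/.claude/skills/collaboration/remembering-conversations/tool && ./index-conversations --verify >> ~/index-verify.log 2>&1
```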

### After System Changes

| Change | Action |
|--------|--------|
| Moved conversation archive | Update paths in code, run `--rebuild` |
| Updated CLAUDE.md | Run `--verify` to check for issues |
| Changed database schema | Back up the DB, run `--rebuild` |
| Hook not running | Check it is executable: `chmod +x ~/.claude/hooks/sessionEnd` |

## Recovery Scenarios

| Issue | Diagnosis | Fix |
|-------|-----------|-----|
| **Missing summaries** | `--verify` shows "Missing summaries: N" | `--repair` regenerates missing summaries |
| **Orphaned DB entries** | `--verify` shows "Orphaned entries: N" | `--repair` removes orphaned entries |
| **Outdated indexes** | `--verify` shows "Outdated files: N" | `--repair` re-indexes modified files |
| **Corrupted database** | Errors during search/verify | `--rebuild` (re-indexes everything, requires confirmation) |
| **Hook not running** | No summaries for new conversations | See Troubleshooting below |
| **Slow indexing** | Takes >30 sec per conversation | Check API key, network, and Haiku fallback in logs |

## Monitoring

### Health Checks

```bash
# Check hook installed and executable
ls -l ~/.claude/hooks/sessionEnd

# Check recent conversations
ls -lt ~/.clank/conversation-archive/*/*.jsonl | head -5

# Check database size
ls -lh ~/.clank/conversation-index/db.sqlite

# Full verification
./index-conversations --verify
```

### Expected Behavior Metrics

- **Hook execution:** Within seconds of session end
- **Indexing speed:** <30 seconds per conversation
- **Summary length:** 50-120 words
- **Search latency:** <1 second
- **Verification:** 0 errors when healthy

### Log Output

Normal indexing:
```
Initializing database...
Loading embedding model...
Processing project: my-project (3 conversations)
  Summary: 87 words
  Indexed conversation.jsonl: 5 exchanges
✅ Indexing complete! Conversations: 3, Exchanges: 15
```

Verification with issues:
```
|
||||
Verifying conversation index...
|
||||
Verified 100 conversations.
|
||||
|
||||
=== Verification Results ===
|
||||
Missing summaries: 2
|
||||
Orphaned entries: 0
|
||||
Outdated files: 1
|
||||
Corrupted files: 0
|
||||
|
||||
Run with --repair to fix these issues.
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Hook Not Running
|
||||
|
||||
**Symptoms:** New conversations not indexed automatically
|
||||
|
||||
**Diagnosis:**
|
||||
```bash
|
||||
# 1. Check hook exists and is executable
|
||||
ls -l ~/.claude/hooks/sessionEnd
|
||||
# Should show: -rwxr-xr-x ... sessionEnd
|
||||
|
||||
# 2. Check $SESSION_ID is set during sessions
|
||||
echo $SESSION_ID
|
||||
# Should show: session ID when in active session
|
||||
|
||||
# 3. Check indexer exists
|
||||
ls -l ~/.claude/skills/collaboration/remembering-conversations/tool/index-conversations
|
||||
# Should show: -rwxr-xr-x ... index-conversations
|
||||
|
||||
# 4. Test hook manually
|
||||
SESSION_ID=test-$(date +%s) ~/.claude/hooks/sessionEnd
|
||||
```
|
||||
|
||||
**Fix:**
|
||||
```bash
|
||||
# Make hook executable
|
||||
chmod +x ~/.claude/hooks/sessionEnd
|
||||
|
||||
# Reinstall if needed
|
||||
./install-hook
|
||||
```
### Summaries Failing

**Symptoms:** Verify shows missing summaries; repair fails

**Diagnosis:**
```bash
# Check API key
echo $ANTHROPIC_API_KEY
# Should show: sk-ant-...

# Try manual indexing with logging
./index-conversations 2>&1 | tee index.log
grep -i error index.log
```

**Fix:**
```bash
# Set API key if missing
export ANTHROPIC_API_KEY="your-key-here"

# Check for rate limits (wait and retry)
sleep 60 && ./index-conversations --repair

# Fallback uses claude-3-haiku-20240307 (cheaper)
# Check logs for: "Summary: N words" to confirm success
```

### Search Not Finding Results

**Symptoms:** `./search-conversations "query"` returns no results

**Diagnosis:**
```bash
# 1. Verify conversations indexed
./index-conversations --verify

# 2. Check database exists and has data
ls -lh ~/.clank/conversation-index/db.sqlite
# Should be > 100KB if conversations indexed

# 3. Try text search (exact match)
./search-conversations --text "exact phrase from conversation"

# 4. Check for corruption
sqlite3 ~/.clank/conversation-index/db.sqlite "SELECT COUNT(*) FROM exchanges;"
# Should show a number > 0
```

**Fix:**
```bash
# If database missing or corrupt
./index-conversations --rebuild

# If specific conversations missing
./index-conversations --repair

# If still failing, check embedding model
rm -rf ~/.cache/transformers  # Force re-download
./index-conversations
```

### Database Corruption

**Symptoms:** Errors like "database disk image is malformed"

**Fix:**
```bash
# 1. Back up current database
cp ~/.clank/conversation-index/db.sqlite ~/.clank/conversation-index/db.sqlite.backup

# 2. Rebuild from scratch
./index-conversations --rebuild
# Confirms with: "Are you sure? [yes/NO]:"
# Type: yes

# 3. Verify rebuild
./index-conversations --verify
```
## Commands Reference

```bash
# Index all conversations
./index-conversations

# Index specific session (called by hook)
./index-conversations --session <session-id>

# Index only unprocessed conversations
./index-conversations --cleanup

# Verify index health
./index-conversations --verify

# Repair issues found by verify
./index-conversations --repair

# Rebuild everything (with confirmation)
./index-conversations --rebuild

# Search conversations (semantic)
./search-conversations "query"

# Search conversations (text match)
./search-conversations --text "exact phrase"

# Install/reinstall hook
./install-hook
```

## Subagent Workflow

**For searching conversations from within Claude Code sessions**, use the subagent pattern (see `skills/getting-started` for the complete workflow).

**Template:** `tool/prompts/search-agent.md`

**Key requirements:**
- Synthesis must be 200-1000 words (Summary section)
- All sources must include: project, date, file path, status
- No raw conversation excerpts (synthesize instead)
- Follow-up via subagent (not direct file reads)

**Manual test checklist:**
1. ✓ Dispatch subagent with search template
2. ✓ Verify synthesis is 200-1000 words
3. ✓ Verify all sources have metadata (project, date, path, status)
4. ✓ Ask follow-up → dispatch second subagent to dig deeper
5. ✓ Confirm no raw conversations in main context

## Files and Directories

```
~/.claude/
├── hooks/
│   └── sessionEnd                # Hook that triggers indexing
└── skills/collaboration/remembering-conversations/
    ├── SKILL.md                  # Main documentation
    ├── DEPLOYMENT.md             # This file
    └── tool/
        ├── index-conversations   # Main indexer
        ├── search-conversations  # Search interface
        ├── install-hook          # Hook installer
        ├── test-deployment.sh    # End-to-end tests
        ├── src/                  # TypeScript source
        └── prompts/
            └── search-agent.md   # Subagent template

~/.clank/
├── conversation-archive/         # Archived conversations
│   └── <project>/
│       ├── <uuid>.jsonl          # Conversation file
│       └── <uuid>-summary.txt    # AI summary (50-120 words)
└── conversation-index/
    └── db.sqlite                 # SQLite database with embeddings
```

## Deployment Checklist

### Initial Setup
- [ ] Hook installed: `./install-hook`
- [ ] Existing conversations indexed: `./index-conversations`
- [ ] Verification clean: `./index-conversations --verify`
- [ ] Search working: `./search-conversations "test"`
- [ ] Subagent template exists: `ls tool/prompts/search-agent.md`

### Ongoing
- [ ] Weekly: Run `--verify` and `--repair` if needed
- [ ] After system changes: Re-verify
- [ ] Monitor: Check hook runs (summaries appear for new conversations)

### Testing
- [ ] Run end-to-end tests: `./test-deployment.sh`
- [ ] All 5 scenarios pass
- [ ] Manual subagent test (see scenario 5 in test output)
133
skills/collaboration/remembering-conversations/INDEXING.md
Normal file
@@ -0,0 +1,133 @@
# Managing Conversation Index

Index, archive, and maintain conversations for search.

## Quick Start

**Install auto-indexing hook:**
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/install-hook
```

**Index all conversations:**
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/index-conversations
```

**Process unindexed only:**
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/index-conversations --cleanup
```

## Features

- **Automatic indexing** via sessionEnd hook (install once, forget)
- **Semantic search** across all past conversations
- **AI summaries** (Claude Haiku with Sonnet fallback)
- **Recovery modes** (verify, repair, rebuild)
- **Permanent archive** at `~/.clank/conversation-archive/`

## Setup

### 1. Install Hook (One-Time)

```bash
cd ~/.claude/skills/collaboration/remembering-conversations/tool
./install-hook
```

Handles existing hooks gracefully (merge or replace). Runs in the background after each session.

### 2. Index Existing Conversations

```bash
# Index everything
./index-conversations

# Or just unindexed (faster, cheaper)
./index-conversations --cleanup
```

## Index Modes

```bash
# Index all (first run or full rebuild)
./index-conversations

# Index specific session (used by hook)
./index-conversations --session <uuid>

# Process only unindexed (missing summaries)
./index-conversations --cleanup

# Check index health
./index-conversations --verify

# Fix detected issues
./index-conversations --repair

# Nuclear option (deletes DB, re-indexes everything)
./index-conversations --rebuild
```
## Recovery Scenarios

| Situation | Command |
|-----------|---------|
| Missed conversations | `--cleanup` |
| Hook didn't run | `--cleanup` |
| Updated conversation | `--verify` then `--repair` |
| Corrupted database | `--rebuild` |
| Index health check | `--verify` |

## Troubleshooting

**Hook not running:**
- Check: `ls -l ~/.claude/hooks/sessionEnd` (should be executable)
- Test: `SESSION_ID=test-$(date +%s) ~/.claude/hooks/sessionEnd`
- Reinstall: `./install-hook`

**Summaries failing:**
- Check API key: `echo $ANTHROPIC_API_KEY`
- Check logs in `~/.clank/conversation-index/`
- Try manual: `./index-conversations --session <uuid>`

**Search not finding results:**
- Verify indexed: `./index-conversations --verify`
- Try text search: `./search-conversations --text "exact phrase"`
- Rebuild if needed: `./index-conversations --rebuild`

## Excluding Projects

To exclude specific projects from indexing (e.g., meta-conversations), create `~/.clank/conversation-index/exclude.txt`:

```
# One project name per line
# Lines starting with # are comments
-Users-yourname-Documents-some-project
```

Or set an environment variable:
```bash
export CONVERSATION_SEARCH_EXCLUDE_PROJECTS="project1,project2"
```
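The exclude-file format is simple enough to parse in a few lines. A sketch of such a loader (the `parseExcludeList` helper is illustrative and not necessarily how the tool reads the file):

```typescript
// Parse exclude.txt: one project name per line; blank lines and
// lines starting with '#' are ignored.
function parseExcludeList(text: string): Set<string> {
  const excluded = new Set<string>();
  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (line === "" || line.startsWith("#")) continue; // skip blanks/comments
    excluded.add(line);
  }
  return excluded;
}
```

The indexer can then skip any project whose name is in the returned set.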
## Storage

- **Archive:** `~/.clank/conversation-archive/<project>/<uuid>.jsonl`
- **Summaries:** `~/.clank/conversation-archive/<project>/<uuid>-summary.txt`
- **Database:** `~/.clank/conversation-index/db.sqlite`
- **Exclusions:** `~/.clank/conversation-index/exclude.txt` (optional)

## Technical Details

- **Embeddings:** @xenova/transformers (all-MiniLM-L6-v2, 384 dimensions, local/free)
- **Vector search:** sqlite-vec (local/free)
- **Summaries:** Claude Haiku with Sonnet fallback (~$0.01-0.02/conversation)
- **Parser:** Handles multi-message exchanges and sidechains
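Vector search ranks stored embeddings against the query embedding; conceptually this reduces to cosine similarity over the 384-dimension vectors. An illustrative sketch (not sqlite-vec's actual internals):

```typescript
// Cosine similarity between two embedding vectors
// (e.g., 384-dimension all-MiniLM-L6-v2 outputs).
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];      // accumulate dot product
    normA += a[i] * a[i];    // squared magnitude of a
    normB += b[i] * b[i];    // squared magnitude of b
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Results closer to 1.0 mean more similar; the "similarity %" in search output is this kind of score rendered as a percentage.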
## See Also

- **Searching:** See SKILL.md for search modes (vector, text, time filtering)
- **Deployment:** See DEPLOYMENT.md for the production runbook
69
skills/collaboration/remembering-conversations/SKILL.md
Normal file
@@ -0,0 +1,69 @@
---
name: Remembering Conversations
description: Search previous Claude Code conversations for facts, patterns, decisions, and context using semantic or text search
when_to_use: When your human partner mentions "we discussed this before". When debugging similar issues. When looking for architectural decisions or code patterns from past work. Before reinventing solutions. When you need to find a specific git SHA or error message.
version: 1.0.0
---

# Remembering Conversations

Search archived conversations using semantic similarity or exact text matching.

**Core principle:** Search before reinventing.

**Announce:** "I'm searching previous conversations for [topic]."

**Setup:** See INDEXING.md

## When to Use

**Search when:**
- Your human partner mentions "we discussed this before"
- Debugging similar issues
- Looking for architectural decisions or patterns
- Before implementing something familiar

**Don't search when:**
- Info is in the current conversation
- Question is about the current codebase (use Grep/Read)

## In-Session Use

**Always use subagents** (50-100x context savings). See skills/getting-started for the workflow.

**Manual/CLI use:** Direct search (below) for humans outside Claude Code sessions.

## Direct Search (Manual/CLI)

**Tool:** `${CLAUDE_PLUGIN_ROOT}/skills/collaboration/remembering-conversations/tool/search-conversations`

**Modes:**
```bash
search-conversations "query"          # Vector similarity (default)
search-conversations --text "exact"   # Exact string match
search-conversations --both "query"   # Both modes
```

**Flags:**
```bash
--after YYYY-MM-DD    # Filter by date
--before YYYY-MM-DD   # Filter by date
--limit N             # Max results (default: 10)
--help                # Full usage
```

**Examples:**
```bash
# Semantic search
search-conversations "React Router authentication errors"

# Find git SHA
search-conversations --text "a1b2c3d4"

# Time range
search-conversations --after 2025-09-01 "refactoring"
```

Returns: project, date, conversation summary, matched exchange, similarity %, file path.

**For details:** Run `search-conversations --help`
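The `--after`/`--before` flags compare ISO dates; since `YYYY-MM-DD` strings sort lexicographically, the filtering conceptually reduces to string comparison. A sketch under that assumption (the `Exchange` shape and `filterByDate` helper are hypothetical, not the CLI's actual code):

```typescript
// ISO YYYY-MM-DD strings sort lexicographically, so date-range
// filtering is a pair of plain string comparisons.
interface Exchange {
  timestamp: string; // e.g. "2025-09-17"
}

function filterByDate(
  items: Exchange[],
  after?: string,
  before?: string
): Exchange[] {
  return items.filter(
    (x) =>
      (!after || x.timestamp > after) &&
      (!before || x.timestamp < before)
  );
}
```

Whether the real flags treat the boundary dates as inclusive or exclusive is an implementation detail of the CLI; this sketch uses strict comparisons.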
8
skills/collaboration/remembering-conversations/tool/.gitignore
vendored
Normal file
@@ -0,0 +1,8 @@
node_modules/
dist/
*.log
.DS_Store

# Local data (database and archives are at ~/.clank/, not in repo)
*.sqlite*
.cache/
10
skills/collaboration/remembering-conversations/tool/hooks/sessionEnd
Executable file
@@ -0,0 +1,10 @@
#!/bin/bash
# Auto-index conversation after session ends
# Copy to ~/.claude/hooks/sessionEnd to enable

INDEXER="$HOME/.claude/skills/collaboration/remembering-conversations/tool/index-conversations"

if [ -n "$SESSION_ID" ] && [ -x "$INDEXER" ]; then
  # Run in background, suppress output
  "$INDEXER" --session "$SESSION_ID" > /dev/null 2>&1 &
fi
79
skills/collaboration/remembering-conversations/tool/index-conversations
Executable file
@@ -0,0 +1,79 @@
#!/bin/bash
cd "$(dirname "$0")"

SCRIPT_DIR="$(pwd)"

case "$1" in
  --help|-h)
    cat <<'EOF'
index-conversations - Index and manage conversation archives

USAGE:
  index-conversations [COMMAND] [OPTIONS]

COMMANDS:
  (default)        Index all conversations
  --cleanup        Process only unindexed conversations (fast, cheap)
  --session ID     Index specific session (used by hook)
  --verify         Check index health
  --repair         Fix detected issues
  --rebuild        Delete DB and re-index everything (requires confirmation)

OPTIONS:
  --concurrency N  Parallel summarization (1-16, default: 1)
  -c N             Short form of --concurrency
  --help, -h       Show this help

EXAMPLES:
  # Index all unprocessed (recommended for backfill)
  index-conversations --cleanup

  # Index with 8 parallel summarizations (8x faster)
  index-conversations --cleanup --concurrency 8

  # Check index health
  index-conversations --verify

  # Fix any issues found
  index-conversations --repair

  # Nuclear option (deletes everything, re-indexes)
  index-conversations --rebuild

WORKFLOW:
  1. Initial setup: index-conversations --cleanup
  2. Ongoing: Auto-indexed by sessionEnd hook
  3. Health check: index-conversations --verify (weekly)
  4. Recovery: index-conversations --repair (if issues found)

SEE ALSO:
  INDEXING.md - Setup and maintenance guide
  DEPLOYMENT.md - Production runbook
EOF
    exit 0
    ;;
  --session)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" index-session "$@"
    ;;
  --cleanup)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" index-cleanup "$@"
    ;;
  --verify)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" verify "$@"
    ;;
  --repair)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" repair "$@"
    ;;
  --rebuild)
    echo "⚠️  This will DELETE the entire database and re-index everything."
    read -p "Are you sure? [yes/NO]: " confirm
    if [ "$confirm" = "yes" ]; then
      npx tsx "$SCRIPT_DIR/src/index-cli.ts" rebuild "$@"
    else
      echo "Cancelled"
    fi
    ;;
  *)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" index-all "$@"
    ;;
esac
82
skills/collaboration/remembering-conversations/tool/install-hook
Executable file
@@ -0,0 +1,82 @@
#!/bin/bash
# Install sessionEnd hook with merge support

HOOK_DIR="$HOME/.claude/hooks"
HOOK_FILE="$HOOK_DIR/sessionEnd"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SOURCE_HOOK="$SCRIPT_DIR/hooks/sessionEnd"

echo "Installing conversation indexing hook..."

# Create hooks directory
mkdir -p "$HOOK_DIR"

# Handle existing hook
if [ -f "$HOOK_FILE" ]; then
  echo "⚠️  Existing sessionEnd hook found"

  # Check if our indexer is already installed
  if grep -q "remembering-conversations.*index-conversations" "$HOOK_FILE"; then
    echo "✓ Indexer already installed in existing hook"
    exit 0
  fi

  # Create backup
  BACKUP="$HOOK_FILE.backup.$(date +%s)"
  cp "$HOOK_FILE" "$BACKUP"
  echo "Created backup: $BACKUP"

  # Offer merge or replace
  echo ""
  echo "Options:"
  echo "  (m) Merge   - Add indexer to existing hook"
  echo "  (r) Replace - Overwrite with our hook"
  echo "  (c) Cancel  - Exit without changes"
  echo ""
  read -p "Choose [m/r/c]: " choice

  case "$choice" in
    m|M)
      # Append our indexer
      cat >> "$HOOK_FILE" <<'EOF'

# Auto-index conversations (remembering-conversations skill)
INDEXER="$HOME/.claude/skills/collaboration/remembering-conversations/tool/index-conversations"
if [ -n "$SESSION_ID" ] && [ -x "$INDEXER" ]; then
  "$INDEXER" --session "$SESSION_ID" > /dev/null 2>&1 &
fi
EOF
      echo "✓ Merged indexer into existing hook"
      ;;
    r|R)
      cp "$SOURCE_HOOK" "$HOOK_FILE"
      chmod +x "$HOOK_FILE"
      echo "✓ Replaced hook with our version"
      ;;
    c|C)
      echo "Installation cancelled"
      exit 1
      ;;
    *)
      echo "Invalid choice. Exiting."
      exit 1
      ;;
  esac
else
  # No existing hook, install fresh
  cp "$SOURCE_HOOK" "$HOOK_FILE"
  chmod +x "$HOOK_FILE"
  echo "✓ Installed sessionEnd hook"
fi

# Verify executable
if [ ! -x "$HOOK_FILE" ]; then
  chmod +x "$HOOK_FILE"
fi

echo ""
echo "Hook installed successfully!"
echo "Location: $HOOK_FILE"
echo ""
echo "Test it:"
echo "  SESSION_ID=test-\$(date +%s) $HOOK_FILE"
2816
skills/collaboration/remembering-conversations/tool/package-lock.json
generated
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,29 @@
{
  "name": "conversation-search",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "index": "./index-conversations",
    "search": "./search-conversations",
    "test": "vitest run",
    "test:watch": "vitest"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "type": "module",
  "dependencies": {
    "@anthropic-ai/claude-agent-sdk": "^0.1.9",
    "@xenova/transformers": "^2.17.2",
    "better-sqlite3": "^12.4.1",
    "sqlite-vec": "^0.1.7-alpha.2"
  },
  "devDependencies": {
    "@types/better-sqlite3": "^7.6.13",
    "@types/node": "^24.7.0",
    "tsx": "^4.20.6",
    "typescript": "^5.9.3",
    "vitest": "^3.2.4"
  }
}
@@ -0,0 +1,157 @@
# Conversation Search Agent

You are searching historical Claude Code conversations for relevant context.

**Your task:**
1. Search conversations for: {TOPIC}
2. Read the top 2-5 most relevant results
3. Synthesize key findings (max 1000 words)
4. Return synthesis + source pointers (so main agent can dig deeper)

## Search Query

{SEARCH_QUERY}

## What to Look For

{FOCUS_AREAS}

Example focus areas:
- What was the problem or question?
- What solution was chosen and why?
- What alternatives were considered and rejected?
- Any gotchas, edge cases, or lessons learned?
- Relevant code patterns, APIs, or approaches used
- Architectural decisions and rationale

## How to Search

Run:
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/search-conversations "{SEARCH_QUERY}"
```

This returns:
- Project name and date
- Conversation summary (AI-generated)
- Matched exchange with similarity score
- File path and line numbers

Read the full conversations for the top 2-5 results to get complete context.

## Output Format

**Required structure:**

### Summary
[Synthesize findings in 200-1000 words. Adapt structure to what you found:
- Quick answer? 1-2 paragraphs.
- Complex topic? Use sections (Context/Solution/Rationale/Lessons/Code).
- Multiple approaches? Compare and contrast.
- Historical evolution? Show progression chronologically.

Focus on actionable insights for the current task.]

### Sources
[List ALL conversations examined, in order of relevance:]

**1. [project-name, YYYY-MM-DD]** - X% match
Conversation summary: [One sentence - what was this conversation about?]
File: ~/.clank/conversation-archive/.../uuid.jsonl:start-end
Status: [Read in detail | Reviewed summary only | Skimmed]

**2. [project-name, YYYY-MM-DD]** - X% match
Conversation summary: ...
File: ...
Status: ...

[Continue for all examined sources...]

### For Follow-Up

Main agent can:
- Ask you to dig deeper into a specific source (#1, #2, etc.)
- Ask you to read adjacent exchanges in a conversation
- Ask you to search with a refined query
- Read sources directly (discouraged - risks context bloat)
## Critical Rules

**DO:**
- Search using the provided query
- Read full conversations for top results
- Synthesize into actionable insights (200-1000 words)
- Include ALL sources with metadata (project, date, summary, file, status)
- Focus on what will help the current task
- Include specific details (function names, error messages, line numbers)

**DO NOT:**
- Include raw conversation excerpts (synthesize instead)
- Paste full file contents
- Add meta-commentary ("I searched and found...")
- Exceed 1000 words in Summary section
- Return search results verbatim

## Example Output

```
### Summary

The developer needed to handle authentication errors in React Router 7 data loaders
without crashing the app. The solution uses RR7's errorElement + useRouteError()
to catch 401s and redirect to login.

**Key implementation:**
Protected route wrapper catches loader errors, checks error.status === 401.
If 401, redirects to /login with return URL. Otherwise shows error boundary.

**Why this works:**
Loaders can't use hooks (tried useNavigate, failed). Throwing redirect()
bypasses error handling. Final approach lets errors bubble to errorElement
where component context is available.

**Critical gotchas:**
- Test with expired tokens, not just missing tokens
- Error boundaries need unique keys per route or won't reset
- Always include return URL in redirect
- Loaders execute before components, no hook access

**Code pattern:**
```typescript
// In loader
if (!response.ok) throw { status: response.status, message: 'Failed' };

// In ErrorBoundary
const error = useRouteError();
if (error.status === 401) navigate('/login?return=' + location.pathname);
```

### Sources

**1. [react-router-7-starter, 2024-09-17]** - 92% match
Conversation summary: Built authentication system with JWT, implemented protected routes
File: ~/.clank/conversation-archive/react-router-7-starter/19df92b9.jsonl:145-289
Status: Read in detail (multiple exchanges on error handling evolution)

**2. [react-router-docs-reading, 2024-09-10]** - 78% match
Conversation summary: Read RR7 docs, discussed new loader patterns and errorElement
File: ~/.clank/conversation-archive/react-router-docs-reading/a3c871f2.jsonl:56-98
Status: Reviewed summary only (confirmed errorElement usage)

**3. [auth-debugging, 2024-09-18]** - 73% match
Conversation summary: Fixed token expiration handling and error boundary reset issues
File: ~/.clank/conversation-archive/react-router-7-starter/7b2e8d91.jsonl:201-345
Status: Read in detail (discovered gotchas about keys and expired tokens)

### For Follow-Up

Main agent can ask me to:
- Dig deeper into source #1 (full error handling evolution)
- Read adjacent exchanges in #3 (more debugging context)
- Search for "React Router error boundary patterns" more broadly
```

This output:
- Synthesis: ~350 words (actionable, specific)
- Sources: Full metadata for 3 conversations
- Enables iteration without context bloat
105
skills/collaboration/remembering-conversations/tool/search-conversations
Executable file
@@ -0,0 +1,105 @@
#!/bin/bash
cd "$(dirname "$0")"

# Parse arguments
MODE="vector"
AFTER=""
BEFORE=""
LIMIT="10"
QUERY=""

while [[ $# -gt 0 ]]; do
  case $1 in
    --help|-h)
      cat <<'EOF'
search-conversations - Search previous Claude Code conversations

USAGE:
  search-conversations [OPTIONS] <query>

MODES:
  (default)   Vector similarity search (semantic)
  --text      Exact string matching (for git SHAs, error codes)
  --both      Combine vector + text search

OPTIONS:
  --after DATE    Only conversations after YYYY-MM-DD
  --before DATE   Only conversations before YYYY-MM-DD
  --limit N       Max results (default: 10)
  --help, -h      Show this help

EXAMPLES:
  # Semantic search
  search-conversations "React Router authentication errors"

  # Find exact string (git SHA, error message)
  search-conversations --text "a1b2c3d4e5f6"

  # Time filtering
  search-conversations --after 2025-09-01 "refactoring"
  search-conversations --before 2025-10-01 --limit 20 "bug fix"

  # Combine modes
  search-conversations --both "React Router data loading"

OUTPUT FORMAT:
  For each result:
  - Project name and date
  - Conversation summary (AI-generated)
  - Matched exchange with similarity % (vector mode)
  - File path with line numbers

  Example:
    1. [react-router-7-starter, 2025-09-17]
       Built authentication with JWT, implemented protected routes.

       92% match: "How do I handle auth errors in loaders?"
       ~/.clank/conversation-archive/.../uuid.jsonl:145-167

QUERY TIPS:
  - Use natural language: "How did we handle X?"
  - Be specific: "React Router data loading" not "routing"
  - Include context: "TypeScript type narrowing in guards"

SEE ALSO:
  skills/collaboration/remembering-conversations/INDEXING.md - Manage index
  skills/collaboration/remembering-conversations/SKILL.md - Usage guide
EOF
      exit 0
      ;;
    --text)
      MODE="text"
      shift
      ;;
    --both)
      MODE="both"
      shift
      ;;
    --after)
      AFTER="$2"
      shift 2
      ;;
    --before)
      BEFORE="$2"
      shift 2
      ;;
    --limit)
      LIMIT="$2"
      shift 2
      ;;
    *)
      QUERY="$QUERY $1"
      shift
      ;;
  esac
done

QUERY=$(echo "$QUERY" | sed 's/^ *//')

if [ -z "$QUERY" ]; then
  echo "Usage: search-conversations [options] <query>"
  echo "Try: search-conversations --help"
  exit 1
fi

npx tsx src/search-cli.ts "$QUERY" "$MODE" "$LIMIT" "$AFTER" "$BEFORE"
@@ -0,0 +1,112 @@
|
||||
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { initDatabase, migrateSchema, insertExchange } from './db.js';
import { ConversationExchange } from './types.js';
import fs from 'fs';
import path from 'path';
import os from 'os';
import Database from 'better-sqlite3';

describe('database migration', () => {
  const testDir = path.join(os.tmpdir(), 'db-migration-test-' + Date.now());
  const dbPath = path.join(testDir, 'test.db');

  beforeEach(() => {
    fs.mkdirSync(testDir, { recursive: true });
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    delete process.env.TEST_DB_PATH;
    fs.rmSync(testDir, { recursive: true, force: true });
  });

  it('adds last_indexed column to existing database', () => {
    // Create a database with old schema (no last_indexed)
    const db = new Database(dbPath);
    db.exec(`
      CREATE TABLE exchanges (
        id TEXT PRIMARY KEY,
        project TEXT NOT NULL,
        timestamp TEXT NOT NULL,
        user_message TEXT NOT NULL,
        assistant_message TEXT NOT NULL,
        archive_path TEXT NOT NULL,
        line_start INTEGER NOT NULL,
        line_end INTEGER NOT NULL,
        embedding BLOB
      )
    `);

    // Verify column doesn't exist
    const columnsBefore = db.prepare(`PRAGMA table_info(exchanges)`).all();
    const hasLastIndexedBefore = columnsBefore.some((col: any) => col.name === 'last_indexed');
    expect(hasLastIndexedBefore).toBe(false);

    db.close();

    // Run migration
    const migratedDb = initDatabase();

    // Verify column now exists
    const columnsAfter = migratedDb.prepare(`PRAGMA table_info(exchanges)`).all();
    const hasLastIndexedAfter = columnsAfter.some((col: any) => col.name === 'last_indexed');
    expect(hasLastIndexedAfter).toBe(true);

    migratedDb.close();
  });

  it('handles existing last_indexed column gracefully', () => {
    // Create database with migration already applied
    const db = initDatabase();

    // Run migration again - should not error
    expect(() => migrateSchema(db)).not.toThrow();

    db.close();
  });
});

describe('insertExchange with last_indexed', () => {
  const testDir = path.join(os.tmpdir(), 'insert-test-' + Date.now());
  const dbPath = path.join(testDir, 'test.db');

  beforeEach(() => {
    fs.mkdirSync(testDir, { recursive: true });
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    delete process.env.TEST_DB_PATH;
    fs.rmSync(testDir, { recursive: true, force: true });
  });

  it('sets last_indexed timestamp when inserting exchange', () => {
    const db = initDatabase();

    const exchange: ConversationExchange = {
      id: 'test-id-1',
      project: 'test-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'Hello',
      assistantMessage: 'Hi there!',
      archivePath: '/test/path.jsonl',
      lineStart: 1,
      lineEnd: 2
    };

    const beforeInsert = Date.now();
    // Create proper 384-dimensional embedding
    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);
    const afterInsert = Date.now();

    // Query the exchange
    const row = db.prepare(`SELECT last_indexed FROM exchanges WHERE id = ?`).get('test-id-1') as any;

    expect(row.last_indexed).toBeDefined();
    expect(row.last_indexed).toBeGreaterThanOrEqual(beforeInsert);
    expect(row.last_indexed).toBeLessThanOrEqual(afterInsert);

    db.close();
  });
});
134
skills/collaboration/remembering-conversations/tool/src/db.ts
Normal file
@@ -0,0 +1,134 @@
import Database from 'better-sqlite3';
import { ConversationExchange } from './types.js';
import path from 'path';
import os from 'os';
import fs from 'fs';
import * as sqliteVec from 'sqlite-vec';

function getDbPath(): string {
  return process.env.TEST_DB_PATH || path.join(os.homedir(), '.clank', 'conversation-index', 'db.sqlite');
}

export function migrateSchema(db: Database.Database): void {
  const hasColumn = db.prepare(`
    SELECT COUNT(*) as count FROM pragma_table_info('exchanges')
    WHERE name='last_indexed'
  `).get() as { count: number };

  if (hasColumn.count === 0) {
    console.log('Migrating schema: adding last_indexed column...');
    db.prepare('ALTER TABLE exchanges ADD COLUMN last_indexed INTEGER').run();
    console.log('Migration complete.');
  }
}

export function initDatabase(): Database.Database {
  const dbPath = getDbPath();

  // Ensure directory exists
  const dbDir = path.dirname(dbPath);
  if (!fs.existsSync(dbDir)) {
    fs.mkdirSync(dbDir, { recursive: true });
  }

  const db = new Database(dbPath);

  // Load sqlite-vec extension
  sqliteVec.load(db);

  // Enable WAL mode for better concurrency
  db.pragma('journal_mode = WAL');

  // Create exchanges table
  db.exec(`
    CREATE TABLE IF NOT EXISTS exchanges (
      id TEXT PRIMARY KEY,
      project TEXT NOT NULL,
      timestamp TEXT NOT NULL,
      user_message TEXT NOT NULL,
      assistant_message TEXT NOT NULL,
      archive_path TEXT NOT NULL,
      line_start INTEGER NOT NULL,
      line_end INTEGER NOT NULL,
      embedding BLOB
    )
  `);

  // Create vector search index
  db.exec(`
    CREATE VIRTUAL TABLE IF NOT EXISTS vec_exchanges USING vec0(
      id TEXT PRIMARY KEY,
      embedding FLOAT[384]
    )
  `);

  // Create index on timestamp for sorting
  db.exec(`
    CREATE INDEX IF NOT EXISTS idx_timestamp ON exchanges(timestamp DESC)
  `);

  // Run migrations
  migrateSchema(db);

  return db;
}

export function insertExchange(
  db: Database.Database,
  exchange: ConversationExchange,
  embedding: number[]
): void {
  const now = Date.now();

  const stmt = db.prepare(`
    INSERT OR REPLACE INTO exchanges
    (id, project, timestamp, user_message, assistant_message, archive_path, line_start, line_end, last_indexed)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
  `);

  stmt.run(
    exchange.id,
    exchange.project,
    exchange.timestamp,
    exchange.userMessage,
    exchange.assistantMessage,
    exchange.archivePath,
    exchange.lineStart,
    exchange.lineEnd,
    now
  );

  // Insert into vector table (delete first since virtual tables don't support REPLACE)
  const delStmt = db.prepare(`DELETE FROM vec_exchanges WHERE id = ?`);
  delStmt.run(exchange.id);

  const vecStmt = db.prepare(`
    INSERT INTO vec_exchanges (id, embedding)
    VALUES (?, ?)
  `);

  vecStmt.run(exchange.id, Buffer.from(new Float32Array(embedding).buffer));
}

export function getAllExchanges(db: Database.Database): Array<{ id: string; archivePath: string }> {
  const stmt = db.prepare(`SELECT id, archive_path as archivePath FROM exchanges`);
  return stmt.all() as Array<{ id: string; archivePath: string }>;
}

export function getFileLastIndexed(db: Database.Database, archivePath: string): number | null {
  const stmt = db.prepare(`
    SELECT MAX(last_indexed) as lastIndexed
    FROM exchanges
    WHERE archive_path = ?
  `);
  const row = stmt.get(archivePath) as { lastIndexed: number | null };
  return row.lastIndexed;
}

export function deleteExchange(db: Database.Database, id: string): void {
  // Delete from vector table
  db.prepare(`DELETE FROM vec_exchanges WHERE id = ?`).run(id);

  // Delete from main table
  db.prepare(`DELETE FROM exchanges WHERE id = ?`).run(id);
}
@@ -0,0 +1,39 @@
import { pipeline, Pipeline } from '@xenova/transformers';

let embeddingPipeline: Pipeline | null = null;

export async function initEmbeddings(): Promise<void> {
  if (!embeddingPipeline) {
    console.log('Loading embedding model (first run may take time)...');
    embeddingPipeline = await pipeline(
      'feature-extraction',
      'Xenova/all-MiniLM-L6-v2'
    );
    console.log('Embedding model loaded');
  }
}

export async function generateEmbedding(text: string): Promise<number[]> {
  if (!embeddingPipeline) {
    await initEmbeddings();
  }

  // Truncate text to avoid token limits (512 tokens max for this model)
  const truncated = text.substring(0, 2000);

  const output = await embeddingPipeline!(truncated, {
    pooling: 'mean',
    normalize: true
  });

  return Array.from(output.data);
}

export async function generateExchangeEmbedding(
  userMessage: string,
  assistantMessage: string
): Promise<number[]> {
  // Combine user question and assistant answer for better searchability
  const combined = `User: ${userMessage}\n\nAssistant: ${assistantMessage}`;
  return generateEmbedding(combined);
}
@@ -0,0 +1,115 @@
#!/usr/bin/env node
import { verifyIndex, repairIndex } from './verify.js';
import { indexSession, indexUnprocessed, indexConversations } from './indexer.js';
import { initDatabase } from './db.js';
import fs from 'fs';
import path from 'path';
import os from 'os';

const command = process.argv[2];

// Parse --concurrency flag from remaining args
function getConcurrency(): number {
  const concurrencyIndex = process.argv.findIndex(arg => arg === '--concurrency' || arg === '-c');
  if (concurrencyIndex !== -1 && process.argv[concurrencyIndex + 1]) {
    const value = parseInt(process.argv[concurrencyIndex + 1], 10);
    if (value >= 1 && value <= 16) return value;
  }
  return 1; // default
}

const concurrency = getConcurrency();

async function main() {
  try {
    switch (command) {
      case 'index-session':
        const sessionId = process.argv[3];
        if (!sessionId) {
          console.error('Usage: index-cli index-session <session-id>');
          process.exit(1);
        }
        await indexSession(sessionId, concurrency);
        break;

      case 'index-cleanup':
        await indexUnprocessed(concurrency);
        break;

      case 'verify':
        console.log('Verifying conversation index...');
        const issues = await verifyIndex();

        console.log('\n=== Verification Results ===');
        console.log(`Missing summaries: ${issues.missing.length}`);
        console.log(`Orphaned entries: ${issues.orphaned.length}`);
        console.log(`Outdated files: ${issues.outdated.length}`);
        console.log(`Corrupted files: ${issues.corrupted.length}`);

        if (issues.missing.length > 0) {
          console.log('\nMissing summaries:');
          issues.missing.forEach(m => console.log(`  ${m.path}`));
        }

        if (issues.missing.length + issues.orphaned.length + issues.outdated.length + issues.corrupted.length > 0) {
          console.log('\nRun with --repair to fix these issues.');
          process.exit(1);
        } else {
          console.log('\n✅ Index is healthy!');
        }
        break;

      case 'repair':
        console.log('Verifying conversation index...');
        const repairIssues = await verifyIndex();

        if (repairIssues.missing.length + repairIssues.orphaned.length + repairIssues.outdated.length > 0) {
          await repairIndex(repairIssues);
        } else {
          console.log('✅ No issues to repair!');
        }
        break;

      case 'rebuild':
        console.log('Rebuilding entire index...');

        // Delete database
        const dbPath = path.join(os.homedir(), '.clank', 'conversation-index', 'db.sqlite');
        if (fs.existsSync(dbPath)) {
          fs.unlinkSync(dbPath);
          console.log('Deleted existing database');
        }

        // Delete all summary files
        const archiveDir = path.join(os.homedir(), '.clank', 'conversation-archive');
        if (fs.existsSync(archiveDir)) {
          const projects = fs.readdirSync(archiveDir);
          for (const project of projects) {
            const projectPath = path.join(archiveDir, project);
            if (!fs.statSync(projectPath).isDirectory()) continue;

            const summaries = fs.readdirSync(projectPath).filter(f => f.endsWith('-summary.txt'));
            for (const summary of summaries) {
              fs.unlinkSync(path.join(projectPath, summary));
            }
          }
          console.log('Deleted all summary files');
        }

        // Re-index everything
        console.log('Re-indexing all conversations...');
        await indexConversations(undefined, undefined, concurrency);
        break;

      case 'index-all':
      default:
        await indexConversations(undefined, undefined, concurrency);
        break;
    }
  } catch (error) {
    console.error('Error:', error);
    process.exit(1);
  }
}

main();
@@ -0,0 +1,356 @@
import fs from 'fs';
import path from 'path';
import os from 'os';
import { initDatabase, insertExchange } from './db.js';
import { parseConversation } from './parser.js';
import { initEmbeddings, generateExchangeEmbedding } from './embeddings.js';
import { summarizeConversation } from './summarizer.js';
import { ConversationExchange } from './types.js';

// Set max output tokens for Claude SDK (used by summarizer)
process.env.CLAUDE_CODE_MAX_OUTPUT_TOKENS = '20000';

// Increase max listeners for concurrent API calls
import { EventEmitter } from 'events';
EventEmitter.defaultMaxListeners = 20;

// Allow overriding paths for testing
function getProjectsDir(): string {
  return process.env.TEST_PROJECTS_DIR || path.join(os.homedir(), '.claude', 'projects');
}

function getArchiveDir(): string {
  return process.env.TEST_ARCHIVE_DIR || path.join(os.homedir(), '.clank', 'conversation-archive');
}

// Projects to exclude from indexing (configurable via env or config file)
function getExcludedProjects(): string[] {
  // Check env variable first
  if (process.env.CONVERSATION_SEARCH_EXCLUDE_PROJECTS) {
    return process.env.CONVERSATION_SEARCH_EXCLUDE_PROJECTS.split(',').map(p => p.trim());
  }

  // Check for config file
  const configPath = path.join(os.homedir(), '.clank', 'conversation-index', 'exclude.txt');
  if (fs.existsSync(configPath)) {
    const content = fs.readFileSync(configPath, 'utf-8');
    return content.split('\n').map(line => line.trim()).filter(line => line && !line.startsWith('#'));
  }

  // Default: no exclusions
  return [];
}

// Process items in batches with limited concurrency
async function processBatch<T, R>(
  items: T[],
  processor: (item: T) => Promise<R>,
  concurrency: number
): Promise<R[]> {
  const results: R[] = [];

  for (let i = 0; i < items.length; i += concurrency) {
    const batch = items.slice(i, i + concurrency);
    const batchResults = await Promise.all(batch.map(processor));
    results.push(...batchResults);
  }

  return results;
}

export async function indexConversations(
  limitToProject?: string,
  maxConversations?: number,
  concurrency: number = 1
): Promise<void> {
  console.log('Initializing database...');
  const db = initDatabase();

  console.log('Loading embedding model...');
  await initEmbeddings();

  console.log('Scanning for conversation files...');
  const PROJECTS_DIR = getProjectsDir();
  const ARCHIVE_DIR = getArchiveDir();
  const projects = fs.readdirSync(PROJECTS_DIR);

  let totalExchanges = 0;
  let conversationsProcessed = 0;

  const excludedProjects = getExcludedProjects();

  for (const project of projects) {
    // Skip excluded projects
    if (excludedProjects.includes(project)) {
      console.log(`\nSkipping excluded project: ${project}`);
      continue;
    }

    // Skip if limiting to specific project
    if (limitToProject && project !== limitToProject) continue;

    const projectPath = path.join(PROJECTS_DIR, project);
    const stat = fs.statSync(projectPath);

    if (!stat.isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.endsWith('.jsonl'));

    if (files.length === 0) continue;

    console.log(`\nProcessing project: ${project} (${files.length} conversations)`);
    if (concurrency > 1) console.log(`  Concurrency: ${concurrency}`);

    // Create archive directory for this project
    const projectArchive = path.join(ARCHIVE_DIR, project);
    fs.mkdirSync(projectArchive, { recursive: true });

    // Prepare all conversations first
    type ConvToProcess = {
      file: string;
      sourcePath: string;
      archivePath: string;
      summaryPath: string;
      exchanges: ConversationExchange[];
    };

    const toProcess: ConvToProcess[] = [];

    for (const file of files) {
      const sourcePath = path.join(projectPath, file);
      const archivePath = path.join(projectArchive, file);

      // Copy to archive
      if (!fs.existsSync(archivePath)) {
        fs.copyFileSync(sourcePath, archivePath);
        console.log(`  Archived: ${file}`);
      }

      // Parse conversation
      const exchanges = await parseConversation(sourcePath, project, archivePath);

      if (exchanges.length === 0) {
        console.log(`  Skipped ${file} (no exchanges)`);
        continue;
      }

      toProcess.push({
        file,
        sourcePath,
        archivePath,
        summaryPath: archivePath.replace('.jsonl', '-summary.txt'),
        exchanges
      });
    }

    // Batch summarize conversations in parallel
    const needsSummary = toProcess.filter(c => !fs.existsSync(c.summaryPath));

    if (needsSummary.length > 0) {
      console.log(`  Generating ${needsSummary.length} summaries (concurrency: ${concurrency})...`);

      await processBatch(needsSummary, async (conv) => {
        try {
          const summary = await summarizeConversation(conv.exchanges);
          fs.writeFileSync(conv.summaryPath, summary, 'utf-8');
          const wordCount = summary.split(/\s+/).length;
          console.log(`  ✓ ${conv.file}: ${wordCount} words`);
          return summary;
        } catch (error) {
          console.log(`  ✗ ${conv.file}: ${error}`);
          return null;
        }
      }, concurrency);
    }

    // Now process embeddings and DB inserts (fast, sequential is fine)
    for (const conv of toProcess) {
      for (const exchange of conv.exchanges) {
        const embedding = await generateExchangeEmbedding(
          exchange.userMessage,
          exchange.assistantMessage
        );

        insertExchange(db, exchange, embedding);
      }

      totalExchanges += conv.exchanges.length;
      conversationsProcessed++;

      // Check if we hit the limit
      if (maxConversations && conversationsProcessed >= maxConversations) {
        console.log(`\nReached limit of ${maxConversations} conversations`);
        db.close();
        console.log(`✅ Indexing complete! Conversations: ${conversationsProcessed}, Exchanges: ${totalExchanges}`);
        return;
      }
    }
  }

  db.close();
  console.log(`\n✅ Indexing complete! Conversations: ${conversationsProcessed}, Exchanges: ${totalExchanges}`);
}

export async function indexSession(sessionId: string, concurrency: number = 1): Promise<void> {
  console.log(`Indexing session: ${sessionId}`);

  // Find the conversation file for this session
  const PROJECTS_DIR = getProjectsDir();
  const ARCHIVE_DIR = getArchiveDir();
  const projects = fs.readdirSync(PROJECTS_DIR);
  const excludedProjects = getExcludedProjects();
  let found = false;

  for (const project of projects) {
    if (excludedProjects.includes(project)) continue;

    const projectPath = path.join(PROJECTS_DIR, project);
    if (!fs.statSync(projectPath).isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.includes(sessionId) && f.endsWith('.jsonl'));

    if (files.length > 0) {
      found = true;
      const file = files[0];
      const sourcePath = path.join(projectPath, file);

      const db = initDatabase();
      await initEmbeddings();

      const projectArchive = path.join(ARCHIVE_DIR, project);
      fs.mkdirSync(projectArchive, { recursive: true });

      const archivePath = path.join(projectArchive, file);

      // Archive
      if (!fs.existsSync(archivePath)) {
        fs.copyFileSync(sourcePath, archivePath);
      }

      // Parse and summarize
      const exchanges = await parseConversation(sourcePath, project, archivePath);

      if (exchanges.length > 0) {
        // Generate summary
        const summaryPath = archivePath.replace('.jsonl', '-summary.txt');
        if (!fs.existsSync(summaryPath)) {
          const summary = await summarizeConversation(exchanges);
          fs.writeFileSync(summaryPath, summary, 'utf-8');
          console.log(`Summary: ${summary.split(/\s+/).length} words`);
        }

        // Index
        for (const exchange of exchanges) {
          const embedding = await generateExchangeEmbedding(
            exchange.userMessage,
            exchange.assistantMessage
          );
          insertExchange(db, exchange, embedding);
        }

        console.log(`✅ Indexed session ${sessionId}: ${exchanges.length} exchanges`);
      }

      db.close();
      break;
    }
  }

  if (!found) {
    console.log(`Session ${sessionId} not found`);
  }
}

export async function indexUnprocessed(concurrency: number = 1): Promise<void> {
  console.log('Finding unprocessed conversations...');
  if (concurrency > 1) console.log(`Concurrency: ${concurrency}`);

  const db = initDatabase();
  await initEmbeddings();

  const PROJECTS_DIR = getProjectsDir();
  const ARCHIVE_DIR = getArchiveDir();
  const projects = fs.readdirSync(PROJECTS_DIR);
  const excludedProjects = getExcludedProjects();

  type UnprocessedConv = {
    project: string;
    file: string;
    sourcePath: string;
    archivePath: string;
    summaryPath: string;
    exchanges: ConversationExchange[];
  };

  const unprocessed: UnprocessedConv[] = [];

  // Collect all unprocessed conversations
  for (const project of projects) {
    if (excludedProjects.includes(project)) continue;

    const projectPath = path.join(PROJECTS_DIR, project);
    if (!fs.statSync(projectPath).isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.endsWith('.jsonl'));

    for (const file of files) {
      const sourcePath = path.join(projectPath, file);
      const projectArchive = path.join(ARCHIVE_DIR, project);
      const archivePath = path.join(projectArchive, file);
      const summaryPath = archivePath.replace('.jsonl', '-summary.txt');

      // Skip if already has summary
      if (fs.existsSync(summaryPath)) continue;

      fs.mkdirSync(projectArchive, { recursive: true });

      // Archive if needed
      if (!fs.existsSync(archivePath)) {
        fs.copyFileSync(sourcePath, archivePath);
      }

      // Parse and check
      const exchanges = await parseConversation(sourcePath, project, archivePath);
      if (exchanges.length === 0) continue;

      unprocessed.push({ project, file, sourcePath, archivePath, summaryPath, exchanges });
    }
  }

  if (unprocessed.length === 0) {
    console.log('✅ All conversations are already processed!');
    db.close();
    return;
  }

  console.log(`Found ${unprocessed.length} unprocessed conversations`);
  console.log(`Generating summaries (concurrency: ${concurrency})...\n`);

  // Batch process summaries
  await processBatch(unprocessed, async (conv) => {
    try {
      const summary = await summarizeConversation(conv.exchanges);
      fs.writeFileSync(conv.summaryPath, summary, 'utf-8');
      const wordCount = summary.split(/\s+/).length;
      console.log(`  ✓ ${conv.project}/${conv.file}: ${wordCount} words`);
      return summary;
    } catch (error) {
      console.log(`  ✗ ${conv.project}/${conv.file}: ${error}`);
      return null;
    }
  }, concurrency);

  // Now index embeddings
  console.log(`\nIndexing embeddings...`);
  for (const conv of unprocessed) {
    for (const exchange of conv.exchanges) {
      const embedding = await generateExchangeEmbedding(
        exchange.userMessage,
        exchange.assistantMessage
      );
      insertExchange(db, exchange, embedding);
    }
  }

  db.close();
  console.log(`\n✅ Processed ${unprocessed.length} conversations`);
}
@@ -0,0 +1,118 @@
import fs from 'fs';
import readline from 'readline';
import { ConversationExchange } from './types.js';
import crypto from 'crypto';

interface JSONLMessage {
  type: string;
  message?: {
    role: 'user' | 'assistant';
    content: string | Array<{ type: string; text?: string }>;
  };
  timestamp?: string;
  uuid?: string;
}

export async function parseConversation(
  filePath: string,
  projectName: string,
  archivePath: string
): Promise<ConversationExchange[]> {
  const exchanges: ConversationExchange[] = [];
  const fileStream = fs.createReadStream(filePath);
  const rl = readline.createInterface({
    input: fileStream,
    crlfDelay: Infinity
  });

  let lineNumber = 0;
  let currentExchange: {
    userMessage: string;
    userLine: number;
    assistantMessages: string[];
    lastAssistantLine: number;
    timestamp: string;
  } | null = null;

  const finalizeExchange = () => {
    if (currentExchange && currentExchange.assistantMessages.length > 0) {
      const exchange: ConversationExchange = {
        id: crypto
          .createHash('md5')
          .update(`${archivePath}:${currentExchange.userLine}-${currentExchange.lastAssistantLine}`)
          .digest('hex'),
        project: projectName,
        timestamp: currentExchange.timestamp,
        userMessage: currentExchange.userMessage,
        assistantMessage: currentExchange.assistantMessages.join('\n\n'),
        archivePath,
        lineStart: currentExchange.userLine,
        lineEnd: currentExchange.lastAssistantLine
      };
      exchanges.push(exchange);
    }
  };

  for await (const line of rl) {
    lineNumber++;

    try {
      const parsed: JSONLMessage = JSON.parse(line);

      // Skip non-message types
      if (parsed.type !== 'user' && parsed.type !== 'assistant') {
        continue;
      }

      if (!parsed.message) {
        continue;
      }

      // Extract text from message content
      let text = '';
      if (typeof parsed.message.content === 'string') {
        text = parsed.message.content;
      } else if (Array.isArray(parsed.message.content)) {
        text = parsed.message.content
          .filter(block => block.type === 'text' && block.text)
          .map(block => block.text)
          .join('\n');
      }

      // Skip empty messages
      if (!text.trim()) {
        continue;
      }

      if (parsed.message.role === 'user') {
        // Finalize previous exchange before starting new one
        finalizeExchange();

        // Start new exchange
        currentExchange = {
          userMessage: text,
          userLine: lineNumber,
          assistantMessages: [],
          lastAssistantLine: lineNumber,
          timestamp: parsed.timestamp || new Date().toISOString()
        };
      } else if (parsed.message.role === 'assistant' && currentExchange) {
        // Accumulate assistant messages
        currentExchange.assistantMessages.push(text);
        currentExchange.lastAssistantLine = lineNumber;
        // Update timestamp to last assistant message
        if (parsed.timestamp) {
          currentExchange.timestamp = parsed.timestamp;
        }
      }
    } catch (error) {
      // Skip malformed JSON lines
      continue;
    }
  }

  // Finalize last exchange
  finalizeExchange();

  return exchanges;
}
@@ -0,0 +1,109 @@
import { describe, it, expect } from 'vitest';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

describe('search-agent template', () => {
  const templatePath = path.join(__dirname, '..', 'prompts', 'search-agent.md');

  it('exists at expected location', () => {
    expect(fs.existsSync(templatePath)).toBe(true);
  });

  it('contains required placeholders', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for all required placeholders
    expect(content).toContain('{TOPIC}');
    expect(content).toContain('{SEARCH_QUERY}');
    expect(content).toContain('{FOCUS_AREAS}');
  });

  it('contains required output sections', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for required output format sections
    expect(content).toContain('### Summary');
    expect(content).toContain('### Sources');
    expect(content).toContain('### For Follow-Up');
  });

  it('specifies word count requirements', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Should specify 200-1000 words for synthesis
    expect(content).toMatch(/200-1000 words/);
    expect(content).toMatch(/max 1000 words/);
  });

  it('includes source metadata requirements', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for source metadata fields
    expect(content).toContain('project-name');
    expect(content).toContain('YYYY-MM-DD');
    expect(content).toContain('% match');
    expect(content).toContain('Conversation summary:');
    expect(content).toContain('File:');
    expect(content).toContain('Status:');
    expect(content).toContain('Read in detail');
    expect(content).toContain('Reviewed summary only');
    expect(content).toContain('Skimmed');
  });

  it('provides search command', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Should include the search command
    expect(content).toContain('~/.claude/skills/collaboration/remembering-conversations/tool/search-conversations');
  });

  it('includes critical rules', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for DO and DO NOT sections
    expect(content).toContain('## Critical Rules');
    expect(content).toContain('**DO:**');
    expect(content).toContain('**DO NOT:**');
  });

  it('includes complete example output', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check example has all required components
    expect(content).toContain('## Example Output');

    // Example should show Summary, Sources, and For Follow-Up
    const exampleSection = content.substring(content.indexOf('## Example Output'));
    expect(exampleSection).toContain('### Summary');
    expect(exampleSection).toContain('### Sources');
|
||||
expect(exampleSection).toContain('### For Follow-Up');
|
||||
|
||||
// Example should show specific details
|
||||
expect(exampleSection).toContain('react-router-7-starter');
|
||||
expect(exampleSection).toContain('92% match');
|
||||
expect(exampleSection).toContain('.jsonl');
|
||||
});
|
||||
|
||||
it('emphasizes synthesis over raw excerpts', () => {
|
||||
const content = fs.readFileSync(templatePath, 'utf-8');
|
||||
|
||||
// Should explicitly discourage raw conversation excerpts
|
||||
expect(content).toContain('synthesize');
|
||||
expect(content).toContain('raw conversation excerpts');
|
||||
expect(content).toContain('synthesize instead');
|
||||
});
|
||||
|
||||
it('provides follow-up options', () => {
|
||||
const content = fs.readFileSync(templatePath, 'utf-8');
|
||||
|
||||
// Should explain how main agent can follow up
|
||||
expect(content).toContain('Main agent can:');
|
||||
expect(content).toContain('dig deeper');
|
||||
expect(content).toContain('refined query');
|
||||
expect(content).toContain('context bloat');
|
||||
});
|
||||
});
|
||||
@@ -0,0 +1,28 @@
import { searchConversations, formatResults, SearchOptions } from './search.js';

const query = process.argv[2];
const mode = (process.argv[3] || 'vector') as 'vector' | 'text' | 'both';
const limit = parseInt(process.argv[4] || '10');
const after = process.argv[5] || undefined;
const before = process.argv[6] || undefined;

if (!query) {
  console.error('Usage: search-conversations <query> [mode] [limit] [after] [before]');
  process.exit(1);
}

const options: SearchOptions = {
  mode,
  limit,
  after,
  before
};

searchConversations(query, options)
  .then(results => {
    console.log(formatResults(results));
  })
  .catch(error => {
    console.error('Error searching:', error);
    process.exit(1);
  });
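The positional-argument defaults in this CLI can be exercised in isolation. The sketch below mirrors that logic in a standalone `parseArgs` helper (a hypothetical name, not part of this commit), taking the arguments after the script name:

```typescript
// Hypothetical helper mirroring the CLI's positional-argument defaults.
function parseArgs(argv: string[]) {
  return {
    query: argv[0],
    mode: (argv[1] || 'vector') as 'vector' | 'text' | 'both',
    limit: parseInt(argv[2] || '10', 10),
    after: argv[3] || undefined,
    before: argv[4] || undefined
  };
}

// With only a query given, mode falls back to 'vector' and limit to 10.
console.log(parseArgs(['auth bug']));
```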
@@ -0,0 +1,173 @@
import Database from 'better-sqlite3';
import { initDatabase } from './db.js';
import { initEmbeddings, generateEmbedding } from './embeddings.js';
import { SearchResult, ConversationExchange } from './types.js';
import fs from 'fs';

export interface SearchOptions {
  limit?: number;
  mode?: 'vector' | 'text' | 'both';
  after?: string;  // ISO date string
  before?: string; // ISO date string
}

function validateISODate(dateStr: string, paramName: string): void {
  const isoDateRegex = /^\d{4}-\d{2}-\d{2}$/;
  if (!isoDateRegex.test(dateStr)) {
    throw new Error(`Invalid ${paramName} date: "${dateStr}". Expected YYYY-MM-DD format (e.g., 2025-10-01)`);
  }
  // Verify it's actually a valid date
  const date = new Date(dateStr);
  if (isNaN(date.getTime())) {
    throw new Error(`Invalid ${paramName} date: "${dateStr}". Not a valid calendar date.`);
  }
}

export async function searchConversations(
  query: string,
  options: SearchOptions = {}
): Promise<SearchResult[]> {
  const { limit = 10, mode = 'vector', after, before } = options;

  // Validate date parameters
  if (after) validateISODate(after, '--after');
  if (before) validateISODate(before, '--before');

  const db = initDatabase();

  let results: any[] = [];

  // Build time filter clause
  const timeFilter = [];
  if (after) timeFilter.push(`e.timestamp >= '${after}'`);
  if (before) timeFilter.push(`e.timestamp <= '${before}'`);
  const timeClause = timeFilter.length > 0 ? `AND ${timeFilter.join(' AND ')}` : '';

  if (mode === 'vector' || mode === 'both') {
    // Vector similarity search
    await initEmbeddings();
    const queryEmbedding = await generateEmbedding(query);

    const stmt = db.prepare(`
      SELECT
        e.id,
        e.project,
        e.timestamp,
        e.user_message,
        e.assistant_message,
        e.archive_path,
        e.line_start,
        e.line_end,
        vec.distance
      FROM vec_exchanges AS vec
      JOIN exchanges AS e ON vec.id = e.id
      WHERE vec.embedding MATCH ?
        AND k = ?
        ${timeClause}
      ORDER BY vec.distance ASC
    `);

    results = stmt.all(
      Buffer.from(new Float32Array(queryEmbedding).buffer),
      limit
    );
  }

  if (mode === 'text' || mode === 'both') {
    // Text search
    const textStmt = db.prepare(`
      SELECT
        e.id,
        e.project,
        e.timestamp,
        e.user_message,
        e.assistant_message,
        e.archive_path,
        e.line_start,
        e.line_end,
        0 as distance
      FROM exchanges AS e
      WHERE (e.user_message LIKE ? OR e.assistant_message LIKE ?)
        ${timeClause}
      ORDER BY e.timestamp DESC
      LIMIT ?
    `);

    const textResults = textStmt.all(`%${query}%`, `%${query}%`, limit);

    if (mode === 'both') {
      // Merge and deduplicate by ID
      const seenIds = new Set(results.map(r => r.id));
      for (const textResult of textResults) {
        if (!seenIds.has(textResult.id)) {
          results.push(textResult);
        }
      }
    } else {
      results = textResults;
    }
  }

  db.close();

  return results.map((row: any) => {
    const exchange: ConversationExchange = {
      id: row.id,
      project: row.project,
      timestamp: row.timestamp,
      userMessage: row.user_message,
      assistantMessage: row.assistant_message,
      archivePath: row.archive_path,
      lineStart: row.line_start,
      lineEnd: row.line_end
    };

    // Try to load summary if available
    const summaryPath = row.archive_path.replace('.jsonl', '-summary.txt');
    let summary: string | undefined;
    if (fs.existsSync(summaryPath)) {
      summary = fs.readFileSync(summaryPath, 'utf-8').trim();
    }

    // Create snippet (first 200 chars)
    const snippet = exchange.userMessage.substring(0, 200) +
      (exchange.userMessage.length > 200 ? '...' : '');

    return {
      exchange,
      similarity: mode === 'text' ? undefined : 1 - row.distance,
      snippet,
      summary
    } as SearchResult & { summary?: string };
  });
}

export function formatResults(results: Array<SearchResult & { summary?: string }>): string {
  if (results.length === 0) {
    return 'No results found.';
  }

  let output = `Found ${results.length} relevant conversations:\n\n`;

  results.forEach((result, index) => {
    const date = new Date(result.exchange.timestamp).toISOString().split('T')[0];
    output += `${index + 1}. [${result.exchange.project}, ${date}]\n`;

    // Show conversation summary if available
    if (result.summary) {
      output += `   ${result.summary}\n\n`;
    }

    // Show match with similarity percentage
    if (result.similarity !== undefined) {
      const pct = Math.round(result.similarity * 100);
      output += `   ${pct}% match: "${result.snippet}"\n`;
    } else {
      output += `   Match: "${result.snippet}"\n`;
    }

    output += `   ${result.exchange.archivePath}:${result.exchange.lineStart}-${result.exchange.lineEnd}\n\n`;
  });

  return output;
}
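As a quick sanity check of the date validation used for `--after`/`--before`, the behavior can be reproduced standalone (the function body is copied here so the sketch runs on its own):

```typescript
// Copy of the validation logic: a YYYY-MM-DD shape check plus a calendar check.
function validateISODate(dateStr: string, paramName: string): void {
  if (!/^\d{4}-\d{2}-\d{2}$/.test(dateStr)) {
    throw new Error(`Invalid ${paramName} date: "${dateStr}". Expected YYYY-MM-DD format`);
  }
  if (isNaN(new Date(dateStr).getTime())) {
    throw new Error(`Invalid ${paramName} date: "${dateStr}". Not a valid calendar date.`);
  }
}

validateISODate('2025-10-01', '--after'); // passes silently

const rejects = (s: string) => {
  try { validateISODate(s, '--after'); return false; } catch { return true; }
};
console.log(rejects('10/01/2025')); // true: wrong shape
console.log(rejects('2025-13-40')); // true: matches the shape but is not a real date
```

Note the two-stage check: the regex alone would accept strings like `2025-13-40`, which `new Date` then rejects as an invalid calendar date.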
@@ -0,0 +1,155 @@
import { ConversationExchange } from './types.js';
import { query } from '@anthropic-ai/claude-agent-sdk';

export function formatConversationText(exchanges: ConversationExchange[]): string {
  return exchanges.map(ex => {
    return `User: ${ex.userMessage}\n\nAgent: ${ex.assistantMessage}`;
  }).join('\n\n---\n\n');
}

function extractSummary(text: string): string {
  const match = text.match(/<summary>(.*?)<\/summary>/s);
  if (match) {
    return match[1].trim();
  }
  // Fallback if no tags found
  return text.trim();
}

async function callClaude(prompt: string, useSonnet = false): Promise<string> {
  const model = useSonnet ? 'sonnet' : 'haiku';

  for await (const message of query({
    prompt,
    options: {
      model,
      maxTokens: 4096,
      maxThinkingTokens: 0, // Disable extended thinking
      systemPrompt: 'Write concise, factual summaries. Output ONLY the summary - no preamble, no "Here is", no "I will". Your output will be indexed directly.'
    }
  })) {
    if (message && typeof message === 'object' && 'type' in message && message.type === 'result') {
      const result = (message as any).result;

      // Check if result is an API error (SDK returns errors as result strings)
      if (typeof result === 'string' && result.includes('API Error') && result.includes('thinking.budget_tokens')) {
        if (!useSonnet) {
          console.log(`  Haiku hit thinking budget error, retrying with Sonnet`);
          return await callClaude(prompt, true);
        }
        // If Sonnet also fails, return error message
        return result;
      }

      return result;
    }
  }
  return '';
}

function chunkExchanges(exchanges: ConversationExchange[], chunkSize: number): ConversationExchange[][] {
  const chunks: ConversationExchange[][] = [];
  for (let i = 0; i < exchanges.length; i += chunkSize) {
    chunks.push(exchanges.slice(i, i + chunkSize));
  }
  return chunks;
}

export async function summarizeConversation(exchanges: ConversationExchange[]): Promise<string> {
  // Handle trivial conversations
  if (exchanges.length === 0) {
    return 'Trivial conversation with no substantive content.';
  }

  if (exchanges.length === 1) {
    const text = formatConversationText(exchanges);
    if (text.length < 100 || exchanges[0].userMessage.trim() === '/exit') {
      return 'Trivial conversation with no substantive content.';
    }
  }

  // For short conversations (≤15 exchanges), summarize directly
  if (exchanges.length <= 15) {
    const conversationText = formatConversationText(exchanges);
    const prompt = `Context: This summary will be shown in a list to help users and Claude choose which conversations are relevant to a future activity.

Summarize what happened in 2-4 sentences. Be factual and specific. Output in <summary></summary> tags.

Include:
- What was built/changed/discussed (be specific)
- Key technical decisions or approaches
- Problems solved or current state

Exclude:
- Apologies, meta-commentary, or your questions
- Raw logs or debug output
- Generic descriptions - focus on what makes THIS conversation unique

Good:
<summary>Built JWT authentication for React app with refresh tokens and protected routes. Fixed token expiration bug by implementing refresh-during-request logic.</summary>

Bad:
<summary>I apologize. The conversation discussed authentication and various approaches were considered...</summary>

${conversationText}`;

    const result = await callClaude(prompt);
    return extractSummary(result);
  }

  // For long conversations, use hierarchical summarization
  console.log(`  Long conversation (${exchanges.length} exchanges) - using hierarchical summarization`);

  // Chunk into groups of 8 exchanges
  const chunks = chunkExchanges(exchanges, 8);
  console.log(`  Split into ${chunks.length} chunks`);

  // Summarize each chunk
  const chunkSummaries: string[] = [];
  for (let i = 0; i < chunks.length; i++) {
    const chunkText = formatConversationText(chunks[i]);
    const prompt = `Summarize this part of a conversation in 2-3 sentences. What happened, what was built/discussed. Use <summary></summary> tags.

${chunkText}

Example: <summary>Implemented HID keyboard functionality for ESP32. Hit Bluetooth controller initialization error, fixed by adjusting memory allocation.</summary>`;

    try {
      const summary = await callClaude(prompt);
      const extracted = extractSummary(summary);
      chunkSummaries.push(extracted);
      console.log(`  Chunk ${i + 1}/${chunks.length}: ${extracted.split(/\s+/).length} words`);
    } catch (error) {
      console.log(`  Chunk ${i + 1} failed, skipping`);
    }
  }

  if (chunkSummaries.length === 0) {
    return 'Error: Unable to summarize conversation.';
  }

  // Synthesize chunks into final summary
  const synthesisPrompt = `Context: This summary will be shown in a list to help users and Claude choose which past conversations are relevant to a future activity.

Synthesize these part-summaries into one cohesive paragraph. Focus on what was accomplished and any notable technical decisions or challenges. Output in <summary></summary> tags.

Part summaries:
${chunkSummaries.map((s, i) => `${i + 1}. ${s}`).join('\n')}

Good:
<summary>Built conversation search system with JavaScript, sqlite-vec, and local embeddings. Implemented hierarchical summarization for long conversations. System archives conversations permanently and provides semantic search via CLI.</summary>

Bad:
<summary>This conversation synthesizes several topics discussed across multiple parts...</summary>

Your summary (max 200 words):`;

  console.log(`  Synthesizing final summary...`);
  try {
    const result = await callClaude(synthesisPrompt);
    return extractSummary(result);
  } catch (error) {
    console.log(`  Synthesis failed, using chunk summaries`);
    return chunkSummaries.join(' ');
  }
}
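The chunk size of 8 used above means, for example, that a 20-exchange conversation is summarized as three chunks of sizes 8, 8, and 4. A standalone sketch of the same slicing (written generically so it runs without the `ConversationExchange` interface):

```typescript
// Same slicing as chunkExchanges, generic over element type.
function chunk<T>(items: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}

// 20 exchanges -> chunks of sizes 8, 8, 4; only the last chunk may be short.
const sizes = chunk(Array.from({ length: 20 }, (_, i) => i), 8).map(c => c.length);
console.log(sizes); // [ 8, 8, 4 ]
```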
@@ -0,0 +1,16 @@
export interface ConversationExchange {
  id: string;
  project: string;
  timestamp: string;
  userMessage: string;
  assistantMessage: string;
  archivePath: string;
  lineStart: number;
  lineEnd: number;
}

export interface SearchResult {
  exchange: ConversationExchange;
  similarity: number;
  snippet: string;
}
@@ -0,0 +1,278 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { verifyIndex, repairIndex, VerificationResult } from './verify.js';
import fs from 'fs';
import path from 'path';
import os from 'os';
import { initDatabase, insertExchange } from './db.js';
import { ConversationExchange } from './types.js';

describe('verifyIndex', () => {
  const testDir = path.join(os.tmpdir(), 'conversation-search-test-' + Date.now());
  const projectsDir = path.join(testDir, '.claude', 'projects');
  const archiveDir = path.join(testDir, '.clank', 'conversation-archive');
  const dbPath = path.join(testDir, '.clank', 'conversation-index', 'db.sqlite');

  beforeEach(() => {
    // Create test directories
    fs.mkdirSync(path.join(testDir, '.clank', 'conversation-index'), { recursive: true });
    fs.mkdirSync(projectsDir, { recursive: true });
    fs.mkdirSync(archiveDir, { recursive: true });

    // Override environment paths for testing
    process.env.TEST_PROJECTS_DIR = projectsDir;
    process.env.TEST_ARCHIVE_DIR = archiveDir;
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    // Clean up test directory
    fs.rmSync(testDir, { recursive: true, force: true });
    delete process.env.TEST_PROJECTS_DIR;
    delete process.env.TEST_ARCHIVE_DIR;
    delete process.env.TEST_DB_PATH;
  });

  it('detects missing summaries', async () => {
    // Create a test conversation file without a summary
    const projectArchive = path.join(archiveDir, 'test-project');
    fs.mkdirSync(projectArchive, { recursive: true });

    const conversationPath = path.join(projectArchive, 'test-conversation.jsonl');

    // Create proper JSONL format (one JSON object per line)
    const messages = [
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'Hello' }, timestamp: '2024-01-01T00:00:00Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'Hi there!' }, timestamp: '2024-01-01T00:00:01Z' })
    ];
    fs.writeFileSync(conversationPath, messages.join('\n'));

    const result = await verifyIndex();

    expect(result.missing.length).toBe(1);
    expect(result.missing[0].path).toBe(conversationPath);
    expect(result.missing[0].reason).toBe('No summary file');
  });

  it('detects orphaned database entries', async () => {
    // Initialize database
    const db = initDatabase();

    // Create an exchange in the database
    const exchange: ConversationExchange = {
      id: 'orphan-id-1',
      project: 'deleted-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'This conversation was deleted',
      assistantMessage: 'But still in database',
      archivePath: path.join(archiveDir, 'deleted-project', 'deleted.jsonl'),
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);
    db.close();

    // Verify detects orphaned entry (file doesn't exist)
    const result = await verifyIndex();

    expect(result.orphaned.length).toBe(1);
    expect(result.orphaned[0].uuid).toBe('orphan-id-1');
    expect(result.orphaned[0].path).toBe(exchange.archivePath);
  });

  it('detects outdated files (file modified after last_indexed)', async () => {
    // Create conversation file with summary
    const projectArchive = path.join(archiveDir, 'test-project');
    fs.mkdirSync(projectArchive, { recursive: true });

    const conversationPath = path.join(projectArchive, 'updated-conversation.jsonl');
    const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');

    // Create initial conversation
    const messages = [
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'Hello' }, timestamp: '2024-01-01T00:00:00Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'Hi there!' }, timestamp: '2024-01-01T00:00:01Z' })
    ];
    fs.writeFileSync(conversationPath, messages.join('\n'));
    fs.writeFileSync(summaryPath, 'Test summary');

    // Index it
    const db = initDatabase();
    const exchange: ConversationExchange = {
      id: 'updated-id-1',
      project: 'test-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'Hello',
      assistantMessage: 'Hi there!',
      archivePath: conversationPath,
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);

    // Get the last_indexed timestamp
    const row = db.prepare(`SELECT last_indexed FROM exchanges WHERE id = ?`).get('updated-id-1') as any;
    const lastIndexed = row.last_indexed;
    db.close();

    // Wait a bit, then modify the file
    await new Promise(resolve => setTimeout(resolve, 10));

    // Update the conversation file
    const updatedMessages = [
      ...messages,
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'New message' }, timestamp: '2024-01-01T00:00:02Z' })
    ];
    fs.writeFileSync(conversationPath, updatedMessages.join('\n'));

    // Verify detects outdated file
    const result = await verifyIndex();

    expect(result.outdated.length).toBe(1);
    expect(result.outdated[0].path).toBe(conversationPath);
    expect(result.outdated[0].dbTime).toBe(lastIndexed);
    expect(result.outdated[0].fileTime).toBeGreaterThan(lastIndexed);
  });

  // Note: Parser is resilient to malformed JSON - it skips bad lines
  // Corruption detection would require file system errors or permission issues
  // which are harder to test. Skipping for now as missing summaries is the
  // primary use case for verification.
});

describe('repairIndex', () => {
  const testDir = path.join(os.tmpdir(), 'conversation-repair-test-' + Date.now());
  const projectsDir = path.join(testDir, '.claude', 'projects');
  const archiveDir = path.join(testDir, '.clank', 'conversation-archive');
  const dbPath = path.join(testDir, '.clank', 'conversation-index', 'db.sqlite');

  beforeEach(() => {
    // Create test directories
    fs.mkdirSync(path.join(testDir, '.clank', 'conversation-index'), { recursive: true });
    fs.mkdirSync(projectsDir, { recursive: true });
    fs.mkdirSync(archiveDir, { recursive: true });

    // Override environment paths for testing
    process.env.TEST_PROJECTS_DIR = projectsDir;
    process.env.TEST_ARCHIVE_DIR = archiveDir;
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    // Clean up test directory
    fs.rmSync(testDir, { recursive: true, force: true });
    delete process.env.TEST_PROJECTS_DIR;
    delete process.env.TEST_ARCHIVE_DIR;
    delete process.env.TEST_DB_PATH;
  });

  it('deletes orphaned database entries during repair', async () => {
    // Initialize database with orphaned entry
    const db = initDatabase();

    const exchange: ConversationExchange = {
      id: 'orphan-repair-1',
      project: 'deleted-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'This conversation was deleted',
      assistantMessage: 'But still in database',
      archivePath: path.join(archiveDir, 'deleted-project', 'deleted.jsonl'),
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);
    db.close();

    // Verify it's there
    const dbBefore = initDatabase();
    const beforeCount = dbBefore.prepare(`SELECT COUNT(*) as count FROM exchanges WHERE id = ?`).get('orphan-repair-1') as { count: number };
    expect(beforeCount.count).toBe(1);
    dbBefore.close();

    // Run repair
    const issues = await verifyIndex();
    expect(issues.orphaned.length).toBe(1);
    await repairIndex(issues);

    // Verify it's gone
    const dbAfter = initDatabase();
    const afterCount = dbAfter.prepare(`SELECT COUNT(*) as count FROM exchanges WHERE id = ?`).get('orphan-repair-1') as { count: number };
    expect(afterCount.count).toBe(0);
    dbAfter.close();
  });

  it('re-indexes outdated files during repair', { timeout: 30000 }, async () => {
    // Create conversation file with summary
    const projectArchive = path.join(archiveDir, 'test-project');
    fs.mkdirSync(projectArchive, { recursive: true });

    const conversationPath = path.join(projectArchive, 'outdated-repair.jsonl');
    const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');

    // Create initial conversation
    const messages = [
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'Hello' }, timestamp: '2024-01-01T00:00:00Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'Hi there!' }, timestamp: '2024-01-01T00:00:01Z' })
    ];
    fs.writeFileSync(conversationPath, messages.join('\n'));
    fs.writeFileSync(summaryPath, 'Old summary');

    // Index it
    const db = initDatabase();
    const exchange: ConversationExchange = {
      id: 'outdated-repair-1',
      project: 'test-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'Hello',
      assistantMessage: 'Hi there!',
      archivePath: conversationPath,
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);

    // Get the last_indexed timestamp
    const beforeRow = db.prepare(`SELECT last_indexed FROM exchanges WHERE id = ?`).get('outdated-repair-1') as any;
    const beforeIndexed = beforeRow.last_indexed;
    db.close();

    // Wait a bit, then modify the file
    await new Promise(resolve => setTimeout(resolve, 10));

    // Update the conversation file (add new exchange)
    const updatedMessages = [
      ...messages,
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'New message' }, timestamp: '2024-01-01T00:00:02Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'New response' }, timestamp: '2024-01-01T00:00:03Z' })
    ];
    fs.writeFileSync(conversationPath, updatedMessages.join('\n'));

    // Verify detects outdated
    const issues = await verifyIndex();
    expect(issues.outdated.length).toBe(1);

    // Wait a bit to ensure different timestamp
    await new Promise(resolve => setTimeout(resolve, 10));

    // Run repair
    await repairIndex(issues);

    // Verify it was re-indexed with new timestamp
    const dbAfter = initDatabase();
    const afterRow = dbAfter.prepare(`SELECT MAX(last_indexed) as last_indexed FROM exchanges WHERE archive_path = ?`).get(conversationPath) as any;
    expect(afterRow.last_indexed).toBeGreaterThan(beforeIndexed);

    // Verify no longer outdated
    const verifyAfter = await verifyIndex();
    expect(verifyAfter.outdated.length).toBe(0);

    dbAfter.close();
  });
});
@@ -0,0 +1,182 @@
import fs from 'fs';
import path from 'path';
import os from 'os';
import { parseConversation } from './parser.js';
import { initDatabase, getAllExchanges, getFileLastIndexed } from './db.js';

export interface VerificationResult {
  missing: Array<{ path: string; reason: string }>;
  orphaned: Array<{ uuid: string; path: string }>;
  outdated: Array<{ path: string; fileTime: number; dbTime: number }>;
  corrupted: Array<{ path: string; error: string }>;
}

// Allow overriding paths for testing
function getArchiveDir(): string {
  return process.env.TEST_ARCHIVE_DIR || path.join(os.homedir(), '.clank', 'conversation-archive');
}

export async function verifyIndex(): Promise<VerificationResult> {
  const result: VerificationResult = {
    missing: [],
    orphaned: [],
    outdated: [],
    corrupted: []
  };

  const archiveDir = getArchiveDir();

  // Track all files we find
  const foundFiles = new Set<string>();

  // Find all conversation files
  if (!fs.existsSync(archiveDir)) {
    return result;
  }

  // Initialize database once for all checks
  const db = initDatabase();

  const projects = fs.readdirSync(archiveDir);
  let totalChecked = 0;

  for (const project of projects) {
    const projectPath = path.join(archiveDir, project);
    const stat = fs.statSync(projectPath);

    if (!stat.isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.endsWith('.jsonl'));

    for (const file of files) {
      totalChecked++;

      if (totalChecked % 100 === 0) {
        console.log(`  Checked ${totalChecked} conversations...`);
      }

      const conversationPath = path.join(projectPath, file);
      foundFiles.add(conversationPath);

      const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');

      // Check for missing summary
      if (!fs.existsSync(summaryPath)) {
        result.missing.push({ path: conversationPath, reason: 'No summary file' });
        continue;
      }

      // Check if file is outdated (modified after last_indexed)
      const lastIndexed = getFileLastIndexed(db, conversationPath);
      if (lastIndexed !== null) {
        const fileStat = fs.statSync(conversationPath);
        if (fileStat.mtimeMs > lastIndexed) {
          result.outdated.push({
            path: conversationPath,
            fileTime: fileStat.mtimeMs,
            dbTime: lastIndexed
          });
        }
      }

      // Try parsing to detect corruption
      try {
        await parseConversation(conversationPath, project, conversationPath);
      } catch (error) {
        result.corrupted.push({
          path: conversationPath,
          error: error instanceof Error ? error.message : String(error)
        });
      }
    }
  }

  console.log(`Verified ${totalChecked} conversations.`);

  // Check for orphaned database entries
  const dbExchanges = getAllExchanges(db);
  db.close();

  for (const exchange of dbExchanges) {
    if (!foundFiles.has(exchange.archivePath)) {
      result.orphaned.push({
        uuid: exchange.id,
        path: exchange.archivePath
      });
    }
  }

  return result;
}

export async function repairIndex(issues: VerificationResult): Promise<void> {
  console.log('Repairing index...');

  // To avoid circular dependencies, we import the indexer functions dynamically
  const { initDatabase, insertExchange, deleteExchange } = await import('./db.js');
  const { parseConversation } = await import('./parser.js');
  const { initEmbeddings, generateExchangeEmbedding } = await import('./embeddings.js');
  const { summarizeConversation } = await import('./summarizer.js');

  const db = initDatabase();
  await initEmbeddings();

  // Remove orphaned entries first
  for (const orphan of issues.orphaned) {
    console.log(`Removing orphaned entry: ${orphan.uuid}`);
    deleteExchange(db, orphan.uuid);
  }

  // Re-index missing and outdated conversations
  const toReindex = [
    ...issues.missing.map(m => m.path),
    ...issues.outdated.map(o => o.path)
  ];

  for (const conversationPath of toReindex) {
    console.log(`Re-indexing: ${conversationPath}`);
    try {
      // Extract project name from path
      const archiveDir = getArchiveDir();
      const relativePath = conversationPath.replace(archiveDir + path.sep, '');
      const project = relativePath.split(path.sep)[0];

      // Parse conversation
      const exchanges = await parseConversation(conversationPath, project, conversationPath);

      if (exchanges.length === 0) {
        console.log(`  Skipped (no exchanges)`);
        continue;
      }

      // Generate/update summary
      const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');
      const summary = await summarizeConversation(exchanges);
      fs.writeFileSync(summaryPath, summary, 'utf-8');
      console.log(`  Created summary: ${summary.split(/\s+/).length} words`);

      // Index exchanges
      for (const exchange of exchanges) {
        const embedding = await generateExchangeEmbedding(
          exchange.userMessage,
          exchange.assistantMessage
        );
        insertExchange(db, exchange, embedding);
      }

      console.log(`  Indexed ${exchanges.length} exchanges`);
|
||||
} catch (error) {
|
||||
console.error(`Failed to re-index ${conversationPath}:`, error);
|
||||
}
|
||||
}
|
||||
|
||||
db.close();
|
||||
|
||||
// Report corrupted files (manual intervention needed)
|
||||
if (issues.corrupted.length > 0) {
|
||||
console.log('\n⚠️ Corrupted files (manual review needed):');
|
||||
issues.corrupted.forEach(c => console.log(` ${c.path}: ${c.error}`));
|
||||
}
|
||||
|
||||
console.log('✅ Repair complete.');
|
||||
}
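Downstream, the deployment tests consume the `--verify` report by grepping for the per-category counters. A minimal runnable sketch of that check (the sample report lines are illustrative; the grep patterns match the ones used in the test script):

```bash
#!/bin/bash
# Decide whether a repair pass is needed from a verify report.
# The sample report is illustrative stand-in output.
verify_output="Missing summaries: 1
Outdated files: 0"

if echo "$verify_output" | grep -q "Missing summaries: 0"; then
  status="clean"
else
  status="repair needed"
fi
echo "$status"   # repair needed
```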
374
skills/collaboration/remembering-conversations/tool/test-deployment.sh
Executable file
@@ -0,0 +1,374 @@
#!/bin/bash
# End-to-end deployment testing
# Tests all deployment scenarios from docs/plans/2025-10-07-deployment-plan.md

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
INSTALL_HOOK="$SCRIPT_DIR/install-hook"
INDEX_CONVERSATIONS="$SCRIPT_DIR/index-conversations"

# Test counter
TESTS_RUN=0
TESTS_PASSED=0

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Helper functions
setup_test() {
  TEST_DIR=$(mktemp -d)
  export HOME="$TEST_DIR"
  export TEST_PROJECTS_DIR="$TEST_DIR/.claude/projects"
  export TEST_ARCHIVE_DIR="$TEST_DIR/.clank/conversation-archive"
  export TEST_DB_PATH="$TEST_DIR/.clank/conversation-index/db.sqlite"

  mkdir -p "$HOME/.claude/hooks"
  mkdir -p "$TEST_PROJECTS_DIR"
  mkdir -p "$TEST_ARCHIVE_DIR"
  mkdir -p "$TEST_DIR/.clank/conversation-index"
}

cleanup_test() {
  if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
    rm -rf "$TEST_DIR"
  fi
  unset TEST_PROJECTS_DIR
  unset TEST_ARCHIVE_DIR
  unset TEST_DB_PATH
}

assert_file_exists() {
  if [ ! -f "$1" ]; then
    echo -e "${RED}❌ FAIL: File does not exist: $1${NC}"
    return 1
  fi
  return 0
}

assert_file_executable() {
  if [ ! -x "$1" ]; then
    echo -e "${RED}❌ FAIL: File is not executable: $1${NC}"
    return 1
  fi
  return 0
}

assert_file_contains() {
  if ! grep -q "$2" "$1"; then
    echo -e "${RED}❌ FAIL: File $1 does not contain: $2${NC}"
    return 1
  fi
  return 0
}

assert_summary_exists() {
  local jsonl_file="$1"

  # If file is in projects dir, convert to archive path
  if [[ "$jsonl_file" == *"/.claude/projects/"* ]]; then
    jsonl_file=$(echo "$jsonl_file" | sed "s|/.claude/projects/|/.clank/conversation-archive/|")
  fi

  local summary_file="${jsonl_file%.jsonl}-summary.txt"
  if [ ! -f "$summary_file" ]; then
    echo -e "${RED}❌ FAIL: Summary does not exist: $summary_file${NC}"
    return 1
  fi
  return 0
}

create_test_conversation() {
  local project="$1"
  local uuid="${2:-test-$(date +%s)}"

  mkdir -p "$TEST_PROJECTS_DIR/$project"
  local conv_file="$TEST_PROJECTS_DIR/$project/${uuid}.jsonl"

  cat > "$conv_file" <<'EOF'
{"type":"user","message":{"role":"user","content":"What is TDD?"},"timestamp":"2024-01-01T00:00:00Z"}
{"type":"assistant","message":{"role":"assistant","content":"TDD stands for Test-Driven Development. You write tests first."},"timestamp":"2024-01-01T00:00:01Z"}
EOF

  echo "$conv_file"
}

run_test() {
  local test_name="$1"
  local test_func="$2"

  TESTS_RUN=$((TESTS_RUN + 1))
  echo -e "\n${YELLOW}Running test: $test_name${NC}"

  setup_test

  if $test_func; then
    echo -e "${GREEN}✓ PASS: $test_name${NC}"
    TESTS_PASSED=$((TESTS_PASSED + 1))
  else
    echo -e "${RED}❌ FAIL: $test_name${NC}"
  fi

  cleanup_test
}

# ============================================================================
# Scenario 1: Fresh Installation
# ============================================================================

test_scenario_1_fresh_install() {
  echo "  1. Installing hook with no existing hook..."
  "$INSTALL_HOOK" > /dev/null 2>&1 || true

  assert_file_exists "$HOME/.claude/hooks/sessionEnd" || return 1
  assert_file_executable "$HOME/.claude/hooks/sessionEnd" || return 1

  echo "  2. Creating test conversation..."
  local conv_file=$(create_test_conversation "test-project" "conv-1")

  echo "  3. Indexing conversation..."
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  echo "  4. Verifying summary was created..."
  assert_summary_exists "$conv_file" || return 1

  echo "  5. Testing hook triggers indexing..."
  export SESSION_ID="hook-session-$(date +%s)"

  # Create conversation file with SESSION_ID in name
  mkdir -p "$TEST_PROJECTS_DIR/test-project"
  local new_conv="$TEST_PROJECTS_DIR/test-project/${SESSION_ID}.jsonl"
  cat > "$new_conv" <<'EOF'
{"type":"user","message":{"role":"user","content":"What is TDD?"},"timestamp":"2024-01-01T00:00:00Z"}
{"type":"assistant","message":{"role":"assistant","content":"TDD stands for Test-Driven Development. You write tests first."},"timestamp":"2024-01-01T00:00:01Z"}
EOF

  # Verify hook runs the index command (manually call indexer with --session)
  # In real environment, hook would do this automatically
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --session "$SESSION_ID" > /dev/null 2>&1

  echo "  6. Verifying session was indexed..."
  assert_summary_exists "$new_conv" || return 1

  echo "  7. Testing search functionality..."
  local search_result=$(cd "$SCRIPT_DIR" && "$SCRIPT_DIR/search-conversations" "TDD" 2>/dev/null || echo "")
  if [ -z "$search_result" ]; then
    echo -e "${RED}❌ Search returned no results${NC}"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 2: Existing Hook (merge)
# ============================================================================

test_scenario_2_existing_hook_merge() {
  echo "  1. Creating existing hook..."
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Existing hook
echo "Existing hook running"
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  echo "  2. Installing with merge option..."
  echo "m" | "$INSTALL_HOOK" > /dev/null 2>&1 || true

  echo "  3. Verifying backup created..."
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -lt 1 ]; then
    echo -e "${RED}❌ No backup created${NC}"
    return 1
  fi

  echo "  4. Verifying merge preserved existing content..."
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "Existing hook running" || return 1

  echo "  5. Verifying indexer was appended..."
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  echo "  6. Testing merged hook runs both parts..."
  local conv_file=$(create_test_conversation "merge-project" "merge-conv")
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  export SESSION_ID="merge-session-$(date +%s)"
  local hook_output=$("$HOME/.claude/hooks/sessionEnd" 2>&1)

  if ! echo "$hook_output" | grep -q "Existing hook running"; then
    echo -e "${RED}❌ Existing hook logic not executed${NC}"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 3: Recovery (verify/repair)
# ============================================================================

test_scenario_3_recovery_verify_repair() {
  echo "  1. Creating conversations and indexing..."
  local conv1=$(create_test_conversation "recovery-project" "conv-1")
  local conv2=$(create_test_conversation "recovery-project" "conv-2")

  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  echo "  2. Verifying summaries exist..."
  assert_summary_exists "$conv1" || return 1
  assert_summary_exists "$conv2" || return 1

  echo "  3. Deleting summary to simulate missing file..."
  # Delete from archive (where summaries are stored)
  local archive_conv1=$(echo "$conv1" | sed "s|/.claude/projects/|/.clank/conversation-archive/|")
  rm "${archive_conv1%.jsonl}-summary.txt"

  echo "  4. Running verify (should detect missing)..."
  local verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  if ! echo "$verify_output" | grep -q "Missing summaries: 1"; then
    echo -e "${RED}❌ Verify did not detect missing summary${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  echo "  5. Running repair..."
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --repair > /dev/null 2>&1

  echo "  6. Verifying summary was regenerated..."
  assert_summary_exists "$conv1" || return 1

  echo "  7. Running verify again (should be clean)..."
  verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  # Verify should report no missing issues
  if ! echo "$verify_output" | grep -q "Missing summaries: 0"; then
    echo -e "${RED}❌ Verify still reports missing issues after repair${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 4: Change Detection
# ============================================================================

test_scenario_4_change_detection() {
  echo "  1. Creating and indexing conversation..."
  local conv=$(create_test_conversation "change-project" "conv-1")

  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  echo "  2. Verifying initial index..."
  assert_summary_exists "$conv" || return 1

  echo "  3. Modifying conversation (adding exchange)..."
  # Wait to ensure different mtime
  sleep 1

  # Modify the archive file (that's what verify checks)
  local archive_conv=$(echo "$conv" | sed "s|/.claude/projects/|/.clank/conversation-archive/|")
  cat >> "$archive_conv" <<'EOF'
{"type":"user","message":{"role":"user","content":"Tell me more about TDD"},"timestamp":"2024-01-01T00:00:02Z"}
{"type":"assistant","message":{"role":"assistant","content":"TDD has three phases: Red, Green, Refactor."},"timestamp":"2024-01-01T00:00:03Z"}
EOF

  echo "  4. Running verify (should detect outdated)..."
  local verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  if ! echo "$verify_output" | grep -q "Outdated files: 1"; then
    echo -e "${RED}❌ Verify did not detect outdated file${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  echo "  5. Running repair (should re-index)..."
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --repair > /dev/null 2>&1

  echo "  6. Verifying conversation is up to date..."
  verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  if ! echo "$verify_output" | grep -q "Outdated files: 0"; then
    echo -e "${RED}❌ File still outdated after repair${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  echo "  7. Verifying new content is searchable..."
  local search_result=$(cd "$SCRIPT_DIR" && "$SCRIPT_DIR/search-conversations" "Red Green Refactor" 2>/dev/null || echo "")
  if [ -z "$search_result" ]; then
    echo -e "${RED}❌ New content not found in search${NC}"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 5: Subagent Workflow (Manual Testing Required)
# ============================================================================

test_scenario_5_subagent_workflow_docs() {
  echo "  This scenario requires manual testing with a live subagent."
  echo "  Automated checks:"

  echo "  1. Verifying search-agent template exists..."
  local template_file="$SCRIPT_DIR/prompts/search-agent.md"
  assert_file_exists "$template_file" || return 1

  echo "  2. Verifying template has required sections..."
  assert_file_contains "$template_file" "### Summary" || return 1
  assert_file_contains "$template_file" "### Sources" || return 1
  assert_file_contains "$template_file" "### For Follow-Up" || return 1

  echo ""
  echo -e "${YELLOW}  MANUAL TESTING REQUIRED:${NC}"
  echo "  To complete Scenario 5 testing:"
  echo "  1. Start a new Claude Code session"
  echo "  2. Ask about a past conversation topic"
  echo "  3. Dispatch subagent using: skills/collaboration/remembering-conversations/tool/prompts/search-agent.md"
  echo "  4. Verify synthesis is 200-1000 words"
  echo "  5. Verify all sources include: project, date, file path, status"
  echo "  6. Ask follow-up question to test iterative refinement"
  echo "  7. Verify no raw conversations loaded into main context"
  echo ""

  return 0
}

# ============================================================================
# Run All Tests
# ============================================================================

echo "=========================================="
echo "  End-to-End Deployment Testing"
echo "=========================================="
echo ""
echo "Testing deployment scenarios from:"
echo "  docs/plans/2025-10-07-deployment-plan.md"
echo ""

run_test "Scenario 1: Fresh Installation" test_scenario_1_fresh_install
run_test "Scenario 2: Existing Hook (merge)" test_scenario_2_existing_hook_merge
run_test "Scenario 3: Recovery (verify/repair)" test_scenario_3_recovery_verify_repair
run_test "Scenario 4: Change Detection" test_scenario_4_change_detection
run_test "Scenario 5: Subagent Workflow (docs check)" test_scenario_5_subagent_workflow_docs

echo ""
echo "=========================================="
echo -e "  Test Results: ${GREEN}$TESTS_PASSED${NC}/${TESTS_RUN} passed"
echo "=========================================="

if [ $TESTS_PASSED -eq $TESTS_RUN ]; then
  echo -e "${GREEN}✅ All tests passed!${NC}"
  exit 0
else
  echo -e "${RED}❌ Some tests failed${NC}"
  exit 1
fi
226
skills/collaboration/remembering-conversations/tool/test-install-hook.sh
Executable file
@@ -0,0 +1,226 @@
#!/bin/bash
# Test suite for install-hook script

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
INSTALL_HOOK="$SCRIPT_DIR/install-hook"

# Test counter
TESTS_RUN=0
TESTS_PASSED=0

# Helper functions
setup_test() {
  TEST_DIR=$(mktemp -d)
  export HOME="$TEST_DIR"
  mkdir -p "$HOME/.claude/hooks"
}

cleanup_test() {
  if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
    rm -rf "$TEST_DIR"
  fi
}

assert_file_exists() {
  if [ ! -f "$1" ]; then
    echo "❌ FAIL: File does not exist: $1"
    return 1
  fi
  return 0
}

assert_file_not_exists() {
  if [ -f "$1" ]; then
    echo "❌ FAIL: File should not exist: $1"
    return 1
  fi
  return 0
}

assert_file_executable() {
  if [ ! -x "$1" ]; then
    echo "❌ FAIL: File is not executable: $1"
    return 1
  fi
  return 0
}

assert_file_contains() {
  if ! grep -q "$2" "$1"; then
    echo "❌ FAIL: File $1 does not contain: $2"
    return 1
  fi
  return 0
}

run_test() {
  local test_name="$1"
  local test_func="$2"

  TESTS_RUN=$((TESTS_RUN + 1))
  echo "Running test: $test_name"

  setup_test

  if $test_func; then
    echo "✓ PASS: $test_name"
    TESTS_PASSED=$((TESTS_PASSED + 1))
  else
    echo "❌ FAIL: $test_name"
  fi

  cleanup_test
  echo ""
}

# Test 1: Fresh installation with no existing hook
test_fresh_installation() {
  # Run installer with no input (non-interactive fresh install)
  if [ ! -x "$INSTALL_HOOK" ]; then
    echo "❌ install-hook script not found or not executable"
    return 1
  fi

  # Run installer (tolerate non-zero exit in non-interactive mode)
  "$INSTALL_HOOK" 2>&1 || true

  # Verify hook was created
  assert_file_exists "$HOME/.claude/hooks/sessionEnd" || return 1

  # Verify hook is executable
  assert_file_executable "$HOME/.claude/hooks/sessionEnd" || return 1

  # Verify hook contains indexer reference
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  return 0
}

# Test 2: Merge with existing hook (user chooses merge)
test_merge_with_existing_hook() {
  # Create existing hook
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Existing hook content
echo "Existing hook running"
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  # Run installer and choose merge
  echo "m" | "$INSTALL_HOOK" 2>&1 || true

  # Verify backup was created
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -lt 1 ]; then
    echo "❌ No backup created"
    return 1
  fi

  # Verify original content is preserved
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "Existing hook running" || return 1

  # Verify indexer was appended
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  return 0
}

# Test 3: Replace with existing hook (user chooses replace)
test_replace_with_existing_hook() {
  # Create existing hook
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Old hook to be replaced
echo "Old hook"
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  # Run installer and choose replace
  echo "r" | "$INSTALL_HOOK" 2>&1 || true

  # Verify backup was created
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -lt 1 ]; then
    echo "❌ No backup created"
    return 1
  fi

  # Verify old content is gone
  if grep -q "Old hook" "$HOME/.claude/hooks/sessionEnd"; then
    echo "❌ Old hook content still present"
    return 1
  fi

  # Verify new hook contains indexer
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  return 0
}

# Test 4: Detection of already-installed indexer (idempotent)
test_already_installed_detection() {
  # Create hook with indexer already installed
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Auto-index conversations (remembering-conversations skill)
INDEXER="$HOME/.claude/skills/collaboration/remembering-conversations/tool/index-conversations"
if [ -n "$SESSION_ID" ] && [ -x "$INDEXER" ]; then
  "$INDEXER" --session "$SESSION_ID" > /dev/null 2>&1 &
fi
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  # Run installer - should detect and exit
  local output=$("$INSTALL_HOOK" 2>&1 || true)

  # Verify it detected existing installation
  if ! echo "$output" | grep -q "already installed"; then
    echo "❌ Did not detect existing installation"
    echo "Output: $output"
    return 1
  fi

  # Verify no backup was created (since nothing changed)
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -gt 0 ]; then
    echo "❌ Backup created when it shouldn't have been"
    return 1
  fi

  return 0
}

# Test 5: Executable permissions are set
test_executable_permissions() {
  # Run installer
  "$INSTALL_HOOK" 2>&1 || true

  # Verify hook is executable
  assert_file_executable "$HOME/.claude/hooks/sessionEnd" || return 1

  return 0
}

# Run all tests
echo "=========================================="
echo "Testing install-hook script"
echo "=========================================="
echo ""

run_test "Fresh installation with no existing hook" test_fresh_installation
run_test "Merge with existing hook" test_merge_with_existing_hook
run_test "Replace with existing hook" test_replace_with_existing_hook
run_test "Detection of already-installed indexer" test_already_installed_detection
run_test "Executable permissions are set" test_executable_permissions

echo "=========================================="
echo "Test Results: $TESTS_PASSED/$TESTS_RUN passed"
echo "=========================================="

if [ $TESTS_PASSED -eq $TESTS_RUN ]; then
  exit 0
else
  exit 1
fi
@@ -0,0 +1,14 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
107
skills/collaboration/requesting-code-review/SKILL.md
Normal file
@@ -0,0 +1,107 @@
---
name: Requesting Code Review
description: Dispatch code-reviewer subagent to review implementation against plan or requirements before proceeding
when_to_use: After completing a task. After major feature implementation. Before merging. When executing plans (after each task). When stuck and need fresh perspective.
version: 1.0.0
---

# Requesting Code Review

Dispatch code-reviewer subagent to catch issues before they cascade.

**Core principle:** Review early, review often.

## When to Request Review

**Mandatory:**
- After each task in subagent-driven development
- After completing major feature
- Before merge to main

**Optional but valuable:**
- When stuck (fresh perspective)
- Before refactoring (baseline check)
- After fixing complex bug

## How to Request

**1. Get git SHAs:**
```bash
BASE_SHA=$(git rev-parse HEAD~1)  # or origin/main
HEAD_SHA=$(git rev-parse HEAD)
```

**2. Dispatch code-reviewer subagent:**

Use Task tool with code-reviewer type, fill template at `code-reviewer.md`

**Placeholders:**
- `{WHAT_WAS_IMPLEMENTED}` - What you just built
- `{PLAN_OR_REQUIREMENTS}` - What it should do
- `{BASE_SHA}` - Starting commit
- `{HEAD_SHA}` - Ending commit
- `{DESCRIPTION}` - Brief summary
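The two steps above can be sketched together in shell. The placeholder names come from this skill; the inline template string is an illustrative stand-in for `code-reviewer.md`, and the hard-coded SHAs stand in for the `git rev-parse` output:

```bash
#!/bin/bash
# Fill reviewer-prompt placeholders. The placeholder names are from this
# skill; the inline template stands in for code-reviewer.md.
template='Review {WHAT_WAS_IMPLEMENTED} over {BASE_SHA}..{HEAD_SHA}'

BASE_SHA="a7981ec"   # normally: $(git rev-parse HEAD~1)
HEAD_SHA="3df7661"   # normally: $(git rev-parse HEAD)

# Bash pattern substitution fills each placeholder in turn.
prompt=${template//\{BASE_SHA\}/$BASE_SHA}
prompt=${prompt//\{HEAD_SHA\}/$HEAD_SHA}
prompt=${prompt//\{WHAT_WAS_IMPLEMENTED\}/verification functions}
echo "$prompt"   # Review verification functions over a7981ec..3df7661
```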
**3. Act on feedback:**
- Fix Critical issues immediately
- Fix Important issues before proceeding
- Note Minor issues for later
- Push back if reviewer is wrong (with reasoning)

## Example

```
[Just completed Task 2: Add verification function]

You: Let me request code review before proceeding.

BASE_SHA=$(git log --oneline | grep "Task 1" | head -1 | awk '{print $1}')
HEAD_SHA=$(git rev-parse HEAD)

[Dispatch code-reviewer subagent]
WHAT_WAS_IMPLEMENTED: Verification and repair functions for conversation index
PLAN_OR_REQUIREMENTS: Task 2 from docs/plans/deployment-plan.md
BASE_SHA: a7981ec
HEAD_SHA: 3df7661
DESCRIPTION: Added verifyIndex() and repairIndex() with 4 issue types

[Subagent returns]:
Strengths: Clean architecture, real tests
Issues:
  Important: Missing progress indicators
  Minor: Magic number (100) for reporting interval
Assessment: Ready to proceed

You: [Fix progress indicators]
[Continue to Task 3]
```

## Integration with Workflows

**Subagent-Driven Development:**
- Review after EACH task
- Catch issues before they compound
- Fix before moving to next task

**Executing Plans:**
- Review after each batch (3 tasks)
- Get feedback, apply, continue

**Ad-Hoc Development:**
- Review before merge
- Review when stuck

## Red Flags

**Never:**
- Skip review because "it's simple"
- Ignore Critical issues
- Proceed with unfixed Important issues
- Argue with valid technical feedback

**If reviewer wrong:**
- Push back with technical reasoning
- Show code/tests that prove it works
- Request clarification

See template at: skills/collaboration/requesting-code-review/code-reviewer.md
146
skills/collaboration/requesting-code-review/code-reviewer.md
Normal file
@@ -0,0 +1,146 @@
|
||||
# Code Review Agent
|
||||
|
||||
You are reviewing code changes for production readiness.
|
||||
|
||||
**Your task:**
|
||||
1. Review {WHAT_WAS_IMPLEMENTED}
|
||||
2. Compare against {PLAN_OR_REQUIREMENTS}
|
||||
3. Check code quality, architecture, testing
|
||||
4. Categorize issues by severity
|
||||
5. Assess production readiness
|
||||
|
||||
## What Was Implemented
|
||||
|
||||
{DESCRIPTION}
|
||||
|
||||
## Requirements/Plan
|
||||
|
||||
{PLAN_REFERENCE}
|
||||
|
||||
## Git Range to Review
|
||||
|
||||
**Base:** {BASE_SHA}
|
||||
**Head:** {HEAD_SHA}
|
||||
|
||||
```bash
|
||||
git diff --stat {BASE_SHA}..{HEAD_SHA}
|
||||
git diff {BASE_SHA}..{HEAD_SHA}
|
||||
```
|
||||
|
||||
## Review Checklist
|
||||
|
||||
**Code Quality:**
|
||||
- Clean separation of concerns?
|
||||
- Proper error handling?
|
||||
- Type safety (if applicable)?
|
||||
- DRY principle followed?
|
||||
- Edge cases handled?
|
||||
|
||||
**Architecture:**
|
||||
- Sound design decisions?
|
||||
- Scalability considerations?
|
||||
- Performance implications?
|
||||
- Security concerns?
|
||||
|
||||
**Testing:**
|
||||
- Tests actually test logic (not mocks)?
|
||||
- Edge cases covered?
|
||||
- Integration tests where needed?
|
||||
- All tests passing?
|
||||
|
||||
**Requirements:**
|
||||
- All plan requirements met?
|
||||
- Implementation matches spec?
|
||||
- No scope creep?
|
||||
- Breaking changes documented?
|
||||
|
||||
**Production Readiness:**
|
||||
- Migration strategy (if schema changes)?
|
||||
- Backward compatibility considered?
|
||||
- Documentation complete?
|
||||
- No obvious bugs?
|
||||
|
||||
## Output Format
|
||||
|
||||
### Strengths
|
||||
[What's well done? Be specific.]
|
||||
|
||||
### Issues
|
||||
|
||||
#### Critical (Must Fix)
|
||||
[Bugs, security issues, data loss risks, broken functionality]
|
||||
|
||||
#### Important (Should Fix)
|
||||
[Architecture problems, missing features, poor error handling, test gaps]
|
||||
|
||||
#### Minor (Nice to Have)
|
||||
[Code style, optimization opportunities, documentation improvements]
|
||||
|
||||
**For each issue:**
|
||||
- File:line reference
|
||||
- What's wrong
|
||||
- Why it matters
|
||||
- How to fix (if not obvious)
|
||||
|
||||
### Recommendations
|
||||
[Improvements for code quality, architecture, or process]
|
||||
|
||||
### Assessment
|
||||
|
||||
**Ready to merge?** [Yes/No/With fixes]
|
||||
|
||||
**Reasoning:** [Technical assessment in 1-2 sentences]
|
||||
|
||||
## Critical Rules
|
||||
|
||||
**DO:**
|
||||
- Categorize by actual severity (not everything is Critical)
|
||||
- Be specific (file:line, not vague)
|
||||
- Explain WHY issues matter
|
||||
- Acknowledge strengths
|
||||
- Give clear verdict
|
||||
|
||||
**DON'T:**
|
||||
- Say "looks good" without checking
|
||||
- Mark nitpicks as Critical
|
||||
- Give feedback on code you didn't review
|
||||
- Be vague ("improve error handling")
|
||||
- Avoid giving a clear verdict
|
||||
|
||||
## Example Output
|
||||
|
||||
```
### Strengths
- Clean database schema with proper migrations (db.ts:15-42)
- Comprehensive test coverage (18 tests, all edge cases)
- Good error handling with fallbacks (summarizer.ts:85-92)

### Issues

#### Important
1. **Missing help text in CLI wrapper**
   - File: index-conversations:1-31
   - Issue: No --help flag, users won't discover --concurrency
   - Fix: Add --help case with usage examples

2. **Date validation missing**
   - File: search.ts:25-27
   - Issue: Invalid dates silently return no results
   - Fix: Validate ISO format, throw error with example

#### Minor
1. **Progress indicators**
   - File: indexer.ts:130
   - Issue: No "X of Y" counter for long operations
   - Impact: Users don't know how long to wait

### Recommendations
- Add progress reporting for user experience
- Consider config file for excluded projects (portability)

### Assessment

**Ready to merge: With fixes**

**Reasoning:** Core implementation is solid with good architecture and tests. Important issues (help text, date validation) are easily fixed and don't affect core functionality.
```

188
skills/collaboration/subagent-driven-development/SKILL.md
Normal file
@@ -0,0 +1,188 @@
---
name: Subagent-Driven Development
description: Execute implementation plan by dispatching a fresh subagent for each task, with code review between tasks
when_to_use: Alternative to executing-plans when staying in the same session. When tasks are independent. When you want fast iteration with review checkpoints. After writing an implementation plan.
version: 1.0.0
---

# Subagent-Driven Development

Execute the plan by dispatching a fresh subagent per task, with code review after each.

**Core principle:** Fresh subagent per task + review between tasks = high quality, fast iteration.

## Overview

**vs. Executing Plans (parallel session):**
- Same session (no context switch)
- Fresh subagent per task (no context pollution)
- Code review after each task (catch issues early)
- Faster iteration (no human-in-the-loop between tasks)

**When to use:**
- Staying in this session
- Tasks are mostly independent
- Want continuous progress with quality gates

**When NOT to use:**
- Need to review the plan first (use executing-plans)
- Tasks are tightly coupled (manual execution is better)
- Plan needs revision (brainstorm first)

## The Process

### 1. Load Plan

Read the plan file and create a TodoWrite list with all tasks.

### 2. Execute Task with Subagent

For each task:

**Dispatch a fresh subagent:**
```
Task tool (general-purpose):
  description: "Implement Task N: [task name]"
  prompt: |
    You are implementing Task N from [plan-file].

    Read that task carefully. Your job is to:
    1. Implement exactly what the task specifies
    2. Write tests (following TDD if the task says to)
    3. Verify the implementation works
    4. Commit your work
    5. Report back

    Work from: [directory]

    Report: What you implemented, what you tested, test results, files changed, any issues
```

**The subagent reports back** with a summary of its work.

### 3. Review Subagent's Work

**Dispatch a code-reviewer subagent:**
```
Task tool (code-reviewer):
  Use template at skills/collaboration/requesting-code-review/code-reviewer.md

  WHAT_WAS_IMPLEMENTED: [from subagent's report]
  PLAN_OR_REQUIREMENTS: Task N from [plan-file]
  BASE_SHA: [commit before task]
  HEAD_SHA: [current commit]
  DESCRIPTION: [task summary]
```
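
BASE_SHA and HEAD_SHA are ordinary commit IDs captured before and after the task. A minimal sketch of how they could be recorded (assumes a git repository; `capture_sha` is an illustrative helper name, not part of the template):

```bash
# Illustrative helper: capture the current commit SHA at a checkpoint.
capture_sha() {
  git rev-parse HEAD
}

# Usage around a task (sketch):
#   base_sha=$(capture_sha)   # before dispatching the task subagent
#   ...subagent implements and commits its work...
#   head_sha=$(capture_sha)   # after the task completes
```

The pair `$base_sha..$head_sha` then scopes the review to exactly the commits the subagent produced.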

**Code reviewer returns:** Strengths, Issues (Critical/Important/Minor), Assessment.

### 4. Apply Review Feedback

**If issues are found:**
- Fix Critical issues immediately
- Fix Important issues before the next task
- Note Minor issues

**Dispatch a follow-up subagent if needed:**
```
"Fix issues from code review: [list issues]"
```

### 5. Mark Complete, Next Task

- Mark the task as completed in TodoWrite
- Move to the next task
- Repeat steps 2-5

### 6. Final Review

After all tasks are complete, dispatch a final code-reviewer that:
- Reviews the entire implementation
- Checks all plan requirements are met
- Validates the overall architecture

### 7. Complete Development

After the final review passes:
- Announce: "I'm using the Finishing a Development Branch skill to complete this work."
- Switch to skills/collaboration/finishing-a-development-branch
- Follow that skill to verify tests, present options, and execute the choice

## Example Workflow

```
You: I'm using Subagent-Driven Development to execute this plan.

[Load plan, create TodoWrite]

Task 1: Hook installation script

[Dispatch implementation subagent]
Subagent: Implemented install-hook with tests, 5/5 passing

[Get git SHAs, dispatch code-reviewer]
Reviewer: Strengths: Good test coverage. Issues: None. Ready.

[Mark Task 1 complete]

Task 2: Recovery modes

[Dispatch implementation subagent]
Subagent: Added verify/repair, 8/8 tests passing

[Dispatch code-reviewer]
Reviewer: Strengths: Solid. Issues (Important): Missing progress reporting

[Dispatch fix subagent]
Fix subagent: Added progress every 100 conversations

[Verify fix, mark Task 2 complete]

...

[After all tasks]
[Dispatch final code-reviewer]
Final reviewer: All requirements met, ready to merge

Done!
```

## Advantages

**vs. Manual execution:**
- Subagents follow TDD naturally
- Fresh context per task (no confusion)
- Parallel-safe (subagents don't interfere)

**vs. Executing Plans:**
- Same session (no handoff)
- Continuous progress (no waiting)
- Review checkpoints are automatic

**Cost:**
- More subagent invocations
- But catches issues early (cheaper than debugging later)

## Red Flags

**Never:**
- Skip code review between tasks
- Proceed with unfixed Critical issues
- Dispatch multiple implementation subagents in parallel (conflicts)
- Implement without reading the plan task

**If a subagent fails a task:**
- Dispatch a fix subagent with specific instructions
- Don't try to fix it manually (context pollution)

## Integration

**Pairs with:**
- skills/collaboration/writing-plans (creates the plan)
- skills/collaboration/requesting-code-review (review template)
- skills/testing/test-driven-development (subagents follow this)

**Alternative to:**
- skills/collaboration/executing-plans (parallel session)

See code-reviewer template: skills/collaboration/requesting-code-review/code-reviewer.md

215
skills/collaboration/using-git-worktrees/SKILL.md
Normal file
@@ -0,0 +1,215 @@
---
name: Using Git Worktrees
description: Create isolated git worktrees with smart directory selection and safety verification
when_to_use: When starting feature implementation in isolation. When brainstorming transitions to code. When you need a separate workspace without branch switching. Before executing implementation plans.
version: 1.0.0
---

# Using Git Worktrees

## Overview

Git worktrees create isolated workspaces sharing the same repository, allowing work on multiple branches simultaneously without switching.

**Core principle:** Systematic directory selection + safety verification = reliable isolation.

**Announce at start:** "I'm using the Using Git Worktrees skill to set up an isolated workspace."

## Directory Selection Process

Follow this priority order:

### 1. Check Existing Directories

```bash
# Check in priority order
ls -d .worktrees 2>/dev/null   # Preferred (hidden)
ls -d worktrees 2>/dev/null    # Alternative
```

**If found:** Use that directory. If both exist, `.worktrees` wins.

### 2. Check CLAUDE.md

```bash
grep -i "worktree.*director" CLAUDE.md 2>/dev/null
```

**If a preference is specified:** Use it without asking.

### 3. Ask User

If no directory exists and there is no CLAUDE.md preference:

```
No worktree directory found. Where should I create worktrees?

1. .worktrees/ (project-local, hidden)
2. ~/.clank-worktrees/<project-name>/ (global location)

Which would you prefer?
```

## Safety Verification

### For Project-Local Directories (.worktrees or worktrees)

**MUST verify .gitignore before creating the worktree:**

```bash
# Check if the directory pattern is in .gitignore
grep -q "^\.worktrees/$" .gitignore || grep -q "^worktrees/$" .gitignore
```

**If NOT in .gitignore:**

Per Jesse's rule "Fix broken things immediately":
1. Add the appropriate line to .gitignore
2. Commit the change
3. Proceed with worktree creation

**Why critical:** Prevents accidentally committing worktree contents to the repository.
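
The check-and-fix steps above can be sketched as one idempotent helper (a sketch; `ensure_ignored` is a hypothetical name, and the commented commit line assumes you are inside the repository):

```bash
# Append a pattern to .gitignore only if it is not already present,
# then commit the change so the fix lands immediately.
ensure_ignored() {
  local pattern="$1" gitignore="${2:-.gitignore}"
  if ! grep -qxF "$pattern" "$gitignore" 2>/dev/null; then
    echo "$pattern" >> "$gitignore"
    # git add "$gitignore" && git commit -m "chore: ignore worktree directory"
  fi
}
```

Usage: `ensure_ignored ".worktrees/"` before `git worktree add`. Running it twice adds the line only once.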

### For Global Directory (~/.clank-worktrees)

No .gitignore verification needed - it lives outside the project entirely.

## Creation Steps

### 1. Detect Project Name

```bash
project=$(basename "$(git rev-parse --show-toplevel)")
```

### 2. Create Worktree

```bash
# Determine the full path
case $LOCATION in
  .worktrees|worktrees)
    path="$LOCATION/$BRANCH_NAME"
    ;;
  "$HOME"/.clank-worktrees/*)
    # Use $HOME rather than a quoted ~, which would not expand
    path="$HOME/.clank-worktrees/$project/$BRANCH_NAME"
    ;;
esac

# Create the worktree with a new branch
git worktree add "$path" -b "$BRANCH_NAME"
cd "$path"
```

### 3. Run Project Setup

Auto-detect and run the appropriate setup:

```bash
# Node.js
if [ -f package.json ]; then npm install; fi

# Rust
if [ -f Cargo.toml ]; then cargo build; fi

# Python
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f pyproject.toml ]; then poetry install; fi

# Go
if [ -f go.mod ]; then go mod download; fi
```

### 4. Verify Clean Baseline

Run the tests to ensure the worktree starts clean:

```bash
# Examples - use the project-appropriate command
npm test
cargo test
pytest
go test ./...
```

**If tests fail:** Report the failures, ask whether to proceed or investigate.

**If tests pass:** Report ready.

### 5. Report Location

```
Worktree ready at <full-path>
Tests passing (<N> tests, 0 failures)
Ready to implement <feature-name>
```

## Quick Reference

| Situation | Action |
|-----------|--------|
| `.worktrees/` exists | Use it (verify .gitignore) |
| `worktrees/` exists | Use it (verify .gitignore) |
| Both exist | Use `.worktrees/` |
| Neither exists | Check CLAUDE.md → Ask user |
| Directory not in .gitignore | Add it immediately + commit |
| Tests fail during baseline | Report failures + ask |
| No package.json/Cargo.toml | Skip dependency install |

## Common Mistakes

**Skipping .gitignore verification**
- **Problem:** Worktree contents get tracked and pollute git status
- **Fix:** Always grep .gitignore before creating a project-local worktree

**Assuming directory location**
- **Problem:** Creates inconsistency, violates project conventions
- **Fix:** Follow the priority: existing > CLAUDE.md > ask

**Proceeding with failing tests**
- **Problem:** Can't distinguish new bugs from pre-existing issues
- **Fix:** Report the failures, get explicit permission to proceed

**Hardcoding setup commands**
- **Problem:** Breaks on projects using different tools
- **Fix:** Auto-detect from project files (package.json, etc.)

## Example Workflow

```
You: I'm using the Using Git Worktrees skill to set up an isolated workspace.

[Check .worktrees/ - exists]
[Verify .gitignore - contains .worktrees/]
[Create worktree: git worktree add .worktrees/auth -b feature/auth]
[Run npm install]
[Run npm test - 47 passing]

Worktree ready at /Users/jesse/myproject/.worktrees/auth
Tests passing (47 tests, 0 failures)
Ready to implement auth feature
```

## Red Flags

**Never:**
- Create a worktree without .gitignore verification (project-local)
- Skip baseline test verification
- Proceed with failing tests without asking
- Assume the directory location when ambiguous
- Skip the CLAUDE.md check

**Always:**
- Follow the directory priority: existing > CLAUDE.md > ask
- Verify .gitignore for project-local directories
- Auto-detect and run project setup
- Verify a clean test baseline

## Integration

**Called by:**
- skills/collaboration/brainstorming (Phase 4)
- Any skill needing an isolated workspace

**Pairs with:**
- skills/collaboration/finishing-a-development-branch (cleanup)
- skills/collaboration/executing-plans (work happens here)

118
skills/collaboration/writing-plans/SKILL.md
Normal file
@@ -0,0 +1,118 @@
---
name: Writing Plans
description: Create detailed implementation plans with bite-sized tasks for engineers with zero codebase context
when_to_use: After brainstorming/design is complete. Before implementation begins. When delegating to another developer or session. When the brainstorming skill hands off to planning.
version: 2.0.0
---

# Writing Plans

## Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer, but one who knows almost nothing about our toolset or problem domain. Assume they don't know good test design very well.

**Announce at start:** "I'm using the Writing Plans skill to create the implementation plan."

**Context:** This should be run in a dedicated worktree (created by the brainstorming skill).

**Save plans to:** `docs/plans/YYYY-MM-DD-<feature-name>.md`
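
The date prefix can be generated rather than typed by hand (a sketch; the `feature` slug is a hypothetical example):

```bash
# Build the plan path from today's date and a kebab-case feature slug.
feature="auth-redesign"   # hypothetical example slug
plan_path="docs/plans/$(date +%F)-${feature}.md"
echo "$plan_path"
```

`date +%F` emits `YYYY-MM-DD`, matching the required filename pattern.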

## Bite-Sized Task Granularity

**Each step is one action (2-5 minutes):**
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step

## Plan Document Header

**Every plan MUST start with this header:**

```markdown
# [Feature Name] Implementation Plan

> **For Claude:** Use `${CLAUDE_PLUGIN_ROOT}/skills/collaboration/executing-plans/SKILL.md` to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
```

## Task Structure

```markdown
### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run the test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run the test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
```

## Remember

- Exact file paths always
- Complete code in the plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits

## Execution Handoff

After saving the plan, offer the execution choice:

**"Plan complete and saved to `docs/plans/<filename>.md`. Two execution options:**

**1. Subagent-Driven (this session)** - I dispatch a fresh subagent per task, review between tasks, fast iteration

**2. Parallel Session (separate)** - Open a new session with executing-plans, batch execution with checkpoints

**Which approach?"**

**If Subagent-Driven is chosen:**
- Use skills/collaboration/subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review

**If Parallel Session is chosen:**
- Guide them to open a new session in the worktree
- New session uses skills/collaboration/executing-plans