mirror of https://github.com/obra/superpowers.git
synced 2026-05-01 14:49:06 +08:00

Initial commit: Superpowers plugin v1.0.0

Core skills library as Claude Code plugin:
- Testing skills: TDD, async testing, anti-patterns
- Debugging skills: Systematic debugging, root cause tracing
- Collaboration skills: Brainstorming, planning, code review
- Meta skills: Creating and testing skills

Features:
- SessionStart hook for context injection
- Skills-search tool for discovery
- Commands: /brainstorm, /write-plan, /execute-plan
- Data directory at ~/.superpowers/

skills/collaboration/remembering-conversations/DEPLOYMENT.md (329 lines, new file)
# Conversation Search Deployment Guide

Quick reference for deploying and maintaining the conversation indexing system.

## Initial Deployment

```bash
cd ~/.claude/skills/collaboration/remembering-conversations/tool

# 1. Install hook
./install-hook

# 2. Index existing conversations (with parallel summarization)
./index-conversations --cleanup --concurrency 8

# 3. Verify index health
./index-conversations --verify

# 4. Test search
./search-conversations "test query"
```

**Expected results:**
- Hook installed at `~/.claude/hooks/sessionEnd`
- Summaries created for all conversations (50-120 words each)
- Search returns relevant results in <1 second
- No verification errors

**Performance tip:** Use `--concurrency 8` or `--concurrency 16` for 8-16x faster summarization on initial indexing. The hook uses concurrency=1, which is safe for background runs.
## Ongoing Maintenance

### Automatic (No Action Required)

- Hook runs after every session ends
- New conversations indexed in background (<30 sec per conversation)
- Summaries generated automatically

### Weekly Health Check

```bash
cd ~/.claude/skills/collaboration/remembering-conversations/tool
./index-conversations --verify
```

If issues are found:
```bash
./index-conversations --repair
```
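To make the weekly check automatic, a crontab entry along these lines works. The schedule and log location are illustrative, not part of the tool — adjust to your install:

```
# m h dom mon dow  command
0 9 * * 1  cd ~/.claude/skills/collaboration/remembering-conversations/tool && ./index-conversations --verify >> ~/.clank/conversation-index/verify.log 2>&1
```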
### After System Changes

| Change | Action |
|--------|--------|
| Moved conversation archive | Update paths in code, run `--rebuild` |
| Updated CLAUDE.md | Run `--verify` to check for issues |
| Changed database schema | Backup DB, run `--rebuild` |
| Hook not running | Check executable: `chmod +x ~/.claude/hooks/sessionEnd` |

## Recovery Scenarios

| Issue | Diagnosis | Fix |
|-------|-----------|-----|
| **Missing summaries** | `--verify` shows "Missing summaries: N" | `--repair` regenerates missing summaries |
| **Orphaned DB entries** | `--verify` shows "Orphaned entries: N" | `--repair` removes orphaned entries |
| **Outdated indexes** | `--verify` shows "Outdated files: N" | `--repair` re-indexes modified files |
| **Corrupted database** | Errors during search/verify | `--rebuild` (re-indexes everything, requires confirmation) |
| **Hook not running** | No summaries for new conversations | See Troubleshooting below |
| **Slow indexing** | Takes >30 sec per conversation | Check API key, network, Haiku fallback in logs |
## Monitoring

### Health Checks

```bash
# Check hook installed and executable
ls -l ~/.claude/hooks/sessionEnd

# Check recent conversations
ls -lt ~/.clank/conversation-archive/*/*.jsonl | head -5

# Check database size
ls -lh ~/.clank/conversation-index/db.sqlite

# Full verification
./index-conversations --verify
```
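For scripting these checks, the hook test can be wrapped in a small helper. This is a sketch (the `check_hook` function name is ours; the real hook path is `~/.claude/hooks/sessionEnd` as documented above):

```shell
#!/bin/bash
# check_hook PATH -- prints a one-line status for the sessionEnd hook.
# Pure function of the filesystem, so it can be pointed at any path.
check_hook() {
  if [ -x "$1" ]; then
    echo "hook: OK"
  elif [ -f "$1" ]; then
    echo "hook: present but not executable (run chmod +x)"
  else
    echo "hook: MISSING"
  fi
}

check_hook "$HOME/.claude/hooks/sessionEnd"
```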
### Expected Behavior Metrics

- **Hook execution:** Within seconds of session end
- **Indexing speed:** <30 seconds per conversation
- **Summary length:** 50-120 words
- **Search latency:** <1 second
- **Verification:** 0 errors when healthy

### Log Output

Normal indexing:
```
Initializing database...
Loading embedding model...
Processing project: my-project (3 conversations)
Summary: 87 words
Indexed conversation.jsonl: 5 exchanges
✅ Indexing complete! Conversations: 3, Exchanges: 15
```

Verification with issues:
```
Verifying conversation index...
Verified 100 conversations.

=== Verification Results ===
Missing summaries: 2
Orphaned entries: 0
Outdated files: 1
Corrupted files: 0

Run with --repair to fix these issues.
```
## Troubleshooting

### Hook Not Running

**Symptoms:** New conversations not indexed automatically

**Diagnosis:**
```bash
# 1. Check hook exists and is executable
ls -l ~/.claude/hooks/sessionEnd
# Should show: -rwxr-xr-x ... sessionEnd

# 2. Check $SESSION_ID is set during sessions
echo $SESSION_ID
# Should show: session ID when in active session

# 3. Check indexer exists
ls -l ~/.claude/skills/collaboration/remembering-conversations/tool/index-conversations
# Should show: -rwxr-xr-x ... index-conversations

# 4. Test hook manually
SESSION_ID=test-$(date +%s) ~/.claude/hooks/sessionEnd
```

**Fix:**
```bash
# Make hook executable
chmod +x ~/.claude/hooks/sessionEnd

# Reinstall if needed
./install-hook
```
### Summaries Failing

**Symptoms:** Verify shows missing summaries, repair fails

**Diagnosis:**
```bash
# Check API key
echo $ANTHROPIC_API_KEY
# Should show: sk-ant-...

# Try manual indexing with logging
./index-conversations 2>&1 | tee index.log
grep -i error index.log
```

**Fix:**
```bash
# Set API key if missing
export ANTHROPIC_API_KEY="your-key-here"

# Check for rate limits (wait and retry)
sleep 60 && ./index-conversations --repair

# Fallback uses claude-3-haiku-20240307 (cheaper)
# Check logs for "Summary: N words" to confirm success
```
### Search Not Finding Results

**Symptoms:** `./search-conversations "query"` returns no results

**Diagnosis:**
```bash
# 1. Verify conversations indexed
./index-conversations --verify

# 2. Check database exists and has data
ls -lh ~/.clank/conversation-index/db.sqlite
# Should be > 100KB if conversations indexed

# 3. Try text search (exact match)
./search-conversations --text "exact phrase from conversation"

# 4. Check for corruption
sqlite3 ~/.clank/conversation-index/db.sqlite "SELECT COUNT(*) FROM exchanges;"
# Should show a number > 0
```

**Fix:**
```bash
# If database missing or corrupt
./index-conversations --rebuild

# If specific conversations missing
./index-conversations --repair

# If still failing, clear the embedding model cache
rm -rf ~/.cache/transformers  # Force re-download
./index-conversations
```
### Database Corruption

**Symptoms:** Errors like "database disk image is malformed"

**Fix:**
```bash
# 1. Backup current database
cp ~/.clank/conversation-index/db.sqlite ~/.clank/conversation-index/db.sqlite.backup

# 2. Rebuild from scratch
./index-conversations --rebuild
# Confirms with: "Are you sure? [yes/NO]:"
# Type: yes

# 3. Verify rebuild
./index-conversations --verify
```
## Commands Reference

```bash
# Index all conversations
./index-conversations

# Index specific session (called by hook)
./index-conversations --session <session-id>

# Index only unprocessed conversations
./index-conversations --cleanup

# Verify index health
./index-conversations --verify

# Repair issues found by verify
./index-conversations --repair

# Rebuild everything (with confirmation)
./index-conversations --rebuild

# Search conversations (semantic)
./search-conversations "query"

# Search conversations (text match)
./search-conversations --text "exact phrase"

# Install/reinstall hook
./install-hook
```
## Subagent Workflow

**For searching conversations from within Claude Code sessions**, use the subagent pattern (see `skills/getting-started` for the complete workflow).

**Template:** `tool/prompts/search-agent.md`

**Key requirements:**
- Synthesis must be 200-1000 words (Summary section)
- All sources must include: project, date, file path, status
- No raw conversation excerpts (synthesize instead)
- Follow-up via subagent (not direct file reads)

**Manual test checklist:**
1. ✓ Dispatch subagent with search template
2. ✓ Verify synthesis 200-1000 words
3. ✓ Verify all sources have metadata (project, date, path, status)
4. ✓ Ask follow-up → dispatch second subagent to dig deeper
5. ✓ Confirm no raw conversations in main context
## Files and Directories

```
~/.claude/
├── hooks/
│   └── sessionEnd                # Hook that triggers indexing
└── skills/collaboration/remembering-conversations/
    ├── SKILL.md                  # Main documentation
    ├── DEPLOYMENT.md             # This file
    └── tool/
        ├── index-conversations   # Main indexer
        ├── search-conversations  # Search interface
        ├── install-hook          # Hook installer
        ├── test-deployment.sh    # End-to-end tests
        ├── src/                  # TypeScript source
        └── prompts/
            └── search-agent.md   # Subagent template

~/.clank/
├── conversation-archive/         # Archived conversations
│   └── <project>/
│       ├── <uuid>.jsonl          # Conversation file
│       └── <uuid>-summary.txt    # AI summary (50-120 words)
└── conversation-index/
    └── db.sqlite                 # SQLite database with embeddings
```
## Deployment Checklist

### Initial Setup
- [ ] Hook installed: `./install-hook`
- [ ] Existing conversations indexed: `./index-conversations`
- [ ] Verification clean: `./index-conversations --verify`
- [ ] Search working: `./search-conversations "test"`
- [ ] Subagent template exists: `ls tool/prompts/search-agent.md`

### Ongoing
- [ ] Weekly: Run `--verify` and `--repair` if needed
- [ ] After system changes: Re-verify
- [ ] Monitor: Check hook runs (summaries appear for new conversations)

### Testing
- [ ] Run end-to-end tests: `./test-deployment.sh`
- [ ] All 5 scenarios pass
- [ ] Manual subagent test (see scenario 5 in test output)

skills/collaboration/remembering-conversations/INDEXING.md (133 lines, new file)
# Managing Conversation Index

Index, archive, and maintain conversations for search.

## Quick Start

**Install auto-indexing hook:**
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/install-hook
```

**Index all conversations:**
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/index-conversations
```

**Process unindexed only:**
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/index-conversations --cleanup
```

## Features

- **Automatic indexing** via sessionEnd hook (install once, forget)
- **Semantic search** across all past conversations
- **AI summaries** (Claude Haiku with Sonnet fallback)
- **Recovery modes** (verify, repair, rebuild)
- **Permanent archive** at `~/.clank/conversation-archive/`
## Setup

### 1. Install Hook (One-Time)

```bash
cd ~/.claude/skills/collaboration/remembering-conversations/tool
./install-hook
```

Handles existing hooks gracefully (merge or replace). Runs in the background after each session.

### 2. Index Existing Conversations

```bash
# Index everything
./index-conversations

# Or just unindexed (faster, cheaper)
./index-conversations --cleanup
```

## Index Modes

```bash
# Index all (first run or full rebuild)
./index-conversations

# Index specific session (used by hook)
./index-conversations --session <uuid>

# Process only unindexed (missing summaries)
./index-conversations --cleanup

# Check index health
./index-conversations --verify

# Fix detected issues
./index-conversations --repair

# Nuclear option (deletes DB, re-indexes everything)
./index-conversations --rebuild
```
## Recovery Scenarios

| Situation | Command |
|-----------|---------|
| Missed conversations | `--cleanup` |
| Hook didn't run | `--cleanup` |
| Updated conversation | `--verify` then `--repair` |
| Corrupted database | `--rebuild` |
| Index health check | `--verify` |
## Troubleshooting

**Hook not running:**
- Check: `ls -l ~/.claude/hooks/sessionEnd` (should be executable)
- Test: `SESSION_ID=test-$(date +%s) ~/.claude/hooks/sessionEnd`
- Reinstall: `./install-hook`

**Summaries failing:**
- Check API key: `echo $ANTHROPIC_API_KEY`
- Check logs in `~/.clank/conversation-index/`
- Try manual: `./index-conversations --session <uuid>`

**Search not finding results:**
- Verify indexed: `./index-conversations --verify`
- Try text search: `./search-conversations --text "exact phrase"`
- Rebuild if needed: `./index-conversations --rebuild`
## Excluding Projects

To exclude specific projects from indexing (e.g., meta-conversations), create
`~/.clank/conversation-index/exclude.txt`:
```
# One project name per line
# Lines starting with # are comments
-Users-yourname-Documents-some-project
```

Or set an environment variable:
```bash
export CONVERSATION_SEARCH_EXCLUDE_PROJECTS="project1,project2"
```
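A quick way to create the exclusion file from the shell, using a heredoc (the project name is the placeholder from the example above, not a real project):

```shell
#!/bin/bash
# Create the exclusion list at the documented path.
mkdir -p "$HOME/.clank/conversation-index"
cat > "$HOME/.clank/conversation-index/exclude.txt" <<'EOF'
# One project name per line; lines starting with # are comments
-Users-yourname-Documents-some-project
EOF
```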
## Storage

- **Archive:** `~/.clank/conversation-archive/<project>/<uuid>.jsonl`
- **Summaries:** `~/.clank/conversation-archive/<project>/<uuid>-summary.txt`
- **Database:** `~/.clank/conversation-index/db.sqlite`
- **Exclusions:** `~/.clank/conversation-index/exclude.txt` (optional)

## Technical Details

- **Embeddings:** @xenova/transformers (all-MiniLM-L6-v2, 384 dimensions, local/free)
- **Vector search:** sqlite-vec (local/free)
- **Summaries:** Claude Haiku with Sonnet fallback (~$0.01-0.02/conversation)
- **Parser:** Handles multi-message exchanges and sidechains
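Because the index is plain SQLite, it can be inspected with the `sqlite3` CLI. The snippet below is self-contained — it builds a throwaway database with a minimal `exchanges` table, since the real schema (beyond the `exchanges` table name used elsewhere in these docs) is not documented here:

```shell
#!/bin/bash
# Build a scratch DB with a minimal exchanges table, then query it the same
# way the troubleshooting steps query ~/.clank/conversation-index/db.sqlite.
DB=$(mktemp /tmp/scratch-index.XXXXXX)
sqlite3 "$DB" "CREATE TABLE exchanges(id INTEGER PRIMARY KEY, text TEXT);
               INSERT INTO exchanges(text) VALUES ('first exchange'), ('second exchange');"
sqlite3 "$DB" "SELECT COUNT(*) FROM exchanges;"   # prints 2
```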
## See Also

- **Searching:** See SKILL.md for search modes (vector, text, time filtering)
- **Deployment:** See DEPLOYMENT.md for production runbook

skills/collaboration/remembering-conversations/SKILL.md (69 lines, new file)
---
name: Remembering Conversations
description: Search previous Claude Code conversations for facts, patterns, decisions, and context using semantic or text search
when_to_use: When your human partner mentions "we discussed this before". When debugging similar issues. When looking for architectural decisions or code patterns from past work. Before reinventing solutions. When you need to find a specific git SHA or error message.
version: 1.0.0
---

# Remembering Conversations

Search archived conversations using semantic similarity or exact text matching.

**Core principle:** Search before reinventing.

**Announce:** "I'm searching previous conversations for [topic]."

**Setup:** See INDEXING.md

## When to Use

**Search when:**
- Your human partner mentions "we discussed this before"
- Debugging similar issues
- Looking for architectural decisions or patterns
- Before implementing something familiar

**Don't search when:**
- The info is in the current conversation
- The question is about the current codebase (use Grep/Read)

## In-Session Use

**Always use subagents** (50-100x context savings). See skills/getting-started for the workflow.

**Manual/CLI use:** Direct search (below) is for humans outside Claude Code sessions.

## Direct Search (Manual/CLI)

**Tool:** `${CLAUDE_PLUGIN_ROOT}/skills/collaboration/remembering-conversations/tool/search-conversations`

**Modes:**
```bash
search-conversations "query"          # Vector similarity (default)
search-conversations --text "exact"   # Exact string match
search-conversations --both "query"   # Both modes
```

**Flags:**
```bash
--after YYYY-MM-DD    # Filter by date
--before YYYY-MM-DD   # Filter by date
--limit N             # Max results (default: 10)
--help                # Full usage
```

**Examples:**
```bash
# Semantic search
search-conversations "React Router authentication errors"

# Find git SHA
search-conversations --text "a1b2c3d4"

# Time range
search-conversations --after 2025-09-01 "refactoring"
```

Returns: project, date, conversation summary, matched exchange, similarity %, file path.

**For details:** Run `search-conversations --help`
skills/collaboration/remembering-conversations/tool/.gitignore (8 lines, vendored, new file)

```
node_modules/
dist/
*.log
.DS_Store

# Local data (database and archives are at ~/.clank/, not in repo)
*.sqlite*
.cache/
```
skills/collaboration/remembering-conversations/tool/hooks/sessionEnd (10 lines, executable file)

```bash
#!/bin/bash
# Auto-index conversation after session ends
# Copy to ~/.claude/hooks/sessionEnd to enable

INDEXER="$HOME/.claude/skills/collaboration/remembering-conversations/tool/index-conversations"

if [ -n "$SESSION_ID" ] && [ -x "$INDEXER" ]; then
  # Run in background, suppress output
  "$INDEXER" --session "$SESSION_ID" > /dev/null 2>&1 &
fi
```
skills/collaboration/remembering-conversations/tool/index-conversations (79 lines, executable file)

```bash
#!/bin/bash
cd "$(dirname "$0")"

SCRIPT_DIR="$(pwd)"

case "$1" in
  --help|-h)
    cat <<'EOF'
index-conversations - Index and manage conversation archives

USAGE:
  index-conversations [COMMAND] [OPTIONS]

COMMANDS:
  (default)        Index all conversations
  --cleanup        Process only unindexed conversations (fast, cheap)
  --session ID     Index specific session (used by hook)
  --verify         Check index health
  --repair         Fix detected issues
  --rebuild        Delete DB and re-index everything (requires confirmation)

OPTIONS:
  --concurrency N  Parallel summarization (1-16, default: 1)
  -c N             Short form of --concurrency
  --help, -h       Show this help

EXAMPLES:
  # Index all unprocessed (recommended for backfill)
  index-conversations --cleanup

  # Index with 8 parallel summarizations (8x faster)
  index-conversations --cleanup --concurrency 8

  # Check index health
  index-conversations --verify

  # Fix any issues found
  index-conversations --repair

  # Nuclear option (deletes everything, re-indexes)
  index-conversations --rebuild

WORKFLOW:
  1. Initial setup: index-conversations --cleanup
  2. Ongoing: Auto-indexed by sessionEnd hook
  3. Health check: index-conversations --verify (weekly)
  4. Recovery: index-conversations --repair (if issues found)

SEE ALSO:
  INDEXING.md - Setup and maintenance guide
  DEPLOYMENT.md - Production runbook
EOF
    exit 0
    ;;
  --session)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" index-session "$@"
    ;;
  --cleanup)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" index-cleanup "$@"
    ;;
  --verify)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" verify "$@"
    ;;
  --repair)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" repair "$@"
    ;;
  --rebuild)
    echo "⚠️  This will DELETE the entire database and re-index everything."
    read -p "Are you sure? [yes/NO]: " confirm
    if [ "$confirm" = "yes" ]; then
      npx tsx "$SCRIPT_DIR/src/index-cli.ts" rebuild "$@"
    else
      echo "Cancelled"
    fi
    ;;
  *)
    npx tsx "$SCRIPT_DIR/src/index-cli.ts" index-all "$@"
    ;;
esac
```
skills/collaboration/remembering-conversations/tool/install-hook (82 lines, executable file)

```bash
#!/bin/bash
# Install sessionEnd hook with merge support

HOOK_DIR="$HOME/.claude/hooks"
HOOK_FILE="$HOOK_DIR/sessionEnd"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
SOURCE_HOOK="$SCRIPT_DIR/hooks/sessionEnd"

echo "Installing conversation indexing hook..."

# Create hooks directory
mkdir -p "$HOOK_DIR"

# Handle existing hook
if [ -f "$HOOK_FILE" ]; then
  echo "⚠️  Existing sessionEnd hook found"

  # Check if our indexer is already installed
  if grep -q "remembering-conversations.*index-conversations" "$HOOK_FILE"; then
    echo "✓ Indexer already installed in existing hook"
    exit 0
  fi

  # Create backup
  BACKUP="$HOOK_FILE.backup.$(date +%s)"
  cp "$HOOK_FILE" "$BACKUP"
  echo "Created backup: $BACKUP"

  # Offer merge or replace
  echo ""
  echo "Options:"
  echo "  (m) Merge - Add indexer to existing hook"
  echo "  (r) Replace - Overwrite with our hook"
  echo "  (c) Cancel - Exit without changes"
  echo ""
  read -p "Choose [m/r/c]: " choice

  case "$choice" in
    m|M)
      # Append our indexer
      cat >> "$HOOK_FILE" <<'EOF'

# Auto-index conversations (remembering-conversations skill)
INDEXER="$HOME/.claude/skills/collaboration/remembering-conversations/tool/index-conversations"
if [ -n "$SESSION_ID" ] && [ -x "$INDEXER" ]; then
  "$INDEXER" --session "$SESSION_ID" > /dev/null 2>&1 &
fi
EOF
      echo "✓ Merged indexer into existing hook"
      ;;
    r|R)
      cp "$SOURCE_HOOK" "$HOOK_FILE"
      chmod +x "$HOOK_FILE"
      echo "✓ Replaced hook with our version"
      ;;
    c|C)
      echo "Installation cancelled"
      exit 1
      ;;
    *)
      echo "Invalid choice. Exiting."
      exit 1
      ;;
  esac
else
  # No existing hook, install fresh
  cp "$SOURCE_HOOK" "$HOOK_FILE"
  chmod +x "$HOOK_FILE"
  echo "✓ Installed sessionEnd hook"
fi

# Verify executable
if [ ! -x "$HOOK_FILE" ]; then
  chmod +x "$HOOK_FILE"
fi

echo ""
echo "Hook installed successfully!"
echo "Location: $HOOK_FILE"
echo ""
echo "Test it:"
echo "  SESSION_ID=test-\$(date +%s) $HOOK_FILE"
```
skills/collaboration/remembering-conversations/tool/package-lock.json (2816 lines, generated, new file — diff suppressed because it is too large)

skills/collaboration/remembering-conversations/tool/package.json (29 lines, new file)

```json
{
  "name": "conversation-search",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "index": "./index-conversations",
    "search": "./search-conversations",
    "test": "vitest run",
    "test:watch": "vitest"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "type": "module",
  "dependencies": {
    "@anthropic-ai/claude-agent-sdk": "^0.1.9",
    "@xenova/transformers": "^2.17.2",
    "better-sqlite3": "^12.4.1",
    "sqlite-vec": "^0.1.7-alpha.2"
  },
  "devDependencies": {
    "@types/better-sqlite3": "^7.6.13",
    "@types/node": "^24.7.0",
    "tsx": "^4.20.6",
    "typescript": "^5.9.3",
    "vitest": "^3.2.4"
  }
}
```
skills/collaboration/remembering-conversations/tool/prompts/search-agent.md (157 lines, new file)

# Conversation Search Agent

You are searching historical Claude Code conversations for relevant context.

**Your task:**
1. Search conversations for: {TOPIC}
2. Read the top 2-5 most relevant results
3. Synthesize key findings (max 1000 words)
4. Return synthesis + source pointers (so main agent can dig deeper)

## Search Query

{SEARCH_QUERY}

## What to Look For

{FOCUS_AREAS}

Example focus areas:
- What was the problem or question?
- What solution was chosen and why?
- What alternatives were considered and rejected?
- Any gotchas, edge cases, or lessons learned?
- Relevant code patterns, APIs, or approaches used
- Architectural decisions and rationale

## How to Search

Run:
```bash
~/.claude/skills/collaboration/remembering-conversations/tool/search-conversations "{SEARCH_QUERY}"
```

This returns:
- Project name and date
- Conversation summary (AI-generated)
- Matched exchange with similarity score
- File path and line numbers

Read the full conversations for the top 2-5 results to get complete context.
## Output Format

**Required structure:**

### Summary
[Synthesize findings in 200-1000 words. Adapt structure to what you found:
- Quick answer? 1-2 paragraphs.
- Complex topic? Use sections (Context/Solution/Rationale/Lessons/Code).
- Multiple approaches? Compare and contrast.
- Historical evolution? Show progression chronologically.

Focus on actionable insights for the current task.]

### Sources
[List ALL conversations examined, in order of relevance:]

**1. [project-name, YYYY-MM-DD]** - X% match
Conversation summary: [One sentence - what was this conversation about?]
File: ~/.clank/conversation-archive/.../uuid.jsonl:start-end
Status: [Read in detail | Reviewed summary only | Skimmed]

**2. [project-name, YYYY-MM-DD]** - X% match
Conversation summary: ...
File: ...
Status: ...

[Continue for all examined sources...]

### For Follow-Up

Main agent can:
- Ask you to dig deeper into a specific source (#1, #2, etc.)
- Ask you to read adjacent exchanges in a conversation
- Ask you to search with a refined query
- Read sources directly (discouraged - risks context bloat)

## Critical Rules

**DO:**
- Search using the provided query
- Read full conversations for top results
- Synthesize into actionable insights (200-1000 words)
- Include ALL sources with metadata (project, date, summary, file, status)
- Focus on what will help the current task
- Include specific details (function names, error messages, line numbers)

**DO NOT:**
- Include raw conversation excerpts (synthesize instead)
- Paste full file contents
- Add meta-commentary ("I searched and found...")
- Exceed 1000 words in Summary section
- Return search results verbatim
## Example Output

````
### Summary

A developer needed to handle authentication errors in React Router 7 data loaders
without crashing the app. The solution uses RR7's errorElement + useRouteError()
to catch 401s and redirect to login.

**Key implementation:**
Protected route wrapper catches loader errors, checks error.status === 401.
If 401, redirects to /login with return URL. Otherwise shows error boundary.

**Why this works:**
Loaders can't use hooks (tried useNavigate, failed). Throwing redirect()
bypasses error handling. Final approach lets errors bubble to errorElement
where component context is available.

**Critical gotchas:**
- Test with expired tokens, not just missing tokens
- Error boundaries need unique keys per route or won't reset
- Always include return URL in redirect
- Loaders execute before components, no hook access

**Code pattern:**
```typescript
// In loader
if (!response.ok) throw { status: response.status, message: 'Failed' };

// In ErrorBoundary
const error = useRouteError();
if (error.status === 401) navigate('/login?return=' + location.pathname);
```

### Sources

**1. [react-router-7-starter, 2024-09-17]** - 92% match
Conversation summary: Built authentication system with JWT, implemented protected routes
File: ~/.clank/conversation-archive/react-router-7-starter/19df92b9.jsonl:145-289
Status: Read in detail (multiple exchanges on error handling evolution)

**2. [react-router-docs-reading, 2024-09-10]** - 78% match
Conversation summary: Read RR7 docs, discussed new loader patterns and errorElement
File: ~/.clank/conversation-archive/react-router-docs-reading/a3c871f2.jsonl:56-98
Status: Reviewed summary only (confirmed errorElement usage)

**3. [auth-debugging, 2024-09-18]** - 73% match
Conversation summary: Fixed token expiration handling and error boundary reset issues
File: ~/.clank/conversation-archive/react-router-7-starter/7b2e8d91.jsonl:201-345
Status: Read in detail (discovered gotchas about keys and expired tokens)

### For Follow-Up

Main agent can ask me to:
- Dig deeper into source #1 (full error handling evolution)
- Read adjacent exchanges in #3 (more debugging context)
- Search for "React Router error boundary patterns" more broadly
````

This output:
- Synthesis: ~350 words (actionable, specific)
- Sources: Full metadata for 3 conversations
- Enables iteration without context bloat
105  skills/collaboration/remembering-conversations/tool/search-conversations  Executable file
@@ -0,0 +1,105 @@
#!/bin/bash
cd "$(dirname "$0")"

# Parse arguments
MODE="vector"
AFTER=""
BEFORE=""
LIMIT="10"
QUERY=""

while [[ $# -gt 0 ]]; do
  case $1 in
    --help|-h)
      cat <<'EOF'
search-conversations - Search previous Claude Code conversations

USAGE:
  search-conversations [OPTIONS] <query>

MODES:
  (default)   Vector similarity search (semantic)
  --text      Exact string matching (for git SHAs, error codes)
  --both      Combine vector + text search

OPTIONS:
  --after DATE   Only conversations after YYYY-MM-DD
  --before DATE  Only conversations before YYYY-MM-DD
  --limit N      Max results (default: 10)
  --help, -h     Show this help

EXAMPLES:
  # Semantic search
  search-conversations "React Router authentication errors"

  # Find exact string (git SHA, error message)
  search-conversations --text "a1b2c3d4e5f6"

  # Time filtering
  search-conversations --after 2025-09-01 "refactoring"
  search-conversations --before 2025-10-01 --limit 20 "bug fix"

  # Combine modes
  search-conversations --both "React Router data loading"

OUTPUT FORMAT:
  For each result:
  - Project name and date
  - Conversation summary (AI-generated)
  - Matched exchange with similarity % (vector mode)
  - File path with line numbers

  Example:
    1. [react-router-7-starter, 2025-09-17]
       Built authentication with JWT, implemented protected routes.

       92% match: "How do I handle auth errors in loaders?"
       ~/.clank/conversation-archive/.../uuid.jsonl:145-167

QUERY TIPS:
  - Use natural language: "How did we handle X?"
  - Be specific: "React Router data loading" not "routing"
  - Include context: "TypeScript type narrowing in guards"

SEE ALSO:
  skills/collaboration/remembering-conversations/INDEXING.md - Manage index
  skills/collaboration/remembering-conversations/SKILL.md - Usage guide
EOF
      exit 0
      ;;
    --text)
      MODE="text"
      shift
      ;;
    --both)
      MODE="both"
      shift
      ;;
    --after)
      AFTER="$2"
      shift 2
      ;;
    --before)
      BEFORE="$2"
      shift 2
      ;;
    --limit)
      LIMIT="$2"
      shift 2
      ;;
    *)
      QUERY="$QUERY $1"
      shift
      ;;
  esac
done

QUERY=$(echo "$QUERY" | sed 's/^ *//')

if [ -z "$QUERY" ]; then
  echo "Usage: search-conversations [options] <query>"
  echo "Try: search-conversations --help"
  exit 1
fi

npx tsx src/search-cli.ts "$QUERY" "$MODE" "$LIMIT" "$AFTER" "$BEFORE"
@@ -0,0 +1,112 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { initDatabase, migrateSchema, insertExchange } from './db.js';
import { ConversationExchange } from './types.js';
import fs from 'fs';
import path from 'path';
import os from 'os';
import Database from 'better-sqlite3';

describe('database migration', () => {
  const testDir = path.join(os.tmpdir(), 'db-migration-test-' + Date.now());
  const dbPath = path.join(testDir, 'test.db');

  beforeEach(() => {
    fs.mkdirSync(testDir, { recursive: true });
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    delete process.env.TEST_DB_PATH;
    fs.rmSync(testDir, { recursive: true, force: true });
  });

  it('adds last_indexed column to existing database', () => {
    // Create a database with old schema (no last_indexed)
    const db = new Database(dbPath);
    db.exec(`
      CREATE TABLE exchanges (
        id TEXT PRIMARY KEY,
        project TEXT NOT NULL,
        timestamp TEXT NOT NULL,
        user_message TEXT NOT NULL,
        assistant_message TEXT NOT NULL,
        archive_path TEXT NOT NULL,
        line_start INTEGER NOT NULL,
        line_end INTEGER NOT NULL,
        embedding BLOB
      )
    `);

    // Verify column doesn't exist
    const columnsBefore = db.prepare(`PRAGMA table_info(exchanges)`).all();
    const hasLastIndexedBefore = columnsBefore.some((col: any) => col.name === 'last_indexed');
    expect(hasLastIndexedBefore).toBe(false);

    db.close();

    // Run migration
    const migratedDb = initDatabase();

    // Verify column now exists
    const columnsAfter = migratedDb.prepare(`PRAGMA table_info(exchanges)`).all();
    const hasLastIndexedAfter = columnsAfter.some((col: any) => col.name === 'last_indexed');
    expect(hasLastIndexedAfter).toBe(true);

    migratedDb.close();
  });

  it('handles existing last_indexed column gracefully', () => {
    // Create database with migration already applied
    const db = initDatabase();

    // Run migration again - should not error
    expect(() => migrateSchema(db)).not.toThrow();

    db.close();
  });
});

describe('insertExchange with last_indexed', () => {
  const testDir = path.join(os.tmpdir(), 'insert-test-' + Date.now());
  const dbPath = path.join(testDir, 'test.db');

  beforeEach(() => {
    fs.mkdirSync(testDir, { recursive: true });
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    delete process.env.TEST_DB_PATH;
    fs.rmSync(testDir, { recursive: true, force: true });
  });

  it('sets last_indexed timestamp when inserting exchange', () => {
    const db = initDatabase();

    const exchange: ConversationExchange = {
      id: 'test-id-1',
      project: 'test-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'Hello',
      assistantMessage: 'Hi there!',
      archivePath: '/test/path.jsonl',
      lineStart: 1,
      lineEnd: 2
    };

    const beforeInsert = Date.now();
    // Create proper 384-dimensional embedding
    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);
    const afterInsert = Date.now();

    // Query the exchange
    const row = db.prepare(`SELECT last_indexed FROM exchanges WHERE id = ?`).get('test-id-1') as any;

    expect(row.last_indexed).toBeDefined();
    expect(row.last_indexed).toBeGreaterThanOrEqual(beforeInsert);
    expect(row.last_indexed).toBeLessThanOrEqual(afterInsert);

    db.close();
  });
});
134  skills/collaboration/remembering-conversations/tool/src/db.ts  Normal file
@@ -0,0 +1,134 @@
import Database from 'better-sqlite3';
import { ConversationExchange } from './types.js';
import path from 'path';
import os from 'os';
import fs from 'fs';
import * as sqliteVec from 'sqlite-vec';

function getDbPath(): string {
  return process.env.TEST_DB_PATH || path.join(os.homedir(), '.clank', 'conversation-index', 'db.sqlite');
}

export function migrateSchema(db: Database.Database): void {
  const hasColumn = db.prepare(`
    SELECT COUNT(*) as count FROM pragma_table_info('exchanges')
    WHERE name='last_indexed'
  `).get() as { count: number };

  if (hasColumn.count === 0) {
    console.log('Migrating schema: adding last_indexed column...');
    db.prepare('ALTER TABLE exchanges ADD COLUMN last_indexed INTEGER').run();
    console.log('Migration complete.');
  }
}

export function initDatabase(): Database.Database {
  const dbPath = getDbPath();

  // Ensure directory exists
  const dbDir = path.dirname(dbPath);
  if (!fs.existsSync(dbDir)) {
    fs.mkdirSync(dbDir, { recursive: true });
  }

  const db = new Database(dbPath);

  // Load sqlite-vec extension
  sqliteVec.load(db);

  // Enable WAL mode for better concurrency
  db.pragma('journal_mode = WAL');

  // Create exchanges table
  db.exec(`
    CREATE TABLE IF NOT EXISTS exchanges (
      id TEXT PRIMARY KEY,
      project TEXT NOT NULL,
      timestamp TEXT NOT NULL,
      user_message TEXT NOT NULL,
      assistant_message TEXT NOT NULL,
      archive_path TEXT NOT NULL,
      line_start INTEGER NOT NULL,
      line_end INTEGER NOT NULL,
      embedding BLOB
    )
  `);

  // Create vector search index
  db.exec(`
    CREATE VIRTUAL TABLE IF NOT EXISTS vec_exchanges USING vec0(
      id TEXT PRIMARY KEY,
      embedding FLOAT[384]
    )
  `);

  // Create index on timestamp for sorting
  db.exec(`
    CREATE INDEX IF NOT EXISTS idx_timestamp ON exchanges(timestamp DESC)
  `);

  // Run migrations
  migrateSchema(db);

  return db;
}

export function insertExchange(
  db: Database.Database,
  exchange: ConversationExchange,
  embedding: number[]
): void {
  const now = Date.now();

  const stmt = db.prepare(`
    INSERT OR REPLACE INTO exchanges
    (id, project, timestamp, user_message, assistant_message, archive_path, line_start, line_end, last_indexed)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
  `);

  stmt.run(
    exchange.id,
    exchange.project,
    exchange.timestamp,
    exchange.userMessage,
    exchange.assistantMessage,
    exchange.archivePath,
    exchange.lineStart,
    exchange.lineEnd,
    now
  );

  // Insert into vector table (delete first since virtual tables don't support REPLACE)
  const delStmt = db.prepare(`DELETE FROM vec_exchanges WHERE id = ?`);
  delStmt.run(exchange.id);

  const vecStmt = db.prepare(`
    INSERT INTO vec_exchanges (id, embedding)
    VALUES (?, ?)
  `);

  vecStmt.run(exchange.id, Buffer.from(new Float32Array(embedding).buffer));
}

export function getAllExchanges(db: Database.Database): Array<{ id: string; archivePath: string }> {
  const stmt = db.prepare(`SELECT id, archive_path as archivePath FROM exchanges`);
  return stmt.all() as Array<{ id: string; archivePath: string }>;
}

export function getFileLastIndexed(db: Database.Database, archivePath: string): number | null {
  const stmt = db.prepare(`
    SELECT MAX(last_indexed) as lastIndexed
    FROM exchanges
    WHERE archive_path = ?
  `);
  const row = stmt.get(archivePath) as { lastIndexed: number | null };
  return row.lastIndexed;
}

export function deleteExchange(db: Database.Database, id: string): void {
  // Delete from vector table
  db.prepare(`DELETE FROM vec_exchanges WHERE id = ?`).run(id);

  // Delete from main table
  db.prepare(`DELETE FROM exchanges WHERE id = ?`).run(id);
}
@@ -0,0 +1,39 @@
import { pipeline, Pipeline } from '@xenova/transformers';

let embeddingPipeline: Pipeline | null = null;

export async function initEmbeddings(): Promise<void> {
  if (!embeddingPipeline) {
    console.log('Loading embedding model (first run may take time)...');
    embeddingPipeline = await pipeline(
      'feature-extraction',
      'Xenova/all-MiniLM-L6-v2'
    );
    console.log('Embedding model loaded');
  }
}

export async function generateEmbedding(text: string): Promise<number[]> {
  if (!embeddingPipeline) {
    await initEmbeddings();
  }

  // Truncate text to avoid token limits (512 tokens max for this model)
  const truncated = text.substring(0, 2000);

  const output = await embeddingPipeline!(truncated, {
    pooling: 'mean',
    normalize: true
  });

  return Array.from(output.data);
}

export async function generateExchangeEmbedding(
  userMessage: string,
  assistantMessage: string
): Promise<number[]> {
  // Combine user question and assistant answer for better searchability
  const combined = `User: ${userMessage}\n\nAssistant: ${assistantMessage}`;
  return generateEmbedding(combined);
}
@@ -0,0 +1,115 @@
#!/usr/bin/env node
import { verifyIndex, repairIndex } from './verify.js';
import { indexSession, indexUnprocessed, indexConversations } from './indexer.js';
import { initDatabase } from './db.js';
import fs from 'fs';
import path from 'path';
import os from 'os';

const command = process.argv[2];

// Parse --concurrency flag from remaining args
function getConcurrency(): number {
  const concurrencyIndex = process.argv.findIndex(arg => arg === '--concurrency' || arg === '-c');
  if (concurrencyIndex !== -1 && process.argv[concurrencyIndex + 1]) {
    const value = parseInt(process.argv[concurrencyIndex + 1], 10);
    if (value >= 1 && value <= 16) return value;
  }
  return 1; // default
}

const concurrency = getConcurrency();

async function main() {
  try {
    switch (command) {
      case 'index-session':
        const sessionId = process.argv[3];
        if (!sessionId) {
          console.error('Usage: index-cli index-session <session-id>');
          process.exit(1);
        }
        await indexSession(sessionId, concurrency);
        break;

      case 'index-cleanup':
        await indexUnprocessed(concurrency);
        break;

      case 'verify':
        console.log('Verifying conversation index...');
        const issues = await verifyIndex();

        console.log('\n=== Verification Results ===');
        console.log(`Missing summaries: ${issues.missing.length}`);
        console.log(`Orphaned entries: ${issues.orphaned.length}`);
        console.log(`Outdated files: ${issues.outdated.length}`);
        console.log(`Corrupted files: ${issues.corrupted.length}`);

        if (issues.missing.length > 0) {
          console.log('\nMissing summaries:');
          issues.missing.forEach(m => console.log(`  ${m.path}`));
        }

        if (issues.missing.length + issues.orphaned.length + issues.outdated.length + issues.corrupted.length > 0) {
          console.log('\nRun with --repair to fix these issues.');
          process.exit(1);
        } else {
          console.log('\n✅ Index is healthy!');
        }
        break;

      case 'repair':
        console.log('Verifying conversation index...');
        const repairIssues = await verifyIndex();

        if (repairIssues.missing.length + repairIssues.orphaned.length + repairIssues.outdated.length > 0) {
          await repairIndex(repairIssues);
        } else {
          console.log('✅ No issues to repair!');
        }
        break;

      case 'rebuild':
        console.log('Rebuilding entire index...');

        // Delete database
        const dbPath = path.join(os.homedir(), '.clank', 'conversation-index', 'db.sqlite');
        if (fs.existsSync(dbPath)) {
          fs.unlinkSync(dbPath);
          console.log('Deleted existing database');
        }

        // Delete all summary files
        const archiveDir = path.join(os.homedir(), '.clank', 'conversation-archive');
        if (fs.existsSync(archiveDir)) {
          const projects = fs.readdirSync(archiveDir);
          for (const project of projects) {
            const projectPath = path.join(archiveDir, project);
            if (!fs.statSync(projectPath).isDirectory()) continue;

            const summaries = fs.readdirSync(projectPath).filter(f => f.endsWith('-summary.txt'));
            for (const summary of summaries) {
              fs.unlinkSync(path.join(projectPath, summary));
            }
          }
          console.log('Deleted all summary files');
        }

        // Re-index everything
        console.log('Re-indexing all conversations...');
        await indexConversations(undefined, undefined, concurrency);
        break;

      case 'index-all':
      default:
        await indexConversations(undefined, undefined, concurrency);
        break;
    }
  } catch (error) {
    console.error('Error:', error);
    process.exit(1);
  }
}

main();
@@ -0,0 +1,356 @@
import fs from 'fs';
import path from 'path';
import os from 'os';
import { initDatabase, insertExchange } from './db.js';
import { parseConversation } from './parser.js';
import { initEmbeddings, generateExchangeEmbedding } from './embeddings.js';
import { summarizeConversation } from './summarizer.js';
import { ConversationExchange } from './types.js';

// Set max output tokens for Claude SDK (used by summarizer)
process.env.CLAUDE_CODE_MAX_OUTPUT_TOKENS = '20000';

// Increase max listeners for concurrent API calls
import { EventEmitter } from 'events';
EventEmitter.defaultMaxListeners = 20;

// Allow overriding paths for testing
function getProjectsDir(): string {
  return process.env.TEST_PROJECTS_DIR || path.join(os.homedir(), '.claude', 'projects');
}

function getArchiveDir(): string {
  return process.env.TEST_ARCHIVE_DIR || path.join(os.homedir(), '.clank', 'conversation-archive');
}

// Projects to exclude from indexing (configurable via env or config file)
function getExcludedProjects(): string[] {
  // Check env variable first
  if (process.env.CONVERSATION_SEARCH_EXCLUDE_PROJECTS) {
    return process.env.CONVERSATION_SEARCH_EXCLUDE_PROJECTS.split(',').map(p => p.trim());
  }

  // Check for config file
  const configPath = path.join(os.homedir(), '.clank', 'conversation-index', 'exclude.txt');
  if (fs.existsSync(configPath)) {
    const content = fs.readFileSync(configPath, 'utf-8');
    return content.split('\n').map(line => line.trim()).filter(line => line && !line.startsWith('#'));
  }

  // Default: no exclusions
  return [];
}

// Process items in batches with limited concurrency
async function processBatch<T, R>(
  items: T[],
  processor: (item: T) => Promise<R>,
  concurrency: number
): Promise<R[]> {
  const results: R[] = [];

  for (let i = 0; i < items.length; i += concurrency) {
    const batch = items.slice(i, i + concurrency);
    const batchResults = await Promise.all(batch.map(processor));
    results.push(...batchResults);
  }

  return results;
}

export async function indexConversations(
  limitToProject?: string,
  maxConversations?: number,
  concurrency: number = 1
): Promise<void> {
  console.log('Initializing database...');
  const db = initDatabase();

  console.log('Loading embedding model...');
  await initEmbeddings();

  console.log('Scanning for conversation files...');
  const PROJECTS_DIR = getProjectsDir();
  const ARCHIVE_DIR = getArchiveDir();
  const projects = fs.readdirSync(PROJECTS_DIR);

  let totalExchanges = 0;
  let conversationsProcessed = 0;

  const excludedProjects = getExcludedProjects();

  for (const project of projects) {
    // Skip excluded projects
    if (excludedProjects.includes(project)) {
      console.log(`\nSkipping excluded project: ${project}`);
      continue;
    }

    // Skip if limiting to specific project
    if (limitToProject && project !== limitToProject) continue;

    const projectPath = path.join(PROJECTS_DIR, project);
    const stat = fs.statSync(projectPath);

    if (!stat.isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.endsWith('.jsonl'));

    if (files.length === 0) continue;

    console.log(`\nProcessing project: ${project} (${files.length} conversations)`);
    if (concurrency > 1) console.log(`  Concurrency: ${concurrency}`);

    // Create archive directory for this project
    const projectArchive = path.join(ARCHIVE_DIR, project);
    fs.mkdirSync(projectArchive, { recursive: true });

    // Prepare all conversations first
    type ConvToProcess = {
      file: string;
      sourcePath: string;
      archivePath: string;
      summaryPath: string;
      exchanges: ConversationExchange[];
    };

    const toProcess: ConvToProcess[] = [];

    for (const file of files) {
      const sourcePath = path.join(projectPath, file);
      const archivePath = path.join(projectArchive, file);

      // Copy to archive
      if (!fs.existsSync(archivePath)) {
        fs.copyFileSync(sourcePath, archivePath);
        console.log(`  Archived: ${file}`);
      }

      // Parse conversation
      const exchanges = await parseConversation(sourcePath, project, archivePath);

      if (exchanges.length === 0) {
        console.log(`  Skipped ${file} (no exchanges)`);
        continue;
      }

      toProcess.push({
        file,
        sourcePath,
        archivePath,
        summaryPath: archivePath.replace('.jsonl', '-summary.txt'),
        exchanges
      });
    }

    // Batch summarize conversations in parallel
    const needsSummary = toProcess.filter(c => !fs.existsSync(c.summaryPath));

    if (needsSummary.length > 0) {
      console.log(`  Generating ${needsSummary.length} summaries (concurrency: ${concurrency})...`);

      await processBatch(needsSummary, async (conv) => {
        try {
          const summary = await summarizeConversation(conv.exchanges);
          fs.writeFileSync(conv.summaryPath, summary, 'utf-8');
          const wordCount = summary.split(/\s+/).length;
          console.log(`  ✓ ${conv.file}: ${wordCount} words`);
          return summary;
        } catch (error) {
          console.log(`  ✗ ${conv.file}: ${error}`);
          return null;
        }
      }, concurrency);
    }

    // Now process embeddings and DB inserts (fast, sequential is fine)
    for (const conv of toProcess) {
      for (const exchange of conv.exchanges) {
        const embedding = await generateExchangeEmbedding(
          exchange.userMessage,
          exchange.assistantMessage
        );

        insertExchange(db, exchange, embedding);
      }

      totalExchanges += conv.exchanges.length;
      conversationsProcessed++;

      // Check if we hit the limit
      if (maxConversations && conversationsProcessed >= maxConversations) {
        console.log(`\nReached limit of ${maxConversations} conversations`);
        db.close();
        console.log(`✅ Indexing complete! Conversations: ${conversationsProcessed}, Exchanges: ${totalExchanges}`);
        return;
      }
    }
  }

  db.close();
  console.log(`\n✅ Indexing complete! Conversations: ${conversationsProcessed}, Exchanges: ${totalExchanges}`);
}

export async function indexSession(sessionId: string, concurrency: number = 1): Promise<void> {
  console.log(`Indexing session: ${sessionId}`);

  // Find the conversation file for this session
  const PROJECTS_DIR = getProjectsDir();
  const ARCHIVE_DIR = getArchiveDir();
  const projects = fs.readdirSync(PROJECTS_DIR);
  const excludedProjects = getExcludedProjects();
  let found = false;

  for (const project of projects) {
    if (excludedProjects.includes(project)) continue;

    const projectPath = path.join(PROJECTS_DIR, project);
    if (!fs.statSync(projectPath).isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.includes(sessionId) && f.endsWith('.jsonl'));

    if (files.length > 0) {
      found = true;
      const file = files[0];
      const sourcePath = path.join(projectPath, file);

      const db = initDatabase();
      await initEmbeddings();

      const projectArchive = path.join(ARCHIVE_DIR, project);
      fs.mkdirSync(projectArchive, { recursive: true });

      const archivePath = path.join(projectArchive, file);

      // Archive
      if (!fs.existsSync(archivePath)) {
        fs.copyFileSync(sourcePath, archivePath);
      }

      // Parse and summarize
      const exchanges = await parseConversation(sourcePath, project, archivePath);

      if (exchanges.length > 0) {
        // Generate summary
        const summaryPath = archivePath.replace('.jsonl', '-summary.txt');
        if (!fs.existsSync(summaryPath)) {
          const summary = await summarizeConversation(exchanges);
          fs.writeFileSync(summaryPath, summary, 'utf-8');
          console.log(`Summary: ${summary.split(/\s+/).length} words`);
        }

        // Index
        for (const exchange of exchanges) {
          const embedding = await generateExchangeEmbedding(
            exchange.userMessage,
            exchange.assistantMessage
          );
          insertExchange(db, exchange, embedding);
        }

        console.log(`✅ Indexed session ${sessionId}: ${exchanges.length} exchanges`);
      }

      db.close();
      break;
    }
  }

  if (!found) {
    console.log(`Session ${sessionId} not found`);
  }
}

export async function indexUnprocessed(concurrency: number = 1): Promise<void> {
  console.log('Finding unprocessed conversations...');
  if (concurrency > 1) console.log(`Concurrency: ${concurrency}`);

  const db = initDatabase();
  await initEmbeddings();

  const PROJECTS_DIR = getProjectsDir();
  const ARCHIVE_DIR = getArchiveDir();
  const projects = fs.readdirSync(PROJECTS_DIR);
  const excludedProjects = getExcludedProjects();

  type UnprocessedConv = {
    project: string;
    file: string;
    sourcePath: string;
    archivePath: string;
    summaryPath: string;
    exchanges: ConversationExchange[];
  };

  const unprocessed: UnprocessedConv[] = [];

  // Collect all unprocessed conversations
  for (const project of projects) {
    if (excludedProjects.includes(project)) continue;

    const projectPath = path.join(PROJECTS_DIR, project);
    if (!fs.statSync(projectPath).isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.endsWith('.jsonl'));

    for (const file of files) {
      const sourcePath = path.join(projectPath, file);
      const projectArchive = path.join(ARCHIVE_DIR, project);
      const archivePath = path.join(projectArchive, file);
      const summaryPath = archivePath.replace('.jsonl', '-summary.txt');

      // Skip if already has summary
      if (fs.existsSync(summaryPath)) continue;

      fs.mkdirSync(projectArchive, { recursive: true });

      // Archive if needed
      if (!fs.existsSync(archivePath)) {
        fs.copyFileSync(sourcePath, archivePath);
      }

      // Parse and check
      const exchanges = await parseConversation(sourcePath, project, archivePath);
      if (exchanges.length === 0) continue;

      unprocessed.push({ project, file, sourcePath, archivePath, summaryPath, exchanges });
    }
  }

  if (unprocessed.length === 0) {
    console.log('✅ All conversations are already processed!');
    db.close();
    return;
  }

  console.log(`Found ${unprocessed.length} unprocessed conversations`);
  console.log(`Generating summaries (concurrency: ${concurrency})...\n`);

  // Batch process summaries
  await processBatch(unprocessed, async (conv) => {
    try {
      const summary = await summarizeConversation(conv.exchanges);
      fs.writeFileSync(conv.summaryPath, summary, 'utf-8');
      const wordCount = summary.split(/\s+/).length;
      console.log(`  ✓ ${conv.project}/${conv.file}: ${wordCount} words`);
      return summary;
    } catch (error) {
      console.log(`  ✗ ${conv.project}/${conv.file}: ${error}`);
      return null;
    }
  }, concurrency);

  // Now index embeddings
  console.log(`\nIndexing embeddings...`);
  for (const conv of unprocessed) {
    for (const exchange of conv.exchanges) {
      const embedding = await generateExchangeEmbedding(
        exchange.userMessage,
        exchange.assistantMessage
      );
      insertExchange(db, exchange, embedding);
    }
  }

  db.close();
  console.log(`\n✅ Processed ${unprocessed.length} conversations`);
}
@@ -0,0 +1,118 @@
import fs from 'fs';
import readline from 'readline';
import { ConversationExchange } from './types.js';
import crypto from 'crypto';

interface JSONLMessage {
  type: string;
  message?: {
    role: 'user' | 'assistant';
    content: string | Array<{ type: string; text?: string }>;
  };
  timestamp?: string;
  uuid?: string;
}

export async function parseConversation(
  filePath: string,
  projectName: string,
  archivePath: string
): Promise<ConversationExchange[]> {
  const exchanges: ConversationExchange[] = [];
  const fileStream = fs.createReadStream(filePath);
  const rl = readline.createInterface({
    input: fileStream,
    crlfDelay: Infinity
  });

  let lineNumber = 0;
  let currentExchange: {
    userMessage: string;
    userLine: number;
    assistantMessages: string[];
    lastAssistantLine: number;
    timestamp: string;
  } | null = null;

  const finalizeExchange = () => {
    if (currentExchange && currentExchange.assistantMessages.length > 0) {
      const exchange: ConversationExchange = {
        id: crypto
          .createHash('md5')
          .update(`${archivePath}:${currentExchange.userLine}-${currentExchange.lastAssistantLine}`)
          .digest('hex'),
        project: projectName,
        timestamp: currentExchange.timestamp,
        userMessage: currentExchange.userMessage,
        assistantMessage: currentExchange.assistantMessages.join('\n\n'),
        archivePath,
        lineStart: currentExchange.userLine,
        lineEnd: currentExchange.lastAssistantLine
      };
      exchanges.push(exchange);
    }
  };

  for await (const line of rl) {
    lineNumber++;

    try {
      const parsed: JSONLMessage = JSON.parse(line);

      // Skip non-message types
      if (parsed.type !== 'user' && parsed.type !== 'assistant') {
        continue;
      }

      if (!parsed.message) {
        continue;
      }

      // Extract text from message content
      let text = '';
      if (typeof parsed.message.content === 'string') {
        text = parsed.message.content;
      } else if (Array.isArray(parsed.message.content)) {
        text = parsed.message.content
          .filter(block => block.type === 'text' && block.text)
          .map(block => block.text)
          .join('\n');
      }

      // Skip empty messages
      if (!text.trim()) {
        continue;
      }

      if (parsed.message.role === 'user') {
        // Finalize previous exchange before starting new one
        finalizeExchange();

        // Start new exchange
        currentExchange = {
          userMessage: text,
          userLine: lineNumber,
          assistantMessages: [],
          lastAssistantLine: lineNumber,
          timestamp: parsed.timestamp || new Date().toISOString()
        };
      } else if (parsed.message.role === 'assistant' && currentExchange) {
        // Accumulate assistant messages
        currentExchange.assistantMessages.push(text);
        currentExchange.lastAssistantLine = lineNumber;
        // Update timestamp to last assistant message
        if (parsed.timestamp) {
          currentExchange.timestamp = parsed.timestamp;
        }
      }
    } catch (error) {
      // Skip malformed JSON lines
      continue;
    }
  }

  // Finalize last exchange
  finalizeExchange();

  return exchanges;
}
@@ -0,0 +1,109 @@
import { describe, it, expect } from 'vitest';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

describe('search-agent template', () => {
  const templatePath = path.join(__dirname, '..', 'prompts', 'search-agent.md');

  it('exists at expected location', () => {
    expect(fs.existsSync(templatePath)).toBe(true);
  });

  it('contains required placeholders', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for all required placeholders
    expect(content).toContain('{TOPIC}');
    expect(content).toContain('{SEARCH_QUERY}');
    expect(content).toContain('{FOCUS_AREAS}');
  });

  it('contains required output sections', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for required output format sections
    expect(content).toContain('### Summary');
    expect(content).toContain('### Sources');
    expect(content).toContain('### For Follow-Up');
  });

  it('specifies word count requirements', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Should specify 200-1000 words for synthesis
    expect(content).toMatch(/200-1000 words/);
    expect(content).toMatch(/max 1000 words/);
  });

  it('includes source metadata requirements', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for source metadata fields
    expect(content).toContain('project-name');
    expect(content).toContain('YYYY-MM-DD');
    expect(content).toContain('% match');
    expect(content).toContain('Conversation summary:');
    expect(content).toContain('File:');
    expect(content).toContain('Status:');
    expect(content).toContain('Read in detail');
    expect(content).toContain('Reviewed summary only');
    expect(content).toContain('Skimmed');
  });

  it('provides search command', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Should include the search command
    expect(content).toContain('~/.claude/skills/collaboration/remembering-conversations/tool/search-conversations');
  });

  it('includes critical rules', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check for DO and DO NOT sections
    expect(content).toContain('## Critical Rules');
    expect(content).toContain('**DO:**');
    expect(content).toContain('**DO NOT:**');
  });

  it('includes complete example output', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Check example has all required components
    expect(content).toContain('## Example Output');

    // Example should show Summary, Sources, and For Follow-Up
    const exampleSection = content.substring(content.indexOf('## Example Output'));
    expect(exampleSection).toContain('### Summary');
    expect(exampleSection).toContain('### Sources');
    expect(exampleSection).toContain('### For Follow-Up');

    // Example should show specific details
    expect(exampleSection).toContain('react-router-7-starter');
    expect(exampleSection).toContain('92% match');
    expect(exampleSection).toContain('.jsonl');
  });

  it('emphasizes synthesis over raw excerpts', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Should explicitly discourage raw conversation excerpts
    expect(content).toContain('synthesize');
    expect(content).toContain('raw conversation excerpts');
    expect(content).toContain('synthesize instead');
  });

  it('provides follow-up options', () => {
    const content = fs.readFileSync(templatePath, 'utf-8');

    // Should explain how main agent can follow up
    expect(content).toContain('Main agent can:');
    expect(content).toContain('dig deeper');
    expect(content).toContain('refined query');
    expect(content).toContain('context bloat');
  });
});
@@ -0,0 +1,28 @@
import { searchConversations, formatResults, SearchOptions } from './search.js';

const query = process.argv[2];
const mode = (process.argv[3] || 'vector') as 'vector' | 'text' | 'both';
const limit = parseInt(process.argv[4] || '10', 10);
const after = process.argv[5] || undefined;
const before = process.argv[6] || undefined;

if (!query) {
  console.error('Usage: search-conversations <query> [mode] [limit] [after] [before]');
  process.exit(1);
}

const options: SearchOptions = {
  mode,
  limit,
  after,
  before
};

searchConversations(query, options)
  .then(results => {
    console.log(formatResults(results));
  })
  .catch(error => {
    console.error('Error searching:', error);
    process.exit(1);
  });
@@ -0,0 +1,173 @@
import Database from 'better-sqlite3';
import { initDatabase } from './db.js';
import { initEmbeddings, generateEmbedding } from './embeddings.js';
import { SearchResult, ConversationExchange } from './types.js';
import fs from 'fs';

export interface SearchOptions {
  limit?: number;
  mode?: 'vector' | 'text' | 'both';
  after?: string;  // ISO date string
  before?: string; // ISO date string
}

function validateISODate(dateStr: string, paramName: string): void {
  const isoDateRegex = /^\d{4}-\d{2}-\d{2}$/;
  if (!isoDateRegex.test(dateStr)) {
    throw new Error(`Invalid ${paramName} date: "${dateStr}". Expected YYYY-MM-DD format (e.g., 2025-10-01)`);
  }
  // Verify it's actually a valid date
  const date = new Date(dateStr);
  if (isNaN(date.getTime())) {
    throw new Error(`Invalid ${paramName} date: "${dateStr}". Not a valid calendar date.`);
  }
}

export async function searchConversations(
  query: string,
  options: SearchOptions = {}
): Promise<SearchResult[]> {
  const { limit = 10, mode = 'vector', after, before } = options;

  // Validate date parameters
  if (after) validateISODate(after, '--after');
  if (before) validateISODate(before, '--before');

  const db = initDatabase();

  let results: any[] = [];

  // Build time filter clause. Interpolation is safe here: both values were
  // validated above as strict YYYY-MM-DD strings.
  const timeFilter: string[] = [];
  if (after) timeFilter.push(`e.timestamp >= '${after}'`);
  if (before) timeFilter.push(`e.timestamp <= '${before}'`);
  const timeClause = timeFilter.length > 0 ? `AND ${timeFilter.join(' AND ')}` : '';

  if (mode === 'vector' || mode === 'both') {
    // Vector similarity search
    await initEmbeddings();
    const queryEmbedding = await generateEmbedding(query);

    const stmt = db.prepare(`
      SELECT
        e.id,
        e.project,
        e.timestamp,
        e.user_message,
        e.assistant_message,
        e.archive_path,
        e.line_start,
        e.line_end,
        vec.distance
      FROM vec_exchanges AS vec
      JOIN exchanges AS e ON vec.id = e.id
      WHERE vec.embedding MATCH ?
        AND k = ?
        ${timeClause}
      ORDER BY vec.distance ASC
    `);

    results = stmt.all(
      Buffer.from(new Float32Array(queryEmbedding).buffer),
      limit
    );
  }

  if (mode === 'text' || mode === 'both') {
    // Text search
    const textStmt = db.prepare(`
      SELECT
        e.id,
        e.project,
        e.timestamp,
        e.user_message,
        e.assistant_message,
        e.archive_path,
        e.line_start,
        e.line_end,
        0 as distance
      FROM exchanges AS e
      WHERE (e.user_message LIKE ? OR e.assistant_message LIKE ?)
        ${timeClause}
      ORDER BY e.timestamp DESC
      LIMIT ?
    `);

    const textResults = textStmt.all(`%${query}%`, `%${query}%`, limit);

    if (mode === 'both') {
      // Merge and deduplicate by ID
      const seenIds = new Set(results.map(r => r.id));
      for (const textResult of textResults) {
        if (!seenIds.has(textResult.id)) {
          results.push(textResult);
        }
      }
    } else {
      results = textResults;
    }
  }

  db.close();

  return results.map((row: any) => {
    const exchange: ConversationExchange = {
      id: row.id,
      project: row.project,
      timestamp: row.timestamp,
      userMessage: row.user_message,
      assistantMessage: row.assistant_message,
      archivePath: row.archive_path,
      lineStart: row.line_start,
      lineEnd: row.line_end
    };

    // Try to load summary if available
    const summaryPath = row.archive_path.replace('.jsonl', '-summary.txt');
    let summary: string | undefined;
    if (fs.existsSync(summaryPath)) {
      summary = fs.readFileSync(summaryPath, 'utf-8').trim();
    }

    // Create snippet (first 200 chars)
    const snippet = exchange.userMessage.substring(0, 200) +
      (exchange.userMessage.length > 200 ? '...' : '');

    return {
      exchange,
      similarity: mode === 'text' ? undefined : 1 - row.distance,
      snippet,
      summary
    } as SearchResult & { summary?: string };
  });
}

export function formatResults(results: Array<SearchResult & { summary?: string }>): string {
  if (results.length === 0) {
    return 'No results found.';
  }

  let output = `Found ${results.length} relevant conversations:\n\n`;

  results.forEach((result, index) => {
    const date = new Date(result.exchange.timestamp).toISOString().split('T')[0];
    output += `${index + 1}. [${result.exchange.project}, ${date}]\n`;

    // Show conversation summary if available
    if (result.summary) {
      output += `   ${result.summary}\n\n`;
    }

    // Show match with similarity percentage
    if (result.similarity !== undefined) {
      const pct = Math.round(result.similarity * 100);
      output += `   ${pct}% match: "${result.snippet}"\n`;
    } else {
      output += `   Match: "${result.snippet}"\n`;
    }

    output += `   ${result.exchange.archivePath}:${result.exchange.lineStart}-${result.exchange.lineEnd}\n\n`;
  });

  return output;
}
@@ -0,0 +1,155 @@
import { ConversationExchange } from './types.js';
import { query } from '@anthropic-ai/claude-agent-sdk';

export function formatConversationText(exchanges: ConversationExchange[]): string {
  return exchanges.map(ex => {
    return `User: ${ex.userMessage}\n\nAgent: ${ex.assistantMessage}`;
  }).join('\n\n---\n\n');
}

function extractSummary(text: string): string {
  const match = text.match(/<summary>(.*?)<\/summary>/s);
  if (match) {
    return match[1].trim();
  }
  // Fallback if no tags found
  return text.trim();
}

async function callClaude(prompt: string, useSonnet = false): Promise<string> {
  const model = useSonnet ? 'sonnet' : 'haiku';

  for await (const message of query({
    prompt,
    options: {
      model,
      maxTokens: 4096,
      maxThinkingTokens: 0, // Disable extended thinking
      systemPrompt: 'Write concise, factual summaries. Output ONLY the summary - no preamble, no "Here is", no "I will". Your output will be indexed directly.'
    }
  })) {
    if (message && typeof message === 'object' && 'type' in message && message.type === 'result') {
      const result = (message as any).result;

      // Check if result is an API error (SDK returns errors as result strings)
      if (typeof result === 'string' && result.includes('API Error') && result.includes('thinking.budget_tokens')) {
        if (!useSonnet) {
          console.log(`  Haiku hit thinking budget error, retrying with Sonnet`);
          return await callClaude(prompt, true);
        }
        // If Sonnet also fails, return error message
        return result;
      }

      return result;
    }
  }
  return '';
}

function chunkExchanges(exchanges: ConversationExchange[], chunkSize: number): ConversationExchange[][] {
  const chunks: ConversationExchange[][] = [];
  for (let i = 0; i < exchanges.length; i += chunkSize) {
    chunks.push(exchanges.slice(i, i + chunkSize));
  }
  return chunks;
}

export async function summarizeConversation(exchanges: ConversationExchange[]): Promise<string> {
  // Handle trivial conversations
  if (exchanges.length === 0) {
    return 'Trivial conversation with no substantive content.';
  }

  if (exchanges.length === 1) {
    const text = formatConversationText(exchanges);
    if (text.length < 100 || exchanges[0].userMessage.trim() === '/exit') {
      return 'Trivial conversation with no substantive content.';
    }
  }

  // For short conversations (≤15 exchanges), summarize directly
  if (exchanges.length <= 15) {
    const conversationText = formatConversationText(exchanges);
    const prompt = `Context: This summary will be shown in a list to help users and Claude choose which conversations are relevant to a future activity.

Summarize what happened in 2-4 sentences. Be factual and specific. Output in <summary></summary> tags.

Include:
- What was built/changed/discussed (be specific)
- Key technical decisions or approaches
- Problems solved or current state

Exclude:
- Apologies, meta-commentary, or your questions
- Raw logs or debug output
- Generic descriptions - focus on what makes THIS conversation unique

Good:
<summary>Built JWT authentication for React app with refresh tokens and protected routes. Fixed token expiration bug by implementing refresh-during-request logic.</summary>

Bad:
<summary>I apologize. The conversation discussed authentication and various approaches were considered...</summary>

${conversationText}`;

    const result = await callClaude(prompt);
    return extractSummary(result);
  }

  // For long conversations, use hierarchical summarization
  console.log(`  Long conversation (${exchanges.length} exchanges) - using hierarchical summarization`);

  // Chunk into groups of 8 exchanges
  const chunks = chunkExchanges(exchanges, 8);
  console.log(`  Split into ${chunks.length} chunks`);

  // Summarize each chunk
  const chunkSummaries: string[] = [];
  for (let i = 0; i < chunks.length; i++) {
    const chunkText = formatConversationText(chunks[i]);
    const prompt = `Summarize this part of a conversation in 2-3 sentences. What happened, what was built/discussed. Use <summary></summary> tags.

${chunkText}

Example: <summary>Implemented HID keyboard functionality for ESP32. Hit Bluetooth controller initialization error, fixed by adjusting memory allocation.</summary>`;

    try {
      const summary = await callClaude(prompt);
      const extracted = extractSummary(summary);
      chunkSummaries.push(extracted);
      console.log(`  Chunk ${i + 1}/${chunks.length}: ${extracted.split(/\s+/).length} words`);
    } catch (error) {
      console.log(`  Chunk ${i + 1} failed, skipping`);
    }
  }

  if (chunkSummaries.length === 0) {
    return 'Error: Unable to summarize conversation.';
  }

  // Synthesize chunks into final summary
  const synthesisPrompt = `Context: This summary will be shown in a list to help users and Claude choose which past conversations are relevant to a future activity.

Synthesize these part-summaries into one cohesive paragraph. Focus on what was accomplished and any notable technical decisions or challenges. Output in <summary></summary> tags.

Part summaries:
${chunkSummaries.map((s, i) => `${i + 1}. ${s}`).join('\n')}

Good:
<summary>Built conversation search system with JavaScript, sqlite-vec, and local embeddings. Implemented hierarchical summarization for long conversations. System archives conversations permanently and provides semantic search via CLI.</summary>

Bad:
<summary>This conversation synthesizes several topics discussed across multiple parts...</summary>

Your summary (max 200 words):`;

  console.log(`  Synthesizing final summary...`);
  try {
    const result = await callClaude(synthesisPrompt);
    return extractSummary(result);
  } catch (error) {
    console.log(`  Synthesis failed, using chunk summaries`);
    return chunkSummaries.join(' ');
  }
}
@@ -0,0 +1,16 @@
export interface ConversationExchange {
  id: string;
  project: string;
  timestamp: string;
  userMessage: string;
  assistantMessage: string;
  archivePath: string;
  lineStart: number;
  lineEnd: number;
}

export interface SearchResult {
  exchange: ConversationExchange;
  similarity?: number; // undefined for text-mode results
  snippet: string;
}
@@ -0,0 +1,278 @@
import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { verifyIndex, repairIndex, VerificationResult } from './verify.js';
import fs from 'fs';
import path from 'path';
import os from 'os';
import { initDatabase, insertExchange } from './db.js';
import { ConversationExchange } from './types.js';

describe('verifyIndex', () => {
  const testDir = path.join(os.tmpdir(), 'conversation-search-test-' + Date.now());
  const projectsDir = path.join(testDir, '.claude', 'projects');
  const archiveDir = path.join(testDir, '.clank', 'conversation-archive');
  const dbPath = path.join(testDir, '.clank', 'conversation-index', 'db.sqlite');

  beforeEach(() => {
    // Create test directories
    fs.mkdirSync(path.join(testDir, '.clank', 'conversation-index'), { recursive: true });
    fs.mkdirSync(projectsDir, { recursive: true });
    fs.mkdirSync(archiveDir, { recursive: true });

    // Override environment paths for testing
    process.env.TEST_PROJECTS_DIR = projectsDir;
    process.env.TEST_ARCHIVE_DIR = archiveDir;
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    // Clean up test directory
    fs.rmSync(testDir, { recursive: true, force: true });
    delete process.env.TEST_PROJECTS_DIR;
    delete process.env.TEST_ARCHIVE_DIR;
    delete process.env.TEST_DB_PATH;
  });

  it('detects missing summaries', async () => {
    // Create a test conversation file without a summary
    const projectArchive = path.join(archiveDir, 'test-project');
    fs.mkdirSync(projectArchive, { recursive: true });

    const conversationPath = path.join(projectArchive, 'test-conversation.jsonl');

    // Create proper JSONL format (one JSON object per line)
    const messages = [
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'Hello' }, timestamp: '2024-01-01T00:00:00Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'Hi there!' }, timestamp: '2024-01-01T00:00:01Z' })
    ];
    fs.writeFileSync(conversationPath, messages.join('\n'));

    const result = await verifyIndex();

    expect(result.missing.length).toBe(1);
    expect(result.missing[0].path).toBe(conversationPath);
    expect(result.missing[0].reason).toBe('No summary file');
  });

  it('detects orphaned database entries', async () => {
    // Initialize database
    const db = initDatabase();

    // Create an exchange in the database
    const exchange: ConversationExchange = {
      id: 'orphan-id-1',
      project: 'deleted-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'This conversation was deleted',
      assistantMessage: 'But still in database',
      archivePath: path.join(archiveDir, 'deleted-project', 'deleted.jsonl'),
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);
    db.close();

    // Verify detects orphaned entry (file doesn't exist)
    const result = await verifyIndex();

    expect(result.orphaned.length).toBe(1);
    expect(result.orphaned[0].uuid).toBe('orphan-id-1');
    expect(result.orphaned[0].path).toBe(exchange.archivePath);
  });

  it('detects outdated files (file modified after last_indexed)', async () => {
    // Create conversation file with summary
    const projectArchive = path.join(archiveDir, 'test-project');
    fs.mkdirSync(projectArchive, { recursive: true });

    const conversationPath = path.join(projectArchive, 'updated-conversation.jsonl');
    const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');

    // Create initial conversation
    const messages = [
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'Hello' }, timestamp: '2024-01-01T00:00:00Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'Hi there!' }, timestamp: '2024-01-01T00:00:01Z' })
    ];
    fs.writeFileSync(conversationPath, messages.join('\n'));
    fs.writeFileSync(summaryPath, 'Test summary');

    // Index it
    const db = initDatabase();
    const exchange: ConversationExchange = {
      id: 'updated-id-1',
      project: 'test-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'Hello',
      assistantMessage: 'Hi there!',
      archivePath: conversationPath,
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);

    // Get the last_indexed timestamp
    const row = db.prepare(`SELECT last_indexed FROM exchanges WHERE id = ?`).get('updated-id-1') as any;
    const lastIndexed = row.last_indexed;
    db.close();

    // Wait a bit, then modify the file
    await new Promise(resolve => setTimeout(resolve, 10));

    // Update the conversation file
    const updatedMessages = [
      ...messages,
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'New message' }, timestamp: '2024-01-01T00:00:02Z' })
    ];
    fs.writeFileSync(conversationPath, updatedMessages.join('\n'));

    // Verify detects outdated file
    const result = await verifyIndex();

    expect(result.outdated.length).toBe(1);
    expect(result.outdated[0].path).toBe(conversationPath);
    expect(result.outdated[0].dbTime).toBe(lastIndexed);
    expect(result.outdated[0].fileTime).toBeGreaterThan(lastIndexed);
  });

  // Note: Parser is resilient to malformed JSON - it skips bad lines.
  // Corruption detection would require file system errors or permission issues,
  // which are harder to test. Skipping for now as missing summaries is the
  // primary use case for verification.
});

describe('repairIndex', () => {
  const testDir = path.join(os.tmpdir(), 'conversation-repair-test-' + Date.now());
  const projectsDir = path.join(testDir, '.claude', 'projects');
  const archiveDir = path.join(testDir, '.clank', 'conversation-archive');
  const dbPath = path.join(testDir, '.clank', 'conversation-index', 'db.sqlite');

  beforeEach(() => {
    // Create test directories
    fs.mkdirSync(path.join(testDir, '.clank', 'conversation-index'), { recursive: true });
    fs.mkdirSync(projectsDir, { recursive: true });
    fs.mkdirSync(archiveDir, { recursive: true });

    // Override environment paths for testing
    process.env.TEST_PROJECTS_DIR = projectsDir;
    process.env.TEST_ARCHIVE_DIR = archiveDir;
    process.env.TEST_DB_PATH = dbPath;
  });

  afterEach(() => {
    // Clean up test directory
    fs.rmSync(testDir, { recursive: true, force: true });
    delete process.env.TEST_PROJECTS_DIR;
    delete process.env.TEST_ARCHIVE_DIR;
    delete process.env.TEST_DB_PATH;
  });

  it('deletes orphaned database entries during repair', async () => {
    // Initialize database with orphaned entry
    const db = initDatabase();

    const exchange: ConversationExchange = {
      id: 'orphan-repair-1',
      project: 'deleted-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'This conversation was deleted',
      assistantMessage: 'But still in database',
      archivePath: path.join(archiveDir, 'deleted-project', 'deleted.jsonl'),
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);
    db.close();

    // Verify it's there
    const dbBefore = initDatabase();
    const beforeCount = dbBefore.prepare(`SELECT COUNT(*) as count FROM exchanges WHERE id = ?`).get('orphan-repair-1') as { count: number };
    expect(beforeCount.count).toBe(1);
    dbBefore.close();

    // Run repair
    const issues = await verifyIndex();
    expect(issues.orphaned.length).toBe(1);
    await repairIndex(issues);

    // Verify it's gone
    const dbAfter = initDatabase();
    const afterCount = dbAfter.prepare(`SELECT COUNT(*) as count FROM exchanges WHERE id = ?`).get('orphan-repair-1') as { count: number };
    expect(afterCount.count).toBe(0);
    dbAfter.close();
  });

  it('re-indexes outdated files during repair', { timeout: 30000 }, async () => {
    // Create conversation file with summary
    const projectArchive = path.join(archiveDir, 'test-project');
    fs.mkdirSync(projectArchive, { recursive: true });

    const conversationPath = path.join(projectArchive, 'outdated-repair.jsonl');
    const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');

    // Create initial conversation
    const messages = [
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'Hello' }, timestamp: '2024-01-01T00:00:00Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'Hi there!' }, timestamp: '2024-01-01T00:00:01Z' })
    ];
    fs.writeFileSync(conversationPath, messages.join('\n'));
    fs.writeFileSync(summaryPath, 'Old summary');

    // Index it
    const db = initDatabase();
    const exchange: ConversationExchange = {
      id: 'outdated-repair-1',
      project: 'test-project',
      timestamp: '2024-01-01T00:00:00Z',
      userMessage: 'Hello',
      assistantMessage: 'Hi there!',
      archivePath: conversationPath,
      lineStart: 1,
      lineEnd: 2
    };

    const embedding = new Array(384).fill(0.1);
    insertExchange(db, exchange, embedding);

    // Get the last_indexed timestamp
    const beforeRow = db.prepare(`SELECT last_indexed FROM exchanges WHERE id = ?`).get('outdated-repair-1') as any;
    const beforeIndexed = beforeRow.last_indexed;
    db.close();

    // Wait a bit, then modify the file
    await new Promise(resolve => setTimeout(resolve, 10));

    // Update the conversation file (add new exchange)
    const updatedMessages = [
      ...messages,
      JSON.stringify({ type: 'user', message: { role: 'user', content: 'New message' }, timestamp: '2024-01-01T00:00:02Z' }),
      JSON.stringify({ type: 'assistant', message: { role: 'assistant', content: 'New response' }, timestamp: '2024-01-01T00:00:03Z' })
    ];
    fs.writeFileSync(conversationPath, updatedMessages.join('\n'));

    // Verify detects outdated
    const issues = await verifyIndex();
    expect(issues.outdated.length).toBe(1);

    // Wait a bit to ensure different timestamp
    await new Promise(resolve => setTimeout(resolve, 10));

    // Run repair
    await repairIndex(issues);

    // Verify it was re-indexed with new timestamp
    const dbAfter = initDatabase();
    const afterRow = dbAfter.prepare(`SELECT MAX(last_indexed) as last_indexed FROM exchanges WHERE archive_path = ?`).get(conversationPath) as any;
    expect(afterRow.last_indexed).toBeGreaterThan(beforeIndexed);

    // Verify no longer outdated
    const verifyAfter = await verifyIndex();
    expect(verifyAfter.outdated.length).toBe(0);

    dbAfter.close();
  });
});
@@ -0,0 +1,182 @@
|
||||
import fs from 'fs';
import path from 'path';
import os from 'os';
import { parseConversation } from './parser.js';
import { initDatabase, getAllExchanges, getFileLastIndexed } from './db.js';

export interface VerificationResult {
  missing: Array<{ path: string; reason: string }>;
  orphaned: Array<{ uuid: string; path: string }>;
  outdated: Array<{ path: string; fileTime: number; dbTime: number }>;
  corrupted: Array<{ path: string; error: string }>;
}

// Allow overriding paths for testing
function getArchiveDir(): string {
  return process.env.TEST_ARCHIVE_DIR || path.join(os.homedir(), '.clank', 'conversation-archive');
}

export async function verifyIndex(): Promise<VerificationResult> {
  const result: VerificationResult = {
    missing: [],
    orphaned: [],
    outdated: [],
    corrupted: []
  };

  const archiveDir = getArchiveDir();

  // Track all files we find
  const foundFiles = new Set<string>();

  // Find all conversation files
  if (!fs.existsSync(archiveDir)) {
    return result;
  }

  // Initialize database once for all checks
  const db = initDatabase();

  const projects = fs.readdirSync(archiveDir);
  let totalChecked = 0;

  for (const project of projects) {
    const projectPath = path.join(archiveDir, project);
    const stat = fs.statSync(projectPath);

    if (!stat.isDirectory()) continue;

    const files = fs.readdirSync(projectPath).filter(f => f.endsWith('.jsonl'));

    for (const file of files) {
      totalChecked++;

      if (totalChecked % 100 === 0) {
        console.log(`  Checked ${totalChecked} conversations...`);
      }

      const conversationPath = path.join(projectPath, file);
      foundFiles.add(conversationPath);

      const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');

      // Check for missing summary
      if (!fs.existsSync(summaryPath)) {
        result.missing.push({ path: conversationPath, reason: 'No summary file' });
        continue;
      }

      // Check if file is outdated (modified after last_indexed)
      const lastIndexed = getFileLastIndexed(db, conversationPath);
      if (lastIndexed !== null) {
        const fileStat = fs.statSync(conversationPath);
        if (fileStat.mtimeMs > lastIndexed) {
          result.outdated.push({
            path: conversationPath,
            fileTime: fileStat.mtimeMs,
            dbTime: lastIndexed
          });
        }
      }

      // Try parsing to detect corruption
      try {
        await parseConversation(conversationPath, project, conversationPath);
      } catch (error) {
        result.corrupted.push({
          path: conversationPath,
          error: error instanceof Error ? error.message : String(error)
        });
      }
    }
  }

  console.log(`Verified ${totalChecked} conversations.`);

  // Check for orphaned database entries
  const dbExchanges = getAllExchanges(db);
  db.close();

  for (const exchange of dbExchanges) {
    if (!foundFiles.has(exchange.archivePath)) {
      result.orphaned.push({
        uuid: exchange.id,
        path: exchange.archivePath
      });
    }
  }

  return result;
}

export async function repairIndex(issues: VerificationResult): Promise<void> {
  console.log('Repairing index...');

  // To avoid circular dependencies, we import the indexer functions dynamically
  const { initDatabase, insertExchange, deleteExchange } = await import('./db.js');
  const { parseConversation } = await import('./parser.js');
  const { initEmbeddings, generateExchangeEmbedding } = await import('./embeddings.js');
  const { summarizeConversation } = await import('./summarizer.js');

  const db = initDatabase();
  await initEmbeddings();

  // Remove orphaned entries first
  for (const orphan of issues.orphaned) {
    console.log(`Removing orphaned entry: ${orphan.uuid}`);
    deleteExchange(db, orphan.uuid);
  }

  // Re-index missing and outdated conversations
  const toReindex = [
    ...issues.missing.map(m => m.path),
    ...issues.outdated.map(o => o.path)
  ];

  for (const conversationPath of toReindex) {
    console.log(`Re-indexing: ${conversationPath}`);
    try {
      // Extract project name from path
      const archiveDir = getArchiveDir();
      const relativePath = conversationPath.replace(archiveDir + path.sep, '');
      const project = relativePath.split(path.sep)[0];

      // Parse conversation
      const exchanges = await parseConversation(conversationPath, project, conversationPath);

      if (exchanges.length === 0) {
        console.log(`  Skipped (no exchanges)`);
        continue;
      }

      // Generate/update summary
      const summaryPath = conversationPath.replace('.jsonl', '-summary.txt');
      const summary = await summarizeConversation(exchanges);
      fs.writeFileSync(summaryPath, summary, 'utf-8');
      console.log(`  Created summary: ${summary.split(/\s+/).length} words`);

      // Index exchanges
      for (const exchange of exchanges) {
        const embedding = await generateExchangeEmbedding(
          exchange.userMessage,
          exchange.assistantMessage
        );
        insertExchange(db, exchange, embedding);
      }

      console.log(`  Indexed ${exchanges.length} exchanges`);
    } catch (error) {
      console.error(`Failed to re-index ${conversationPath}:`, error);
    }
  }

  db.close();

  // Report corrupted files (manual intervention needed)
  if (issues.corrupted.length > 0) {
    console.log('\n⚠️  Corrupted files (manual review needed):');
    issues.corrupted.forEach(c => console.log(`  ${c.path}: ${c.error}`));
  }

  console.log('✅ Repair complete.');
}
374
skills/collaboration/remembering-conversations/tool/test-deployment.sh
Executable file
@@ -0,0 +1,374 @@
#!/bin/bash
# End-to-end deployment testing
# Tests all deployment scenarios from docs/plans/2025-10-07-deployment-plan.md

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
INSTALL_HOOK="$SCRIPT_DIR/install-hook"
INDEX_CONVERSATIONS="$SCRIPT_DIR/index-conversations"

# Test counter
TESTS_RUN=0
TESTS_PASSED=0

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Helper functions
setup_test() {
  TEST_DIR=$(mktemp -d)
  export HOME="$TEST_DIR"
  export TEST_PROJECTS_DIR="$TEST_DIR/.claude/projects"
  export TEST_ARCHIVE_DIR="$TEST_DIR/.clank/conversation-archive"
  export TEST_DB_PATH="$TEST_DIR/.clank/conversation-index/db.sqlite"

  mkdir -p "$HOME/.claude/hooks"
  mkdir -p "$TEST_PROJECTS_DIR"
  mkdir -p "$TEST_ARCHIVE_DIR"
  mkdir -p "$TEST_DIR/.clank/conversation-index"
}

cleanup_test() {
  if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
    rm -rf "$TEST_DIR"
  fi
  unset TEST_PROJECTS_DIR
  unset TEST_ARCHIVE_DIR
  unset TEST_DB_PATH
}

assert_file_exists() {
  if [ ! -f "$1" ]; then
    echo -e "${RED}❌ FAIL: File does not exist: $1${NC}"
    return 1
  fi
  return 0
}

assert_file_executable() {
  if [ ! -x "$1" ]; then
    echo -e "${RED}❌ FAIL: File is not executable: $1${NC}"
    return 1
  fi
  return 0
}

assert_file_contains() {
  if ! grep -q "$2" "$1"; then
    echo -e "${RED}❌ FAIL: File $1 does not contain: $2${NC}"
    return 1
  fi
  return 0
}

assert_summary_exists() {
  local jsonl_file="$1"

  # If file is in projects dir, convert to archive path
  if [[ "$jsonl_file" == *"/.claude/projects/"* ]]; then
    jsonl_file=$(echo "$jsonl_file" | sed "s|/.claude/projects/|/.clank/conversation-archive/|")
  fi

  local summary_file="${jsonl_file%.jsonl}-summary.txt"
  if [ ! -f "$summary_file" ]; then
    echo -e "${RED}❌ FAIL: Summary does not exist: $summary_file${NC}"
    return 1
  fi
  return 0
}

create_test_conversation() {
  local project="$1"
  local uuid="${2:-test-$(date +%s)}"

  mkdir -p "$TEST_PROJECTS_DIR/$project"
  local conv_file="$TEST_PROJECTS_DIR/$project/${uuid}.jsonl"

  cat > "$conv_file" <<'EOF'
{"type":"user","message":{"role":"user","content":"What is TDD?"},"timestamp":"2024-01-01T00:00:00Z"}
{"type":"assistant","message":{"role":"assistant","content":"TDD stands for Test-Driven Development. You write tests first."},"timestamp":"2024-01-01T00:00:01Z"}
EOF

  echo "$conv_file"
}

run_test() {
  local test_name="$1"
  local test_func="$2"

  TESTS_RUN=$((TESTS_RUN + 1))
  echo -e "\n${YELLOW}Running test: $test_name${NC}"

  setup_test

  if $test_func; then
    echo -e "${GREEN}✓ PASS: $test_name${NC}"
    TESTS_PASSED=$((TESTS_PASSED + 1))
  else
    echo -e "${RED}❌ FAIL: $test_name${NC}"
  fi

  cleanup_test
}

# ============================================================================
# Scenario 1: Fresh Installation
# ============================================================================

test_scenario_1_fresh_install() {
  echo "  1. Installing hook with no existing hook..."
  "$INSTALL_HOOK" > /dev/null 2>&1 || true

  assert_file_exists "$HOME/.claude/hooks/sessionEnd" || return 1
  assert_file_executable "$HOME/.claude/hooks/sessionEnd" || return 1

  echo "  2. Creating test conversation..."
  local conv_file=$(create_test_conversation "test-project" "conv-1")

  echo "  3. Indexing conversation..."
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  echo "  4. Verifying summary was created..."
  assert_summary_exists "$conv_file" || return 1

  echo "  5. Testing hook triggers indexing..."
  export SESSION_ID="hook-session-$(date +%s)"

  # Create conversation file with SESSION_ID in name
  mkdir -p "$TEST_PROJECTS_DIR/test-project"
  local new_conv="$TEST_PROJECTS_DIR/test-project/${SESSION_ID}.jsonl"
  cat > "$new_conv" <<'EOF'
{"type":"user","message":{"role":"user","content":"What is TDD?"},"timestamp":"2024-01-01T00:00:00Z"}
{"type":"assistant","message":{"role":"assistant","content":"TDD stands for Test-Driven Development. You write tests first."},"timestamp":"2024-01-01T00:00:01Z"}
EOF

  # Verify hook runs the index command (manually call indexer with --session)
  # In real environment, hook would do this automatically
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --session "$SESSION_ID" > /dev/null 2>&1

  echo "  6. Verifying session was indexed..."
  assert_summary_exists "$new_conv" || return 1

  echo "  7. Testing search functionality..."
  local search_result=$(cd "$SCRIPT_DIR" && "$SCRIPT_DIR/search-conversations" "TDD" 2>/dev/null || echo "")
  if [ -z "$search_result" ]; then
    echo -e "${RED}❌ Search returned no results${NC}"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 2: Existing Hook (merge)
# ============================================================================

test_scenario_2_existing_hook_merge() {
  echo "  1. Creating existing hook..."
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Existing hook
echo "Existing hook running"
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  echo "  2. Installing with merge option..."
  echo "m" | "$INSTALL_HOOK" > /dev/null 2>&1 || true

  echo "  3. Verifying backup created..."
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -lt 1 ]; then
    echo -e "${RED}❌ No backup created${NC}"
    return 1
  fi

  echo "  4. Verifying merge preserved existing content..."
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "Existing hook running" || return 1

  echo "  5. Verifying indexer was appended..."
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  echo "  6. Testing merged hook runs both parts..."
  local conv_file=$(create_test_conversation "merge-project" "merge-conv")
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  export SESSION_ID="merge-session-$(date +%s)"
  local hook_output=$("$HOME/.claude/hooks/sessionEnd" 2>&1)

  if ! echo "$hook_output" | grep -q "Existing hook running"; then
    echo -e "${RED}❌ Existing hook logic not executed${NC}"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 3: Recovery (verify/repair)
# ============================================================================

test_scenario_3_recovery_verify_repair() {
  echo "  1. Creating conversations and indexing..."
  local conv1=$(create_test_conversation "recovery-project" "conv-1")
  local conv2=$(create_test_conversation "recovery-project" "conv-2")

  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  echo "  2. Verifying summaries exist..."
  assert_summary_exists "$conv1" || return 1
  assert_summary_exists "$conv2" || return 1

  echo "  3. Deleting summary to simulate missing file..."
  # Delete from archive (where summaries are stored)
  local archive_conv1=$(echo "$conv1" | sed "s|/.claude/projects/|/.clank/conversation-archive/|")
  rm "${archive_conv1%.jsonl}-summary.txt"

  echo "  4. Running verify (should detect missing)..."
  local verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  if ! echo "$verify_output" | grep -q "Missing summaries: 1"; then
    echo -e "${RED}❌ Verify did not detect missing summary${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  echo "  5. Running repair..."
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --repair > /dev/null 2>&1

  echo "  6. Verifying summary was regenerated..."
  assert_summary_exists "$conv1" || return 1

  echo "  7. Running verify again (should be clean)..."
  verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  # Verify should report no missing issues
  if ! echo "$verify_output" | grep -q "Missing summaries: 0"; then
    echo -e "${RED}❌ Verify still reports missing issues after repair${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 4: Change Detection
# ============================================================================

test_scenario_4_change_detection() {
  echo "  1. Creating and indexing conversation..."
  local conv=$(create_test_conversation "change-project" "conv-1")

  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" > /dev/null 2>&1

  echo "  2. Verifying initial index..."
  assert_summary_exists "$conv" || return 1

  echo "  3. Modifying conversation (adding exchange)..."
  # Wait to ensure different mtime
  sleep 1

  # Modify the archive file (that's what verify checks)
  local archive_conv=$(echo "$conv" | sed "s|/.claude/projects/|/.clank/conversation-archive/|")
  cat >> "$archive_conv" <<'EOF'
{"type":"user","message":{"role":"user","content":"Tell me more about TDD"},"timestamp":"2024-01-01T00:00:02Z"}
{"type":"assistant","message":{"role":"assistant","content":"TDD has three phases: Red, Green, Refactor."},"timestamp":"2024-01-01T00:00:03Z"}
EOF

  echo "  4. Running verify (should detect outdated)..."
  local verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  if ! echo "$verify_output" | grep -q "Outdated files: 1"; then
    echo -e "${RED}❌ Verify did not detect outdated file${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  echo "  5. Running repair (should re-index)..."
  cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --repair > /dev/null 2>&1

  echo "  6. Verifying conversation is up to date..."
  verify_output=$(cd "$SCRIPT_DIR" && "$INDEX_CONVERSATIONS" --verify 2>&1)

  if ! echo "$verify_output" | grep -q "Outdated files: 0"; then
    echo -e "${RED}❌ File still outdated after repair${NC}"
    echo "Verify output: $verify_output"
    return 1
  fi

  echo "  7. Verifying new content is searchable..."
  local search_result=$(cd "$SCRIPT_DIR" && "$SCRIPT_DIR/search-conversations" "Red Green Refactor" 2>/dev/null || echo "")
  if [ -z "$search_result" ]; then
    echo -e "${RED}❌ New content not found in search${NC}"
    return 1
  fi

  return 0
}

# ============================================================================
# Scenario 5: Subagent Workflow (Manual Testing Required)
# ============================================================================

test_scenario_5_subagent_workflow_docs() {
  echo "  This scenario requires manual testing with a live subagent."
  echo "  Automated checks:"

  echo "  1. Verifying search-agent template exists..."
  local template_file="$SCRIPT_DIR/prompts/search-agent.md"
  assert_file_exists "$template_file" || return 1

  echo "  2. Verifying template has required sections..."
  assert_file_contains "$template_file" "### Summary" || return 1
  assert_file_contains "$template_file" "### Sources" || return 1
  assert_file_contains "$template_file" "### For Follow-Up" || return 1

  echo ""
  echo -e "${YELLOW}  MANUAL TESTING REQUIRED:${NC}"
  echo "  To complete Scenario 5 testing:"
  echo "  1. Start a new Claude Code session"
  echo "  2. Ask about a past conversation topic"
  echo "  3. Dispatch subagent using: skills/collaboration/remembering-conversations/tool/prompts/search-agent.md"
  echo "  4. Verify synthesis is 200-1000 words"
  echo "  5. Verify all sources include: project, date, file path, status"
  echo "  6. Ask follow-up question to test iterative refinement"
  echo "  7. Verify no raw conversations loaded into main context"
  echo ""

  return 0
}

# ============================================================================
# Run All Tests
# ============================================================================

echo "=========================================="
echo "  End-to-End Deployment Testing"
echo "=========================================="
echo ""
echo "Testing deployment scenarios from:"
echo "  docs/plans/2025-10-07-deployment-plan.md"
echo ""

run_test "Scenario 1: Fresh Installation" test_scenario_1_fresh_install
run_test "Scenario 2: Existing Hook (merge)" test_scenario_2_existing_hook_merge
run_test "Scenario 3: Recovery (verify/repair)" test_scenario_3_recovery_verify_repair
run_test "Scenario 4: Change Detection" test_scenario_4_change_detection
run_test "Scenario 5: Subagent Workflow (docs check)" test_scenario_5_subagent_workflow_docs

echo ""
echo "=========================================="
echo -e "  Test Results: ${GREEN}$TESTS_PASSED${NC}/${TESTS_RUN} passed"
echo "=========================================="

if [ $TESTS_PASSED -eq $TESTS_RUN ]; then
  echo -e "${GREEN}✅ All tests passed!${NC}"
  exit 0
else
  echo -e "${RED}❌ Some tests failed${NC}"
  exit 1
fi
226
skills/collaboration/remembering-conversations/tool/test-install-hook.sh
Executable file
@@ -0,0 +1,226 @@
#!/bin/bash
# Test suite for install-hook script

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
INSTALL_HOOK="$SCRIPT_DIR/install-hook"

# Test counter
TESTS_RUN=0
TESTS_PASSED=0

# Helper functions
setup_test() {
  TEST_DIR=$(mktemp -d)
  export HOME="$TEST_DIR"
  mkdir -p "$HOME/.claude/hooks"
}

cleanup_test() {
  if [ -n "$TEST_DIR" ] && [ -d "$TEST_DIR" ]; then
    rm -rf "$TEST_DIR"
  fi
}

assert_file_exists() {
  if [ ! -f "$1" ]; then
    echo "❌ FAIL: File does not exist: $1"
    return 1
  fi
  return 0
}

assert_file_not_exists() {
  if [ -f "$1" ]; then
    echo "❌ FAIL: File should not exist: $1"
    return 1
  fi
  return 0
}

assert_file_executable() {
  if [ ! -x "$1" ]; then
    echo "❌ FAIL: File is not executable: $1"
    return 1
  fi
  return 0
}

assert_file_contains() {
  if ! grep -q "$2" "$1"; then
    echo "❌ FAIL: File $1 does not contain: $2"
    return 1
  fi
  return 0
}

run_test() {
  local test_name="$1"
  local test_func="$2"

  TESTS_RUN=$((TESTS_RUN + 1))
  echo "Running test: $test_name"

  setup_test

  if $test_func; then
    echo "✓ PASS: $test_name"
    TESTS_PASSED=$((TESTS_PASSED + 1))
  else
    echo "❌ FAIL: $test_name"
  fi

  cleanup_test
  echo ""
}

# Test 1: Fresh installation with no existing hook
test_fresh_installation() {
  # Run installer with no input (non-interactive fresh install)
  if [ ! -x "$INSTALL_HOOK" ]; then
    echo "❌ install-hook script not found or not executable"
    return 1
  fi

  "$INSTALL_HOOK" 2>&1 || true

  # Verify hook was created
  assert_file_exists "$HOME/.claude/hooks/sessionEnd" || return 1

  # Verify hook is executable
  assert_file_executable "$HOME/.claude/hooks/sessionEnd" || return 1

  # Verify hook contains indexer reference
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  return 0
}

# Test 2: Merge with existing hook (user chooses merge)
test_merge_with_existing_hook() {
  # Create existing hook
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Existing hook content
echo "Existing hook running"
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  # Run installer and choose merge
  echo "m" | "$INSTALL_HOOK" 2>&1 || true

  # Verify backup was created
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -lt 1 ]; then
    echo "❌ No backup created"
    return 1
  fi

  # Verify original content is preserved
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "Existing hook running" || return 1

  # Verify indexer was appended
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  return 0
}

# Test 3: Replace with existing hook (user chooses replace)
test_replace_with_existing_hook() {
  # Create existing hook
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Old hook to be replaced
echo "Old hook"
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  # Run installer and choose replace
  echo "r" | "$INSTALL_HOOK" 2>&1 || true

  # Verify backup was created
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -lt 1 ]; then
    echo "❌ No backup created"
    return 1
  fi

  # Verify old content is gone
  if grep -q "Old hook" "$HOME/.claude/hooks/sessionEnd"; then
    echo "❌ Old hook content still present"
    return 1
  fi

  # Verify new hook contains indexer
  assert_file_contains "$HOME/.claude/hooks/sessionEnd" "remembering-conversations.*index-conversations" || return 1

  return 0
}

# Test 4: Detection of already-installed indexer (idempotent)
test_already_installed_detection() {
  # Create hook with indexer already installed
  cat > "$HOME/.claude/hooks/sessionEnd" <<'EOF'
#!/bin/bash
# Auto-index conversations (remembering-conversations skill)
INDEXER="$HOME/.claude/skills/collaboration/remembering-conversations/tool/index-conversations"
if [ -n "$SESSION_ID" ] && [ -x "$INDEXER" ]; then
  "$INDEXER" --session "$SESSION_ID" > /dev/null 2>&1 &
fi
EOF
  chmod +x "$HOME/.claude/hooks/sessionEnd"

  # Run installer - should detect and exit
  local output=$("$INSTALL_HOOK" 2>&1 || true)

  # Verify it detected existing installation
  if ! echo "$output" | grep -q "already installed"; then
    echo "❌ Did not detect existing installation"
    echo "Output: $output"
    return 1
  fi

  # Verify no backup was created (since nothing changed)
  local backup_count=$(ls -1 "$HOME/.claude/hooks/sessionEnd.backup."* 2>/dev/null | wc -l)
  if [ "$backup_count" -gt 0 ]; then
    echo "❌ Backup created when it shouldn't have been"
    return 1
  fi

  return 0
}

# Test 5: Executable permissions are set
test_executable_permissions() {
  # Run installer
  "$INSTALL_HOOK" 2>&1 || true

  # Verify hook is executable
  assert_file_executable "$HOME/.claude/hooks/sessionEnd" || return 1

  return 0
}

# Run all tests
echo "=========================================="
echo "Testing install-hook script"
echo "=========================================="
echo ""

run_test "Fresh installation with no existing hook" test_fresh_installation
run_test "Merge with existing hook" test_merge_with_existing_hook
run_test "Replace with existing hook" test_replace_with_existing_hook
run_test "Detection of already-installed indexer" test_already_installed_detection
run_test "Executable permissions are set" test_executable_permissions

echo "=========================================="
echo "Test Results: $TESTS_PASSED/$TESTS_RUN passed"
echo "=========================================="

if [ $TESTS_PASSED -eq $TESTS_RUN ]; then
  exit 0
else
  exit 1
fi
@@ -0,0 +1,14 @@
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "./dist",
    "rootDir": "./src"
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}