Compare commits: 4021323984...main (10 commits)

| SHA1 |
|------|
| 13ca7ddbed |
| 42bc9aff42 |
| efc5067612 |
| 3286f609c4 |
| 6e45a8965a |
| bf7cfc8224 |
| ad8ca2b1f7 |
| 3867736858 |
| aa377bc7ed |
| 223a49ac87 |
.gemini/commands/speckit.analyze.toml (new file)

@@ -0,0 +1,188 @@
description = "Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation."

prompt = """
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (the user must explicitly approve it before any follow-up editing commands are invoked manually).

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).

For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
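The parse-and-derive step above can be sketched in shell. The key names (`FEATURE_DIR`, `AVAILABLE_DOCS`) come from the description; the hard-coded payload is a stand-in for real `check-prerequisites.sh` output, and the `sed`-based extraction is an assumption (a JSON tool like `jq` would be used in practice if available):

```shell
# Sketch of step 1; a sample payload stands in for the real script output.
json='{"FEATURE_DIR":"/repo/specs/001-demo","AVAILABLE_DOCS":["spec.md","plan.md","tasks.md"]}'

# Extract FEATURE_DIR (naive sed parse; jq is the robust choice if present)
FEATURE_DIR=$(printf '%s' "$json" | sed -n 's/.*"FEATURE_DIR":"\([^"]*\)".*/\1/p')

# Abort early if parsing failed
[ -n "$FEATURE_DIR" ] || { echo "error: run the missing prerequisite command first" >&2; exit 1; }

# Derive the absolute artifact paths
SPEC="$FEATURE_DIR/spec.md"
PLAN="$FEATURE_DIR/plan.md"
TASKS="$FEATURE_DIR/tasks.md"
echo "$SPEC"
```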
### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
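The slug derivation above can be sketched as a small shell helper. The exact normalization rules (lowercase, non-alphanumeric runs collapsed to hyphens) are an assumption; the prompt only fixes the example mapping:

```shell
# Sketch: derive a stable requirement key from an imperative phrase.
# Normalization details are assumed; only the example mapping is given above.
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//'
}

slugify "User can upload file"   # -> user-can-upload-file
```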
### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit to 50 findings total; aggregate the remainder in an overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation

#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)

#### C. Underspecification

- Requirements with verbs but missing object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

#### F. Inconsistency

- Terminology drift (same concept named differently across files)
- Data entities referenced in plan but absent in spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)
### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order

### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count
### 7. Provide Next Actions

At the end of the report, output a concise Next Actions block:

- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)

## Context

{{args}}

"""
.gemini/commands/speckit.checklist.toml (new file)

@@ -0,0 +1,298 @@
description = "Generate a custom checklist for the current feature based on user requirements."

prompt = """
---
description: Generate a custom checklist for the current feature based on user requirements.
---

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Execution Steps

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
   - All file paths must be absolute.
   - For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:

   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")

   Question formatting rules:

   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to A–E options maximum; omit the table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction is impossible:

   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow‑ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if the user explicitly declines more.

3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
   - Derive checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by the user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if exists): Technical details, dependencies
   - tasks.md (if exists): Implementation tasks

   **Context Loading Strategy**:

   - Load only necessary portions relevant to active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate a unique checklist filename:
     - Use a short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If the file exists, append to the existing file
   - Number items sequentially starting from CHK001
   - Each `/speckit.checklist` run creates a NEW file (never overwrites existing checklists)
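A minimal sketch of the file-creation and numbering rules above. The `checklists/` path and `CHK###` pattern come from the text; the `mktemp` stand-in for FEATURE_DIR and the grep-based scan for the last used ID (to continue numbering when appending) are assumptions:

```shell
# Sketch: create the checklists dir and compute the next sequential CHK id.
FEATURE_DIR=$(mktemp -d)          # stand-in for the real feature dir
mkdir -p "$FEATURE_DIR/checklists"
file="$FEATURE_DIR/checklists/ux.md"

# Pretend the file already holds two items (the append case)
printf -- '- [ ] CHK001 - sample item\n- [ ] CHK002 - sample item\n' > "$file"

# Find the highest existing id and format the next one
last=$(grep -o 'CHK[0-9]\{3\}' "$file" | sort | tail -n 1 | sed 's/CHK//')
next=$(printf 'CHK%03d' $((10#${last:-0} + 1)))
echo "$next"   # CHK003
```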
**CORE PRINCIPLE - Test the Requirements, Not the Implementation**:

Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:

- **Completeness**: Are all necessary requirements present?
- **Clarity**: Are requirements unambiguous and specific?
- **Consistency**: Do requirements align with each other?
- **Measurability**: Can requirements be objectively verified?
- **Coverage**: Are all scenarios/edge cases addressed?

**Category Structure** - Group items by requirement quality dimensions:

- **Requirement Completeness** (Are all necessary requirements documented?)
- **Requirement Clarity** (Are requirements specific and unambiguous?)
- **Requirement Consistency** (Do requirements align without conflicts?)
- **Acceptance Criteria Quality** (Are success criteria measurable?)
- **Scenario Coverage** (Are all flows/cases addressed?)
- **Edge Case Coverage** (Are boundary conditions defined?)
- **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
- **Dependencies & Assumptions** (Are they documented and validated?)
- **Ambiguities & Conflicts** (What needs clarification?)

**HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

❌ **WRONG** (Testing implementation):

- "Verify landing page displays 3 episode cards"
- "Test hover states work on desktop"
- "Confirm logo click navigates home"

✅ **CORRECT** (Testing requirements quality):

- "Are the exact number and layout of featured episodes specified?" [Completeness]
- "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
- "Are hover state requirements consistent across all interactive elements?" [Consistency]
- "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
- "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
- "Are loading states defined for asynchronous episode data?" [Completeness]
- "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

**ITEM STRUCTURE**:

Each item should follow this pattern:

- Question format asking about requirement quality
- Focus on what's WRITTEN (or not written) in the spec/plan
- Include the quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
- Reference spec section `[Spec §X.Y]` when checking existing requirements
- Use the `[Gap]` marker when checking for missing requirements
**EXAMPLES BY QUALITY DIMENSION**:

Completeness:

- "Are error handling requirements defined for all API failure modes? [Gap]"
- "Are accessibility requirements specified for all interactive elements? [Completeness]"
- "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

Clarity:

- "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
- "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
- "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

Consistency:

- "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
- "Are card component requirements consistent between landing and detail pages? [Consistency]"

Coverage:

- "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
- "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
- "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

Measurability:

- "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
- "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

**Scenario Classification & Coverage** (Requirements Quality Focus):

- Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
- For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
- If a scenario class is missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
- Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

**Traceability Requirements**:

- MINIMUM: ≥80% of items MUST include at least one traceability reference
- Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
- If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
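The ≥80% floor above can be spot-checked mechanically. The marker patterns are taken from the text; the grep-based counting and the sample items are an assumed sketch, not part of the command:

```shell
# Sketch: estimate the share of checklist items carrying a traceability
# reference ([Spec §...] or a [Gap]/[Ambiguity]/[Conflict]/[Assumption] marker).
f=$(mktemp)
cat > "$f" <<'EOF'
- [ ] CHK001 - Are error response formats specified for all failure scenarios? [Gap]
- [ ] CHK002 - Is "fast loading" quantified with timing thresholds? [Clarity, Spec §NFR-2]
- [ ] CHK003 - Are hover state requirements consistent across elements?
- [ ] CHK004 - Is the "always available API" assumption validated? [Assumption]
EOF

total=$(grep -c 'CHK[0-9]' "$f")
traced=$(grep -c -e 'Spec §' -e '\[Gap' -e '\[Ambiguity' -e '\[Conflict' -e '\[Assumption' "$f")
pct=$((100 * traced / total))
echo "${pct}%"   # 75% here, i.e. below the 80% minimum
```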
**Surface & Resolve Issues** (Requirements Quality Problems):

Ask questions about the requirements themselves:

- Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
- Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
- Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
- Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
- Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

**Content Consolidation**:

- Soft cap: If raw candidate items > 40, prioritize by risk/impact
- Merge near-duplicates checking the same requirement aspect
- If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

**🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:

- ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
- ❌ References to code execution, user actions, system behavior
- ❌ "Displays correctly", "works properly", "functions as expected"
- ❌ "Click", "navigate", "render", "load", "execute"
- ❌ Test cases, test plans, QA procedures
- ❌ Implementation details (frameworks, APIs, algorithms)

**✅ REQUIRED PATTERNS** - These test requirements quality:

- ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
- ✅ "Is [vague term] quantified/clarified with specific criteria?"
- ✅ "Are requirements consistent between [section A] and [section B]?"
- ✅ "Can [requirement] be objectively measured/verified?"
- ✅ "Are [edge cases/scenarios] addressed in requirements?"
- ✅ "Does the spec define [missing aspect]?"

6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If the template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
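When the template is unavailable, the fallback layout described above can be generated directly. The title and meta wording here are assumptions, since the canonical layout lives in `.specify/templates/checklist-template.md`:

```shell
# Sketch of the fallback layout: H1 title, meta lines, ## category sections,
# and `- [ ] CHK###` items. Title/meta wording is assumed.
out=$(mktemp)
cat > "$out" <<'EOF'
# UX Requirements Quality Checklist

**Purpose**: Validate UX requirement quality for this feature
**Created**: via /speckit.checklist

## Requirement Clarity

- [ ] CHK001 - Is "prominent display" quantified with specific sizing/positioning? [Clarity, Spec §FR-4]
- [ ] CHK002 - Are interaction state requirements consistently defined? [Consistency]
EOF

grep -c '^- \[ \] CHK' "$out"
```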
7. **Report**: Output the full path to the created checklist, the item count, and remind the user that each run creates a new file. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/speckit.checklist` command invocation creates a checklist file using short, descriptive names unless the file already exists. This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
## Anti-Examples: What NOT To Do

**❌ WRONG - These test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - These test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: Tests if the system works correctly
- Correct: Tests if the requirements are written correctly
- Wrong: Verification of behavior
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"
"""
.gemini/commands/speckit.clarify.toml (new file)

@@ -0,0 +1,181 @@
description = "Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec."

prompt = """
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/speckit.specify` or verify the feature branch environment.
   - For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
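The quoting rule above, shown as a runnable sketch. Note that the doubled backslash in the prompt text is TOML string escaping; the shell itself sees `'\''`:

```shell
# Sketch: embedding a single quote in a single-quoted shell argument
# with the '\'' escape, versus simply double-quoting the whole thing.
escaped='I'\''m Groot'
quoted="I'm Groot"

[ "$escaped" = "$quoted" ] && echo "same argument"   # same argument
```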
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition of Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - Information is better deferred to the planning phase (note internally)
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
     - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
     - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple‑choice questions:
     - **Analyze all options** and determine the **most suitable option** based on:
       - Best practices for the project type
       - Common patterns in similar implementations
       - Risk reduction (security, performance, maintainability)
       - Alignment with any explicit project goals or constraints visible in the spec
     - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
     - Format as: `**Recommended:** Option [X] - <reasoning>`
     - Then render all options as a Markdown table:

       | Option | Description |
       |--------|-------------|
       | A | <Option A description> |
       | B | <Option B description> |
       | C | <Option C description> (add D/E as needed up to 5) |
       | Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |

     - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
   - For short‑answer style (no meaningful discrete options):
     - Provide your **suggested answer** based on best practices and context.
     - Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
     - Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
   - After the user answers:
     - If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
     - Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
     - If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
     - Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
     - All critical ambiguities resolved early (remaining queued items become unnecessary), OR
     - User signals completion ("done", "good", "no more"), OR
     - You reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at start, immediately report no critical ambiguities.
5. Integration after EACH accepted answer (incremental update approach):
   - Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
   - For the first integrated answer in this session:
     - Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
     - Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
     - Functional ambiguity → Update or add a bullet in Functional Requirements.
     - User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
     - Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
     - Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
     - Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
     - Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
   - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).
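The first-answer bookkeeping above could look like this in shell. A sketch under assumptions: it appends the headings at end of file for brevity (the step requires placement after the overview section), and the function name and spec path are illustrative:

```shell
# Append one accepted Q/A bullet, creating the session headings on first use.
# NOTE: appends at EOF; real placement must follow the spec template.
record_clarification() {
  spec="$1"; q="$2"; a="$3"
  today=$(date +%F)
  grep -q '^## Clarifications' "$spec" || printf '\n## Clarifications\n' >> "$spec"
  grep -q "^### Session $today" "$spec" || printf '\n### Session %s\n' "$today" >> "$spec"
  printf '%s\n' "- Q: $q → A: $a" >> "$spec"
}
```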
6. Validation (performed after EACH write plus final pass):
   - Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions ≤ 5.
   - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
   - No contradictory earlier statement remains (scan for now-invalid alternative choices and remove them).
   - Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
   - Terminology consistency: same canonical term used across all updated sections.
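The duplicate-bullet and quota checks are mechanically checkable, assuming session bullets all start with `- Q: ` (as in the integration step). A sketch with an illustrative function name:

```shell
# Succeeds iff there are at most 5 Q/A bullets and none are duplicated.
validate_session() {
  spec="$1"
  bullets=$(grep -c '^- Q: ' "$spec" || true)
  uniq_bullets=$(grep '^- Q: ' "$spec" | sort -u | grep -c . || true)
  [ "$bullets" -le 5 ] && [ "$bullets" -eq "$uniq_bullets" ]
}
```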
7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after questioning loop ends or early termination):
   - Number of questions asked & answered.
   - Path to updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
   - If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
   - Suggested next command.
Behavior rules:

- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: {{args}}
"""
82
.gemini/commands/speckit.constitution.toml
Normal file
@@ -0,0 +1,82 @@
description = "Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync"

prompt = """
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

Follow this execution flow:

1. Load the existing constitution template at `.specify/memory/constitution.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
   **IMPORTANT**: The user might require fewer or more principles than the template contains. If a number is specified, respect it while following the general template structure, and update the document accordingly.

2. Collect/derive values for placeholders:
   - If user input (conversation) supplies a value, use it.
   - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
   - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown, ask or mark TODO); `LAST_AMENDED_DATE` is today if changes are made, otherwise keep the previous date.
   - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
     - MAJOR: Backward-incompatible governance/principle removals or redefinitions.
     - MINOR: New principle/section added or materially expanded guidance.
     - PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose reasoning before finalizing.
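The three bump rules above can be sketched as a small helper (function name illustrative; the command itself still decides which bump type applies):

```shell
# Bump a MAJOR.MINOR.PATCH version string per the semantic rules above.
bump_version() {
  old="$1" part="$2"
  major=${old%%.*}          # text before first dot
  rest=${old#*.}
  minor=${rest%%.*}
  patch=${rest#*.}
  case "$part" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "$major.$((minor + 1)).0" ;;
    patch) echo "$major.$minor.$((patch + 1))" ;;
  esac
}
```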
3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
   - Preserve heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non‑negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert prior checklist into active validations):
   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
   - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if the constitution adds/removes mandatory sections or constraints.
   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to changed principles.

5. Produce a Sync Impact Report (prepend as an HTML comment at the top of the constitution file after update):
   - Version change: old → new
   - List of modified principles (old title → new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (✅ updated / ⚠ pending) with file paths
   - Follow-up TODOs if any placeholders are intentionally deferred.

6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - Version line matches the report.
   - Dates in ISO format (YYYY-MM-DD).
   - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD plus rationale where appropriate).

7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).

8. Output a final summary to the user with:
   - New version and bump rationale.
   - Any files flagged for manual follow-up.
   - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:

- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines for readability (ideally <100 chars), but do not hard-enforce wrapping with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version decision steps.

If critical info is missing (e.g., the ratification date is truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
"""
138
.gemini/commands/speckit.implement.toml
Normal file
@@ -0,0 +1,138 @@
description = "Execute the implementation plan by processing and executing all tasks defined in tasks.md"

prompt = """
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
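The `'I'\''m Groot'` escape works because the shell concatenates adjacent quoted segments; a quick demonstration:

```shell
# 'I'\''m Groot' = 'I' + \' + 'm Groot', concatenated by the shell.
arg='I'\''m Groot'
echo "$arg"          # I'm Groot
alt="I'm Groot"      # double quotes avoid the escape entirely
[ "$arg" = "$alt" ] && echo same
```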
2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
   - Scan all checklist files in the checklists/ directory
   - For each checklist, count:
     - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
     - Completed items: Lines matching `- [X]` or `- [x]`
     - Incomplete items: Lines matching `- [ ]`
   - Create a status table:

     ```text
     | Checklist | Total | Completed | Incomplete | Status |
     |-----------|-------|-----------|------------|--------|
     | ux.md | 12 | 12 | 0 | ✓ PASS |
     | test.md | 8 | 5 | 3 | ✗ FAIL |
     | security.md | 6 | 6 | 0 | ✓ PASS |
     ```

   - Calculate overall status:
     - **PASS**: All checklists have 0 incomplete items
     - **FAIL**: One or more checklists have incomplete items

   - **If any checklist is incomplete**:
     - Display the table with incomplete item counts
     - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
     - Wait for user response before continuing
     - If user says "no" or "wait" or "stop", halt execution
     - If user says "yes" or "proceed" or "continue", proceed to step 3

   - **If all checklists are complete**:
     - Display the table showing all checklists passed
     - Automatically proceed to step 3
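The counting rules above map directly onto `grep -c`; a minimal sketch, assuming top-level `- [ ]` items (function name illustrative):

```shell
# Count total/completed/incomplete checklist items per the patterns above.
checklist_status() {
  f="$1"
  total=$(grep -cE -e '- \[( |x|X)\]' "$f" || true)
  done_=$(grep -cE -e '- \[(x|X)\]' "$f" || true)
  echo "$f total=$total completed=$done_ incomplete=$((total - done_))"
}
```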
3. Load and analyze the implementation context:
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios

4. **Project Setup Verification**:
   - **REQUIRED**: Create/verify ignore files based on actual project setup:

   **Detection & Creation Logic**:
   - Check if the following command succeeds to determine whether the repository is a git repo (create/verify .gitignore if so):

     ```sh
     git rev-parse --git-dir 2>/dev/null
     ```

   - Check if Dockerfile* exists or Docker appears in plan.md → create/verify .dockerignore
   - Check if .eslintrc* or eslint.config.* exists → create/verify .eslintignore
   - Check if .prettierrc* exists → create/verify .prettierignore
   - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
   - Check if terraform files (*.tf) exist → create/verify .terraformignore
   - Check if helm charts are present → create/verify .helmignore
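One of the detection rules above, sketched end to end for .dockerignore (the function name and the minimal pattern set are illustrative; the full set comes from the pattern tables below):

```shell
# Create a minimal .dockerignore only when Docker artifacts are present
# and the file does not already exist.
ensure_dockerignore() {
  dir="$1"
  if ls "$dir"/Dockerfile* >/dev/null 2>&1 && [ ! -f "$dir/.dockerignore" ]; then
    printf '%s\n' 'node_modules/' '.git/' '*.log*' '.env*' > "$dir/.dockerignore"
  fi
}
```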
   **If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
   **If ignore file missing**: Create with full pattern set for detected technology

   **Common Patterns by Technology** (from plan.md tech stack):
   - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
   - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
   - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
   - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
   - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
   - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
   - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
   - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
   - **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
   - **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
   - **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
   - **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
   - **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
   - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`

   **Tool-Specific Patterns**:
   - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
   - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
   - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
   - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
   - **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`

5. Parse tasks.md structure and extract:
   - **Task phases**: Setup, Tests, Core, Integration, Polish
   - **Task dependencies**: Sequential vs parallel execution rules
   - **Task details**: ID, description, file paths, parallel markers [P]
   - **Execution flow**: Order and dependency requirements
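Assuming tasks follow the template's `- [ ] T001 [P] description` shape (an assumption about the tasks template, not stated here), the parallel markers in step 5 can be extracted with:

```shell
# List task IDs that carry the [P] parallel marker.
parallel_tasks() {
  grep -oE 'T[0-9]+ \[P\]' "$1" | cut -d' ' -f1
}
```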
6. Execute implementation following the task plan:
   - **Phase-by-phase execution**: Complete each phase before moving to the next
   - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
   - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
   - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase completes before proceeding

7. Implementation execution rules:
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: Where tests are required, write tests for contracts, entities, and integration scenarios before the code they cover
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
   - **Polish and validation**: Unit tests, performance optimization, documentation

8. Progress tracking and error handling:
   - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks and report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if implementation cannot proceed
   - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file.
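Marking a completed task off could be as small as the following sketch (the task line format is assumed from the tasks template; function name illustrative):

```shell
# Flip "- [ ] <id> ..." to "- [X] <id> ..." in the tasks file.
mark_done() {
  tmp=$(mktemp)
  # Temp file keeps the edit portable across BSD and GNU sed.
  sed "s/^- \[ \] $2 /- [X] $2 /" "$1" > "$tmp" && mv "$tmp" "$1"
}
```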
9. Completion validation:
   - Verify all required tasks are completed
   - Check that implemented features match the original specification
   - Validate that tests pass and coverage meets requirements
   - Confirm the implementation follows the technical plan
   - Report final status with summary of completed work

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
"""
85
.gemini/commands/speckit.plan.toml
Normal file
@@ -0,0 +1,85 @@
description = "Execute the implementation planning workflow using the plan template to generate design artifacts."

prompt = """
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load the IMPL_PLAN template (already copied).

3. **Execute plan workflow**: Follow the structure in the IMPL_PLAN template to:
   - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
   - Fill the Constitution Check section from the constitution
   - Evaluate gates (ERROR if violations are unjustified)
   - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
   - Phase 1: Generate data-model.md, contracts/, quickstart.md
   - Phase 1: Update agent context by running the agent script
   - Re-evaluate the Constitution Check post-design

4. **Stop and report**: The command ends after Phase 1 planning. Report branch, IMPL_PLAN path, and generated artifacts.

## Phases
### Phase 0: Outline & Research

1. **Extract unknowns from Technical Context** above:
   - For each NEEDS CLARIFICATION → research task
   - For each dependency → best practices task
   - For each integration → patterns task

2. **Generate and dispatch research agents**:

   ```text
   For each unknown in Technical Context:
     Task: "Research {unknown} for {feature context}"
   For each technology choice:
     Task: "Find best practices for {tech} in {domain}"
   ```

3. **Consolidate findings** in `research.md` using this format:
   - Decision: [what was chosen]
   - Rationale: [why chosen]
   - Alternatives considered: [what else was evaluated]

**Output**: research.md with all NEEDS CLARIFICATION resolved

### Phase 1: Design & Contracts

**Prerequisites:** `research.md` complete

1. **Extract entities from feature spec** → `data-model.md`:
   - Entity name, fields, relationships
   - Validation rules from requirements
   - State transitions if applicable

2. **Generate API contracts** from functional requirements:
   - For each user action → endpoint
   - Use standard REST/GraphQL patterns
   - Output OpenAPI/GraphQL schema to `/contracts/`

3. **Agent context update**:
   - Run `.specify/scripts/bash/update-agent-context.sh gemini`
   - The script detects which AI agent is in use
   - Update the appropriate agent-specific context file
   - Add only new technology from the current plan
   - Preserve manual additions between markers

**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file

## Key rules

- Use absolute paths
- ERROR on gate failures or unresolved clarifications
"""
253
.gemini/commands/speckit.specify.toml
Normal file
@@ -0,0 +1,253 @@
description = "Create or update the feature specification from a natural language feature description."

prompt = """
---
description: Create or update the feature specification from a natural language feature description.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `{{args}}` appears literally below. Do not ask the user to repeat it unless they provided an empty command.

Given that feature description, do this:

1. **Generate a concise short name** (2-4 words) for the branch:
   - Analyze the feature description and extract the most meaningful keywords
   - Create a 2-4 word short name that captures the essence of the feature
   - Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
   - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
   - Keep it concise but descriptive enough to understand the feature at a glance
   - Examples:
     - "I want to add user authentication" → "user-auth"
     - "Implement OAuth2 integration for the API" → "oauth2-api-integration"
     - "Create a dashboard for analytics" → "analytics-dashboard"
     - "Fix payment processing timeout bug" → "fix-payment-timeout"
2. **Check for existing branches before creating a new one**:

   a. First, fetch all remote branches to ensure we have the latest information:

      ```bash
      git fetch --all --prune
      ```

   b. Find the highest feature number across all sources for the short-name:
      - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
      - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
      - Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`

   c. Determine the next available number:
      - Extract all numbers from all three sources
      - Find the highest number N
      - Use N+1 for the new branch number

   d. Run the script `.specify/scripts/bash/create-new-feature.sh` with the calculated number and short-name:
      - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
      - Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
      - PowerShell example: `.specify/scripts/powershell/create-new-feature.ps1 -Json -Number 5 -ShortName "user-auth" "Add user authentication"`

   **IMPORTANT**:
   - Check all three sources (remote branches, local branches, specs directories) to find the highest number
   - Only match branches/directories with the exact short-name pattern
   - If no existing branches/directories are found with this short-name, start with number 1
   - You must only ever run this script once per feature
   - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
   - The JSON output will contain BRANCH_NAME and SPEC_FILE paths
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot")
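The highest-number search in steps b and c can be sketched as a filter over the combined listings (the three `git`/`ls` commands above would supply its stdin; function name illustrative):

```shell
# Given candidate names like "3-user-auth" (one per line) on stdin,
# print the next available feature number.
next_number() {
  n=$(grep -oE '^[0-9]+' | sort -n | tail -1)
  echo $(( ${n:-0} + 1 ))
}
```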
3. Load `.specify/templates/spec-template.md` to understand required sections.

4. Follow this execution flow:

   1. Parse user description from Input
      If empty: ERROR "No feature description provided"
   2. Extract key concepts from description
      Identify: actors, actions, data, constraints
   3. For unclear aspects:
      - Make informed guesses based on context and industry standards
      - Only mark with [NEEDS CLARIFICATION: specific question] if:
        - The choice significantly impacts feature scope or user experience
        - Multiple reasonable interpretations exist with different implications
        - No reasonable default exists
      - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
      - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
   4. Fill User Scenarios & Testing section
      If no clear user flow: ERROR "Cannot determine user scenarios"
   5. Generate Functional Requirements
      Each requirement must be testable
      Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
   6. Define Success Criteria
      Create measurable, technology-agnostic outcomes
      Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
      Each criterion must be verifiable without implementation details
   7. Identify Key Entities (if data involved)
   8. Return: SUCCESS (spec ready for planning)

5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:

   a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:

      ```markdown
      # Specification Quality Checklist: [FEATURE NAME]

      **Purpose**: Validate specification completeness and quality before proceeding to planning
      **Created**: [DATE]
      **Feature**: [Link to spec.md]

      ## Content Quality

      - [ ] No implementation details (languages, frameworks, APIs)
      - [ ] Focused on user value and business needs
      - [ ] Written for non-technical stakeholders
      - [ ] All mandatory sections completed

      ## Requirement Completeness

      - [ ] No [NEEDS CLARIFICATION] markers remain
      - [ ] Requirements are testable and unambiguous
      - [ ] Success criteria are measurable
      - [ ] Success criteria are technology-agnostic (no implementation details)
      - [ ] All acceptance scenarios are defined
      - [ ] Edge cases are identified
      - [ ] Scope is clearly bounded
      - [ ] Dependencies and assumptions identified

      ## Feature Readiness

      - [ ] All functional requirements have clear acceptance criteria
      - [ ] User scenarios cover primary flows
      - [ ] Feature meets measurable outcomes defined in Success Criteria
      - [ ] No implementation details leak into specification

      ## Notes

      - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
      ```
b. **Run Validation Check**: Review the spec against each checklist item:
   - For each item, determine whether it passes or fails
   - Document specific issues found (quote relevant spec sections)

c. **Handle Validation Results**:

   - **If all items pass**: Mark the checklist complete and proceed to step 7
   - **If items fail (excluding [NEEDS CLARIFICATION])**:
     1. List the failing items and specific issues
     2. Update the spec to address each issue
     3. Re-run validation until all items pass (max 3 iterations)
     4. If still failing after 3 iterations, document remaining issues in the checklist notes and warn the user
   - **If [NEEDS CLARIFICATION] markers remain**:
     1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
     2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
     3. For each clarification needed (max 3), present options to the user in this format:
```markdown
## Question [N]: [Topic]

**Context**: [Quote relevant spec section]

**What we need to know**: [Specific question from NEEDS CLARIFICATION marker]

**Suggested Answers**:

| Option | Answer | Implications |
|--------|--------|--------------|
| A | [First suggested answer] | [What this means for the feature] |
| B | [Second suggested answer] | [What this means for the feature] |
| C | [Third suggested answer] | [What this means for the feature] |
| Custom | Provide your own answer | [Explain how to provide custom input] |

**Your choice**: _[Wait for user response]_
```
     4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
        - Use consistent spacing with pipes aligned
        - Each cell should have spaces around its content: `| Content |`, not `|Content|`
        - The header separator must have at least 3 dashes: `|--------|`
        - Test that the table renders correctly in markdown preview
     5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
     6. Present all questions together before waiting for responses
     7. Wait for the user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
     8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
     9. Re-run validation after all clarifications are resolved

d. **Update Checklist**: After each validation iteration, update the checklist file with the current pass/fail status

7. Report completion with the branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).

**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.

## General Guidelines
- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Write for business stakeholders, not developers.
- DO NOT embed checklists in the spec itself; checklist generation is a separate command.

### Section Requirements

- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave it as "N/A")

### For AI Generation

When creating this spec from a user prompt:

1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use them only for critical decisions that:
   - Significantly impact feature scope or user experience
   - Have multiple reasonable interpretations with different implications
   - Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
   - Feature scope and boundaries (include/exclude specific use cases)
   - User types and permissions (if multiple conflicting interpretations are possible)
   - Security/compliance requirements (when legally/financially significant)

**Examples of reasonable defaults** (don't ask about these):

- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise

### Success Criteria Guidelines

Success criteria must be:

1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from the user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details

**Good examples**:

- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"

**Bad examples** (implementation-focused):

- "API response time is under 200ms" (too technical; use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail; use a user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
"""
132
.gemini/commands/speckit.tasks.toml
Normal file
@@ -0,0 +1,132 @@
description = "Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts."

prompt = """
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load design documents**: Read from FEATURE_DIR:
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
   - Note: Not all projects have all documents. Generate tasks based on what's available.

3. **Execute task generation workflow**:
   - Load plan.md and extract the tech stack, libraries, and project structure
   - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
   - If data-model.md exists: Extract entities and map them to user stories
   - If contracts/ exists: Map endpoints to user stories
   - If research.md exists: Extract decisions for setup tasks
   - Generate tasks organized by user story (see Task Generation Rules below)
   - Generate a dependency graph showing user story completion order
   - Create parallel execution examples per user story
   - Validate task completeness (each user story has all needed tasks and is independently testable)

4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as the structure, and fill it with:
   - The correct feature name from plan.md
   - Phase 1: Setup tasks (project initialization)
   - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
   - Phase 3+: One phase per user story (in priority order from spec.md)
   - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
   - Final Phase: Polish & cross-cutting concerns
   - All tasks must follow the strict checklist format (see Task Generation Rules below)
   - Clear file paths for each task
   - A Dependencies section showing story completion order
   - Parallel execution examples per story
   - An implementation strategy section (MVP first, incremental delivery)

5. **Report**: Output the path to the generated tasks.md and a summary:
   - Total task count
   - Task count per user story
   - Parallel opportunities identified
   - Independent test criteria for each story
   - Suggested MVP scope (typically just User Story 1)
   - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)

Context for task generation: {{args}}

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
## Task Generation Rules

**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.

**Tests are OPTIONAL**: Only generate test tasks if they are explicitly requested in the feature specification or if the user asks for a TDD approach.

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```

**Format Components**:

1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if the task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
   - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
   - Setup phase: NO story label
   - Foundational phase: NO story label
   - User Story phases: MUST have a story label
   - Polish phase: NO story label
5. **Description**: Clear action with an exact file path

**Examples**:

- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
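The checkbox, ID, and label rules above can be enforced mechanically. The sketch below is a minimal, hypothetical validator — it is not part of the renamer or spec-kit codebases, and the regex is an assumption derived from the rules and examples listed here. Note that a pattern like this cannot verify that the description really ends with a valid file path; it only checks the structural prefix.

```go
package main

import (
	"fmt"
	"regexp"
)

// taskLine matches the required checklist shape:
// "- [ ] T001 [P] [US1] Description", with [P] and [US#] optional.
// It does NOT confirm the description contains a real file path.
var taskLine = regexp.MustCompile(`^- \[ \] T\d{3} (?:\[P\] )?(?:\[US\d+\] )?\S.*$`)

func validTask(line string) bool {
	return taskLine.MatchString(line)
}

func main() {
	samples := []string{
		"- [ ] T012 [P] [US1] Create User model in src/models/user.py",
		"T001 [US1] Create model", // missing checkbox
	}
	for _, s := range samples {
		fmt.Printf("%q -> %v\n", s, validTask(s))
	}
}
```

Running this prints `true` for the well-formed line and `false` for the one missing its checkbox.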
### Task Organization

1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
   - Each user story (P1, P2, P3...) gets its own phase
   - Map all related components to their story:
     - Models needed for that story
     - Services needed for that story
     - Endpoints/UI needed for that story
     - If tests requested: Tests specific to that story
   - Mark story dependencies (most stories should be independent)

2. **From Contracts**:
   - Map each contract/endpoint → the user story it serves
   - If tests requested: Each contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
   - Map each entity to the user story(ies) that need it
   - If an entity serves multiple stories: Put it in the earliest story or the Setup phase
   - Relationships → service layer tasks in the appropriate story phase

4. **From Setup/Infrastructure**:
   - Shared infrastructure → Setup phase (Phase 1)
   - Foundational/blocking tasks → Foundational phase (Phase 2)
   - Story-specific setup → within that story's phase

### Phase Structure

- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
  - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
  - Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns
"""
7
.gitignore
vendored
@@ -5,6 +5,7 @@
# Dependency directories
vendor/
node_modules/

# IDE and editor clutter
.vscode/
@@ -17,7 +18,13 @@ Thumbs.db
# Temporary files
*.tmp
*.swp
*.log

# Environment files
.env
.env.*

# Build outputs
dist/
build/
coverage/
42
.renamer.example
Normal file
@@ -0,0 +1,42 @@
# Example AI vendor credentials for renamer.
# Copy this file to ~/.config/.renamer/.renamer (or the path specified by
# RENAMER_CONFIG_DIR) and replace the placeholder values with real tokens.

# OpenAI (gpt-4o, o1, ChatGPT)
OPENAI_TOKEN=sk-openai-xxxxxxxxxxxxxxxxxxxxxxxx

# Anthropic (Claude models)
ANTHROPIC_TOKEN=sk-anthropic-xxxxxxxxxxxxxxxx

# Google (Gemini, LearnLM, PaLM)
GOOGLE_TOKEN=ya29.xxxxxxxxxxxxxxxxxxxxxxxx

# Mistral AI (Mistral, Mixtral, Ministral)
MISTRAL_TOKEN=sk-mistral-xxxxxxxxxxxxxxxx

# Cohere (Command family)
COHERE_TOKEN=sk-cohere-xxxxxxxxxxxxxxxx

# Moonshot AI (Moonshot models)
MOONSHOT_TOKEN=sk-moonshot-xxxxxxxxxxxxxxxx

# Zhipu AI (GLM series)
ZHIPU_TOKEN=sk-zhipu-xxxxxxxxxxxxxxxx

# Alibaba DashScope (Qwen)
ALIBABA_TOKEN=sk-dashscope-xxxxxxxxxxxxxxxx

# Baidu Wenxin/ERNIE
BAIDU_TOKEN=sk-baidu-xxxxxxxxxxxxxxxx

# MiniMax (ABAB)
MINIMAX_TOKEN=sk-minimax-xxxxxxxxxxxxxxxx

# ByteDance Doubao
BYTEDANCE_TOKEN=sk-bytedance-xxxxxxxxxxxxxxxx

# DeepSeek
DEEPSEEK_TOKEN=sk-deepseek-xxxxxxxxxxxxxxxx

# xAI Grok
XAI_TOKEN=sk-xai-xxxxxxxxxxxxxxxx
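The file above is a flat `KEY=VALUE` list with `#` comments. A minimal Go sketch of parsing that format might look like the following; the function name `parseCredentials` is hypothetical and this is not the renamer implementation, just an illustration of the file's structure.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCredentials reads KEY=VALUE pairs in the .renamer example format,
// skipping blank lines and "#" comment lines. Illustrative sketch only.
func parseCredentials(content string) map[string]string {
	creds := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		// Split on the first "=" only, so values may contain "=".
		if key, value, ok := strings.Cut(line, "="); ok {
			creds[strings.TrimSpace(key)] = strings.TrimSpace(value)
		}
	}
	return creds
}

func main() {
	sample := "# OpenAI\nOPENAI_TOKEN=sk-openai-123\n\nANTHROPIC_TOKEN=sk-anthropic-456\n"
	creds := parseCredentials(sample)
	fmt.Println(creds["OPENAI_TOKEN"])    // sk-openai-123
	fmt.Println(creds["ANTHROPIC_TOKEN"]) // sk-anthropic-456
}
```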
@@ -11,6 +11,8 @@ Auto-generated from all feature plans. Last updated: 2025-10-29
- Go 1.24 + `spf13/cobra`, `spf13/pflag`, internal traversal/history/output packages (005-add-insert-command)
- Go 1.24 + `spf13/cobra`, `spf13/pflag`, Go `regexp` (RE2 engine), internal traversal/history/output packages (006-add-regex-command)
- Local filesystem and `.renamer` ledger files (006-add-regex-command)
- Go 1.24 (CLI), Node.js 20 + TypeScript (Google Genkit workflow) + `spf13/cobra`, internal traversal/history/output packages, Google Genkit SDK, OpenAI-compatible HTTP client for fallbacks (008-ai-rename-prompt)
- Local filesystem plus `.renamer` append-only ledger (008-ai-rename-prompt)

## Project Structure

@@ -43,9 +45,9 @@ tests/
- Smoke: `scripts/smoke-test-replace.sh`, `scripts/smoke-test-remove.sh`

## Recent Changes
- 008-ai-rename-prompt: Added Go 1.24 (CLI), Node.js 20 + TypeScript (Google Genkit workflow) + `spf13/cobra`, internal traversal/history/output packages, Google Genkit SDK, OpenAI-compatible HTTP client for fallbacks
- 001-sequence-numbering: Added Go 1.24 + `spf13/cobra`, `spf13/pflag`, internal traversal/history/output packages
- 006-add-regex-command: Added Go 1.24 + `spf13/cobra`, `spf13/pflag`, Go `regexp` (RE2 engine), internal traversal/history/output packages
- 005-add-insert-command: Added Go 1.24 + `spf13/cobra`, `spf13/pflag`, internal traversal/history/output packages

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->
304
cmd/ai.go
Normal file
@@ -0,0 +1,304 @@
package cmd

import (
	"bufio"
	"context"
	"errors"
	"fmt"
	"io"
	"io/fs"
	"path/filepath"
	"sort"
	"strings"

	"github.com/spf13/cobra"

	"github.com/rogeecn/renamer/internal/ai"
	"github.com/rogeecn/renamer/internal/listing"
	"github.com/rogeecn/renamer/internal/traversal"
)

const maxAIFileCount = 200

func newAICommand() *cobra.Command {
	var prompt string
	var sequenceSeparator string

	cmd := &cobra.Command{
		Use:   "ai",
		Short: "Generate AI-assisted rename suggestions",
		Long:  "Preview rename suggestions proposed by the integrated AI assistant before applying changes.",
		RunE: func(cmd *cobra.Command, args []string) error {
			scope, err := listing.ScopeFromCmd(cmd)
			if err != nil {
				return err
			}

			autoApply, err := lookupBool(cmd, "yes")
			if err != nil {
				return err
			}
			dryRun, err := lookupBool(cmd, "dry-run")
			if err != nil {
				return err
			}
			if dryRun && autoApply {
				return errors.New("--dry-run cannot be combined with --yes; remove one of them")
			}

			files, err := collectScopeEntries(cmd.Context(), scope)
			if err != nil {
				return err
			}
			if len(files) == 0 {
				fmt.Fprintln(cmd.OutOrStdout(), "No files matched the current scope.")
				return nil
			}

			if len(files) > maxAIFileCount {
				return fmt.Errorf("scope contains %d files; reduce to %d or fewer before running ai preview", len(files), maxAIFileCount)
			}

			if sequenceSeparator == "" {
				sequenceSeparator = "."
			}

			client := ai.NewClient()
			session := ai.NewSession(files, prompt, sequenceSeparator, client)

			reader := bufio.NewReader(cmd.InOrStdin())
			out := cmd.OutOrStdout()

			for {
				output, validation, err := session.Generate(cmd.Context())
				if err != nil {
					return err
				}

				if err := ai.PrintPreview(out, output.Suggestions, validation); err != nil {
					return err
				}

				printSessionSummary(out, session)

				if len(validation.Conflicts) > 0 {
					fmt.Fprintln(out, "Conflicts detected. Adjust guidance or scope before proceeding.")
				}

				if autoApply {
					if len(validation.Conflicts) > 0 {
						return errors.New("preview contains conflicts; refine the prompt or scope before using --yes")
					}
					session.RecordAcceptance()
					entry, err := ai.Apply(cmd.Context(), scope.WorkingDir, output.Suggestions, validation, ai.ApplyMetadata{
						Prompt:            session.CurrentPrompt(),
						PromptHistory:     session.PromptHistory(),
						Notes:             session.Notes(),
						Model:             session.Model(),
						SequenceSeparator: session.SequenceSeparator(),
					}, out)
					if err != nil {
						return err
					}
					fmt.Fprintf(out, "Applied %d rename(s). Ledger updated.\n", len(entry.Operations))
					return nil
				}

				action, err := readSessionAction(reader, out, len(validation.Conflicts) == 0)
				if err != nil {
					return err
				}

				switch action {
				case actionQuit:
					fmt.Fprintln(out, "Session ended without applying changes.")
					return nil
				case actionAccept:
					if len(validation.Conflicts) > 0 {
						fmt.Fprintln(out, "Cannot accept preview while conflicts remain. Resolve them first.")
						continue
					}
					if dryRun {
						fmt.Fprintln(out, "Dry-run mode active; no changes were applied.")
						return nil
					}
					applyNow, err := confirmApply(reader, out)
					if err != nil {
						return err
					}
					if !applyNow {
						fmt.Fprintln(out, "Preview accepted without applying changes.")
						return nil
					}
					session.RecordAcceptance()
					entry, err := ai.Apply(cmd.Context(), scope.WorkingDir, output.Suggestions, validation, ai.ApplyMetadata{
						Prompt:            session.CurrentPrompt(),
						PromptHistory:     session.PromptHistory(),
						Notes:             session.Notes(),
						Model:             session.Model(),
						SequenceSeparator: session.SequenceSeparator(),
					}, out)
					if err != nil {
						return err
					}
					fmt.Fprintf(out, "Applied %d rename(s). Ledger updated.\n", len(entry.Operations))
					return nil
				case actionRegenerate:
					session.RecordRegeneration()
					continue
				case actionEdit:
					newPrompt, err := readPrompt(reader, out, session.CurrentPrompt())
					if err != nil {
						return err
					}
					session.UpdatePrompt(newPrompt)
					continue
				}
			}
		},
	}

	cmd.Flags().StringVar(&prompt, "prompt", "", "Optional guidance for the AI suggestion engine")
	cmd.Flags().StringVar(&sequenceSeparator, "sequence-separator", ".", "Separator inserted between sequence number and generated name")

	return cmd
}

func collectScopeEntries(ctx context.Context, req *listing.ListingRequest) ([]string, error) {
	walker := traversal.NewWalker()
	extensions := make(map[string]struct{}, len(req.Extensions))
	for _, ext := range req.Extensions {
		extensions[strings.ToLower(ext)] = struct{}{}
	}

	var files []string
	err := walker.Walk(req.WorkingDir, req.Recursive, req.IncludeDirectories, req.IncludeHidden, req.MaxDepth, func(relPath string, entry fs.DirEntry, depth int) error {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		if entry.IsDir() {
			if !req.IncludeDirectories {
				return nil
			}
		} else {
			if len(extensions) > 0 {
				ext := strings.ToLower(filepath.Ext(entry.Name()))
				if _, ok := extensions[ext]; !ok {
					return nil
				}
			}
		}

		relSlash := filepath.ToSlash(relPath)
		if relSlash == "." {
			return nil
		}
		files = append(files, relSlash)
		return nil
	})
	if err != nil {
		return nil, err
	}

	sort.Strings(files)
	return files, nil
}

const (
	actionAccept     = "accept"
	actionRegenerate = "regenerate"
	actionEdit       = "edit"
	actionQuit       = "quit"
)

func readSessionAction(reader *bufio.Reader, out io.Writer, canAccept bool) (string, error) {
	prompt := "Choose action: [Enter] finish, (e) edit prompt, (r) regenerate, (q) quit: "
	if !canAccept {
		prompt = "Choose action: (e) edit prompt, (r) regenerate, (q) quit: "
	}
	fmt.Fprint(out, prompt)
	line, err := reader.ReadString('\n')
	if err != nil {
		return "", err
	}
	choice := strings.TrimSpace(strings.ToLower(line))

	if choice == "" {
		if !canAccept {
			return actionRegenerate, nil
		}
		return actionAccept, nil
	}

	switch choice {
	case "e", "edit":
		return actionEdit, nil
	case "r", "regen", "regenerate":
		return actionRegenerate, nil
	case "q", "quit", "exit":
		return actionQuit, nil
	case "accept", "a":
		if canAccept {
			return actionAccept, nil
		}
	}

	fmt.Fprintln(out, "Unrecognised choice; please try again.")
	return readSessionAction(reader, out, canAccept)
}

func readPrompt(reader *bufio.Reader, out io.Writer, current string) (string, error) {
	fmt.Fprintf(out, "Enter new prompt (leave blank to keep %q): ", current)
	line, err := reader.ReadString('\n')
	if err != nil {
		return "", err
	}
	trimmed := strings.TrimSpace(line)
	if trimmed == "" {
		return current, nil
	}
	return trimmed, nil
}

func printSessionSummary(w io.Writer, session *ai.Session) {
	history := session.PromptHistory()
	fmt.Fprintf(w, "Current prompt: %q\n", session.CurrentPrompt())
	if len(history) > 1 {
		fmt.Fprintf(w, "Prompt history (%d entries): %s\n", len(history), strings.Join(history, " -> "))
	}
	if notes := session.Notes(); len(notes) > 0 {
		fmt.Fprintf(w, "Notes: %s\n", strings.Join(notes, "; "))
	}
}

func confirmApply(reader *bufio.Reader, out io.Writer) (bool, error) {
	fmt.Fprint(out, "Apply these changes now? (y/N): ")
	line, err := reader.ReadString('\n')
	if err != nil {
		return false, err
	}
	choice := strings.TrimSpace(strings.ToLower(line))
	switch choice {
	case "y", "yes":
		return true, nil
	default:
		return false, nil
	}
}

func lookupBool(cmd *cobra.Command, name string) (bool, error) {
	if flag := cmd.Flags().Lookup(name); flag != nil {
		return cmd.Flags().GetBool(name)
	}
	if flag := cmd.InheritedFlags().Lookup(name); flag != nil {
		return cmd.InheritedFlags().GetBool(name)
	}
	return false, nil
}

func init() {
	rootCmd.AddCommand(newAICommand())
}
@@ -15,8 +15,9 @@ var rootCmd = &cobra.Command{
 	Use:   "renamer",
 	Short: "Safe, scriptable batch renaming utility",
 	Long: `Renamer provides preview-first, undoable rename operations for files and directories.
-Use subcommands like "preview", "rename", and "list" with shared scope flags to target exactly
-the paths you intend to change.`,
+Use subcommands like "list", "replace", "ai", and "undo" with shared scope flags to target
+the paths you intend to change. Each command supports --dry-run previews and ledger-backed undo
+workflows so you can safely iterate before applying changes.`,
 }

 // Execute adds all child commands to the root command and sets flags appropriately.
@@ -48,6 +49,7 @@ func NewRootCommand() *cobra.Command {
	cmd.AddCommand(NewReplaceCommand())
	cmd.AddCommand(NewRemoveCommand())
	cmd.AddCommand(NewExtensionCommand())
	cmd.AddCommand(newAICommand())
	cmd.AddCommand(newInsertCommand())
	cmd.AddCommand(newRegexCommand())
	cmd.AddCommand(newSequenceCommand())
@@ -38,6 +38,13 @@ func newUndoCommand() *cobra.Command {
		if sources, ok := entry.Metadata["sourceExtensions"].([]string); ok && len(sources) > 0 {
			fmt.Fprintf(out, "Previous sources: %s\n", strings.Join(sources, ", "))
		}
	case "ai":
		if prompt, ok := entry.Metadata["prompt"].(string); ok && prompt != "" {
			fmt.Fprintf(out, "Reverted AI batch generated from prompt %q\n", prompt)
		}
		if warnings, ok := entry.Metadata["warnings"].([]string); ok && len(warnings) > 0 {
			fmt.Fprintf(out, "Warnings during preview: %s\n", strings.Join(warnings, "; "))
		}
	case "insert":
		insertText, _ := entry.Metadata["insertText"].(string)
		positionToken, _ := entry.Metadata["positionToken"].(string)
@@ -9,3 +9,4 @@
- Document quoting guidance, `--dry-run` / `--yes` behavior, and automation scenarios for the replace command.
- Add `renamer list` subcommand with shared scope flags and plain/table output formats.
- Document global scope flags and hidden-file behavior.
- Add `renamer ai` subcommand with export/import workflow, policy enforcement flags, prompt hash telemetry, and ledger metadata for applied plans.
@@ -120,3 +120,23 @@ renamer extension <source-ext...> <target-ext> [flags]
- Preview normalization: `renamer extension .jpeg .JPG .jpg --dry-run`
- Apply case-folded extension updates: `renamer extension .yaml .yml .yml --yes --path ./configs`
- Include hidden assets recursively: `renamer extension .TMP .tmp --recursive --hidden`

## AI Command Quick Reference

```bash
renamer ai [flags]
```

- Generates AI rename suggestions using the embedded Genkit flow. Preview results can be applied immediately or inspected interactively first.
- Scope flags (`--path`, `-r`, `-d`, `--hidden`, `--extensions`) determine which files feed into the flow. Up to 200 entries are accepted per request.
- Provide user guidance via `--prompt "Describe naming scheme"`; when omitted, the flow falls back to deterministic sequencing.
- Output renders in a tabular layout showing the sequence number, original path, and proposed filename. Validation conflicts are surfaced inline and block continuation.
- After each preview, choose `(r)` to regenerate, `(e)` to edit the prompt, `(q)` to exit, or press Enter with a clean preview to finish.
- Use `--yes` for non-interactive runs; the command applies the suggestions only when the preview is conflict-free.
- Control the numbering format with `--sequence-separator` (default `.`) to change the character(s) inserted between the sequence value and the generated name.

### Credentials

- Provide a Gemini API key via `GOOGLE_API_KEY` (recommended) or `GEMINI_API_KEY`. For backward compatibility, `RENAMER_AI_KEY` is also accepted.
- Optional: override service endpoints using `GOOGLE_GEMINI_BASE_URL` (Gemini) and `GOOGLE_VERTEX_BASE_URL` (Vertex AI). These must be set **before** the command runs so the Genkit SDK can pick them up.
- If no key is detected, the command exits with an error before calling the model.
go.mod
@@ -1,9 +1,54 @@
 module github.com/rogeecn/renamer
 
-go 1.24.0
+go 1.24.1
+
+toolchain go1.24.9
 
 require (
-    github.com/inconshreveable/mousetrap v1.1.0 // indirect
-    github.com/spf13/cobra v1.10.1 // indirect
-    github.com/spf13/pflag v1.0.9 // indirect
+    github.com/firebase/genkit/go v1.1.0
+    github.com/spf13/cobra v1.10.1
+    github.com/spf13/pflag v1.0.9
 )
+
+require (
+    cloud.google.com/go v0.120.0 // indirect
+    cloud.google.com/go/auth v0.16.2 // indirect
+    cloud.google.com/go/compute/metadata v0.7.0 // indirect
+    github.com/bahlo/generic-list-go v0.2.0 // indirect
+    github.com/buger/jsonparser v1.1.1 // indirect
+    github.com/felixge/httpsnoop v1.0.4 // indirect
+    github.com/go-logr/logr v1.4.3 // indirect
+    github.com/go-logr/stdr v1.2.2 // indirect
+    github.com/goccy/go-yaml v1.17.1 // indirect
+    github.com/google/dotprompt/go v0.0.0-20251014011017-8d056e027254 // indirect
+    github.com/google/go-cmp v0.7.0 // indirect
+    github.com/google/s2a-go v0.1.9 // indirect
+    github.com/google/uuid v1.6.0 // indirect
+    github.com/googleapis/enterprise-certificate-proxy v0.3.6 // indirect
+    github.com/googleapis/gax-go/v2 v2.14.2 // indirect
+    github.com/gorilla/websocket v1.5.3 // indirect
+    github.com/inconshreveable/mousetrap v1.1.0 // indirect
+    github.com/invopop/jsonschema v0.13.0 // indirect
+    github.com/mailru/easyjson v0.9.0 // indirect
+    github.com/mbleigh/raymond v0.0.0-20250414171441-6b3a58ab9e0a // indirect
+    github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
+    github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
+    github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
+    github.com/xeipuuv/gojsonschema v1.2.0 // indirect
+    github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
+    go.opentelemetry.io/auto/sdk v1.1.0 // indirect
+    go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
+    go.opentelemetry.io/otel v1.36.0 // indirect
+    go.opentelemetry.io/otel/metric v1.36.0 // indirect
+    go.opentelemetry.io/otel/sdk v1.36.0 // indirect
+    go.opentelemetry.io/otel/trace v1.36.0 // indirect
+    golang.org/x/crypto v0.40.0 // indirect
+    golang.org/x/net v0.41.0 // indirect
+    golang.org/x/sys v0.34.0 // indirect
+    golang.org/x/text v0.27.0 // indirect
+    google.golang.org/genai v1.30.0 // indirect
+    google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
+    google.golang.org/grpc v1.73.0 // indirect
+    google.golang.org/protobuf v1.36.6 // indirect
+    gopkg.in/yaml.v3 v3.0.1 // indirect
+)
go.sum
@@ -1,10 +1,117 @@
cloud.google.com/go v0.120.0 h1:wc6bgG9DHyKqF5/vQvX1CiZrtHnxJjBlKUyF9nP6meA=
cloud.google.com/go v0.120.0/go.mod h1:/beW32s8/pGRuj4IILWQNd4uuebeT4dkOhKmkfit64Q=
cloud.google.com/go/auth v0.16.2 h1:QvBAGFPLrDeoiNjyfVunhQ10HKNYuOwZ5noee0M5df4=
cloud.google.com/go/auth v0.16.2/go.mod h1:sRBas2Y1fB1vZTdurouM0AzuYQBMZinrUYL8EufhtEA=
cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeOCw78U8ytSU=
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
github.com/bahlo/generic-list-go v0.2.0 h1:5sz/EEAK+ls5wF+NeqDpk5+iNdMDXrh3z3nPnH1Wvgk=
github.com/bahlo/generic-list-go v0.2.0/go.mod h1:2KvAjgMlE5NNynlg/5iLrrCCZ2+5xWbdbCW3pNTGyYg=
github.com/buger/jsonparser v1.1.1 h1:2PnMjfWD7wBILjqQbt530v576A/cAbQvEW9gGIpYMUs=
github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/firebase/genkit/go v1.1.0 h1:SQqzQt19gEubvUUCFV98TARFAzD30zT3QhseF3oTKqo=
github.com/firebase/genkit/go v1.1.0/go.mod h1:ru1cIuxG1s3HeUjhnadVveDJ1yhinj+j+uUh0f0pyxE=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/goccy/go-yaml v1.17.1 h1:LI34wktB2xEE3ONG/2Ar54+/HJVBriAGJ55PHls4YuY=
github.com/goccy/go-yaml v1.17.1/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/dotprompt/go v0.0.0-20251014011017-8d056e027254 h1:okN800+zMJOGHLJCgry+OGzhhtH6YrjQh1rluHmOacE=
github.com/google/dotprompt/go v0.0.0-20251014011017-8d056e027254/go.mod h1:k8cjJAQWc//ac/bMnzItyOFbfT01tgRTZGgxELCuxEQ=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.6 h1:GW/XbdyBFQ8Qe+YAmFU9uHLo7OnF5tL52HFAgMmyrf4=
github.com/googleapis/enterprise-certificate-proxy v0.3.6/go.mod h1:MkHOF77EYAE7qfSuSS9PU6g4Nt4e11cnsDUowfwewLA=
github.com/googleapis/gax-go/v2 v2.14.2 h1:eBLnkZ9635krYIPD+ag1USrOAI0Nr0QYF3+/3GqO0k0=
github.com/googleapis/gax-go/v2 v2.14.2/go.mod h1:ON64QhlJkhVtSqp4v1uaK92VyZ2gmvDQsweuyLV+8+w=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/invopop/jsonschema v0.13.0 h1:KvpoAJWEjR3uD9Kbm2HWJmqsEaHt8lBUpd0qHcIi21E=
github.com/invopop/jsonschema v0.13.0/go.mod h1:ffZ5Km5SWWRAIN6wbDXItl95euhFz2uON45H2qjYt+0=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/mailru/easyjson v0.9.0 h1:PrnmzHw7262yW8sTBwxi1PdJA3Iw/EKBa8psRf7d9a4=
github.com/mailru/easyjson v0.9.0/go.mod h1:1+xMtQp2MRNVL/V1bOzuP3aP8VNwRW55fQUto+XFtTU=
github.com/mbleigh/raymond v0.0.0-20250414171441-6b3a58ab9e0a h1:v2cBA3xWKv2cIOVhnzX/gNgkNXqiHfUgJtA3r61Hf7A=
github.com/mbleigh/raymond v0.0.0-20250414171441-6b3a58ab9e0a/go.mod h1:Y6ghKH+ZijXn5d9E7qGGZBmjitx7iitZdQiIW97EpTU=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/wk8/go-ordered-map/v2 v2.1.8 h1:5h/BUHu93oj4gIdvHHHGsScSTMijfx5PeYkE/fJgbpc=
github.com/wk8/go-ordered-map/v2 v2.1.8/go.mod h1:5nJHM5DyteebpVlHnWMV0rPz6Zp7+xBAnxjb1X5vnTw=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo=
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 h1:EzJWgHovont7NscjpAxXsDA8S8BMYve8Y5+7cuRE7R0=
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17UxZ74=
github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zIM+UJPGz4=
github.com/yosida95/uritemplate/v3 v3.0.2/go.mod h1:ILOh0sOhIJR3+L/8afwt/kE++YT040gmv5BQTMR2HP4=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
golang.org/x/crypto v0.40.0 h1:r4x+VvoG5Fm+eJcxMaY8CQM7Lb0l1lsmjGBQ6s8BfKM=
golang.org/x/crypto v0.40.0/go.mod h1:Qr1vMER5WyS2dfPHAlsOj01wgLbsyWtFn/aY+5+ZdxY=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.27.0 h1:4fGWRpyh641NLlecmyl4LOe6yDdfaYNrGb2zdfo4JV4=
golang.org/x/text v0.27.0/go.mod h1:1D28KMCvyooCX9hBiosv5Tz/+YLxj0j7XhWjpSUF7CU=
google.golang.org/genai v1.30.0 h1:7021aneIvl24nEBLbtQFEWleHsMbjzpcQvkT4WcJ1dc=
google.golang.org/genai v1.30.0/go.mod h1:7pAilaICJlQBonjKKJNhftDFv3SREhZcTe9F6nRcjbg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
internal/ai/apply.go (new file)
@@ -0,0 +1,121 @@
package ai

import (
    "context"
    "errors"
    "io"
    "os"
    "path/filepath"
    "sort"

    "github.com/rogeecn/renamer/internal/ai/flow"
    "github.com/rogeecn/renamer/internal/history"
    "github.com/rogeecn/renamer/internal/output"
)

// ApplyMetadata captures contextual information persisted alongside ledger entries.
type ApplyMetadata struct {
    Prompt            string
    PromptHistory     []string
    Notes             []string
    Model             string
    SequenceSeparator string
}

// toMap converts metadata into a ledger-friendly map.
func (m ApplyMetadata) toMap(warnings []string) map[string]any {
    data := history.BuildAIMetadata(m.Prompt, m.PromptHistory, m.Notes, m.Model, warnings)
    if m.SequenceSeparator != "" {
        data["sequenceSeparator"] = m.SequenceSeparator
    }
    return data
}

// Apply executes the rename suggestions, records a ledger entry, and emits progress updates.
func Apply(ctx context.Context, workingDir string, suggestions []flow.Suggestion, validation ValidationResult, meta ApplyMetadata, writer io.Writer) (history.Entry, error) {
    entry := history.Entry{Command: "ai"}

    if len(suggestions) == 0 {
        return entry, nil
    }

    reporter := output.NewProgressReporter(writer, len(suggestions))

    sort.SliceStable(suggestions, func(i, j int) bool {
        return suggestions[i].Original > suggestions[j].Original
    })

    operations := make([]history.Operation, 0, len(suggestions))

    revert := func() error {
        for i := len(operations) - 1; i >= 0; i-- {
            op := operations[i]
            source := filepath.Join(workingDir, filepath.FromSlash(op.To))
            destination := filepath.Join(workingDir, filepath.FromSlash(op.From))
            if err := os.Rename(source, destination); err != nil && !errors.Is(err, os.ErrNotExist) {
                return err
            }
        }
        return nil
    }

    for _, suggestion := range suggestions {
        if err := ctx.Err(); err != nil {
            _ = revert()
            return history.Entry{}, err
        }

        fromRel := flowToKey(suggestion.Original)
        toRel := flowToKey(suggestion.Suggested)

        fromAbs := filepath.Join(workingDir, filepath.FromSlash(fromRel))
        toAbs := filepath.Join(workingDir, filepath.FromSlash(toRel))

        if fromAbs == toAbs {
            continue
        }

        if err := ensureParentDir(toAbs); err != nil {
            _ = revert()
            return history.Entry{}, err
        }

        if err := os.Rename(fromAbs, toAbs); err != nil {
            _ = revert()
            return history.Entry{}, err
        }

        operations = append(operations, history.Operation{From: fromRel, To: toRel})
        if err := reporter.Step(fromRel, toRel); err != nil {
            _ = revert()
            return history.Entry{}, err
        }
    }

    if len(operations) == 0 {
        return entry, reporter.Complete()
    }

    if err := reporter.Complete(); err != nil {
        _ = revert()
        return history.Entry{}, err
    }

    entry.Operations = operations
    entry.Metadata = meta.toMap(validation.Warnings)

    if err := history.Append(workingDir, entry); err != nil {
        _ = revert()
        return history.Entry{}, err
    }

    return entry, nil
}

func ensureParentDir(path string) error {
    dir := filepath.Dir(path)
    if dir == "." || dir == "" {
        return nil
    }
    return os.MkdirAll(dir, 0o755)
}
internal/ai/client.go (new file)
@@ -0,0 +1,59 @@
package ai

import (
    "context"
    "errors"

    "github.com/rogeecn/renamer/internal/ai/flow"
)

// Runner executes the rename flow and returns structured suggestions.
type Runner func(ctx context.Context, input *flow.RenameFlowInput) (*flow.Output, error)

// Client orchestrates flow invocation for callers such as the CLI command.
type Client struct {
    runner Runner
}

// ClientOption customises the AI client behaviour.
type ClientOption func(*Client)

// WithRunner overrides the flow runner implementation (useful for tests).
func WithRunner(r Runner) ClientOption {
    return func(c *Client) {
        c.runner = r
    }
}

// NewClient constructs a Client with the default Genkit-backed runner.
func NewClient(opts ...ClientOption) *Client {
    client := &Client{}
    client.runner = func(ctx context.Context, input *flow.RenameFlowInput) (*flow.Output, error) {
        creds, err := LoadCredentials()
        if err != nil {
            return nil, err
        }
        return runRenameFlow(ctx, input, creds)
    }
    for _, opt := range opts {
        opt(client)
    }
    return client
}

// Suggest executes the rename flow and returns structured suggestions.
func (c *Client) Suggest(ctx context.Context, input *flow.RenameFlowInput) (*flow.Output, error) {
    if c == nil {
        return nil, ErrClientNotInitialized
    }
    if c.runner == nil {
        return nil, ErrRunnerNotConfigured
    }
    return c.runner(ctx, input)
}

// ErrClientNotInitialized indicates the client receiver was nil.
var ErrClientNotInitialized = errors.New("ai client not initialized")

// ErrRunnerNotConfigured indicates the client runner is missing.
var ErrRunnerNotConfigured = errors.New("ai client runner not configured")
internal/ai/config.go (new file)
@@ -0,0 +1,41 @@
package ai

import (
    "errors"
    "fmt"
    "os"
)

var apiKeyEnvVars = []string{
    "GOOGLE_API_KEY",
    "GEMINI_API_KEY",
    "RENAMER_AI_KEY",
}

// Credentials encapsulates the values required to authenticate with the AI provider.
type Credentials struct {
    APIKey string
}

// LoadCredentials returns the AI credentials sourced from environment variables.
func LoadCredentials() (Credentials, error) {
    for _, env := range apiKeyEnvVars {
        if key, ok := os.LookupEnv(env); ok && key != "" {
            return Credentials{APIKey: key}, nil
        }
    }
    return Credentials{}, errors.New("AI provider key missing; set GOOGLE_API_KEY (recommended), GEMINI_API_KEY, or RENAMER_AI_KEY")
}

// MaskedCredentials returns a redacted view of the credentials for logging purposes.
func MaskedCredentials(creds Credentials) string {
    if creds.APIKey == "" {
        return "(empty)"
    }

    if len(creds.APIKey) <= 6 {
        return "***"
    }

    return fmt.Sprintf("%s***", creds.APIKey[:3])
}
internal/ai/flow/doc.go (new file)
@@ -0,0 +1,3 @@
// Package flow hosts the Genkit rename flow implementation.
package flow
internal/ai/flow/flow_test.go (new file)
@@ -0,0 +1,7 @@
package flow_test

import "testing"

func TestRenameFlowStub(t *testing.T) {
    t.Skip("rename flow implementation pending")
}
internal/ai/flow/json.go (new file)
@@ -0,0 +1,50 @@
package flow

import (
    "encoding/json"
    "errors"
    "fmt"
)

// Suggestion represents a single rename mapping emitted by the Genkit flow.
type Suggestion struct {
    Original  string `json:"original"`
    Suggested string `json:"suggested"`
}

// Output wraps the list of suggestions returned by the flow.
type Output struct {
    Suggestions []Suggestion `json:"suggestions"`
}

var (
    errEmptyResponse      = errors.New("genkit flow returned empty response")
    errMissingSuggestions = errors.New("genkit flow response missing suggestions")
)

// ParseOutput converts the raw JSON payload into a structured Output.
func ParseOutput(raw []byte) (Output, error) {
    if len(raw) == 0 {
        return Output{}, errEmptyResponse
    }

    var out Output
    if err := json.Unmarshal(raw, &out); err != nil {
        return Output{}, fmt.Errorf("failed to decode genkit output: %w", err)
    }

    if len(out.Suggestions) == 0 {
        return Output{}, errMissingSuggestions
    }

    return out, nil
}

// MarshalInput serialises the flow input for logging or replay.
func MarshalInput(input any) ([]byte, error) {
    buf, err := json.Marshal(input)
    if err != nil {
        return nil, fmt.Errorf("failed to encode genkit input: %w", err)
    }
    return buf, nil
}
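`ParseOutput` rejects empty payloads and payloads without a `suggestions` array before handing the result back to the CLI. A self-contained sketch of the same decoding rules (lower-case names to avoid implying the package's exported API):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

type suggestion struct {
	Original  string `json:"original"`
	Suggested string `json:"suggested"`
}

type flowOutput struct {
	Suggestions []suggestion `json:"suggestions"`
}

// parseOutput mirrors ParseOutput: reject empty bytes, decode strictly
// into the expected shape, and reject responses with no suggestions.
func parseOutput(raw []byte) (flowOutput, error) {
	if len(raw) == 0 {
		return flowOutput{}, errors.New("empty response")
	}
	var out flowOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		return flowOutput{}, fmt.Errorf("decode: %w", err)
	}
	if len(out.Suggestions) == 0 {
		return flowOutput{}, errors.New("missing suggestions")
	}
	return out, nil
}

func main() {
	raw := []byte(`{"suggestions":[{"original":"IMG_0001.jpg","suggested":"01.holiday.jpg"}]}`)
	out, err := parseOutput(raw)
	fmt.Println(err == nil, out.Suggestions[0].Suggested) // true 01.holiday.jpg
}
```

Validating the shape here, rather than at the call site, means the model's free-form output is either fully usable or cleanly rejected.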
internal/ai/flow/prompt.tmpl (new file)
@@ -0,0 +1,29 @@
你是一个智能文件重命名助手。你的任务是根据用户提供的文件名列表和命名指令,为每个文件生成一个清晰、统一的新名称。

规则:
1. 保持原始文件的扩展名不变。
2. 新文件名中不允许包含非法字符,如 / \ : * ? " < > |。
3. 如果需要添加序列号,请先按文件所在的目录维度分组,对每个目录内部的文件进行稳定排序(建议使用原始文件名自然序),序列号放在文件名的开头(例如 "01.假期照片.jpg"),不要放在结尾。序列号和名称之间默认使用句点 (.) 分隔,如果调用方提供了其他分隔符,则使用对应字符。
4. 严格按照以下 JSON 格式返回你的建议,不要包含任何额外的解释或 Markdown 标记。

[INPUT]
用户命名指令: "{{ .UserPrompt }}"
文件名列表:
{{- range .FileNames }}
- {{ . }}
{{- end }}

[OUTPUT]
请在这里输出你的 JSON 结果,格式如下:
{
  "suggestions": [
    {
      "original": "原始文件名1.ext",
      "suggested": "建议的新文件名1.ext"
    },
    {
      "original": "原始文件名2.ext",
      "suggested": "建议的新文件名2.ext"
    }
  ]
}
internal/ai/flow/prompt_test.go (new file)
@@ -0,0 +1,34 @@
package flow_test

import (
    "strings"
    "testing"

    "github.com/rogeecn/renamer/internal/ai/flow"
)

func TestRenderPromptIncludesFilesAndPrompt(t *testing.T) {
    input := flow.RenameFlowInput{
        FileNames:  []string{"IMG_0001.jpg", "albums/Day 1.png"},
        UserPrompt: "按地点重新命名",
    }

    rendered, err := flow.RenderPrompt(input)
    if err != nil {
        t.Fatalf("RenderPrompt error: %v", err)
    }

    for _, expected := range []string{"IMG_0001.jpg", "albums/Day 1.png"} {
        if !strings.Contains(rendered, expected) {
            t.Fatalf("prompt missing filename %q: %s", expected, rendered)
        }
    }

    if !strings.Contains(rendered, "按地点重新命名") {
        t.Fatalf("prompt missing user guidance: %s", rendered)
    }

    if !strings.Contains(rendered, "suggestions") {
        t.Fatalf("prompt missing JSON structure guidance: %s", rendered)
    }
}
internal/ai/flow/rename_flow.go (new file)
@@ -0,0 +1,197 @@
package flow

import (
    "context"
    _ "embed"
    "errors"
    "fmt"
    "path"
    "sort"
    "strings"
    "text/template"
    "unicode"

    "github.com/firebase/genkit/go/core"
    "github.com/firebase/genkit/go/genkit"
)

//go:embed prompt.tmpl
var promptTemplateSource string

var (
    promptTemplate = template.Must(template.New("renameFlowPrompt").Parse(promptTemplateSource))
)

// RenameFlowInput mirrors the JSON payload passed into the Genkit flow.
type RenameFlowInput struct {
    FileNames         []string `json:"fileNames"`
    UserPrompt        string   `json:"userPrompt"`
    SequenceSeparator string   `json:"sequenceSeparator,omitempty"`
}

// Validate ensures the flow input is well formed.
func (in *RenameFlowInput) Validate() error {
    if in == nil {
        return errors.New("rename flow input cannot be nil")
    }
    if len(in.FileNames) == 0 {
        return errors.New("no file names provided to rename flow")
    }
    if len(in.FileNames) > 200 {
        return fmt.Errorf("rename flow supports up to 200 files per invocation (received %d)", len(in.FileNames))
    }
    normalized := make([]string, len(in.FileNames))
    for i, name := range in.FileNames {
        trimmed := strings.TrimSpace(name)
        if trimmed == "" {
            return fmt.Errorf("file name at index %d is empty", i)
        }
        normalized[i] = toSlash(trimmed)
    }
    // Ensure no duplicates to simplify downstream validation.
    if dup := firstDuplicate(normalized); dup != "" {
        return fmt.Errorf("duplicate file name %q detected in flow input", dup)
    }
    in.FileNames = normalized

    sep := strings.TrimSpace(in.SequenceSeparator)
    if sep == "" {
        sep = "."
    }
    if strings.ContainsAny(sep, "/\\") {
        return fmt.Errorf("sequence separator %q cannot contain path separators", sep)
    }
    if strings.ContainsAny(sep, "\n\r") {
        return errors.New("sequence separator cannot contain newline characters")
    }
    in.SequenceSeparator = sep
    return nil
}

// RenderPrompt materialises the prompt template for the provided input.
func RenderPrompt(input RenameFlowInput) (string, error) {
    if err := input.Validate(); err != nil {
        return "", err
    }

    var builder strings.Builder
    if err := promptTemplate.Execute(&builder, input); err != nil {
        return "", fmt.Errorf("render rename prompt: %w", err)
    }
    return builder.String(), nil
}

// Define registers the rename flow on the supplied Genkit instance.
func Define(g *genkit.Genkit) *core.Flow[*RenameFlowInput, *Output, struct{}] {
    if g == nil {
        panic("genkit instance cannot be nil")
    }
    return genkit.DefineFlow(g, "renameFlow", flowFn)
}

func flowFn(ctx context.Context, input *RenameFlowInput) (*Output, error) {
    if err := input.Validate(); err != nil {
        return nil, err
    }

    prefix := slugify(input.UserPrompt)
    suggestions := make([]Suggestion, 0, len(input.FileNames))
    dirCounters := make(map[string]int)

    for _, name := range input.FileNames {
        suggestion := deterministicSuggestion(name, prefix, dirCounters, input.SequenceSeparator)
        suggestions = append(suggestions, Suggestion{
            Original:  name,
            Suggested: suggestion,
        })
    }

    sort.SliceStable(suggestions, func(i, j int) bool {
        return suggestions[i].Original < suggestions[j].Original
    })

    return &Output{Suggestions: suggestions}, nil
}

func deterministicSuggestion(rel string, promptPrefix string, dirCounters map[string]int, separator string) string {
    rel = toSlash(rel)
    dir := path.Dir(rel)
    if dir == "." {
        dir = ""
    }

    base := path.Base(rel)
    ext := path.Ext(base)
    name := strings.TrimSuffix(base, ext)

    sanitizedName := slugify(name)

    candidate := sanitizedName
    if promptPrefix != "" {
        switch {
        case candidate == "":
            candidate = promptPrefix
        default:
            candidate = fmt.Sprintf("%s-%s", promptPrefix, candidate)
        }
    }

    if candidate == "" {
        candidate = "renamed"
    }

    counterKey := dir
    dirCounters[counterKey]++
    seq := dirCounters[counterKey]

    sep := separator
    if sep == "" {
        sep = "."
    }
    numbered := fmt.Sprintf("%02d%s%s", seq, sep, candidate)
    proposed := numbered + ext
    if dir != "" {
        return path.Join(dir, proposed)
    }
    return proposed
}

func slugify(value string) string {
    value = strings.TrimSpace(value)
    if value == "" {
        return ""
    }
    var b strings.Builder
    b.Grow(len(value))
    lastHyphen := false
    for _, r := range value {
        switch {
        case unicode.IsLetter(r) || unicode.IsDigit(r):
            b.WriteRune(unicode.ToLower(r))
            lastHyphen = false
        case r == ' ' || r == '-' || r == '_' || r == '.':
            if !lastHyphen && b.Len() > 0 {
                b.WriteRune('-')
                lastHyphen = true
            }
        }
    }
    result := strings.Trim(b.String(), "-")
    return result
}

func toSlash(pathStr string) string {
    return strings.ReplaceAll(pathStr, "\\", "/")
}

func firstDuplicate(values []string) string {
    seen := make(map[string]struct{}, len(values))
    for _, v := range values {
        lower := strings.ToLower(v)
        if _, exists := seen[lower]; exists {
            return v
        }
        seen[lower] = struct{}{}
    }
    return ""
}
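The deterministic fallback above slugifies each base name, prepends the slugified prompt, and numbers files independently per directory with a `%02d` counter. A condensed standalone sketch (it mirrors `slugify` and a simplified `deterministicSuggestion`; `suggest` is an illustrative name and the prefix is assumed to be pre-slugified):

```go
package main

import (
	"fmt"
	"path"
	"strings"
	"unicode"
)

// slugify lowercases letters/digits and collapses separators into single
// hyphens, matching the flow's sanitiser.
func slugify(value string) string {
	value = strings.TrimSpace(value)
	var b strings.Builder
	lastHyphen := false
	for _, r := range value {
		switch {
		case unicode.IsLetter(r) || unicode.IsDigit(r):
			b.WriteRune(unicode.ToLower(r))
			lastHyphen = false
		case r == ' ' || r == '-' || r == '_' || r == '.':
			if !lastHyphen && b.Len() > 0 {
				b.WriteRune('-')
				lastHyphen = true
			}
		}
	}
	return strings.Trim(b.String(), "-")
}

// suggest numbers files per directory: "NN<sep><prefix>-<slug><ext>".
func suggest(rel, prefix string, counters map[string]int, sep string) string {
	dir := path.Dir(rel)
	if dir == "." {
		dir = ""
	}
	base := path.Base(rel)
	ext := path.Ext(base)
	candidate := slugify(strings.TrimSuffix(base, ext))
	if prefix != "" {
		if candidate == "" {
			candidate = prefix
		} else {
			candidate = prefix + "-" + candidate
		}
	}
	if candidate == "" {
		candidate = "renamed"
	}
	counters[dir]++ // each directory keeps its own sequence
	numbered := fmt.Sprintf("%02d%s%s%s", counters[dir], sep, candidate, ext)
	if dir != "" {
		return path.Join(dir, numbered)
	}
	return numbered
}

func main() {
	counters := map[string]int{}
	for _, f := range []string{"albums/Day 1.png", "albums/Day 2.png", "IMG_0001.jpg"} {
		fmt.Println(suggest(f, "trip", counters, "."))
	}
	// albums/01.trip-day-1.png
	// albums/02.trip-day-2.png
	// 01.trip-img-0001.jpg
}
```

Per-directory counters are what lets the prompt's rule 3 ("group by directory, number from the start of the name") hold even when the request mixes files from several folders.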
internal/ai/preview.go (new file)
@@ -0,0 +1,55 @@
package ai

import (
    "fmt"
    "io"

    "github.com/rogeecn/renamer/internal/ai/flow"
    "github.com/rogeecn/renamer/internal/output"
)

// PrintPreview renders suggestions in a tabular format alongside validation results.
func PrintPreview(w io.Writer, suggestions []flow.Suggestion, validation ValidationResult) error {
    table := output.NewAIPlanTable()
    if err := table.Begin(w); err != nil {
        return err
    }

    for idx, suggestion := range suggestions {
        if err := table.WriteRow(output.AIPlanRow{
            Sequence:  fmt.Sprintf("%02d", idx+1),
            Original:  suggestion.Original,
            Proposed:  suggestion.Suggested,
            Sanitized: flowToKey(suggestion.Suggested),
        }); err != nil {
            return err
        }
    }

    if err := table.End(w); err != nil {
        return err
    }

    for _, warn := range validation.Warnings {
        if _, err := fmt.Fprintf(w, "Warning: %s\n", warn); err != nil {
            return err
        }
    }

    if len(validation.Conflicts) > 0 {
        if _, err := fmt.Fprintln(w, "Conflicts detected:"); err != nil {
            return err
        }
        for _, conflict := range validation.Conflicts {
            if _, err := fmt.Fprintf(w, "  - %s -> %s (%s)\n", conflict.Original, conflict.Suggested, conflict.Reason); err != nil {
                return err
            }
        }
    }

    if _, err := fmt.Fprintf(w, "Previewed %d suggestion(s)\n", len(suggestions)); err != nil {
        return err
    }

    return nil
}

internal/ai/runtime.go (new file, 54 lines)
@@ -0,0 +1,54 @@
package ai

import (
    "context"
    "os"
    "sync"

    "github.com/firebase/genkit/go/core"
    "github.com/firebase/genkit/go/genkit"
    "github.com/firebase/genkit/go/plugins/googlegenai"
    "google.golang.org/genai"

    "github.com/rogeecn/renamer/internal/ai/flow"
)

var (
    runtimeOnce sync.Once
    runtimeErr  error
    runtimeFlow *core.Flow[*flow.RenameFlowInput, *flow.Output, struct{}]
)

func ensureRuntime(creds Credentials) error {
    runtimeOnce.Do(func() {
        ctx := context.Background()
        geminiBase := os.Getenv("GOOGLE_GEMINI_BASE_URL")
        vertexBase := os.Getenv("GOOGLE_VERTEX_BASE_URL")
        if geminiBase != "" || vertexBase != "" {
            genai.SetDefaultBaseURLs(genai.BaseURLParameters{
                GeminiURL: geminiBase,
                VertexURL: vertexBase,
            })
        }

        plugin := &googlegenai.GoogleAI{APIKey: creds.APIKey}

        g := genkit.Init(ctx,
            genkit.WithPlugins(plugin),
            genkit.WithDefaultModel(defaultModelID),
        )

        runtimeFlow = flow.Define(g)
    })
    return runtimeErr
}

func runRenameFlow(ctx context.Context, input *flow.RenameFlowInput, creds Credentials) (*flow.Output, error) {
    if err := ensureRuntime(creds); err != nil {
        return nil, err
    }
    if runtimeFlow == nil {
        return nil, runtimeErr
    }
    return runtimeFlow.Run(ctx, input)
}

internal/ai/session.go (new file, 138 lines)
@@ -0,0 +1,138 @@
package ai

import (
    "context"
    "strings"

    flowpkg "github.com/rogeecn/renamer/internal/ai/flow"
)

const defaultModelID = "googleai/gemini-1.5-flash"

// Session tracks prompt history and guidance notes for a single AI preview loop.
type Session struct {
    files             []string
    client            *Client
    prompts           []string
    notes             []string
    model             string
    sequenceSeparator string

    lastOutput     *flowpkg.Output
    lastValidation ValidationResult
}

// NewSession builds a session with the provided scope, initial prompt, and client.
func NewSession(files []string, initialPrompt string, sequenceSeparator string, client *Client) *Session {
    prompts := []string{strings.TrimSpace(initialPrompt)}

    if client == nil {
        client = NewClient()
    }

    sep := strings.TrimSpace(sequenceSeparator)
    if sep == "" {
        sep = "."
    }

    return &Session{
        files:             append([]string(nil), files...),
        client:            client,
        prompts:           prompts,
        notes:             make([]string, 0),
        model:             defaultModelID,
        sequenceSeparator: sep,
    }
}

// Generate executes the flow and returns structured suggestions with validation.
func (s *Session) Generate(ctx context.Context) (*flowpkg.Output, ValidationResult, error) {
    prompt := s.CurrentPrompt()
    input := &flowpkg.RenameFlowInput{
        FileNames:         append([]string(nil), s.files...),
        UserPrompt:        prompt,
        SequenceSeparator: s.sequenceSeparator,
    }

    output, err := s.client.Suggest(ctx, input)
    if err != nil {
        return nil, ValidationResult{}, err
    }

    validation := ValidateSuggestions(s.files, output.Suggestions)
    s.lastOutput = output
    s.lastValidation = validation
    return output, validation, nil
}

// CurrentPrompt returns the most recent prompt in the session.
func (s *Session) CurrentPrompt() string {
    if len(s.prompts) == 0 {
        return ""
    }
    return s.prompts[len(s.prompts)-1]
}

// UpdatePrompt records a new prompt and adds a note for auditing.
func (s *Session) UpdatePrompt(prompt string) {
    trimmed := strings.TrimSpace(prompt)
    s.prompts = append(s.prompts, trimmed)
    s.notes = append(s.notes, "prompt updated")
}

// RecordRegeneration appends an audit note for regenerations.
func (s *Session) RecordRegeneration() {
    s.notes = append(s.notes, "regenerated suggestions")
}

// RecordAcceptance stores an audit note for accepted previews.
func (s *Session) RecordAcceptance() {
    s.notes = append(s.notes, "accepted preview")
}

// PromptHistory returns a copy of the recorded prompts.
func (s *Session) PromptHistory() []string {
    history := make([]string, len(s.prompts))
    copy(history, s.prompts)
    return history
}

// Notes returns audit notes collected during the session.
func (s *Session) Notes() []string {
    copied := make([]string, len(s.notes))
    copy(copied, s.notes)
    return copied
}

// Files returns the original scoped filenames.
func (s *Session) Files() []string {
    copied := make([]string, len(s.files))
    copy(copied, s.files)
    return copied
}

// SequenceSeparator returns the configured sequence separator.
func (s *Session) SequenceSeparator() string {
    return s.sequenceSeparator
}

// LastOutput returns the most recent flow output.
func (s *Session) LastOutput() *flowpkg.Output {
    return s.lastOutput
}

// LastValidation returns the validation result for the most recent output.
func (s *Session) LastValidation() ValidationResult {
    return s.lastValidation
}

// Model returns the model identifier associated with the session.
func (s *Session) Model() string {
    if s.model == "" {
        return defaultModelID
    }
    return s.model
}

internal/ai/validation.go (new file, 169 lines)
@@ -0,0 +1,169 @@
package ai

import (
    "fmt"
    "path"
    "strings"

    "github.com/rogeecn/renamer/internal/ai/flow"
)

var invalidCharacters = []rune{'/', '\\', ':', '*', '?', '"', '<', '>', '|'}

// Conflict captures a validation failure for a proposed rename.
type Conflict struct {
    Original  string
    Suggested string
    Reason    string
}

// ValidationResult aggregates conflicts and warnings.
type ValidationResult struct {
    Conflicts []Conflict
    Warnings  []string
}

// ValidateSuggestions enforces rename safety rules before applying suggestions.
func ValidateSuggestions(expected []string, suggestions []flow.Suggestion) ValidationResult {
    result := ValidationResult{}

    expectedSet := make(map[string]struct{}, len(expected))
    for _, name := range expected {
        expectedSet[strings.ToLower(flowToKey(name))] = struct{}{}
    }

    seenTargets := make(map[string]string)

    for _, suggestion := range suggestions {
        key := strings.ToLower(flowToKey(suggestion.Original))
        if _, ok := expectedSet[key]; !ok {
            result.Conflicts = append(result.Conflicts, Conflict{
                Original:  suggestion.Original,
                Suggested: suggestion.Suggested,
                Reason:    "original file not present in scope",
            })
            continue
        }

        cleaned := strings.TrimSpace(suggestion.Suggested)
        if cleaned == "" {
            result.Conflicts = append(result.Conflicts, Conflict{
                Original:  suggestion.Original,
                Suggested: suggestion.Suggested,
                Reason:    "suggested name is empty",
            })
            continue
        }

        normalizedOriginal := flowToKey(suggestion.Original)
        normalizedSuggested := flowToKey(cleaned)

        if strings.HasPrefix(normalizedSuggested, "/") {
            result.Conflicts = append(result.Conflicts, Conflict{
                Original:  suggestion.Original,
                Suggested: suggestion.Suggested,
                Reason:    "suggested name must be relative",
            })
            continue
        }

        if containsParentSegment(normalizedSuggested) {
            result.Conflicts = append(result.Conflicts, Conflict{
                Original:  suggestion.Original,
                Suggested: suggestion.Suggested,
                Reason:    "suggested name cannot traverse directories",
            })
            continue
        }

        base := path.Base(cleaned)
        if containsInvalidCharacter(base) {
            result.Conflicts = append(result.Conflicts, Conflict{
                Original:  suggestion.Original,
                Suggested: suggestion.Suggested,
                Reason:    "suggested name contains invalid characters",
            })
            continue
        }

        if !extensionsMatch(suggestion.Original, cleaned) {
            result.Conflicts = append(result.Conflicts, Conflict{
                Original:  suggestion.Original,
                Suggested: suggestion.Suggested,
                Reason:    "file extension changed",
            })
            continue
        }

        if path.Dir(normalizedOriginal) != path.Dir(normalizedSuggested) {
            result.Warnings = append(result.Warnings, fmt.Sprintf("suggestion for %q moves file to a different directory", suggestion.Original))
        }

        targetKey := strings.ToLower(normalizedSuggested)
        if existing, ok := seenTargets[targetKey]; ok && existing != suggestion.Original {
            result.Conflicts = append(result.Conflicts, Conflict{
                Original:  suggestion.Original,
                Suggested: suggestion.Suggested,
                Reason:    "duplicate target generated",
            })
            continue
        }
        seenTargets[targetKey] = suggestion.Original

        if normalizedOriginal == normalizedSuggested {
            result.Warnings = append(result.Warnings, fmt.Sprintf("suggestion for %q does not change the filename", suggestion.Original))
        }
    }

    if len(suggestions) != len(expected) {
        result.Warnings = append(result.Warnings, fmt.Sprintf("expected %d suggestions but received %d", len(expected), len(suggestions)))
    }

    return result
}

func flowToKey(value string) string {
    return strings.ReplaceAll(strings.TrimSpace(value), "\\", "/")
}

func containsInvalidCharacter(value string) bool {
    for _, ch := range invalidCharacters {
        if strings.ContainsRune(value, ch) {
            return true
        }
    }
    return false
}

func extensionsMatch(original, proposed string) bool {
    origExt := strings.ToLower(path.Ext(original))
    propExt := strings.ToLower(path.Ext(proposed))
    return origExt == propExt
}

// SummarizeConflicts renders a human-readable summary of conflicts.
func SummarizeConflicts(conflicts []Conflict) string {
    if len(conflicts) == 0 {
        return ""
    }
    builder := strings.Builder{}
    for _, c := range conflicts {
        builder.WriteString(fmt.Sprintf("%s -> %s (%s); ", c.Original, c.Suggested, c.Reason))
    }
    return strings.TrimSpace(builder.String())
}

// SummarizeWarnings renders warnings as a delimited string.
func SummarizeWarnings(warnings []string) string {
    return strings.Join(warnings, "; ")
}

func containsParentSegment(value string) bool {
    parts := strings.Split(value, "/")
    for _, part := range parts {
        if part == ".." {
            return true
        }
    }
    return false
}
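Two of the hard rules above are easy to exercise in isolation: extensions must survive the rename (compared case-insensitively), and two suggestions may not collapse onto the same target on a case-insensitive filesystem. A minimal sketch (`extensionsMatch` is copied from the validator; the loop reproduces the `seenTargets` check):

```go
package main

import (
    "fmt"
    "path"
    "strings"
)

// extensionsMatch mirrors the validator's check: the proposed name
// must keep the original extension, compared case-insensitively.
func extensionsMatch(original, proposed string) bool {
    return strings.ToLower(path.Ext(original)) == strings.ToLower(path.Ext(proposed))
}

func main() {
    fmt.Println(extensionsMatch("IMG_0001.JPG", "hawaii-day-1.jpg")) // same extension, different case
    fmt.Println(extensionsMatch("notes.txt", "notes.md"))            // extension changed: rejected

    // Case-insensitive duplicate-target detection, as in ValidateSuggestions.
    seen := map[string]string{}
    for _, target := range []string{"Beach.jpg", "beach.JPG"} {
        key := strings.ToLower(target)
        if _, dup := seen[key]; dup {
            fmt.Println("duplicate target:", target)
        }
        seen[key] = target
    }
}
```

Because the target map is keyed on the lowercased name, `Beach.jpg` and `beach.JPG` are flagged as a collision even though they differ byte-for-byte.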

internal/history/ai_entry.go (new file, 21 lines)
@@ -0,0 +1,21 @@
package history

// BuildAIMetadata constructs ledger metadata for AI-driven rename batches.
func BuildAIMetadata(prompt string, promptHistory []string, notes []string, model string, warnings []string) map[string]any {
    data := map[string]any{
        "prompt":   prompt,
        "model":    model,
        "flow":     "renameFlow",
        "warnings": warnings,
    }

    if len(promptHistory) > 0 {
        data["promptHistory"] = append([]string(nil), promptHistory...)
    }

    if len(notes) > 0 {
        data["notes"] = append([]string(nil), notes...)
    }

    return data
}
@@ -26,6 +26,14 @@ type Entry struct {
    Metadata map[string]any `json:"metadata,omitempty"`
}

func remarshal(value any, target any) error {
    data, err := json.Marshal(value)
    if err != nil {
        return err
    }
    return json.Unmarshal(data, target)
}

// Append writes a new entry to the ledger in newline-delimited JSON format.
func Append(workingDir string, entry Entry) error {
    entry.Timestamp = time.Now().UTC()
@@ -3,6 +3,7 @@ package output
import (
    "fmt"
    "io"
    "strings"
)

// plainFormatter emits one entry per line suitable for piping into other tools.

@@ -26,3 +27,41 @@ func (plainFormatter) WriteSummary(w io.Writer, summary Summary) error {
    _, err := fmt.Fprintln(w, DefaultSummaryLine(summary))
    return err
}

// WriteAIPlanDebug emits prompt hashes and warnings to the provided writer.
func WriteAIPlanDebug(w io.Writer, promptHash string, warnings []string) {
    if w == nil {
        return
    }
    if promptHash != "" {
        fmt.Fprintf(w, "Prompt hash: %s\n", promptHash)
    }
    for _, warning := range warnings {
        if strings.TrimSpace(warning) == "" {
            continue
        }
        fmt.Fprintf(w, "%s\n", warning)
    }
}

// PolicyViolationMessage describes a single policy failure for display purposes.
type PolicyViolationMessage struct {
    Original string
    Proposed string
    Rule     string
    Message  string
}

// WritePolicyViolations prints detailed policy failure information to the writer.
func WritePolicyViolations(w io.Writer, violations []PolicyViolationMessage) {
    if w == nil {
        return
    }
    for _, violation := range violations {
        rule := violation.Rule
        if rule == "" {
            rule = "policy"
        }
        fmt.Fprintf(w, "Policy violation (%s): %s -> %s (%s)\n", rule, violation.Original, violation.Proposed, violation.Message)
    }
}

internal/output/progress.go (new file, 40 lines)
@@ -0,0 +1,40 @@
package output

import (
    "fmt"
    "io"
)

// ProgressReporter prints textual progress for rename operations.
type ProgressReporter struct {
    writer io.Writer
    total  int
    count  int
}

// NewProgressReporter constructs a reporter bound to the supplied writer.
func NewProgressReporter(w io.Writer, total int) *ProgressReporter {
    if w == nil {
        w = io.Discard
    }
    return &ProgressReporter{writer: w, total: total}
}

// Step registers a completed operation and prints the progress.
func (r *ProgressReporter) Step(from, to string) error {
    if r == nil {
        return nil
    }
    r.count++
    _, err := fmt.Fprintf(r.writer, "[%d/%d] %s -> %s\n", r.count, r.total, from, to)
    return err
}

// Complete emits a summary line after all operations finish.
func (r *ProgressReporter) Complete() error {
    if r == nil {
        return nil
    }
    _, err := fmt.Fprintf(r.writer, "Completed %d rename(s).\n", r.count)
    return err
}
@@ -46,3 +46,53 @@ func (f *tableFormatter) WriteSummary(w io.Writer, summary Summary) error {
    _, err := fmt.Fprintln(w, DefaultSummaryLine(summary))
    return err
}

// AIPlanRow represents a single AI plan preview row.
type AIPlanRow struct {
    Sequence  string
    Original  string
    Proposed  string
    Sanitized string
}

// AIPlanTable renders AI plan previews in a tabular format.
type AIPlanTable struct {
    writer *tabwriter.Writer
}

// NewAIPlanTable constructs a table for AI plan previews.
func NewAIPlanTable() *AIPlanTable {
    return &AIPlanTable{}
}

// Begin writes the header for the AI plan table.
func (t *AIPlanTable) Begin(w io.Writer) error {
    if t.writer != nil {
        return fmt.Errorf("ai plan table already initialized")
    }
    t.writer = tabwriter.NewWriter(w, 0, 4, 2, ' ', 0)
    _, err := fmt.Fprintln(t.writer, "SEQ\tORIGINAL\tPROPOSED\tSANITIZED")
    return err
}

// WriteRow appends a plan row to the table.
func (t *AIPlanTable) WriteRow(row AIPlanRow) error {
    if t.writer == nil {
        return fmt.Errorf("ai plan table not initialized")
    }
    _, err := fmt.Fprintf(t.writer, "%s\t%s\t%s\t%s\n", row.Sequence, row.Original, row.Proposed, row.Sanitized)
    return err
}

// End flushes the table to the underlying writer.
func (t *AIPlanTable) End(w io.Writer) error {
    if t.writer == nil {
        return fmt.Errorf("ai plan table not initialized")
    }
    if err := t.writer.Flush(); err != nil {
        return err
    }
    _, err := fmt.Fprintln(w)
    t.writer = nil
    return err
}

scripts/smoke-test-ai.sh (new executable file, 25 lines)
@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -euo pipefail

if [[ -z "${RENAMER_AI_KEY:-}" ]]; then
  echo "RENAMER_AI_KEY must be set" >&2
  exit 1
fi

tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

mkdir -p "$tmp/nested"
touch "$tmp/IMG_0001.jpg"
touch "$tmp/nested/video01.mp4"

echo "Previewing AI suggestions..."
renamer ai --path "$tmp" --prompt "Smoke demo" --dry-run <<<'q'

echo "Applying AI suggestions..."
renamer ai --path "$tmp" --prompt "Smoke demo" --yes

echo "Undoing last AI batch..."
renamer undo --path "$tmp"

echo "Smoke test completed."

specs/008-ai-rename-command/checklists/requirements.md (new file, 34 lines)
@@ -0,0 +1,34 @@
# Specification Quality Checklist: AI-Assisted Rename Command

**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2025-11-05
**Feature**: [Link to spec.md](/home/yanghao/projects/renamer/specs/008-ai-rename-command/spec.md)

## Content Quality

- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed

## Requirement Completeness

- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified

## Feature Readiness

- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification

## Notes

- All criteria satisfied.

specs/008-ai-rename-command/contracts/rename-flow.yaml (new file, 93 lines)
@@ -0,0 +1,93 @@
openapi: 3.1.0
info:
  title: Genkit renameFlow Contract
  version: 0.1.0
  description: >
    Contract for the `renameFlow` Genkit workflow that produces structured rename
    suggestions consumed by the `renamer ai` CLI command.
servers:
  - url: genkit://renameFlow
    description: Logical identifier for local Genkit execution.
paths:
  /renameFlow: # logical entry point (function invocation)
    post:
      summary: Generate rename suggestions for provided file names.
      description: Mirrors `genkit.Run(ctx, renameFlow, input)` in the CLI integration.
      operationId: runRenameFlow
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/RenameFlowInput'
      responses:
        '200':
          description: Successful rename suggestion payload.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/RenameFlowOutput'
        '400':
          description: Validation error (e.g., invalid filenames, mismatched counts).
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/ErrorResponse'

components:
  schemas:
    RenameFlowInput:
      type: object
      required:
        - fileNames
        - userPrompt
      properties:
        fileNames:
          type: array
          description: Ordered list of basenames collected from CLI traversal.
          minItems: 1
          maxItems: 200
          items:
            type: string
            pattern: '^[^\\/:*?"<>|]+$'
        userPrompt:
          type: string
          description: Optional guidance supplied by the user.
          minLength: 0
          maxLength: 500
    RenameFlowOutput:
      type: object
      required:
        - suggestions
      properties:
        suggestions:
          type: array
          description: Suggested rename entries aligned with the input order.
          minItems: 1
          items:
            $ref: '#/components/schemas/RenameSuggestion'
    RenameSuggestion:
      type: object
      required:
        - original
        - suggested
      properties:
        original:
          type: string
          description: Original basename supplied in the request.
          pattern: '^[^\\/:*?"<>|]+$'
        suggested:
          type: string
          description: Proposed basename retaining the original extension.
          pattern: '^[^\\/:*?"<>|]+$'
    ErrorResponse:
      type: object
      required:
        - error
      properties:
        error:
          type: string
          description: Human-readable reason for failure.
        remediation:
          type: string
          description: Suggested user action (e.g., adjust scope, reduce file count).

specs/008-ai-rename-command/data-model.md (new file, 62 lines)
@@ -0,0 +1,62 @@
# Data Model – Genkit renameFlow & AI CLI

## Entity: RenameFlowInput
- **Fields**
  - `fileNames []string` — Ordered list of basenames collected from scope traversal.
  - `userPrompt string` — Optional user guidance merged into the prompt template.
- **Validation Rules**
  - Require at least one filename; enforce a maximum of 200 per invocation (soft limit before batching).
  - Reject names containing path separators; traversal supplies basenames only.
  - Trim whitespace from `userPrompt`; clamp length (e.g., 1–500 characters) to guard against prompt injection.
- **Relationships**
  - Serialized to JSON and passed into `genkit.Generate()` as the model input payload.
  - Logged with invocation metadata to support replay/debugging.

## Entity: RenameFlowOutput
- **Fields**
  - `suggestions []RenameSuggestion` — AI-produced rename pairs in the same order as the input list when possible.
- **Validation Rules**
  - `len(suggestions)` MUST equal the length of the input `fileNames` before approval.
  - Each suggestion MUST pass filename safety checks (see `RenameSuggestion`).
  - The JSON payload MUST parse cleanly with no additional top-level properties.
- **Relationships**
  - Returned to the CLI bridge, transformed into preview rows and ledger entries.

## Entity: RenameSuggestion
- **Fields**
  - `original string` — Original basename (must match an item from the input list).
  - `suggested string` — Proposed basename with the same extension as `original`.
- **Validation Rules**
  - Preserve the extension suffix (text after the last `.`); fail if it is changed or removed.
  - Disallow illegal filesystem characters: `/ \ : * ? " < > |` and control bytes.
  - Enforce case-insensitive uniqueness across all `suggested` values to avoid collisions.
  - Reject empty or whitespace-only suggestions; trim incidental spaces.
- **Relationships**
  - Consumed by the preview renderer to display mappings.
  - Persisted in ledger metadata alongside the user prompt and model ID.

## Entity: AISuggestionBatch (Go side)
- **Fields**
  - `Scope traversal.ScopeResult` — Snapshot of files selected for AI processing.
  - `Prompt string` — Rendered prompt sent to Genkit (stored for debugging).
  - `ModelID string` — Identifier for the AI model used during generation.
  - `Suggestions []RenameSuggestion` — Parsed results aligned with scope entries.
  - `Warnings []string` — Issues detected during validation (duplicates, unchanged names, limit truncation).
- **Validation Rules**
  - Warnings that correspond to hard failures (duplicate targets, invalid characters) block apply until resolved.
  - Scope result order MUST align with suggestion order to keep the preview deterministic.
- **Relationships**
  - Passed into the output renderer for table display.
  - Written to the ledger with `history.RecordBatch` for undo.

## Entity: FlowInvocationLog
- **Fields**
  - `InvocationID string` — UUID tying output to a ledger entry.
  - `Timestamp time.Time` — Invocation time for the audit trail.
  - `Duration time.Duration` — Round-trip latency for success criteria tracking.
  - `InputSize int` — Number of filenames processed (used for batching heuristics).
  - `Errors []string` — Captured model or validation errors.
- **Validation Rules**
  - Duration is recorded only on successful completions; errors are populated otherwise.
- **Relationships**
  - Optional: appended to debug logs or analytics for performance monitoring (non-ledger).

specs/008-ai-rename-command/plan.md (new file, 90 lines)
@@ -0,0 +1,90 @@
|
||||
# Implementation Plan: AI-Assisted Rename Command
|
||||
|
||||
**Branch**: `008-ai-rename-command` | **Date**: 2025-11-05 | **Spec**: `specs/008-ai-rename-command/spec.md`
|
||||
**Input**: Feature specification from `/specs/008-ai-rename-command/spec.md`
|
||||
|
||||
**Note**: This plan grounds the `/speckit.plan` prompt “Genkit Flow 设计 (Genkit Flow Design)” by detailing how the CLI and Genkit workflow collaborate to deliver structured AI rename suggestions.
|
||||
|
||||
## Summary
|
||||
|
||||
Design and implement a `renameFlow` Genkit workflow that produces deterministic, JSON-formatted rename suggestions and wire it into a new `renamer ai` CLI path. The plan covers prompt templating, JSON validation, scope handling parity with existing commands, preview/confirmation UX, ledger integration, and fallback/error flows to keep AI-generated batches auditable and undoable.
|
||||
|
||||
## Technical Context
|
||||
|
||||
**Language/Version**: Go 1.24 (CLI + Genkit workflow)
|
||||
**Primary Dependencies**: `spf13/cobra`, `spf13/pflag`, internal traversal/history/output packages, `github.com/firebase/genkit/go`, OpenAI-compatible provider bridge
|
||||
**Storage**: Local filesystem plus append-only `.renamer` ledger
|
||||
**Testing**: `go test ./...` including flow unit tests for prompt/render/validation, contract + integration tests under `tests/`
|
||||
**Target Platform**: Cross-platform CLI executed from local shells; Genkit workflow runs in-process via Go bindings
|
||||
**Project Type**: Single Go CLI project with additional internal AI packages
|
||||
**Performance Goals**: Generate rename suggestions for ≤200 files within 30 seconds end-to-end (per SC-001)
|
||||
**Constraints**: Preview-first safety, undoable ledger entries, scope parity with existing commands, deterministic JSON responses, offline fallback excluded (network required)
|
||||
**Scale/Scope**: Handles hundreds of files per invocation, with potential thousands when batched; assumes human-in-the-loop confirmation
|
||||
|
||||
## Constitution Check
|
||||
|
||||
- Preview flow MUST show deterministic rename mappings and require explicit confirmation (Preview-First Safety). ✅ `renamer ai` reuses preview renderer to display AI suggestions, blocks apply until `--yes` or interactive confirmation, and supports `--dry-run`.
|
||||
- Undo strategy MUST describe how the `.renamer` ledger entry is written and reversed (Persistent Undo Ledger). ✅ Accepted batches append AI metadata (prompt, model, rationale) to ledger entries; undo replays via existing ledger service with no schema break.
|
||||
- Planned rename rules MUST document their inputs, validations, and composing order (Composable Rule Engine). ✅ `renameFlow` enforces rename suggestion structure (original, suggested), keeps extensions intact, and CLI validates conflicts before applying.
|
||||
- Scope handling MUST cover files vs directories (`-d`), recursion (`-r`), and extension filtering via `-e` without escaping the requested path (Scope-Aware Traversal). ✅ CLI gathers scope using shared traversal component, honoring existing flags before passing filenames to Genkit.
|
||||
- CLI UX plan MUST confirm Cobra usage, flag naming, help text, and automated tests for preview/undo flows (Ergonomic CLI Stewardship). ✅ New `ai` command extends Cobra root with existing persistent flags, adds prompt/model overrides, and includes contract + integration coverage for preview/apply/undo.
|
||||
|
||||
## Project Structure
|
||||
|
||||
### Documentation (this feature)
|
||||
|
||||
```text
|
||||
specs/008-ai-rename-command/
|
||||
├── plan.md
|
||||
├── research.md
|
||||
├── data-model.md
|
||||
├── quickstart.md
|
||||
├── contracts/
|
||||
└── spec.md
|
||||
```

### Source Code (repository root)

```text
cmd/
├── root.go
├── ai.go            # new Cobra command wiring + RunE
├── list.go
├── replace.go
├── remove.go
└── undo.go

internal/
├── ai/
│   ├── flow/
│   │   ├── rename_flow.go   # Genkit flow definition using Go SDK
│   │   └── prompt.tmpl      # prompt template with rules/formatting
│   ├── client.go            # wraps Genkit invocation + response handling
│   ├── preview.go           # maps RenameSuggestion -> preview rows
│   ├── validation.go        # conflict + filename safety checks
│   └── session.go           # manages user guidance refinements
├── traversal/
├── output/
├── history/
└── ...

tests/
├── contract/
│   └── ai_command_preview_test.go   # ensures JSON contract adherence
├── integration/
│   └── ai_flow_apply_test.go        # preview, confirm, undo happy path
└── fixtures/
    └── ai/
        └── sample_photos/           # test assets for AI rename flows

scripts/
└── smoke-test-ai.sh   # optional future smoke harness (planned)
```

**Structure Decision**: Implement the Genkit `renameFlow` directly within Go (`internal/ai/flow`) while reusing shared traversal/output pipelines through the new `ai` command. Tests mirror existing command coverage with contract and integration suites.

## Complexity Tracking

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|--------------------------------------|
| _None_ | — | — |

29
specs/008-ai-rename-command/quickstart.md
Normal file
@@ -0,0 +1,29 @@
# Quickstart – AI Rename Command

1. **Preview AI suggestions before applying.**

   ```bash
   renamer ai --path ./photos --prompt "Hawaii vacation album"
   ```

   - Traverses `./photos` (non-recursive by default) and sends the collected basenames to `renameFlow`.
   - Displays a preview table with original → suggested names and any validation warnings.

2. **Adjust scope or guidance and regenerate.**

   ```bash
   renamer ai --path ./photos --recursive --hidden \
     --prompt "Group by location, keep capture order"
   ```

   - `--recursive` includes nested folders; `--hidden` opts in to hidden files.
   - Re-running the command with updated guidance regenerates suggestions without modifying files.

3. **Apply suggestions non-interactively when satisfied.**

   ```bash
   renamer ai --path ./photos --prompt "Hawaii vacation" --yes
   ```

   - `--yes` skips the interactive confirmation while still logging the preview.
   - Use `--dry-run` to inspect output programmatically without touching the filesystem.

4. **Undo the most recent AI batch if needed.**

   ```bash
   renamer undo
   ```

   - Restores original filenames using the ledger entry created by the AI command.

21
specs/008-ai-rename-command/research.md
Normal file
@@ -0,0 +1,21 @@
# Phase 0 Research – Genkit renameFlow

## Decision: Enforce JSON-Only Responses via Prompt + Guardrails

- **Rationale**: The CLI must parse deterministic structures. Embedding an explicit JSON schema example, restating illegal character rules, and wrapping the Genkit call with `OutputJSON()` (or equivalent) reduces hallucinated prose and aligns with ledger needs.
- **Alternatives considered**: Post-processing free-form text was rejected because it increases parsing failures and weakens auditability. Relaxing constraints to “JSON preferred” was rejected to avoid brittle regex extraction.

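A minimal sketch of what this guardrail implies on the consumer side: decode strictly and reject any trailing prose after the JSON array. The `Suggestion` field names here are illustrative assumptions, not the project's committed schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// Suggestion loosely mirrors the RenameSuggestion entity; field names are
// illustrative, not the project's actual schema.
type Suggestion struct {
	Original  string `json:"original"`
	Suggested string `json:"suggested"`
	Rationale string `json:"rationale,omitempty"`
}

// decodeStrict accepts only a pure JSON array of suggestions, which is the
// failure mode the prompt guardrails target.
func decodeStrict(raw string) ([]Suggestion, error) {
	dec := json.NewDecoder(strings.NewReader(raw))
	dec.DisallowUnknownFields() // surface schema drift instead of silently ignoring it
	var out []Suggestion
	if err := dec.Decode(&out); err != nil {
		return nil, fmt.Errorf("model response is not valid suggestion JSON: %w", err)
	}
	// Any content after the array (e.g. chatty prose) violates the JSON-only contract.
	if _, err := dec.Token(); err != io.EOF {
		return nil, fmt.Errorf("unexpected content after JSON array")
	}
	return out, nil
}

func main() {
	good := `[{"original":"IMG_0001.jpg","suggested":"01.hawaii.jpg"}]`
	s, err := decodeStrict(good)
	fmt.Println(len(s), err == nil)

	_, err = decodeStrict(good + " Here is your rename plan!")
	fmt.Println(err != nil)
}
```

Checking for `io.EOF` after the decode is what turns "JSON preferred" into "JSON required" at parse time.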
## Decision: Keep Prompt Template as External File with Go Template Variables

- **Rationale**: Storing the prompt under `internal/ai/flow/prompt.tmpl` keeps localization and iteration separate from code. Using Go-style templating enables the flow to substitute the file list and user prompt consistently while making it easier to unit test rendered prompts.
- **Alternatives considered**: Hardcoding prompt strings inside the Go flow was rejected due to limited reuse and poor readability; using Markdown-based prompts was rejected because the model might echo formatting in its response.

## Decision: Invoke Genkit Flow In-Process via Go SDK

- **Rationale**: The spec emphasizes local filesystem workflows without network services. Using the Genkit Go SDK keeps execution in-process, avoids packaging a separate runtime, and fits CLI invocation patterns.
- **Alternatives considered**: Hosting a long-lived HTTP service was rejected because it complicates installation and violates the local-only assumption. Spawning an external Node process was rejected due to additional toolchain requirements.

## Decision: Validate Suggestions Against Existing Filename Rules Before Apply

- **Rationale**: Even with JSON enforcement, the model could suggest duplicates, rename directories, or remove extensions. Reusing internal validation logic ensures suggestions honor filesystem invariants and matches ledger expectations before touching disk.
- **Alternatives considered**: Trusting AI output without local validation was rejected due to risk of destructive renames. Silently auto-correcting invalid names was rejected because it obscures AI behavior from users.

## Decision: Align Testing with Contract + Golden Prompt Fixtures

- **Rationale**: Contract tests with fixed model responses (via canned JSON) allow deterministic verification, while golden prompt fixtures ensure template rendering matches expectations. This combo offers coverage without depending on live AI calls in CI.
- **Alternatives considered**: Live integration tests hitting the model were rejected due to cost, flakiness, and determinism concerns. Pure unit tests without prompt verification were rejected because prompt regressions directly impact model quality.

102
specs/008-ai-rename-command/spec.md
Normal file
@@ -0,0 +1,102 @@
# Feature Specification: AI-Assisted Rename Command

**Feature Branch**: `008-ai-rename-command`
**Created**: 2025-11-05
**Status**: Draft
**Input**: User description: "Add an `ai` subcommand that uses Go Genkit to call AI capabilities to rename the file list."

## Clarifications

### Session 2025-11-05

- Q: How should the CLI handle filename privacy when calling the AI service? → A: Send raw filenames without masking.
- Q: How should the AI provider credential be supplied to the CLI? → A: Read from an environment variable (e.g., `RENAMER_AI_KEY`).

## User Scenarios & Testing *(mandatory)*

### User Story 1 - Request AI rename plan (Priority: P1)

As a command-line user, I can request AI-generated rename suggestions for a set of files so that I get a consistent naming plan without defining rules manually.

**Why this priority**: This delivers the core value of leveraging AI to save time on naming decisions for large batches.

**Independent Test**: Execute the AI rename command against a sample directory and verify a preview of suggested names is produced without altering files.

**Acceptance Scenarios**:

1. **Given** a directory with mixed file names and optional user instructions, **When** the user runs the AI rename command, **Then** the tool returns a preview mapping each original name to a suggested name.
2. **Given** a scope where the user has enabled the hidden-files flag, **When** the AI rename command runs, **Then** the preview reflects only the files allowed by the selected scope options.

---

### User Story 2 - Refine and confirm suggestions (Priority: P2)

As a command-line user, I can review, adjust, or regenerate AI suggestions before applying them so that I have control over the final names.

**Why this priority**: Users need confidence and agency to ensure AI suggestions match their intent, reducing the risk of undesired renames.

**Independent Test**: Run the AI rename command, adjust the instruction text, regenerate suggestions, and confirm the tool updates the preview without applying changes until approval.

**Acceptance Scenarios**:

1. **Given** an initial AI preview, **When** the user supplies new guidance or rejects the batch, **Then** the tool allows a regeneration or cancellation without renaming any files.
2. **Given** highlighted conflicts or invalid suggestions in the preview, **When** the user attempts to accept the batch, **Then** the tool blocks execution and instructs the user to resolve the issues.

---

### User Story 3 - Apply and audit AI renames (Priority: P3)

As a command-line user, I can apply approved AI rename suggestions and rely on the existing history and undo mechanisms so that AI-driven batches are traceable and reversible.

**Why this priority**: Preserving auditability and undo aligns AI-driven actions with existing safety guarantees.

**Independent Test**: Accept an AI rename batch, verify files are renamed, the ledger records the operation, and the undo command restores originals.

**Acceptance Scenarios**:

1. **Given** an approved AI rename preview, **When** the user confirms execution, **Then** the files are renamed and the batch details are recorded in the ledger with AI-specific metadata.
2. **Given** an executed AI rename batch, **When** the user runs the undo command, **Then** all affected files return to their original names and the ledger reflects the reversal.

---

### Edge Cases

- AI service fails, times out, or returns an empty response; the command must preserve current filenames and surface actionable error guidance.
- The AI proposes duplicate, conflicting, or filesystem-invalid names; the preview must flag each item and prevent application until resolved.
- The selected scope includes more files than the AI request limit; the command must communicate limits and guide the user to narrow the scope or batch the request.
- The ledger already contains pending batches for the same files; the tool must clarify how the new AI batch interacts with existing history before proceeding.

## Requirements *(mandatory)*

### Functional Requirements

- **FR-001**: The CLI must gather the current file scope using existing flags and present the selected files and optional instructions to the AI suggestion service.
- **FR-002**: The system must generate a human-readable preview that pairs each original filename with the AI-proposed name and indicates the rationale or confidence when available.
- **FR-003**: The CLI must block application when the preview contains conflicts, invalid names, or missing suggestions and must explain the required corrective actions.
- **FR-004**: Users must be able to modify guidance and request a new set of AI suggestions without leaving the command until they accept or exit.
- **FR-005**: When users approve a preview, the tool must execute the rename batch, record it in the ledger with the user guidance and AI attribution, and support undo via the existing command.
- **FR-006**: The command must support dry-run mode that exercises the AI interaction and preview without writing to disk, clearly labeling the output as non-destructive.
- **FR-007**: The system must handle AI service errors gracefully by retaining current filenames, logging diagnostic information, and providing retry instructions.

### Key Entities

- **AISuggestionBatch**: Captures the scope summary, user guidance, timestamp, AI provider metadata, and the list of rename suggestions evaluated during a session.
- **RenameSuggestion**: Represents a single proposed change with original name, suggested name, validation status, and optional rationale.
- **UserGuidance**: Stores free-form instructions supplied by the user, including any follow-up refinements applied within the session.

## Assumptions

- AI rename suggestions are generated within existing rate limits; large directories may require the user to split the work manually.
- Users running the AI command have network access and credentials required to reach the AI service.
- Existing ledger and undo mechanisms remain unchanged and can store additional metadata without format migrations.
- AI requests transmit the original filenames without masking; users must avoid including sensitive names when invoking the command.
- The CLI reads AI provider credentials from environment variables (default `RENAMER_AI_KEY`); no interactive credential prompts are provided.

## Success Criteria *(mandatory)*

### Measurable Outcomes

- **SC-001**: 95% of AI rename previews for up to 200 files complete in under 30 seconds from command invocation.
- **SC-002**: 90% of accepted AI rename batches complete without conflicts or manual post-fix adjustments reported by users.
- **SC-003**: 100% of AI-driven rename batches remain fully undoable via the existing undo command.
- **SC-004**: In post-launch surveys, at least 80% of participating users report that AI suggestions improved their rename workflow efficiency.

119
specs/008-ai-rename-command/tasks.md
Normal file
@@ -0,0 +1,119 @@
# Tasks: AI-Assisted Rename Command

**Input**: Design documents from `/specs/008-ai-rename-command/`
**Prerequisites**: plan.md, spec.md, research.md, data-model.md, contracts/

## Format: `[ID] [P?] [Story] Description`

- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions

## Phase 1: Setup (Shared Infrastructure)

**Purpose**: Add Go-based Genkit dependency and scaffold AI flow package.

- [x] T001 Ensure Genkit Go module dependency (`github.com/firebase/genkit/go`) is present in `go.mod` / `go.sum`
- [x] T002 Create AI flow package directories in `internal/ai/flow/`
- [x] T003 [P] Add Go test harness scaffold for AI flow in `internal/ai/flow/flow_test.go`

---

## Phase 2: Foundational (Blocking Prerequisites)

**Purpose**: Provide shared assets and configuration required by all user stories.

- [x] T004 Author AI rename prompt template with JSON instructions in `internal/ai/flow/prompt.tmpl`
- [x] T005 Implement reusable JSON parsing helpers for Genkit responses in `internal/ai/flow/json.go`
- [x] T006 Implement AI credential loader reading `RENAMER_AI_KEY` in `internal/ai/config.go`
- [x] T007 Register the `ai` Cobra command scaffold in `cmd/root.go`

---

## Phase 3: User Story 1 - Request AI rename plan (Priority: P1) 🎯 MVP

**Goal**: Allow users to preview AI-generated rename suggestions for a scoped set of files.

**Independent Test**: Run `renamer ai --path <dir> --dry-run` and verify the preview table lists original → suggested names without renaming files.

### Tests for User Story 1

- [x] T008 [P] [US1] Add prompt rendering unit test covering file list formatting in `internal/ai/flow/prompt_test.go`
- [x] T009 [P] [US1] Create CLI preview contract test enforcing JSON schema in `tests/contract/ai_command_preview_test.go`

### Implementation for User Story 1

- [x] T010 [US1] Implement `renameFlow` Genkit workflow with JSON-only response in `internal/ai/flow/rename_flow.go`
- [x] T011 [P] [US1] Build Genkit client wrapper and response parser in `internal/ai/client.go`
- [x] T012 [P] [US1] Implement suggestion validation rules (extensions, duplicates, illegal chars) in `internal/ai/validation.go`
- [x] T013 [US1] Map AI suggestions to preview rows with rationale fields in `internal/ai/preview.go`
- [x] T014 [US1] Wire `renamer ai` command to gather scope, invoke AI flow, and render preview in `cmd/ai.go`
- [x] T015 [US1] Document preview usage and flags for `renamer ai` in `docs/cli-flags.md`

---

## Phase 4: User Story 2 - Refine and confirm suggestions (Priority: P2)

**Goal**: Let users iterate on AI guidance, regenerate suggestions, and resolve conflicts before applying changes.

**Independent Test**: Run `renamer ai` twice with updated prompts, confirm the regenerated preview replaces the previous batch, and verify conflicting targets block approval with actionable warnings.

### Tests for User Story 2

- [x] T016 [P] [US2] Add integration test for preview regeneration and cancel flow in `tests/integration/ai_preview_regen_test.go`

### Implementation for User Story 2

- [x] T017 [US2] Extend interactive loop in `cmd/ai.go` to support prompt refinement and regeneration commands
- [x] T018 [P] [US2] Enhance conflict and warning annotations for regenerated suggestions in `internal/ai/validation.go`
- [x] T019 [US2] Persist per-session prompt history and guidance notes in `internal/ai/session.go`

---

## Phase 5: User Story 3 - Apply and audit AI renames (Priority: P3)

**Goal**: Execute approved AI rename batches, record them in the ledger, and ensure undo restores originals.

**Independent Test**: Accept an AI preview with `--yes`, verify files are renamed, the ledger entry captures AI metadata, and `renamer undo` restores originals.

### Tests for User Story 3

- [x] T020 [P] [US3] Add integration test covering apply + undo lifecycle in `tests/integration/ai_flow_apply_test.go`
- [x] T021 [P] [US3] Add ledger contract test verifying AI metadata persistence in `tests/contract/ai_ledger_entry_test.go`

### Implementation for User Story 3

- [x] T022 [US3] Implement confirm/apply execution path with `--yes` handling in `cmd/ai.go`
- [x] T023 [P] [US3] Append AI batch metadata to ledger entries in `internal/history/ai_entry.go`
- [x] T024 [US3] Ensure undo replay reads AI ledger metadata in `internal/history/undo.go`
- [x] T025 [US3] Display progress and per-file outcomes during apply in `internal/output/progress.go`

---

## Phase 6: Polish & Cross-Cutting

**Purpose**: Final quality improvements, docs, and operational readiness.

- [ ] T026 Add smoke test script invoking `renamer ai` preview/apply flows in `scripts/smoke-test-ai.sh`
- [x] T027 Update top-level documentation with AI command overview and credential requirements in `README.md`

---

## Dependencies

- Complete Phases 1 → 2 before starting user stories.
- User Story order: US1 → US2 → US3 (each builds on prior capabilities).
- Polish tasks run after all user stories are feature-complete.

## Parallel Execution Opportunities

- US1: T011 and T012 can run in parallel after T010 completes.
- US2: T018 can run in parallel with T017 once session loop scaffolding exists.
- US3: T023 and T025 can proceed concurrently after T022 defines the apply workflow.

## Implementation Strategy

1. Deliver User Story 1 as the MVP (preview-only experience).
2. Iterate on the refinement workflow (User Story 2) to reduce the risk of bad suggestions before apply.
3. Add apply + ledger integration (User Story 3) to complete the end-to-end flow.
4. Finish with polish tasks to solidify operational readiness.

48
tests/contract/ai_command_preview_test.go
Normal file
@@ -0,0 +1,48 @@
package contract

import (
	"bytes"
	"path/filepath"
	"strings"
	"testing"

	"github.com/rogeecn/renamer/cmd"
)

func TestAICommandPreviewTable(t *testing.T) {
	t.Setenv("RENAMER_AI_KEY", "test-key")

	tmp := t.TempDir()
	createFile(t, filepath.Join(tmp, "IMG_0001.jpg"))
	createFile(t, filepath.Join(tmp, "trip-notes.txt"))

	root := cmd.NewRootCommand()
	root.SetArgs([]string{"ai", "--path", tmp, "--prompt", "Travel Memories", "--dry-run"})

	var buf bytes.Buffer
	root.SetIn(strings.NewReader("\n"))
	root.SetOut(&buf)
	root.SetErr(&buf)

	if err := root.Execute(); err != nil {
		t.Fatalf("ai command returned error: %v\noutput: %s", err, buf.String())
	}

	output := buf.String()

	if !strings.Contains(output, "IMG_0001.jpg") {
		t.Fatalf("expected original filename in preview, got: %s", output)
	}

	if !strings.Contains(output, "trip-notes.txt") {
		t.Fatalf("expected secondary filename in preview, got: %s", output)
	}

	if !strings.Contains(output, "01.travel-memories-img-0001.jpg") {
		t.Fatalf("expected deterministic suggestion in preview, got: %s", output)
	}

	if !strings.Contains(output, "02.travel-memories-trip-notes.txt") {
		t.Fatalf("expected sequential suggestion for second file, got: %s", output)
	}
}

68
tests/contract/ai_ledger_entry_test.go
Normal file
@@ -0,0 +1,68 @@
package contract

import (
	"context"
	"io"
	"path/filepath"
	"testing"

	"github.com/rogeecn/renamer/internal/ai"
	"github.com/rogeecn/renamer/internal/ai/flow"
)

func TestAIMetadataPersistedInLedgerEntry(t *testing.T) {
	t.Setenv("RENAMER_AI_KEY", "test-key")

	tmp := t.TempDir()
	createFile(t, filepath.Join(tmp, "clip.mov"))

	suggestions := []flow.Suggestion{
		{Original: "clip.mov", Suggested: "highlight-01.mov"},
	}

	validation := ai.ValidateSuggestions([]string{"clip.mov"}, suggestions)
	if len(validation.Conflicts) != 0 {
		t.Fatalf("expected no conflicts, got %#v", validation)
	}

	entry, err := ai.Apply(context.Background(), tmp, suggestions, validation, ai.ApplyMetadata{
		Prompt:            "Highlight Reel",
		PromptHistory:     []string{"Highlight Reel", "Celebration Cut"},
		Notes:             []string{"accepted preview"},
		Model:             "googleai/gemini-1.5-flash",
		SequenceSeparator: "_",
	}, io.Discard)
	if err != nil {
		t.Fatalf("apply error: %v", err)
	}

	if entry.Command != "ai" {
		t.Fatalf("expected command 'ai', got %q", entry.Command)
	}

	if entry.Metadata == nil {
		t.Fatalf("expected metadata to be recorded")
	}

	if got := entry.Metadata["prompt"]; got != "Highlight Reel" {
		t.Fatalf("unexpected prompt metadata: %#v", got)
	}

	history, ok := entry.Metadata["promptHistory"].([]string)
	if !ok || len(history) != 2 {
		t.Fatalf("unexpected prompt history: %#v", entry.Metadata["promptHistory"])
	}

	model, _ := entry.Metadata["model"].(string)
	if model == "" {
		t.Fatalf("expected model metadata to be present")
	}

	if sep, ok := entry.Metadata["sequenceSeparator"].(string); !ok || sep != "_" {
		t.Fatalf("expected sequence separator metadata, got %#v", entry.Metadata["sequenceSeparator"])
	}

	if _, err := ai.Apply(context.Background(), tmp, suggestions, validation, ai.ApplyMetadata{Prompt: "irrelevant"}, io.Discard); err == nil {
		t.Fatalf("expected error when renaming non-existent file")
	}
}

82
tests/integration/ai_flow_apply_test.go
Normal file
@@ -0,0 +1,82 @@
package integration

import (
	"bytes"
	"encoding/json"
	"os"
	"path/filepath"
	"strings"
	"testing"

	renamercmd "github.com/rogeecn/renamer/cmd"
	"github.com/rogeecn/renamer/internal/history"
)

func TestAIRenameApplyAndUndo(t *testing.T) {
	t.Setenv("RENAMER_AI_KEY", "test-key")

	tmp := t.TempDir()
	createFile(t, filepath.Join(tmp, "IMG_2001.jpg"))
	createFile(t, filepath.Join(tmp, "session-notes.txt"))

	root := renamercmd.NewRootCommand()
	root.SetArgs([]string{"ai", "--path", tmp, "--prompt", "Album Shots", "--yes"})
	root.SetIn(strings.NewReader(""))
	var output bytes.Buffer
	root.SetOut(&output)
	root.SetErr(&output)

	if err := root.Execute(); err != nil {
		t.Fatalf("ai command returned error: %v\noutput: %s", err, output.String())
	}

	ledgerPath := filepath.Join(tmp, ".renamer")
	data, err := os.ReadFile(ledgerPath)
	if err != nil {
		t.Fatalf("read ledger: %v", err)
	}
	lines := strings.Split(strings.TrimSpace(string(data)), "\n")
	if len(lines) == 0 {
		t.Fatalf("expected ledger entries")
	}
	var entry history.Entry
	if err := json.Unmarshal([]byte(lines[len(lines)-1]), &entry); err != nil {
		t.Fatalf("decode entry: %v", err)
	}
	if entry.Command != "ai" {
		t.Fatalf("expected command 'ai', got %q", entry.Command)
	}
	if len(entry.Operations) != 2 {
		t.Fatalf("expected 2 operations recorded, got %d", len(entry.Operations))
	}
	if entry.Metadata == nil || entry.Metadata["prompt"] != "Album Shots" {
		t.Fatalf("expected prompt metadata recorded, got %#v", entry.Metadata)
	}

	if sep, ok := entry.Metadata["sequenceSeparator"].(string); !ok || sep != "." {
		t.Fatalf("expected sequence separator metadata, got %#v", entry.Metadata["sequenceSeparator"])
	}

	for _, op := range entry.Operations {
		dest := filepath.Join(tmp, filepath.FromSlash(op.To))
		if _, err := os.Stat(dest); err != nil {
			t.Fatalf("expected destination %q to exist: %v", dest, err)
		}
	}

	undoCmd := renamercmd.NewRootCommand()
	undoCmd.SetArgs([]string{"undo", "--path", tmp})
	undoCmd.SetIn(strings.NewReader(""))
	undoCmd.SetOut(&output)
	undoCmd.SetErr(&output)
	if err := undoCmd.Execute(); err != nil {
		t.Fatalf("undo command error: %v\noutput: %s", err, output.String())
	}

	if _, err := os.Stat(filepath.Join(tmp, "IMG_2001.jpg")); err != nil {
		t.Fatalf("expected original root file restored: %v", err)
	}
	if _, err := os.Stat(filepath.Join(tmp, "session-notes.txt")); err != nil {
		t.Fatalf("expected original secondary file restored: %v", err)
	}
}

46
tests/integration/ai_preview_regen_test.go
Normal file
@@ -0,0 +1,46 @@
package integration

import (
	"bytes"
	"path/filepath"
	"strings"
	"testing"

	"github.com/rogeecn/renamer/cmd"
)

func TestAICommandSupportsPromptRefinement(t *testing.T) {
	t.Setenv("RENAMER_AI_KEY", "test-key")

	tmp := t.TempDir()
	createFile(t, filepath.Join(tmp, "IMG_1024.jpg"))
	createFile(t, filepath.Join(tmp, "notes/day1.txt"))

	root := cmd.NewRootCommand()
	root.SetArgs([]string{"ai", "--path", tmp})

	// Simulate editing the prompt then quitting.
	var output bytes.Buffer
	input := strings.NewReader("e\nVacation Highlights\nq\n")
	root.SetIn(input)
	root.SetOut(&output)
	root.SetErr(&output)

	if err := root.Execute(); err != nil {
		t.Fatalf("ai command returned error: %v\noutput: %s", err, output.String())
	}

	got := output.String()

	if !strings.Contains(got, "Current prompt: \"Vacation Highlights\"") {
		t.Fatalf("expected updated prompt in output, got: %s", got)
	}

	if !strings.Contains(got, "01.vacation-highlights-img-1024.jpg") {
		t.Fatalf("expected regenerated suggestion with new prefix, got: %s", got)
	}

	if !strings.Contains(got, "Session ended without applying changes.") {
		t.Fatalf("expected session completion message, got: %s", got)
	}
}

9
tools/genkit.go
Normal file
@@ -0,0 +1,9 @@
//go:build tools

package tools

// This file ensures Go modules keep the Genkit dependency pinned even before
// runtime wiring lands.
import (
	_ "github.com/firebase/genkit/go/genkit"
)