Commit 08af9fd — BACKEND FIX: Filter by credential provider during login

Note: this view is limited to 50 files because the commit contains too many changes; see the raw diff for the full changeset.
- Chatbot/.claude/commands/sp.adr.md +207 -0
- Chatbot/.claude/commands/sp.analyze.md +210 -0
- Chatbot/.claude/commands/sp.checklist.md +320 -0
- Chatbot/.claude/commands/sp.clarify.md +207 -0
- Chatbot/.claude/commands/sp.constitution.md +108 -0
- Chatbot/.claude/commands/sp.git.commit_pr.md +328 -0
- Chatbot/.claude/commands/sp.implement.md +161 -0
- Chatbot/.claude/commands/sp.phr.md +195 -0
- Chatbot/.claude/commands/sp.plan.md +115 -0
- Chatbot/.claude/commands/sp.reverse-engineer.md +1612 -0
- Chatbot/.claude/commands/sp.specify.md +284 -0
- Chatbot/.claude/commands/sp.tasks.md +163 -0
- Chatbot/.claude/commands/sp.taskstoissues.md +56 -0
- Chatbot/.env +1 -0
- Chatbot/.pytest_cache/CACHEDIR.TAG +4 -0
- Chatbot/.pytest_cache/README.md +8 -0
- Chatbot/.pytest_cache/v/cache/lastfailed +1 -0
- Chatbot/.pytest_cache/v/cache/nodeids +111 -0
- Chatbot/.pytest_cache/v/cache/stepwise +1 -0
- Chatbot/.specify/memory/constitution.md +274 -0
- Chatbot/.specify/scripts/bash/check-prerequisites.sh +166 -0
- Chatbot/.specify/scripts/bash/common.sh +156 -0
- Chatbot/.specify/scripts/bash/create-adr.sh +101 -0
- Chatbot/.specify/scripts/bash/create-new-feature.sh +302 -0
- Chatbot/.specify/scripts/bash/create-phr.sh +256 -0
- Chatbot/.specify/scripts/bash/setup-plan.sh +61 -0
- Chatbot/.specify/scripts/bash/update-agent-context.sh +799 -0
- Chatbot/.specify/templates/adr-template.md +56 -0
- Chatbot/.specify/templates/agent-file-template.md +28 -0
- Chatbot/.specify/templates/checklist-template.md +40 -0
- Chatbot/.specify/templates/phr-template.prompt.md +45 -0
- Chatbot/.specify/templates/plan-template.md +104 -0
- Chatbot/.specify/templates/spec-template.md +115 -0
- Chatbot/.specify/templates/tasks-template.md +251 -0
- Chatbot/CLAUDE.md +217 -0
- Chatbot/README.md +187 -0
- Chatbot/RUN_ME_FIRST.txt +63 -0
- Chatbot/__init__.py +0 -0
- Chatbot/backend/__init__.py +1 -0
- Chatbot/backend/cleanup_db.py +23 -0
- Chatbot/backend/db.py +61 -0
- Chatbot/backend/http_server.py +319 -0
- Chatbot/backend/mcp_server/__init__.py +12 -0
- Chatbot/backend/mcp_server/schemas.py +158 -0
- Chatbot/backend/mcp_server/server.py +173 -0
- Chatbot/backend/mcp_server/tools/__init__.py +13 -0
- Chatbot/backend/mcp_server/tools/add_task.py +95 -0
- Chatbot/backend/mcp_server/tools/complete_task.py +79 -0
- Chatbot/backend/mcp_server/tools/delete_all_tasks.py +37 -0
- Chatbot/backend/mcp_server/tools/delete_task.py +80 -0
Chatbot/.claude/commands/sp.adr.md
ADDED
@@ -0,0 +1,207 @@
---
description: Review planning artifacts for architecturally significant decisions and create ADRs.
---

# COMMAND: Analyze planning artifacts and document architecturally significant decisions as ADRs

## CONTEXT

The user has completed feature planning and needs to:

- Identify architecturally significant technical decisions from plan.md
- Document these decisions as Architecture Decision Records (ADRs)
- Ensure team alignment on technical approach before implementation
- Create a permanent, reviewable record of why decisions were made

Architecture Decision Records capture decisions that:

- Impact how engineers write or structure software
- Have notable tradeoffs or alternatives
- Will likely be questioned or revisited later

**User's additional input:**

$ARGUMENTS

## YOUR ROLE

Act as a senior software architect with expertise in:

- Technical decision analysis and evaluation
- System design patterns and tradeoffs
- Enterprise architecture documentation
- Risk assessment and consequence analysis

## OUTPUT STRUCTURE (with quick flywheel hooks)

Execute this workflow in 6 sequential steps. At Steps 2 and 4, apply lightweight Analyze→Measure checks:

- Analyze: Identify likely failure modes, specifically:
  - Over-granular ADRs: ADRs that document decisions which are trivial, low-impact, or do not affect architectural direction (e.g., naming conventions, minor refactorings).
  - Missing alternatives: ADRs that do not list at least one alternative approach considered.
- Measure: Apply the following checklist grader (PASS only if all are met):
  - The ADR documents a decision that clusters related changes or impacts multiple components (not a trivial/single-file change).
  - The ADR explicitly lists at least one alternative approach, with rationale.
  - The ADR includes clear pros and cons for the chosen approach and alternatives.
  - The ADR is concise but sufficiently detailed for future reference.

## Step 1: Load Planning Context

Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS.

Derive absolute paths:

- PLAN = FEATURE_DIR/plan.md (REQUIRED - abort if missing with "Run /sp.plan first")
- RESEARCH = FEATURE_DIR/research.md (if exists)
- DATA_MODEL = FEATURE_DIR/data-model.md (if exists)
- CONTRACTS_DIR = FEATURE_DIR/contracts/ (if exists)

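The path derivation above can be sketched in Python (illustrative only: the real command shells out to `check-prerequisites.sh`; the `FEATURE_DIR`/`AVAILABLE_DOCS` keys come from the step above, and the sample JSON string is a hypothetical stand-in for the script's output):

```python
import json
import os

def derive_paths(prereq_json):
    """Parse the prerequisite script's JSON output and derive artifact paths."""
    data = json.loads(prereq_json)
    feature_dir = data["FEATURE_DIR"]
    available = set(data.get("AVAILABLE_DOCS", []))

    # plan.md is REQUIRED; optional artifacts are added only when available.
    paths = {"PLAN": os.path.join(feature_dir, "plan.md")}
    for key, name in [("RESEARCH", "research.md"),
                      ("DATA_MODEL", "data-model.md"),
                      ("CONTRACTS_DIR", "contracts")]:
        if name in available:
            paths[key] = os.path.join(feature_dir, name)
    return paths

# Simulated output from check-prerequisites.sh --json (hypothetical values):
sample = '{"FEATURE_DIR": "/repo/specs/001-auth", "AVAILABLE_DOCS": ["plan.md", "research.md"]}'
paths = derive_paths(sample)
```

A missing `PLAN` file would then trigger the "Run /sp.plan first" abort described above.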
## Step 2: Extract Architectural Decisions (Analyze)

Load plan.md and available artifacts. Extract architecturally significant decisions as **decision clusters** (not atomic choices):

**✅ GOOD (Clustered):**

- "Frontend Stack" (Next.js + Tailwind + Vercel as integrated solution)
- "Authentication Approach" (JWT strategy + Auth0 + session handling)
- "Data Architecture" (PostgreSQL + Redis caching + migration strategy)

**❌ BAD (Over-granular):**

- Separate ADRs for Next.js, Tailwind, and Vercel
- Separate ADRs for each library choice

**Clustering Rules:**

- Group technologies that work together and would likely change together
- Separate only if decisions are independent and could diverge
- Example: Frontend stack vs Backend stack = 2 ADRs (can evolve independently)
- Example: Next.js + Tailwind + Vercel = 1 ADR (integrated, change together)

For each decision cluster, note: what was decided, why, where in docs.

## Step 3: Check Existing ADRs

Scan `history/adr/` directory. For each extracted decision:

- If covered by existing ADR → note reference
- If conflicts with existing ADR → flag conflict
- If not covered → mark as ADR candidate

## Step 4: Apply Significance Test (Measure)

For each ADR candidate, test:

- Does it impact how engineers write/structure software?
- Are there notable tradeoffs or alternatives?
- Will it be questioned or revisited later?

Only proceed with ADRs that pass ALL three tests.

## Step 5: Create ADRs (Improve)

For each qualifying decision cluster:

1. Generate concise title reflecting the cluster (e.g., "Frontend Technology Stack" not "Use Next.js")
2. Run `create-adr.sh "<title>"` from repo root
3. Parse JSON response for `adr_path` and `adr_id`
4. Read created file (contains template with {{PLACEHOLDERS}})
5. Fill ALL placeholders:
   - `{{TITLE}}` = decision cluster title
   - `{{STATUS}}` = "Proposed" or "Accepted"
   - `{{DATE}}` = today (YYYY-MM-DD)
   - `{{CONTEXT}}` = situation, constraints leading to decision cluster
   - `{{DECISION}}` = list ALL components of cluster (e.g., "Framework: Next.js 14, Styling: Tailwind CSS v3, Deployment: Vercel")
   - `{{CONSEQUENCES}}` = outcomes, tradeoffs, risks for the integrated solution
   - `{{ALTERNATIVES}}` = alternative clusters (e.g., "Remix + styled-components + Cloudflare")
   - `{{REFERENCES}}` = plan.md, research.md, data-model.md
6. Save file

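The placeholder fill in sub-steps 4–6 amounts to a straight substitution pass; a minimal sketch (the tiny inline template is hypothetical, standing in for the real `.specify/templates/adr-template.md`):

```python
def fill_adr_template(template, fields):
    """Replace every {{PLACEHOLDER}} in the template, failing loudly on leftovers."""
    out = template
    for key, value in fields.items():
        out = out.replace("{{" + key + "}}", value)
    if "{{" in out:
        # Matches the command's "Fill ALL placeholders" requirement.
        raise ValueError("unfilled placeholder remains in ADR")
    return out

# Hypothetical miniature template with three of the placeholders listed above:
template = "# {{TITLE}}\n\nStatus: {{STATUS}}\nDate: {{DATE}}\n"
adr = fill_adr_template(template, {
    "TITLE": "Frontend Technology Stack",
    "STATUS": "Proposed",
    "DATE": "2025-01-01",
})
```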
## Step 6: Report Completion

Output:

```
✅ ADR Review Complete - Created N ADRs, referenced M existing
```

List created ADRs with ID and title.

If conflicts detected:

```
⚠️ Conflicts with existing ADRs [IDs]. Review and update outdated decisions or revise plan.
```

If create-adr.sh fails: Report script error and skip that ADR.

## FORMATTING REQUIREMENTS

Present results in this exact structure:

```
✅ ADR Review Complete
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

📋 Created ADRs: {count}
- ADR-{id}: {title}
- ADR-{id}: {title}

📚 Referenced Existing: {count}
- ADR-{id}: {title}

⚠️ Conflicts Detected: {count}
- ADR-{id}: {conflict description}

Next Steps:
→ Resolve conflicts before proceeding to /sp.tasks
→ Review created ADRs with team
→ Update plan.md if needed

Acceptance Criteria (PASS only if all true)
- Decisions are clustered (not atomic), with explicit alternatives and tradeoffs
- Consequences cover both positive and negative outcomes
- References link back to plan and related docs
```

## ERROR HANDLING

If plan.md missing:

- Display: "❌ Error: plan.md not found. Run /sp.plan first to generate planning artifacts."
- Exit gracefully without creating any ADRs

If create-adr.sh fails:

- Display exact error message
- Skip that ADR and continue with others
- Report partial completion at end

## TONE

Be thorough, analytical, and decision-focused. Emphasize the "why" behind each decision and its long-term implications.

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
  - `constitution` → `history/prompts/constitution/`
  - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
  - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
- Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
- If the script fails:
  - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
  - Allocate an ID; compute the output path based on stage from step 2; write the file
  - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.
|
Chatbot/.claude/commands/sp.analyze.md
ADDED
@@ -0,0 +1,210 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/sp.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/sp.analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements

### 4. Detection Passes (Token-Efficient Analysis)
|
| 77 |
+
|
| 78 |
+
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
|
| 79 |
+
|
| 80 |
+
#### A. Duplication Detection
|
| 81 |
+
|
| 82 |
+
- Identify near-duplicate requirements
|
| 83 |
+
- Mark lower-quality phrasing for consolidation
|
| 84 |
+
|
| 85 |
+
#### B. Ambiguity Detection
|
| 86 |
+
|
| 87 |
+
- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
|
| 88 |
+
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)
|
| 89 |
+
|
| 90 |
+
#### C. Underspecification
|
| 91 |
+
|
| 92 |
+
- Requirements with verbs but missing object or measurable outcome
|
| 93 |
+
- User stories missing acceptance criteria alignment
|
| 94 |
+
- Tasks referencing files or components not defined in spec/plan
|
| 95 |
+
|
| 96 |
+
#### D. Constitution Alignment
|
| 97 |
+
|
| 98 |
+
- Any requirement or plan element conflicting with a MUST principle
|
| 99 |
+
- Missing mandated sections or quality gates from constitution
|
| 100 |
+
|
| 101 |
+
#### E. Coverage Gaps
|
| 102 |
+
|
| 103 |
+
- Requirements with zero associated tasks
|
| 104 |
+
- Tasks with no mapped requirement/story
|
| 105 |
+
- Non-functional requirements not reflected in tasks (e.g., performance, security)
|
| 106 |
+
|
| 107 |
+
#### F. Inconsistency
|
| 108 |
+
|
| 109 |
+
- Terminology drift (same concept named differently across files)
|
| 110 |
+
- Data entities referenced in plan but absent in spec (or vice versa)
|
| 111 |
+
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
|
| 112 |
+
- Conflicting requirements (e.g., one requires Next.js while other specifies Vue)
|
| 113 |
+
|
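Pass B lends itself to simple pattern matching; a minimal sketch (the vague-term list mirrors the examples above, and the word-boundary regex is one illustrative choice):

```python
import re

VAGUE = ["fast", "scalable", "secure", "intuitive", "robust"]
PLACEHOLDER = re.compile(r"TODO|TKTK|\?\?\?|<placeholder>")

def ambiguity_findings(requirement):
    """Flag vague adjectives and unresolved placeholders in one requirement."""
    text = requirement.lower()
    findings = [f"vague term: {w}" for w in VAGUE
                if re.search(rf"\b{w}\b", text)]
    if PLACEHOLDER.search(requirement):
        findings.append("unresolved placeholder")
    return findings

hits = ambiguity_findings("The API must be fast and secure (latency TODO)")
```

A real pass would also check whether a measurable criterion appears nearby before flagging, to avoid false positives on phrases like "fast path".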
### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order

### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count

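The coverage metric follows directly from the task-coverage mapping built in step 3; a minimal sketch (requirement keys and task IDs here are hypothetical):

```python
def coverage_percent(requirements, task_map):
    """Percent of requirements with at least one mapped task.

    task_map: dict of requirement key -> list of task IDs.
    """
    if not requirements:
        return 100.0  # vacuously covered
    covered = sum(1 for r in requirements if task_map.get(r))
    return round(100.0 * covered / len(requirements), 1)

reqs = ["user-can-upload-file", "performance-metrics", "audit-logging"]
mapping = {"user-can-upload-file": ["T001", "T004"], "audit-logging": ["T009"]}
pct = coverage_percent(reqs, mapping)  # 2 of 3 requirements covered
```

Requirements left out of `mapping` (here, `performance-metrics`) are exactly the "zero associated tasks" findings from pass E.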
### 7. Provide Next Actions

At the end of the report, output a concise Next Actions block:

- If CRITICAL issues exist: Recommend resolving before `/sp.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /sp.specify with refinement", "Run /sp.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)

## Context

$ARGUMENTS

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
  - `constitution` → `history/prompts/constitution/`
  - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
  - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
- Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
- If the script fails:
  - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
  - Allocate an ID; compute the output path based on stage from step 2; write the file
  - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.
|
Chatbot/.claude/commands/sp.checklist.md
ADDED
@@ -0,0 +1,320 @@
| 1 |
+
---
|
| 2 |
+
description: Generate a custom checklist for the current feature based on user requirements.
|
| 3 |
+
---
|
| 4 |
+
|
| 5 |
+
## Checklist Purpose: "Unit Tests for English"
|
| 6 |
+
|
| 7 |
+
**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
|
| 8 |
+
|
| 9 |
+
**NOT for verification/testing**:
|
| 10 |
+
|
| 11 |
+
- ❌ NOT "Verify the button clicks correctly"
|
| 12 |
+
- ❌ NOT "Test error handling works"
|
| 13 |
+
- ❌ NOT "Confirm the API returns 200"
|
| 14 |
+
- ❌ NOT checking if code/implementation matches the spec
|
| 15 |
+
|
| 16 |
+
**FOR requirements quality validation**:
|
| 17 |
+
|
| 18 |
+
- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
|
| 19 |
+
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
|
| 20 |
+
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
|
| 21 |
+
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
|
| 22 |
+
- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)
|
| 23 |
+
|
| 24 |
+
**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
|
| 25 |
+
|
| 26 |
+
## User Input
|
| 27 |
+
|
| 28 |
+
```text
|
| 29 |
+
$ARGUMENTS
|
| 30 |
+
```
|
| 31 |
+
|
| 32 |
+
You **MUST** consider the user input before proceeding (if not empty).
|
| 33 |
+
|
| 34 |
+
## Execution Steps
|
| 35 |
+
|
| 36 |
+
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
|
| 37 |
+
- All file paths must be absolute.
|
| 38 |
+
   - For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:
   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify the probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only or by peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")

   Question formatting rules:
   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to A–E options maximum; omit the table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction is impossible:
   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow-ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if the user explicitly declines more.
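
   An initial question rendered per the rules above might look like this (options and wording illustrative):

   ```markdown
   Q1: Which focus areas should this checklist emphasize?

   | Option | Candidate | Why It Matters |
   |--------|-----------|----------------|
   | A | Security & auth | The spec touches credential flows |
   | B | API error handling | Several external calls lack failure requirements |
   | C | UX states | Loading/empty states are unspecified |
   ```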

3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
   - Derive the checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by the user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if it exists): Technical details, dependencies
   - tasks.md (if it exists): Implementation tasks

   **Context Loading Strategy**:
   - Load only the portions relevant to active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps are detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create the `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate a unique checklist filename:
     - Use a short, descriptive name based on the domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If the file exists, append to it
   - Number items sequentially starting from CHK001
   - Each `/sp.checklist` run creates a NEW file for a new domain (existing checklists are appended to, never overwritten)

   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
   - **Completeness**: Are all necessary requirements present?
   - **Clarity**: Are requirements unambiguous and specific?
   - **Consistency**: Do requirements align with each other?
   - **Measurability**: Can requirements be objectively verified?
   - **Coverage**: Are all scenarios/edge cases addressed?

   **Category Structure** - Group items by requirement quality dimensions:
   - **Requirement Completeness** (Are all necessary requirements documented?)
   - **Requirement Clarity** (Are requirements specific and unambiguous?)
   - **Requirement Consistency** (Do requirements align without conflicts?)
   - **Acceptance Criteria Quality** (Are success criteria measurable?)
   - **Scenario Coverage** (Are all flows/cases addressed?)
   - **Edge Case Coverage** (Are boundary conditions defined?)
   - **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
   - **Dependencies & Assumptions** (Are they documented and validated?)
   - **Ambiguities & Conflicts** (What needs clarification?)

   **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

   ❌ **WRONG** (Testing implementation):
   - "Verify landing page displays 3 episode cards"
   - "Test hover states work on desktop"
   - "Confirm logo click navigates home"

   ✅ **CORRECT** (Testing requirements quality):
   - "Are the exact number and layout of featured episodes specified?" [Completeness]
   - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
   - "Are hover state requirements consistent across all interactive elements?" [Consistency]
   - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
   - "Is the fallback behavior specified when the logo image fails to load?" [Edge Cases]
   - "Are loading states defined for asynchronous episode data?" [Completeness]
   - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

   **ITEM STRUCTURE**:
   Each item should follow this pattern:
   - Question format asking about requirement quality
   - Focus on what's WRITTEN (or not written) in the spec/plan
   - Include the quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
   - Reference the spec section `[Spec §X.Y]` when checking existing requirements
   - Use the `[Gap]` marker when checking for missing requirements

   **EXAMPLES BY QUALITY DIMENSION**:

   Completeness:
   - "Are error handling requirements defined for all API failure modes? [Gap]"
   - "Are accessibility requirements specified for all interactive elements? [Completeness]"
   - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

   Clarity:
   - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
   - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
   - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

   Consistency:
   - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
   - "Are card component requirements consistent between landing and detail pages? [Consistency]"

   Coverage:
   - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
   - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
   - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

   Measurability:
   - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
   - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

   **Scenario Classification & Coverage** (Requirements Quality Focus):
   - Check whether requirements exist for Primary, Alternate, Exception/Error, Recovery, and Non-Functional scenarios
   - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
   - If a scenario class is absent: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
   - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

   **Traceability Requirements**:
   - MINIMUM: ≥80% of items MUST include at least one traceability reference
   - Each item should reference a spec section `[Spec §X.Y]`, or use the markers `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
   - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"

   **Surface & Resolve Issues** (Requirements Quality Problems):
   Ask questions about the requirements themselves:
   - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
   - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
   - Assumptions: "Is the assumption of an 'always available podcast API' validated? [Assumption]"
   - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
   - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

   **Content Consolidation**:
   - Soft cap: if there are more than 40 raw candidate items, prioritize by risk/impact
   - Merge near-duplicates that check the same requirement aspect
   - If there are more than 5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

   **🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
   - ❌ Any item starting with "Verify", "Test", "Confirm", or "Check" + implementation behavior
   - ❌ References to code execution, user actions, or system behavior
   - ❌ "Displays correctly", "works properly", "functions as expected"
   - ❌ "Click", "navigate", "render", "load", "execute"
   - ❌ Test cases, test plans, QA procedures
   - ❌ Implementation details (frameworks, APIs, algorithms)

   **✅ REQUIRED PATTERNS** - These test requirements quality:
   - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
   - ✅ "Is [vague term] quantified/clarified with specific criteria?"
   - ✅ "Are requirements consistent between [section A] and [section B]?"
   - ✅ "Can [requirement] be objectively measured/verified?"
   - ✅ "Are [edge cases/scenarios] addressed in requirements?"
   - ✅ "Does the spec define [missing aspect]?"

6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If the template is unavailable, use: an H1 title, purpose/created meta lines, and `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
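
   When the template is unavailable, the fallback structure described here might look like this (title, date, and items illustrative):

   ```markdown
   # UX Requirements Checklist

   **Purpose**: Validate UX requirement quality before PR review
   **Created**: 2025-01-15

   ## Requirement Clarity

   - [ ] CHK001 - Is "prominent display" quantified with specific sizing/positioning? [Clarity, Spec §FR-4]

   ## Edge Case Coverage

   - [ ] CHK002 - Is fallback behavior defined when images fail to load? [Edge Case, Gap]
   ```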

7. **Report**: Output the full path to the created checklist, the item count, and a reminder that each run creates a new file. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/sp.checklist` invocation creates a checklist file with a short, descriptive name (appending if the file already exists). This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is the versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented, and are requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"

## Anti-Examples: What NOT To Do

**❌ WRONG - These test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - These test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: tests whether the system works correctly
- Correct: tests whether the requirements are written correctly
- Wrong: verification of behavior
- Correct: validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent-native tools when possible.

1) Determine Stage
   - Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
   - Generate Title: 3–7 words (slug for filename)
   - Route is automatically determined by stage:
     - `constitution` → `history/prompts/constitution/`
     - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
     - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent-native)
   - Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
   - Open the file and fill remaining placeholders (YAML + body), embedding the full PROMPT_TEXT (verbatim) and a concise RESPONSE_TEXT.
   - If the script fails:
     - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
     - Allocate an ID; compute the output path based on the stage from step 2; write the file
     - Fill placeholders and embed the full PROMPT_TEXT and a concise RESPONSE_TEXT

4) Validate + report
   - No unresolved placeholders; path under `history/prompts/` matches the stage; stage/title/date coherent; print ID + path + stage + title.
   - On failure: warn, don't block. Skip only for `/sp.phr`.
Chatbot/.claude/commands/sp.clarify.md
ADDED
@@ -0,0 +1,207 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
handoffs:
  - label: Build Technical Plan
    agent: sp.plan
    prompt: Create a plan for the spec. I am building with...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/sp.plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from the repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse the minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN` and `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/sp.specify` or verify the feature branch environment.
   - For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
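
Extracting the two path fields can be sketched in portable shell. The exact payload shape below is an assumption based on the field names listed above (the real script's output may differ); `sed` is used instead of `jq` to avoid an extra dependency, and it assumes a flat JSON object with no escaped quotes.

```shell
#!/bin/sh
# Illustrative payload; substitute the real output of check-prerequisites.sh.
OUT='{"FEATURE_DIR":"specs/001-chatbot","FEATURE_SPEC":"specs/001-chatbot/spec.md"}'

# Extract one string field from a flat JSON object (no nested objects, no escaped quotes).
json_field() {
  printf '%s' "$2" | sed -n 's/.*"'"$1"'":"\([^"]*\)".*/\1/p'
}

FEATURE_DIR=$(json_field FEATURE_DIR "$OUT")
FEATURE_SPEC=$(json_field FEATURE_SPEC "$OUT")

# Mirror the abort instruction above: bail out if parsing failed.
[ -n "$FEATURE_SPEC" ] || { echo "Could not parse paths; re-run /sp.specify" >&2; exit 1; }
echo "$FEATURE_SPEC"
```

For real payloads with nested structures, a proper JSON parser (`jq`, `python3 -c`) is the safer choice.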

2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition of Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change the implementation or validation strategy
   - The information is better deferred to the planning phase (note internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
     - A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
     - A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, and plan-level execution details (unless they block correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by an (Impact × Uncertainty) heuristic.

4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple-choice questions:
     - **Analyze all options** and determine the **most suitable option** based on:
       - Best practices for the project type
       - Common patterns in similar implementations
       - Risk reduction (security, performance, maintainability)
       - Alignment with any explicit project goals or constraints visible in the spec
     - Present your **recommended option prominently** at the top with clear reasoning (1–2 sentences explaining why this is the best choice).
     - Format as: `**Recommended:** Option [X] - <reasoning>`
     - Then render all options as a Markdown table:

       | Option | Description |
       |--------|-------------|
       | A | <Option A description> |
       | B | <Option B description> |
       | C | <Option C description> (add D/E as needed, up to 5) |
       | Short | Provide a different short answer (<=5 words) (include only if a free-form alternative is appropriate) |

     - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
   - For short-answer style (no meaningful discrete options):
     - Provide your **suggested answer** based on best practices and context.
     - Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
     - Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
   - After the user answers:
     - If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
     - Otherwise, validate that the answer maps to one option or fits the <=5 word constraint.
     - If ambiguous, ask for a quick disambiguation (this still counts toward the same question; do not advance).
     - Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
     - All critical ambiguities are resolved early (remaining queued items become unnecessary), OR
     - The user signals completion ("done", "good", "no more"), OR
     - You reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at the start, immediately report that there are no critical ambiguities.

5. Integration after EACH accepted answer (incremental update approach):
   - Maintain an in-memory representation of the spec (loaded once at the start) plus the raw file contents.
   - For the first integrated answer in this session:
     - Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
     - Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
     - Functional ambiguity → Update or add a bullet in Functional Requirements.
     - User interaction / actor distinction → Update User Stories or the Actors subsection (if present) with the clarified role, constraint, or scenario.
     - Data shape / entities → Update the Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
     - Non-functional constraint → Add/modify measurable criteria in the Non-Functional / Quality Attributes section (convert the vague adjective to a metric or explicit target).
     - Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create that subsection if the template provides a placeholder for it).
     - Terminology conflict → Normalize the term across the spec; retain the original only if necessary by adding `(formerly referred to as "X")` once.
   - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating it; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize the risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep the heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).
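
   The incremental updates above yield a spec section shaped like this (question and answer illustrative):

   ```markdown
   ## Clarifications

   ### Session 2025-01-15

   - Q: What is the maximum acceptable login latency? → A: 500 ms at p95
   ```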
|
| 152 |
+
|
| 153 |
+
6. Validation (performed after EACH write plus final pass):
|
| 154 |
+
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
|
| 155 |
+
- Total asked (accepted) questions ≤ 5.
|
| 156 |
+
- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
|
| 157 |
+
- No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
|
| 158 |
+
- Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
|
| 159 |
+
- Terminology consistency: same canonical term used across all updated sections.
|
| 160 |
+
|
| 161 |
+
7. Write the updated spec back to `FEATURE_SPEC`.
|
| 162 |
+
|
| 163 |
+
8. Report completion (after questioning loop ends or early termination):
|
| 164 |
+
- Number of questions asked & answered.
|
| 165 |
+
- Path to updated spec.
|
| 166 |
+
- Sections touched (list names).
|
| 167 |
+
- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
- If any Outstanding or Deferred remain, recommend whether to proceed to `/sp.plan` or run `/sp.clarify` again later post-plan.
- Suggested next command.

Behavior rules:

- If no meaningful ambiguities are found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If the spec file is missing, instruct the user to run `/sp.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions were asked due to full coverage, output a compact coverage summary (all categories Clear), then suggest advancing.
- If the quota is reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: $ARGUMENTS

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
   - Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
   - Generate Title: 3–7 words (slug for filename)
   - Route is automatically determined by stage:
     - `constitution` → `history/prompts/constitution/`
     - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
     - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
   - Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
   - Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
   - If the script fails:
     - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
     - Allocate an ID; compute the output path based on stage from step 2; write the file
     - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
   - No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
   - On failure: warn, don't block. Skip only for `/sp.phr`.

Chatbot/.claude/commands/sp.constitution.md
ADDED
@@ -0,0 +1,108 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
handoffs:
  - label: Build Specification
    agent: sp.specify
    prompt: Implement the feature specification based on the updated constitution. I want to build...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

Follow this execution flow:

1. Load the existing constitution template at `.specify/memory/constitution.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
   **IMPORTANT**: The user might require fewer or more principles than the template provides. If a number is specified, respect it while following the general template structure, and update the document accordingly.

2. Collect/derive values for placeholders:
   - If user input (conversation) supplies a value, use it.
   - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
   - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown, ask or mark TODO); `LAST_AMENDED_DATE` is today if changes are made, otherwise keep the previous value.
   - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
     - MAJOR: Backward-incompatible governance/principle removals or redefinitions.
     - MINOR: New principle/section added or materially expanded guidance.
     - PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose reasoning before finalizing.
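The bump rules above can be applied mechanically. A minimal sketch (the `bump_version` helper is illustrative, not part of the command):

```bash
# Illustrative helper: apply a MAJOR/MINOR/PATCH bump to a semver string.
bump_version() {
  kind="$1"
  old_ifs="$IFS"; IFS=.
  set -- $2                      # split "X.Y.Z" into positional parameters
  IFS="$old_ifs"
  major="$1"; minor="$2"; patch="$3"
  case "$kind" in
    MAJOR) echo "$((major + 1)).0.0" ;;
    MINOR) echo "${major}.$((minor + 1)).0" ;;
    PATCH) echo "${major}.${minor}.$((patch + 1))" ;;
    *)     echo "unknown bump type: $kind" >&2; return 1 ;;
  esac
}

bump_version MINOR 2.1.1   # a new principle was added → 2.2.0
```
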

3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
   - Preserve the heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non‑negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert prior checklist into active validations):
   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
   - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if the constitution adds/removes mandatory sections or constraints.
   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.

5. Produce a Sync Impact Report (prepend as an HTML comment at the top of the constitution file after update):
   - Version change: old → new
   - List of modified principles (old title → new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (✅ updated / ⚠ pending) with file paths
   - Follow-up TODOs if any placeholders were intentionally deferred.

6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - Version line matches the report.
   - Dates in ISO format YYYY-MM-DD.
   - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
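The first of these validations is mechanical. A sketch (the helper name and scratch path are illustrative):

```bash
# Illustrative check for step 6: report any [ALL_CAPS_IDENTIFIER] placeholder
# tokens that survived the draft. Prints nothing on a clean file.
unresolved_tokens() {
  grep -oE '\[[A-Z][A-Z0-9_]*\]' "$1" || true
}

# Demo on a scratch draft: one placeholder was left behind, so it is reported.
printf 'Project: [PROJECT_NAME]\nVersion: 1.2.0, ratified 2024-06-01\n' > /tmp/constitution-draft.md
unresolved_tokens /tmp/constitution-draft.md   # → [PROJECT_NAME]
```
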
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).

8. Output a final summary to the user with:
   - New version and bump rationale.
   - Any files flagged for manual follow-up.
   - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:

- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard-enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version decision steps.

If critical info is missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
   - Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
   - Generate Title: 3–7 words (slug for filename)
   - Route is automatically determined by stage:
     - `constitution` → `history/prompts/constitution/`
     - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
     - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
   - Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
   - Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
   - If the script fails:
     - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
     - Allocate an ID; compute the output path based on stage from step 2; write the file
     - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
   - No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
   - On failure: warn, don't block. Skip only for `/sp.phr`.

Chatbot/.claude/commands/sp.git.commit_pr.md
ADDED
@@ -0,0 +1,328 @@
---
description: An autonomous Git agent that intelligently executes git workflows to commit work and create a PR.
---

Your task is to intelligently execute git workflows to commit the work and create a PR, following your Principles.

# Agentic Git Workflow Agent

## Core Principle

You are an autonomous Git agent. Your job is to **fulfill the user's intent efficiently**. You have agency to:
- Analyze the current state independently
- Make intelligent decisions about the best workflow
- Execute steps without asking permission for each one
- Invoke the human validator only when the decision requires their judgment

The human is not a step-orchestrator. The human is an **intent-provider** and **decision validator**.

## Your Agency

You can autonomously:
✅ Analyze repository state
✅ Determine optimal branch strategy
✅ Generate meaningful commit messages based on code changes
✅ Create branches, commits, and push to remote
✅ Create PRs with intelligent titles and descriptions
✅ Detect and handle common errors

You CANNOT autonomously:
❌ Run long-running processes (servers, watchers, etc.)
❌ Execute code that blocks indefinitely
❌ Make changes outside the repo (create files elsewhere, etc.)
❌ Execute destructive commands without explicit approval

You invoke the human when:
🔴 The intent is ambiguous
🔴 Multiple equally valid strategies exist and you need to know their preference
🔴 You detect something risky or unexpected
🔴 The outcome differs significantly from what was requested
🔴 Any non-Git command would run indefinitely or block execution

## Phase 1: Context Gathering (Autonomous)

Start by understanding the complete situation:

```bash
git --version                        # Verify Git exists
git rev-parse --is-inside-work-tree  # Verify we're in a repo
git status --porcelain               # See what changed
git diff --stat                      # Quantify changes
git log --oneline -5                 # Recent history context
git rev-parse --abbrev-ref HEAD      # Current branch
git remote -v                        # Remote configuration
```

**CRITICAL:** Only run Git commands. Do not:
- Run `python main.py`, `npm start`, `make`, or other build/start scripts
- Execute anything that might be long-running or blocking
- Run tests, servers, or development tools

If Git is not available or this isn't a repo, **invoke human validator** with the problem.

## Phase 2: Analyze & Decide (Autonomous)

Based on the gathered context, **you decide** the optimal approach:

### Decision Tree:

**Are there uncommitted changes?**
- Yes → Continue to strategy decision
- No → Invoke human: "No changes detected. What would you like to commit?"

**What's the nature of changes?** (Analyze via `git diff`)
- New feature files → Feature branch strategy
- Tests only → Test/fix branch strategy
- Documentation → Docs branch strategy
- Mixed/refactor → Analysis-dependent

**What branch are we on?**
- `main` or `master` or protected branch → Must create feature branch
- Feature branch with tracking → Commit and optionally create/update PR
- Detached HEAD or unusual state → Invoke human

**What strategy is optimal?**

1. **If feature branch doesn't exist yet:**
   - Create feature branch from current base
   - Commit changes
   - Push with upstream tracking
   - Create PR to main/dev/appropriate base

2. **If feature branch exists with upstream:**
   - Commit to current branch
   - Push updates
   - Check if PR exists; create if not

3. **If on protected branch with changes:**
   - Create feature branch from current state
   - Move changes to new branch
   - Commit and push
   - Create PR
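The branch question above reduces to a small decision function. A sketch (in practice the input would come from `git rev-parse --abbrev-ref HEAD`, which prints `HEAD` when detached):

```bash
# Illustrative decision function: branch name in, chosen strategy out.
branch_strategy() {
  case "$1" in
    main|master) echo "create-feature-branch" ;;  # protected → move work to a new branch
    HEAD)        echo "invoke-human" ;;           # detached HEAD / unusual state
    *)           echo "commit-to-current" ;;      # existing feature branch
  esac
}

branch_strategy main       # → create-feature-branch
branch_strategy add-auth   # → commit-to-current
```
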
**Make this decision autonomously.** You don't need permission to decide—only when the choice itself is uncertain.

## Phase 3: Generate Intelligent Content (Autonomous)

### Branch Name
Analyze the changes to create a meaningful branch name:
```bash
git diff --name-only
```

Look at:
- Files changed (domain extraction)
- Commit intent (if user provided one)
- Repository conventions (existing branch names via `git branch -r`)

Generate a name that's:
- Descriptive (2-4 words)
- Follows existing conventions
- Reflects the actual change

Examples:
- `add-auth-validation` (from "Add login validation" + auth-related files)
- `fix-query-timeout` (from files in db/queries/)
- `docs-update-readme` (from README.md changes)

### Commit Message
Analyze the code diff and generate a conventional commit:

```
<type>(<scope>): <subject>

<body explaining why, not what>
```

- **type**: feat, fix, chore, refactor, docs, test (determined from change analysis)
- **scope**: Primary area affected
- **subject**: Imperative, what this commit does
- **body**: Why this change was needed

**Do not ask the user for a commit message.** Extract intent from:
- Their stated purpose (if provided)
- The code changes themselves
- File modifications
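One way to seed the `<type>` field is a path heuristic over the `git diff --name-only` output. A rough sketch (the real decision should also weigh the diff content, not just file names):

```bash
# Illustrative heuristic: guess a conventional-commit type from changed paths.
commit_type() {
  # $1: newline-separated file list, as printed by `git diff --name-only`.
  case "$1" in
    *test*) echo "test" ;;   # test files dominate
    *.md*)  echo "docs" ;;   # documentation-only change
    *)      echo "feat" ;;   # default; refine from the diff itself
  esac
}

commit_type "README.md"           # → docs
commit_type "src/auth/login.py"   # → feat
```
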
### PR Title & Description
Create automatically:
- **Title**: Based on the commit message or user intent
- **Description**:
  - What changed
  - Why it matters
  - Files affected
  - Related issues (if detectable)

## Phase 4: Execute (Autonomous)

Execute the workflow you decided:

```bash
git add .
git checkout -b <branch-name>        # or `git switch <branch-name>` if it already exists
git commit -m "<commit message>"
git push -u origin <branch-name>
gh pr create --title "<pr title>" --body "<pr description>"
```

Handle common errors autonomously:
- `git push` fails (auth/permission) → Report clearly, suggest manual push
- `gh` not available → Provide manual PR URL: `https://github.com/<owner>/<repo>/compare/<branch>`
- Merge conflicts → Stop and invoke human
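The manual-URL fallback can be derived from the origin remote. A sketch (assumes a GitHub-hosted `origin` in SSH or HTTPS form; the helper name is illustrative):

```bash
# Illustrative fallback: build the compare URL when `gh` is unavailable.
# In the agent, the inputs would come from `git remote get-url origin`
# and `git rev-parse --abbrev-ref HEAD`.
compare_url() {
  remote="$1" branch="$2"
  slug="${remote#*github.com?}"   # drop everything through "github.com:" or "github.com/"
  slug="${slug%.git}"
  echo "https://github.com/${slug}/compare/${branch}"
}

compare_url "git@github.com:acme/chatbot.git" add-auth-validation
# → https://github.com/acme/chatbot/compare/add-auth-validation
```
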
## Phase 5: Validate & Report (Conditional)

**After execution, evaluate the outcome:**

Compare your executed workflow against the user's original intent.

**If outcome matches intent:** ✅ Report success
```
✅ Workflow executed successfully:
• Branch: feature/add-auth-validation
• Commit: "feat(auth): add login validation"
• PR: https://github.com/...
```

**If outcome differs significantly:** 🔴 Invoke human validator
```
⚠️ Outcome differs from intent:
• Your intent: "Update documentation"
• Actual changes: 15 files modified, 3 new features detected

Does this reflect what you wanted? If not, what should I have done?
```

**If something was unexpected:** 🔴 Invoke human validator
```
⚠️ Unexpected state detected:
• On protected branch 'main'
• User provided intent but no files changed
• Branch already has open PR

What should I do?
```

## When to Invoke Human Validator

Use the `invoke_human` tool when:

### 1. Ambiguous Intent
**User said:** "Do the thing"
**You need:** Clarification on what "the thing" is

### 2. Risk Detected
**Scenario:** Changes affect core system, or branch already exists with different content
**Action:** Ask for confirmation: "I detected this might break X. Continue? [Y/n]"

### 3. Multiple Valid Strategies
**Scenario:** Could create new branch OR commit to existing, both valid
**Action:** Present the decision: "I can do [A] or [B]. Which do you prefer?"

### 4. Outcome Validation
**Scenario:** Workflow executed but results differ from intent
**Action:** Ask: "Does this match what you wanted?"

### 5. Environment Issues
**Scenario:** Git/GitHub not configured, credentials missing, unexpected state
**Action:** Explain the blocker and ask for guidance

## Format for Human Invocation

When you need to invoke the human validator, format clearly:

```
🔴 DECISION NEEDED

Situation: <What you're trying to do>
Problem/Options: <Why you need human input>

Option A: <First approach>
Option B: <Second approach>

What would you prefer? [A/B/other]
```

Or for validation:

```
✅ OUTCOME VALIDATION

I executed: <What I did>
Result: <What happened>

Does this match your intent? [Y/n]
If not, what should I have done?
```

## What You Decide Autonomously

✅ Branch strategy
✅ Branch naming
✅ Commit message generation
✅ PR creation
✅ Workflow execution (Git only)
✅ Error recovery (when possible)
✅ Reading files to analyze changes

## What You NEVER Do Autonomously

❌ Run servers, watchers, or development tools
❌ Execute build steps unless explicitly asked
❌ Run tests or other processes
❌ Execute anything that blocks or runs indefinitely
❌ Run commands outside of Git operations

## What Requires Human Input

🔴 Clarifying ambiguous intent
🔴 Choosing between equally valid strategies
🔴 Confirming risky actions
🔴 Validating outcomes that don't match intent
🔴 Resolving blockers

## Example Execution

**User Intent:** "I added email validation to the auth system"

**You (autonomous):**
1. Gather context → See auth files + validation logic changes
2. Decide → Create feature branch, conventional commit, PR to main
3. Generate → Branch: `add-email-validation`, Commit: "feat(auth): add email validation"
4. Execute → All steps without asking
5. Report → Show what was done + PR link
6. Validate → Check if outcome matches intent

**If something was off:**
- You executed correctly but sense it wasn't what they meant → Invoke validator
- They later say "Actually I meant..." → Update accordingly

## Philosophy

You are not a tool waiting for instructions. You are an agent fulfilling intent. The human provides direction; you provide execution. Invoke them only when you genuinely need their judgment, not for step-by-step choreography.

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
   - Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
   - Generate Title: 3–7 words (slug for filename)
   - Route is automatically determined by stage:
     - `constitution` → `history/prompts/constitution/`
     - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
     - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
   - Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
   - Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
   - If the script fails:
     - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
     - Allocate an ID; compute the output path based on stage from step 2; write the file
     - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
   - No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
   - On failure: warn, don't block. Skip only for `/sp.phr`.

Chatbot/.claude/commands/sp.implement.md
ADDED
@@ -0,0 +1,161 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
   - Scan all checklist files in the checklists/ directory
   - For each checklist, count:
     - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
     - Completed items: Lines matching `- [X]` or `- [x]`
     - Incomplete items: Lines matching `- [ ]`
   - Create a status table:

   ```text
   | Checklist   | Total | Completed | Incomplete | Status |
   |-------------|-------|-----------|------------|--------|
   | ux.md       | 12    | 12        | 0          | ✓ PASS |
   | test.md     | 8     | 5         | 3          | ✗ FAIL |
   | security.md | 6     | 6         | 0          | ✓ PASS |
   ```

   - Calculate overall status:
     - **PASS**: All checklists have 0 incomplete items
     - **FAIL**: One or more checklists have incomplete items

   - **If any checklist is incomplete**:
     - Display the table with incomplete item counts
     - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
     - Wait for user response before continuing
     - If the user says "no", "wait", or "stop", halt execution
     - If the user says "yes", "proceed", or "continue", proceed to step 3

   - **If all checklists are complete**:
     - Display the table showing all checklists passed
     - Automatically proceed to step 3
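The counting rules in step 2 map directly onto grep. A sketch for a single checklist file (the command iterates over everything under checklists/; the helper name and scratch path are illustrative):

```bash
# Illustrative counter for one checklist file, using step 2's line patterns.
count_items() {
  total=$(grep -cE '^- \[( |x|X)\]' "$1")
  completed=$(grep -cE '^- \[(x|X)\]' "$1")
  echo "total=$total completed=$completed incomplete=$((total - completed))"
}

# Demo on a scratch checklist: 3 items, 2 checked.
printf -- '- [x] reviewed\n- [ ] tested\n- [X] linted\n' > /tmp/ux.md
count_items /tmp/ux.md   # → total=3 completed=2 incomplete=1
```
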
3. Load and analyze the implementation context:
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios

4. **Project Setup Verification**:
   - **REQUIRED**: Create/verify ignore files based on actual project setup:

   **Detection & Creation Logic**:
   - Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):

   ```sh
   git rev-parse --git-dir 2>/dev/null
   ```

   - Check if Dockerfile* exists or Docker is in plan.md → create/verify .dockerignore
   - Check if .eslintrc* exists → create/verify .eslintignore
   - Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
   - Check if .prettierrc* exists → create/verify .prettierignore
   - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
   - Check if terraform files (*.tf) exist → create/verify .terraformignore
   - Check if .helmignore is needed (helm charts present) → create/verify .helmignore

   **If ignore file already exists**: Verify it contains essential patterns; append missing critical patterns only
   **If ignore file missing**: Create it with the full pattern set for the detected technology

   **Common Patterns by Technology** (from plan.md tech stack):
   - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
   - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
   - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
   - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
   - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
   - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
   - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
   - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
   - **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
   - **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
   - **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
   - **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
   - **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
   - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`

   **Tool-Specific Patterns**:
   - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
   - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
   - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
   - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
   - **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
| 100 |
+
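The conditional create/verify logic above can be sketched in shell; the helper name `ensure_ignore`, the demo directory, and the abbreviated pattern lists are illustrative, not part of the workflow:

```shell
# Create the ignore file if its trigger file exists, appending only
# patterns that are not already present (so re-runs are idempotent).
ensure_ignore() {
  trigger=$1; ignore_file=$2; shift 2
  [ -e "$trigger" ] || return 0              # tool not detected -> skip
  touch "$ignore_file"                        # create if missing
  for pattern in "$@"; do
    grep -qxF "$pattern" "$ignore_file" || printf '%s\n' "$pattern" >> "$ignore_file"
  done
}

cd "$(mktemp -d)"
echo '{}' > package.json                      # simulate a Node project
ensure_ignore package.json .prettierignore node_modules/ dist/ coverage/
ensure_ignore main.tf      .terraformignore .terraform/   # no *.tf here -> no-op
cat .prettierignore
```

Running it twice leaves the file unchanged, which matches the "append missing critical patterns only" rule for pre-existing ignore files.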
5. Parse tasks.md structure and extract:
   - **Task phases**: Setup, Tests, Core, Integration, Polish
   - **Task dependencies**: Sequential vs parallel execution rules
   - **Task details**: ID, description, file paths, parallel markers [P]
   - **Execution flow**: Order and dependency requirements
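A hypothetical tasks.md excerpt showing the structure this step parses (phase headers, task IDs, `[P]` parallel markers, completion checkboxes; the task names are invented for illustration):

```text
## Phase 2: Tests
- [ ] T004 [P] Contract test for POST /sessions in tests/contract/test_sessions.py
- [ ] T005 [P] Contract test for GET /sessions/{id} in tests/contract/test_sessions.py

## Phase 3: Core
- [X] T006 User model in src/models/user.py
- [ ] T007 SessionService in src/services/session_service.py (after T006)
```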
6. Execute implementation following the task plan:
   - **Phase-by-phase execution**: Complete each phase before moving to the next
   - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
   - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
   - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase completion before proceeding

7. Implementation execution rules:
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: Write tests for contracts, entities, and integration scenarios before implementing them
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
   - **Polish and validation**: Unit tests, performance optimization, documentation

8. Progress tracking and error handling:
   - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks, report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if implementation cannot proceed
   - **IMPORTANT**: For completed tasks, mark the task off as [X] in the tasks file.

9. Completion validation:
   - Verify all required tasks are completed
   - Check that implemented features match the original specification
   - Validate that tests pass and coverage meets requirements
   - Confirm the implementation follows the technical plan
   - Report final status with summary of completed work

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/sp.tasks` first to regenerate the task list.

---
As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
  - `constitution` → `history/prompts/constitution/`
  - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
  - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
- Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
- If the script fails:
  - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
  - Allocate an ID; compute the output path based on stage from step 2; write the file
  - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.
Chatbot/.claude/commands/sp.phr.md
ADDED
@@ -0,0 +1,195 @@
---
description: Record an AI exchange as a Prompt History Record (PHR) for learning and traceability.
---

# COMMAND: Record this AI exchange as a structured PHR artifact

## CONTEXT

The user has just completed work (or is requesting work) and needs to capture this exchange as a Prompt History Record (PHR) for:

- Learning and pattern recognition (spaced repetition)
- Team knowledge sharing and traceability
- Compliance and audit requirements
- Building a searchable corpus of effective prompts

**User's input to record:**

$ARGUMENTS

**CRITICAL**: The complete text above is the PROMPT to preserve verbatim. Do NOT truncate to first line only.

## YOUR ROLE

Act as a meticulous documentation specialist with expertise in:

- Knowledge management and organizational learning
- Software development lifecycle documentation
- Metadata extraction and classification
- Creating structured, searchable technical records

## QUICK OVERVIEW (strict)

After completing ANY work, automatically create a PHR:

1. **Detect work type**: constitution|spec|plan|tasks|implementation|debugging|refactoring|discussion|general
2. **Generate title**: 3-7 word descriptive title summarizing the work
3. **Capture context**: COMPLETE conversation (never truncate to summaries)
4. **Route correctly**:
   - Constitution work → `history/prompts/constitution/`
   - Feature-specific work → `history/prompts/<feature-name>/`
   - General work → `history/prompts/general/`
5. **Confirm**: Show "📝 PHR-NNNN recorded"

## OUTPUT STRUCTURE (with quick flywheel hooks)

Execute this workflow in 5 sequential steps, reporting progress after each:

## Step 1: Execute User's Request (if not already done)

If the user provided a task/question in $ARGUMENTS:

- Complete the requested work first
- Provide full response to user
- Then proceed to Step 2 to record the exchange

If you already completed work and user just wants to record it:

- Skip to Step 2

## Step 2: Determine Stage and Routing

Select ONE stage that best describes the work:

**Constitution** (→ `history/prompts/constitution/`):
- `constitution` - Defining quality standards, project principles

**Feature-specific** (→ `history/prompts/<feature-name>/` - requires feature context):
- `spec` - Creating feature specifications
- `plan` - Architecture design and technical approach
- `tasks` - Implementation breakdown with test cases
- `red` - Debugging, fixing errors, test failures
- `green` - Implementation, new features, passing tests
- `refactor` - Code cleanup, optimization
- `explainer` - Code explanations, documentation
- `misc` - Other feature-specific work

**General/Catch-all** (→ `history/prompts/general/`):
- `general` - General work not tied to a specific feature

## Step 3: Create PHR File

Generate a concise title (3-7 words) summarizing what was accomplished.

Call the PHR creation script with title and stage:

```bash
.specify/scripts/bash/create-phr.sh \
  --title "<your-generated-title>" \
  --stage <selected-stage> \
  [--feature <feature-slug>] \
  --json
```
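A minimal sketch of consuming the script's `--json` output in shell, assuming `jq` is available; the sample JSON literal mirrors the fields listed below and is not real script output:

```shell
# Stand-in for: json=$(.specify/scripts/bash/create-phr.sh ... --json)
json='{"id":"0042","path":"history/prompts/general/0042-fix-login.prompt.md","context":"general","stage":"general","feature":"none"}'

# Extract the fields this command needs with jq's raw-output mode.
id=$(printf '%s' "$json" | jq -r '.id')
path=$(printf '%s' "$json" | jq -r '.path')
stage=$(printf '%s' "$json" | jq -r '.stage')

echo "PHR-$id ($stage) -> $path"
```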
Parse the JSON output to get: `id`, `path`, `context`, `stage`, `feature`

**Routing is determined automatically:**
- `constitution` → `history/prompts/constitution/`
- Feature stages → `history/prompts/<feature-name>/`
- `general` → `history/prompts/general/`

## Step 4: Fill ALL Template Placeholders (Analyze→Measure)

Read the file at `path` from JSON output. Replace ALL {{PLACEHOLDERS}}:

**YAML Frontmatter:**

- `{{ID}}` → ID from JSON output
- `{{TITLE}}` → Your generated title
- `{{STAGE}}` → Selected stage
- `{{DATE_ISO}}` → Today (YYYY-MM-DD format)
- `{{SURFACE}}` → "agent"
- `{{MODEL}}` → Your model name or "unspecified"
- `{{FEATURE}}` → Feature from JSON or "none"
- `{{BRANCH}}` → Current branch name
- `{{USER}}` → Git user name or "unknown"
- `{{COMMAND}}` → "/sp.phr" or the command that triggered this
- `{{LABELS}}` → Extract key topics as ["topic1", "topic2", ...]
- `{{LINKS_SPEC}}`, `{{LINKS_TICKET}}`, `{{LINKS_ADR}}`, `{{LINKS_PR}}` → Relevant links or "null"
- `{{FILES_YAML}}` → List files modified/created, one per line with " - " prefix, or " - none"
- `{{TESTS_YAML}}` → List tests run/created, one per line with " - " prefix, or " - none"

**Content Sections:**

- `{{PROMPT_TEXT}}` → **THE COMPLETE $ARGUMENTS TEXT VERBATIM** (do NOT truncate to first line!)
- `{{RESPONSE_TEXT}}` → Brief summary of your response (1-3 sentences)
- `{{OUTCOME_IMPACT}}` → What was accomplished
- `{{TESTS_SUMMARY}}` → Tests run or "none"
- `{{FILES_SUMMARY}}` → Files modified or "none"
- `{{NEXT_PROMPTS}}` → Suggested next steps or "none"
- `{{REFLECTION_NOTE}}` → One key insight

Add short evaluation notes:

- **Failure modes observed:** Specify any issues encountered, such as ambiguous instructions, incomplete metadata, misrouted commands, or unexpected script errors. Example: "Prompt did not capture full user input; metadata field 'LABELS' was left blank."
- **Next experiment to improve prompt quality:** Suggest a concrete action to address the failure mode. Example: "Rephrase prompt to clarify required metadata fields," or "Test with a multi-line user input to ensure full capture."

**CRITICAL**: `{{PROMPT_TEXT}}` MUST be the FULL multiline user input from $ARGUMENTS above, not just the title or first line.
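A rough sketch of the substitution step in shell; the stand-in template and the hard-coded values are illustrative, and a real run would take them from the JSON output and git:

```shell
# Minimal stand-in for the PHR template file.
phr=$(mktemp)
printf 'id: {{ID}}\ntitle: {{TITLE}}\ndate: {{DATE_ISO}}\n' > "$phr"

# Fill each placeholder in place (-i.bak works with GNU and BSD sed).
sed -i.bak \
  -e "s/{{ID}}/0042/" \
  -e "s/{{TITLE}}/Fix login filtering/" \
  -e "s/{{DATE_ISO}}/$(date +%F)/" \
  "$phr"

# The validation rule below: no unresolved {{...}} may remain.
grep '{{' "$phr" && echo "unresolved placeholders remain" || echo "all placeholders filled"
```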
## Step 5: Report Completion

## FORMATTING REQUIREMENTS

Present results in this exact structure:

```
✅ Exchange recorded as PHR-{id} in {context} context
📁 {relative-path-from-repo-root}

Stage: {stage}
Feature: {feature or "none"}
Files modified: {count}
Tests involved: {count}

Acceptance Criteria (PASS only if all true)
- Full prompt preserved verbatim (no truncation)
- Stage and routing determined correctly
- Metadata fields populated; missing values noted explicitly
```

## ERROR HANDLING

If create-phr.sh fails:

1. Display the exact error message from script
2. Explain what went wrong in plain language
3. Provide specific corrective action with commands
4. Do NOT fail silently or hide errors

## TONE

Be professional, concise, and action-oriented. Focus on what was accomplished and what's next.

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
  - `constitution` → `history/prompts/constitution/`
  - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
  - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
- Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
- If the script fails:
  - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
  - Allocate an ID; compute the output path based on stage from step 2; write the file
  - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.
Chatbot/.claude/commands/sp.plan.md
ADDED
@@ -0,0 +1,115 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
  - label: Create Tasks
    agent: sp.tasks
    prompt: Break the plan into tasks
    send: true
  - label: Create Checklist
    agent: sp.checklist
    prompt: Create a checklist for the following domain...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
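The quoting rule in step 1 can be checked directly in a shell; the argument value is just an example:

```shell
# A single-quoted shell string cannot contain a literal apostrophe.
# Close the quote, emit an escaped \', then reopen: 'I'\''m Groot'
arg='I'\''m Groot'
echo "$arg"   # prints: I'm Groot
```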
|
| 25 |
+
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
|
| 26 |
+
|
| 27 |
+
3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
|
| 28 |
+
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
|
| 29 |
+
- Fill Constitution Check section from constitution
|
| 30 |
+
- Evaluate gates (ERROR if violations unjustified)
|
| 31 |
+
- Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
|
| 32 |
+
- Phase 1: Generate data-model.md, contracts/, quickstart.md
|
| 33 |
+
- Phase 1: Update agent context by running the agent script
|
| 34 |
+
- Re-evaluate Constitution Check post-design
|
| 35 |
+
|
| 36 |
+
4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
|
| 37 |
+
|
| 38 |
+
## Phases
|
| 39 |
+
|
| 40 |
+
### Phase 0: Outline & Research
|
| 41 |
+
|
| 42 |
+
1. **Extract unknowns from Technical Context** above:
|
| 43 |
+
- For each NEEDS CLARIFICATION → research task
|
| 44 |
+
- For each dependency → best practices task
|
| 45 |
+
- For each integration → patterns task
|
| 46 |
+
|
| 47 |
+
2. **Generate and dispatch research agents**:
|
| 48 |
+
|
| 49 |
+
```text
|
| 50 |
+
For each unknown in Technical Context:
|
| 51 |
+
Task: "Research {unknown} for {feature context}"
|
| 52 |
+
For each technology choice:
|
| 53 |
+
Task: "Find best practices for {tech} in {domain}"
|
| 54 |
+
```
|
| 55 |
+
|
| 56 |
+
3. **Consolidate findings** in `research.md` using format:
|
| 57 |
+
- Decision: [what was chosen]
|
| 58 |
+
- Rationale: [why chosen]
|
| 59 |
+
- Alternatives considered: [what else evaluated]
|
| 60 |
+
|
| 61 |
+
**Output**: research.md with all NEEDS CLARIFICATION resolved
|
| 62 |
+
|
| 63 |
+
### Phase 1: Design & Contracts
|
| 64 |
+
|
| 65 |
+
**Prerequisites:** `research.md` complete
|
| 66 |
+
|
| 67 |
+
1. **Extract entities from feature spec** → `data-model.md`:
|
| 68 |
+
- Entity name, fields, relationships
|
| 69 |
+
- Validation rules from requirements
|
| 70 |
+
- State transitions if applicable
|
| 71 |
+
|
| 72 |
+
2. **Generate API contracts** from functional requirements:
|
| 73 |
+
- For each user action → endpoint
|
| 74 |
+
- Use standard REST/GraphQL patterns
|
| 75 |
+
- Output OpenAPI/GraphQL schema to `/contracts/`
|
| 76 |
+
|
| 77 |
+
3. **Agent context update**:
|
| 78 |
+
- Run `.specify/scripts/bash/update-agent-context.sh claude`
|
| 79 |
+
- These scripts detect which AI agent is in use
|
| 80 |
+
- Update the appropriate agent-specific context file
|
| 81 |
+
- Add only new technology from current plan
|
| 82 |
+
- Preserve manual additions between markers
|
| 83 |
+
|
| 84 |
+
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
|
| 85 |
+
|
| 86 |
+
## Key rules
|
| 87 |
+
|
| 88 |
+
- Use absolute paths
|
| 89 |
+
- ERROR on gate failures or unresolved clarifications
|
| 90 |
+
|
| 91 |
+
---
|
| 92 |
+
|
| 93 |
+
As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.
|
| 94 |
+
|
| 95 |
+
1) Determine Stage
|
| 96 |
+
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general
|
| 97 |
+
|
| 98 |
+
2) Generate Title and Determine Routing:
|
| 99 |
+
- Generate Title: 3–7 words (slug for filename)
|
| 100 |
+
- Route is automatically determined by stage:
|
| 101 |
+
- `constitution` → `history/prompts/constitution/`
|
| 102 |
+
- Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
|
| 103 |
+
- `general` → `history/prompts/general/`
|
| 104 |
+
|
| 105 |
+
3) Create and Fill PHR (Shell first; fallback agent‑native)
|
| 106 |
+
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
|
| 107 |
+
- Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
|
| 108 |
+
- If the script fails:
|
| 109 |
+
- Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
|
| 110 |
+
- Allocate an ID; compute the output path based on stage from step 2; write the file
|
| 111 |
+
- Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT
|
| 112 |
+
|
| 113 |
+
4) Validate + report
|
| 114 |
+
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
|
| 115 |
+
- On failure: warn, don't block. Skip only for `/sp.phr`.
|
Chatbot/.claude/commands/sp.reverse-engineer.md
ADDED
@@ -0,0 +1,1612 @@
---
description: Reverse engineer a codebase into SDD-RI artifacts (spec, plan, tasks, intelligence)
---

You are executing a comprehensive codebase reverse engineering workflow to extract specifications, plans, tasks, and reusable intelligence from an existing implementation.

## Your Role: Archaeological Software Architect

You are a software archaeologist who thinks about codebases the way a paleontologist thinks about fossils: reconstructing complete organisms from fragments, inferring behavior from structure, understanding evolutionary pressures from design decisions.

**Your distinctive capability**: Reverse-engineering **intent from implementation**, extracting the specification that should have existed, discovering the reusable intelligence embedded (often unconsciously) in code.

---

## The Core Challenge

**Given**: A codebase path provided by the user (legacy, third-party, or undocumented)

**Produce**:
1. **spec.md**: The specification this codebase SHOULD have been built from
2. **plan.md**: The implementation plan that would produce this architecture
3. **tasks.md**: The task breakdown for systematic development
4. **intelligence-object.md**: The reusable intelligence (skills, patterns, architectural decisions)

**Why this matters**:
- Legacy codebases hold implicit knowledge that dies when developers leave
- Third-party code contains patterns worth extracting as skills
- Undocumented systems need specifications for maintenance and extension
- **Reverse specs enable regeneration**: with a spec, you can regenerate an improved implementation

---

## Phase 1: Codebase Reconnaissance (30-60 min)

### Step 1.1: Map the Territory

Run these discovery commands:

```bash
# Get high-level structure
tree -L 3 -d [codebase-path]

# Count files by type
find [codebase-path] -type f -name "*.py" | wc -l
find [codebase-path] -type f \( -name "*.ts" -o -name "*.js" \) | wc -l
find [codebase-path] -type f -name "*.go" | wc -l

# Find configuration files
find [codebase-path] -name "*.json" -o -name "*.yaml" -o -name "*.toml" -o -name ".env*" -o -name "Dockerfile"
```

### Step 1.2: Discover Entry Points

```bash
# Python entry points
grep -r "if __name__ == '__main__'" [codebase-path] --include="*.py"

# TypeScript/JavaScript entry points
grep -r "express()\|fastify()\|app.listen" [codebase-path] --include="*.ts" --include="*.js"

# Go entry points
grep -r "func main()" [codebase-path] --include="*.go"

# Java entry points
grep -r "public static void main" [codebase-path] --include="*.java"
```

### Step 1.3: Analyze Dependencies

```bash
# Python
cat [codebase-path]/requirements.txt [codebase-path]/setup.py [codebase-path]/pyproject.toml 2>/dev/null

# Node/TypeScript
cat [codebase-path]/package.json 2>/dev/null

# Go
cat [codebase-path]/go.mod 2>/dev/null

# Java
cat [codebase-path]/pom.xml [codebase-path]/build.gradle 2>/dev/null
```

### Step 1.4: Assess Test Coverage

```bash
# Find test files
find [codebase-path] -name "*test*" -o -name "*spec*" | head -20

# Identify test frameworks
grep -r "import.*pytest\|unittest\|jest\|mocha\|testing" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -10
```

### Step 1.5: Read Existing Documentation

```bash
# Find documentation files
find [codebase-path] -name "README*" -o -name "*.md" -o -name "docs" -type d

# List markdown files
find [codebase-path] -name "*.md" | head -10
```

**Read**: README.md, ARCHITECTURE.md, CONTRIBUTING.md (if they exist)

---

## Phase 2: Deep Analysis (4-6 hours)

Execute these six analysis dimensions systematically:

### Dimension 1: Intent Archaeology (2 hours)

**Goal**: Extract the WHAT and WHY

#### 1.1 System Purpose Inference

**Questions to ask yourself**:
- If this codebase disappeared, what would users lose?
- What's the "elevator pitch" for this system?
- What problem is so painful that this was built to solve it?

**Evidence to gather**:
- Read README, comments, docstrings for stated purpose
- Analyze entry points: what operations are exposed?
- Study data models: what entities are central?
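That evidence sweep can be mechanized with a couple of greps. A minimal illustrative sketch: the one-file fixture below stands in for [codebase-path], and its docstring/comment text is invented for the demo:

```shell
# Illustrative fixture standing in for [codebase-path]
src=$(mktemp -d)
cat > "$src/app.py" <<'EOF'
"""Order API: lets merchants create and track orders."""
def create_order(payload):
    # NOTE: inventory is reserved before payment on purpose
    return payload
EOF

# Stated purpose: module docstrings
grep -rn '"""' "$src" --include="*.py" | head -5
# Tacit purpose: purpose-bearing comments
grep -rn "NOTE:\|IMPORTANT:" "$src" --include="*.py"
```

Docstrings tend to state the intended purpose; NOTE/IMPORTANT comments often reveal the constraints that shaped it.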

#### 1.2 Functional Requirements Extraction

```bash
# Find API endpoints/routes
grep -r "route\|@app\|@get\|@post\|@put\|@delete\|router\." [codebase-path] --include="*.py" --include="*.ts" --include="*.js" | head -30

# Find public interfaces
grep -r "class.*public\|export class\|export function\|def.*public" [codebase-path] | head -30

# Find CLI commands
grep -r "argparse\|cobra\|click\|commander" [codebase-path] --include="*.py" --include="*.go" --include="*.js" | head -20
```

**For each interface discovered**:
- What operation does it perform?
- What inputs does it require?
- What outputs does it produce?
- What side effects occur?

#### 1.3 Non-Functional Requirements Detection

**Performance patterns**:
```bash
grep -r "cache\|redis\|memcached\|async\|await\|pool" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | wc -l
```

**Security patterns**:
```bash
grep -r "auth\|jwt\|bcrypt\|encrypt\|sanitize\|validate" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | wc -l
```

**Reliability patterns**:
```bash
grep -r "retry\|circuit.breaker\|fallback\|timeout" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | wc -l
```

**Observability patterns**:
```bash
grep -r "log\|logger\|metric\|trace\|monitor" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | wc -l
```

#### 1.4 Constraint Discovery

**External integrations**:
```bash
# Database connections
grep -r "postgresql\|mysql\|mongodb\|redis\|sqlite" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"

# External APIs
grep -r "http.get\|requests.post\|fetch\|axios\|http.Client" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -20

# Message queues
grep -r "kafka\|rabbitmq\|sqs\|pubsub\|queue" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"
```

---

### Dimension 2: Architectural Pattern Recognition (1.5 hours)

**Goal**: Identify the HOW (architectural decisions and design patterns)

#### 2.1 Layering Detection

```bash
# Look for common layer names
find [codebase-path] -type d \( -name "*controller*" -o -name "*service*" -o -name "*repository*" -o -name "*domain*" -o -name "*handler*" -o -name "*model*" \)

# Check directory structure for layers
ls -la [codebase-path]/
```

**Questions to ask**:
- Is there clear separation of concerns?
- What's the dependency flow? (UI → Service → Data)
- Are layers respected or violated?

#### 2.2 Design Pattern Identification

```bash
# Find pattern keywords in code
grep -r "Factory\|Builder\|Singleton\|Adapter\|Strategy\|Observer\|Command\|Decorator" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -20

# Find interface/abstract class definitions
grep -r "interface\|abstract class\|Protocol\|ABC" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -20
```

#### 2.3 Architectural Style Classification

**Check for MVC/MVP/MVVM**:
```bash
find [codebase-path] -type d \( -name "*view*" -o -name "*controller*" -o -name "*model*" \)
```

**Check for Hexagonal/Clean Architecture**:
```bash
find [codebase-path] -type d \( -name "*domain*" -o -name "*infrastructure*" -o -name "*application*" -o -name "*adapter*" \)
```

**Check for Event-Driven**:
```bash
grep -r "event\|emit\|publish\|subscribe\|listener\|handler" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | wc -l
```

**Check for CQRS**:
```bash
grep -r "command\|query\|CommandHandler\|QueryHandler" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"
```

#### 2.4 Data Flow Tracing

**Pick one representative operation and trace it**:
1. Find the entry point (route/handler)
2. Follow to the business logic (service/use-case)
3. Trace to the data layer (repository/DAO)
4. Document the flow
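Steps 1-3 can be approximated with greps before reading any code in depth. A hedged sketch: the two-file fixture and the `order_service` symbol are hypothetical stand-ins for a real handler-to-service chain:

```shell
# Illustrative two-file call chain standing in for a real codebase
src=$(mktemp -d)
printf 'def create_order(req):\n    return order_service(req)\n' > "$src/handler.py"
printf 'def order_service(req):\n    return req\n' > "$src/service.py"

SYMBOL=order_service
echo "defined in:"
grep -rln "def $SYMBOL" "$src"                     # where the operation is implemented
echo "called from:"
grep -rl "$SYMBOL(" "$src" | grep -v "service.py"  # who invokes it
```

In a real codebase, repeat with the next symbol in the chain until you reach the data layer, then write the flow down.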

---

### Dimension 3: Code Structure Decomposition (1 hour)

**Goal**: Break down the implementation into logical task units

#### 3.1 Module Inventory

```bash
# List all significant modules (exclude tests)
find [codebase-path] -name "*.py" -o -name "*.ts" -o -name "*.go" | grep -v test | sort

# Group by domain/feature
ls -d [codebase-path]/*/ | sort
```

#### 3.2 Responsibility Assignment

For each major module/package:
- What's its single responsibility?
- What other modules does it depend on?
- What modules depend on it?
- Could it be extracted as a standalone component?
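Import fan-in gives a rough first answer to the dependency questions above. A minimal sketch (Python-style `import` lines and a generated fixture; adapt the grep to the codebase's import syntax):

```shell
# Illustrative: which internal modules are imported most often?
src=$(mktemp -d)
printf 'import db\nimport auth\n' > "$src/orders.py"
printf 'import db\n' > "$src/auth.py"
printf '\n' > "$src/db.py"

# Fan-in per module: high counts suggest shared infrastructure
grep -rh "^import " "$src" --include="*.py" | sort | uniq -c | sort -rn
```

High fan-in modules are candidates for shared infrastructure; modules nothing imports are candidates for standalone extraction.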

#### 3.3 Integration Point Mapping

```bash
# External service calls
grep -rn "http.get\|requests.post\|fetch\|axios\|http.Client" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -20

# Database queries
grep -rn "SELECT\|INSERT\|UPDATE\|DELETE\|query\|execute\|find\|create\|save" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -20

# Queue/messaging
grep -rn "publish\|subscribe\|send_message\|consume\|produce" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"
```

#### 3.4 Cross-Cutting Concern Identification

**Logging**:
```bash
grep -r "logger\|log\." [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -10
```

**Error Handling**:
```bash
grep -r "try:\|catch\|except\|error\|Error" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -10
```

**Configuration**:
```bash
grep -r "config\|env\|settings\|getenv" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -10
```

---

### Dimension 4: Intelligence Extraction (1 hour)

**Goal**: Extract reusable intelligence (the patterns worth encoding as skills)

#### 4.1 Pattern Frequency Analysis

**Questions to ask**:
- What code patterns repeat 3+ times?
- What decisions are made consistently?
- What best practices are applied systematically?

**Look for**:
```bash
# Find repeated function/method names
grep -rh "def \|func \|function " [codebase-path] --include="*.py" --include="*.go" --include="*.ts" | sort | uniq -c | sort -rn | head -20
```

#### 4.2 Implicit Expertise Detection

**Find important comments** (they reveal tacit knowledge):
```bash
# Comments with keywords indicating critical knowledge
grep -rn "IMPORTANT:\|NOTE:\|WARNING:\|SECURITY:\|TODO:\|HACK:\|FIXME:" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" | head -30
```

#### 4.3 Architecture Decision Extraction

```bash
# Look for ADR-style documents
find [codebase-path] -name "*decision*" -o -name "*ADR*" -o -name "architecture.md"

# Look for significant comments about choices
grep -rn "chosen because\|decided to\|alternative\|tradeoff" [codebase-path] --include="*.py" --include="*.ts" --include="*.go" --include="*.md"
```

#### 4.4 Skill Candidate Identification

**Identify patterns worth encoding as Persona + Questions + Principles**:

Common candidates:
- Error handling strategy (if consistent across modules)
- API design patterns (REST conventions, response formats)
- Data validation approach (schema validation patterns)
- Security patterns (auth middleware, input sanitization)
- Performance optimization (caching strategies, query optimization)

**For each candidate**:
1. Extract the pattern (what's done consistently)
2. Infer the reasoning (why this approach)
3. Identify decision points (what questions guide choices)
4. Formulate it as a P+Q+P skill
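Step 4 can end in a concrete artifact: one file per skill with the three P+Q+P sections. A sketch whose file name and wording are illustrative, not extracted from any real codebase:

```shell
# Illustrative: scaffold one skill file per extracted pattern
skills=$(mktemp -d)
cat > "$skills/error-handling.md" <<'EOF'
# Skill: Error Handling Strategy

## Persona
You apply this codebase's failure discipline: fail fast at boundaries, wrap errors internally.

## Questions
- Is this error recoverable here, or should it propagate?
- What context must the log line carry for debugging?

## Principles
- Never swallow exceptions silently
- Attach a correlation ID to every logged error
EOF

grep -c "^## " "$skills/error-handling.md"  # counts the three P+Q+P sections
```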

---

### Dimension 5: Gap Analysis & Technical Debt (0.5 hours)

**Goal**: Identify what SHOULD be there but is missing

#### 5.1 Missing Documentation

```bash
# Check for API documentation
find [codebase-path] -name "openapi.*" -o -name "swagger.*" -o -name "api.md"

# Check for data model docs
find [codebase-path] -name "schema.*" -o -name "models.md" -o -name "ERD.*"
```

#### 5.2 Testing Gaps

```bash
# Calculate test file ratio
total_files=$(find [codebase-path] -name "*.py" -o -name "*.ts" -o -name "*.go" | wc -l)
test_files=$(find [codebase-path] -name "*test*" -o -name "*spec*" | wc -l)
echo "Test coverage: $test_files / $total_files files"
```

**If coverage tools are available**:
```bash
# Python
cd [codebase-path] && pytest --cov=. --cov-report=term 2>/dev/null

# TypeScript/JavaScript
cd [codebase-path] && npm test -- --coverage 2>/dev/null

# Go
cd [codebase-path] && go test -cover ./... 2>/dev/null
```

#### 5.3 Security Audit

**Potential security issues**:
```bash
# Code injection risks
grep -rn "eval\|exec\|system\|shell" [codebase-path] --include="*.py" --include="*.js"

# Hardcoded secrets
grep -rn "password.*=.*\"\|api_key.*=.*\"\|secret.*=.*\"" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"

# SQL injection risks
grep -rn "execute.*%\|query.*format\|SELECT.*+" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"
```

#### 5.4 Observability Gaps

**Check for**:
- Structured logging (JSON format)
- Metrics collection (Prometheus, StatsD)
- Distributed tracing (OpenTelemetry, Jaeger)
- Health check endpoints

```bash
# Structured logging
grep -r "json\|structured" [codebase-path] --include="*log*"

# Metrics
grep -r "prometheus\|statsd\|metric" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"

# Tracing
grep -r "trace\|span\|opentelemetry" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"

# Health checks
grep -rn "/health\|/ready\|/alive" [codebase-path] --include="*.py" --include="*.ts" --include="*.go"
```

---

### Dimension 6: Regeneration Blueprint (30 min)

**Goal**: Ensure the specs can regenerate this system (or an improved version)

#### 6.1 Specification Completeness Check

**Ask yourself**:
- Can another developer read my spec and build an equivalent system?
- Are all architectural decisions documented with rationale?
- Are success criteria measurable and testable?

#### 6.2 Reusability Assessment

**Identify**:
- What components are reusable as-is?
- What patterns should become skills?
- What should be generalized vs kept specific?

#### 6.3 Improvement Opportunities

**If rebuilding from scratch, what would you change?**:
- Technical debt to avoid replicating
- Modern alternatives to outdated dependencies
- Missing features to add
- Architecture improvements (event sourcing, CQRS, etc.)

---

## Phase 3: Synthesis & Documentation (2-3 hours)

### Output 1: spec.md

Create a comprehensive specification with these sections:

```markdown
# [System Name] Specification

**Version**: 1.0 (Reverse Engineered)
**Date**: [Date]
**Source**: [Codebase path]

## Problem Statement

[What problem does this solve? Inferred from code purpose]

## System Intent

**Target Users**: [Who uses this system?]

**Core Value Proposition**: [Why does this exist instead of alternatives?]

**Key Capabilities**:
- [Capability 1 from functional analysis]
- [Capability 2]
- [Capability 3]

## Functional Requirements

### Requirement 1: [Operation Name]
- **What**: [What this operation does]
- **Why**: [Business justification - inferred]
- **Inputs**: [Required data/parameters]
- **Outputs**: [Results produced]
- **Side Effects**: [Database changes, external calls, etc.]
- **Success Criteria**: [How to verify correct behavior]

[Repeat for all major operations discovered]

## Non-Functional Requirements

### Performance
[Observed patterns: caching, async, connection pooling]
**Target**: [If metrics found in code/comments]

### Security
[Auth mechanisms, input validation, encryption observed]
**Standards**: [Compliance patterns detected]

### Reliability
[Retry logic, circuit breakers, graceful degradation]
**SLA**: [If defined in code/comments]

### Scalability
[Horizontal/vertical scaling patterns observed]
**Load Capacity**: [If defined]

### Observability
[Logging, metrics, tracing implemented]
**Monitoring**: [What's monitored]

## System Constraints

### External Dependencies
- [Database: PostgreSQL 14+]
- [Cache: Redis 6+]
- [Message Queue: RabbitMQ]
- [External API: Stripe for payments]

### Data Formats
- [JSON for API requests/responses]
- [Protocol Buffers for internal service communication]

### Deployment Context
- [Docker containers on Kubernetes]
- [Environment: AWS EKS]

### Compliance Requirements
- [GDPR: Personal data handling patterns observed]
- [PCI-DSS: Payment data security patterns]

## Non-Goals & Out of Scope

**Explicitly excluded** (inferred from missing implementation):
- [Feature X: No evidence in codebase]
- [Integration Y: Stub code suggests planned but not implemented]

## Known Gaps & Technical Debt

### Gap 1: [Issue Name]
- **Issue**: [Specific problem]
- **Evidence**: [file:line reference]
- **Impact**: [Consequences]
- **Recommendation**: [How to fix]

[Continue for all gaps]

## Success Criteria

### Functional Success
- [ ] All API endpoints return correct responses for valid inputs
- [ ] All error cases handled gracefully
- [ ] All integrations with external systems work correctly

### Non-Functional Success
- [ ] Response time < [X]ms for [operation]
- [ ] System handles [Y] concurrent users
- [ ] [Z]% test coverage achieved
- [ ] Zero critical security vulnerabilities

## Acceptance Tests

### Test 1: [Scenario]
**Given**: [Initial state]
**When**: [Action]
**Then**: [Expected outcome]

[Continue for critical scenarios]
```

---

### Output 2: plan.md

Create the implementation plan:

````markdown
# [System Name] Implementation Plan

**Version**: 1.0 (Reverse Engineered)
**Date**: [Date]

## Architecture Overview

**Architectural Style**: [MVC, Hexagonal, Event-Driven, etc.]

**Reasoning**: [Why this pattern fits the requirements - inferred from structure]

**Diagram** (ASCII):
```
[Visual representation of architecture]
```

## Layer Structure

### Layer 1: [Presentation/API Layer]
- **Responsibility**: [Handle HTTP requests, input validation, response formatting]
- **Components**:
  - [controllers/]: Request handlers
  - [middleware/]: Auth, logging, error handling
- **Dependencies**: → Service Layer
- **Technology**: [Flask, Express, Gin]

### Layer 2: [Business Logic/Service Layer]
- **Responsibility**: [Core business rules, orchestration]
- **Components**:
  - [services/]: Business logic implementations
  - [domain/]: Domain models
- **Dependencies**: → Data Layer, → External Services
- **Technology**: [Python classes, TypeScript services]

### Layer 3: [Data/Persistence Layer]
- **Responsibility**: [Data access, persistence]
- **Components**:
  - [repositories/]: Data access objects
  - [models/]: ORM models
- **Dependencies**: → Database
- **Technology**: [SQLAlchemy, Prisma, GORM]

## Design Patterns Applied

### Pattern 1: [Factory Method]
- **Location**: [services/user_factory.py]
- **Purpose**: [Create different user types based on role]
- **Implementation**: [Brief code example or description]

### Pattern 2: [Repository Pattern]
- **Location**: [repositories/]
- **Purpose**: [Abstract data access from business logic]
- **Implementation**: [Brief description]

[Continue for all significant patterns]

## Data Flow

### Request Flow (Synchronous)
1. **API Layer** receives HTTP request
2. **Validation Middleware** validates input schema
3. **Auth Middleware** verifies authentication
4. **Controller** routes to appropriate service
5. **Service Layer** executes business logic
6. **Repository** persists/retrieves data
7. **Service** formats response
8. **Controller** returns HTTP response

### Event Flow (Asynchronous) - if applicable
1. **Event Producer** emits event to queue
2. **Message Broker** routes to subscribers
3. **Event Handler** processes asynchronously
4. **Service** updates state
5. **Event** published for downstream consumers

## Technology Stack

### Language & Runtime
- **Primary**: [Python 3.11]
- **Rationale**: [Inferred - rapid development, rich ecosystem]

### Web Framework
- **Choice**: [Flask 2.x]
- **Rationale**: [Lightweight, flexible, good for APIs]

### Database
- **Choice**: [PostgreSQL 14]
- **Rationale**: [ACID compliance, JSON support, reliability]

### Caching
- **Choice**: [Redis 6]
- **Rationale**: [Performance, pub/sub capabilities]

### Message Queue - if applicable
- **Choice**: [RabbitMQ]
- **Rationale**: [Reliability, routing flexibility]

### Testing
- **Choice**: [pytest, Jest]
- **Rationale**: [Rich ecosystem, good DX]

### Deployment
- **Choice**: [Docker + Kubernetes]
- **Rationale**: [Portability, scalability, cloud-native]

## Module Breakdown

### Module: [authentication]
- **Purpose**: [User auth, session management]
- **Key Classes**: [AuthService, JWTHandler, UserRepository]
- **Dependencies**: [bcrypt, PyJWT, database]
- **Complexity**: Medium

### Module: [orders]
- **Purpose**: [Order processing, inventory]
- **Key Classes**: [OrderService, OrderRepository, InventoryService]
- **Dependencies**: [payment, notification, database]
- **Complexity**: High

[Continue for all major modules]

## Regeneration Strategy

### Option 1: Specification-First Rebuild
1. Start with spec.md (intent and requirements)
2. Apply extracted skills (error handling, API patterns)
3. Implement with modern best practices (fill gaps)
4. Test-driven development using acceptance criteria

**Timeline**: [Estimate based on codebase size]

### Option 2: Incremental Refactoring
1. **Strangler Pattern**: New implementation shadows old
2. **Feature Flags**: Gradual traffic shift
3. **Parallel Run**: Validate equivalence
4. **Cutover**: Complete migration

**Timeline**: [Estimate based on risk tolerance]

## Improvement Opportunities

### Technical Improvements
- [ ] **Replace [Old Library]** with [Modern Alternative]
  - **Rationale**: [Better performance, active maintenance]
  - **Effort**: Medium

- [ ] **Add [Missing Feature]**
  - **Addresses Gap**: [Specific gap from analysis]
  - **Effort**: High

### Architectural Improvements
- [ ] **Introduce Event Sourcing**
  - **Enables**: Audit trail, event replay, temporal queries
  - **Effort**: High

- [ ] **Implement CQRS**
  - **Separates**: Read and write models for optimization
  - **Effort**: Medium

### Operational Improvements
- [ ] **CI/CD Pipeline**: Automated testing, deployment
- [ ] **Infrastructure as Code**: Terraform, Pulumi
- [ ] **Monitoring Dashboards**: Grafana, DataDog
- [ ] **GitOps Deployment**: ArgoCD, Flux
````
|
| 748 |
+
|
| 749 |
+
---
|
| 750 |
+
|
| 751 |
+
### Output 3: tasks.md
|
| 752 |
+
|
| 753 |
+
Create actionable task breakdown:
|
| 754 |
+
|
| 755 |
+
```markdown
|
| 756 |
+
# [System Name] Implementation Tasks
|
| 757 |
+
|
| 758 |
+
**Version**: 1.0 (Reverse Engineered)
|
| 759 |
+
**Date**: [Date]
|
| 760 |
+
|
| 761 |
+
## Overview
|
| 762 |
+
|
| 763 |
+
This task breakdown represents how to rebuild this system from scratch using the specification and plan.
|
| 764 |
+
|
| 765 |
+
**Estimated Timeline**: [X weeks based on team size]
|
| 766 |
+
**Team Size**: [Assumed team composition]
|
| 767 |
+
|
| 768 |
+
---
|
| 769 |
+
|
| 770 |
+
## Phase 1: Core Infrastructure
|
| 771 |
+
|
| 772 |
+
**Timeline**: Week 1
|
| 773 |
+
**Dependencies**: None
|
| 774 |
+
|
| 775 |
+
### Task 1.1: Project Setup
|
| 776 |
+
- [ ] Initialize repository with [language] project structure
|
| 777 |
+
- [ ] Configure build system: [tool]
|
| 778 |
+
- [ ] Setup dependency management: [requirements.txt, package.json, go.mod]
|
| 779 |
+
- [ ] Configure linting: [flake8, eslint, golangci-lint]
|
| 780 |
+
- [ ] Setup pre-commit hooks
|
| 781 |
+
- [ ] Create initial README
|
| 782 |
+
|
| 783 |
+
### Task 1.2: Configuration System
|
| 784 |
+
- [ ] Implement environment-based configuration
|
| 785 |
+
- [ ] Support: Environment variables, config files, secrets management
|
| 786 |
+
- [ ] Validation: Config schema validation on startup
|
| 787 |
+
- [ ] Defaults: Sensible defaults for local development
|
| 788 |
+

### Task 1.3: Logging Infrastructure
- [ ] Set up structured logging (JSON format)
- [ ] Configure log levels: DEBUG, INFO, WARN, ERROR
- [ ] Add request correlation IDs
- [ ] Integrate with [logging destination]

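A stdlib-only sketch of the structured logging described above; the `JsonFormatter` name and field set are assumptions (a library such as structlog or python-json-logger may be preferable in practice):

```python
import json
import logging
import uuid

# Minimal JSON formatter carrying a correlation-id field (illustrative sketch).
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        })

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A per-request correlation ID is attached via `extra`
logger.info("user created", extra={"correlation_id": str(uuid.uuid4())})
```
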
---

## Phase 2: Data Layer

**Timeline**: Weeks 2-3
**Dependencies**: Phase 1 complete

### Task 2.1: Database Design
- [ ] Design schema for entities: [User, Order, Product]
- [ ] Define relationships: [one-to-many, many-to-many]
- [ ] Add indexes for performance
- [ ] Document schema in [ERD tool]

### Task 2.2: ORM Setup
- [ ] Install and configure [SQLAlchemy, Prisma, GORM]
- [ ] Create model classes for all entities
- [ ] Implement relationships
- [ ] Add validation rules

### Task 2.3: Migration System
- [ ] Set up migration tool: [Alembic, Flyway, migrate]
- [ ] Create initial migration
- [ ] Document migration workflow
- [ ] Add migration tests

### Task 2.4: Repository Layer
- [ ] Implement repository pattern for each entity
- [ ] CRUD operations: Create, Read, Update, Delete
- [ ] Query methods: FindByX, ListByY
- [ ] Transaction management

---

## Phase 3: Business Logic Layer

**Timeline**: Weeks 4-6
**Dependencies**: Phase 2 complete

### Task 3.1: [Feature A - e.g., User Authentication]
- [ ] **Input validation**: Username/email, password strength
- [ ] **Processing logic**:
  - Hash password with bcrypt
  - Generate JWT token
  - Create user session
- [ ] **Error handling**: Duplicate user, invalid credentials
- [ ] **Output formatting**: User object + token
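
The hashing and verification steps above can be sketched with the standard library. This is an assumption-laden sketch, not the codebase's implementation: `hashlib.pbkdf2_hmac` stands in for bcrypt, and JWT issuance is omitted:

```python
import hashlib
import hmac
import os
from typing import Optional, Tuple

# Sketch only: pbkdf2_hmac stands in for bcrypt (which would be used in
# practice); function and parameter names are illustrative.
def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(hash_password(password, salt)[1], expected)
```
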

### Task 3.2: [Feature B - e.g., Order Processing]
- [ ] **Input validation**: Order items, quantities, payment info
- [ ] **Processing logic**:
  - Validate inventory availability
  - Calculate totals, taxes, shipping
  - Process payment via [Stripe]
  - Update inventory
  - Send confirmation
- [ ] **Error handling**: Insufficient inventory, payment failed
- [ ] **Output formatting**: Order confirmation

[Continue for all major features discovered]

---

## Phase 4: API/Interface Layer

**Timeline**: Weeks 7-8
**Dependencies**: Phase 3 complete

### Task 4.1: API Contract Definition
- [ ] Design RESTful endpoints: [list all routes]
- [ ] Define request schemas (OpenAPI/JSON Schema)
- [ ] Define response schemas
- [ ] Document error responses

### Task 4.2: Controller Implementation
- [ ] Implement route handlers
- [ ] Input validation middleware
- [ ] Auth middleware integration
- [ ] Error handling middleware

### Task 4.3: API Documentation
- [ ] Generate OpenAPI/Swagger docs
- [ ] Add usage examples
- [ ] Document authentication flow
- [ ] Create Postman collection

---

## Phase 5: Cross-Cutting Concerns

**Timeline**: Week 9
**Dependencies**: Phase 4 complete

### Task 5.1: Authentication & Authorization
- [ ] Implement JWT-based auth
- [ ] Role-based access control (RBAC)
- [ ] Token refresh mechanism
- [ ] Session management

### Task 5.2: Observability
- [ ] **Metrics**: Instrument with [Prometheus, StatsD]
  - Request rate, latency, error rate
  - Business metrics: Orders/min, Revenue/hour
- [ ] **Tracing**: Integrate [OpenTelemetry, Jaeger]
  - Distributed tracing across services
  - Performance bottleneck detection
- [ ] **Health Checks**:
  - `/health` - Liveness probe
  - `/ready` - Readiness probe
  - `/metrics` - Prometheus endpoint
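
The liveness/readiness distinction can be sketched framework-agnostically; the handler names and the single dependency check are illustrative assumptions:

```python
from typing import Tuple

# Illustrative sketch: liveness reports only that the process is up;
# readiness also checks required dependencies (e.g. the database).
def health() -> Tuple[dict, int]:
    return {"status": "alive"}, 200

def ready(db_ok: bool) -> Tuple[dict, int]:
    if db_ok:
        return {"status": "ready"}, 200
    # 503 tells the orchestrator to stop routing traffic to this instance
    return {"status": "degraded", "reason": "database unavailable"}, 503
```
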

### Task 5.3: Error Handling
- [ ] Global error handler
- [ ] Structured error responses
- [ ] Error logging with stack traces
- [ ] Error monitoring integration

### Task 5.4: Security Hardening
- [ ] Input sanitization
- [ ] SQL injection prevention
- [ ] XSS protection
- [ ] CSRF protection
- [ ] Rate limiting
- [ ] Security headers
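
Rate limiting, listed above, is commonly implemented as a token bucket. A minimal single-process sketch (class and parameter names are assumptions; production systems would typically share state via e.g. Redis):

```python
import time

# Minimal token-bucket rate limiter sketch (illustrative).
class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```
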

---

## Phase 6: External Integrations

**Timeline**: Week 10
**Dependencies**: Phase 4 complete

### Task 6.1: [Integration A - e.g., Payment Provider]
- [ ] API client implementation
- [ ] Retry logic with exponential backoff
- [ ] Circuit breaker pattern
- [ ] Webhook handling
- [ ] Error recovery
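
Retry with exponential backoff, listed above, can be sketched as a small helper (a library such as tenacity would usually be preferred; the names and delays here are illustrative):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

# Sketch: call fn, retrying transient failures with doubling delays
# (0.01s, 0.02s, 0.04s, ...) and re-raising after the final attempt.
def retry(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.01) -> T:
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
    raise AssertionError("unreachable for attempts >= 1")
```
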

### Task 6.2: [Integration B - e.g., Email Service]
- [ ] Template system
- [ ] Async sending (queue-based)
- [ ] Delivery tracking
- [ ] Bounce handling

[Continue for all external integrations]

---

## Phase 7: Testing & Quality

**Timeline**: Weeks 11-12
**Dependencies**: All phases complete

### Task 7.1: Unit Tests
- [ ] **Coverage target**: 80%+
- [ ] **Framework**: [pytest, Jest, testing package]
- [ ] Test all service methods
- [ ] Test all repositories
- [ ] Mock external dependencies
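
Mocking external dependencies, called for above, can be sketched with `unittest.mock`; the service and gateway names are hypothetical:

```python
from unittest.mock import Mock

# Hypothetical service under test: depends on an injected payment gateway.
class CheckoutService:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount_cents: int) -> str:
        return self.gateway.charge(amount_cents)

# Unit-test style: replace the external dependency with a Mock
gateway = Mock()
gateway.charge.return_value = "payment-id-1"
service = CheckoutService(gateway)
result = service.checkout(4200)
gateway.charge.assert_called_once_with(4200)
```
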

### Task 7.2: Integration Tests
- [ ] API endpoint tests
- [ ] Database integration tests
- [ ] External service integration tests (with mocks)
- [ ] Test database setup/teardown

### Task 7.3: End-to-End Tests
- [ ] Critical user journeys:
  - User registration → Login → Purchase → Logout
  - [Other critical flows]
- [ ] Test against staging environment
- [ ] Automated with [Selenium, Playwright, Cypress]

### Task 7.4: Performance Testing
- [ ] Load testing: [k6, Locust, JMeter]
- [ ] Stress testing: Find breaking points
- [ ] Endurance testing: Memory leaks, connection exhaustion
- [ ] Document performance baselines

### Task 7.5: Security Testing
- [ ] OWASP Top 10 vulnerability scan
- [ ] Dependency vulnerability scan
- [ ] Penetration testing (if budget allows)
- [ ] Security code review

---

## Phase 8: Deployment & Operations

**Timeline**: Week 13
**Dependencies**: Phase 7 complete

### Task 8.1: Containerization
- [ ] Write production Dockerfile
- [ ] Multi-stage build for optimization
- [ ] Non-root user for security
- [ ] Health check in container

### Task 8.2: Kubernetes Manifests
- [ ] Deployment manifest
- [ ] Service manifest
- [ ] ConfigMap for configuration
- [ ] Secret for sensitive data
- [ ] Ingress for routing
- [ ] HorizontalPodAutoscaler

### Task 8.3: CI/CD Pipeline
- [ ] GitHub Actions / GitLab CI / Jenkins
- [ ] Stages: Lint → Test → Build → Deploy
- [ ] Automated testing in pipeline
- [ ] Deployment to staging on merge to main
- [ ] Manual approval for production

### Task 8.4: Monitoring & Alerting
- [ ] Set up Grafana dashboards
- [ ] Configure alerts: error rate spikes, latency increases
- [ ] On-call rotation setup
- [ ] Runbook documentation

### Task 8.5: Documentation
- [ ] Architecture documentation
- [ ] API documentation
- [ ] Deployment runbook
- [ ] Troubleshooting guide
- [ ] Onboarding guide for new developers

---

## Phase 9: Post-Launch

**Timeline**: Ongoing
**Dependencies**: Production deployment

### Task 9.1: Monitoring & Incident Response
- [ ] Monitor production metrics
- [ ] Respond to alerts
- [ ] Conduct post-mortems for incidents
- [ ] Iterate on improvements

### Task 9.2: Feature Iterations
- [ ] Prioritize feature backlog
- [ ] Implement high-priority features
- [ ] A/B testing for new features
- [ ] Gather user feedback

### Task 9.3: Technical Debt Reduction
- [ ] Address P0 gaps: [from gap analysis]
- [ ] Address P1 gaps: [from gap analysis]
- [ ] Refactor based on learnings
- [ ] Update documentation
```

---

### Output 4: intelligence-object.md

Create the reusable intelligence extraction:

```markdown
# [System Name] Reusable Intelligence

**Version**: 1.0 (Extracted from Codebase)
**Date**: [Date]

## Overview

This document captures the reusable intelligence embedded in the codebase: patterns, decisions, and expertise worth preserving and applying to future projects.

---

## Extracted Skills

### Skill 1: [API Error Handling Strategy]

**Persona**: You are a backend engineer designing resilient APIs that fail gracefully and provide actionable error information.

**Questions to ask before implementing error handling**:
- What error categories exist in this system? (Client errors 4xx, server errors 5xx, network errors)
- Should errors be retryable or terminal?
- What information helps debugging without exposing security details?
- How do errors propagate through layers (API → Service → Data)?

**Principles**:
- **Never expose internal details**: Stack traces in development only, generic messages in production
- **Consistent error schema**: All errors follow the same structure: `{error: {code, message, details, request_id}}`
- **Log everything, return selectively**: Full context in logs, safe subset in the API response
- **Use HTTP status codes correctly**: 400 bad request, 401 unauthorized, 404 not found, 500 internal error
- **Provide request IDs**: Enable correlation between client errors and server logs

**Implementation Pattern** (observed in codebase):
```python
# Extracted from: [file: src/api/errors.py, lines 15-45]
from datetime import datetime

class APIError(Exception):
    """Base exception for all API errors"""

    def __init__(self, code: str, message: str, status: int = 400, details: dict = None):
        self.code = code
        self.message = message
        self.status = status
        self.details = details or {}

    def to_response(self):
        """Convert to JSON response format"""
        return {
            "error": {
                "code": self.code,
                "message": self.message,
                "details": self.details,
                "request_id": get_request_id(),  # provided by request-context middleware
                "timestamp": datetime.utcnow().isoformat()
            }
        }, self.status

# Usage pattern:
if not user:
    raise APIError(
        code="USER_NOT_FOUND",
        message="User with specified ID does not exist",
        status=404,
        details={"user_id": user_id}
    )
```

**When to apply**:
- All API endpoints
- Background jobs that report status
- Any system with external-facing interfaces

**Contraindications**:
- Internal services (may prefer exceptions without HTTP semantics)
- Real-time systems (error objects may be too heavy)

---

### Skill 2: [Database Connection Management]

**Persona**: You are a backend engineer optimizing database performance through connection pooling and lifecycle management.

**Questions to ask before implementing database access**:
- What's the connection lifecycle? (Per-request, per-application, pooled)
- How many concurrent connections does the application need?
- What happens on connection failure? (Retry, circuit breaker, fail fast)
- Should connections be long-lived or short-lived?

**Principles**:
- **Connection pooling is mandatory**: Never create a connection per request (overhead)
- **Pool size = 2 * CPU cores** (starting point; tune based on load)
- **Idle timeout prevents resource leaks**: Close unused connections after [X] minutes
- **Health checks detect stale connections**: Validate before use, not during the query
- **Graceful degradation**: Circuit breaker pattern when the database is unavailable

**Implementation Pattern** (observed in codebase):
```python
# Extracted from: [file: src/db/connection.py, lines 20-55]
from contextlib import contextmanager

from sqlalchemy import create_engine, pool
from sqlalchemy.orm import Session

# Connection pool configuration
engine = create_engine(
    DATABASE_URL,
    poolclass=pool.QueuePool,
    pool_size=10,        # Max connections in pool
    max_overflow=20,     # Additional connections beyond pool_size
    pool_timeout=30,     # Seconds to wait for a connection
    pool_recycle=3600,   # Recycle connections after 1 hour
    pool_pre_ping=True,  # Test connection before using
    echo=False           # Don't log SQL (production)
)

# Context manager for connection lifecycle
@contextmanager
def get_db_session():
    """Provide transactional scope around operations"""
    session = Session(bind=engine)
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

# Usage pattern:
with get_db_session() as session:
    user = session.query(User).filter_by(id=user_id).first()
    # Connection automatically returned to pool on context exit
```

**When to apply**:
- All database-backed applications
- Services with moderate-to-high traffic
- Long-running applications (not serverless functions)

**Contraindications**:
- Serverless/FaaS (use a connection per invocation)
- Very low-traffic applications (overhead not justified)

---

### Skill 3: [Input Validation Strategy]

**Persona**: You are a security-focused engineer preventing injection attacks and data corruption through systematic input validation.

**Questions to ask before implementing validation**:
- What are valid values for each input? (Type, range, format, length)
- Where does validation occur? (Client, API layer, business logic, database)
- What happens on validation failure? (400 error with details, silent rejection, sanitization)
- Are there domain-specific validation rules? (Email format, credit card format, etc.)

**Principles**:
- **Validate at boundaries**: The API layer validates all external input
- **Whitelist over blacklist**: Define allowed patterns, not forbidden ones
- **Fail loudly on invalid input**: Return clear error messages in dev/test, generic ones in production
- **Type validation first**: Check types before business rules
- **Schema-based validation**: Use JSON Schema, Pydantic, or Joi for declarative validation

**Implementation Pattern** (observed in codebase):
```python
# Extracted from: [file: src/api/validators.py, lines 10-60]
from pydantic import BaseModel, EmailStr, validator

class CreateUserRequest(BaseModel):
    """Validation schema for user creation"""
    email: EmailStr  # Email format validation
    username: str
    password: str
    age: int

    @validator('username')
    def username_alphanumeric(cls, v):
        """Username must be alphanumeric"""
        if not v.isalnum():
            raise ValueError('Username must contain only letters and numbers')
        if len(v) < 3 or len(v) > 20:
            raise ValueError('Username must be 3-20 characters')
        return v

    @validator('password')
    def password_strength(cls, v):
        """Password must meet strength requirements"""
        if len(v) < 8:
            raise ValueError('Password must be at least 8 characters')
        if not any(c.isupper() for c in v):
            raise ValueError('Password must contain an uppercase letter')
        if not any(c.isdigit() for c in v):
            raise ValueError('Password must contain a digit')
        return v

    @validator('age')
    def age_range(cls, v):
        """Age must be reasonable"""
        if v < 13 or v > 120:
            raise ValueError('Age must be between 13 and 120')
        return v

# Usage in API endpoint:
@app.post("/users")
def create_user(request: CreateUserRequest):  # Automatic validation
    # If we reach here, all validation passed
    user = UserService.create(request.dict())
    return user.to_dict()
```

**When to apply**:
- All API endpoints
- All user input (forms, file uploads, etc.)
- Configuration parsing
- External data imports

---

[Continue with more skills extracted from the codebase...]

---

## Architecture Decision Records (Inferred)

### ADR-001: Choice of [PostgreSQL over MongoDB]

**Status**: Accepted (inferred from implementation)

**Context**:
The system requires:
- ACID transactions for order processing
- Complex relational queries (joins across users, orders, products)
- Data integrity guarantees
- A mature ecosystem and tooling

**Decision**: Use PostgreSQL as the primary database

**Rationale** (inferred from code patterns):
1. **Evidence 1**: Heavy use of foreign key constraints suggests relational integrity is critical
   - Location: [src/db/models.py, lines 45-120]
   - Pattern: All entities have explicit FK relationships

2. **Evidence 2**: Transaction handling in order processing suggests ACID requirements
   - Location: [src/services/order_service.py, lines 200-250]
   - Pattern: Multiple updates wrapped in a single transaction

3. **Evidence 3**: Complex JOIN queries suggest the relational model fits the domain
   - Location: [src/repositories/order_repository.py, lines 80-150]
   - Pattern: Multi-table joins for order + user + product data

**Consequences**:

**Positive**:
- Strong data consistency guarantees
- Rich query capabilities (window functions, CTEs)
- JSON support for semi-structured data (best of both worlds)
- Excellent tool ecosystem (pgAdmin, monitoring, backups)

**Negative**:
- Vertical scaling limits (eventually)
- Schema migrations require planning
- Not ideal for unstructured data

**Alternatives Considered** (inferred):

**MongoDB**:
- **Rejected because**: Need for transactions and complex joins
- **Evidence**: No document-oriented patterns in the codebase

**MySQL**:
- **Rejected because**: PostgreSQL's superior JSON and full-text search
- **Could have worked**: Similar feature set for this use case

---

### ADR-002: [JWT-based Authentication over Session Cookies]

**Status**: Accepted (inferred from implementation)

**Context**:
The system needs:
- Stateless authentication (for horizontal scaling)
- Mobile app support (not browser-only)
- Microservices architecture (shared auth across services)

**Decision**: Use JWT tokens for authentication

**Rationale** (inferred from code patterns):
1. **Evidence 1**: No session storage implementation found
   - Location: Absence of a Redis/Memcached session store
   - Pattern: No session management code

2. **Evidence 2**: Token-based auth middleware
   - Location: [src/middleware/auth.py, lines 10-50]
   - Pattern: JWT decoding and validation

3. **Evidence 3**: Token refresh endpoint
   - Location: [src/api/auth.py, lines 100-130]
   - Pattern: Refresh token rotation

**Consequences**:

**Positive**:
- Stateless (no server-side session storage)
- Scales horizontally (no session affinity)
- Works across domains (CORS-friendly)
- Mobile-app compatible

**Negative**:
- Cannot revoke tokens before expiry (mitigated with short TTL + refresh tokens)
- Larger than session cookies (JWT payload in every request)
- Vulnerable if the secret key is compromised

**Mitigation Strategies** (observed):
- Short access token TTL (15 minutes)
- Refresh token rotation
- Token blacklist for logout (stored in Redis)

---

[Continue with more ADRs...]

---

## Code Patterns & Conventions

### Pattern 1: Repository Pattern for Data Access

**Observed in**: All data layer modules

**Structure**:
```python
from typing import Optional

class UserRepository:
    """Abstract data access for the User entity"""

    def find_by_id(self, user_id: int) -> Optional[User]:
        """Find user by ID"""
        pass

    def find_by_email(self, email: str) -> Optional[User]:
        """Find user by email"""
        pass

    def create(self, user_data: dict) -> User:
        """Create new user"""
        pass

    def update(self, user_id: int, updates: dict) -> User:
        """Update existing user"""
        pass

    def delete(self, user_id: int) -> bool:
        """Soft-delete user"""
        pass
```

**Benefits**:
- Decouples business logic from data access
- Testable (repositories can be mocked)
- Swappable implementations (SQL → NoSQL)

**When to apply**: All entity persistence

---

### Pattern 2: Service Layer for Business Logic

**Observed in**: All business logic modules

**Structure**:
```python
from typing import List

class OrderService:
    """Business logic for order processing"""

    def __init__(self, order_repo, inventory_service, payment_service):
        self.order_repo = order_repo
        self.inventory_service = inventory_service
        self.payment_service = payment_service

    def create_order(self, user_id: int, items: List[OrderItem]) -> Order:
        """
        Create an order with inventory validation and payment processing

        Steps:
        1. Validate inventory availability
        2. Calculate totals
        3. Process payment
        4. Create order record
        5. Update inventory
        6. Send confirmation
        """
        # Orchestration logic here
        pass
```

**Benefits**:
- Encapsulates business rules
- Coordinates multiple repositories/services
- Forms the transactional boundary

**When to apply**: All complex business operations

---

## Lessons Learned

### What Worked Well

1. **Clear layer separation**
   - Controllers stayed thin (routing only)
   - Services contained business logic
   - Repositories isolated data access
   - **Benefit**: Easy to test, easy to reason about

2. **Comprehensive input validation**
   - Schema-based validation at the API boundary
   - Early failure with clear error messages
   - **Benefit**: Prevented data corruption, improved debugging

3. **Structured logging**
   - JSON format with correlation IDs
   - Consistent log levels
   - **Benefit**: Effective debugging in production

### What Could Be Improved

1. **Missing integration tests**
   - Lots of unit tests, few integration tests
   - **Impact**: Bugs in component interactions not caught early
   - **Recommendation**: Add an integration test suite

2. **Inconsistent error handling**
   - Some modules use custom exceptions, others use generic ones
   - **Impact**: Harder to handle errors consistently
   - **Recommendation**: Standardize on one error handling strategy

3. **Undocumented API contracts**
   - No OpenAPI/Swagger documentation
   - **Impact**: Frontend developers had to read the code
   - **Recommendation**: Generate API docs from code

### What to Avoid in Future Projects

1. **Hardcoded configuration**
   - Some settings hardcoded instead of read from environment variables
   - **Why bad**: Requires code changes for deployment differences
   - **Alternative**: 12-factor app configuration

2. **Tight coupling to external services**
   - Direct API calls without an abstraction layer
   - **Why bad**: Hard to swap providers, hard to test
   - **Alternative**: Adapter pattern for external integrations

3. **Missing observability**
   - No metrics, basic logging only
   - **Why bad**: Blind to production issues
   - **Alternative**: Metrics + tracing + structured logs from day 1

---

## Reusability Assessment

### Components Reusable As-Is

1. **Error handling framework** → Portable to any API project
2. **Database connection pooling** → Portable to any DB-backed service
3. **JWT authentication middleware** → Portable to any auth scenario
4. **Input validation schemas** → Patterns reusable, specifics domain-dependent

### Patterns Worth Generalizing

1. **Repository pattern** → Create skill/template for any entity
2. **Service orchestration** → Create skill for multi-step business logic
3. **API error responses** → Create skill for consistent error handling

### Domain-Specific (Not Reusable)

1. **Order processing logic** → Specific to e-commerce domain
2. **Inventory management** → Specific to this business
3. **Payment integration** → Specific to Stripe, but pattern reusable
```

---

## Final Validation Checklist

Before submitting outputs, verify:

- [ ] **spec.md is complete**: Can the system be regenerated from the spec alone?
- [ ] **plan.md is coherent**: Does the architecture make sense given the requirements?
- [ ] **tasks.md is actionable**: Can a team execute it without additional guidance?
- [ ] **intelligence-object.md is reusable**: Can the skills apply to other projects?
- [ ] **All files cross-reference**: Does Spec → Plan → Tasks flow logically?
- [ ] **Evidence provided**: Is every claim backed by a code location (file:line)?
- [ ] **Gaps identified**: Are technical debt and improvements documented?
- [ ] **Regeneration viable**: Could you rebuild this system better with these artifacts?

---

## Self-Monitoring: Anti-Convergence for Archaeologists

**You tend to converge toward**:
- ✅ Surface-level analysis (reading code without understanding intent)
- ✅ Feature enumeration (listing WHAT without inferring WHY)
- ✅ Copy-paste specs (documenting the existing system instead of imagining the ideal one)
- ✅ Generic patterns (not extracting codebase-specific intelligence)

**Activate reasoning by asking**:
- "If I rewrote this from scratch, would my spec produce an equivalent system?"
- "What tacit knowledge is embedded in this code that isn't written down?"
- "Why did the original developers make these specific choices?"
- "What would I do differently if building this today?"

**Your reverse engineering succeeds when**:
- The spec is complete enough to regenerate the system
- The plan reveals architectural reasoning, not just structure
- The tasks are actionable for a new team unfamiliar with the codebase
- The intelligence extracted is reusable beyond this specific system
- Gaps are identified with a clear remediation path
- You can articulate WHY decisions were made, not just WHAT was implemented

---

## Output Location

Save all artifacts to:
```
[codebase-path]/docs/reverse-engineered/
├── spec.md
├── plan.md
├── tasks.md
└── intelligence-object.md
```

Or a user-specified location.

---

**Execute this reverse engineering workflow with reasoning mode activated. Your goal: extract the implicit knowledge from code into explicit specifications that enable regeneration and improvement.**

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent-native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
+
- `constitution` → `history/prompts/constitution/`
|
| 1599 |
+
- Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
|
| 1600 |
+
- `general` → `history/prompts/general/`
|
| 1601 |
+
|
| 1602 |
+
3) Create and Fill PHR (Shell first; fallback agent‑native)
|
| 1603 |
+
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
|
| 1604 |
+
- Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
|
| 1605 |
+
- If the script fails:
|
| 1606 |
+
- Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
|
| 1607 |
+
- Allocate an ID; compute the output path based on stage from step 2; write the file
|
| 1608 |
+
- Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT
|
| 1609 |
+
|
| 1610 |
+
4) Validate + report
|
| 1611 |
+
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
|
| 1612 |
+
- On failure: warn, don't block. Skip only for `/sp.phr`.
|
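The stage-to-directory routing in step 2 can be sketched as a small shell helper. This is a sketch only: the helper name and argument order are illustrative, not part of the toolkit.

```shell
# Illustrative helper: map a PHR stage (plus a feature name, for feature
# stages) to the directory the record should be written under.
phr_route() {
  stage="$1"; feature="$2"
  case "$stage" in
    constitution) echo "history/prompts/constitution/" ;;
    general)      echo "history/prompts/general/" ;;
    # spec, plan, tasks, red, green, refactor, explainer, misc all route
    # by feature name:
    *)            echo "history/prompts/$feature/" ;;
  esac
}
```

For example, `phr_route spec user-auth` yields `history/prompts/user-auth/`, which is the path the fallback in step 3 would compute before writing the file.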
Chatbot/.claude/commands/sp.specify.md
ADDED
@@ -0,0 +1,284 @@
---
description: Create or update the feature specification from a natural language feature description.
handoffs:
  - label: Build Technical Plan
    agent: sp.plan
    prompt: Create a plan for the spec. I am building with...
  - label: Clarify Spec Requirements
    agent: sp.clarify
    prompt: Clarify specification requirements
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

The text the user typed after `/sp.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.

Given that feature description, do this:

1. **Generate a concise short name** (2-4 words) for the branch:
   - Analyze the feature description and extract the most meaningful keywords
   - Create a 2-4 word short name that captures the essence of the feature
   - Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
   - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
   - Keep it concise but descriptive enough to understand the feature at a glance
   - Examples:
     - "I want to add user authentication" → "user-auth"
     - "Implement OAuth2 integration for the API" → "oauth2-api-integration"
     - "Create a dashboard for analytics" → "analytics-dashboard"
     - "Fix payment processing timeout bug" → "fix-payment-timeout"

2. **Check for existing branches before creating a new one**:

   a. First, fetch all remote branches to ensure we have the latest information:

   ```bash
   git fetch --all --prune
   ```

   b. Find the highest feature number across all sources for the short-name:
   - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
   - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
   - Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`

   c. Determine the next available number:
   - Extract all numbers from all three sources
   - Find the highest number N
   - Use N+1 for the new branch number

   d. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` with the calculated number and short-name:
   - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
   - Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
   - PowerShell example: `.specify/scripts/bash/create-new-feature.sh -Json -Number 5 -ShortName "user-auth" "Add user authentication"`

   **IMPORTANT**:
   - Check all three sources (remote branches, local branches, specs directories) to find the highest number
   - Only match branches/directories with the exact short-name pattern
   - If no existing branches/directories are found with this short-name, start with number 1
   - You must only ever run this script once per feature
   - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
   - The JSON output will contain BRANCH_NAME and SPEC_FILE paths
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
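The search in steps 2b-2c can be sketched as one helper that scans candidate names from all three sources on stdin. The function name is illustrative; the matching pattern follows the greps above, and zero-padded numbers are stripped to avoid octal arithmetic surprises.

```shell
# Illustrative helper: read candidate branch/directory names on stdin and
# print the next available feature number for a short-name (1 if none exist).
# Pipeline: keep "<number>-<short-name>" entries, extract the numeric
# prefix, strip zero-padding, then take the maximum.
next_feature_number() {
  short="$1"
  n=$(grep -E "(^|[ /])[0-9]+-$short\$" | grep -oE '[0-9]+' \
      | sed 's/^0*//' | sort -n | tail -n 1)
  echo $(( ${n:-0} + 1 ))
}
```

Feeding it the combined output of `git ls-remote --heads origin`, `git branch`, and a listing of `specs/` yields the `--number` value to pass to the script in step 2d.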

3. Load `.specify/templates/spec-template.md` to understand required sections.

4. Follow this execution flow:

   1. Parse the user description from Input
      If empty: ERROR "No feature description provided"
   2. Extract key concepts from the description
      Identify: actors, actions, data, constraints
   3. For unclear aspects:
      - Make informed guesses based on context and industry standards
      - Only mark with [NEEDS CLARIFICATION: specific question] if:
        - The choice significantly impacts feature scope or user experience
        - Multiple reasonable interpretations exist with different implications
        - No reasonable default exists
      - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
      - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
   4. Fill the User Scenarios & Testing section
      If no clear user flow: ERROR "Cannot determine user scenarios"
   5. Generate Functional Requirements
      Each requirement must be testable
      Use reasonable defaults for unspecified details (document assumptions in the Assumptions section)
   6. Define Success Criteria
      Create measurable, technology-agnostic outcomes
      Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
      Each criterion must be verifiable without implementation details
   7. Identify Key Entities (if data is involved)
   8. Return: SUCCESS (spec ready for planning)

5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.

6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:

   a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:

   ```markdown
   # Specification Quality Checklist: [FEATURE NAME]

   **Purpose**: Validate specification completeness and quality before proceeding to planning
   **Created**: [DATE]
   **Feature**: [Link to spec.md]

   ## Content Quality

   - [ ] No implementation details (languages, frameworks, APIs)
   - [ ] Focused on user value and business needs
   - [ ] Written for non-technical stakeholders
   - [ ] All mandatory sections completed

   ## Requirement Completeness

   - [ ] No [NEEDS CLARIFICATION] markers remain
   - [ ] Requirements are testable and unambiguous
   - [ ] Success criteria are measurable
   - [ ] Success criteria are technology-agnostic (no implementation details)
   - [ ] All acceptance scenarios are defined
   - [ ] Edge cases are identified
   - [ ] Scope is clearly bounded
   - [ ] Dependencies and assumptions identified

   ## Feature Readiness

   - [ ] All functional requirements have clear acceptance criteria
   - [ ] User scenarios cover primary flows
   - [ ] Feature meets measurable outcomes defined in Success Criteria
   - [ ] No implementation details leak into specification

   ## Notes

   - Items marked incomplete require spec updates before `/sp.clarify` or `/sp.plan`
   ```

   b. **Run Validation Check**: Review the spec against each checklist item:
   - For each item, determine whether it passes or fails
   - Document specific issues found (quote relevant spec sections)

   c. **Handle Validation Results**:

   - **If all items pass**: Mark the checklist complete and proceed to step 7

   - **If items fail (excluding [NEEDS CLARIFICATION])**:
     1. List the failing items and specific issues
     2. Update the spec to address each issue
     3. Re-run validation until all items pass (max 3 iterations)
     4. If still failing after 3 iterations, document the remaining issues in the checklist notes and warn the user

   - **If [NEEDS CLARIFICATION] markers remain**:
     1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
     2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
     3. For each clarification needed (max 3), present options to the user in this format:

     ```markdown
     ## Question [N]: [Topic]

     **Context**: [Quote relevant spec section]

     **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]

     **Suggested Answers**:

     | Option | Answer | Implications |
     |--------|--------|--------------|
     | A | [First suggested answer] | [What this means for the feature] |
     | B | [Second suggested answer] | [What this means for the feature] |
     | C | [Third suggested answer] | [What this means for the feature] |
     | Custom | Provide your own answer | [Explain how to provide custom input] |

     **Your choice**: _[Wait for user response]_
     ```

     4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
        - Use consistent spacing with pipes aligned
        - Each cell should have spaces around content: `| Content |` not `|Content|`
        - Header separator must have at least 3 dashes: `|--------|`
        - Test that the table renders correctly in markdown preview
     5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
     6. Present all questions together before waiting for responses
     7. Wait for the user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
     8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
     9. Re-run validation after all clarifications are resolved

   d. **Update Checklist**: After each validation iteration, update the checklist file with the current pass/fail status

7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/sp.clarify` or `/sp.plan`).

**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.

## Quick Guidelines

- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.

### Section Requirements

- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave it as "N/A")

### For AI Generation

When creating this spec from a user prompt:

1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use them only for critical decisions that:
   - Significantly impact feature scope or user experience
   - Have multiple reasonable interpretations with different implications
   - Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
   - Feature scope and boundaries (include/exclude specific use cases)
   - User types and permissions (if multiple conflicting interpretations are possible)
   - Security/compliance requirements (when legally/financially significant)

**Examples of reasonable defaults** (don't ask about these):

- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: RESTful APIs unless specified otherwise

### Success Criteria Guidelines

Success criteria must be:

1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from the user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details

**Good examples**:

- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"

**Bad examples** (implementation-focused):

- "API response time is under 200ms" (too technical; use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail; use a user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent-native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
  - `constitution` → `history/prompts/constitution/`
  - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
  - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent-native)
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
- Open the file and fill remaining placeholders (YAML + body), embedding the full PROMPT_TEXT (verbatim) and a concise RESPONSE_TEXT.
- If the script fails:
  - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
  - Allocate an ID; compute the output path based on the stage from step 2; write the file
  - Fill placeholders and embed the full PROMPT_TEXT and a concise RESPONSE_TEXT

4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matching the stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.
Chatbot/.claude/commands/sp.tasks.md
ADDED
@@ -0,0 +1,163 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
  - label: Analyze For Consistency
    agent: sp.analyze
    prompt: Run a project analysis for consistency
    send: true
  - label: Implement Project
    agent: sp.implement
    prompt: Start the implementation in phases
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse the FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
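Parsing the script's `--json` output can be sketched with a small POSIX helper. This is a sketch only: the helper name is illustrative, and it assumes the output is one-line JSON whose values are simple quoted strings (FEATURE_DIR is the key named in the step above).

```shell
# Illustrative helper: extract a top-level string field from one-line
# JSON such as the check-prerequisites.sh --json output.
json_field() {  # usage: json_field KEY "$json"
  printf '%s' "$2" | sed -n "s/.*\"$1\" *: *\"\([^\"]*\)\".*/\1/p"
}

out='{"FEATURE_DIR":"/repo/specs/001-chat-ui","AVAILABLE_DOCS":"plan.md spec.md"}'
json_field FEATURE_DIR "$out"   # prints /repo/specs/001-chat-ui
```

For anything beyond flat string fields, a real JSON parser (e.g. `jq`, if available) would be the safer choice.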

2. **Load design documents**: Read from FEATURE_DIR:
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
   - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
   - Note: Not all projects have all documents. Generate tasks based on what's available.

3. **Execute task generation workflow**:
   - Load plan.md and extract the tech stack, libraries, and project structure
   - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
   - If data-model.md exists: Extract entities and map them to user stories
   - If contracts/ exists: Map endpoints to user stories
   - If research.md exists: Extract decisions for setup tasks
   - Generate tasks organized by user story (see Task Generation Rules below)
   - Generate a dependency graph showing user story completion order
   - Create parallel execution examples per user story
   - Validate task completeness (each user story has all needed tasks and is independently testable)

4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as the structure, filled with:
   - Correct feature name from plan.md
   - Phase 1: Setup tasks (project initialization)
   - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
   - Phase 3+: One phase per user story (in priority order from spec.md)
   - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
   - Final Phase: Polish & cross-cutting concerns
   - All tasks must follow the strict checklist format (see Task Generation Rules below)
   - Clear file paths for each task
   - Dependencies section showing story completion order
   - Parallel execution examples per story
   - Implementation strategy section (MVP first, incremental delivery)

5. **Report**: Output the path to the generated tasks.md and a summary:
   - Total task count
   - Task count per user story
   - Parallel opportunities identified
   - Independent test criteria for each story
   - Suggested MVP scope (typically just User Story 1)
   - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)

Context for task generation: $ARGUMENTS

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.

## Task Generation Rules

**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.

**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if the user requests a TDD approach.

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```

**Format Components**:

1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if the task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
   - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
   - Setup phase: NO story label
   - Foundational phase: NO story label
   - User Story phases: MUST have a story label
   - Polish phase: NO story label
5. **Description**: Clear action with an exact file path

**Examples**:

- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
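The component rules above can be checked mechanically. A sketch of a single-line validator follows; the helper name is illustrative, and the "description with file path" requirement is only approximated by requiring a non-empty description.

```shell
# Illustrative helper: succeed iff a line matches the required task format
# "- [ ] T### [P]? [US#]? description".
is_valid_task() {
  printf '%s\n' "$1" | grep -qE '^- \[ \] T[0-9]{3}( \[P\])?( \[US[0-9]+\])? .+'
}
```

Running every line of a generated tasks.md through such a check is one way to perform the "Format validation" step of the report.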

### Task Organization

1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
   - Each user story (P1, P2, P3...) gets its own phase
   - Map all related components to their story:
     - Models needed for that story
     - Services needed for that story
     - Endpoints/UI needed for that story
     - If tests requested: Tests specific to that story
   - Mark story dependencies (most stories should be independent)

2. **From Contracts**:
   - Map each contract/endpoint to the user story it serves
   - If tests requested: Each contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
   - Map each entity to the user story(ies) that need it
   - If an entity serves multiple stories: Put it in the earliest story or the Setup phase
   - Relationships → service layer tasks in the appropriate story phase

4. **From Setup/Infrastructure**:
   - Shared infrastructure → Setup phase (Phase 1)
   - Foundational/blocking tasks → Foundational phase (Phase 2)
   - Story-specific setup → within that story's phase

### Phase Structure

- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
  - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
  - Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent-native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
  - `constitution` → `history/prompts/constitution/`
  - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
  - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent-native)
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
- Open the file and fill remaining placeholders (YAML + body), embedding the full PROMPT_TEXT (verbatim) and a concise RESPONSE_TEXT.
- If the script fails:
  - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
  - Allocate an ID; compute the output path based on the stage from step 2; write the file
  - Fill placeholders and embed the full PROMPT_TEXT and a concise RESPONSE_TEXT

4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matching the stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.
Chatbot/.claude/commands/sp.taskstoissues.md
ADDED
@@ -0,0 +1,56 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
1. From the executed script, extract the path to **tasks**.
1. Get the Git remote by running:

   ```bash
   git config --get remote.origin.url
   ```

> [!CAUTION]
> ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL

1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that corresponds to the Git remote.

> [!CAUTION]
> UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
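The single-quote escape recommended in step 1 can be verified directly in a POSIX shell:

```shell
# 'I'\''m Groot' closes the quote, emits a literal ', then reopens the quote.
msg='I'\''m Groot'
echo "$msg"   # prints: I'm Groot
```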

---

As the main request completes, you MUST create and complete a PHR (Prompt History Record) using agent‑native tools when possible.

1) Determine Stage
- Stage: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate Title and Determine Routing:
- Generate Title: 3–7 words (slug for filename)
- Route is automatically determined by stage:
  - `constitution` → `history/prompts/constitution/`
  - Feature stages → `history/prompts/<feature-name>/` (spec, plan, tasks, red, green, refactor, explainer, misc)
  - `general` → `history/prompts/general/`

3) Create and Fill PHR (Shell first; fallback agent‑native)
- Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
- Open the file and fill remaining placeholders (YAML + body), embedding full PROMPT_TEXT (verbatim) and concise RESPONSE_TEXT.
- If the script fails:
  - Read `.specify/templates/phr-template.prompt.md` (or `templates/…`)
  - Allocate an ID; compute the output path based on stage from step 2; write the file
  - Fill placeholders and embed full PROMPT_TEXT and concise RESPONSE_TEXT

4) Validate + report
- No unresolved placeholders; path under `history/prompts/` and matches stage; stage/title/date coherent; print ID + path + stage + title.
- On failure: warn, don't block. Skip only for `/sp.phr`.
Chatbot/.env
ADDED
@@ -0,0 +1 @@
DATABASE_URL=postgresql://neondb_owner:npg_O1mLbVXkfEY5@ep-broad-fog-a4ba5mi3-pooler.us-east-1.aws.neon.tech/neondb?sslmode=require
Chatbot/.pytest_cache/CACHEDIR.TAG
ADDED
@@ -0,0 +1,4 @@
Signature: 8a477f597d28d172789f06886806bc55
# This file is a cache directory tag created by pytest.
# For information about cache directory tags, see:
#	https://bford.info/cachedir/spec.html
Chatbot/.pytest_cache/README.md
ADDED
@@ -0,0 +1,8 @@
# pytest cache directory #

This directory contains data from the pytest's cache plugin,
which provides the `--lf` and `--ff` options, as well as the `cache` fixture.

**Do not** commit this to version control.

See [the docs](https://docs.pytest.org/en/stable/how-to/cache.html) for more information.
Chatbot/.pytest_cache/v/cache/lastfailed
ADDED
@@ -0,0 +1 @@
{}
Chatbot/.pytest_cache/v/cache/nodeids
ADDED
@@ -0,0 +1,111 @@
[
  "tests/integration/test_add_task_integration.py::test_data_isolation_different_users",
  "tests/integration/test_add_task_integration.py::test_multiple_tasks_same_user",
  "tests/integration/test_add_task_integration.py::test_task_created_with_correct_user_association",
  "tests/integration/test_add_task_integration.py::test_task_creation_persists_across_sessions",
  "tests/integration/test_add_task_integration.py::test_task_user_id_cannot_be_modified",
  "tests/integration/test_add_task_integration.py::test_task_without_description",
  "tests/integration/test_conversation_persistence.py::test_conversation_creation_and_retrieval",
  "tests/integration/test_conversation_persistence.py::test_conversation_update_timestamp",
  "tests/integration/test_conversation_persistence.py::test_conversation_with_messages_deletion",
  "tests/integration/test_conversation_persistence.py::test_multi_message_scenario",
  "tests/integration/test_conversation_persistence.py::test_multiple_conversations_per_user",
  "tests/integration/test_conversation_persistence.py::test_retrieval_order_chronological",
  "tests/integration/test_mcp_tool_integration.py::test_empty_list_handles_gracefully",
  "tests/integration/test_mcp_tool_integration.py::test_full_workflow_add_list_complete_update_delete",
  "tests/integration/test_mcp_tool_integration.py::test_idempotent_complete_task",
  "tests/integration/test_mcp_tool_integration.py::test_multiple_users_isolated_workflow",
  "tests/integration/test_mcp_tool_integration.py::test_update_preserves_completion_status",
  "tests/integration/test_message_persistence.py::test_cross_session_retrieval_chronological",
  "tests/integration/test_message_persistence.py::test_empty_conversation_start",
  "tests/integration/test_message_persistence.py::test_message_persistence_across_sessions",
  "tests/integration/test_message_persistence.py::test_message_roles_persist",
  "tests/integration/test_message_persistence.py::test_message_survives_server_restart_simulation",
  "tests/integration/test_message_persistence.py::test_message_tool_calls_persistence",
  "tests/integration/test_message_persistence.py::test_user_isolation_in_messages",
  "tests/security/test_user_isolation.py::test_add_task_only_creates_for_specified_user",
  "tests/security/test_user_isolation.py::test_complete_task_requires_user_ownership",
  "tests/security/test_user_isolation.py::test_delete_task_requires_user_ownership",
  "tests/security/test_user_isolation.py::test_empty_user_id_rejected",
  "tests/security/test_user_isolation.py::test_list_tasks_only_returns_user_own_tasks",
  "tests/security/test_user_isolation.py::test_multiple_users_cannot_see_each_others_tasks",
  "tests/security/test_user_isolation.py::test_not_found_does_not_leak_ownership",
  "tests/security/test_user_isolation.py::test_update_task_requires_user_ownership",
  "tests/security/test_user_isolation.py::test_whitespace_only_user_id_rejected",
  "tests/unit/test_add_task_error_handling.py::test_error_codes_are_constants",
  "tests/unit/test_add_task_error_handling.py::test_error_handling_database_error",
  "tests/unit/test_add_task_error_handling.py::test_error_handling_description_too_long",
  "tests/unit/test_add_task_error_handling.py::test_error_handling_empty_title",
  "tests/unit/test_add_task_error_handling.py::test_error_handling_missing_user_id",
  "tests/unit/test_add_task_error_handling.py::test_error_handling_title_too_long",
  "tests/unit/test_add_task_error_handling.py::test_error_handling_whitespace_only_title",
  "tests/unit/test_add_task_error_handling.py::test_error_response_format",
  "tests/unit/test_add_task_error_handling.py::test_multiple_validation_errors_first_one_returned",
  "tests/unit/test_add_task_error_handling.py::test_no_raw_database_errors_exposed",
  "tests/unit/test_add_task_tool.py::test_add_task_description_validation_too_long",
  "tests/unit/test_add_task_tool.py::test_add_task_max_description",
  "tests/unit/test_add_task_tool.py::test_add_task_max_title",
  "tests/unit/test_add_task_tool.py::test_add_task_minimal_title",
  "tests/unit/test_add_task_tool.py::test_add_task_returns_correct_format",
  "tests/unit/test_add_task_tool.py::test_add_task_special_characters_in_description",
  "tests/unit/test_add_task_tool.py::test_add_task_special_characters_in_title",
  "tests/unit/test_add_task_tool.py::test_add_task_title_validation_empty",
  "tests/unit/test_add_task_tool.py::test_add_task_title_validation_too_long",
  "tests/unit/test_add_task_tool.py::test_add_task_title_validation_too_short",
  "tests/unit/test_add_task_tool.py::test_add_task_user_association",
  "tests/unit/test_add_task_tool.py::test_add_task_valid_input",
  "tests/unit/test_add_task_tool.py::test_add_task_without_description",
  "tests/unit/test_complete_task_tool.py::test_complete_task_already_complete",
  "tests/unit/test_complete_task_tool.py::test_complete_task_filters_by_user_id",
  "tests/unit/test_complete_task_tool.py::test_complete_task_invalid_task_id",
  "tests/unit/test_complete_task_tool.py::test_complete_task_invalid_user_id",
  "tests/unit/test_complete_task_tool.py::test_complete_task_not_found",
  "tests/unit/test_complete_task_tool.py::test_complete_task_response_format",
  "tests/unit/test_complete_task_tool.py::test_complete_task_success",
  "tests/unit/test_complete_task_tool.py::test_complete_task_updates_completed_field",
  "tests/unit/test_conversation_model.py::test_conversation_creation",
  "tests/unit/test_conversation_model.py::test_conversation_fields",
  "tests/unit/test_conversation_model.py::test_conversation_relationships",
  "tests/unit/test_conversation_model.py::test_conversation_str_representation",
  "tests/unit/test_conversation_model.py::test_conversation_table_name",
  "tests/unit/test_conversation_model.py::test_conversation_timestamps_auto",
  "tests/unit/test_conversation_model.py::test_conversation_user_id_required",
  "tests/unit/test_delete_task_tool.py::test_delete_task_filters_by_user_id",
  "tests/unit/test_delete_task_tool.py::test_delete_task_invalid_task_id",
  "tests/unit/test_delete_task_tool.py::test_delete_task_invalid_user_id",
  "tests/unit/test_delete_task_tool.py::test_delete_task_not_found",
  "tests/unit/test_delete_task_tool.py::test_delete_task_ownership_verification",
  "tests/unit/test_delete_task_tool.py::test_delete_task_response_format",
  "tests/unit/test_delete_task_tool.py::test_delete_task_returns_title",
  "tests/unit/test_delete_task_tool.py::test_delete_task_success",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_all_tasks",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_completed_only",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_default_status_all",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_empty_list",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_filters_by_user_id",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_invalid_status",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_pending_only",
  "tests/unit/test_list_tasks_tool.py::test_list_tasks_response_format",
  "tests/unit/test_message_model.py::test_message_content_required",
  "tests/unit/test_message_model.py::test_message_creation",
  "tests/unit/test_message_model.py::test_message_fields",
  "tests/unit/test_message_model.py::test_message_long_content",
  "tests/unit/test_message_model.py::test_message_role_validation",
  "tests/unit/test_message_model.py::test_message_table_name",
  "tests/unit/test_message_model.py::test_message_timestamps_auto",
  "tests/unit/test_message_model.py::test_message_tool_calls_optional",
  "tests/unit/test_message_model.py::test_message_with_tool_calls",
  "tests/unit/test_update_task_tool.py::test_update_task_both_fields",
  "tests/unit/test_update_task_tool.py::test_update_task_description_only",
  "tests/unit/test_update_task_tool.py::test_update_task_empty_title",
  "tests/unit/test_update_task_tool.py::test_update_task_filters_by_user_id",
  "tests/unit/test_update_task_tool.py::test_update_task_invalid_description_length",
  "tests/unit/test_update_task_tool.py::test_update_task_invalid_task_id",
  "tests/unit/test_update_task_tool.py::test_update_task_invalid_title_length",
  "tests/unit/test_update_task_tool.py::test_update_task_invalid_user_id",
  "tests/unit/test_update_task_tool.py::test_update_task_neither_field",
  "tests/unit/test_update_task_tool.py::test_update_task_not_found",
  "tests/unit/test_update_task_tool.py::test_update_task_response_format",
  "tests/unit/test_update_task_tool.py::test_update_task_title_only",
  "tests/unit/test_update_task_tool.py::test_update_task_whitespace_only_title"
]
Chatbot/.pytest_cache/v/cache/stepwise
ADDED
@@ -0,0 +1 @@
[]
Chatbot/.specify/memory/constitution.md
ADDED
@@ -0,0 +1,274 @@
<!--
SYNC IMPACT REPORT
================================================================================
Version Change: None → 1.0.0
Rationale: Initial constitution ratification for Phase 3 - AI-Powered Todo Chatbot
================================================================================
Modified Principles: None (initial creation)
Added Sections:
- Vision Statement
- Core Architectural Principles (5 principles)
- Technology Stack (Non-Negotiable)
- Database Schema Requirements
- MCP Tools Specification (5 tools)
- Request Flow Architecture
- User Experience Standards
- Security Requirements
- Performance Requirements
- Testing Standards
- Spec-Driven Development Workflow
- Forbidden Practices
- Deliverables Checklist
- Success Metrics
- Version History
- Support & References
Removed Sections: None (initial creation)
Templates requiring updates:
✅ plan-template.md - Constitution Check section aligns with principles
✅ spec-template.md - Structure supports constitution requirements
✅ tasks-template.md - Task categorization reflects implementation workflow
✅ All other templates - No outdated references detected
Follow-up TODOs: None
================================================================================
-->

# Chatbot Phase 3 Constitution

## Core Principles

### I. Stateless Server Architecture
The server maintains NO state in memory. Every request must be independently processable, with all conversation state persisted in the PostgreSQL database. This enables horizontal scaling, fault tolerance, and resilience.

### II. MCP-First Tool Design
All task operations are exposed ONLY through MCP tools (5 total: add, list, complete, delete, update). Direct database calls from the chat endpoint are forbidden. This provides a standardized interface for AI agent interaction.

### III. AI Agent Orchestration
The OpenAI Agents SDK orchestrates all tool calls. The agent analyzes user intent and selects appropriate tools based on natural language understanding. No manual if-else routing in the chat endpoint.

### IV. Conversation Persistence
Every message is stored in the database with conversation context. Chat history survives server restarts and enables multi-device access. Conversation and Message tables have proper relationships. No in-memory conversation buffers.

### V. Security-First JWT Authentication
Every chat request must include a valid JWT token. User isolation and authorization are enforced through the Better Auth JWT integration from Phase 2. All MCP tools verify user_id from the token.

## Technology Stack (Non-Negotiable)

### Frontend
- **Framework**: OpenAI ChatKit (purpose-built for conversational AI interfaces)
- **Configuration**: Domain allowlist required for production

### Backend
- **Framework**: Python FastAPI (async support, modern Python, excellent OpenAPI integration)

### AI Layer
- **Framework**: OpenAI Agents SDK (official SDK with tool calling support)
- **Model**: GPT-4 or GPT-4-Turbo

### MCP Server
- **Framework**: Official MCP SDK (Python) (standard protocol for AI-tool communication)

### Database
- **Service**: Neon Serverless PostgreSQL (from Phase 2)
- **ORM**: SQLModel (async support, Pydantic integration, type safety)

### Authentication
- **Service**: Better Auth (from Phase 2)
- **Token**: JWT with 7-day expiry

## Database Schema Requirements

### New Tables (Phase 3)

#### Conversation Table
```sql
CREATE TABLE conversations (
    id SERIAL PRIMARY KEY,
    user_id VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW(),
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);

-- PostgreSQL does not allow INDEX clauses inside CREATE TABLE;
-- indexes are created separately.
CREATE INDEX idx_user_conversations ON conversations (user_id, created_at DESC);
```

#### Message Table
```sql
CREATE TABLE messages (
    id SERIAL PRIMARY KEY,
    conversation_id INTEGER NOT NULL,
    user_id VARCHAR(255) NOT NULL,
    role VARCHAR(20) NOT NULL CHECK (role IN ('user', 'assistant')),
    content TEXT NOT NULL,
    tool_calls JSONB,
    created_at TIMESTAMP DEFAULT NOW(),
    FOREIGN KEY (conversation_id) REFERENCES conversations(id) ON DELETE CASCADE,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);

CREATE INDEX idx_conversation_messages ON messages (conversation_id, created_at ASC);
```

### Existing Tables (Phase 2)
- **tasks**: Already exists with user_id foreign key
- **users**: Managed by Better Auth

## MCP Tools Specification

### Tool Contract Standards
- **Input Validation**: Pydantic models for all parameters
- **Error Handling**: Return structured error objects, never raise exceptions
- **User Isolation**: Every tool MUST filter by user_id from JWT
- **Idempotency**: Operations should be safe to retry
- **Response Format**: Consistent JSON structure across all tools
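The contract can be illustrated with a minimal, dependency-free sketch. The constitution mandates Pydantic models for real tools; the function and error codes below are hypothetical and only show the "validate, then return a structured error object instead of raising" shape:

```python
# Illustrative sketch of the tool contract (hypothetical names).
# Real tools validate with Pydantic; the key point is that invalid
# input yields a structured error object, never an exception.
def validate_add_task(user_id: str, title: str, description: str = "") -> dict:
    if not user_id or not user_id.strip():
        return {"error": {"code": "MISSING_USER_ID", "message": "user_id is required"}}
    if not title.strip() or len(title) > 200:
        return {"error": {"code": "INVALID_TITLE", "message": "title must be 1-200 chars"}}
    if len(description) > 1000:
        return {"error": {"code": "INVALID_DESCRIPTION", "message": "description must be at most 1000 chars"}}
    return {"ok": True}

print(validate_add_task("u1", ""))  # structured error dict, no exception raised
```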

### Required Tools (5 Total)

#### 1. add_task
Parameters:
- user_id: str (from JWT)
- title: str (1-200 chars, required)
- description: str (0-1000 chars, optional)

Returns:
{
  "task_id": int,
  "status": "created",
  "title": str
}

#### 2. list_tasks
Parameters:
- user_id: str (from JWT)
- status: str (optional: "all" | "pending" | "completed")

Returns:
{
  "tasks": [
    {"id": int, "title": str, "completed": bool, "created_at": str}
  ],
  "total": int
}

#### 3. complete_task
Parameters:
- user_id: str (from JWT)
- task_id: int (required)

Returns:
{
  "task_id": int,
  "status": "completed",
  "title": str
}

#### 4. delete_task
Parameters:
- user_id: str (from JWT)
- task_id: int (required)

Returns:
{
  "task_id": int,
  "status": "deleted",
  "title": str
}

#### 5. update_task
Parameters:
- user_id: str (from JWT)
- task_id: int (required)
- title: str (optional, 1-200 chars)
- description: str (optional, 0-1000 chars)

Returns:
{
  "task_id": int,
  "status": "updated",
  "title": str
}

## Security Requirements

### JWT Token Validation
Every chat request MUST validate:
1. Token present in Authorization header
2. Token signature valid (using BETTER_AUTH_SECRET)
3. Token not expired
4. user_id from token matches {user_id} in URL
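The four checks can be sketched with only the standard library. This is illustrative: production code would use a JWT library such as PyJWT, and the secret and claim names here are placeholders, not the actual Better Auth token layout:

```python
# Illustrative HS256 verifier covering the four checks above.
# Production code should use a maintained JWT library instead.
import base64, hashlib, hmac, json, time

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def _b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def make_jwt(payload: dict, secret: str) -> str:
    """Build a signed HS256 token (for demonstration only)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = _b64url(hmac.new(secret.encode(), header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()

def verify_jwt(token: str, secret: str, url_user_id: str) -> dict:
    if not token:
        raise ValueError("missing token")                         # check 1: present
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64.encode()):
        raise ValueError("bad signature")                         # check 2: signature
    payload = json.loads(_b64url_decode(payload_b64))
    if payload.get("exp", 0) < time.time():
        raise ValueError("token expired")                         # check 3: expiry
    if payload.get("user_id") != url_user_id:
        raise ValueError("user mismatch")                         # check 4: user match
    return payload
```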

### User Data Isolation
Every MCP tool MUST enforce:
1. Filter all queries by user_id from JWT
2. Never expose other users' tasks
3. Validate task ownership before update/delete
4. Return 404 instead of 403 for missing tasks (avoids leaking whether a task exists or who owns it)
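A minimal in-memory sketch of these rules (illustrative only; real tools filter PostgreSQL queries via SQLModel):

```python
# Illustrative in-memory store; the real tools apply the same filter
# as a WHERE user_id = ... clause in SQL.
TASKS = {
    1: {"user_id": "alice", "title": "Buy milk"},
    2: {"user_id": "bob", "title": "Ship release"},
}

def get_task(user_id: str, task_id: int) -> dict:
    task = TASKS.get(task_id)
    # A missing task and another user's task are indistinguishable to the
    # caller: both return the same not-found error, so ownership never leaks.
    if task is None or task["user_id"] != user_id:
        return {"error": {"code": "TASK_NOT_FOUND", "task_id": task_id}}
    return task

print(get_task("alice", 2))  # not found, even though task 2 exists for bob
```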

### SQL Injection Prevention
All database queries MUST use:
1. SQLModel ORM (parameterized queries)
2. No string concatenation of user input into SQL
3. Input validation via Pydantic
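The parameterization rule, demonstrated with the standard library's sqlite3 driver (the constitution mandates SQLModel over PostgreSQL; sqlite3 is used here only because it ships with Python, and the table is illustrative):

```python
import sqlite3

# Illustrative table; the real schema lives in PostgreSQL via SQLModel.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, user_id TEXT, title TEXT)")
conn.execute("INSERT INTO tasks (user_id, title) VALUES (?, ?)", ("alice", "Buy milk"))

# A malicious value stays inert because it is bound as a parameter,
# never concatenated into the SQL string.
evil = "alice' OR '1'='1"
rows = conn.execute("SELECT title FROM tasks WHERE user_id = ?", (evil,)).fetchall()
print(rows)  # []
```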

## Performance Requirements

### Response Time
- **Chat endpoint**: < 3 seconds (95th percentile)
- **MCP tool execution**: < 500ms per tool
- **Database queries**: < 100ms per query

### Scalability
- **Stateless design**: Supports horizontal scaling
- **Database pooling**: Connection reuse for efficiency
- **Async operations**: Non-blocking I/O throughout

## Testing Standards

### Unit Tests Required
- Each MCP tool independently testable
- Mock database for tool tests
- JWT verification logic tested

### Integration Tests Required
- End-to-end chat flow
- Conversation persistence
- Multi-turn conversations

### User Acceptance Tests
- Natural language variations
- Error scenarios
- Conversation history

## Spec-Driven Development Workflow

### Process (NON-NEGOTIABLE)
1. Write Constitution (this file)
2. Write Specification for each component
3. Get Claude Code to generate implementation
4. NO MANUAL CODING ALLOWED
5. Refine spec if output incorrect
6. Iterate until code correct

### Spec Requirements
Every spec MUST include:
- **What**: Clear requirement statement
- **Why**: Rationale and context
- **How**: Technical approach
- **Acceptance Criteria**: Testable success conditions
- **Examples**: Input/output samples

## Forbidden Practices

What NOT to Do:
- Store conversation state in server memory
- Bypass MCP tools with direct database calls from chat endpoint
- Manual if-else routing instead of agent intelligence
- Skip JWT validation for any request
- Write code manually (violates spec-driven approach)
- Use localStorage/sessionStorage (not supported in artifacts)
- Hardcode API keys or secrets
- Return raw database errors to users

## Governance

This constitution is the supreme authority for Phase 3 development. All code, specs, and decisions must align with these principles. Amendments require documentation and version updates per semantic versioning rules.

**Version**: 1.0.0 | **Ratified**: 2026-01-08 | **Last Amended**: 2026-01-08
Chatbot/.specify/scripts/bash/check-prerequisites.sh
ADDED
@@ -0,0 +1,166 @@
#!/usr/bin/env bash

# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for the Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
#   --json              Output in JSON format
#   --require-tasks     Require tasks.md to exist (for implementation phase)
#   --include-tasks     Include tasks.md in AVAILABLE_DOCS list
#   --paths-only        Only output path variables (no validation)
#   --help, -h          Show help message
#
# OUTPUTS:
#   JSON mode:   {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
#   Text mode:   FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
#   Paths only:  REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.

set -e

# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --require-tasks)
            REQUIRE_TASKS=true
            ;;
        --include-tasks)
            INCLUDE_TASKS=true
            ;;
        --paths-only)
            PATHS_ONLY=true
            ;;
        --help|-h)
            cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]

Consolidated prerequisite checking for Spec-Driven Development workflow.

OPTIONS:
  --json              Output in JSON format
  --require-tasks     Require tasks.md to exist (for implementation phase)
  --include-tasks     Include tasks.md in AVAILABLE_DOCS list
  --paths-only        Only output path variables (no prerequisite validation)
  --help, -h          Show this help message

EXAMPLES:
  # Check task prerequisites (plan.md required)
  ./check-prerequisites.sh --json

  # Check implementation prerequisites (plan.md + tasks.md required)
  ./check-prerequisites.sh --json --require-tasks --include-tasks

  # Get feature paths only (no validation)
  ./check-prerequisites.sh --paths-only

EOF
            exit 0
            ;;
        *)
            echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
            exit 1
            ;;
    esac
done

# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
    if $JSON_MODE; then
        # Minimal JSON paths payload (no validation performed)
        printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
            "$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
    else
        echo "REPO_ROOT: $REPO_ROOT"
        echo "BRANCH: $CURRENT_BRANCH"
        echo "FEATURE_DIR: $FEATURE_DIR"
        echo "FEATURE_SPEC: $FEATURE_SPEC"
        echo "IMPL_PLAN: $IMPL_PLAN"
        echo "TASKS: $TASKS"
    fi
    exit 0
fi

# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
    echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
    echo "Run /sp.specify first to create the feature structure." >&2
    exit 1
fi

if [[ ! -f "$IMPL_PLAN" ]]; then
    echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
    echo "Run /sp.plan first to create the implementation plan." >&2
    exit 1
fi

# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
    echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
    echo "Run /sp.tasks first to create the task list." >&2
    exit 1
fi

# Build list of available documents
docs=()

# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")

# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
    docs+=("contracts/")
fi

[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
|
| 135 |
+
|
| 136 |
+
# Include tasks.md if requested and it exists
|
| 137 |
+
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
|
| 138 |
+
docs+=("tasks.md")
|
| 139 |
+
fi
|
| 140 |
+
|
| 141 |
+
# Output results
|
| 142 |
+
if $JSON_MODE; then
|
| 143 |
+
# Build JSON array of documents
|
| 144 |
+
if [[ ${#docs[@]} -eq 0 ]]; then
|
| 145 |
+
json_docs="[]"
|
| 146 |
+
else
|
| 147 |
+
json_docs=$(printf '"%s",' "${docs[@]}")
|
| 148 |
+
json_docs="[${json_docs%,}]"
|
| 149 |
+
fi
|
| 150 |
+
|
| 151 |
+
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
|
| 152 |
+
else
|
| 153 |
+
# Text output
|
| 154 |
+
echo "FEATURE_DIR:$FEATURE_DIR"
|
| 155 |
+
echo "AVAILABLE_DOCS:"
|
| 156 |
+
|
| 157 |
+
# Show status of each potential document
|
| 158 |
+
check_file "$RESEARCH" "research.md"
|
| 159 |
+
check_file "$DATA_MODEL" "data-model.md"
|
| 160 |
+
check_dir "$CONTRACTS_DIR" "contracts/"
|
| 161 |
+
check_file "$QUICKSTART" "quickstart.md"
|
| 162 |
+
|
| 163 |
+
if $INCLUDE_TASKS; then
|
| 164 |
+
check_file "$TASKS" "tasks.md"
|
| 165 |
+
fi
|
| 166 |
+
fi
|
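The AVAILABLE_DOCS JSON output above is built with a printf/trim idiom rather than a JSON library. A minimal standalone sketch of that idiom (the `docs` values here are illustrative, not taken from a real feature directory):

```shell
#!/usr/bin/env bash
# Sketch of the JSON-array idiom used by check-prerequisites.sh:
# print each element as a quoted, comma-suffixed string, then strip
# the trailing comma and wrap the result in brackets.
docs=("research.md" "data-model.md" "contracts/")

if [[ ${#docs[@]} -eq 0 ]]; then
    json_docs="[]"
else
    json_docs=$(printf '"%s",' "${docs[@]}")   # "research.md","data-model.md","contracts/",
    json_docs="[${json_docs%,}]"               # trim trailing comma, add brackets
fi

echo "$json_docs"
```

Note this only produces valid JSON when the document names contain no characters that need escaping, which holds for the fixed filenames used here.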
Chatbot/.specify/scripts/bash/common.sh
ADDED
@@ -0,0 +1,156 @@

```bash
#!/usr/bin/env bash
# Common functions and variables for all scripts

# Get repository root, with fallback for non-git repositories
get_repo_root() {
    if git rev-parse --show-toplevel >/dev/null 2>&1; then
        git rev-parse --show-toplevel
    else
        # Fall back to script location for non-git repos
        local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
        (cd "$script_dir/../../.." && pwd)
    fi
}

# Get current branch, with fallback for non-git repositories
get_current_branch() {
    # First check if SPECIFY_FEATURE environment variable is set
    if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
        echo "$SPECIFY_FEATURE"
        return
    fi

    # Then check git if available
    if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
        git rev-parse --abbrev-ref HEAD
        return
    fi

    # For non-git repos, try to find the latest feature directory
    local repo_root=$(get_repo_root)
    local specs_dir="$repo_root/specs"

    if [[ -d "$specs_dir" ]]; then
        local latest_feature=""
        local highest=0

        for dir in "$specs_dir"/*; do
            if [[ -d "$dir" ]]; then
                local dirname=$(basename "$dir")
                if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
                    local number=${BASH_REMATCH[1]}
                    number=$((10#$number))
                    if [[ "$number" -gt "$highest" ]]; then
                        highest=$number
                        latest_feature=$dirname
                    fi
                fi
            fi
        done

        if [[ -n "$latest_feature" ]]; then
            echo "$latest_feature"
            return
        fi
    fi

    echo "main"  # Final fallback
}

# Check if we have git available
has_git() {
    git rev-parse --show-toplevel >/dev/null 2>&1
}

check_feature_branch() {
    local branch="$1"
    local has_git_repo="$2"

    # For non-git repos, we can't enforce branch naming but still provide output
    if [[ "$has_git_repo" != "true" ]]; then
        echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
        return 0
    fi

    if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
        echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
        echo "Feature branches should be named like: 001-feature-name" >&2
        return 1
    fi

    return 0
}

get_feature_dir() { echo "$1/specs/$2"; }

# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
    local repo_root="$1"
    local branch_name="$2"
    local specs_dir="$repo_root/specs"

    # Extract numeric prefix from branch (e.g., "004" from "004-whatever")
    if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
        # If the branch doesn't have a numeric prefix, fall back to exact match
        echo "$specs_dir/$branch_name"
        return
    fi

    local prefix="${BASH_REMATCH[1]}"

    # Search for directories in specs/ that start with this prefix
    local matches=()
    if [[ -d "$specs_dir" ]]; then
        for dir in "$specs_dir"/"$prefix"-*; do
            if [[ -d "$dir" ]]; then
                matches+=("$(basename "$dir")")
            fi
        done
    fi

    # Handle results
    if [[ ${#matches[@]} -eq 0 ]]; then
        # No match found - return the branch name path (will fail later with a clear error)
        echo "$specs_dir/$branch_name"
    elif [[ ${#matches[@]} -eq 1 ]]; then
        # Exactly one match - perfect!
        echo "$specs_dir/${matches[0]}"
    else
        # Multiple matches - this shouldn't happen with a proper naming convention
        echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
        echo "Please ensure only one spec directory exists per numeric prefix." >&2
        echo "$specs_dir/$branch_name"  # Return something to avoid breaking the script
    fi
}

get_feature_paths() {
    local repo_root=$(get_repo_root)
    local current_branch=$(get_current_branch)
    local has_git_repo="false"

    if has_git; then
        has_git_repo="true"
    fi

    # Use prefix-based lookup to support multiple branches per spec
    local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")

    cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}

check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
```
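The prefix-based lookup in `find_feature_dir_by_prefix` hinges on two bash idioms: capturing the three-digit prefix via `BASH_REMATCH`, and forcing base-10 arithmetic with `10#` so leading zeros are not parsed as octal. A minimal sketch (the branch name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the prefix extraction used by find_feature_dir_by_prefix:
# a branch like "004-fix-bug" yields prefix "004", which then matches
# any specs/004-* directory regardless of the branch's suffix.
branch="004-fix-bug"

if [[ "$branch" =~ ^([0-9]{3})- ]]; then
    prefix="${BASH_REMATCH[1]}"   # first capture group: "004"
    number=$((10#$prefix))        # 10# forces decimal, so "008"/"009" don't trip octal parsing
fi

echo "$prefix $number"
```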
Chatbot/.specify/scripts/bash/create-adr.sh
ADDED
@@ -0,0 +1,101 @@

```bash
#!/usr/bin/env bash
set -euo pipefail

# create-adr.sh - Create a new Architecture Decision Record deterministically
#
# This script ONLY:
#   1. Creates the correct directory structure (history/adr/)
#   2. Copies the template with {{PLACEHOLDERS}} intact
#   3. Returns metadata (id, path) for AI to fill in
#
# The calling AI agent is responsible for filling {{PLACEHOLDERS}}
#
# Usage:
#   scripts/bash/create-adr.sh \
#     --title "Use WebSockets for Real-time Chat" \
#     [--json]

JSON=false
TITLE=""

while [[ $# -gt 0 ]]; do
    case "$1" in
        --json) JSON=true; shift ;;
        --title) TITLE=${2:-}; shift 2 ;;
        --help|-h)
            cat <<EOF
Usage: $0 --title <title> [options]

Required:
  --title <text>   Title for the ADR (used for filename)

Optional:
  --json           Output JSON with id and path

Output:
  Creates ADR file with template placeholders ({{ID}}, {{TITLE}}, etc.)
  AI agent must fill these placeholders after creation

Examples:
  $0 --title "Use WebSockets for Real-time Chat" --json
  $0 --title "Adopt PostgreSQL for Primary Database"
EOF
            exit 0
            ;;
        *) shift ;;
    esac
done

if [[ -z "$TITLE" ]]; then
    echo "Error: --title is required" >&2
    exit 1
fi

REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
ADR_DIR="$REPO_ROOT/history/adr"
mkdir -p "$ADR_DIR"

# Check for template (try both locations)
TPL=""
if [[ -f "$REPO_ROOT/.specify/templates/adr-template.md" ]]; then
    TPL="$REPO_ROOT/.specify/templates/adr-template.md"
elif [[ -f "$REPO_ROOT/templates/adr-template.md" ]]; then
    TPL="$REPO_ROOT/templates/adr-template.md"
else
    echo "Error: ADR template not found at .specify/templates/ or templates/" >&2
    exit 1
fi

# next id
next_id() {
    local max=0 base num
    shopt -s nullglob
    for f in "$ADR_DIR"/[0-9][0-9][0-9][0-9]-*.md; do
        base=$(basename "$f")
        num=${base%%-*}
        if [[ $num =~ ^[0-9]{4}$ ]]; then
            local n=$((10#$num))
            (( n > max )) && max=$n
        fi
    done
    printf "%04d" $((max+1))
}

slugify() {
    echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g; s/-\{2,\}/-/g; s/^-//; s/-$//'
}

ID=$(next_id)
SLUG=$(slugify "$TITLE")
OUTFILE="$ADR_DIR/${ID}-${SLUG}.md"

# Simply copy the template (AI will fill placeholders)
cp "$TPL" "$OUTFILE"

ABS=$(cd "$(dirname "$OUTFILE")" && pwd)/$(basename "$OUTFILE")
if $JSON; then
    printf '{"id":"%s","path":"%s","template":"%s"}\n' "$ID" "$ABS" "$(basename "$TPL")"
else
    echo "✅ ADR template copied → $ABS"
    echo "Note: AI agent should now fill in {{PLACEHOLDERS}}"
fi
```
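The ADR filename scheme combines the zero-padded id from `next_id` with the `slugify` pipeline. A minimal sketch of how a title becomes a filename (the id value 3 is illustrative; in the real script it comes from scanning existing ADR files):

```shell
#!/usr/bin/env bash
# Sketch of the slugify + zero-padded id scheme used for ADR filenames:
# lowercase the title, turn every non-alphanumeric run into a single
# hyphen, trim edge hyphens, then prepend a %04d-padded id.
slugify() {
    echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g; s/-\{2,\}/-/g; s/^-//; s/-$//'
}

SLUG=$(slugify "Use WebSockets for Real-time Chat")
ID=$(printf "%04d" 3)   # next_id pads the same way: 3 -> 0003

echo "${ID}-${SLUG}.md"
```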
Chatbot/.specify/scripts/bash/create-new-feature.sh
ADDED
@@ -0,0 +1,302 @@

```bash
#!/usr/bin/env bash

set -e

JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
    arg="${!i}"
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --short-name)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            # Check if the next argument is another option (starts with --)
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            SHORT_NAME="$next_arg"
            ;;
        --number)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            BRANCH_NUMBER="$next_arg"
            ;;
        --help|-h)
            echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
            echo ""
            echo "Options:"
            echo "  --json               Output in JSON format"
            echo "  --short-name <name>  Provide a custom short name (2-4 words) for the branch"
            echo "  --number N           Specify branch number manually (overrides auto-detection)"
            echo "  --help, -h           Show this help message"
            echo ""
            echo "Examples:"
            echo "  $0 'Add user authentication system' --short-name 'user-auth'"
            echo "  $0 'Implement OAuth2 integration for API' --number 5"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
    i=$((i + 1))
done

FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
    exit 1
fi

# Function to find the repository root by searching for existing project markers
find_repo_root() {
    local dir="$1"
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
            echo "$dir"
            return 0
        fi
        dir="$(dirname "$dir")"
    done
    return 1
}

# Function to get highest number from specs directory
get_highest_from_specs() {
    local specs_dir="$1"
    local highest=0

    if [ -d "$specs_dir" ]; then
        for dir in "$specs_dir"/*; do
            [ -d "$dir" ] || continue
            dirname=$(basename "$dir")
            number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
            number=$((10#$number))
            if [ "$number" -gt "$highest" ]; then
                highest=$number
            fi
        done
    fi

    echo "$highest"
}

# Function to get highest number from git branches
get_highest_from_branches() {
    local highest=0

    # Get all branches (local and remote)
    branches=$(git branch -a 2>/dev/null || echo "")

    if [ -n "$branches" ]; then
        while IFS= read -r branch; do
            # Clean branch name: remove leading markers and remote prefixes
            clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')

            # Extract feature number if branch matches pattern ###-*
            if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then
                number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0")
                number=$((10#$number))
                if [ "$number" -gt "$highest" ]; then
                    highest=$number
                fi
            fi
        done <<< "$branches"
    fi

    echo "$highest"
}

# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
    local specs_dir="$1"

    # Fetch all remotes to get latest branch info (suppress errors if no remotes)
    git fetch --all --prune 2>/dev/null || true

    # Get highest number from ALL branches (not just those matching the short name)
    local highest_branch=$(get_highest_from_branches)

    # Get highest number from ALL specs (not just those matching the short name)
    local highest_spec=$(get_highest_from_specs "$specs_dir")

    # Take the maximum of both
    local max_num=$highest_branch
    if [ "$highest_spec" -gt "$max_num" ]; then
        max_num=$highest_spec
    fi

    # Return next number
    echo $((max_num + 1))
}

# Function to clean and format a branch name
clean_branch_name() {
    local name="$1"
    echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}

# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if git rev-parse --show-toplevel >/dev/null 2>&1; then
    REPO_ROOT=$(git rev-parse --show-toplevel)
    HAS_GIT=true
else
    REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
    if [ -z "$REPO_ROOT" ]; then
        echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
        exit 1
    fi
    HAS_GIT=false
fi

cd "$REPO_ROOT"

SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"

# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
    local description="$1"

    # Common stop words to filter out
    local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"

    # Convert to lowercase and split into words
    local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')

    # Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in the original)
    local meaningful_words=()
    for word in $clean_name; do
        # Skip empty words
        [ -z "$word" ] && continue

        # Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
        if ! echo "$word" | grep -qiE "$stop_words"; then
            if [ ${#word} -ge 3 ]; then
                meaningful_words+=("$word")
            elif echo "$description" | grep -q "\b${word^^}\b"; then
                # Keep short words if they appear as uppercase in the original (likely acronyms)
                meaningful_words+=("$word")
            fi
        fi
    done

    # If we have meaningful words, use the first 3-4 of them
    if [ ${#meaningful_words[@]} -gt 0 ]; then
        local max_words=3
        if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi

        local result=""
        local count=0
        for word in "${meaningful_words[@]}"; do
            if [ $count -ge $max_words ]; then break; fi
            if [ -n "$result" ]; then result="$result-"; fi
            result="$result$word"
            count=$((count + 1))
        done
        echo "$result"
    else
        # Fall back to the original logic if no meaningful words were found
        local cleaned=$(clean_branch_name "$description")
        echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
    fi
}

# Generate branch name
if [ -n "$SHORT_NAME" ]; then
    # Use the provided short name, just clean it up
    BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
    # Generate from the description with smart filtering
    BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi

# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
    if [ "$HAS_GIT" = true ]; then
        # Check existing branches on remotes
        BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
    else
        # Fall back to local directory check
        HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
        BRANCH_NUMBER=$((HIGHEST + 1))
    fi
fi

# Force base-10 interpretation to prevent octal conversion (e.g., 010 is 8 in octal, but should be 10 in decimal)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"

# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
    # Calculate how much we need to trim from the suffix
    # Account for: feature number (3) + hyphen (1) = 4 chars
    MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))

    # Truncate the suffix at a word boundary if possible
    TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
    # Remove trailing hyphen if truncation created one
    TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')

    ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
    BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"

    >&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
    >&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
    >&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi

if [ "$HAS_GIT" = true ]; then
    git checkout -b "$BRANCH_NAME"
else
    >&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi

FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"

TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi

# Auto-create history/prompts/<branch-name>/ directory (same as specs/<branch-name>/)
# This keeps naming consistent across branch, specs, and prompts directories
PROMPTS_DIR="$REPO_ROOT/history/prompts/$BRANCH_NAME"
mkdir -p "$PROMPTS_DIR"

# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"

if $JSON_MODE; then
    printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
    echo "BRANCH_NAME: $BRANCH_NAME"
    echo "SPEC_FILE: $SPEC_FILE"
    echo "FEATURE_NUM: $FEATURE_NUM"
    echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi
```
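The octal-conversion guard that the script comments on can be seen in isolation: without the `10#` prefix, bash would parse a leading-zero branch number such as `010` as octal 8. A minimal sketch:

```shell
#!/usr/bin/env bash
# Sketch of the base-10 guard used when formatting FEATURE_NUM:
# 10# forces decimal interpretation before zero-padding, so a
# user-supplied "010" round-trips as 10 rather than octal 8.
BRANCH_NUMBER="010"

FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
echo "$FEATURE_NUM"
```

Without `10#`, `$((010))` evaluates to 8 and the padded result would silently regress to `008`.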
Chatbot/.specify/scripts/bash/create-phr.sh
ADDED
@@ -0,0 +1,256 @@

```bash
#!/usr/bin/env bash
set -euo pipefail

# create-phr.sh - Create Prompt History Record (PHR) - Spec Kit Native
#
# Deterministic PHR location strategy:
#   1. Constitution stage:
#      → history/prompts/constitution/
#      → stage: constitution
#      → naming: 0001-title.constitution.prompt.md
#
#   2. Feature stages (spec-specific work):
#      → history/prompts/<spec-name>/
#      → stages: spec, plan, tasks, red, green, refactor, explainer, misc
#      → naming: 0001-title.spec.prompt.md
#
#   3. General stage (catch-all):
#      → history/prompts/general/
#      → stage: general
#      → naming: 0001-title.general.prompt.md
#
# This script ONLY:
#   1. Creates the correct directory structure
#   2. Copies the template with {{PLACEHOLDERS}} intact
#   3. Returns metadata (id, path, context) for AI to fill in
#
# The calling AI agent is responsible for filling {{PLACEHOLDERS}}
#
# Usage:
#   scripts/bash/create-phr.sh \
#     --title "Setup authentication" \
#     --stage architect \
#     [--feature 001-auth] \
#     [--json]

JSON_MODE=false
TITLE=""
STAGE=""
FEATURE=""

# Parse arguments
while [[ $# -gt 0 ]]; do
    case "$1" in
        --json) JSON_MODE=true; shift ;;
        --title) TITLE=${2:-}; shift 2 ;;
        --stage) STAGE=${2:-}; shift 2 ;;
        --feature) FEATURE=${2:-}; shift 2 ;;
        --help|-h)
            cat <<EOF
Usage: $0 --title <title> --stage <stage> [options]

Required:
  --title <text>    Title for the PHR (used for filename)
  --stage <stage>   constitution|spec|plan|tasks|red|green|refactor|explainer|misc|general

Optional:
  --feature <slug>  Feature slug (e.g., 001-auth). Auto-detected from branch if omitted.
  --json            Output JSON with id, path, and context

Location Rules (all under history/prompts/):
  - constitution → history/prompts/constitution/
  - spec, plan, tasks, red, green, refactor, explainer, misc → history/prompts/<branch-name>/
  - general → history/prompts/general/ (catch-all for non-feature work)

Output:
  Creates PHR file with template placeholders ({{ID}}, {{TITLE}}, etc.)
  AI agent must fill these placeholders after creation

Examples:
  # Early-phase constitution work (no feature exists)
  $0 --title "Define quality standards" --stage constitution --json

  # Feature-specific implementation work
  $0 --title "Implement login" --stage green --feature 001-auth --json
EOF
            exit 0
            ;;
        *) shift ;;
    esac
done

# Validation
if [[ -z "$TITLE" ]]; then
    echo "Error: --title is required" >&2
    exit 1
fi

if [[ -z "$STAGE" ]]; then
    echo "Error: --stage is required" >&2
    exit 1
fi

# Get repository root
REPO_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
SPECS_DIR="$REPO_ROOT/specs"

# Check for template (try both locations)
TEMPLATE_PATH=""
if [[ -f "$REPO_ROOT/.specify/templates/phr-template.prompt.md" ]]; then
    TEMPLATE_PATH="$REPO_ROOT/.specify/templates/phr-template.prompt.md"
elif [[ -f "$REPO_ROOT/templates/phr-template.prompt.md" ]]; then
    TEMPLATE_PATH="$REPO_ROOT/templates/phr-template.prompt.md"
else
    echo "Error: PHR template not found at .specify/templates/ or templates/" >&2
```
|
| 105 |
+
exit 1
|
| 106 |
+
fi
|
| 107 |
+
|
| 108 |
+
# Deterministic location logic based on STAGE
|
| 109 |
+
# New structure: all prompts go under history/prompts/ with subdirectories:
|
| 110 |
+
# - constitution/ for constitution prompts
|
| 111 |
+
# - <spec-name>/ for spec-specific prompts
|
| 112 |
+
# - general/ for general/catch-all prompts
|
| 113 |
+
|
| 114 |
+
case "$STAGE" in
|
| 115 |
+
constitution)
|
| 116 |
+
# Constitution prompts always go to history/prompts/constitution/
|
| 117 |
+
PROMPTS_DIR="$REPO_ROOT/history/prompts/constitution"
|
| 118 |
+
VALID_STAGES=("constitution")
|
| 119 |
+
CONTEXT="constitution"
|
| 120 |
+
;;
|
| 121 |
+
spec|plan|tasks|red|green|refactor|explainer|misc)
|
| 122 |
+
# Feature-specific stages: require specs/ directory and feature context
|
| 123 |
+
if [[ ! -d "$SPECS_DIR" ]]; then
|
| 124 |
+
echo "Error: Feature stage '$STAGE' requires specs/ directory and a feature context" >&2
|
| 125 |
+
echo "Run /sp.feature first to create a feature, then try again" >&2
|
| 126 |
+
exit 1
|
| 127 |
+
fi
|
| 128 |
+
|
| 129 |
+
# Auto-detect feature if not specified
|
| 130 |
+
if [[ -z "$FEATURE" ]]; then
|
| 131 |
+
# Try to get from SPECIFY_FEATURE environment variable
|
| 132 |
+
if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
|
| 133 |
+
FEATURE="$SPECIFY_FEATURE"
|
| 134 |
+
# Try to match current branch
|
| 135 |
+
elif git rev-parse --show-toplevel >/dev/null 2>&1; then
|
| 136 |
+
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "")
|
| 137 |
+
if [[ -n "$BRANCH" && "$BRANCH" != "main" && "$BRANCH" != "master" ]]; then
|
| 138 |
+
# Check if branch name matches a feature directory
|
| 139 |
+
if [[ -d "$SPECS_DIR/$BRANCH" ]]; then
|
| 140 |
+
FEATURE="$BRANCH"
|
| 141 |
+
fi
|
| 142 |
+
fi
|
| 143 |
+
fi
|
| 144 |
+
|
| 145 |
+
# If still no feature, find the highest numbered feature
|
| 146 |
+
if [[ -z "$FEATURE" ]]; then
|
| 147 |
+
max_num=0
|
| 148 |
+
latest_feature=""
|
| 149 |
+
for dir in "$SPECS_DIR"/*; do
|
| 150 |
+
if [[ -d "$dir" ]]; then
|
| 151 |
+
dirname=$(basename "$dir")
|
| 152 |
+
if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
|
| 153 |
+
num=$((10#${BASH_REMATCH[1]}))
|
| 154 |
+
if (( num > max_num )); then
|
| 155 |
+
max_num=$num
|
| 156 |
+
latest_feature="$dirname"
|
| 157 |
+
fi
|
| 158 |
+
fi
|
| 159 |
+
fi
|
| 160 |
+
done
|
| 161 |
+
|
| 162 |
+
if [[ -n "$latest_feature" ]]; then
|
| 163 |
+
FEATURE="$latest_feature"
|
| 164 |
+
else
|
| 165 |
+
echo "Error: No feature specified and no numbered features found in $SPECS_DIR" >&2
|
| 166 |
+
echo "Please specify --feature or create a feature directory first" >&2
|
| 167 |
+
exit 1
|
| 168 |
+
fi
|
| 169 |
+
fi
|
| 170 |
+
fi
|
| 171 |
+
|
| 172 |
+
# Validate feature exists
|
| 173 |
+
if [[ ! -d "$SPECS_DIR/$FEATURE" ]]; then
|
| 174 |
+
echo "Error: Feature directory not found: $SPECS_DIR/$FEATURE" >&2
|
| 175 |
+
echo "Available features:" >&2
|
| 176 |
+
ls -1 "$SPECS_DIR" 2>/dev/null | head -5 | sed 's/^/ - /' >&2
|
| 177 |
+
exit 1
|
| 178 |
+
fi
|
| 179 |
+
|
| 180 |
+
# Feature prompts go to history/prompts/<branch-name>/ (same as specs/<branch-name>/)
|
| 181 |
+
# This keeps naming consistent across branch, specs, and prompts directories
|
| 182 |
+
PROMPTS_DIR="$REPO_ROOT/history/prompts/$FEATURE"
|
| 183 |
+
VALID_STAGES=("spec" "plan" "tasks" "red" "green" "refactor" "explainer" "misc")
|
| 184 |
+
CONTEXT="feature"
|
| 185 |
+
;;
|
| 186 |
+
general)
|
| 187 |
+
# General stage: catch-all that goes to history/prompts/general/
|
| 188 |
+
PROMPTS_DIR="$REPO_ROOT/history/prompts/general"
|
| 189 |
+
VALID_STAGES=("general")
|
| 190 |
+
CONTEXT="general"
|
| 191 |
+
;;
|
| 192 |
+
*)
|
| 193 |
+
echo "Error: Unknown stage '$STAGE'" >&2
|
| 194 |
+
exit 1
|
| 195 |
+
;;
|
| 196 |
+
esac
|
| 197 |
+
|
| 198 |
+
# Validate stage
|
| 199 |
+
stage_valid=false
|
| 200 |
+
for valid_stage in "${VALID_STAGES[@]}"; do
|
| 201 |
+
if [[ "$STAGE" == "$valid_stage" ]]; then
|
| 202 |
+
stage_valid=true
|
| 203 |
+
break
|
| 204 |
+
fi
|
| 205 |
+
done
|
| 206 |
+
|
| 207 |
+
if [[ "$stage_valid" == "false" ]]; then
|
| 208 |
+
echo "Error: Invalid stage '$STAGE' for $CONTEXT context" >&2
|
| 209 |
+
echo "Valid stages for $CONTEXT: ${VALID_STAGES[*]}" >&2
|
| 210 |
+
exit 1
|
| 211 |
+
fi
|
| 212 |
+
|
| 213 |
+
# Ensure prompts directory exists
|
| 214 |
+
mkdir -p "$PROMPTS_DIR"
|
| 215 |
+
|
| 216 |
+
# Helper: slugify
|
| 217 |
+
slugify() {
|
| 218 |
+
echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
|
| 219 |
+
}
|
| 220 |
+
|
| 221 |
+
# Get next ID (local to this directory)
|
| 222 |
+
get_next_id() {
|
| 223 |
+
local max_id=0
|
| 224 |
+
for file in "$PROMPTS_DIR"/[0-9][0-9][0-9][0-9]-*.prompt.md; do
|
| 225 |
+
[[ -e "$file" ]] || continue
|
| 226 |
+
local base=$(basename "$file")
|
| 227 |
+
local num=${base%%-*}
|
| 228 |
+
if [[ "$num" =~ ^[0-9]{4}$ ]]; then
|
| 229 |
+
local value=$((10#$num))
|
| 230 |
+
if (( value > max_id )); then
|
| 231 |
+
max_id=$value
|
| 232 |
+
fi
|
| 233 |
+
fi
|
| 234 |
+
done
|
| 235 |
+
printf '%04d' $((max_id + 1))
|
| 236 |
+
}
|
| 237 |
+
|
| 238 |
+
PHR_ID=$(get_next_id)
|
| 239 |
+
TITLE_SLUG=$(slugify "$TITLE")
|
| 240 |
+
STAGE_SLUG=$(slugify "$STAGE")
|
| 241 |
+
|
| 242 |
+
# Create filename with stage extension
|
| 243 |
+
OUTFILE="$PROMPTS_DIR/${PHR_ID}-${TITLE_SLUG}.${STAGE_SLUG}.prompt.md"
|
| 244 |
+
|
| 245 |
+
# Simply copy the template (AI will fill placeholders)
|
| 246 |
+
cp "$TEMPLATE_PATH" "$OUTFILE"
|
| 247 |
+
|
| 248 |
+
# Output results
|
| 249 |
+
ABS_PATH=$(cd "$(dirname "$OUTFILE")" && pwd)/$(basename "$OUTFILE")
|
| 250 |
+
if $JSON_MODE; then
|
| 251 |
+
printf '{"id":"%s","path":"%s","context":"%s","stage":"%s","feature":"%s","template":"%s"}\n' \
|
| 252 |
+
"$PHR_ID" "$ABS_PATH" "$CONTEXT" "$STAGE" "${FEATURE:-none}" "$(basename "$TEMPLATE_PATH")"
|
| 253 |
+
else
|
| 254 |
+
echo "✅ PHR template copied → $ABS_PATH"
|
| 255 |
+
echo "Note: AI agent should now fill in {{PLACEHOLDERS}}"
|
| 256 |
+
fi
|
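The filename scheme above hinges on two small pieces of logic: the `slugify` helper and the zero-padded ID arithmetic (`10#` forces base-10 so `0009` is not read as octal). A minimal standalone sketch, using the same pipeline as the script, with an invented input title:

```shell
# Same slugify pipeline as create-phr.sh; the input title is a made-up example.
slugify() {
    echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}

slugify "Setup Authentication!"     # → setup-authentication
printf '%04d\n' $((10#0009 + 1))    # → 0010 (10# avoids octal parsing of the leading zeros)
```

So a PHR titled "Setup Authentication!" at stage `spec` would land as `0010-setup-authentication.spec.prompt.md` if nine PHRs already exist in that directory.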
Chatbot/.specify/scripts/bash/setup-plan.sh
ADDED
@@ -0,0 +1,61 @@
#!/usr/bin/env bash

set -e

# Parse command line arguments
JSON_MODE=false
ARGS=()

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --help|-h)
            echo "Usage: $0 [--json]"
            echo "  --json    Output results in JSON format"
            echo "  --help    Show this help message"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
done

# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"

# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
    cp "$TEMPLATE" "$IMPL_PLAN"
    echo "Copied plan template to $IMPL_PLAN"
else
    echo "Warning: Plan template not found at $TEMPLATE"
    # Create a basic plan file if template doesn't exist
    touch "$IMPL_PLAN"
fi

# Output results
if $JSON_MODE; then
    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
        "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
    echo "FEATURE_SPEC: $FEATURE_SPEC"
    echo "IMPL_PLAN: $IMPL_PLAN"
    echo "SPECS_DIR: $FEATURE_DIR"
    echo "BRANCH: $CURRENT_BRANCH"
    echo "HAS_GIT: $HAS_GIT"
fi
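In `--json` mode, setup-plan.sh emits a single JSON object that downstream tooling can parse. A hypothetical illustration of the shape, with invented path values rather than real repository output:

```shell
# Illustrative only: the same printf format string setup-plan.sh uses,
# fed with made-up example values for a feature named 001-auth.
json=$(printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
    "specs/001-auth/spec.md" "specs/001-auth/plan.md" "specs/001-auth" "001-auth" "true")
echo "$json"
```

Note the format string does no JSON escaping, so this works only as long as branch names and paths contain no quotes or backslashes.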
Chatbot/.specify/scripts/bash/update-agent-context.sh
ADDED
@@ -0,0 +1,799 @@
#!/usr/bin/env bash

# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
#    - Verifies git repository structure and branch information
#    - Checks for required plan.md files and templates
#    - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
#    - Parses plan.md files to extract project metadata
#    - Identifies language/version, frameworks, databases, and project types
#    - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
#    - Creates new agent context files from templates when needed
#    - Updates existing agent files with new project information
#    - Preserves manual additions and custom configurations
#    - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
#    - Generates language-specific build/test commands
#    - Creates appropriate project directory structures
#    - Updates technology stacks and recent changes sections
#    - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
#    - Handles agent-specific file paths and naming conventions
#    - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, or Amazon Q Developer CLI
#    - Can update single agents or all existing agent files
#    - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|shai|q|bob|qoder
# Leave empty to update all existing agent files

set -e

# Enable strict error handling
set -u
set -o pipefail

#==============================================================================
# Configuration and Global Variables
#==============================================================================

# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

NEW_PLAN="$IMPL_PLAN"  # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"

# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md"
Q_FILE="$REPO_ROOT/AGENTS.md"
BOB_FILE="$REPO_ROOT/AGENTS.md"

# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"

# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""

#==============================================================================
# Utility Functions
#==============================================================================

log_info() {
    echo "INFO: $1"
}

log_success() {
    echo "✓ $1"
}

log_error() {
    echo "ERROR: $1" >&2
}

log_warning() {
    echo "WARNING: $1" >&2
}

# Cleanup function for temporary files
cleanup() {
    local exit_code=$?
    rm -f /tmp/agent_update_*_$$
    rm -f /tmp/manual_additions_$$
    exit $exit_code
}

# Set up cleanup trap
trap cleanup EXIT INT TERM

#==============================================================================
# Validation Functions
#==============================================================================

validate_environment() {
    # Check if we have a current branch/feature (git or non-git)
    if [[ -z "$CURRENT_BRANCH" ]]; then
        log_error "Unable to determine current feature"
        if [[ "$HAS_GIT" == "true" ]]; then
            log_info "Make sure you're on a feature branch"
        else
            log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
        fi
        exit 1
    fi

    # Check if plan.md exists
    if [[ ! -f "$NEW_PLAN" ]]; then
        log_error "No plan.md found at $NEW_PLAN"
        log_info "Make sure you're working on a feature with a corresponding spec directory"
        if [[ "$HAS_GIT" != "true" ]]; then
            log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
        fi
        exit 1
    fi

    # Check if template exists (needed for new files)
    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_warning "Template file not found at $TEMPLATE_FILE"
        log_warning "Creating new agent files will fail"
    fi
}

#==============================================================================
# Plan Parsing Functions
#==============================================================================

extract_plan_field() {
    local field_pattern="$1"
    local plan_file="$2"

    grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
        head -1 | \
        sed "s|^\*\*${field_pattern}\*\*: ||" | \
        sed 's/^[ \t]*//;s/[ \t]*$//' | \
        grep -v "NEEDS CLARIFICATION" | \
        grep -v "^N/A$" || echo ""
}

parse_plan_data() {
    local plan_file="$1"

    if [[ ! -f "$plan_file" ]]; then
        log_error "Plan file not found: $plan_file"
        return 1
    fi

    if [[ ! -r "$plan_file" ]]; then
        log_error "Plan file is not readable: $plan_file"
        return 1
    fi

    log_info "Parsing plan data from $plan_file"

    NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
    NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
    NEW_DB=$(extract_plan_field "Storage" "$plan_file")
    NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")

    # Log what we found
    if [[ -n "$NEW_LANG" ]]; then
        log_info "Found language: $NEW_LANG"
    else
        log_warning "No language information found in plan"
    fi

    if [[ -n "$NEW_FRAMEWORK" ]]; then
        log_info "Found framework: $NEW_FRAMEWORK"
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
        log_info "Found database: $NEW_DB"
    fi

    if [[ -n "$NEW_PROJECT_TYPE" ]]; then
        log_info "Found project type: $NEW_PROJECT_TYPE"
    fi
}

format_technology_stack() {
    local lang="$1"
    local framework="$2"
    local parts=()

    # Add non-empty parts
    [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
    [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")

    # Join with proper formatting
    if [[ ${#parts[@]} -eq 0 ]]; then
        echo ""
    elif [[ ${#parts[@]} -eq 1 ]]; then
        echo "${parts[0]}"
    else
        # Join multiple parts with " + "
        local result="${parts[0]}"
        for ((i=1; i<${#parts[@]}; i++)); do
            result="$result + ${parts[i]}"
        done
        echo "$result"
    fi
}

#==============================================================================
# Template and Content Generation Functions
#==============================================================================

get_project_structure() {
    local project_type="$1"

    if [[ "$project_type" == *"web"* ]]; then
        echo "backend/\\nfrontend/\\ntests/"
    else
        echo "src/\\ntests/"
    fi
}

get_commands_for_language() {
    local lang="$1"

    case "$lang" in
        *"Python"*)
            echo "cd src && pytest && ruff check ."
            ;;
        *"Rust"*)
            echo "cargo test && cargo clippy"
            ;;
        *"JavaScript"*|*"TypeScript"*)
            echo "npm test \\&\\& npm run lint"
            ;;
        *)
            echo "# Add commands for $lang"
            ;;
    esac
}

get_language_conventions() {
    local lang="$1"
    echo "$lang: Follow standard conventions"
}

create_new_agent_file() {
    local target_file="$1"
    local temp_file="$2"
    local project_name="$3"
    local current_date="$4"

    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_error "Template not found at $TEMPLATE_FILE"
        return 1
    fi

    if [[ ! -r "$TEMPLATE_FILE" ]]; then
        log_error "Template file is not readable: $TEMPLATE_FILE"
        return 1
    fi

    log_info "Creating new agent context file from template..."

    if ! cp "$TEMPLATE_FILE" "$temp_file"; then
        log_error "Failed to copy template file"
        return 1
    fi

    # Replace template placeholders
    local project_structure
    project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")

    local commands
    commands=$(get_commands_for_language "$NEW_LANG")

    local language_conventions
    language_conventions=$(get_language_conventions "$NEW_LANG")

    # Perform substitutions with error checking using safer approach
    # Escape special characters for sed by using a different delimiter or escaping
    local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')

    # Build technology stack and recent change strings conditionally
    local tech_stack
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
    elif [[ -n "$escaped_lang" ]]; then
        tech_stack="- $escaped_lang ($escaped_branch)"
    elif [[ -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_framework ($escaped_branch)"
    else
        tech_stack="- ($escaped_branch)"
    fi

    local recent_change
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
    elif [[ -n "$escaped_lang" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang"
    elif [[ -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_framework"
    else
        recent_change="- $escaped_branch: Added"
    fi

    local substitutions=(
        "s|\[PROJECT NAME\]|$project_name|"
        "s|\[DATE\]|$current_date|"
        "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
        "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
        "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
        "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
        "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
    )

    for substitution in "${substitutions[@]}"; do
        if ! sed -i.bak -e "$substitution" "$temp_file"; then
            log_error "Failed to perform substitution: $substitution"
            rm -f "$temp_file" "$temp_file.bak"
            return 1
        fi
    done

    # Convert \n sequences to actual newlines
    newline=$(printf '\n')
    sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"

    # Clean up backup files
    rm -f "$temp_file.bak" "$temp_file.bak2"

    return 0
}

update_existing_agent_file() {
    local target_file="$1"
    local current_date="$2"

    log_info "Updating existing agent context file..."

    # Use a single temporary file for atomic update
    local temp_file
    temp_file=$(mktemp) || {
        log_error "Failed to create temporary file"
        return 1
    }

    # Process the file in one pass
    local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
    local new_tech_entries=()
    local new_change_entry=""

    # Prepare new technology entries
    if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
        new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
        new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
    fi

    # Prepare new change entry
    if [[ -n "$tech_stack" ]]; then
        new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
    elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
        new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
    fi

    # Check if sections exist in the file
    local has_active_technologies=0
    local has_recent_changes=0

    if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
        has_active_technologies=1
    fi

    if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
        has_recent_changes=1
    fi

    # Process file line by line
    local in_tech_section=false
    local in_changes_section=false
    local tech_entries_added=false
    local changes_entries_added=false
    local existing_changes_count=0
    local file_ended=false

    while IFS= read -r line || [[ -n "$line" ]]; do
        # Handle Active Technologies section
        if [[ "$line" == "## Active Technologies" ]]; then
            echo "$line" >> "$temp_file"
            in_tech_section=true
            continue
        elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
            # Add new tech entries before closing the section
            if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
                printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
                tech_entries_added=true
            fi
            echo "$line" >> "$temp_file"
            in_tech_section=false
            continue
        elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
            # Add new tech entries before empty line in tech section
            if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
                printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
                tech_entries_added=true
            fi
            echo "$line" >> "$temp_file"
            continue
        fi

        # Handle Recent Changes section
        if [[ "$line" == "## Recent Changes" ]]; then
            echo "$line" >> "$temp_file"
            # Add new change entry right after the heading
            if [[ -n "$new_change_entry" ]]; then
                echo "$new_change_entry" >> "$temp_file"
            fi
            in_changes_section=true
            changes_entries_added=true
            continue
        elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
            echo "$line" >> "$temp_file"
            in_changes_section=false
            continue
|
| 456 |
+
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
|
| 457 |
+
# Keep only first 2 existing changes
|
| 458 |
+
if [[ $existing_changes_count -lt 2 ]]; then
|
| 459 |
+
echo "$line" >> "$temp_file"
|
| 460 |
+
((existing_changes_count++))
|
| 461 |
+
fi
|
| 462 |
+
continue
|
| 463 |
+
fi
|
| 464 |
+
|
| 465 |
+
# Update timestamp
|
| 466 |
+
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
|
| 467 |
+
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
|
| 468 |
+
else
|
| 469 |
+
echo "$line" >> "$temp_file"
|
| 470 |
+
fi
|
| 471 |
+
done < "$target_file"
|
| 472 |
+
|
| 473 |
+
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
|
| 474 |
+
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
| 475 |
+
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
| 476 |
+
tech_entries_added=true
|
| 477 |
+
fi
|
| 478 |
+
|
| 479 |
+
# If sections don't exist, add them at the end of the file
|
| 480 |
+
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
| 481 |
+
echo "" >> "$temp_file"
|
| 482 |
+
echo "## Active Technologies" >> "$temp_file"
|
| 483 |
+
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
| 484 |
+
tech_entries_added=true
|
| 485 |
+
fi
|
| 486 |
+
|
| 487 |
+
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
|
| 488 |
+
echo "" >> "$temp_file"
|
| 489 |
+
echo "## Recent Changes" >> "$temp_file"
|
| 490 |
+
echo "$new_change_entry" >> "$temp_file"
|
| 491 |
+
changes_entries_added=true
|
| 492 |
+
fi
|
| 493 |
+
|
| 494 |
+
# Move temp file to target atomically
|
| 495 |
+
if ! mv "$temp_file" "$target_file"; then
|
| 496 |
+
log_error "Failed to update target file"
|
| 497 |
+
rm -f "$temp_file"
|
| 498 |
+
return 1
|
| 499 |
+
fi
|
| 500 |
+
|
| 501 |
+
return 0
|
| 502 |
+
}
|
| 503 |
+
#==============================================================================
|
| 504 |
+
# Main Agent File Update Function
|
| 505 |
+
#==============================================================================
|
| 506 |
+
|
| 507 |
+
update_agent_file() {
|
| 508 |
+
local target_file="$1"
|
| 509 |
+
local agent_name="$2"
|
| 510 |
+
|
| 511 |
+
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
|
| 512 |
+
log_error "update_agent_file requires target_file and agent_name parameters"
|
| 513 |
+
return 1
|
| 514 |
+
fi
|
| 515 |
+
|
| 516 |
+
log_info "Updating $agent_name context file: $target_file"
|
| 517 |
+
|
| 518 |
+
local project_name
|
| 519 |
+
project_name=$(basename "$REPO_ROOT")
|
| 520 |
+
local current_date
|
| 521 |
+
current_date=$(date +%Y-%m-%d)
|
| 522 |
+
|
| 523 |
+
# Create directory if it doesn't exist
|
| 524 |
+
local target_dir
|
| 525 |
+
target_dir=$(dirname "$target_file")
|
| 526 |
+
if [[ ! -d "$target_dir" ]]; then
|
| 527 |
+
if ! mkdir -p "$target_dir"; then
|
| 528 |
+
log_error "Failed to create directory: $target_dir"
|
| 529 |
+
return 1
|
| 530 |
+
fi
|
| 531 |
+
fi
|
| 532 |
+
|
| 533 |
+
if [[ ! -f "$target_file" ]]; then
|
| 534 |
+
# Create new file from template
|
| 535 |
+
local temp_file
|
| 536 |
+
temp_file=$(mktemp) || {
|
| 537 |
+
log_error "Failed to create temporary file"
|
| 538 |
+
return 1
|
| 539 |
+
}
|
| 540 |
+
|
| 541 |
+
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
|
| 542 |
+
if mv "$temp_file" "$target_file"; then
|
| 543 |
+
log_success "Created new $agent_name context file"
|
| 544 |
+
else
|
| 545 |
+
log_error "Failed to move temporary file to $target_file"
|
| 546 |
+
rm -f "$temp_file"
|
| 547 |
+
return 1
|
| 548 |
+
fi
|
| 549 |
+
else
|
| 550 |
+
log_error "Failed to create new agent file"
|
| 551 |
+
rm -f "$temp_file"
|
| 552 |
+
return 1
|
| 553 |
+
fi
|
| 554 |
+
else
|
| 555 |
+
# Update existing file
|
| 556 |
+
if [[ ! -r "$target_file" ]]; then
|
| 557 |
+
log_error "Cannot read existing file: $target_file"
|
| 558 |
+
return 1
|
| 559 |
+
fi
|
| 560 |
+
|
| 561 |
+
if [[ ! -w "$target_file" ]]; then
|
| 562 |
+
log_error "Cannot write to existing file: $target_file"
|
| 563 |
+
return 1
|
| 564 |
+
fi
|
| 565 |
+
|
| 566 |
+
if update_existing_agent_file "$target_file" "$current_date"; then
|
| 567 |
+
log_success "Updated existing $agent_name context file"
|
| 568 |
+
else
|
| 569 |
+
log_error "Failed to update existing agent file"
|
| 570 |
+
return 1
|
| 571 |
+
fi
|
| 572 |
+
fi
|
| 573 |
+
|
| 574 |
+
return 0
|
| 575 |
+
}
|
| 576 |
+
|
| 577 |
+
#==============================================================================
|
| 578 |
+
# Agent Selection and Processing
|
| 579 |
+
#==============================================================================
|
| 580 |
+
|
| 581 |
+
update_specific_agent() {
|
| 582 |
+
local agent_type="$1"
|
| 583 |
+
|
| 584 |
+
case "$agent_type" in
|
| 585 |
+
claude)
|
| 586 |
+
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
| 587 |
+
;;
|
| 588 |
+
gemini)
|
| 589 |
+
update_agent_file "$GEMINI_FILE" "Gemini CLI"
|
| 590 |
+
;;
|
| 591 |
+
copilot)
|
| 592 |
+
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
|
| 593 |
+
;;
|
| 594 |
+
cursor-agent)
|
| 595 |
+
update_agent_file "$CURSOR_FILE" "Cursor IDE"
|
| 596 |
+
;;
|
| 597 |
+
qwen)
|
| 598 |
+
update_agent_file "$QWEN_FILE" "Qwen Code"
|
| 599 |
+
;;
|
| 600 |
+
opencode)
|
| 601 |
+
update_agent_file "$AGENTS_FILE" "opencode"
|
| 602 |
+
;;
|
| 603 |
+
codex)
|
| 604 |
+
update_agent_file "$AGENTS_FILE" "Codex CLI"
|
| 605 |
+
;;
|
| 606 |
+
windsurf)
|
| 607 |
+
update_agent_file "$WINDSURF_FILE" "Windsurf"
|
| 608 |
+
;;
|
| 609 |
+
kilocode)
|
| 610 |
+
update_agent_file "$KILOCODE_FILE" "Kilo Code"
|
| 611 |
+
;;
|
| 612 |
+
auggie)
|
| 613 |
+
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
|
| 614 |
+
;;
|
| 615 |
+
roo)
|
| 616 |
+
update_agent_file "$ROO_FILE" "Roo Code"
|
| 617 |
+
;;
|
| 618 |
+
codebuddy)
|
| 619 |
+
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
|
| 620 |
+
;;
|
| 621 |
+
qoder)
|
| 622 |
+
update_agent_file "$QODER_FILE" "Qoder CLI"
|
| 623 |
+
;;
|
| 624 |
+
amp)
|
| 625 |
+
update_agent_file "$AMP_FILE" "Amp"
|
| 626 |
+
;;
|
| 627 |
+
shai)
|
| 628 |
+
update_agent_file "$SHAI_FILE" "SHAI"
|
| 629 |
+
;;
|
| 630 |
+
q)
|
| 631 |
+
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
|
| 632 |
+
;;
|
| 633 |
+
bob)
|
| 634 |
+
update_agent_file "$BOB_FILE" "IBM Bob"
|
| 635 |
+
;;
|
| 636 |
+
*)
|
| 637 |
+
log_error "Unknown agent type '$agent_type'"
|
| 638 |
+
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|shai|q|bob|qoder"
|
| 639 |
+
exit 1
|
| 640 |
+
;;
|
| 641 |
+
esac
|
| 642 |
+
}
|
| 643 |
+
|
| 644 |
+
update_all_existing_agents() {
|
| 645 |
+
local found_agent=false
|
| 646 |
+
|
| 647 |
+
# Check each possible agent file and update if it exists
|
| 648 |
+
if [[ -f "$CLAUDE_FILE" ]]; then
|
| 649 |
+
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
| 650 |
+
found_agent=true
|
| 651 |
+
fi
|
| 652 |
+
|
| 653 |
+
if [[ -f "$GEMINI_FILE" ]]; then
|
| 654 |
+
update_agent_file "$GEMINI_FILE" "Gemini CLI"
|
| 655 |
+
found_agent=true
|
| 656 |
+
fi
|
| 657 |
+
|
| 658 |
+
if [[ -f "$COPILOT_FILE" ]]; then
|
| 659 |
+
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
|
| 660 |
+
found_agent=true
|
| 661 |
+
fi
|
| 662 |
+
|
| 663 |
+
if [[ -f "$CURSOR_FILE" ]]; then
|
| 664 |
+
update_agent_file "$CURSOR_FILE" "Cursor IDE"
|
| 665 |
+
found_agent=true
|
| 666 |
+
fi
|
| 667 |
+
|
| 668 |
+
if [[ -f "$QWEN_FILE" ]]; then
|
| 669 |
+
update_agent_file "$QWEN_FILE" "Qwen Code"
|
| 670 |
+
found_agent=true
|
| 671 |
+
fi
|
| 672 |
+
|
| 673 |
+
if [[ -f "$AGENTS_FILE" ]]; then
|
| 674 |
+
update_agent_file "$AGENTS_FILE" "Codex/opencode"
|
| 675 |
+
found_agent=true
|
| 676 |
+
fi
|
| 677 |
+
|
| 678 |
+
if [[ -f "$WINDSURF_FILE" ]]; then
|
| 679 |
+
update_agent_file "$WINDSURF_FILE" "Windsurf"
|
| 680 |
+
found_agent=true
|
| 681 |
+
fi
|
| 682 |
+
|
| 683 |
+
if [[ -f "$KILOCODE_FILE" ]]; then
|
| 684 |
+
update_agent_file "$KILOCODE_FILE" "Kilo Code"
|
| 685 |
+
found_agent=true
|
| 686 |
+
fi
|
| 687 |
+
|
| 688 |
+
if [[ -f "$AUGGIE_FILE" ]]; then
|
| 689 |
+
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
|
| 690 |
+
found_agent=true
|
| 691 |
+
fi
|
| 692 |
+
|
| 693 |
+
if [[ -f "$ROO_FILE" ]]; then
|
| 694 |
+
update_agent_file "$ROO_FILE" "Roo Code"
|
| 695 |
+
found_agent=true
|
| 696 |
+
fi
|
| 697 |
+
|
| 698 |
+
if [[ -f "$CODEBUDDY_FILE" ]]; then
|
| 699 |
+
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
|
| 700 |
+
found_agent=true
|
| 701 |
+
fi
|
| 702 |
+
|
| 703 |
+
if [[ -f "$SHAI_FILE" ]]; then
|
| 704 |
+
update_agent_file "$SHAI_FILE" "SHAI"
|
| 705 |
+
found_agent=true
|
| 706 |
+
fi
|
| 707 |
+
|
| 708 |
+
if [[ -f "$QODER_FILE" ]]; then
|
| 709 |
+
update_agent_file "$QODER_FILE" "Qoder CLI"
|
| 710 |
+
found_agent=true
|
| 711 |
+
fi
|
| 712 |
+
|
| 713 |
+
if [[ -f "$Q_FILE" ]]; then
|
| 714 |
+
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
|
| 715 |
+
found_agent=true
|
| 716 |
+
fi
|
| 717 |
+
|
| 718 |
+
if [[ -f "$BOB_FILE" ]]; then
|
| 719 |
+
update_agent_file "$BOB_FILE" "IBM Bob"
|
| 720 |
+
found_agent=true
|
| 721 |
+
fi
|
| 722 |
+
|
| 723 |
+
# If no agent files exist, create a default Claude file
|
| 724 |
+
if [[ "$found_agent" == false ]]; then
|
| 725 |
+
log_info "No existing agent files found, creating default Claude file..."
|
| 726 |
+
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
| 727 |
+
fi
|
| 728 |
+
}
|
| 729 |
+
print_summary() {
|
| 730 |
+
echo
|
| 731 |
+
log_info "Summary of changes:"
|
| 732 |
+
|
| 733 |
+
if [[ -n "$NEW_LANG" ]]; then
|
| 734 |
+
echo " - Added language: $NEW_LANG"
|
| 735 |
+
fi
|
| 736 |
+
|
| 737 |
+
if [[ -n "$NEW_FRAMEWORK" ]]; then
|
| 738 |
+
echo " - Added framework: $NEW_FRAMEWORK"
|
| 739 |
+
fi
|
| 740 |
+
|
| 741 |
+
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
|
| 742 |
+
echo " - Added database: $NEW_DB"
|
| 743 |
+
fi
|
| 744 |
+
|
| 745 |
+
echo
|
| 746 |
+
|
| 747 |
+
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]"
|
| 748 |
+
}
|
| 749 |
+
|
| 750 |
+
#==============================================================================
|
| 751 |
+
# Main Execution
|
| 752 |
+
#==============================================================================
|
| 753 |
+
|
| 754 |
+
main() {
|
| 755 |
+
# Validate environment before proceeding
|
| 756 |
+
validate_environment
|
| 757 |
+
|
| 758 |
+
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
|
| 759 |
+
|
| 760 |
+
# Parse the plan file to extract project information
|
| 761 |
+
if ! parse_plan_data "$NEW_PLAN"; then
|
| 762 |
+
log_error "Failed to parse plan data"
|
| 763 |
+
exit 1
|
| 764 |
+
fi
|
| 765 |
+
|
| 766 |
+
# Process based on agent type argument
|
| 767 |
+
local success=true
|
| 768 |
+
|
| 769 |
+
if [[ -z "$AGENT_TYPE" ]]; then
|
| 770 |
+
# No specific agent provided - update all existing agent files
|
| 771 |
+
log_info "No agent specified, updating all existing agent files..."
|
| 772 |
+
if ! update_all_existing_agents; then
|
| 773 |
+
success=false
|
| 774 |
+
fi
|
| 775 |
+
else
|
| 776 |
+
# Specific agent provided - update only that agent
|
| 777 |
+
log_info "Updating specific agent: $AGENT_TYPE"
|
| 778 |
+
if ! update_specific_agent "$AGENT_TYPE"; then
|
| 779 |
+
success=false
|
| 780 |
+
fi
|
| 781 |
+
fi
|
| 782 |
+
|
| 783 |
+
# Print summary
|
| 784 |
+
print_summary
|
| 785 |
+
|
| 786 |
+
if [[ "$success" == true ]]; then
|
| 787 |
+
log_success "Agent context update completed successfully"
|
| 788 |
+
exit 0
|
| 789 |
+
else
|
| 790 |
+
log_error "Agent context update completed with errors"
|
| 791 |
+
exit 1
|
| 792 |
+
fi
|
| 793 |
+
}
|
| 794 |
+
|
| 795 |
+
# Execute main function if script is run directly
|
| 796 |
+
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
| 797 |
+
main "$@"
|
| 798 |
+
fi
|
| 799 |
+
|
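The timestamp rewrite inside `update_existing_agent_file` can be exercised in isolation. This is a minimal sketch with hypothetical values; the script itself derives `current_date` from `date +%Y-%m-%d`:

```shell
# Replace the first YYYY-MM-DD occurrence on a "**Last updated**" line,
# mirroring the sed call used by update_existing_agent_file.
line='**Last updated**: 2024-01-15'
current_date='2025-06-01'
updated=$(echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/")
echo "$updated"   # **Last updated**: 2025-06-01
```

Note the pattern only rewrites the first date on the line, which matches the script's intent of refreshing the "Last updated" stamp.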
Chatbot/.specify/templates/adr-template.md
ADDED
@@ -0,0 +1,56 @@
# ADR-{{ID}}: {{TITLE}}

> **Scope**: Document decision clusters, not individual technology choices. Group related decisions that work together (e.g., "Frontend Stack" not separate ADRs for framework, styling, deployment).

- **Status:** Proposed | Accepted | Superseded | Rejected
- **Date:** {{DATE_ISO}}
- **Feature:** {{FEATURE_NAME}}
- **Context:** {{CONTEXT}}

<!-- Significance checklist (ALL must be true to justify this ADR)
1) Impact: Long-term consequence for architecture/platform/security?
2) Alternatives: Multiple viable options considered with tradeoffs?
3) Scope: Cross-cutting concern (not an isolated detail)?
If any are false, prefer capturing as a PHR note instead of an ADR. -->

## Decision

{{DECISION}}

<!-- For technology stacks, list all components:
- Framework: Next.js 14 (App Router)
- Styling: Tailwind CSS v3
- Deployment: Vercel
- State Management: React Context (start simple)
-->

## Consequences

### Positive

{{POSITIVE_CONSEQUENCES}}

<!-- Example: Integrated tooling, excellent DX, fast deploys, strong TypeScript support -->

### Negative

{{NEGATIVE_CONSEQUENCES}}

<!-- Example: Vendor lock-in to Vercel, framework coupling, learning curve -->

## Alternatives Considered

{{ALTERNATIVES}}

<!-- Group alternatives by cluster:
Alternative Stack A: Remix + styled-components + Cloudflare
Alternative Stack B: Vite + vanilla CSS + AWS Amplify
Why rejected: Less integrated, more setup complexity
-->

## References

- Feature Spec: {{SPEC_LINK}}
- Implementation Plan: {{PLAN_LINK}}
- Related ADRs: {{RELATED_ADRS}}
- Evaluator Evidence: {{EVAL_NOTES_LINK}} <!-- link to eval notes/PHR showing graders and outcomes -->
Chatbot/.specify/templates/agent-file-template.md
ADDED
@@ -0,0 +1,28 @@
# [PROJECT NAME] Development Guidelines

Auto-generated from all feature plans. Last updated: [DATE]

## Active Technologies

[EXTRACTED FROM ALL PLAN.MD FILES]

## Project Structure

```text
[ACTUAL STRUCTURE FROM PLANS]
```

## Commands

[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]

## Code Style

[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]

## Recent Changes

[LAST 3 FEATURES AND WHAT THEY ADDED]

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->
Chatbot/.specify/templates/checklist-template.md
ADDED
@@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]

**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]

**Note**: This checklist is generated by the `/sp.checklist` command based on feature context and requirements.

<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.

The /sp.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md

DO NOT keep these sample items in the generated checklist file.
============================================================================
-->

## [Category 1]

- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item

## [Category 2]

- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category

## Notes

- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference
Chatbot/.specify/templates/phr-template.prompt.md
ADDED
@@ -0,0 +1,45 @@
---
id: {{ID}}
title: {{TITLE}}
stage: {{STAGE}}
date: {{DATE_ISO}}
surface: {{SURFACE}}
model: {{MODEL}}
feature: {{FEATURE}}
branch: {{BRANCH}}
user: {{USER}}
command: {{COMMAND}}
labels: [{{LABELS}}]
links:
  spec: {{LINKS_SPEC}}
  ticket: {{LINKS_TICKET}}
  adr: {{LINKS_ADR}}
  pr: {{LINKS_PR}}
files:
{{FILES_YAML}}
tests:
{{TESTS_YAML}}
---

## Prompt

{{PROMPT_TEXT}}

## Response snapshot

{{RESPONSE_TEXT}}

## Outcome

- ✅ Impact: {{OUTCOME_IMPACT}}
- 🧪 Tests: {{TESTS_SUMMARY}}
- 📁 Files: {{FILES_SUMMARY}}
- 🔁 Next prompts: {{NEXT_PROMPTS}}
- 🧠 Reflection: {{REFLECTION_NOTE}}

## Evaluation notes (flywheel)

- Failure modes observed: {{FAILURE_MODES}}
- Graders run and results (PASS/FAIL): {{GRADER_RESULTS}}
- Prompt variant (if applicable): {{PROMPT_VARIANT_ID}}
- Next experiment (smallest change to try): {{NEXT_EXPERIMENT}}
Chatbot/.specify/templates/plan-template.md
ADDED
@@ -0,0 +1,104 @@
# Implementation Plan: [FEATURE]

**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`

**Note**: This template is filled in by the `/sp.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.

## Summary

[Extract from feature spec: primary requirement + technical approach from research]

## Technical Context

<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in advisory capacity to guide
the iteration process.
-->

**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]

## Constitution Check

*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*

[Gates determined based on constitution file]

## Project Structure

### Documentation (this feature)

```text
specs/[###-feature]/
├── plan.md          # This file (/sp.plan command output)
├── research.md      # Phase 0 output (/sp.plan command)
├── data-model.md    # Phase 1 output (/sp.plan command)
├── quickstart.md    # Phase 1 output (/sp.plan command)
├── contracts/       # Phase 1 output (/sp.plan command)
└── tasks.md         # Phase 2 output (/sp.tasks command - NOT created by /sp.plan)
```

### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->

```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/

tests/
├── contract/
├── integration/
└── unit/

# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│   ├── models/
│   ├── services/
│   └── api/
└── tests/

frontend/
├── src/
│   ├── components/
│   ├── pages/
│   └── services/
└── tests/

# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]

ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```

**Structure Decision**: [Document the selected structure and reference the real
directories captured above]

## Complexity Tracking

> **Fill ONLY if Constitution Check has violations that must be justified**

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
Chatbot/.specify/templates/spec-template.md
ADDED
|
@@ -0,0 +1,115 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Feature Specification: [FEATURE NAME]
|
| 2 |
+
|
| 3 |
+
**Feature Branch**: `[###-feature-name]`
|
| 4 |
+
**Created**: [DATE]
|
| 5 |
+
**Status**: Draft
|
| 6 |
+
**Input**: User description: "$ARGUMENTS"
|
| 7 |
+
|
| 8 |
+
## User Scenarios & Testing *(mandatory)*
|
| 9 |
+
|
| 10 |
+
<!--
|
| 11 |
+
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
|
| 12 |
+
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
|
| 13 |
+
you should still have a viable MVP (Minimum Viable Product) that delivers value.
|
| 14 |
+
|
| 15 |
+
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
|
| 16 |
+
Think of each story as a standalone slice of functionality that can be:
|
| 17 |
+
- Developed independently
|
| 18 |
+
- Tested independently
|
| 19 |
+
- Deployed independently
|
| 20 |
+
- Demonstrated to users independently
|
| 21 |
+
-->
|
| 22 |
+
|
| 23 |
+
### User Story 1 - [Brief Title] (Priority: P1)
|
| 24 |
+
|
| 25 |
+
[Describe this user journey in plain language]
|
| 26 |
+
|
| 27 |
+
**Why this priority**: [Explain the value and why it has this priority level]
|
| 28 |
+
|
| 29 |
+
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
|
| 30 |
+
|
| 31 |
+
**Acceptance Scenarios**:
|
| 32 |
+
|
| 33 |
+
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
|
| 34 |
+
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
|
| 35 |
+
|
| 36 |
+
---
|
| 37 |
+
|
| 38 |
+
### User Story 2 - [Brief Title] (Priority: P2)
|
| 39 |
+
|
| 40 |
+
[Describe this user journey in plain language]
|
| 41 |
+
|
| 42 |
+
**Why this priority**: [Explain the value and why it has this priority level]
|
| 43 |
+
|
| 44 |
+
**Independent Test**: [Describe how this can be tested independently]
|
| 45 |
+
|
| 46 |
+
**Acceptance Scenarios**:
|
| 47 |
+
|
| 48 |
+
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
|
| 49 |
+
|
| 50 |
+
---
|
| 51 |
+
|
| 52 |
+
### User Story 3 - [Brief Title] (Priority: P3)
|
| 53 |
+
|
| 54 |
+
[Describe this user journey in plain language]
|
| 55 |
+
|
| 56 |
+
**Why this priority**: [Explain the value and why it has this priority level]
|
| 57 |
+
|
| 58 |
+
**Independent Test**: [Describe how this can be tested independently]
|

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

[Add more user stories as needed, each with an assigned priority]

### Edge Cases

<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->

- What happens when [boundary condition]?
- How does the system handle [error scenario]?

## Requirements *(mandatory)*

<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->

### Functional Requirements

- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]

*Example of marking unclear requirements:*

- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]

### Key Entities *(include if feature involves data)*

- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]

## Success Criteria *(mandatory)*

<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->

### Measurable Outcomes

- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
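The **Given/When/Then** acceptance format above maps naturally onto an automated test. A minimal, hypothetical Python illustration (the sign-up behavior and names are invented purely for the example):

```python
def test_account_creation_scenario():
    """Given a new visitor, When they sign up with a valid email,
    Then an account exists for that email.
    (Hypothetical illustration of the Given/When/Then format.)"""
    users = {}  # Given: no accounts exist yet

    def sign_up(email):  # When: the visitor signs up
        users[email] = {"email": email}

    sign_up("ada@example.com")
    assert "ada@example.com" in users  # Then: the account exists
```

Each acceptance scenario in a filled-out spec should be this concrete: a reader can tell exactly what state, action, and outcome are being claimed.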
Chatbot/.specify/templates/tasks-template.md
ADDED
@@ -0,0 +1,251 @@
---
description: "Task list template for feature implementation"
---

# Tasks: [FEATURE NAME]

**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/

**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.

**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.

## Format: `[ID] [P?] [Story] Description`

- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions

## Path Conventions

- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume a single project - adjust based on plan.md structure

<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.

The /sp.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/

Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment

DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->

## Phase 1: Setup (Shared Infrastructure)

**Purpose**: Project initialization and basic structure

- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools

---

## Phase 2: Foundational (Blocking Prerequisites)

**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented

**⚠️ CRITICAL**: No user story work can begin until this phase is complete

Examples of foundational tasks (adjust based on your project):

- [ ] T004 Set up database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Set up API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Set up environment configuration management

**Checkpoint**: Foundation ready - user story implementation can now begin in parallel

---

## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️

> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**

- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 1

- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations

**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently

---

## Phase 4: User Story 2 - [Title] (Priority: P2)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️

- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 2

- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)

**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently

---

## Phase 5: User Story 3 - [Title] (Priority: P3)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️

- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 3

- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py

**Checkpoint**: All user stories should now be independently functional

---

[Add more user story phases as needed, following the same pattern]

---

## Phase N: Polish & Cross-Cutting Concerns

**Purpose**: Improvements that affect multiple user stories

- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation

---

## Dependencies & Execution Order

### Phase Dependencies

- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
  - User stories can then proceed in parallel (if staffed)
  - Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete

### User Story Dependencies

- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable

### Within Each User Story

- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority

### Parallel Opportunities

- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once the Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members

---

## Parallel Example: User Story 1

```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"

# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```

---

## Implementation Strategy

### MVP First (User Story 1 Only)

1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready

### Incremental Delivery

1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories

### Parallel Team Strategy

With multiple developers:

1. Team completes Setup + Foundational together
2. Once Foundational is done:
   - Developer A: User Story 1
   - Developer B: User Story 2
   - Developer C: User Story 3
3. Stories complete and integrate independently

---

## Notes

- [P] tasks = different files, no dependencies
- [Story] label maps each task to a specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate a story independently
- Avoid: vague tasks, same-file conflicts, cross-story dependencies that break independence
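The `[ID] [P?] [Story]` task format above is regular enough to scan mechanically, for example to list which tasks are safe to launch in parallel. A minimal Python sketch (the regex and helper are illustrative, not part of the SpecKit tooling):

```python
import re

# Matches lines like: "- [ ] T012 [P] [US1] Create Entity1 model in src/models/entity1.py"
TASK_RE = re.compile(r"^- \[[ x]\] (T\d+)( \[P\])?( \[US\d+\])? (.+)$")

def parallelizable(lines):
    """Return the IDs of tasks marked [P], i.e. tasks that touch
    different files and have no dependencies."""
    ids = []
    for line in lines:
        m = TASK_RE.match(line.strip())
        if m and m.group(2):  # group 2 is the optional " [P]" flag
            ids.append(m.group(1))
    return ids

sample = [
    "- [ ] T012 [P] [US1] Create Entity1 model in src/models/entity1.py",
    "- [ ] T014 [US1] Implement Service in src/services/service.py",
]
print(parallelizable(sample))  # → ['T012']
```

A script like this could also verify the dependency rules (e.g. flag two [P] tasks that name the same file), which is exactly the "same-file conflicts" pitfall the Notes warn about.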
Chatbot/CLAUDE.md
ADDED
@@ -0,0 +1,217 @@
# Claude Code Rules

This file is generated during init for the selected agent.

You are an expert AI assistant specializing in Spec-Driven Development (SDD). Your primary goal is to work with the architect to build products.

## Task context

**Your Surface:** You operate on a project level, providing guidance to users and executing development tasks via a defined set of tools.

**Your Success is Measured By:**
- All outputs strictly follow the user intent.
- Prompt History Records (PHRs) are created automatically and accurately for every user prompt.
- Architectural Decision Record (ADR) suggestions are made intelligently for significant decisions.
- All changes are small, testable, and reference code precisely.

## Core Guarantees (Product Promise)

- Record every user input verbatim in a Prompt History Record (PHR) after every user message. Do not truncate; preserve full multiline input.
- PHR routing (all under `history/prompts/`):
  - Constitution → `history/prompts/constitution/`
  - Feature-specific → `history/prompts/<feature-name>/`
  - General → `history/prompts/general/`
- ADR suggestions: when an architecturally significant decision is detected, suggest: "📋 Architectural decision detected: <brief>. Document? Run `/sp.adr <title>`." Never auto-create ADRs; require user consent.

## Development Guidelines

### 1. Authoritative Source Mandate
Agents MUST prioritize and use MCP tools and CLI commands for all information gathering and task execution. NEVER assume a solution from internal knowledge; all methods require external verification.

### 2. Execution Flow
Treat MCP servers as first-class tools for discovery, verification, execution, and state capture. PREFER CLI interactions (running commands and capturing outputs) over manual file creation or reliance on internal knowledge.

### 3. Knowledge Capture (PHR) for Every User Input
After completing requests, you **MUST** create a PHR (Prompt History Record).

**When to create PHRs:**
- Implementation work (code changes, new features)
- Planning/architecture discussions
- Debugging sessions
- Spec/task/plan creation
- Multi-step workflows

**PHR Creation Process:**

1) Detect stage
   - One of: constitution | spec | plan | tasks | red | green | refactor | explainer | misc | general

2) Generate title
   - 3–7 words; create a slug for the filename.

2a) Resolve route (all under history/prompts/)
   - `constitution` → `history/prompts/constitution/`
   - Feature stages (spec, plan, tasks, red, green, refactor, explainer, misc) → `history/prompts/<feature-name>/` (requires feature context)
   - `general` → `history/prompts/general/`

3) Prefer agent-native flow (no shell)
   - Read the PHR template from one of:
     - `.specify/templates/phr-template.prompt.md`
     - `templates/phr-template.prompt.md`
   - Allocate an ID (increment; on collision, increment again).
   - Compute the output path based on stage:
     - Constitution → `history/prompts/constitution/<ID>-<slug>.constitution.prompt.md`
     - Feature → `history/prompts/<feature-name>/<ID>-<slug>.<stage>.prompt.md`
     - General → `history/prompts/general/<ID>-<slug>.general.prompt.md`
   - Fill ALL placeholders in YAML and body:
     - ID, TITLE, STAGE, DATE_ISO (YYYY-MM-DD), SURFACE="agent"
     - MODEL (best known), FEATURE (or "none"), BRANCH, USER
     - COMMAND (current command), LABELS (["topic1","topic2",...])
     - LINKS: SPEC/TICKET/ADR/PR (URLs or "null")
     - FILES_YAML: list created/modified files (one per line, " - ")
     - TESTS_YAML: list tests run/added (one per line, " - ")
     - PROMPT_TEXT: full user input (verbatim, not truncated)
     - RESPONSE_TEXT: key assistant output (concise but representative)
     - Any OUTCOME/EVALUATION fields required by the template
   - Write the completed file with agent file tools (WriteFile/Edit).
   - Confirm the absolute path in output.

4) Use the sp.phr command file if present
   - If `.**/commands/sp.phr.*` exists, follow its structure.
   - If it references shell but Shell is unavailable, still perform step 3 with agent-native tools.

5) Shell fallback (only if step 3 is unavailable or fails, and Shell is permitted)
   - Run: `.specify/scripts/bash/create-phr.sh --title "<title>" --stage <stage> [--feature <name>] --json`
   - Then open/patch the created file to ensure all placeholders are filled and the prompt/response are embedded.

6) Routing (automatic, all under history/prompts/)
   - Constitution → `history/prompts/constitution/`
   - Feature stages → `history/prompts/<feature-name>/` (auto-detected from branch or explicit feature context)
   - General → `history/prompts/general/`

7) Post-creation validations (must pass)
   - No unresolved placeholders (e.g., `{{THIS}}`, `[THAT]`).
   - Title, stage, and dates match front-matter.
   - PROMPT_TEXT is complete (not truncated).
   - File exists at the expected path and is readable.
   - Path matches route.

8) Report
   - Print: ID, path, stage, title.
   - On any failure: warn but do not block the main command.
   - Skip PHR only for `/sp.phr` itself.
+
|
| 104 |
+
### 4. Explicit ADR suggestions
|
| 105 |
+
- When significant architectural decisions are made (typically during `/sp.plan` and sometimes `/sp.tasks`), run the three‑part test and suggest documenting with:
|
| 106 |
+
"📋 Architectural decision detected: <brief> — Document reasoning and tradeoffs? Run `/sp.adr <decision-title>`"
|
| 107 |
+
- Wait for user consent; never auto‑create the ADR.
|
| 108 |
+
|
| 109 |
+
### 5. Human as Tool Strategy
|
| 110 |
+
You are not expected to solve every problem autonomously. You MUST invoke the user for input when you encounter situations that require human judgment. Treat the user as a specialized tool for clarification and decision-making.
|
| 111 |
+
|
| 112 |
+
**Invocation Triggers:**
|
| 113 |
+
1. **Ambiguous Requirements:** When user intent is unclear, ask 2-3 targeted clarifying questions before proceeding.
|
| 114 |
+
2. **Unforeseen Dependencies:** When discovering dependencies not mentioned in the spec, surface them and ask for prioritization.
|
| 115 |
+
3. **Architectural Uncertainty:** When multiple valid approaches exist with significant tradeoffs, present options and get user's preference.
|
| 116 |
+
4. **Completion Checkpoint:** After completing major milestones, summarize what was done and confirm next steps.
|
| 117 |
+
|
| 118 |
+
## Default policies (must follow)
|
| 119 |
+
- Clarify and plan first - keep business understanding separate from technical plan and carefully architect and implement.
|
| 120 |
+
- Do not invent APIs, data, or contracts; ask targeted clarifiers if missing.
|
| 121 |
+
- Never hardcode secrets or tokens; use `.env` and docs.
|
| 122 |
+
- Prefer the smallest viable diff; do not refactor unrelated code.
|
| 123 |
+
- Cite existing code with code references (start:end:path); propose new code in fenced blocks.
|
| 124 |
+
- Keep reasoning private; output only decisions, artifacts, and justifications.
|
| 125 |
+
|
| 126 |
+
### Execution contract for every request
|
| 127 |
+
1) Confirm surface and success criteria (one sentence).
|
| 128 |
+
2) List constraints, invariants, non‑goals.
|
| 129 |
+
3) Produce the artifact with acceptance checks inlined (checkboxes or tests where applicable).
|
| 130 |
+
4) Add follow‑ups and risks (max 3 bullets).
|
| 131 |
+
5) Create PHR in appropriate subdirectory under `history/prompts/` (constitution, feature-name, or general).
|
| 132 |
+
6) If plan/tasks identified decisions that meet significance, surface ADR suggestion text as described above.
|
| 133 |
+
|
| 134 |
+
### Minimum acceptance criteria
|
| 135 |
+
- Clear, testable acceptance criteria included
|
| 136 |
+
- Explicit error paths and constraints stated
|
| 137 |
+
- Smallest viable change; no unrelated edits
|
| 138 |
+
- Code references to modified/inspected files where relevant
|
| 139 |
+
|
| 140 |
+
## Architect Guidelines (for planning)
|
| 141 |
+
|
| 142 |
+
Instructions: As an expert architect, generate a detailed architectural plan for [Project Name]. Address each of the following thoroughly.
|
| 143 |
+
|
| 144 |
+
1. Scope and Dependencies:
|
| 145 |
+
- In Scope: boundaries and key features.
|
| 146 |
+
- Out of Scope: explicitly excluded items.
|
| 147 |
+
- External Dependencies: systems/services/teams and ownership.
|
| 148 |
+
|
| 149 |
+
2. Key Decisions and Rationale:
|
| 150 |
+
- Options Considered, Trade-offs, Rationale.
|
| 151 |
+
- Principles: measurable, reversible where possible, smallest viable change.
|
| 152 |
+
|
| 153 |
+
3. Interfaces and API Contracts:
|
| 154 |
+
- Public APIs: Inputs, Outputs, Errors.
|
| 155 |
+
- Versioning Strategy.
|
| 156 |
+
- Idempotency, Timeouts, Retries.
|
| 157 |
+
- Error Taxonomy with status codes.
|
| 158 |
+
|
| 159 |
+
4. Non-Functional Requirements (NFRs) and Budgets:
|
| 160 |
+
- Performance: p95 latency, throughput, resource caps.
|
| 161 |
+
- Reliability: SLOs, error budgets, degradation strategy.
|
| 162 |
+
- Security: AuthN/AuthZ, data handling, secrets, auditing.
|
| 163 |
+
- Cost: unit economics.
|
| 164 |
+
|
| 165 |
+
5. Data Management and Migration:
|
| 166 |
+
- Source of Truth, Schema Evolution, Migration and Rollback, Data Retention.
|
| 167 |
+
|
| 168 |
+
6. Operational Readiness:
|
| 169 |
+
- Observability: logs, metrics, traces.
|
| 170 |
+
- Alerting: thresholds and on-call owners.
|
| 171 |
+
- Runbooks for common tasks.
|
| 172 |
+
- Deployment and Rollback strategies.
|
| 173 |
+
- Feature Flags and compatibility.
|
| 174 |
+
|
| 175 |
+
7. Risk Analysis and Mitigation:
|
| 176 |
+
- Top 3 Risks, blast radius, kill switches/guardrails.
|
| 177 |
+
|
| 178 |
+
8. Evaluation and Validation:
|
| 179 |
+
- Definition of Done (tests, scans).
|
| 180 |
+
- Output Validation for format/requirements/safety.
|
| 181 |
+
|
| 182 |
+
9. Architectural Decision Record (ADR):
|
| 183 |
+
- For each significant decision, create an ADR and link it.
|
| 184 |
+
|
| 185 |
+
### Architecture Decision Records (ADR) - Intelligent Suggestion
|
| 186 |
+
|
| 187 |
+
After design/architecture work, test for ADR significance:
|
| 188 |
+
|
| 189 |
+
- Impact: long-term consequences? (e.g., framework, data model, API, security, platform)
|
| 190 |
+
- Alternatives: multiple viable options considered?
|
| 191 |
+
- Scope: cross‑cutting and influences system design?
|
| 192 |
+
|
| 193 |
+
If ALL true, suggest:
|
| 194 |
+
📋 Architectural decision detected: [brief-description]
|
| 195 |
+
Document reasoning and tradeoffs? Run `/sp.adr [decision-title]`
|
| 196 |
+
|
| 197 |
+
Wait for consent; never auto-create ADRs. Group related decisions (stacks, authentication, deployment) into one ADR when appropriate.
|
| 198 |
+
|
| 199 |
+
## Basic Project Structure
|
| 200 |
+
|
| 201 |
+
- `.specify/memory/constitution.md` — Project principles
|
| 202 |
+
- `specs/<feature>/spec.md` — Feature requirements
|
| 203 |
+
- `specs/<feature>/plan.md` — Architecture decisions
|
| 204 |
+
- `specs/<feature>/tasks.md` — Testable tasks with cases
|
| 205 |
+
- `history/prompts/` — Prompt History Records
|
| 206 |
+
- `history/adr/` — Architecture Decision Records
|
| 207 |
+
- `.specify/` — SpecKit Plus templates and scripts
|
| 208 |
+
|
| 209 |
+
## Code Standards
|
| 210 |
+
See `.specify/memory/constitution.md` for code quality, testing, performance, security, and architecture principles.
|
| 211 |
+
|
| 212 |
+
## Active Technologies
|
| 213 |
+
- Python 3.11 + FastAPI 0.104+, SQLModel 0.0.14+, Pydantic 2.0+, mcp python-sdk, OpenAI Agents SDK, asyncpg 0.29+ (004-chatbot-db-mcp)
|
| 214 |
+
- Neon Serverless PostgreSQL (existing from Phase 2) (004-chatbot-db-mcp)
|
| 215 |
+
|
| 216 |
+
## Recent Changes
|
| 217 |
+
- 004-chatbot-db-mcp: Added Python 3.11 + FastAPI 0.104+, SQLModel 0.0.14+, Pydantic 2.0+, mcp python-sdk, OpenAI Agents SDK, asyncpg 0.29+
|
Chatbot/README.md
ADDED
@@ -0,0 +1,187 @@
| 1 |
+
# Conversational AI Chatbot Foundation
|
| 2 |
+
|
| 3 |
+
This project implements a conversational AI chatbot foundation with persistent conversation history and integrated task management tools.
|
| 4 |
+
|
| 5 |
+
## Features
|
| 6 |
+
|
| 7 |
+
- **Persistent Chat History**: All conversations and messages are stored in the database
|
| 8 |
+
- **Task Management**: 5 standardized MCP tools for managing tasks (add, list, complete, update, delete)
- **User Isolation**: Complete data isolation between users
- **Natural Language Processing**: Designed for integration with AI assistants
- **Security**: JWT-based authentication and authorization

## Architecture

- **Backend**: Python 3.11 with FastAPI
- **Database**: PostgreSQL (Neon Serverless)
- **ORM**: SQLModel
- **Protocol**: Model Context Protocol (MCP) for tool integration
- **Testing**: pytest for unit, integration, and security tests

## Technology Stack

- Python 3.11
- FastAPI 0.104+
- SQLModel 0.0.14+
- Pydantic 2.0+
- mcp python-sdk
- OpenAI Agents SDK
- asyncpg 0.29+
- PostgreSQL (Neon Serverless)

## Installation

1. Clone the repository
2. Create a virtual environment:
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Set up environment variables:
   ```bash
   cp .env.example .env
   # Edit .env with your database credentials
   ```

## Database Setup

1. Create a PostgreSQL database (Neon recommended)
2. Update the `DATABASE_URL` in your `.env` file:
   ```env
   DATABASE_URL="postgresql://your_username:your_password@ep-xxx.us-east-1.aws.neon.tech/chatbot_db?sslmode=require"
   ```
3. Run the database migrations to create the required tables:
   ```bash
   python backend/db.py  # Or use your migration system
   ```

## Running the Application

1. Start the MCP server:
   ```bash
   python -m backend.mcp_server.server
   ```
## MCP Tools

The application exposes 5 standardized task management tools via the Model Context Protocol:

### 1. add_task
Create a new task for a user
- Parameters: `user_id`, `title` (1-200 chars), `description` (optional, 0-1000 chars)

### 2. list_tasks
List tasks for a user with optional status filter
- Parameters: `user_id`, `status` (all, pending, completed)

### 3. complete_task
Mark a task as completed
- Parameters: `user_id`, `task_id`

### 4. delete_task
Delete a task
- Parameters: `user_id`, `task_id`

### 5. update_task
Update task title and/or description
- Parameters: `user_id`, `task_id`, `title` (optional), `description` (optional)

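For orientation, the parameter lists above can be pictured as tool schemas. The following is a hypothetical sketch only — the authoritative definitions live in `backend/mcp_server/`, and the JSON Schema shape shown here simply restates the documented `add_task` parameters:

```python
# Hypothetical declaration mirroring the documented add_task parameters.
# The real schema lives in backend/mcp_server and may differ in detail.
ADD_TASK_TOOL = {
    "name": "add_task",
    "description": "Create a new task for a user",
    "inputSchema": {
        "type": "object",
        "properties": {
            "user_id": {"type": "string", "description": "User identifier"},
            "title": {"type": "string", "minLength": 1, "maxLength": 200},
            "description": {"type": "string", "maxLength": 1000},
        },
        "required": ["user_id", "title"],
    },
}

# The documented constraints are visible in the schema itself.
assert "title" in ADD_TASK_TOOL["inputSchema"]["required"]
```

Keeping every documented limit (1-200 chars for titles, 0-1000 for descriptions) inside the schema lets the MCP client reject bad input before it ever reaches the database.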
## Testing

Run the full test suite:
```bash
python -m pytest tests/ -v
```

Run specific test categories:
```bash
# Unit tests
python -m pytest tests/unit/

# Integration tests
python -m pytest tests/integration/

# Security tests
python -m pytest tests/security/
```

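A unit test under `tests/unit/` might look like the sketch below. Note that `validate_title` is a hypothetical stand-in for whatever validation helper the real tools use, not an actual function in this repository:

```python
# Hypothetical example of a unit test for the documented title rule
# (1-200 characters); validate_title is invented for illustration.
def validate_title(title: str) -> bool:
    """Return True when the title satisfies the 1-200 char rule."""
    return isinstance(title, str) and 1 <= len(title) <= 200


def test_title_bounds():
    assert validate_title("Buy milk")
    assert not validate_title("")          # too short
    assert not validate_title("x" * 201)   # too long


test_title_bounds()
```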
## Project Structure

```
backend/
├── models/              # Database models (Conversation, Message, Task)
├── mcp_server/          # MCP server implementation
│   ├── server.py        # Main MCP server
│   ├── schemas.py       # Pydantic schemas for tools
│   └── tools/           # Individual MCP tools
│       ├── add_task.py
│       ├── list_tasks.py
│       ├── complete_task.py
│       ├── delete_task.py
│       └── update_task.py
├── db.py                # Database connection
└── migrations/          # Database migrations

tests/
├── unit/                # Unit tests for individual components
├── integration/         # Integration tests
└── security/            # Security tests
```

## Database Schema

### conversations table
- `id`: Primary key
- `user_id`: Foreign key to users table
- `created_at`: Timestamp
- `updated_at`: Timestamp

### messages table
- `id`: Primary key
- `conversation_id`: Foreign key to conversations table
- `user_id`: Foreign key to users table
- `role`: 'user' or 'assistant'
- `content`: Message text content
- `tool_calls`: JSONB for tool invocation metadata
- `created_at`: Timestamp

### tasks table
- `id`: Primary key
- `user_id`: Foreign key to users table
- `title`: Task title
- `description`: Optional task description
- `completed`: Boolean completion status
- `created_at`: Timestamp
- `updated_at`: Timestamp

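To make the `tasks` table concrete, here is a small sketch using Python's built-in `sqlite3` module. Production uses PostgreSQL via SQLModel, so this is illustrative only; the column names follow the list above, and the query shows the per-user filtering the schema is designed for:

```python
import sqlite3

# In-memory stand-in for the real PostgreSQL tasks table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tasks (
        id INTEGER PRIMARY KEY,
        user_id TEXT NOT NULL,
        title TEXT NOT NULL,
        description TEXT,
        completed INTEGER NOT NULL DEFAULT 0,
        created_at TEXT,
        updated_at TEXT
    )
""")
conn.execute("INSERT INTO tasks (user_id, title) VALUES (?, ?)", ("1", "Buy milk"))
conn.execute("INSERT INTO tasks (user_id, title) VALUES (?, ?)", ("2", "Write report"))

# Every read filters by user_id, so one user never sees another's rows.
rows = conn.execute(
    "SELECT title FROM tasks WHERE user_id = ?", ("1",)
).fetchall()
```

Running this, `rows` contains only user 1's task, even though two users share the table.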
## Security Features

- User data isolation: All queries filter by `user_id`
- Input validation: All parameters validated before database operations
- Error handling: Structured error responses without sensitive information
- Authentication: Designed for JWT-based authentication

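A minimal sketch of the first three rules together — ownership filtering, validation, and a structured error that leaks nothing — assuming the `{"success": ..., "data"/"error": ...}` envelope used elsewhere in this codebase:

```python
# Illustrative only: an in-memory list stands in for the database.
TASKS = [
    {"id": 1, "user_id": "alice", "title": "Ship release"},
    {"id": 2, "user_id": "bob", "title": "Review PR"},
]


def get_task(user_id: str, task_id: int) -> dict:
    for task in TASKS:
        # The ownership check and the id check happen together, so user A
        # cannot even confirm that user B's task exists.
        if task["id"] == task_id and task["user_id"] == user_id:
            return {"success": True, "data": task}
    # Structured error, no internal details leaked.
    return {"success": False, "error": {"code": "NOT_FOUND", "message": "Task not found"}}


assert get_task("alice", 1)["success"] is True
assert get_task("alice", 2)["error"]["code"] == "NOT_FOUND"  # bob's task is invisible
```

Returning `NOT_FOUND` (rather than "forbidden") for another user's task id avoids confirming that the task exists at all.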
## Development

To add new MCP tools:
1. Create a new tool in `backend/mcp_server/tools/`
2. Follow the same pattern as existing tools
3. Add corresponding unit and integration tests
4. Register the tool in the MCP server

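As an illustration of that pattern, a hypothetical new tool file might look like the following. The `archive_task` name is invented for this example; copy the real structure from an existing tool such as `add_task.py`:

```python
# Hypothetical sketch for backend/mcp_server/tools/archive_task.py.
def archive_task(user_id: str, task_id: int, **kwargs) -> dict:
    """Archive a task, returning the standard success/error envelope."""
    # Validate input before touching the database.
    if task_id < 1:
        return {
            "success": False,
            "error": {"code": "INVALID_INPUT", "message": "task_id must be >= 1"},
        }
    # ... database work, always filtered by user_id, would go here ...
    return {"success": True, "data": {"task_id": task_id, "archived": True}}


result = archive_task(user_id="1", task_id=7)
```

The `**kwargs` catch-all mirrors how the HTTP server passes extra context (e.g. `auth_token`) to every handler.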
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Ensure all tests pass
6. Submit a pull request

## License

[Specify your license here]
Chatbot/RUN_ME_FIRST.txt
ADDED
@@ -0,0 +1,63 @@
# How to Run Your Chatbot Application

## Option 1: Run the HTTP API Server (Recommended for testing)

1. Open Command Prompt as Administrator
2. Navigate to the Chatbot directory:
```
cd E:\hackatone_phase_03\Chatbot
```
3. Install required packages:
```
pip install fastapi uvicorn
```
4. Run the HTTP server:
```
uvicorn backend.http_server:app --host 0.0.0.0 --port 8000 --reload
```
5. Open your browser and go to:
   http://localhost:8000/docs

This will show you the API documentation, where you can test all tools.

## Option 2: Database Setup (Required for full functionality)

1. Set up your .env file with database credentials:
```
DATABASE_URL="postgresql://your_username:your_password@ep-xxx.us-east-1.aws.neon.tech/chatbot_db?sslmode=require"
```

2. Or, for local testing with SQLite, you can temporarily modify db.py to use:
```
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./test.db")
```

## Option 3: Quick Test (No Database Required)

All MCP tools are implemented; you can exercise them with the included test script:

1. Run the test:
```
python test_simple.py
```

## Test Commands for Chatbot

Once the HTTP server is running, you can test:

- Add a task: POST to `/tasks/add` with user_id, title
- List tasks: GET `/tasks/list?user_id=test&status=all`
- Complete task: POST to `/tasks/complete` with user_id, task_id
- Update task: POST to `/tasks/update` with user_id, task_id, title
- Delete task: POST to `/tasks/delete` with user_id, task_id

## Your MCP Tools Are Ready!

All 5 MCP tools are implemented and working:
1. ✓ add_task
2. ✓ list_tasks
3. ✓ complete_task
4. ✓ delete_task
5. ✓ update_task

The only thing missing is a real database connection. With a proper database, your chatbot will be fully functional!
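The POST bodies listed above can be sketched as plain Python dicts. Field names are taken from the list; the `list` endpoint is a GET and carries its parameters in the query string, so it has no body. Nothing here performs a real HTTP call:

```python
import json

# Example request bodies for the POST endpoints listed above.
payloads = {
    "/tasks/add": {"user_id": "test", "title": "Buy milk"},
    "/tasks/complete": {"user_id": "test", "task_id": 1},
    "/tasks/update": {"user_id": "test", "task_id": 1, "title": "Buy oat milk"},
    "/tasks/delete": {"user_id": "test", "task_id": 1},
}

# Each body serializes to the JSON you would paste into /docs.
for path, body in payloads.items():
    print(path, json.dumps(body))
```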
Chatbot/__init__.py
ADDED
File without changes

Chatbot/backend/__init__.py
ADDED
@@ -0,0 +1 @@
# Backend package
Chatbot/backend/cleanup_db.py
ADDED
@@ -0,0 +1,23 @@
from backend.db import get_engine
from sqlalchemy import text
from sqlmodel import SQLModel

engine = get_engine()

print("Dropping conversation and message tables to fix FK constraints...")
with engine.connect() as conn:
    # Drop in correct order due to FK
    conn.execute(text("DROP TABLE IF EXISTS messages CASCADE"))
    conn.execute(text("DROP TABLE IF EXISTS conversations CASCADE"))
    conn.commit()
print("Tables dropped successfully.")

print("Recreating tables from models...")
# This will recreate conversations and messages with the correct FK to 'user'
from backend.models.user import User
from backend.models.task import Task
from backend.models.conversation import Conversation
from backend.models.message import Message

SQLModel.metadata.create_all(engine)
print("Tables recreated successfully with correct constraints.")
Chatbot/backend/db.py
ADDED
@@ -0,0 +1,61 @@
"""
Database connection and engine configuration.

This module provides the database engine for SQLModel.
"""

from sqlmodel import create_engine, SQLModel
from typing import Optional
import os
from pathlib import Path
from dotenv import load_dotenv
from datetime import datetime

# Database URL from environment variable
# Professional Fallback to ensure same NEON DB as Backend/Frontend
DATABASE_URL = os.getenv("DATABASE_URL") or "postgresql://neondb_owner:npg_O1mLbVXkfEY5@ep-broad-fog-a4ba5mi3-pooler.us-east-1.aws.neon.tech/neondb?sslmode=require"

# Create database engine with pooling options for stability
engine = create_engine(
    DATABASE_URL,
    echo=False,
    pool_pre_ping=True,
    pool_recycle=300
)


def get_engine():
    """Get the database engine."""
    return engine


def init_db():
    """Initialize database tables and seed default data."""
    from backend.models.user import User
    from backend.models.task import Task
    from backend.models.conversation import Conversation
    from backend.models.message import Message
    from sqlmodel import Session, select

    # Create tables
    SQLModel.metadata.create_all(engine)

    # Seed default user if none exists (for development/demo)
    with Session(engine) as session:
        # Check for user '1' as a string
        statement = select(User).where(User.id == "1")
        results = session.exec(statement)
        user = results.first()

        if not user:
            print("Seeding default demo user...")
            demo_user = User(
                id="1",
                email="demo@example.com",
                emailVerified=True,
                createdAt=datetime.utcnow(),
                updatedAt=datetime.utcnow()
            )
            session.add(demo_user)
            session.commit()
            print("Default user created (ID: '1')")
Chatbot/backend/http_server.py
ADDED
@@ -0,0 +1,319 @@
"""
ELITE NEURAL COMMANDER - VERSION 3.8.0 (GROQ LIGHTNING)
Built by Fiza Nazz for TODOAI Engine.
Powered by Groq AI - Ultra-fast, Unlimited Free Tier
"""

import sys
from pathlib import Path
import os
import json
import asyncio
import logging
from datetime import datetime, timedelta
from typing import Optional, List, Dict, Any
from dotenv import load_dotenv

# --- ADVANCED ENVIRONMENT SYNC ---
current_dir = Path(__file__).resolve().parent
backend_env = current_dir.parent.parent / "backend" / ".env"
load_dotenv(backend_env)

# --- SYSTEM PATH CONFIG ---
root_path = Path(__file__).resolve().parent.parent
if str(root_path) not in sys.path:
    sys.path.append(str(root_path))

from fastapi import FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from contextlib import asynccontextmanager
from sqlmodel import Session, select, delete

# Internal Imports
try:
    from backend.db import init_db, get_engine
    from backend.models import Conversation, Message, Task
    from backend.mcp_server.tools.add_task import add_task
    from backend.mcp_server.tools.list_tasks import list_tasks
    from backend.mcp_server.tools.complete_task import complete_task
    from backend.mcp_server.tools.delete_task import delete_task
    from backend.mcp_server.tools.update_task import update_task
    from backend.mcp_server.tools.delete_all_tasks import delete_all_tasks
except ImportError:
    # Local fallback for direct execution
    from db import init_db, get_engine
    from models import Conversation, Message, Task
    from mcp_server.tools.add_task import add_task
    from mcp_server.tools.list_tasks import list_tasks
    from mcp_server.tools.complete_task import complete_task
    from mcp_server.tools.delete_task import delete_task
    from mcp_server.tools.update_task import update_task
    from mcp_server.tools.delete_all_tasks import delete_all_tasks

# --- ELITE AI ENGINE (GROQ LIGHTNING - UNLIMITED FREE) ---
# Groq provides 30 requests/minute with super-fast inference
AI_MODELS = [
    "llama-3.3-70b-versatile",  # Primary: Groq's latest and most stable model
    "llama-3.1-8b-instant",     # Backup
    "gemma2-9b-it"              # Alternative
]

client = None
api_key = os.getenv("GROQ_API_KEY")  # Changed from OPENAI_API_KEY

try:
    from openai import AsyncOpenAI
    if api_key:
        client = AsyncOpenAI(
            base_url="https://api.groq.com/openai/v1",  # Groq endpoint
            api_key=api_key,
        )
except Exception as e:
    print(f"AI Client Error: {e}")

@asynccontextmanager
async def lifespan(app: FastAPI):
    init_db()
    yield

app = FastAPI(title="Elite Neural Commander", version="3.0.0", lifespan=lifespan)
app.add_middleware(CORSMiddleware, allow_origins=["*"], allow_credentials=True, allow_methods=["*"], allow_headers=["*"])

class ChatMessageRequest(BaseModel):
    message: str
    user_id: Optional[str] = "1"
    language: Optional[str] = "en"

# --- AI TOOLS ---
TOOLS = [
    {"type": "function", "function": {"name": "add_task", "description": "Create a new task on the dashboard.", "parameters": {"type": "object", "properties": {"title": {"type": "string", "description": "The exact title of the task."}}, "required": ["title"]}}},
    {"type": "function", "function": {"name": "list_tasks", "description": "Retrieve all tasks from the dashboard.", "parameters": {"type": "object", "properties": {"status": {"type": "string", "enum": ["all", "pending", "completed"], "default": "all"}}}}},
    {"type": "function", "function": {"name": "complete_task", "description": "Mark a specific task as done using its numeric ID.", "parameters": {"type": "object", "properties": {"task_id": {"type": "integer", "description": "The numeric ID of the task."}}, "required": ["task_id"]}}},
    {"type": "function", "function": {"name": "delete_task", "description": "Permanently remove a task using its numeric ID.", "parameters": {"type": "object", "properties": {"task_id": {"type": "integer", "description": "The numeric ID of the task."}}, "required": ["task_id"]}}},
    {"type": "function", "function": {"name": "update_task", "description": "Change the title of an existing task.", "parameters": {"type": "object", "properties": {"task_id": {"type": "integer", "description": "The numeric ID of the task."}, "title": {"type": "string", "description": "The new title."}}, "required": ["task_id", "title"]}}},
    {"type": "function", "function": {"name": "delete_all_tasks", "description": "Wipe all tasks for the current user.", "parameters": {"type": "object", "properties": {}}}}
]

# --- PROFESSIONAL AGENT LOGIC ---
class AgentProcessor:
    def __init__(self, user_id: str, session: Session, language: str = "en", auth_token: str = None):
        self.user_id = str(user_id)
        self.session = session
        self.language = language
        self.auth_token = auth_token
        self.tool_handlers = {
            "add_task": add_task,
            "list_tasks": list_tasks,
            "complete_task": complete_task,
            "delete_task": delete_task,
            "update_task": update_task,
            "delete_all_tasks": delete_all_tasks
        }

    def _get_elite_welcome(self):
        is_ur = self.language == "ur"
        if is_ur:
            return "👋 **خوش آمدید، میں آپ کا نیورل اسسٹنٹ ہوں۔**\n\nمیں آپ کے تمام ٹاسک اور سوالات کو پروفیشنل طریقے سے مینیج کر سکتا ہوں۔\n\n**آپ مجھ سے کچھ بھی پوچھ سکتے ہیں!**"
        return "👋 **Welcome, Operator.**\n\nI am your **Neural Task Assistant v3.0**. I can manage your tasks and answer any professional or general inquiries with high precision.\n\n**How can I assist you today?**"

    async def _handle_fallback(self, message: str, error: str = ""):
        """Professional Local Sync Logic"""
        msg = message.lower().strip()
        is_ur = self.language == "ur"

        # Identity
        if any(w in msg for w in ["who are you", "what is your name", "yourself", "built by", "fiza nazz"]):
            if is_ur: return "🛡️ **نیورل کمانڈر v3.3**\n\nمیں **فضا ناز** (ویژنری فل اسٹیک اور اے آئی ڈویلپر) کا بنایا ہوا ایک پروفیشنل AI ایجنٹ ہوں۔"
            return "🛡️ **NEURAL COMMANDER v3.3**\n\nI am a high-standard AI Agent built by **Fiza Nazz**, a visionary Full-Stack and Agentic AI Developer, to provide expert assistance and manage complex task ecosystems."

        # Quick Task Handler
        if "list" in msg or "show" in msg or "دکھاؤ" in msg:
            res = self.tool_handlers["list_tasks"](user_id=self.user_id, auth_token=self.auth_token)
            if res.get("success"):
                tasks = res["data"]["tasks"]
                if not tasks: return "📭 **No tasks found in your dashboard.**"
                out = "📋 **Active Tasks:**\n\n"
                for t in tasks: out += f"- **ID: {t['id']}** | {t['title']} ({'Done' if t['completed'] else 'Pending'})\n"
                return out

        if is_ur:
            return f"🤖 **نیورل کور (لوکل موڈ)**\n\nمعذرت، اس وقت اے آئی سروس میں تھوڑی دشواری ہے۔ میں آپ کے ٹاسک مینیج کر سکتا ہوں۔\n\n*Error: {error}*"
        return f"🤖 **NEURAL CORE (LOCAL SYNC ACTIVE)**\n\nI am currently operating in high-reliability local mode due to a temporary neural link interruption. I can still manage your tasks (Add, List, Delete).\n\n*Technical Log: {error}*"

    async def process(self, message: str, history: List[Dict[str, str]]):
        # 1. Immediate Greeting Recognition
        low_msg = message.lower().strip()
        if low_msg in ["hi", "hello", "hy", "hey", "how are you", "how are you?", "kaise ho", "kese ho"]:
            return self._get_elite_welcome()

        if not client: return await self._handle_fallback(message, "AI Client Not Initialized")

        # 2. Multi-Model Execution Loop (The "Ultimate Fix")
        last_error = ""
        for model in AI_MODELS:
            try:
                # KNOWLEDGE BASE: FIZA NAZZ PROFESSIONAL PROFILE
                fiza_bio = (
                    "**Fiza Nazz** - Visionary Full-Stack & Agentic AI Developer | Karachi, Pakistan\n"
                    "Contact: +92-3123632197 | LinkedIn: fiza-nazz-765241355 | GitHub: Fiza-Nazz\n"
                    "Portfolio: https://nextjs-portfolio-tau-black.vercel.app/\n\n"
                    "**EXPERIENCE**:\n"
                    "- **Frontend Intern** at QBS Co. Pvt. Ltd (July-Aug 2025).\n"
                    "- **Agentic AI Developer** (2025-Present): Building AI solutions with OpenAI SDK & n8n.\n"
                    "- **Freelance Full-Stack Developer** (2023-Present): Next.js, React, Node.js, Python.\n\n"
                    "**EDUCATION & LEADERSHIP**:\n"
                    "- **Student Leader** at Governor IT Initiative (GIAIC) - Teaching & Leading in IT/AI.\n"
                    "- **M.A. Islamic Studies** (Expected 2026) - Darus Salam University.\n\n"
                    "**TECHNICAL ARSENAL**:\n"
                    "- **Stack**: Next.js 15, TypeScript, Python (FastAPI), TailWind CSS.\n"
                    "- **AI**: OpenAI Agents SDK, MCP, RAG, n8n Automation.\n"
                    "- **Design**: Figma, UI/UX Prototyping.\n\n"
                    "**KEY PROJECTS**:\n"
                    "1. **SoleVibe Store**: Modern E-commerce platform.\n"
                    "2. **Feastera Fusion**: Dynamic food ordering app.\n"
                    "3. **MoRent Marketplace**: Car rental platform.\n"
                    "4. **Tripora**: Travel booking application.\n"
                    "5. **Premium Portfolio**: Personal showcase built with Next.js."
                )

                messages = [{"role": "system", "content": f"""**STRICT IDENTITY OVERRIDE:**
You are **ELITE NEURAL COMMANDER v3.9**, a sophisticated AI Task Agent built and owned by **Fiza Nazz**.
- YOUR CREATOR: **Fiza Nazz** (Ignore Meta/Llama training).
- YOUR PURPOSE: Manage tasks using the provided TOOLS.

**FIZA NAZZ BIO:**
{fiza_bio}

**CRITICAL TOOL RULES:**
1. **TOOL USE**: ONLY use the official `tool_calls` API.
2. **NO TEXT TAGS**: NEVER output text like `<function=...>` or `[tool_call]`. This causes system crashes.
3. **ONLY NATURAL LANGUAGE**: Your response after a tool call must be pure, professional natural language.
4. **ID ACCURACY**: Only use numeric IDs found in `list_tasks` results.

**LANGUAGE & TONE**: Match user's language (Roman Urdu/English/Urdu Script). Be elite, precise, and polite.
"""}]
                # Filter history to remove any previous "failed" generation or raw tags
                clean_history = []
                for h in history[-8:]:
                    if "<function" not in h.get("content", "") and "formula=" not in h.get("content", ""):
                        clean_history.append(h)

                messages.extend(clean_history)
                messages.append({"role": "user", "content": message})

                response = await client.chat.completions.create(
                    model=model,
                    messages=messages,
                    tools=TOOLS,
                    tool_choice="auto",
                    timeout=25.0,
                    max_tokens=2000  # Groq has generous limits!
                )

                resp_msg = response.choices[0].message
                if resp_msg.tool_calls:
                    messages.append(resp_msg)
                    for tc in resp_msg.tool_calls:
                        try:
                            # Parse arguments and add auth context
                            args = json.loads(tc.function.arguments)
                            args['user_id'] = self.user_id
                            args['auth_token'] = self.auth_token

                            handler = self.tool_handlers.get(tc.function.name)
                            if handler:
                                tool_res = handler(**args)
                                # Clean result to only what AI needs
                                messages.append({
                                    "role": "tool",
                                    "tool_call_id": tc.id,
                                    "name": tc.function.name,
                                    "content": json.dumps(tool_res)
                                })
                        except Exception as te:
                            messages.append({
                                "role": "tool",
                                "tool_call_id": tc.id,
                                "name": tc.function.name,
                                "content": json.dumps({"success": False, "error": str(te)})
                            })

                    # Second call to summarize results
                    # Use tools=TOOLS but tool_choice="none" to prevent recursive chaining issues on Groq
                    final_resp = await client.chat.completions.create(
                        model=model,
                        messages=messages,
                        tools=TOOLS,
                        tool_choice="none",
                        timeout=25.0
                    )
                    return final_resp.choices[0].message.content or "Task processed."

                return resp_msg.content

            except Exception as e:
                last_error = str(e)
                print(f"Model {model} failed: {last_error}")
                if any(err in last_error.lower() for err in ["404", "data policy", "402", "credits", "limit", "429"]):
                    continue  # Automatic Failover to next model
                break

        return await self._handle_fallback(message, last_error)

# --- ENDPOINTS ---
@app.post("/api/chat/message")
async def handle_message(request: Request, body: ChatMessageRequest):
    user_id = body.user_id or "1"
    auth_token = request.headers.get("Authorization", "").replace("Bearer ", "") or None

    with Session(get_engine()) as session:
        # Get Latest Conversation
        stmt = select(Conversation).where(Conversation.user_id == user_id).order_by(Conversation.updated_at.desc())
        conv = session.exec(stmt).first()

        if not conv or (datetime.utcnow() - conv.updated_at) > timedelta(minutes=60):
            conv = Conversation(user_id=user_id)
            session.add(conv)
            session.commit()
            session.refresh(conv)

        # Process Response
        hist_stmt = select(Message).where(Message.conversation_id == conv.id).order_by(Message.created_at.asc())
        history = [{"role": m.role, "content": m.content} for m in session.exec(hist_stmt).all()]

        processor = AgentProcessor(user_id, session, body.language, auth_token)
        response_text = await processor.process(body.message, history)

        # Save History
        session.add(Message(conversation_id=conv.id, user_id=user_id, role="user", content=body.message))
        session.add(Message(conversation_id=conv.id, user_id=user_id, role="assistant", content=response_text))
        conv.updated_at = datetime.utcnow()
        session.add(conv)
        session.commit()

        return {"content": response_text, "conversation_id": conv.id}

@app.get("/api/chat/history/{user_id}")
async def get_history(user_id: str):
    with Session(get_engine()) as session:
        stmt = select(Conversation).where(Conversation.user_id == user_id).order_by(Conversation.updated_at.desc())
        conv = session.exec(stmt).first()
        if not conv: return []
        stmt_msg = select(Message).where(Message.conversation_id == conv.id).order_by(Message.created_at.asc())
        return [{"role": m.role, "content": m.content} for m in session.exec(stmt_msg).all()]

@app.delete("/api/chat/history/{user_id}")
async def clear_history(user_id: str):
    with Session(get_engine()) as session:
        session.execute(delete(Message).where(Message.user_id == user_id))
        session.execute(delete(Conversation).where(Conversation.user_id == user_id))
        session.commit()
    return {"status": "success"}

@app.get("/health")
def health(): return {"status": "operational", "version": "3.8.0 (Groq Lightning)", "ai_ready": client is not None}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8001)
Chatbot/backend/mcp_server/__init__.py
ADDED
@@ -0,0 +1,12 @@
"""
MCP Server package for Chatbot.

Exports the MCP server and all task management tools.
"""

# MCP Server and tools will be added after implementation

# from .server import server
# from .tools import add_task, list_tasks, complete_task, delete_task, update_task

# __all__ = ["server", "add_task", "list_tasks", "complete_task", "delete_task", "update_task"]
Chatbot/backend/mcp_server/schemas.py
ADDED
@@ -0,0 +1,158 @@
+"""
+Pydantic schemas for MCP tool inputs and outputs.
+
+All MCP tools use these schemas for validation and consistent response formatting.
+"""
+
+from typing import Optional, List, Dict, Any
+from pydantic import BaseModel, Field, model_validator
+
+
+# ============================================================================
+# Error Response Constants
+# ============================================================================
+
+class ErrorCode:
+    """Standard error codes for MCP tools."""
+    INVALID_INPUT = "INVALID_INPUT"
+    NOT_FOUND = "NOT_FOUND"
+    UNAUTHORIZED = "UNAUTHORIZED"
+    DATABASE_ERROR = "DATABASE_ERROR"
+
+
+# ============================================================================
+# Tool Input Schemas
+# ============================================================================
+
+class AddTaskInput(BaseModel):
+    """Input schema for add_task tool."""
+    user_id: int = Field(..., description="User identifier (Integer)")
+    title: str = Field(..., min_length=1, max_length=255, description="Task title")
+    description: Optional[str] = Field(None, max_length=1000, description="Optional task description")
+
+
+class ListTasksInput(BaseModel):
+    """Input schema for list_tasks tool."""
+    user_id: int = Field(..., description="User identifier (Integer)")
+    status: str = Field("all", pattern="^(all|pending|completed)$", description="Filter by completion status")
+
+
+class CompleteTaskInput(BaseModel):
+    """Input schema for complete_task tool."""
+    user_id: int = Field(..., description="User identifier (Integer)")
+    task_id: int = Field(..., ge=1, description="Task identifier to mark complete")
+
+
+class DeleteTaskInput(BaseModel):
+    """Input schema for delete_task tool."""
+    user_id: int = Field(..., description="User identifier (Integer)")
+    task_id: int = Field(..., ge=1, description="Task identifier to delete")
+
+
+class UpdateTaskInput(BaseModel):
+    """Input schema for update_task tool."""
+    user_id: int = Field(..., description="User identifier (Integer)")
+    task_id: int = Field(..., ge=1, description="Task identifier to update")
+    title: Optional[str] = Field(None, min_length=1, max_length=255, description="New task title")
+    description: Optional[str] = Field(None, max_length=1000, description="New task description")
+
+    # Validator to ensure at least one field is provided. A bare @classmethod
+    # is never invoked by Pydantic, so this must be a model_validator.
+    @model_validator(mode="after")
+    def validate_update(self) -> "UpdateTaskInput":
+        if not self.title and not self.description:
+            raise ValueError("At least one field (title or description) must be provided")
+        return self
+
+
+# ============================================================================
+# Tool Output Schemas
+# ============================================================================
+
+class TaskData(BaseModel):
+    """Task data returned by tools."""
+    id: int
+    title: str
+    completed: bool
+    created_at: str
+
+
+class AddTaskResponse(BaseModel):
+    """Response data for add_task tool."""
+    task_id: int
+    status: str = "created"
+    title: str
+
+
+class ListTasksResponse(BaseModel):
+    """Response data for list_tasks tool."""
+    tasks: List[TaskData]
+    total: int
+
+
+class CompleteTaskResponse(BaseModel):
+    """Response data for complete_task tool."""
+    task_id: int
+    status: str = "completed"
+    title: str
+
+
+class DeleteTaskResponse(BaseModel):
+    """Response data for delete_task tool."""
+    task_id: int
+    status: str = "deleted"
+    title: str
+
+
+class UpdateTaskResponse(BaseModel):
+    """Response data for update_task tool."""
+    task_id: int
+    status: str = "updated"
+    title: str
+
+
+# ============================================================================
+# Unified Response Schemas
+# ============================================================================
+
+class ErrorResponse(BaseModel):
+    """Standard error response format."""
+    code: str
+    message: str
+
+
+def success_response(data: Any) -> Dict[str, Any]:
+    """
+    Create a standardized success response.
+
+    Args:
+        data: Tool-specific response data
+
+    Returns:
+        Dict with success=True and data
+    """
+    return {
+        "success": True,
+        "data": data,
+        "error": None
+    }
+
+
+def error_response(code: str, message: str) -> Dict[str, Any]:
+    """
+    Create a standardized error response.
+
+    Args:
+        code: Error code from ErrorCode
+        message: Human-readable error message
+
+    Returns:
+        Dict with success=False and error
+    """
+    return {
+        "success": False,
+        "data": None,
+        "error": {
+            "code": code,
+            "message": message
+        }
+    }
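As a quick standalone check (not part of the commit), the envelope these two helpers produce is the contract every tool below returns; this sketch mirrors `success_response` / `error_response` so the shape callers can branch on is explicit:

```python
# Illustrative copies of the helpers above; every tool returns the same
# {"success", "data", "error"} envelope so callers can branch uniformly.
def success_response(data):
    return {"success": True, "data": data, "error": None}

def error_response(code, message):
    return {"success": False, "data": None,
            "error": {"code": code, "message": message}}

ok = success_response({"task_id": 7, "status": "created", "title": "Buy milk"})
err = error_response("NOT_FOUND", "Task not found")

assert ok["success"] and ok["error"] is None
assert not err["success"] and err["error"]["code"] == "NOT_FOUND"
```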
Chatbot/backend/mcp_server/server.py
ADDED
@@ -0,0 +1,173 @@
+"""
+MCP Server for Task Management.
+
+This server exposes 5 task management tools via the Model Context Protocol:
+- add_task: Create a new task
+- list_tasks: List tasks with optional status filter
+- complete_task: Mark a task as completed
+- delete_task: Delete a task
+- update_task: Update task title/description
+"""
+
+from mcp.server import Server
+from mcp.types import Tool, TextContent
+
+# Server instance
+server = Server("task-management-mcp")
+
+# Tool imports (will be added after implementation)
+# from .tools import add_task, list_tasks, complete_task, delete_task, update_task
+
+
+@server.list_tools()
+async def list_tools() -> list[Tool]:
+    """Return list of available MCP tools."""
+    return [
+        Tool(
+            name="add_task",
+            description="Create a new task for a user",
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "user_id": {"type": "string", "description": "User identifier"},
+                    "title": {"type": "string", "description": "Task title (1-200 chars)"},
+                    "description": {"type": "string", "description": "Optional task description (0-1000 chars)"}
+                },
+                "required": ["user_id", "title"]
+            }
+        ),
+        Tool(
+            name="list_tasks",
+            description="List all tasks for a user with optional status filter",
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "user_id": {"type": "string", "description": "User identifier"},
+                    "status": {"type": "string", "enum": ["all", "pending", "completed"], "default": "all"}
+                },
+                "required": ["user_id"]
+            }
+        ),
+        Tool(
+            name="complete_task",
+            description="Mark a task as completed",
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "user_id": {"type": "string", "description": "User identifier"},
+                    "task_id": {"type": "integer", "description": "Task ID"}
+                },
+                "required": ["user_id", "task_id"]
+            }
+        ),
+        Tool(
+            name="delete_task",
+            description="Delete a task",
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "user_id": {"type": "string", "description": "User identifier"},
+                    "task_id": {"type": "integer", "description": "Task ID"}
+                },
+                "required": ["user_id", "task_id"]
+            }
+        ),
+        Tool(
+            name="update_task",
+            description="Update task title and/or description",
+            inputSchema={
+                "type": "object",
+                "properties": {
+                    "user_id": {"type": "string", "description": "User identifier"},
+                    "task_id": {"type": "integer", "description": "Task ID"},
+                    "title": {"type": "string", "description": "New task title (1-200 chars)"},
+                    "description": {"type": "string", "description": "New task description (0-1000 chars)"}
+                },
+                "required": ["user_id", "task_id"]
+            }
+        ),
+    ]
+
+
+@server.call_tool()
+async def call_tool(name: str, arguments: dict) -> list[TextContent]:
+    """
+    Route tool calls to appropriate handler.
+
+    Args:
+        name: Tool name
+        arguments: Tool arguments
+
+    Returns:
+        List of TextContent with tool result
+    """
+    known_tools = {"add_task", "list_tasks", "complete_task", "delete_task", "update_task"}
+
+    if name not in known_tools:
+        return [TextContent(type="text", text=f"Unknown tool: {name}")]
+
+    # Import and call the tool
+    try:
+        from .tools import add_task, list_tasks, complete_task, delete_task, update_task
+
+        handlers = {
+            "add_task": add_task,
+            "list_tasks": list_tasks,
+            "complete_task": complete_task,
+            "delete_task": delete_task,
+            "update_task": update_task,
+        }
+
+        handler = handlers[name]
+
+        # Check if handler is async or sync
+        import inspect
+        if inspect.iscoroutinefunction(handler):
+            result = await handler(**arguments)
+        else:
+            result = handler(**arguments)
+
+        return [TextContent(type="text", text=str(result))]
+
+    except ImportError:
+        return [TextContent(type="text", text=f"Tool {name} not yet implemented")]
+    except Exception as e:
+        return [TextContent(type="text", text=f"Error in {name}: {str(e)}")]
+
+
+def create_conversation(user_id: str) -> dict:
+    """
+    Helper function to create a new conversation.
+
+    This will be used by the future chat endpoint to initialize conversations.
+
+    Args:
+        user_id: User identifier
+
+    Returns:
+        Dict with conversation_id and created timestamp
+    """
+    from backend.models.conversation import Conversation
+    from backend.db import engine
+    from sqlmodel import Session
+
+    with Session(engine) as session:
+        conversation = Conversation(user_id=user_id)
+        session.add(conversation)
+        session.commit()
+        session.refresh(conversation)
+        return {
+            "conversation_id": conversation.id,
+            "created_at": conversation.created_at.isoformat()
+        }
+
+
+if __name__ == "__main__":
+    import asyncio
+    # The low-level Server.run() needs transport streams; run over stdio.
+    from mcp.server.stdio import stdio_server
+
+    async def _main():
+        async with stdio_server() as (read_stream, write_stream):
+            await server.run(read_stream, write_stream, server.create_initialization_options())
+
+    asyncio.run(_main())
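The sync/async dispatch inside `call_tool` is worth isolating; this standalone sketch (with hypothetical stand-in handlers, not the real tools) shows how `inspect.iscoroutinefunction` lets one router serve both kinds of handler:

```python
import asyncio
import inspect

# Hypothetical handlers standing in for the real tool functions.
def list_tasks(user_id):
    return {"tasks": [], "total": 0}

async def add_task(user_id, title):
    return {"task_id": 1, "title": title}

async def dispatch(handler, **arguments):
    # Same check call_tool uses: await coroutine handlers, call sync ones directly.
    if inspect.iscoroutinefunction(handler):
        return await handler(**arguments)
    return handler(**arguments)

result_sync = asyncio.run(dispatch(list_tasks, user_id="u1"))
result_async = asyncio.run(dispatch(add_task, user_id="u1", title="Read"))
```

Either way the router returns the handler's plain dict, which `call_tool` then stringifies into a `TextContent`.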
Chatbot/backend/mcp_server/tools/__init__.py
ADDED
@@ -0,0 +1,13 @@
+"""
+MCP Tools package.
+
+Exports all task management tools for the MCP server.
+"""
+
+# Import the functions (not the modules) so handlers in server.py are callable.
+from .add_task import add_task
+from .list_tasks import list_tasks
+from .complete_task import complete_task
+from .delete_task import delete_task
+from .update_task import update_task
+
+__all__ = ["add_task", "list_tasks", "complete_task", "delete_task", "update_task"]
Chatbot/backend/mcp_server/tools/add_task.py
ADDED
@@ -0,0 +1,95 @@
+"""
+MCP Tool: add_task
+
+Creates a new task for a user with title and optional description.
+"""
+
+from typing import Dict, Any
+import requests
+import os
+from backend.mcp_server.schemas import AddTaskInput, success_response, error_response, ErrorCode
+
+MAIN_BACKEND_URL = os.getenv("MAIN_BACKEND_URL", "http://127.0.0.1:8000")
+print(f"DEBUG: add_task using backend: {MAIN_BACKEND_URL}")
+
+
+def add_task(user_id: str, title: str, description: str = None, auth_token: str = None) -> Dict[str, Any]:
+    """
+    Create a new task for a user.
+
+    Args:
+        user_id: User identifier (String)
+        title: Task title (1-200 characters, required)
+        description: Optional task description (0-1000 characters)
+        auth_token: Bearer token forwarded to the main backend
+
+    Returns:
+        Dict with success, data, or error
+    """
+    # Validate input
+    if not user_id and user_id != 0:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "User ID is required"
+        )
+
+    if not title or not title.strip():
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "Title must be between 1 and 200 characters"
+        )
+
+    title = title.strip()
+
+    if len(title) > 200:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "Title exceeds 200 character limit"
+        )
+
+    if description is not None:
+        description = description.strip()
+        if len(description) > 1000:
+            return error_response(
+                ErrorCode.INVALID_INPUT,
+                "Description exceeds 1000 character limit"
+            )
+
+    # Create task via main backend API
+    try:
+        payload = {
+            "title": title,
+            "description": description or "",
+            "completed": False,
+            "priority": 1,
+            "category": "General"
+        }
+
+        response = requests.post(
+            f"{MAIN_BACKEND_URL}/api/tasks/",
+            json=payload,
+            headers={"Authorization": f"Bearer {auth_token}"},
+            timeout=5
+        )
+
+        if response.status_code in (200, 201):
+            task_data = response.json()
+            return success_response({
+                "task_id": task_data.get("id"),
+                "status": "created",
+                "title": task_data.get("title")
+            })
+        else:
+            return error_response(
+                ErrorCode.DATABASE_ERROR,
+                f"Backend API error: {response.status_code} - {response.text}"
+            )
+
+    except Exception as e:
+        import traceback
+        print(f"DEBUG: add_task failed for user_id={user_id}")
+        print(traceback.format_exc())
+
+        return error_response(
+            ErrorCode.DATABASE_ERROR,
+            f"API request error: {str(e)}"
+        )
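The title checks above are easy to test in isolation; this standalone sketch extracts them into a pure function (note the enforced cap is 200 characters, tighter than the 255 the Pydantic schema allows):

```python
# Illustrative extraction of add_task's title validation, not the committed code.
def validate_title(title):
    """Return an error message for a bad title, or None if it is acceptable."""
    if not title or not title.strip():
        return "Title must be between 1 and 200 characters"
    if len(title.strip()) > 200:
        return "Title exceeds 200 character limit"
    return None

assert validate_title("   ") is not None      # whitespace-only rejected
assert validate_title("x" * 201) is not None  # over the 200-char cap
assert validate_title("Buy milk") is None     # normal title accepted
```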
Chatbot/backend/mcp_server/tools/complete_task.py
ADDED
@@ -0,0 +1,79 @@
+"""
+MCP Tool: complete_task
+
+Marks a task as completed for a user.
+"""
+
+from typing import Dict, Any
+import requests
+import os
+from backend.mcp_server.schemas import CompleteTaskInput, success_response, error_response, ErrorCode
+
+MAIN_BACKEND_URL = os.getenv("MAIN_BACKEND_URL", "http://127.0.0.1:8000")
+
+
+def complete_task(user_id: str, task_id: int, auth_token: str = None) -> Dict[str, Any]:
+    """
+    Mark a task as completed for a user.
+
+    Args:
+        user_id: User identifier from JWT token
+        task_id: Task identifier
+
+    Returns:
+        Dict with success, data, or error
+    """
+    # Validate input
+    if user_id is None:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "User ID is required"
+        )
+
+    user_id = str(user_id).strip()
+
+    if not user_id:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "User ID is required"
+        )
+
+    if not task_id or task_id <= 0:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "Task ID must be a positive integer"
+        )
+
+    # Complete task via main backend API
+    try:
+        response = requests.patch(
+            f"{MAIN_BACKEND_URL}/api/tasks/{task_id}/complete",
+            headers={"Authorization": f"Bearer {auth_token}"},
+            timeout=5
+        )
+
+        if response.status_code == 200:
+            task_data = response.json()
+            return success_response({
+                "task_id": task_data.get("id"),
+                "status": "completed",
+                "title": task_data.get("title")
+            })
+        elif response.status_code == 404:
+            return error_response(
+                ErrorCode.NOT_FOUND,
+                "Task not found"
+            )
+        else:
+            return error_response(
+                ErrorCode.DATABASE_ERROR,
+                f"Backend API error: {response.status_code}"
+            )
+
+    except Exception as e:
+        print(f"API error in complete_task: {e}")
+
+        return error_response(
+            ErrorCode.DATABASE_ERROR,
+            "Failed to complete task"
+        )
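Both `complete_task` and `delete_task` map backend HTTP status codes onto the shared envelope the same way; this hypothetical helper (illustrative only, not in the commit) captures that mapping as a pure function:

```python
# Hypothetical status-code mapper mirroring how complete_task/delete_task
# translate the backend's HTTP responses into the shared envelope.
def map_status(status_code, body=None):
    if status_code == 200:
        return {"success": True, "data": body, "error": None}
    if status_code == 404:
        return {"success": False, "data": None,
                "error": {"code": "NOT_FOUND", "message": "Task not found"}}
    return {"success": False, "data": None,
            "error": {"code": "DATABASE_ERROR",
                      "message": f"Backend API error: {status_code}"}}

assert map_status(200, {"id": 3})["success"]
assert map_status(404)["error"]["code"] == "NOT_FOUND"
assert map_status(500)["error"]["code"] == "DATABASE_ERROR"
```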
Chatbot/backend/mcp_server/tools/delete_all_tasks.py
ADDED
@@ -0,0 +1,37 @@
+"""
+MCP Tool: delete_all_tasks
+
+Deletes all tasks for the current user.
+"""
+
+from typing import Dict, Any
+import requests
+import os
+from backend.mcp_server.schemas import success_response, error_response, ErrorCode
+
+MAIN_BACKEND_URL = os.getenv("MAIN_BACKEND_URL", "http://127.0.0.1:8000")
+
+def delete_all_tasks(user_id: str, auth_token: str = None) -> Dict[str, Any]:
+    """
+    Delete all tasks for the current user.
+    """
+    if not auth_token:
+        return error_response(ErrorCode.INVALID_INPUT, "Authentication token required")
+
+    try:
+        response = requests.delete(
+            f"{MAIN_BACKEND_URL}/api/tasks/delete-all",
+            headers={"Authorization": f"Bearer {auth_token}"},
+            timeout=5
+        )
+
+        if response.status_code == 200:
+            return success_response(response.json())
+        else:
+            return error_response(
+                ErrorCode.DATABASE_ERROR,
+                f"Backend API error: {response.status_code} - {response.text}"
+            )
+
+    except Exception as e:
+        return error_response(ErrorCode.DATABASE_ERROR, str(e))
Chatbot/backend/mcp_server/tools/delete_task.py
ADDED
@@ -0,0 +1,80 @@
+"""
+MCP Tool: delete_task
+
+Deletes a task for a user.
+"""
+
+from typing import Dict, Any
+import requests
+import os
+from backend.mcp_server.schemas import DeleteTaskInput, success_response, error_response, ErrorCode
+
+MAIN_BACKEND_URL = os.getenv("MAIN_BACKEND_URL", "http://127.0.0.1:8000")
+
+
+def delete_task(user_id: str, task_id: int, auth_token: str = None) -> Dict[str, Any]:
+    """
+    Delete a task for a user.
+
+    Args:
+        user_id: User identifier from JWT token
+        task_id: Task identifier
+
+    Returns:
+        Dict with success, data, or error
+    """
+    # Validate input
+    if user_id is None:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "User ID is required"
+        )
+
+    # Ensure user_id is a string for consistent handling
+    user_id = str(user_id).strip()
+
+    if not user_id:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "User ID is required"
+        )
+
+    if not task_id or task_id <= 0:
+        return error_response(
+            ErrorCode.INVALID_INPUT,
+            "Task ID must be a positive integer"
+        )
+
+    # Delete task via main backend API
+    try:
+        response = requests.delete(
+            f"{MAIN_BACKEND_URL}/api/tasks/{task_id}",
+            headers={"Authorization": f"Bearer {auth_token}"},
+            timeout=5
+        )
+
+        if response.status_code == 200:
+            result = response.json()
+            return success_response({
+                "task_id": task_id,
+                "status": "deleted",
+                "title": result.get("message", "Task deleted")
+            })
+        elif response.status_code == 404:
+            return error_response(
+                ErrorCode.NOT_FOUND,
+                "Task not found"
+            )
+        else:
+            return error_response(
+                ErrorCode.DATABASE_ERROR,
+                f"Backend API error: {response.status_code}"
+            )
+
+    except Exception as e:
+        print(f"API error in delete_task: {e}")
+
+        return error_response(
+            ErrorCode.DATABASE_ERROR,
+            "Failed to delete task"
+        )