jimmy60504 committed
Commit c2c2f17 · 1 Parent(s): 6a958d7

remove spec-kit
.github/prompts/speckit.analyze.prompt.md DELETED
@@ -1,184 +0,0 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).
## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (the user must explicitly approve it before any follow-up editing commands are invoked manually).

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks, not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
## Execution Steps

### 1. Initialize Analysis Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from the repo root and parse its JSON output for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
For single quotes in args like "I'm Groot", use the escape syntax 'I'\''m Groot' (or, where possible, double quotes: "I'm Groot").
### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation
### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in the output):

- **Requirements inventory**: Each functional and non-functional requirement with a stable key (derive a slug from the imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inferred by keyword or by explicit reference patterns such as IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
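The slug derivation above can be sketched as follows; `slugify` is a hypothetical helper for illustration, not part of spec-kit:

```python
import re

def slugify(requirement: str) -> str:
    """Derive a stable key from a requirement's imperative phrase."""
    slug = requirement.lower()
    # Collapse any run of non-alphanumeric characters into a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

print(slugify("User can upload file"))  # user-can-upload-file
```

Because the key depends only on the phrase, rerunning the analysis without spec changes yields the same keys, which supports the deterministic-results principle below.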
### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit output to 50 findings total; aggregate the remainder in an overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark the lower-quality phrasing for consolidation

#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)

#### C. Underspecification

- Requirements with verbs but no object or measurable outcome
- User stories missing acceptance-criteria alignment
- Tasks referencing files or components not defined in the spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from the constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

#### F. Inconsistency

- Terminology drift (the same concept named differently across files)
- Data entities referenced in the plan but absent from the spec (or vice versa)
- Task-ordering contradictions (e.g., integration tasks before foundational setup tasks without a dependency note)
- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)
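One way to flag near-duplicate requirement pairs, sketched with Python's standard library; the 0.85 similarity threshold and the sample data are illustrative assumptions, not spec-kit values:

```python
from difflib import SequenceMatcher
from itertools import combinations

requirements = {
    "user-can-upload-file": "User can upload a file",
    "user-can-upload-files": "Users can upload files",
    "user-can-delete-file": "User can delete a file",
}

def near_duplicates(reqs: dict[str, str], threshold: float = 0.85) -> list[tuple[str, str]]:
    """Return key pairs whose phrasing similarity meets or exceeds the threshold."""
    pairs = []
    for (k1, t1), (k2, t2) in combinations(reqs.items(), 2):
        if SequenceMatcher(None, t1.lower(), t2.lower()).ratio() >= threshold:
            pairs.append((k1, k2))
    return pairs

print(near_duplicates(requirements))
```

Flagged pairs would then feed pass A, with the lower-quality phrasing marked for consolidation.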
### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates a constitution MUST, missing core spec artifact, or a requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep the clearer version |

(Add one row per finding; generate stable IDs prefixed by the category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count
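The coverage mapping and the coverage metric above can be sketched as follows; the data structure and sample keys are illustrative assumptions, not spec-kit's internal model:

```python
# Task coverage mapping: requirement key -> task IDs that reference it
coverage = {
    "user-can-upload-file": ["T001", "T004"],
    "user-can-delete-file": ["T002"],
    "performance-metrics": [],  # zero coverage: a coverage-gap finding candidate
}

covered = [key for key, tasks in coverage.items() if tasks]
uncovered = [key for key, tasks in coverage.items() if not tasks]
coverage_pct = 100 * len(covered) / len(coverage)

print(f"Coverage: {coverage_pct:.0f}%")
print("Requirements with zero tasks:", uncovered)
```

Uncovered keys populate pass E and the Coverage Summary Table; the percentage feeds the Metrics block.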
### 7. Provide Next Actions

At the end of the report, output a concise Next Actions block:

- If CRITICAL issues exist: recommend resolving them before `/speckit.implement`
- If only LOW/MEDIUM issues exist: the user may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into the analysis
- **Token-efficient output**: Limit the findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is a read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit a success report with coverage statistics)

## Context

$ARGUMENTS
.github/prompts/speckit.checklist.prompt.md DELETED
@@ -1,294 +0,0 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING**; they validate the quality, clarity, and completeness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking whether the code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when the logo image fails to load?" (edge cases)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well written, complete, unambiguous, and ready for implementation, NOT whether the implementation works.
26
- ## User Input
27
-
28
- ```text
29
- $ARGUMENTS
30
- ```
31
-
32
- You **MUST** consider the user input before proceeding (if not empty).
33
-
34
- ## Execution Steps
35
-
36
- 1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
37
- - All file paths must be absolute.
38
- - For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
39
-
2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing plus extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:

   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify the probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y, or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only, or by peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected: are rollback / partial failure paths in scope?")

   Question formatting rules:

   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to A-E options maximum; omit the table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction is impossible:

   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (labeled Q1/Q2/Q3). After the answers: if two or more scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow-ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if the user explicitly declines more questions.
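The signal-extraction and clustering steps above can be sketched roughly as follows; the keyword vocabularies are illustrative assumptions, not spec-kit's actual lists:

```python
from collections import Counter

# Illustrative focus-area vocabularies (assumed for this sketch)
FOCUS_AREAS = {
    "security": {"auth", "token", "compliance", "encryption"},
    "performance": {"latency", "throughput", "load", "cache"},
    "ux": {"a11y", "hover", "layout", "navigation"},
    "api": {"endpoint", "contract", "versioning", "rollback"},
}

def rank_focus_areas(user_text: str, max_areas: int = 4) -> list[str]:
    """Rank candidate focus areas by keyword-signal count in the user's phrasing."""
    words = set(user_text.lower().split())
    hits = Counter({area: len(words & vocab) for area, vocab in FOCUS_AREAS.items()})
    # Keep only areas with at least one signal, capped at max_areas.
    return [area for area, n in hits.most_common(max_areas) if n > 0]

print(rank_focus_areas("Check auth token compliance and latency"))
```

The top-ranked clusters then drive which clarifying questions are worth asking at all.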
3. **Understand user request**: Combine `$ARGUMENTS` with the clarifying answers:
   - Derive the checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by the user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if it exists): Technical details, dependencies
   - tasks.md (if it exists): Implementation tasks

   **Context Loading Strategy**:

   - Load only the portions relevant to the active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps are detected
   - If the source docs are large, generate interim summary items instead of embedding raw text
5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create the `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate a unique checklist filename:
     - Use a short, descriptive name based on the domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If the file exists, append to it
   - Number items sequentially starting from CHK001
   - Each `/speckit.checklist` run creates a NEW file (it never overwrites existing checklists)
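The CHK numbering rule above can be sketched as follows; `chk_ids` is a hypothetical helper, not part of spec-kit:

```python
def chk_ids(count: int, start: int = 1) -> list[str]:
    """Generate globally incrementing checklist IDs: CHK001, CHK002, ..."""
    return [f"CHK{n:03d}" for n in range(start, start + count)]

print(chk_ids(3))  # ['CHK001', 'CHK002', 'CHK003']
```

When appending to an existing checklist, `start` would continue from the last used number so IDs stay globally unique within the file.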
   **CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
   Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
   - **Completeness**: Are all necessary requirements present?
   - **Clarity**: Are requirements unambiguous and specific?
   - **Consistency**: Do requirements align with each other?
   - **Measurability**: Can requirements be objectively verified?
   - **Coverage**: Are all scenarios/edge cases addressed?

   **Category Structure** - Group items by requirement quality dimensions:
   - **Requirement Completeness** (Are all necessary requirements documented?)
   - **Requirement Clarity** (Are requirements specific and unambiguous?)
   - **Requirement Consistency** (Do requirements align without conflicts?)
   - **Acceptance Criteria Quality** (Are success criteria measurable?)
   - **Scenario Coverage** (Are all flows/cases addressed?)
   - **Edge Case Coverage** (Are boundary conditions defined?)
   - **Non-Functional Requirements** (Performance, security, accessibility, etc.: are they specified?)
   - **Dependencies & Assumptions** (Are they documented and validated?)
   - **Ambiguities & Conflicts** (What needs clarification?)
   **HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

   ❌ **WRONG** (testing the implementation):
   - "Verify landing page displays 3 episode cards"
   - "Test hover states work on desktop"
   - "Confirm logo click navigates home"

   ✅ **CORRECT** (testing requirements quality):
   - "Are the exact number and layout of featured episodes specified?" [Completeness]
   - "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
   - "Are hover state requirements consistent across all interactive elements?" [Consistency]
   - "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
   - "Is the fallback behavior specified when the logo image fails to load?" [Edge Cases]
   - "Are loading states defined for asynchronous episode data?" [Completeness]
   - "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

   **ITEM STRUCTURE**:
   Each item should follow this pattern:
   - Question format asking about requirement quality
   - Focus on what is WRITTEN (or not written) in the spec/plan
   - Include the quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
   - Reference the spec section `[Spec §X.Y]` when checking existing requirements
   - Use the `[Gap]` marker when checking for missing requirements
   **EXAMPLES BY QUALITY DIMENSION**:

   Completeness:
   - "Are error handling requirements defined for all API failure modes? [Gap]"
   - "Are accessibility requirements specified for all interactive elements? [Completeness]"
   - "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

   Clarity:
   - "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
   - "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
   - "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

   Consistency:
   - "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
   - "Are card component requirements consistent between landing and detail pages? [Consistency]"

   Coverage:
   - "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
   - "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
   - "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

   Measurability:
   - "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
   - "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"
   **Scenario Classification & Coverage** (requirements quality focus):
   - Check whether requirements exist for Primary, Alternate, Exception/Error, Recovery, and Non-Functional scenarios
   - For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
   - If a scenario class is missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
   - Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

   **Traceability Requirements**:
   - MINIMUM: ≥80% of items MUST include at least one traceability reference
   - Each item should reference a spec section `[Spec §X.Y]` or use the markers `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
   - If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
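The ≥80% traceability floor can be checked mechanically. A sketch, where the marker regex and the sample items are assumptions about the tag format, not a spec-kit implementation:

```python
import re

items = [
    "Are error handling requirements defined for all API failure modes? [Gap]",
    "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]",
    "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]",
    "Are requirements consistent between sections?",  # no reference or marker
]

# A traceability reference: a bracketed tag containing a spec section or a marker.
TRACE = re.compile(r"\[[^\]]*(?:Spec §|Gap|Ambiguity|Conflict|Assumption)[^\]]*\]")

traced = sum(1 for item in items if TRACE.search(item))
ratio = traced / len(items)
print(f"{traced}/{len(items)} items traced ({ratio:.0%}); minimum required: 80%")
```

Here 3 of 4 items carry a reference (75%), so this checklist would fall short of the minimum and need more references added.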
   **Surface & Resolve Issues** (requirements quality problems):
   Ask questions about the requirements themselves:
   - Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
   - Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
   - Assumptions: "Is the assumption of an 'always available podcast API' validated? [Assumption]"
   - Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
   - Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

   **Content Consolidation**:
   - Soft cap: if there are more than 40 raw candidate items, prioritize by risk/impact
   - Merge near-duplicates that check the same requirement aspect
   - If there are more than 5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

   **🚫 ABSOLUTELY PROHIBITED** - these make it an implementation test, not a requirements test:
   - ❌ Any item starting with "Verify", "Test", "Confirm", or "Check" plus implementation behavior
   - ❌ References to code execution, user actions, or system behavior
   - ❌ "Displays correctly", "works properly", "functions as expected"
   - ❌ "Click", "navigate", "render", "load", "execute"
   - ❌ Test cases, test plans, QA procedures
   - ❌ Implementation details (frameworks, APIs, algorithms)

   **✅ REQUIRED PATTERNS** - these test requirements quality:
   - ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
   - ✅ "Is [vague term] quantified/clarified with specific criteria?"
   - ✅ "Are requirements consistent between [section A] and [section B]?"
   - ✅ "Can [requirement] be objectively measured/verified?"
   - ✅ "Are [edge cases/scenarios] addressed in requirements?"
   - ✅ "Does the spec define [missing aspect]?"
6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for the title, meta section, category headings, and ID formatting. If the template is unavailable, use: an H1 title, purpose/created meta lines, and `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.

7. **Report**: Output the full path to the created checklist and the item count, and remind the user that each run creates a new file. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/speckit.checklist` invocation creates a checklist file with a short, descriptive name unless the file already exists. This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate each checklist's purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.
## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is the versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented, and are requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
## Anti-Examples: What NOT To Do

**❌ WRONG - these test the implementation, not the requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - these test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: tests whether the system works correctly
- Correct: tests whether the requirements are written correctly
- Wrong: verification of behavior
- Correct: validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"
.github/prompts/speckit.clarify.prompt.md DELETED
@@ -1,177 +0,0 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding the answers back into the spec.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).
## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification, and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but you must warn that downstream rework risk increases.

Execution steps:
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from the repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse the minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN` and `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/speckit.specify` or verify the feature branch environment.
   - For single quotes in args like "I'm Groot", use the escape syntax 'I'\''m Groot' (or, where possible, double quotes: "I'm Groot").
2. Load the current spec file. Perform a structured ambiguity & coverage scan using the taxonomy below. For each category, mark its status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and their failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition-of-Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change the implementation or validation strategy
   - The information is better deferred to the planning phase (note this internally)
- 3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- - Maximum of 5 total questions across the whole session.
- - Each question must be answerable with EITHER:
- - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
- - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
- - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- - Ensure category coverage balance: attempt to cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
-
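The (Impact * Uncertainty) selection above can be sketched as follows; the 1-5 scoring scale and the sample categories are illustrative assumptions, not part of the prompt:

```python
# Sketch of the (Impact * Uncertainty) heuristic for ranking unresolved
# categories. Scores on a 1-5 scale are an assumed convention for illustration.
def prioritize(categories, limit=5):
    """Return the top `limit` categories ranked by impact * uncertainty."""
    return sorted(
        categories,
        key=lambda c: c["impact"] * c["uncertainty"],
        reverse=True,
    )[:limit]

candidates = [
    {"name": "Security posture", "impact": 5, "uncertainty": 5},
    {"name": "Data retention", "impact": 4, "uncertainty": 5},
    {"name": "Button styling", "impact": 1, "uncertainty": 2},
]
print([c["name"] for c in prioritize(candidates, limit=2)])
# → ['Security posture', 'Data retention']
```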
- 4. Sequential questioning loop (interactive):
- - Present EXACTLY ONE question at a time.
- - For multiple‑choice questions:
- - **Analyze all options** and determine the **most suitable option** based on:
- - Best practices for the project type
- - Common patterns in similar implementations
- - Risk reduction (security, performance, maintainability)
- - Alignment with any explicit project goals or constraints visible in the spec
- - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
- - Format as: `**Recommended:** Option [X] - <reasoning>`
- - Then render all options as a Markdown table:
-
- | Option | Description |
- |--------|-------------|
- | A | <Option A description> |
- | B | <Option B description> |
- | C | <Option C description> (add D/E as needed up to 5) |
- | Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
-
- - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
- - For short‑answer style (no meaningful discrete options):
- - Provide your **suggested answer** based on best practices and context.
- - Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
- - Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
- - After the user answers:
- - If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
- - Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
- - If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
- - Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- - Stop asking further questions when:
- - All critical ambiguities resolved early (remaining queued items become unnecessary), OR
- - User signals completion ("done", "good", "no more"), OR
- - You reach 5 asked questions.
- - Never reveal future queued questions in advance.
- - If no valid questions exist at start, immediately report no critical ambiguities.
-
- 5. Integration after EACH accepted answer (incremental update approach):
- - Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- - For the first integrated answer in this session:
- - Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
- - Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
- - Then immediately apply the clarification to the most appropriate section(s):
- - Functional ambiguity → Update or add a bullet in Functional Requirements.
- - User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
- - Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
- - Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
- - Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
- - Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- - Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- - Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- - Keep each inserted clarification minimal and testable (avoid narrative drift).
-
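Applied to a spec, the first accepted answer would produce a section shaped like this (the date, question, and answer are hypothetical):

```markdown
## Clarifications

### Session 2025-01-15

- Q: Should user sessions expire after inactivity? → A: Yes, after 30 minutes
```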
- 6. Validation (performed after EACH write plus final pass):
- - Clarifications session contains exactly one bullet per accepted answer (no duplicates).
- - Total asked (accepted) questions ≤ 5.
- - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
- - No contradictory earlier statement remains (scan to confirm now-invalid alternative choices were removed).
- - Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
- - Terminology consistency: same canonical term used across all updated sections.
-
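The duplicate-bullet check in step 6 can be sketched as follows; the parsing is deliberately simplified and assumes the exact bullet format shown in step 5:

```python
import re

def clarification_bullets(section_text):
    """Extract '- Q: ... → A: ...' bullets from a Clarifications section."""
    return re.findall(r"^- Q: .+ → A: .+$", section_text, flags=re.MULTILINE)

section = """## Clarifications

### Session 2025-01-15

- Q: Session timeout? → A: 30 minutes of inactivity
- Q: Max upload size? → A: 10 MB per file
"""
bullets = clarification_bullets(section)
# Exactly one bullet per accepted answer means no duplicates:
assert len(bullets) == len(set(bullets)), "duplicate clarification bullet"
print(len(bullets))  # → 2
```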
- 7. Write the updated spec back to `FEATURE_SPEC`.
-
- 8. Report completion (after questioning loop ends or early termination):
- - Number of questions asked & answered.
- - Path to updated spec.
- - Sections touched (list names).
- - Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
- - If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
- - Suggested next command.
-
- Behavior rules:
-
- - If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- - If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
- - Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- - Avoid speculative tech stack questions unless the absence blocks functional clarity.
- - Respect user early termination signals ("stop", "done", "proceed").
- - If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- - If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
-
- Context for prioritization: $ARGUMENTS
.github/prompts/speckit.constitution.prompt.md DELETED
@@ -1,78 +0,0 @@
- ---
- description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
-
- Follow this execution flow:
-
- 1. Load the existing constitution template at `.specify/memory/constitution.md`.
- - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
- **IMPORTANT**: The user might require fewer or more principles than the template provides. If a number is specified, respect it, follow the general template, and update the document accordingly.
-
- 2. Collect/derive values for placeholders:
- - If user input (conversation) supplies a value, use it.
- - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
- - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
- - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
- - MAJOR: Backward-incompatible governance/principle removals or redefinitions.
- - MINOR: New principle/section added or materially expanded guidance.
- - PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
- - If the version bump type is ambiguous, propose reasoning before finalizing.
-
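The bump rules above can be sketched as a small helper (the lowercase change-type labels are an assumption for illustration):

```python
# Semantic-version bump following the MAJOR/MINOR/PATCH rules above.
def bump(version, change_type):
    """Apply a bump to an 'X.Y.Z' version string."""
    major, minor, patch = map(int, version.split("."))
    if change_type == "major":  # incompatible principle removals/redefinitions
        return f"{major + 1}.0.0"
    if change_type == "minor":  # new principle/section or expanded guidance
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # clarifications, typos, wording

print(bump("2.3.1", "minor"))  # → 2.4.0
```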
- 3. Draft the updated constitution content:
- - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
- - Preserve the heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
- - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non‑negotiable rules, and an explicit rationale if not obvious.
- - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.
-
- 4. Consistency propagation checklist (convert prior checklist into active validations):
- - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- - Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
- - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to changed principles.
-
- 5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
- - Version change: old → new
- - List of modified principles (old title → new title if renamed)
- - Added sections
- - Removed sections
- - Templates requiring updates (✅ updated / ⚠ pending) with file paths
- - Follow-up TODOs if any placeholders intentionally deferred.
-
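A report prepended per step 5 might look like this (all versions, titles, and statuses are hypothetical):

```markdown
<!--
Sync Impact Report
- Version change: 1.2.0 → 1.3.0
- Modified principles: "Testing" → "Test-First Development"
- Added sections: Observability
- Removed sections: none
- Templates: ✅ .specify/templates/plan-template.md updated; ⚠ .specify/templates/tasks-template.md pending
- Follow-up TODOs: TODO(RATIFICATION_DATE): original adoption date unknown
-->
```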
- 6. Validation before final output:
- - No remaining unexplained bracket tokens.
- - Version line matches report.
- - Dates in ISO format (YYYY-MM-DD).
- - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
-
- 7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
-
- 8. Output a final summary to the user with:
- - New version and bump rationale.
- - Any files flagged for manual follow-up.
- - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
-
- Formatting & Style Requirements:
-
- - Use Markdown headings exactly as in the template (do not demote/promote levels).
- - Wrap long rationale lines for readability (ideally <100 chars), but do not enforce this with awkward breaks.
- - Keep a single blank line between sections.
- - Avoid trailing whitespace.
-
- If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version-decision steps.
-
- If critical info is missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.
-
- Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
.github/prompts/speckit.implement.prompt.md DELETED
@@ -1,134 +0,0 @@
- ---
- description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- 1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-
- 2. **Check checklist status** (if FEATURE_DIR/checklists/ exists):
- - Scan all checklist files in the checklists/ directory
- - For each checklist, count:
- - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
- - Completed items: Lines matching `- [X]` or `- [x]`
- - Incomplete items: Lines matching `- [ ]`
- - Create a status table:
-
- ```text
- | Checklist | Total | Completed | Incomplete | Status |
- |-----------|-------|-----------|------------|--------|
- | ux.md | 12 | 12 | 0 | ✓ PASS |
- | test.md | 8 | 5 | 3 | ✗ FAIL |
- | security.md | 6 | 6 | 0 | ✓ PASS |
- ```
-
- - Calculate overall status:
- - **PASS**: All checklists have 0 incomplete items
- - **FAIL**: One or more checklists have incomplete items
-
- - **If any checklist is incomplete**:
- - Display the table with incomplete item counts
- - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
- - Wait for user response before continuing
- - If user says "no" or "wait" or "stop", halt execution
- - If user says "yes" or "proceed" or "continue", proceed to step 3
-
- - **If all checklists are complete**:
- - Display the table showing all checklists passed
- - Automatically proceed to step 3
-
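The counting rules in step 2 can be sketched as follows (the sample checklist content is illustrative):

```python
import re

def checklist_status(text):
    """Count total, completed, and incomplete '- [ ]' items in a checklist."""
    total = len(re.findall(r"^- \[[ xX]\]", text, flags=re.MULTILINE))
    completed = len(re.findall(r"^- \[[xX]\]", text, flags=re.MULTILINE))
    return total, completed, total - completed

ux = "- [x] Contrast checked\n- [X] Keyboard navigation\n- [ ] Error states\n"
print(checklist_status(ux))  # → (3, 2, 1)
```

A checklist passes when the third value (incomplete) is zero.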
- 3. Load and analyze the implementation context:
- - **REQUIRED**: Read tasks.md for the complete task list and execution plan
- - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
- - **IF EXISTS**: Read data-model.md for entities and relationships
- - **IF EXISTS**: Read contracts/ for API specifications and test requirements
- - **IF EXISTS**: Read research.md for technical decisions and constraints
- - **IF EXISTS**: Read quickstart.md for integration scenarios
-
- 4. **Project Setup Verification**:
- - **REQUIRED**: Create/verify ignore files based on actual project setup:
-
- **Detection & Creation Logic**:
- - Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):
-
- ```sh
- git rev-parse --git-dir 2>/dev/null
- ```
-
- - Check if Dockerfile* exists or Docker is mentioned in plan.md → create/verify .dockerignore
- - Check if .eslintrc* or eslint.config.* exists → create/verify .eslintignore
- - Check if .prettierrc* exists → create/verify .prettierignore
- - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
- - Check if Terraform files (*.tf) exist → create/verify .terraformignore
- - Check if .helmignore is needed (Helm charts present) → create/verify .helmignore
-
- **If ignore file already exists**: Verify it contains essential patterns; append missing critical patterns only
- **If ignore file missing**: Create with full pattern set for detected technology
-
- **Common Patterns by Technology** (from plan.md tech stack):
- - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
- - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
- - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
- - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
- - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
- - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
- - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
- - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
- - **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
- - **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
- - **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
- - **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
- - **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
- - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
-
- **Tool-Specific Patterns**:
- - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
- - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
- - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
- - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
- - **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
-
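The detection half of step 4 can be sketched as follows; the rule map is a small illustrative subset, not the full logic described above:

```python
from pathlib import Path

# Illustrative subset: trigger glob in the repo root → ignore file it implies.
RULES = {
    "Dockerfile*": ".dockerignore",
    "*.tf": ".terraformignore",
    ".prettierrc*": ".prettierignore",
}

def needed_ignore_files(repo_root):
    """Return the ignore files implied by files present in the repo root."""
    root = Path(repo_root)
    return sorted({ignore for glob, ignore in RULES.items() if any(root.glob(glob))})
```

A real pass would also consult plan.md's tech stack and append missing patterns to existing ignore files rather than overwriting them.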
- 5. Parse tasks.md structure and extract:
- - **Task phases**: Setup, Tests, Core, Integration, Polish
- - **Task dependencies**: Sequential vs parallel execution rules
- - **Task details**: ID, description, file paths, parallel markers [P]
- - **Execution flow**: Order and dependency requirements
-
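The extraction in step 5 can be sketched as follows; the `- [ ] T001 [P] description` line shape is an assumption modeled on spec-kit's tasks template:

```python
import re

# Assumed task-line shape: "- [ ] T001 [P] description" ([P] marks parallel).
TASK_RE = re.compile(r"^- \[[ xX]\] (T\d+)( \[P\])? (.+)$")

def parse_tasks(tasks_md):
    """Yield (task_id, is_parallel, description) for each task line."""
    for line in tasks_md.splitlines():
        m = TASK_RE.match(line)
        if m:
            yield m.group(1), m.group(2) is not None, m.group(3)

sample = "- [ ] T001 Create project structure\n- [ ] T002 [P] Write contract tests\n"
print(list(parse_tasks(sample)))
# → [('T001', False, 'Create project structure'), ('T002', True, 'Write contract tests')]
```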
- 6. Execute implementation following the task plan:
- - **Phase-by-phase execution**: Complete each phase before moving to the next
- - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
- - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- - **File-based coordination**: Tasks affecting the same files must run sequentially
- - **Validation checkpoints**: Verify each phase completion before proceeding
-
- 7. Implementation execution rules:
- - **Setup first**: Initialize project structure, dependencies, configuration
- - **Tests before code**: Write any needed tests for contracts, entities, and integration scenarios before implementing them
- - **Core development**: Implement models, services, CLI commands, endpoints
- - **Integration work**: Database connections, middleware, logging, external services
- - **Polish and validation**: Unit tests, performance optimization, documentation
-
- 8. Progress tracking and error handling:
- - Report progress after each completed task
- - Halt execution if any non-parallel task fails
- - For parallel tasks [P], continue with successful tasks and report failed ones
- - Provide clear error messages with context for debugging
- - Suggest next steps if implementation cannot proceed
- - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file.
-
- 9. Completion validation:
- - Verify all required tasks are completed
- - Check that implemented features match the original specification
- - Validate that tests pass and coverage meets requirements
- - Confirm the implementation follows the technical plan
- - Report final status with summary of completed work
-
- Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
.github/prompts/speckit.plan.prompt.md DELETED
@@ -1,81 +0,0 @@
- ---
- description: Execute the implementation planning workflow using the plan template to generate design artifacts.
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- 1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-
- 2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
-
- 3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
- - Fill Constitution Check section from constitution
- - Evaluate gates (ERROR if violations unjustified)
- - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
- - Phase 1: Generate data-model.md, contracts/, quickstart.md
- - Phase 1: Update agent context by running the agent script
- - Re-evaluate Constitution Check post-design
-
- 4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
-
- ## Phases
-
- ### Phase 0: Outline & Research
-
- 1. **Extract unknowns from Technical Context** above:
- - For each NEEDS CLARIFICATION → research task
- - For each dependency → best practices task
- - For each integration → patterns task
-
- 2. **Generate and dispatch research agents**:
-
- ```text
- For each unknown in Technical Context:
- Task: "Research {unknown} for {feature context}"
- For each technology choice:
- Task: "Find best practices for {tech} in {domain}"
- ```
-
- 3. **Consolidate findings** in `research.md` using format:
- - Decision: [what was chosen]
- - Rationale: [why chosen]
- - Alternatives considered: [what else evaluated]
-
- **Output**: research.md with all NEEDS CLARIFICATION resolved
-
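A research.md entry following the step 3 format might look like this (the technology choice and reasoning are hypothetical):

```markdown
## Storage

- Decision: PostgreSQL 16
- Rationale: Relational model fits the entity relationships; team already operates it
- Alternatives considered: SQLite (no concurrent writers), MongoDB (weaker relational integrity)
```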
- ### Phase 1: Design & Contracts
-
- **Prerequisites:** `research.md` complete
-
- 1. **Extract entities from feature spec** → `data-model.md`:
- - Entity name, fields, relationships
- - Validation rules from requirements
- - State transitions if applicable
-
- 2. **Generate API contracts** from functional requirements:
- - For each user action → endpoint
- - Use standard REST/GraphQL patterns
- - Output OpenAPI/GraphQL schema to `/contracts/`
-
- 3. **Agent context update**:
- - Run `.specify/scripts/bash/update-agent-context.sh copilot`
- - These scripts detect which AI agent is in use
- - Update the appropriate agent-specific context file
- - Add only new technology from current plan
- - Preserve manual additions between markers
-
- **Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
-
- ## Key rules
-
- - Use absolute paths
- - ERROR on gate failures or unresolved clarifications
.github/prompts/speckit.specify.prompt.md DELETED
@@ -1,229 +0,0 @@
- ---
- description: Create or update the feature specification from a natural language feature description.
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
-
- Given that feature description, do this:
-
- 1. **Generate a concise short name** (2-4 words) for the branch:
- - Analyze the feature description and extract the most meaningful keywords
- - Create a 2-4 word short name that captures the essence of the feature
- - Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
- - Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
- - Keep it concise but descriptive enough to understand the feature at a glance
- - Examples:
- - "I want to add user authentication" → "user-auth"
- - "Implement OAuth2 integration for the API" → "oauth2-api-integration"
- - "Create a dashboard for analytics" → "analytics-dashboard"
- - "Fix payment processing timeout bug" → "fix-payment-timeout"
-
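The short-name derivation in step 1 can be sketched as follows; the stop-word list is an illustrative assumption, and a real implementation would also preserve acronym casing such as OAuth2 or JWT:

```python
import re

# Assumed stop words to drop from feature descriptions (illustrative only).
STOP_WORDS = {"i", "want", "to", "a", "an", "the", "for", "implement", "create"}

def short_name(description, max_words=4):
    """Derive a hyphenated branch short name from a feature description."""
    words = re.findall(r"[A-Za-z0-9]+", description.lower())
    kept = [w for w in words if w not in STOP_WORDS][:max_words]
    return "-".join(kept)

print(short_name("I want to add user authentication"))  # → add-user-authentication
```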
- 2. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` from repo root **with the short-name argument** and parse its JSON output for BRANCH_NAME and SPEC_FILE. All file paths must be absolute.
-
- **IMPORTANT**:
-
- - Append the short-name argument to the `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` command with the 2-4 word short name you created in step 1. Keep the feature description as the final argument.
- - Bash example: `--short-name "your-generated-short-name" "Feature description here"`
- - PowerShell example: `-ShortName "your-generated-short-name" "Feature description here"`
- - For single quotes in args like "I'm Groot", use escape syntax: e.g., 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
- - You must only ever run this script once
- - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
-
- 3. Load `.specify/templates/spec-template.md` to understand required sections.
-
- 4. Follow this execution flow:
-
- 1. Parse user description from Input
- If empty: ERROR "No feature description provided"
- 2. Extract key concepts from description
- Identify: actors, actions, data, constraints
- 3. For unclear aspects:
- - Make informed guesses based on context and industry standards
- - Only mark with [NEEDS CLARIFICATION: specific question] if:
- - The choice significantly impacts feature scope or user experience
- - Multiple reasonable interpretations exist with different implications
- - No reasonable default exists
- - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
- - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
- 4. Fill User Scenarios & Testing section
- If no clear user flow: ERROR "Cannot determine user scenarios"
- 5. Generate Functional Requirements
- Each requirement must be testable
- Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
- 6. Define Success Criteria
- Create measurable, technology-agnostic outcomes
- Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
- Each criterion must be verifiable without implementation details
- 7. Identify Key Entities (if data involved)
- 8. Return: SUCCESS (spec ready for planning)
-
- 5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
-
- 6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
-
- a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
-
- ```markdown
- # Specification Quality Checklist: [FEATURE NAME]
-
- **Purpose**: Validate specification completeness and quality before proceeding to planning
- **Created**: [DATE]
- **Feature**: [Link to spec.md]
-
- ## Content Quality
-
- - [ ] No implementation details (languages, frameworks, APIs)
- - [ ] Focused on user value and business needs
- - [ ] Written for non-technical stakeholders
- - [ ] All mandatory sections completed
-
- ## Requirement Completeness
-
- - [ ] No [NEEDS CLARIFICATION] markers remain
- - [ ] Requirements are testable and unambiguous
- - [ ] Success criteria are measurable
- - [ ] Success criteria are technology-agnostic (no implementation details)
- - [ ] All acceptance scenarios are defined
- - [ ] Edge cases are identified
- - [ ] Scope is clearly bounded
- - [ ] Dependencies and assumptions identified
-
- ## Feature Readiness
-
- - [ ] All functional requirements have clear acceptance criteria
- - [ ] User scenarios cover primary flows
- - [ ] Feature meets measurable outcomes defined in Success Criteria
- - [ ] No implementation details leak into specification
-
- ## Notes
-
- - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
- ```
-
113
- b. **Run Validation Check**: Review the spec against each checklist item:
114
-       - For each item, determine if it passes or fails
-       - Document specific issues found (quote relevant spec sections)
-
-    c. **Handle Validation Results**:
-
-       - **If all items pass**: Mark checklist complete and proceed to step 6
-
-       - **If items fail (excluding [NEEDS CLARIFICATION])**:
-         1. List the failing items and specific issues
-         2. Update the spec to address each issue
-         3. Re-run validation until all items pass (max 3 iterations)
-         4. If still failing after 3 iterations, document remaining issues in checklist notes and warn the user
-
-       - **If [NEEDS CLARIFICATION] markers remain**:
-         1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
-         2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
-         3. For each clarification needed (max 3), present options to the user in this format:
-
-            ```markdown
-            ## Question [N]: [Topic]
-
-            **Context**: [Quote relevant spec section]
-
-            **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
-
-            **Suggested Answers**:
-
-            | Option | Answer | Implications |
-            |--------|--------|--------------|
-            | A      | [First suggested answer] | [What this means for the feature] |
-            | B      | [Second suggested answer] | [What this means for the feature] |
-            | C      | [Third suggested answer] | [What this means for the feature] |
-            | Custom | Provide your own answer | [Explain how to provide custom input] |
-
-            **Your choice**: _[Wait for user response]_
-            ```
-
-         4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
-            - Use consistent spacing with pipes aligned
-            - Each cell should have spaces around content: `| Content |` not `|Content|`
-            - Header separator must have at least 3 dashes: `|--------|`
-            - Test that the table renders correctly in markdown preview
-         5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
-         6. Present all questions together before waiting for responses
-         7. Wait for the user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
-         8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
-         9. Re-run validation after all clarifications are resolved
161
-
-    d. **Update Checklist**: After each validation iteration, update the checklist file with the current pass/fail status
-
- 7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
-
- **NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
-
- ## General Guidelines
-
- - Focus on **WHAT** users need and **WHY**.
- - Avoid HOW to implement (no tech stack, APIs, code structure).
- - Written for business stakeholders, not developers.
- - DO NOT create any checklists that are embedded in the spec. That will be a separate command.
-
- ### Section Requirements
-
- - **Mandatory sections**: Must be completed for every feature
- - **Optional sections**: Include only when relevant to the feature
- - When a section doesn't apply, remove it entirely (don't leave as "N/A")
-
- ### For AI Generation
-
- When creating this spec from a user prompt:
-
- 1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
- 2. **Document assumptions**: Record reasonable defaults in the Assumptions section
- 3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
-    - Significantly impact feature scope or user experience
-    - Have multiple reasonable interpretations with different implications
-    - Lack any reasonable default
- 4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
- 5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
- 6. **Common areas needing clarification** (only if no reasonable default exists):
-    - Feature scope and boundaries (include/exclude specific use cases)
-    - User types and permissions (if multiple conflicting interpretations possible)
-    - Security/compliance requirements (when legally/financially significant)
-
- **Examples of reasonable defaults** (don't ask about these):
-
- - Data retention: Industry-standard practices for the domain
- - Performance targets: Standard web/mobile app expectations unless specified
- - Error handling: User-friendly messages with appropriate fallbacks
- - Authentication method: Standard session-based or OAuth2 for web apps
- - Integration patterns: RESTful APIs unless specified otherwise
-
- ### Success Criteria Guidelines
-
- Success criteria must be:
-
- 1. **Measurable**: Include specific metrics (time, percentage, count, rate)
- 2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
- 3. **User-focused**: Describe outcomes from the user/business perspective, not system internals
- 4. **Verifiable**: Can be tested/validated without knowing implementation details
-
- **Good examples**:
-
- - "Users can complete checkout in under 3 minutes"
- - "System supports 10,000 concurrent users"
- - "95% of searches return results in under 1 second"
- - "Task completion rate improves by 40%"
-
- **Bad examples** (implementation-focused):
-
- - "API response time is under 200ms" (too technical, use "Users see results instantly")
- - "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- - "React components render efficiently" (framework-specific)
- - "Redis cache hit rate above 80%" (technology-specific)
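Step 1 of the [NEEDS CLARIFICATION] handling in the deleted prompt above extracts all markers from the spec. A minimal shell sketch of that extraction; the `grep` pattern and the sample spec lines are my own assumptions based on the `[NEEDS CLARIFICATION: ...]` form the prompt shows, not a command the prompt itself prescribes:

```shell
# Collect [NEEDS CLARIFICATION: ...] markers from a spec file.
# The spec contents below are hypothetical examples.
spec=$(mktemp)
printf '%s\n' \
  'Auth flow [NEEDS CLARIFICATION: which OAuth grant type?]' \
  'Users can reset their password via email.' \
  'Retention [NEEDS CLARIFICATION: how long is data kept?]' > "$spec"

# -o prints each match on its own line; [^]]* stops at the closing bracket.
markers=$(grep -oE '\[NEEDS CLARIFICATION: [^]]*\]' "$spec")
count=$(printf '%s\n' "$markers" | wc -l)
rm -f "$spec"
```

One match per line makes the "more than 3 markers" limit check a simple numeric comparison on `count`.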
 
.github/prompts/speckit.tasks.prompt.md DELETED
@@ -1,128 +0,0 @@
- ---
- description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
- ---
-
- ## User Input
-
- ```text
- $ARGUMENTS
- ```
-
- You **MUST** consider the user input before proceeding (if not empty).
-
- ## Outline
-
- 1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse the FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-
- 2. **Load design documents**: Read from FEATURE_DIR:
-    - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
-    - **Optional**: data-model.md (entities), contracts/ (API endpoints), research.md (decisions), quickstart.md (test scenarios)
-    - Note: Not all projects have all documents. Generate tasks based on what's available.
-
- 3. **Execute task generation workflow**:
-    - Load plan.md and extract tech stack, libraries, project structure
-    - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
-    - If data-model.md exists: Extract entities and map to user stories
-    - If contracts/ exists: Map endpoints to user stories
-    - If research.md exists: Extract decisions for setup tasks
-    - Generate tasks organized by user story (see Task Generation Rules below)
-    - Generate a dependency graph showing user story completion order
-    - Create parallel execution examples per user story
-    - Validate task completeness (each user story has all needed tasks, independently testable)
-
- 4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as the structure, filled with:
-    - Correct feature name from plan.md
-    - Phase 1: Setup tasks (project initialization)
-    - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
-    - Phase 3+: One phase per user story (in priority order from spec.md)
-    - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
-    - Final Phase: Polish & cross-cutting concerns
-    - All tasks must follow the strict checklist format (see Task Generation Rules below)
-    - Clear file paths for each task
-    - Dependencies section showing story completion order
-    - Parallel execution examples per story
-    - Implementation strategy section (MVP first, incremental delivery)
-
- 5. **Report**: Output the path to the generated tasks.md and a summary:
-    - Total task count
-    - Task count per user story
-    - Parallel opportunities identified
-    - Independent test criteria for each story
-    - Suggested MVP scope (typically just User Story 1)
-    - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
-
- Context for task generation: $ARGUMENTS
-
- The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
-
- ## Task Generation Rules
-
- **CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.
-
- **Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if the user requests a TDD approach.
-
- ### Checklist Format (REQUIRED)
-
- Every task MUST strictly follow this format:
-
- ```text
- - [ ] [TaskID] [P?] [Story?] Description with file path
- ```
-
- **Format Components**:
-
- 1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
- 2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
- 3. **[P] marker**: Include ONLY if the task is parallelizable (different files, no dependencies on incomplete tasks)
- 4. **[Story] label**: REQUIRED for user story phase tasks only
-    - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
-    - Setup phase: NO story label
-    - Foundational phase: NO story label
-    - User Story phases: MUST have story label
-    - Polish phase: NO story label
- 5. **Description**: Clear action with exact file path
-
- **Examples**:
-
- - ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- - ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- - ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- - ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- - ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- - ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- - ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- - ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
95
-
- ### Task Organization
-
- 1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
-    - Each user story (P1, P2, P3...) gets its own phase
-    - Map all related components to their story:
-      - Models needed for that story
-      - Services needed for that story
-      - Endpoints/UI needed for that story
-      - If tests requested: Tests specific to that story
-    - Mark story dependencies (most stories should be independent)
-
- 2. **From Contracts**:
-    - Map each contract/endpoint → the user story it serves
-    - If tests requested: Each contract → contract test task [P] before implementation in that story's phase
-
- 3. **From Data Model**:
-    - Map each entity to the user story(ies) that need it
-    - If an entity serves multiple stories: Put it in the earliest story or the Setup phase
-    - Relationships → service layer tasks in the appropriate story phase
-
- 4. **From Setup/Infrastructure**:
-    - Shared infrastructure → Setup phase (Phase 1)
-    - Foundational/blocking tasks → Foundational phase (Phase 2)
-    - Story-specific setup → within that story's phase
-
- ### Phase Structure
-
- - **Phase 1**: Setup (project initialization)
- - **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- - **Phase 3+**: User Stories in priority order (P1, P2, P3...)
-   - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
-   - Each phase should be a complete, independently testable increment
- - **Final Phase**: Polish & Cross-Cutting Concerns
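The checklist format above is regular enough to check mechanically. The sketch below is a hypothetical validator (the regex and the `check_task` helper are my own, not part of spec-kit); it verifies the checkbox/ID/label shape but cannot tell whether the description really contains a file path:

```shell
# Matches: "- [ ] T123", optional " [P]", optional " [US<n>]", then a description.
task_re='^- \[ \] T[0-9]{3}( \[P\])?( \[US[0-9]+\])? .+'
check_task() { printf '%s' "$1" | grep -qE "$task_re"; }

if check_task '- [ ] T012 [P] [US1] Create User model in src/models/user.py'; then
  echo "valid"
fi
if ! check_task 'T001 [US1] Create model'; then
  echo "rejected: missing checkbox"
fi
```

A pass of this kind would catch the "missing checkbox" and "missing Task ID" examples above, while the "missing file path" case still needs human (or LLM) review.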
 
.specify/memory/constitution.md DELETED
@@ -1,89 +0,0 @@
- <!--
- Sync Impact Report
- - Version change: none → 1.0.0
- - Modified principles: N/A (initial ratification)
- - Added sections: Core Principles (5 items), Documentation Consistency, Development Workflow & Quality Gates, Governance
- - Removed sections: None
- - Templates requiring updates:
-   - .specify/templates/spec-template.md ✅ updated (language guidance: Chinese-first allowed)
-   - .specify/templates/plan-template.md ✅ updated (remove dead link, align Constitution Check)
-   - .specify/templates/tasks-template.md ✅ updated (bilingual allowed)
-   - .specify/templates/checklist-template.md ✅ updated (bilingual allowed)
-   - .specify/templates/commands/* ⚠ N/A (directory not present)
- - Follow-up TODOs: None
- -->
-
- # TTSAM Project Constitution
-
- ## Core Principles
-
- ### I. User Value and Language Policy (Chinese-first Specs)
- Writing specification documents primarily in Chinese is allowed and encouraged; for common technical terms, attach the English term in parentheses at first occurrence. All spec content must remain clear, testable, and technology-neutral, and must not become ambiguous because of language switching.
-
- Rationale: the project's stakeholders communicate primarily in Chinese. Writing in Chinese improves readability and alignment speed, while keeping English terms ensures consistency with external resources and the community.
-
- ### II. Independently Testable User Journeys (User Stories as Independent Slices)
- Every user story must be independently developable, testable, demonstrable, and deliverable; use the Given/When/Then form (Chinese wording allowed) and clearly describe the initial conditions, the behavior, and the verifiable outcome.
-
- Rationale: this ensures every increment forms a demonstrable MVP and reduces cross-story coupling and delivery risk.
-
- ### III. Technology-Agnostic Specs (No Implementation Details)
- Specs must not descend into the implementation layer (languages, frameworks, API endpoint details, database schemas, etc.). A spec describes only user value, behavior, observable outcomes, and acceptance criteria. Any technology choices belong in the plan document (plan.md) or the research document (research.md).
-
- Rationale: this keeps specs oriented toward stakeholders and acceptance, and avoids locking in an implementation too early.
-
- ### IV. Measurable Outcomes (Quantifiable Success Criteria)
- Every spec must provide objective, quantifiable, technology-neutral Success Criteria (e.g., time limits, completion rates, availability) to support acceptance and regression evaluation.
-
- Rationale: an outcome-oriented approach keeps specs verifiable and traceable.
-
- ### V. Boundaries, Risks, and Dependencies
- Specs must list boundary conditions, known risks, gaps, and dependencies. For anything undecided or needing follow-up clarification, use the marker "NEEDS CLARIFICATION" and track its closure during the planning phase.
-
- Rationale: surfacing uncertainty up front reduces overall project risk and rework cost.
-
- ## Documentation Consistency and Structure Requirements
-
- Every feature spec (spec.md) must contain at least the following sections:
- - User Scenarios & Testing: ordered by priority (P1, P2, ...); each story independently testable, using Given/When/Then; Chinese is allowed, with English terms noted in parentheses.
- - Requirements: verifiable functional requirements (FR-###), with no implementation details.
- - Key Entities (when data entities are involved): concepts and relationships only, no schema details.
- - Success Criteria: quantifiable, acceptance-ready, technology-neutral metrics.
- - Edge Cases: key boundary and exception scenarios.
-
- Alignment between templates and existing documents:
- - Spec template: .specify/templates/spec-template.md (Chinese-first language guidance added)
- - Plan template: .specify/templates/plan-template.md (Constitution Check and dead-link fixes)
- - Other templates: English or Chinese is acceptable, subject to the principles of this constitution
-
- ## Development Workflow & Quality Gates
-
- - Constitution Check: during planning, verify each principle of this constitution item by item; any deviation must be justified in plan.md's "Complexity Tracking" section (or its notes) and approved in review.
- - Quality checks (apply as appropriate to the project):
-   - Document checks: spec completeness, requirement testability, measurability of success criteria.
-   - Implementation phase: Build, Lint/Typecheck, and Unit/Integration test gates (where applicable to the project's tech stack).
-   - Delivery and demos: each completed high-priority story can be demonstrated and accepted independently.
-
- ## Governance
-
- - Authority: this constitution takes precedence over other internal conventions; in case of conflict, this constitution prevails.
- - Amendment process: open a PR → assigned Reviewer review → bump the version number and date → synchronize affected templates and record the change in the Sync Impact Report at the top of this document.
- - Versioning rules (SemVer):
-   - MAJOR: removal or redefinition of a principle causing incompatibility;
-   - MINOR: addition or substantive expansion of a principle or section;
-   - PATCH: wording clarifications, spelling fixes, and other non-semantic changes.
- - Compliance review:
-   - Every PR must answer a "Constitution Check" in its description;
-   - Reviewers check that specs are technology-neutral, testable, and have success criteria;
-   - For Chinese specs: required English terms must be given in parentheses at first occurrence to avoid ambiguity.
-
- **Version**: 1.0.0 | **Ratified**: 2025-10-22 | **Last Amended**: 2025-10-22
 
.specify/scripts/bash/check-prerequisites.sh DELETED
@@ -1,166 +0,0 @@
- #!/usr/bin/env bash
-
- # Consolidated prerequisite checking script
- #
- # This script provides unified prerequisite checking for Spec-Driven Development workflow.
- # It replaces the functionality previously spread across multiple scripts.
- #
- # Usage: ./check-prerequisites.sh [OPTIONS]
- #
- # OPTIONS:
- #   --json              Output in JSON format
- #   --require-tasks     Require tasks.md to exist (for implementation phase)
- #   --include-tasks     Include tasks.md in AVAILABLE_DOCS list
- #   --paths-only        Only output path variables (no validation)
- #   --help, -h          Show help message
- #
- # OUTPUTS:
- #   JSON mode:  {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
- #   Text mode:  FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
- #   Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
-
- set -e
-
- # Parse command line arguments
- JSON_MODE=false
- REQUIRE_TASKS=false
- INCLUDE_TASKS=false
- PATHS_ONLY=false
-
- for arg in "$@"; do
-     case "$arg" in
-         --json)
-             JSON_MODE=true
-             ;;
-         --require-tasks)
-             REQUIRE_TASKS=true
-             ;;
-         --include-tasks)
-             INCLUDE_TASKS=true
-             ;;
-         --paths-only)
-             PATHS_ONLY=true
-             ;;
-         --help|-h)
-             cat << 'EOF'
- Usage: check-prerequisites.sh [OPTIONS]
-
- Consolidated prerequisite checking for Spec-Driven Development workflow.
-
- OPTIONS:
-   --json              Output in JSON format
-   --require-tasks     Require tasks.md to exist (for implementation phase)
-   --include-tasks     Include tasks.md in AVAILABLE_DOCS list
-   --paths-only        Only output path variables (no prerequisite validation)
-   --help, -h          Show this help message
-
- EXAMPLES:
-   # Check task prerequisites (plan.md required)
-   ./check-prerequisites.sh --json
-
-   # Check implementation prerequisites (plan.md + tasks.md required)
-   ./check-prerequisites.sh --json --require-tasks --include-tasks
-
-   # Get feature paths only (no validation)
-   ./check-prerequisites.sh --paths-only
-
- EOF
-             exit 0
-             ;;
-         *)
-             echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
-             exit 1
-             ;;
-     esac
- done
-
- # Source common functions
- SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
- source "$SCRIPT_DIR/common.sh"
-
- # Get feature paths and validate branch
- eval $(get_feature_paths)
- check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
-
- # If paths-only mode, output paths and exit (support JSON + paths-only combined)
- if $PATHS_ONLY; then
-     if $JSON_MODE; then
-         # Minimal JSON paths payload (no validation performed)
-         printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
-             "$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
-     else
-         echo "REPO_ROOT: $REPO_ROOT"
-         echo "BRANCH: $CURRENT_BRANCH"
-         echo "FEATURE_DIR: $FEATURE_DIR"
-         echo "FEATURE_SPEC: $FEATURE_SPEC"
-         echo "IMPL_PLAN: $IMPL_PLAN"
-         echo "TASKS: $TASKS"
-     fi
-     exit 0
- fi
-
- # Validate required directories and files
- if [[ ! -d "$FEATURE_DIR" ]]; then
-     echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
-     echo "Run /speckit.specify first to create the feature structure." >&2
-     exit 1
- fi
-
- if [[ ! -f "$IMPL_PLAN" ]]; then
-     echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
-     echo "Run /speckit.plan first to create the implementation plan." >&2
-     exit 1
- fi
-
- # Check for tasks.md if required
- if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
-     echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
-     echo "Run /speckit.tasks first to create the task list." >&2
-     exit 1
- fi
-
- # Build list of available documents
- docs=()
-
- # Always check these optional docs
- [[ -f "$RESEARCH" ]] && docs+=("research.md")
- [[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
-
- # Check contracts directory (only if it exists and has files)
- if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
-     docs+=("contracts/")
- fi
-
- [[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
-
- # Include tasks.md if requested and it exists
- if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
-     docs+=("tasks.md")
- fi
-
- # Output results
- if $JSON_MODE; then
-     # Build JSON array of documents
-     if [[ ${#docs[@]} -eq 0 ]]; then
-         json_docs="[]"
-     else
-         json_docs=$(printf '"%s",' "${docs[@]}")
-         json_docs="[${json_docs%,}]"
-     fi
-
-     printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
- else
-     # Text output
-     echo "FEATURE_DIR:$FEATURE_DIR"
-     echo "AVAILABLE_DOCS:"
-
-     # Show status of each potential document
-     check_file "$RESEARCH" "research.md"
-     check_file "$DATA_MODEL" "data-model.md"
-     check_dir "$CONTRACTS_DIR" "contracts/"
-     check_file "$QUICKSTART" "quickstart.md"
-
-     if $INCLUDE_TASKS; then
-         check_file "$TASKS" "tasks.md"
-     fi
- fi
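The JSON branch of the deleted script above builds its `AVAILABLE_DOCS` array with a printf-and-trim idiom rather than a JSON tool. A standalone sketch of just that idiom (the array contents here are hypothetical):

```shell
# printf repeats its format once per array element, leaving a trailing
# comma that the ${var%,} expansion strips before wrapping in brackets.
docs=("research.md" "data-model.md" "contracts/")

if [[ ${#docs[@]} -eq 0 ]]; then
  json_docs="[]"
else
  json_docs=$(printf '"%s",' "${docs[@]}")
  json_docs="[${json_docs%,}]"
fi
echo "$json_docs"   # prints ["research.md","data-model.md","contracts/"]
```

Note this is only safe because the file names contain no quotes or backslashes; arbitrary strings would need real JSON escaping.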
 
.specify/scripts/bash/common.sh DELETED
@@ -1,156 +0,0 @@
- #!/usr/bin/env bash
- # Common functions and variables for all scripts
-
- # Get repository root, with fallback for non-git repositories
- get_repo_root() {
-     if git rev-parse --show-toplevel >/dev/null 2>&1; then
-         git rev-parse --show-toplevel
-     else
-         # Fall back to script location for non-git repos
-         local script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-         (cd "$script_dir/../../.." && pwd)
-     fi
- }
-
- # Get current branch, with fallback for non-git repositories
- get_current_branch() {
-     # First check if SPECIFY_FEATURE environment variable is set
-     if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
-         echo "$SPECIFY_FEATURE"
-         return
-     fi
-
-     # Then check git if available
-     if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
-         git rev-parse --abbrev-ref HEAD
-         return
-     fi
-
-     # For non-git repos, try to find the latest feature directory
-     local repo_root=$(get_repo_root)
-     local specs_dir="$repo_root/specs"
-
-     if [[ -d "$specs_dir" ]]; then
-         local latest_feature=""
-         local highest=0
-
-         for dir in "$specs_dir"/*; do
-             if [[ -d "$dir" ]]; then
-                 local dirname=$(basename "$dir")
-                 if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
-                     local number=${BASH_REMATCH[1]}
-                     number=$((10#$number))
-                     if [[ "$number" -gt "$highest" ]]; then
-                         highest=$number
-                         latest_feature=$dirname
-                     fi
-                 fi
-             fi
-         done
-
-         if [[ -n "$latest_feature" ]]; then
-             echo "$latest_feature"
-             return
-         fi
-     fi
-
-     echo "main"  # Final fallback
- }
-
- # Check if we have git available
- has_git() {
-     git rev-parse --show-toplevel >/dev/null 2>&1
- }
-
- check_feature_branch() {
-     local branch="$1"
-     local has_git_repo="$2"
-
-     # For non-git repos, we can't enforce branch naming but still provide output
-     if [[ "$has_git_repo" != "true" ]]; then
-         echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
-         return 0
-     fi
-
-     if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
-         echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
-         echo "Feature branches should be named like: 001-feature-name" >&2
-         return 1
-     fi
-
-     return 0
- }
-
- get_feature_dir() { echo "$1/specs/$2"; }
-
- # Find feature directory by numeric prefix instead of exact branch match
- # This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
- find_feature_dir_by_prefix() {
-     local repo_root="$1"
-     local branch_name="$2"
-     local specs_dir="$repo_root/specs"
-
-     # Extract numeric prefix from branch (e.g., "004" from "004-whatever")
-     if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
-         # If branch doesn't have a numeric prefix, fall back to exact match
-         echo "$specs_dir/$branch_name"
-         return
-     fi
-
-     local prefix="${BASH_REMATCH[1]}"
-
-     # Search for directories in specs/ that start with this prefix
-     local matches=()
-     if [[ -d "$specs_dir" ]]; then
-         for dir in "$specs_dir"/"$prefix"-*; do
-             if [[ -d "$dir" ]]; then
-                 matches+=("$(basename "$dir")")
-             fi
-         done
-     fi
-
-     # Handle results
-     if [[ ${#matches[@]} -eq 0 ]]; then
-         # No match found - return the branch name path (will fail later with clear error)
-         echo "$specs_dir/$branch_name"
-     elif [[ ${#matches[@]} -eq 1 ]]; then
-         # Exactly one match - perfect!
-         echo "$specs_dir/${matches[0]}"
-     else
-         # Multiple matches - this shouldn't happen with proper naming convention
-         echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
-         echo "Please ensure only one spec directory exists per numeric prefix." >&2
-         echo "$specs_dir/$branch_name"  # Return something to avoid breaking the script
-     fi
- }
-
- get_feature_paths() {
-     local repo_root=$(get_repo_root)
-     local current_branch=$(get_current_branch)
-     local has_git_repo="false"
-
-     if has_git; then
-         has_git_repo="true"
-     fi
-
-     # Use prefix-based lookup to support multiple branches per spec
-     local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")
-
-     cat <<EOF
- REPO_ROOT='$repo_root'
- CURRENT_BRANCH='$current_branch'
- HAS_GIT='$has_git_repo'
- FEATURE_DIR='$feature_dir'
- FEATURE_SPEC='$feature_dir/spec.md'
- IMPL_PLAN='$feature_dir/plan.md'
- TASKS='$feature_dir/tasks.md'
- RESEARCH='$feature_dir/research.md'
- DATA_MODEL='$feature_dir/data-model.md'
- QUICKSTART='$feature_dir/quickstart.md'
- CONTRACTS_DIR='$feature_dir/contracts'
- EOF
- }
-
- check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
- check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
-
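One subtle detail in `get_current_branch` above is `$((10#$number))`: without the `10#` base prefix, bash arithmetic treats a zero-padded prefix like `008` as an octal constant and raises an error. A minimal demonstration of why the prefix is needed:

```shell
# "008" comes from a directory name like specs/008-some-feature.
number="008"

# 10# forces base-10 interpretation; a plain $((008)) would fail with
# "value too great for base" because 8 is not a valid octal digit.
number=$((10#$number))
echo "$number"   # prints 8
```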
 
.specify/scripts/bash/create-new-feature.sh DELETED
@@ -1,206 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- set -e
4
-
5
- JSON_MODE=false
6
- SHORT_NAME=""
7
- ARGS=()
8
- i=1
9
- while [ $i -le $# ]; do
10
- arg="${!i}"
11
- case "$arg" in
12
- --json)
13
- JSON_MODE=true
14
- ;;
15
- --short-name)
16
- if [ $((i + 1)) -gt $# ]; then
17
- echo 'Error: --short-name requires a value' >&2
18
- exit 1
19
- fi
20
- i=$((i + 1))
21
- next_arg="${!i}"
22
- # Check if the next argument is another option (starts with --)
23
- if [[ "$next_arg" == --* ]]; then
24
- echo 'Error: --short-name requires a value' >&2
25
- exit 1
26
- fi
27
- SHORT_NAME="$next_arg"
28
- ;;
29
- --help|-h)
30
- echo "Usage: $0 [--json] [--short-name <name>] <feature_description>"
31
- echo ""
32
- echo "Options:"
33
- echo " --json Output in JSON format"
34
- echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
35
- echo " --help, -h Show this help message"
36
- echo ""
37
- echo "Examples:"
38
- echo " $0 'Add user authentication system' --short-name 'user-auth'"
39
- echo " $0 'Implement OAuth2 integration for API'"
40
- exit 0
41
- ;;
42
- *)
43
- ARGS+=("$arg")
44
- ;;
45
- esac
46
- i=$((i + 1))
47
- done
48
-
49
- FEATURE_DESCRIPTION="${ARGS[*]}"
50
- if [ -z "$FEATURE_DESCRIPTION" ]; then
51
- echo "Usage: $0 [--json] [--short-name <name>] <feature_description>" >&2
52
- exit 1
53
- fi
54
-
55
- # Function to find the repository root by searching for existing project markers
56
- find_repo_root() {
57
- local dir="$1"
58
- while [ "$dir" != "/" ]; do
59
- if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
60
- echo "$dir"
61
- return 0
62
- fi
63
- dir="$(dirname "$dir")"
64
- done
65
- return 1
66
- }
67
-
68
- # Resolve repository root. Prefer git information when available, but fall back
69
- # to searching for repository markers so the workflow still functions in repositories that
70
- # were initialised with --no-git.
71
- SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
72
-
73
- if git rev-parse --show-toplevel >/dev/null 2>&1; then
74
- REPO_ROOT=$(git rev-parse --show-toplevel)
75
- HAS_GIT=true
76
- else
77
- REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
78
- if [ -z "$REPO_ROOT" ]; then
79
- echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
80
- exit 1
81
- fi
82
- HAS_GIT=false
83
- fi
84
-
85
- cd "$REPO_ROOT"
86
-
87
- SPECS_DIR="$REPO_ROOT/specs"
88
- mkdir -p "$SPECS_DIR"
89
-
90
- HIGHEST=0
91
- if [ -d "$SPECS_DIR" ]; then
92
- for dir in "$SPECS_DIR"/*; do
93
- [ -d "$dir" ] || continue
94
- dirname=$(basename "$dir")
95
- number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
96
- number=$((10#$number))
97
- if [ "$number" -gt "$HIGHEST" ]; then HIGHEST=$number; fi
98
- done
99
- fi
100
-
101
- NEXT=$((HIGHEST + 1))
102
- FEATURE_NUM=$(printf "%03d" "$NEXT")
103
-
104
- # Function to generate branch name with stop word filtering and length filtering
105
- generate_branch_name() {
106
- local description="$1"
107
-
108
- # Common stop words to filter out
109
- local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
110
-
111
- # Convert to lowercase and split into words
112
- local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
113
-
114
- # Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
115
- local meaningful_words=()
116
- for word in $clean_name; do
117
- # Skip empty words
118
- [ -z "$word" ] && continue
119
-
120
- # Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
121
- if ! echo "$word" | grep -qiE "$stop_words"; then
122
- if [ ${#word} -ge 3 ]; then
123
- meaningful_words+=("$word")
124
- elif echo "$description" | grep -q "\b${word^^}\b"; then
125
- # Keep short words if they appear as uppercase in original (likely acronyms)
126
- meaningful_words+=("$word")
127
- fi
128
- fi
129
- done
130
-
131
- # If we have meaningful words, use first 3-4 of them
132
- if [ ${#meaningful_words[@]} -gt 0 ]; then
133
- local max_words=3
134
-         if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
-
-         local result=""
-         local count=0
-         for word in "${meaningful_words[@]}"; do
-             if [ $count -ge $max_words ]; then break; fi
-             if [ -n "$result" ]; then result="$result-"; fi
-             result="$result$word"
-             count=$((count + 1))
-         done
-         echo "$result"
-     else
-         # Fallback to original logic if no meaningful words found
-         echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//' | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
-     fi
- }
-
- # Generate branch name
- if [ -n "$SHORT_NAME" ]; then
-     # Use provided short name, just clean it up
-     BRANCH_SUFFIX=$(echo "$SHORT_NAME" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//')
- else
-     # Generate from description with smart filtering
-     BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
- fi
-
- BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
-
- # GitHub enforces a 244-byte limit on branch names
- # Validate and truncate if necessary
- MAX_BRANCH_LENGTH=244
- if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
-     # Calculate how much we need to trim from suffix
-     # Account for: feature number (3) + hyphen (1) = 4 chars
-     MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))
-
-     # Truncate suffix at word boundary if possible
-     TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
-     # Remove trailing hyphen if truncation created one
-     TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
-
-     ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
-     BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
-
-     >&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
-     >&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
-     >&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
- fi
-
- if [ "$HAS_GIT" = true ]; then
-     git checkout -b "$BRANCH_NAME"
- else
-     >&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
- fi
-
- FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
- mkdir -p "$FEATURE_DIR"
-
- TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
- SPEC_FILE="$FEATURE_DIR/spec.md"
- if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
-
- # Set the SPECIFY_FEATURE environment variable for the current session
- export SPECIFY_FEATURE="$BRANCH_NAME"
-
- if $JSON_MODE; then
-     printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
- else
-     echo "BRANCH_NAME: $BRANCH_NAME"
-     echo "SPEC_FILE: $SPEC_FILE"
-     echo "FEATURE_NUM: $FEATURE_NUM"
-     echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
- fi
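The branch-name logic removed above (slugify the description, prefix the feature number, then enforce GitHub's 244-byte limit) can be sketched as a self-contained function. This is a minimal sketch, not the deleted script itself: `slugify` and `make_branch` are hypothetical names, the smart word filtering is omitted, and `sed 's/-\{1,\}/-/g'` replaces the GNU-only `-\+`; the 244-byte limit and `NNN-` prefix are taken from the script.

```shell
#!/usr/bin/env bash
# Sketch of the deleted branch-name generation: lowercase, replace
# non-alphanumerics with '-', collapse runs, trim edges, then enforce
# the 244-byte limit GitHub places on branch names.
slugify() {
    echo "$1" | tr '[:upper:]' '[:lower:]' \
        | sed 's/[^a-z0-9]/-/g; s/-\{1,\}/-/g; s/^-//; s/-$//'
}

make_branch() {
    local num="$1" desc="$2" max=244
    local suffix branch
    suffix=$(slugify "$desc")
    branch="${num}-${suffix}"
    if [ ${#branch} -gt $max ]; then
        # Trim the suffix, accounting for the number prefix and its hyphen,
        # and drop any trailing hyphen the cut may leave behind.
        suffix=$(echo "$suffix" | cut -c1-$((max - ${#num} - 1)) | sed 's/-$//')
        branch="${num}-${suffix}"
    fi
    echo "$branch"
}
```

Run as e.g. `make_branch 001 'Add User Login!'`; the warning messages and `git checkout` side effects of the original are deliberately left out.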
.specify/scripts/bash/setup-plan.sh DELETED
@@ -1,61 +0,0 @@
- #!/usr/bin/env bash
-
- set -e
-
- # Parse command line arguments
- JSON_MODE=false
- ARGS=()
-
- for arg in "$@"; do
-     case "$arg" in
-         --json)
-             JSON_MODE=true
-             ;;
-         --help|-h)
-             echo "Usage: $0 [--json]"
-             echo "  --json    Output results in JSON format"
-             echo "  --help    Show this help message"
-             exit 0
-             ;;
-         *)
-             ARGS+=("$arg")
-             ;;
-     esac
- done
-
- # Get script directory and load common functions
- SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
- source "$SCRIPT_DIR/common.sh"
-
- # Get all paths and variables from common functions
- eval $(get_feature_paths)
-
- # Check if we're on a proper feature branch (only for git repos)
- check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
-
- # Ensure the feature directory exists
- mkdir -p "$FEATURE_DIR"
-
- # Copy plan template if it exists
- TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
- if [[ -f "$TEMPLATE" ]]; then
-     cp "$TEMPLATE" "$IMPL_PLAN"
-     echo "Copied plan template to $IMPL_PLAN"
- else
-     echo "Warning: Plan template not found at $TEMPLATE"
-     # Create a basic plan file if template doesn't exist
-     touch "$IMPL_PLAN"
- fi
-
- # Output results
- if $JSON_MODE; then
-     printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
-         "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
- else
-     echo "FEATURE_SPEC: $FEATURE_SPEC"
-     echo "IMPL_PLAN: $IMPL_PLAN"
-     echo "SPECS_DIR: $FEATURE_DIR"
-     echo "BRANCH: $CURRENT_BRANCH"
-     echo "HAS_GIT: $HAS_GIT"
- fi
-
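The `--json` branch above emits its result with a `printf` template rather than a JSON encoder. A minimal sketch of that pattern (`emit_plan_json` is a hypothetical name, and the key set is shortened from the script's) shows both the idiom and its caveat: `printf` performs no JSON escaping, so a value containing `"` or `\` would produce invalid JSON.

```shell
#!/usr/bin/env bash
# Sketch of the printf-based JSON emission used by the deleted scripts.
# NOTE: no escaping is applied, so paths must already be JSON-safe.
emit_plan_json() {
    local feature_spec="$1" impl_plan="$2" branch="$3"
    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","BRANCH":"%s"}\n' \
        "$feature_spec" "$impl_plan" "$branch"
}
```

This keeps the scripts dependency-free; a tool like `jq` would be the safer choice if paths could contain special characters.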
.specify/scripts/bash/update-agent-context.sh DELETED
@@ -1,772 +0,0 @@
- #!/usr/bin/env bash
-
- # Update agent context files with information from plan.md
- #
- # This script maintains AI agent context files by parsing feature specifications
- # and updating agent-specific configuration files with project information.
- #
- # MAIN FUNCTIONS:
- # 1. Environment Validation
- #    - Verifies git repository structure and branch information
- #    - Checks for required plan.md files and templates
- #    - Validates file permissions and accessibility
- #
- # 2. Plan Data Extraction
- #    - Parses plan.md files to extract project metadata
- #    - Identifies language/version, frameworks, databases, and project types
- #    - Handles missing or incomplete specification data gracefully
- #
- # 3. Agent File Management
- #    - Creates new agent context files from templates when needed
- #    - Updates existing agent files with new project information
- #    - Preserves manual additions and custom configurations
- #    - Supports multiple AI agent formats and directory structures
- #
- # 4. Content Generation
- #    - Generates language-specific build/test commands
- #    - Creates appropriate project directory structures
- #    - Updates technology stacks and recent changes sections
- #    - Maintains consistent formatting and timestamps
- #
- # 5. Multi-Agent Support
- #    - Handles agent-specific file paths and naming conventions
- #    - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Amp, or Amazon Q Developer CLI
- #    - Can update single agents or all existing agent files
- #    - Creates default Claude file if no agent files exist
- #
- # Usage: ./update-agent-context.sh [agent_type]
- # Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|q
- # Leave empty to update all existing agent files
-
- set -e
-
- # Enable strict error handling
- set -u
- set -o pipefail
-
- #==============================================================================
- # Configuration and Global Variables
- #==============================================================================
-
- # Get script directory and load common functions
- SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
- source "$SCRIPT_DIR/common.sh"
-
- # Get all paths and variables from common functions
- eval $(get_feature_paths)
-
- NEW_PLAN="$IMPL_PLAN"  # Alias for compatibility with existing code
- AGENT_TYPE="${1:-}"
-
- # Agent-specific file paths
- CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
- GEMINI_FILE="$REPO_ROOT/GEMINI.md"
- COPILOT_FILE="$REPO_ROOT/.github/copilot-instructions.md"
- CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
- QWEN_FILE="$REPO_ROOT/QWEN.md"
- AGENTS_FILE="$REPO_ROOT/AGENTS.md"
- WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
- KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
- AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
- ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
- CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
- AMP_FILE="$REPO_ROOT/AGENTS.md"
- Q_FILE="$REPO_ROOT/AGENTS.md"
-
- # Template file
- TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"
-
- # Global variables for parsed plan data
- NEW_LANG=""
- NEW_FRAMEWORK=""
- NEW_DB=""
- NEW_PROJECT_TYPE=""
-
- #==============================================================================
- # Utility Functions
- #==============================================================================
-
- log_info() {
-     echo "INFO: $1"
- }
-
- log_success() {
-     echo "✓ $1"
- }
-
- log_error() {
-     echo "ERROR: $1" >&2
- }
-
- log_warning() {
-     echo "WARNING: $1" >&2
- }
-
- # Cleanup function for temporary files
- cleanup() {
-     local exit_code=$?
-     rm -f /tmp/agent_update_*_$$
-     rm -f /tmp/manual_additions_$$
-     exit $exit_code
- }
-
- # Set up cleanup trap
- trap cleanup EXIT INT TERM
-
- #==============================================================================
- # Validation Functions
- #==============================================================================
-
- validate_environment() {
-     # Check if we have a current branch/feature (git or non-git)
-     if [[ -z "$CURRENT_BRANCH" ]]; then
-         log_error "Unable to determine current feature"
-         if [[ "$HAS_GIT" == "true" ]]; then
-             log_info "Make sure you're on a feature branch"
-         else
-             log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
-         fi
-         exit 1
-     fi
-
-     # Check if plan.md exists
-     if [[ ! -f "$NEW_PLAN" ]]; then
-         log_error "No plan.md found at $NEW_PLAN"
-         log_info "Make sure you're working on a feature with a corresponding spec directory"
-         if [[ "$HAS_GIT" != "true" ]]; then
-             log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
-         fi
-         exit 1
-     fi
-
-     # Check if template exists (needed for new files)
-     if [[ ! -f "$TEMPLATE_FILE" ]]; then
-         log_warning "Template file not found at $TEMPLATE_FILE"
-         log_warning "Creating new agent files will fail"
-     fi
- }
-
- #==============================================================================
- # Plan Parsing Functions
- #==============================================================================
-
- extract_plan_field() {
-     local field_pattern="$1"
-     local plan_file="$2"
-
-     grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
-         head -1 | \
-         sed "s|^\*\*${field_pattern}\*\*: ||" | \
-         sed 's/^[ \t]*//;s/[ \t]*$//' | \
-         grep -v "NEEDS CLARIFICATION" | \
-         grep -v "^N/A$" || echo ""
- }
-
- parse_plan_data() {
-     local plan_file="$1"
-
-     if [[ ! -f "$plan_file" ]]; then
-         log_error "Plan file not found: $plan_file"
-         return 1
-     fi
-
-     if [[ ! -r "$plan_file" ]]; then
-         log_error "Plan file is not readable: $plan_file"
-         return 1
-     fi
-
-     log_info "Parsing plan data from $plan_file"
-
-     NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
-     NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
-     NEW_DB=$(extract_plan_field "Storage" "$plan_file")
-     NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
-
-     # Log what we found
-     if [[ -n "$NEW_LANG" ]]; then
-         log_info "Found language: $NEW_LANG"
-     else
-         log_warning "No language information found in plan"
-     fi
-
-     if [[ -n "$NEW_FRAMEWORK" ]]; then
-         log_info "Found framework: $NEW_FRAMEWORK"
-     fi
-
-     if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
-         log_info "Found database: $NEW_DB"
-     fi
-
-     if [[ -n "$NEW_PROJECT_TYPE" ]]; then
-         log_info "Found project type: $NEW_PROJECT_TYPE"
-     fi
- }
-
- format_technology_stack() {
-     local lang="$1"
-     local framework="$2"
-     local parts=()
-
-     # Add non-empty parts
-     [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
-     [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
-
-     # Join with proper formatting
-     if [[ ${#parts[@]} -eq 0 ]]; then
-         echo ""
-     elif [[ ${#parts[@]} -eq 1 ]]; then
-         echo "${parts[0]}"
-     else
-         # Join multiple parts with " + "
-         local result="${parts[0]}"
-         for ((i=1; i<${#parts[@]}; i++)); do
-             result="$result + ${parts[i]}"
-         done
-         echo "$result"
-     fi
- }
-
- #==============================================================================
- # Template and Content Generation Functions
- #==============================================================================
-
- get_project_structure() {
-     local project_type="$1"
-
-     if [[ "$project_type" == *"web"* ]]; then
-         echo "backend/\\nfrontend/\\ntests/"
-     else
-         echo "src/\\ntests/"
-     fi
- }
-
- get_commands_for_language() {
-     local lang="$1"
-
-     case "$lang" in
-         *"Python"*)
-             echo "cd src && pytest && ruff check ."
-             ;;
-         *"Rust"*)
-             echo "cargo test && cargo clippy"
-             ;;
-         *"JavaScript"*|*"TypeScript"*)
-             echo "npm test \\&\\& npm run lint"
-             ;;
-         *)
-             echo "# Add commands for $lang"
-             ;;
-     esac
- }
-
- get_language_conventions() {
-     local lang="$1"
-     echo "$lang: Follow standard conventions"
- }
-
- create_new_agent_file() {
-     local target_file="$1"
-     local temp_file="$2"
-     local project_name="$3"
-     local current_date="$4"
-
-     if [[ ! -f "$TEMPLATE_FILE" ]]; then
-         log_error "Template not found at $TEMPLATE_FILE"
-         return 1
-     fi
-
-     if [[ ! -r "$TEMPLATE_FILE" ]]; then
-         log_error "Template file is not readable: $TEMPLATE_FILE"
-         return 1
-     fi
-
-     log_info "Creating new agent context file from template..."
-
-     if ! cp "$TEMPLATE_FILE" "$temp_file"; then
-         log_error "Failed to copy template file"
-         return 1
-     fi
-
-     # Replace template placeholders
-     local project_structure
-     project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
-
-     local commands
-     commands=$(get_commands_for_language "$NEW_LANG")
-
-     local language_conventions
-     language_conventions=$(get_language_conventions "$NEW_LANG")
-
-     # Perform substitutions with error checking using safer approach
-     # Escape special characters for sed by using a different delimiter or escaping
-     local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
-     local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
-     local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')
-
-     # Build technology stack and recent change strings conditionally
-     local tech_stack
-     if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
-         tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
-     elif [[ -n "$escaped_lang" ]]; then
-         tech_stack="- $escaped_lang ($escaped_branch)"
-     elif [[ -n "$escaped_framework" ]]; then
-         tech_stack="- $escaped_framework ($escaped_branch)"
-     else
-         tech_stack="- ($escaped_branch)"
-     fi
-
-     local recent_change
-     if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
-         recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
-     elif [[ -n "$escaped_lang" ]]; then
-         recent_change="- $escaped_branch: Added $escaped_lang"
-     elif [[ -n "$escaped_framework" ]]; then
-         recent_change="- $escaped_branch: Added $escaped_framework"
-     else
-         recent_change="- $escaped_branch: Added"
-     fi
-
-     local substitutions=(
-         "s|\[PROJECT NAME\]|$project_name|"
-         "s|\[DATE\]|$current_date|"
-         "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
-         "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
-         "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
-         "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
-         "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
-     )
-
-     for substitution in "${substitutions[@]}"; do
-         if ! sed -i.bak -e "$substitution" "$temp_file"; then
-             log_error "Failed to perform substitution: $substitution"
-             rm -f "$temp_file" "$temp_file.bak"
-             return 1
-         fi
-     done
-
-     # Convert \n sequences to actual newlines
-     newline=$(printf '\n')
-     sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"
-
-     # Clean up backup files
-     rm -f "$temp_file.bak" "$temp_file.bak2"
-
-     return 0
- }
-
-
-
-
- update_existing_agent_file() {
-     local target_file="$1"
-     local current_date="$2"
-
-     log_info "Updating existing agent context file..."
-
-     # Use a single temporary file for atomic update
-     local temp_file
-     temp_file=$(mktemp) || {
-         log_error "Failed to create temporary file"
-         return 1
-     }
-
-     # Process the file in one pass
-     local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
-     local new_tech_entries=()
-     local new_change_entry=""
-
-     # Prepare new technology entries
-     if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
-         new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
-     fi
-
-     if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
-         new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
-     fi
-
-     # Prepare new change entry
-     if [[ -n "$tech_stack" ]]; then
-         new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
-     elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
-         new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
-     fi
-
-     # Check if sections exist in the file
-     local has_active_technologies=0
-     local has_recent_changes=0
-
-     if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
-         has_active_technologies=1
-     fi
-
-     if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
-         has_recent_changes=1
-     fi
-
-     # Process file line by line
-     local in_tech_section=false
-     local in_changes_section=false
-     local tech_entries_added=false
-     local changes_entries_added=false
-     local existing_changes_count=0
-     local file_ended=false
-
-     while IFS= read -r line || [[ -n "$line" ]]; do
-         # Handle Active Technologies section
-         if [[ "$line" == "## Active Technologies" ]]; then
-             echo "$line" >> "$temp_file"
-             in_tech_section=true
-             continue
-         elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
-             # Add new tech entries before closing the section
-             if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
-                 printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
-                 tech_entries_added=true
-             fi
-             echo "$line" >> "$temp_file"
-             in_tech_section=false
-             continue
-         elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
-             # Add new tech entries before empty line in tech section
-             if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
-                 printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
-                 tech_entries_added=true
-             fi
-             echo "$line" >> "$temp_file"
-             continue
-         fi
-
-         # Handle Recent Changes section
-         if [[ "$line" == "## Recent Changes" ]]; then
-             echo "$line" >> "$temp_file"
-             # Add new change entry right after the heading
-             if [[ -n "$new_change_entry" ]]; then
-                 echo "$new_change_entry" >> "$temp_file"
-             fi
-             in_changes_section=true
-             changes_entries_added=true
-             continue
-         elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
-             echo "$line" >> "$temp_file"
-             in_changes_section=false
-             continue
-         elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
-             # Keep only first 2 existing changes
-             if [[ $existing_changes_count -lt 2 ]]; then
-                 echo "$line" >> "$temp_file"
-                 ((existing_changes_count++))
-             fi
-             continue
-         fi
-
-         # Update timestamp
-         if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
-             echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
-         else
-             echo "$line" >> "$temp_file"
-         fi
-     done < "$target_file"
-
-     # Post-loop check: if we're still in the Active Technologies section and haven't added new entries
-     if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
-         printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
-         tech_entries_added=true
-     fi
-
-     # If sections don't exist, add them at the end of the file
-     if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
-         echo "" >> "$temp_file"
-         echo "## Active Technologies" >> "$temp_file"
-         printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
-         tech_entries_added=true
-     fi
-
-     if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
-         echo "" >> "$temp_file"
-         echo "## Recent Changes" >> "$temp_file"
-         echo "$new_change_entry" >> "$temp_file"
-         changes_entries_added=true
-     fi
-
-     # Move temp file to target atomically
-     if ! mv "$temp_file" "$target_file"; then
-         log_error "Failed to update target file"
-         rm -f "$temp_file"
-         return 1
-     fi
-
-     return 0
- }
- #==============================================================================
- # Main Agent File Update Function
- #==============================================================================
-
- update_agent_file() {
-     local target_file="$1"
-     local agent_name="$2"
-
-     if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
-         log_error "update_agent_file requires target_file and agent_name parameters"
-         return 1
-     fi
-
-     log_info "Updating $agent_name context file: $target_file"
-
-     local project_name
-     project_name=$(basename "$REPO_ROOT")
-     local current_date
-     current_date=$(date +%Y-%m-%d)
-
-     # Create directory if it doesn't exist
-     local target_dir
-     target_dir=$(dirname "$target_file")
-     if [[ ! -d "$target_dir" ]]; then
-         if ! mkdir -p "$target_dir"; then
-             log_error "Failed to create directory: $target_dir"
-             return 1
-         fi
-     fi
-
-     if [[ ! -f "$target_file" ]]; then
-         # Create new file from template
-         local temp_file
-         temp_file=$(mktemp) || {
-             log_error "Failed to create temporary file"
-             return 1
-         }
-
-         if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
-             if mv "$temp_file" "$target_file"; then
-                 log_success "Created new $agent_name context file"
-             else
-                 log_error "Failed to move temporary file to $target_file"
-                 rm -f "$temp_file"
-                 return 1
-             fi
-         else
-             log_error "Failed to create new agent file"
-             rm -f "$temp_file"
-             return 1
-         fi
-     else
-         # Update existing file
-         if [[ ! -r "$target_file" ]]; then
-             log_error "Cannot read existing file: $target_file"
-             return 1
-         fi
-
-         if [[ ! -w "$target_file" ]]; then
-             log_error "Cannot write to existing file: $target_file"
-             return 1
-         fi
-
-         if update_existing_agent_file "$target_file" "$current_date"; then
-             log_success "Updated existing $agent_name context file"
-         else
-             log_error "Failed to update existing agent file"
-             return 1
-         fi
-     fi
-
-     return 0
- }
-
- #==============================================================================
- # Agent Selection and Processing
- #==============================================================================
-
- update_specific_agent() {
-     local agent_type="$1"
-
-     case "$agent_type" in
-         claude)
-             update_agent_file "$CLAUDE_FILE" "Claude Code"
-             ;;
-         gemini)
-             update_agent_file "$GEMINI_FILE" "Gemini CLI"
-             ;;
-         copilot)
-             update_agent_file "$COPILOT_FILE" "GitHub Copilot"
-             ;;
-         cursor-agent)
-             update_agent_file "$CURSOR_FILE" "Cursor IDE"
-             ;;
-         qwen)
-             update_agent_file "$QWEN_FILE" "Qwen Code"
-             ;;
-         opencode)
-             update_agent_file "$AGENTS_FILE" "opencode"
-             ;;
-         codex)
-             update_agent_file "$AGENTS_FILE" "Codex CLI"
-             ;;
-         windsurf)
-             update_agent_file "$WINDSURF_FILE" "Windsurf"
-             ;;
-         kilocode)
-             update_agent_file "$KILOCODE_FILE" "Kilo Code"
-             ;;
-         auggie)
-             update_agent_file "$AUGGIE_FILE" "Auggie CLI"
-             ;;
-         roo)
-             update_agent_file "$ROO_FILE" "Roo Code"
-             ;;
-         codebuddy)
-             update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
-             ;;
-         amp)
-             update_agent_file "$AMP_FILE" "Amp"
-             ;;
-         q)
-             update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
-             ;;
-         *)
-             log_error "Unknown agent type '$agent_type'"
-             log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|q"
-             exit 1
-             ;;
-     esac
- }
-
- update_all_existing_agents() {
-     local found_agent=false
-
-     # Check each possible agent file and update if it exists
-     if [[ -f "$CLAUDE_FILE" ]]; then
-         update_agent_file "$CLAUDE_FILE" "Claude Code"
-         found_agent=true
-     fi
-
-     if [[ -f "$GEMINI_FILE" ]]; then
-         update_agent_file "$GEMINI_FILE" "Gemini CLI"
-         found_agent=true
-     fi
-
-     if [[ -f "$COPILOT_FILE" ]]; then
-         update_agent_file "$COPILOT_FILE" "GitHub Copilot"
-         found_agent=true
-     fi
-
-     if [[ -f "$CURSOR_FILE" ]]; then
-         update_agent_file "$CURSOR_FILE" "Cursor IDE"
-         found_agent=true
-     fi
-
-     if [[ -f "$QWEN_FILE" ]]; then
-         update_agent_file "$QWEN_FILE" "Qwen Code"
-         found_agent=true
-     fi
-
-     if [[ -f "$AGENTS_FILE" ]]; then
-         update_agent_file "$AGENTS_FILE" "Codex/opencode"
-         found_agent=true
-     fi
-
-     if [[ -f "$WINDSURF_FILE" ]]; then
-         update_agent_file "$WINDSURF_FILE" "Windsurf"
-         found_agent=true
-     fi
-
-     if [[ -f "$KILOCODE_FILE" ]]; then
-         update_agent_file "$KILOCODE_FILE" "Kilo Code"
-         found_agent=true
-     fi
-
-     if [[ -f "$AUGGIE_FILE" ]]; then
-         update_agent_file "$AUGGIE_FILE" "Auggie CLI"
-         found_agent=true
-     fi
-
-     if [[ -f "$ROO_FILE" ]]; then
-         update_agent_file "$ROO_FILE" "Roo Code"
-         found_agent=true
-     fi
-
-     if [[ -f "$CODEBUDDY_FILE" ]]; then
-         update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
-         found_agent=true
-     fi
-
-     if [[ -f "$Q_FILE" ]]; then
-         update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
-         found_agent=true
-     fi
-
-     # If no agent files exist, create a default Claude file
-     if [[ "$found_agent" == false ]]; then
-         log_info "No existing agent files found, creating default Claude file..."
-         update_agent_file "$CLAUDE_FILE" "Claude Code"
-     fi
- }
- print_summary() {
-     echo
-     log_info "Summary of changes:"
-
-     if [[ -n "$NEW_LANG" ]]; then
-         echo "  - Added language: $NEW_LANG"
-     fi
-
-     if [[ -n "$NEW_FRAMEWORK" ]]; then
-         echo "  - Added framework: $NEW_FRAMEWORK"
-     fi
-
-     if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
-         echo "  - Added database: $NEW_DB"
-     fi
-
-     echo
-
-     log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|q]"
- }
-
- #==============================================================================
- # Main Execution
- #==============================================================================
-
- main() {
-     # Validate environment before proceeding
-     validate_environment
-
-     log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
-
-     # Parse the plan file to extract project information
-     if ! parse_plan_data "$NEW_PLAN"; then
-         log_error "Failed to parse plan data"
-         exit 1
-     fi
-
-     # Process based on agent type argument
-     local success=true
-
-     if [[ -z "$AGENT_TYPE" ]]; then
-         # No specific agent provided - update all existing agent files
-         log_info "No agent specified, updating all existing agent files..."
-         if ! update_all_existing_agents; then
-             success=false
-         fi
-     else
-         # Specific agent provided - update only that agent
-         log_info "Updating specific agent: $AGENT_TYPE"
-         if ! update_specific_agent "$AGENT_TYPE"; then
-             success=false
-         fi
-     fi
-
-     # Print summary
-     print_summary
-
-     if [[ "$success" == true ]]; then
-         log_success "Agent context update completed successfully"
-         exit 0
-     else
-         log_error "Agent context update completed with errors"
-         exit 1
-     fi
- }
-
- # Execute main function if script is run directly
- if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
-     main "$@"
- fi
-
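The `extract_plan_field` helper deleted above is the heart of the plan parsing: it greps a `**Field**: value` line out of plan.md, strips the prefix and surrounding whitespace, and suppresses the `NEEDS CLARIFICATION` / `N/A` placeholders. Restated here in isolation so it can be exercised on its own (the body mirrors the deleted function):

```shell
#!/usr/bin/env bash
# Mirror of the deleted extract_plan_field: pull the value of a
# "**Field**: value" line from a plan.md, dropping placeholder values.
extract_plan_field() {
    local field_pattern="$1"
    local plan_file="$2"

    grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
        head -1 | \
        sed "s|^\*\*${field_pattern}\*\*: ||" | \
        sed 's/^[ \t]*//;s/[ \t]*$//' | \
        grep -v "NEEDS CLARIFICATION" | \
        grep -v "^N/A$" || echo ""
}
```

Because the final `grep -v` exits nonzero when every candidate line is filtered out, the trailing `|| echo ""` guarantees the function always produces output, which keeps callers under `set -o pipefail` from aborting.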
.specify/templates/agent-file-template.md DELETED
@@ -1,28 +0,0 @@
- # [PROJECT NAME] Development Guidelines
-
- Auto-generated from all feature plans. Last updated: [DATE]
-
- ## Active Technologies
-
- [EXTRACTED FROM ALL PLAN.MD FILES]
-
- ## Project Structure
-
- ```text
- [ACTUAL STRUCTURE FROM PLANS]
- ```
-
- ## Commands
-
- [ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]
-
- ## Code Style
-
- [LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]
-
- ## Recent Changes
-
- [LAST 3 FEATURES AND WHAT THEY ADDED]
-
- <!-- MANUAL ADDITIONS START -->
- <!-- MANUAL ADDITIONS END -->
.specify/templates/checklist-template.md DELETED
@@ -1,42 +0,0 @@
- # [CHECKLIST TYPE] Checklist: [FEATURE NAME]
-
- > Language Note: This checklist may be written in Chinese; items must be specific, actionable, and checkable, and any necessary English terms should be noted in parentheses the first time they appear.
-
- **Purpose**: [Brief description of what this checklist covers]
- **Created**: [DATE]
- **Feature**: [Link to spec.md or relevant documentation]
-
- **Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.
-
- <!--
- ============================================================================
- IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
-
- The /speckit.checklist command MUST replace these with actual items based on:
- - User's specific checklist request
- - Feature requirements from spec.md
- - Technical context from plan.md
- - Implementation details from tasks.md
-
- DO NOT keep these sample items in the generated checklist file.
- ============================================================================
- -->
-
- ## [Category 1]
-
- - [ ] CHK001 First checklist item with clear action
- - [ ] CHK002 Second checklist item
- - [ ] CHK003 Third checklist item
-
- ## [Category 2]
-
- - [ ] CHK004 Another category item
- - [ ] CHK005 Item with specific criteria
- - [ ] CHK006 Final item in this category
-
- ## Notes
-
- - Check items off as completed: `[x]`
- - Add comments or findings inline
- - Link to relevant resources or documentation
- - Items are numbered sequentially for easy reference
.specify/templates/plan-template.md DELETED
@@ -1,107 +0,0 @@
- # Implementation Plan: [FEATURE]
-
- **Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
- **Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
-
- > Language Note: This plan document may be written in Chinese; keep the English term in parentheses on first occurrence. Content must align with the Constitution and avoid leaking implementation details back into the spec layer.
-
- ## Summary
-
- [Extract from feature spec: primary requirement + technical approach from research]
-
- ## Technical Context
-
- <!--
- ACTION REQUIRED: Replace the content in this section with the technical details
- for the project. The structure here is presented in advisory capacity to guide
- the iteration process.
- -->
-
- **Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
- **Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
- **Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
- **Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
- **Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
- **Project Type**: [single/web/mobile - determines source structure]
- **Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
- **Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
- **Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
-
- ## Constitution Check
-
- *GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
-
- - Is the spec centered on user value, technology-neutral, and does it define quantifiable success criteria?
- - Can each user story be developed/tested/demonstrated independently (Given/When/Then)?
- - Does the language policy follow "Chinese first, with English terms on first occurrence"?
- - Are boundaries/risks/dependencies and NEEDS CLARIFICATION items listed and tracked within this plan?
-
- ## Project Structure
-
- ### Documentation (this feature)
-
- ```text
- specs/[###-feature]/
- ├── plan.md              # This file (/speckit.plan command output)
- ├── research.md          # Phase 0 output (/speckit.plan command)
- ├── data-model.md        # Phase 1 output (/speckit.plan command)
- ├── quickstart.md        # Phase 1 output (/speckit.plan command)
- ├── contracts/           # Phase 1 output (/speckit.plan command)
- └── tasks.md             # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
- ```
-
- ### Source Code (repository root)
- <!--
- ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
- for this feature. Delete unused options and expand the chosen structure with
- real paths (e.g., apps/admin, packages/something). The delivered plan must
- not include Option labels.
- -->
-
- ```text
- # [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
- src/
- ├── models/
- ├── services/
- ├── cli/
- └── lib/
-
- tests/
- ├── contract/
- ├── integration/
- └── unit/
-
- # [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
- backend/
- ├── src/
- │   ├── models/
- │   ├── services/
- │   └── api/
- └── tests/
-
- frontend/
- ├── src/
- │   ├── components/
- │   ├── pages/
- │   └── services/
- └── tests/
-
- # [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
- api/
- └── [same as backend above]
-
- ios/ or android/
- └── [platform-specific structure: feature modules, UI flows, platform tests]
- ```
-
- **Structure Decision**: [Document the selected structure and reference the real
- directories captured above]
-
- ## Complexity Tracking
-
- > **Fill ONLY if Constitution Check has violations that must be justified**
-
- | Violation | Why Needed | Simpler Alternative Rejected Because |
- |-----------|------------|-------------------------------------|
- | [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
- | [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
.specify/templates/spec-template.md DELETED
@@ -1,117 +0,0 @@
- # Feature Specification: [FEATURE NAME]
-
- > Language Note: This spec may be written primarily in Chinese; note key English terms in parentheses on first occurrence, e.g., 震度 (Intensity), 用戶故事 (User Story). The spec must remain technology-neutral and testable, with quantifiable success criteria.
-
- **Feature Branch**: `[###-feature-name]`
- **Created**: [DATE]
- **Status**: Draft
- **Input**: User description: "$ARGUMENTS"
-
- ## User Scenarios & Testing *(mandatory)*
-
- <!--
- IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
- Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
- you should still have a viable MVP (Minimum Viable Product) that delivers value.
-
- Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
- Think of each story as a standalone slice of functionality that can be:
- - Developed independently
- - Tested independently
- - Deployed independently
- - Demonstrated to users independently
- -->
-
- ### User Story 1 - [Brief Title] (Priority: P1)
-
- [Describe this user journey in plain language]
-
- **Why this priority**: [Explain the value and why it has this priority level]
-
- **Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
-
- **Acceptance Scenarios**:
-
- 1. **Given** [initial state], **When** [action], **Then** [expected outcome]
- 2. **Given** [initial state], **When** [action], **Then** [expected outcome]
-
- ---
-
- ### User Story 2 - [Brief Title] (Priority: P2)
-
- [Describe this user journey in plain language]
-
- **Why this priority**: [Explain the value and why it has this priority level]
-
- **Independent Test**: [Describe how this can be tested independently]
-
- **Acceptance Scenarios**:
-
- 1. **Given** [initial state], **When** [action], **Then** [expected outcome]
-
- ---
-
- ### User Story 3 - [Brief Title] (Priority: P3)
-
- [Describe this user journey in plain language]
-
- **Why this priority**: [Explain the value and why it has this priority level]
-
- **Independent Test**: [Describe how this can be tested independently]
-
- **Acceptance Scenarios**:
-
- 1. **Given** [initial state], **When** [action], **Then** [expected outcome]
-
- ---
-
- [Add more user stories as needed, each with an assigned priority]
-
- ### Edge Cases
-
- <!--
- ACTION REQUIRED: The content in this section represents placeholders.
- Fill them out with the right edge cases.
- -->
-
- - What happens when [boundary condition]?
- - How does system handle [error scenario]?
-
- ## Requirements *(mandatory)*
-
- <!--
- ACTION REQUIRED: The content in this section represents placeholders.
- Fill them out with the right functional requirements.
- -->
-
- ### Functional Requirements
-
- - **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- - **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- - **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- - **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- - **FR-005**: System MUST [behavior, e.g., "log all security events"]
-
- *Example of marking unclear requirements:*
-
- - **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- - **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
-
- ### Key Entities *(include if feature involves data)*
-
- - **[Entity 1]**: [What it represents, key attributes without implementation]
- - **[Entity 2]**: [What it represents, relationships to other entities]
-
- ## Success Criteria *(mandatory)*
-
- <!--
- ACTION REQUIRED: Define measurable success criteria.
- These must be technology-agnostic and measurable.
- -->
-
- ### Measurable Outcomes
-
- - **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- - **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- - **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- - **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
.specify/templates/tasks-template.md DELETED
@@ -1,251 +0,0 @@
- ---
-
- description: "Task list template for feature implementation"
- ---
-
- # Tasks: [FEATURE NAME]
-
- **Input**: Design documents from `/specs/[###-feature-name]/`
- **Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
-
- **Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
-
- **Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
-
- ## Format: `[ID] [P?] [Story] Description`
-
- - **[P]**: Can run in parallel (different files, no dependencies)
- - **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- - Include exact file paths in descriptions
-
- ## Path Conventions
-
- - **Single project**: `src/`, `tests/` at repository root
- - **Web app**: `backend/src/`, `frontend/src/`
- - **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- - Paths shown below assume single project - adjust based on plan.md structure
-
- <!--
- ============================================================================
- IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
-
- The /speckit.tasks command MUST replace these with actual tasks based on:
- - User stories from spec.md (with their priorities P1, P2, P3...)
- - Feature requirements from plan.md
- - Entities from data-model.md
- - Endpoints from contracts/
-
- Tasks MUST be organized by user story so each story can be:
- - Implemented independently
- - Tested independently
- - Delivered as an MVP increment
-
- DO NOT keep these sample tasks in the generated tasks.md file.
- ============================================================================
- -->
-
- ## Phase 1: Setup (Shared Infrastructure)
-
- **Purpose**: Project initialization and basic structure
-
- - [ ] T001 Create project structure per implementation plan
- - [ ] T002 Initialize [language] project with [framework] dependencies
- - [ ] T003 [P] Configure linting and formatting tools
-
- ---
-
- ## Phase 2: Foundational (Blocking Prerequisites)
-
- **Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
-
- **⚠️ CRITICAL**: No user story work can begin until this phase is complete
-
- Examples of foundational tasks (adjust based on your project):
-
- - [ ] T004 Setup database schema and migrations framework
- - [ ] T005 [P] Implement authentication/authorization framework
- - [ ] T006 [P] Setup API routing and middleware structure
- - [ ] T007 Create base models/entities that all stories depend on
- - [ ] T008 Configure error handling and logging infrastructure
- - [ ] T009 Setup environment configuration management
-
- **Checkpoint**: Foundation ready - user story implementation can now begin in parallel
-
- ---
-
- ## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
-
- **Goal**: [Brief description of what this story delivers]
-
- **Independent Test**: [How to verify this story works on its own]
-
- ### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️
-
- > **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
-
- - [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- - [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
-
- ### Implementation for User Story 1
-
- - [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- - [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- - [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- - [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- - [ ] T016 [US1] Add validation and error handling
- - [ ] T017 [US1] Add logging for user story 1 operations
-
- **Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
-
- ---
-
- ## Phase 4: User Story 2 - [Title] (Priority: P2)
-
- **Goal**: [Brief description of what this story delivers]
-
- **Independent Test**: [How to verify this story works on its own]
-
- ### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
-
- - [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- - [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
-
- ### Implementation for User Story 2
-
- - [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- - [ ] T021 [US2] Implement [Service] in src/services/[service].py
- - [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- - [ ] T023 [US2] Integrate with User Story 1 components (if needed)
-
- **Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
-
- ---
-
- ## Phase 5: User Story 3 - [Title] (Priority: P3)
-
- **Goal**: [Brief description of what this story delivers]
-
- **Independent Test**: [How to verify this story works on its own]
-
- ### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
-
- - [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- - [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
-
- ### Implementation for User Story 3
-
- - [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- - [ ] T027 [US3] Implement [Service] in src/services/[service].py
- - [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
-
- **Checkpoint**: All user stories should now be independently functional
-
- ---
-
- [Add more user story phases as needed, following the same pattern]
-
- ---
-
- ## Phase N: Polish & Cross-Cutting Concerns
-
- **Purpose**: Improvements that affect multiple user stories
-
- - [ ] TXXX [P] Documentation updates in docs/
- - [ ] TXXX Code cleanup and refactoring
- - [ ] TXXX Performance optimization across all stories
- - [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- - [ ] TXXX Security hardening
- - [ ] TXXX Run quickstart.md validation
-
- ---
-
- ## Dependencies & Execution Order
-
- ### Phase Dependencies
-
- - **Setup (Phase 1)**: No dependencies - can start immediately
- - **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- - **User Stories (Phase 3+)**: All depend on Foundational phase completion
-   - User stories can then proceed in parallel (if staffed)
-   - Or sequentially in priority order (P1 → P2 → P3)
- - **Polish (Final Phase)**: Depends on all desired user stories being complete
-
- ### User Story Dependencies
-
- - **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- - **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- - **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
-
- ### Within Each User Story
-
- - Tests (if included) MUST be written and FAIL before implementation
- - Models before services
- - Services before endpoints
- - Core implementation before integration
- - Story complete before moving to next priority
-
- ### Parallel Opportunities
-
- - All Setup tasks marked [P] can run in parallel
- - All Foundational tasks marked [P] can run in parallel (within Phase 2)
- - Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- - All tests for a user story marked [P] can run in parallel
- - Models within a story marked [P] can run in parallel
- - Different user stories can be worked on in parallel by different team members
-
- ---
-
- ## Parallel Example: User Story 1
-
- ```bash
- # Launch all tests for User Story 1 together (if tests requested):
- Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
- Task: "Integration test for [user journey] in tests/integration/test_[name].py"
-
- # Launch all models for User Story 1 together:
- Task: "Create [Entity1] model in src/models/[entity1].py"
- Task: "Create [Entity2] model in src/models/[entity2].py"
- ```
-
- ---
-
- ## Implementation Strategy
-
- ### MVP First (User Story 1 Only)
-
- 1. Complete Phase 1: Setup
- 2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
- 3. Complete Phase 3: User Story 1
- 4. **STOP and VALIDATE**: Test User Story 1 independently
- 5. Deploy/demo if ready
-
- ### Incremental Delivery
-
- 1. Complete Setup + Foundational → Foundation ready
- 2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
- 3. Add User Story 2 → Test independently → Deploy/Demo
- 4. Add User Story 3 → Test independently → Deploy/Demo
- 5. Each story adds value without breaking previous stories
-
- ### Parallel Team Strategy
-
- With multiple developers:
-
- 1. Team completes Setup + Foundational together
- 2. Once Foundational is done:
-    - Developer A: User Story 1
-    - Developer B: User Story 2
-    - Developer C: User Story 3
- 3. Stories complete and integrate independently
-
- ---
-
- ## Notes
-
- - [P] tasks = different files, no dependencies
- - [Story] label maps task to specific user story for traceability
- - Each user story should be independently completable and testable
- - Verify tests fail before implementing
- - Commit after each task or logical group
- - Stop at any checkpoint to validate story independently
- - Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence
.vscode/settings.json DELETED
@@ -1,14 +0,0 @@
- {
-   "chat.promptFilesRecommendations": {
-     "speckit.constitution": true,
-     "speckit.specify": true,
-     "speckit.plan": true,
-     "speckit.tasks": true,
-     "speckit.implement": true
-   },
-   "chat.tools.terminal.autoApprove": {
-     ".specify/scripts/bash/": true,
-     ".specify/scripts/powershell/": true
-   }
- }
-
specs/001-hf-demo-workflow/checklists/requirements.md DELETED
@@ -1,35 +0,0 @@
- # Specification Quality Checklist: Hugging Face Demo - Waveform Verification and Intensity Prediction Workflow
-
- **Purpose**: Verify the spec's completeness and quality before entering the planning phase
- **Created**: 2025-10-22
- **Feature**: [Link to spec.md](../spec.md)
-
- ## Content Quality
-
- - [x] No implementation details (languages, frameworks, APIs)
- - [x] Focused on user value and business needs
- - [x] Written for non-technical stakeholders
- - [x] All mandatory sections completed
-
- ## Requirement Completeness
-
- - [x] No [NEEDS CLARIFICATION] markers remain
- - [x] Requirements are testable and unambiguous
- - [x] Success criteria are measurable
- - [x] Success criteria are technology-agnostic (no implementation details)
- - [x] All acceptance scenarios are defined
- - [x] Edge cases are identified
- - [x] Scope is clearly bounded
- - [x] Dependencies and assumptions identified
-
- ## Feature Readiness
-
- - [x] All functional requirements have clear acceptance criteria
- - [x] User scenarios cover primary flows
- - [x] Feature meets measurable outcomes defined in Success Criteria
- - [x] No implementation details leak into specification
-
- ## Notes
-
- - Added the demo-specific flow (show the ground truth first; time window as "start + duration"; zero-mask to 30 s when shorter; take the nearest 25 stations by epicentral distance; show predictions at the eew_target reference coordinates).
- - Key entities, edge cases, and success criteria are complete and testable; no [NEEDS CLARIFICATION] remains.
specs/001-hf-demo-workflow/contracts/openapi.yaml DELETED
@@ -1,110 +0,0 @@
- openapi: 3.0.3
- info:
-   title: TTSAM HF Demo (logical API for planning)
-   version: 0.1.0
- servers:
-   - url: https://example.local
- paths:
-   /load_event:
-     post:
-       summary: Load a preloaded earthquake event and display ground truth
-       requestBody:
-         required: true
-         content:
-           application/json:
-             schema:
-               type: object
-               properties:
-                 event_id:
-                   type: string
-               required: [event_id]
-       responses:
-         '200':
-           description: Event data and ground truth reference
-           content:
-             application/json:
-               schema:
-                 type: object
-                 properties:
-                   ground_truth_html:
-                     type: string
-                     description: HTML snippet to render the ground truth/intensity map
-                   epicenter:
-                     type: object
-                     properties:
-                       lat: { type: number }
-                       lon: { type: number }
-   /window_waveforms:
-     post:
-       summary: Select time window by start+duration; zero-mask to 30s if needed
-       requestBody:
-         required: true
-         content:
-           application/json:
-             schema:
-               type: object
-               properties:
-                 start_sec: { type: number }
-                 duration_sec: { type: number }
-                 event_id: { type: string }
-               required: [start_sec, duration_sec, event_id]
-       responses:
-         '200':
-           description: Windowed waveforms for nearest 25 stations
-           content:
-             application/json:
-               schema:
-                 type: object
-                 properties:
-                   stations:
-                     type: array
-                     items:
-                       type: object
-                       properties:
-                         code: { type: string }
-                         lat: { type: number }
-                         lon: { type: number }
-                   waveforms:
-                     type: array
-                     items:
-                       type: object
-                       properties:
-                         code: { type: string }
-                         z: { type: array, items: { type: number } }
-                         n: { type: array, items: { type: number } }
-                         e: { type: array, items: { type: number } }
-   /predict_intensity:
-     post:
-       summary: Run TTSAM prediction and return intensities at eew_target points
-       requestBody:
-         required: true
-         content:
-           application/json:
-             schema:
-               type: object
-               properties:
-                 event_id: { type: string }
-                 start_sec: { type: number }
-                 duration_sec: { type: number }
-               required: [event_id, start_sec, duration_sec]
-       responses:
-         '200':
-           description: Predicted intensity map and values for targets
-           content:
-             application/json:
-               schema:
-                 type: object
-                 properties:
-                   map_html:
-                     type: string
-                     description: HTML snippet (folium/Leaflet) sized to match ground truth image
-                   predictions:
-                     type: array
-                     items:
-                       type: object
-                       properties:
-                         name: { type: string }
-                         lat: { type: number }
-                         lon: { type: number }
-                         intensity: { type: string }
-
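The contract above is explicitly a logical API for planning, not a live service. As a rough illustration of the `/predict_intensity` request body it defines, a minimal payload builder might look like this (the helper name is hypothetical, not from the repo):

```python
import json

def build_predict_request(event_id: str, start_sec: float, duration_sec: float) -> str:
    """Serialize the three required fields of the /predict_intensity request body."""
    payload = {"event_id": event_id, "start_sec": start_sec, "duration_sec": duration_sec}
    # Mirror the contract's `required` list before serializing.
    missing = [k for k in ("event_id", "start_sec", "duration_sec") if payload[k] is None]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return json.dumps(payload)
```

The response would then carry `map_html` plus a `predictions` array of named points, per the schema above.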
specs/001-hf-demo-workflow/data-model.md DELETED
@@ -1,50 +0,0 @@
- # Data Model: Hugging Face Demo - Waveform Verification and Intensity Prediction
-
- ## Entities
-
- ### EarthquakeEvent
- - id: string (e.g., `20240403`)
- - name: string (display name)
- - date: date
- - epicenter_lat: float
- - epicenter_lon: float
- - waveform_file: path (e.g., `waveform/20240403.mseed`)
- - ground_truth_image: path (isoseismal map; a list if there are multiple images)
-
- ### InputStation
- - code: string
- - lat: float
- - lon: float
- - elevation: float
-
- ### WaveformSegment
- - station_code: string
- - start_time: float (seconds, relative to the event time base)
- - duration: float (seconds)
- - sample_rate: int (Hz, default 100)
- - data_z: float[3000] (after windowing/masking)
- - data_n: float[3000]
- - data_e: float[3000]
-
- ### TargetLocation (eew_target)
- - name: string (county/district)
- - lat: float
- - lon: float
- - vs30: float (look up from a table or default to 600 if missing)
-
- ### IntensityPrediction
- - target_name: string
- - pga: float (gal)
- - intensity_level: string (0/1/2/3/4/5-/5+/6-/6+/7)
-
- ## Relationships
- - EarthquakeEvent 1 - N WaveformSegment (across stations)
- - InputStation 1 - N WaveformSegment
- - EarthquakeEvent maps to 1..N ground truth images (depending on the data)
- - N TargetLocation map to N IntensityPrediction in a single inference
-
- ## Validation Rules
- - Zero-mask a WaveformSegment to 30 seconds when it is shorter than 30 seconds
- - Station selection: sort by epicentral distance and take the nearest 25 stations
- - Consistent color scale: prediction vs. ground truth use the same legend range
-
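The first two validation rules above (zero-masking to 30 s at the default 100 Hz, and nearest-25 station selection) could be sketched roughly as follows; the function names and the planar-distance approximation are illustrative assumptions, not code from the repo:

```python
import numpy as np

SAMPLE_RATE = 100                           # Hz, default from the data model
TARGET_SAMPLES = SAMPLE_RATE * 30           # 3000 samples per component (30 s)

def zero_mask(trace: np.ndarray) -> np.ndarray:
    """Pad a component trace with zeros up to 30 s; truncate if longer."""
    if len(trace) >= TARGET_SAMPLES:
        return trace[:TARGET_SAMPLES]
    out = np.zeros(TARGET_SAMPLES, dtype=trace.dtype)
    out[: len(trace)] = trace
    return out

def nearest_stations(stations, epi_lat, epi_lon, k=25):
    """Sort stations by (approximate planar) epicentral distance, keep the nearest k."""
    def dist2(s):
        return (s["lat"] - epi_lat) ** 2 + (s["lon"] - epi_lon) ** 2
    return sorted(stations, key=dist2)[:k]
```

A production version would presumably use a geodesic distance (e.g., obspy's `gps2dist_azimuth`) rather than the squared-degree approximation shown here.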
specs/001-hf-demo-workflow/plan.md DELETED
@@ -1,64 +0,0 @@
- # Implementation Plan: Hugging Face Demo - Waveform Verification and Intensity Prediction
-
- **Branch**: `001-hf-demo-workflow` | **Date**: 2025-10-22 | **Spec**: /Users/jimmy/Library/CloudStorage/OneDrive-Personal/Code/TTSAM/specs/001-hf-demo-workflow/spec.md
- **Input**: Feature specification from `/specs/001-hf-demo-workflow/spec.md`
-
- > Language Note: This plan document may be written in Chinese; keep the English term in parentheses on first occurrence. Content must align with the Constitution and avoid leaking implementation details back into the spec layer.
-
- ## Summary
-
- Build an interactive demo on Hugging Face Spaces with Gradio. Flow: select an event → show the ground truth first → choose "start time + duration" → (zero-mask to 30 seconds if shorter) → take the nearest 25 stations by epicentral distance → display waveforms for confirmation → run the TTSAM prediction → show intensities on a map at the eew_target reference coordinates. Supports local Docker execution; the map is rendered as a web component sized to match the ground truth image as closely as possible.
-
- ## Technical Context
-
- **Language/Version**: Python 3.10 [NEEDS CLARIFICATION: either 3.10 or 3.11 works; the HF Spaces default is OK]
- **Primary Dependencies**: gradio, torch (CPU), obspy, numpy, matplotlib, xarray, netCDF4, scipy, pandas, loguru, huggingface_hub, folium
- **Storage**: N/A (local files + datasets downloaded from the HF Hub)
- **Testing**: pytest [NEEDS CLARIFICATION]
- **Target Platform**: Hugging Face Spaces (Gradio) and local Docker
- **Project Type**: single (root-level `app.py`)
- **Performance Goals**: startup/map load within 10 s; prediction completes in < 2 minutes
- **Constraints**: consistent map color scale; map container sized to match the ground truth image; runs on HF Spaces CPU
- **Scale/Scope**: booth-style interaction with few concurrent users; limited events and target points per run
-
- ## Constitution Check
-
- *GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
-
- - Is the spec centered on user value, technology-neutral, with quantifiable success criteria? → Pass (see spec.md SC-001..006)
- - Can each user story be developed/tested/demonstrated independently (Given/When/Then)? → Pass (US1/US2/US3)
- - Does the language policy follow "Chinese first, English terms on first occurrence"? → Pass
- - Are boundaries/risks/dependencies and NEEDS CLARIFICATION items listed and tracked in this plan? → Partially open (see Research below)
-
- ## Project Structure
-
- ### Documentation (this feature)
-
- ```
- specs/001-hf-demo-workflow/
- ├── plan.md          # This file
- ├── research.md      # Phase 0
- ├── data-model.md    # Phase 1 output
- ├── quickstart.md    # Phase 1 output
- ├── contracts/       # Phase 1 output
- │   └── openapi.yaml
- └── tasks.md         # Phase 2 (generated by /speckit.tasks)
- ```
-
- ### Source Code (repository root)
-
- ```
- # Single project (current state)
- app.py
- requirements.txt
- intensityMap.html
- station/
- waveform/
- ```
-
- **Structure Decision**: Keep the single-project root layout; the Gradio interface lives in `app.py`, and the map is rendered via an HTML/folium/Gradio HTML container.
-
- ## Complexity Tracking
-
- N/A (no constitution violations requiring an exemption).
-
specs/001-hf-demo-workflow/quickstart.md DELETED
@@ -1,76 +0,0 @@
- # Quickstart: TTSAM HF Demo (Gradio + Docker)
-
- ## Workflow Overview (User Perspective)
-
- ### [T015] UI Layout and Steps
-
- **Left/right layout (fixed height 800):**
- - **Left panel**: ground truth intensity map (height=800) - shown immediately when an event is selected
- - **Right panel**: predicted intensity map (height=800) - shown after prediction completes
-
- **Steps:**
- 1. **Select an earthquake event** → the system immediately loads and shows the ground truth image on the left ("show the answer first")
- 2. **Set time parameters**:
-    - Start time (slider, seconds)
-    - Duration (slider, seconds) - if < 30 seconds, the system automatically zero-masks to pad the window
- 3. **Enter the epicenter location**: latitude/longitude values
- 4. **Load waveforms**: click the "📊 Load waveforms" button
- ## 3) Build and Run the Docker Image (Local Preview)
-
- [T014] Launch the local preview via Docker to verify the fixed height-800 layout and the full flow:
-
-    - Show the distance-time plot (record section) with the highlighted time window
-    - Show the input-station distribution map (all stations + the selected 25 highlighted)
- 5. **Run prediction**: after confirming the waveform range, click the "🔮 Run prediction" button
-    - Show the predicted intensity map on the right (same height as the ground truth on the left)
-    - Show statistics (stations used, missing components, number of prediction target points, maximum intensity, etc.)
-
- **Height settings:**
- - Ground truth image container: fixed height=800
- - Prediction map (Folium): fixed height='800px', width='100%'
- - Input-station map (Folium): fixed height='800px', width='100%'
- - Left and right panels share the same height for easy visual comparison
-
- ---
-
- **Verification points:**
- - The ground truth image appears immediately on event selection (left, height=800)
- - The prediction map matches the ground truth height (right, height=800)
- - The time window is selected via "start time + duration"
- - Windows under 30 seconds are automatically zero-masked
- - Missing N/E components are substituted with the Z component and reported in the status message
-
- ## 1) Run on Hugging Face Spaces
- - Push this repository with `requirements.txt` and `app.py` to a Space (Gradio type).
- - The Space installs dependencies and starts `app.py` automatically.
- - [T015] The UI uses the fixed height-800 left/right layout and shows the ground truth image first on event selection.
-
- ## 2) Run Locally (Python)
- ```bash
- python -m venv .venv
- source .venv/bin/activate
- pip install -U pip
- pip install -r requirements.txt
- python app.py
- ```
- - Gradio serves at http://127.0.0.1:7860 by default
-
- ## 3) Build and Run the Docker Image
- ```bash
- # Build
- docker build -t ttsam-demo:cpu .
-
- # Run (CPU)
- docker run --rm -p 7860:7860 \
-   -e GRADIO_SERVER_PORT=7860 \
-   -e GRADIO_SERVER_NAME=0.0.0.0 \
-   ttsam-demo:cpu
-
- # (Optional) For GPU, with an NVIDIA driver and runtime installed:
- # docker run --rm --gpus all -p 7860:7860 ttsam-demo:cpu
- ```
-
- ## 4) FAQ
- - Slow HF Hub dataset downloads: pre-cache them or mount a local data volume.
- - netCDF4/obspy build failures: use the provided Dockerfile (required system packages preinstalled).
-
specs/001-hf-demo-workflow/research.md DELETED
@@ -1,41 +0,0 @@
- # Research: Hugging Face Demo — Waveform Review and Intensity Prediction
-
- ## Unknowns (from Technical Context)
-
- 1. Python version: 3.10 or 3.11? (HF Spaces and Torch CPU compatibility)
- 2. Test framework: pytest or something else?
- 3. Map rendering: folium (Leaflet) embedded in Gradio HTML vs. another JS map; how to match the ground truth image's size?
- 4. Docker dependencies: the minimal set of system packages for netCDF4/obspy.
- 5. Torch inference: Spaces defaults to CPU; is CUDA needed locally? (CPU suffices for the booth demo)
-
- ## Findings & Decisions
-
- ### D1. Python version
- - Decision: use Python 3.10 as the baseline (supported on Spaces, stable package compatibility).
- - Rationale: torch/obspy/netCDF4 have mature wheels for 3.10.
- - Alternatives: 3.11 (viable; can be upgraded to later if needed).
-
- ### D2. Test framework
- - Decision: adopt pytest (add basic unit tests later: data loading, masking, sorting).
- - Rationale: mature ecosystem, quick to learn.
- - Alternatives: nose, unittest (no clear advantage).
-
- ### D3. Map rendering and size matching
- - Decision: generate HTML with folium (Leaflet) and embed it in a Gradio `gr.HTML` container; set the outer div's width/height via CSS. Read the ground truth image's dimensions with PIL, apply the same width/height to the map container, and set the Leaflet map's CSS (e.g. `#map { width: Wpx; height: Hpx; }`).
- - Rationale: folium is mature and integrates well with Python; a Gradio container accepts a raw HTML string.
- - Alternatives: show a static map (generated with matplotlib) or use a plain Gradio image component (loses interactivity).
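The sizing part of D3 is plain string work around whatever folium renders. A minimal sketch of the wrapper, assuming the inner HTML would come from something like `folium.Map(...)._repr_html_()` — the helper name and div styling here are illustrative, not the app's actual code:

```python
def wrap_map_html(map_html: str, width: int, height: int) -> str:
    # Fixed-size container so the embedded Leaflet map matches the
    # ground truth image dimensions measured with PIL.
    return (
        f'<div style="width:{width}px; height:{height}px; overflow:hidden;">'
        f"{map_html}</div>"
    )

# e.g. if PIL reports the ground truth image as 1024x800:
html = wrap_map_html("<em>rendered folium map</em>", 1024, 800)
```

In the app, a string like this would then be handed to a `gr.HTML` component.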
-
- ### D4. Docker dependencies
- - Decision: base image `python:3.10-slim`; install `build-essential libhdf5-dev libnetcdf-dev libopenblas-dev liblapack-dev` to support netCDF4/scipy; install everything else as pip wheels.
- - Rationale: lowers the risk of build failures; a reasonable size/compatibility trade-off.
- - Alternatives: a `conda` base (large image), `debian:bookworm` (Python must be installed manually).
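D4's choices imply a Dockerfile along these lines. This is a sketch reconstructed from the decision above and the quickstart's `docker run` flags, not necessarily the repo's actual `Dockerfile`:

```dockerfile
FROM python:3.10-slim

# System packages so netCDF4/scipy can build when no wheel matches
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libhdf5-dev libnetcdf-dev libopenblas-dev liblapack-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -U pip \
    && pip install --no-cache-dir -r requirements.txt
COPY . .

# Matches the env vars used in the quickstart's docker run command
ENV GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860
EXPOSE 7860
CMD ["python", "app.py"]
```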
-
- ### D5. Torch/CUDA
- - Decision: default to CPU (works on both Spaces and locally); for GPU, provide a separate `--gpus all` run reference.
- - Rationale: CPU inference time (< 2 minutes) is acceptable for a booth demo.
- - Alternatives: a CUDA base image (added complexity).
-
- ## Consolidated Choices
- - Python 3.10, pytest, folium + Gradio HTML container (sized to the image), CPU Torch.
- - Docker dependencies as above; optionally note GPU support in README/quickstart.
specs/001-hf-demo-workflow/spec.md DELETED
@@ -1,141 +0,0 @@
- # Feature Specification: Hugging Face Demo — Waveform Review and Intensity Prediction Workflow
-
- > Language note: this spec was originally written primarily in Chinese, with key English terms noted in parentheses at first mention. It must remain technology-neutral and testable, and provide quantifiable success criteria.
-
- **Feature Branch**: `001-hf-demo-workflow`
- **Created**: 2025-10-22
- **Status**: Draft
- **Input**: user description: "Build a Hugging Face demo that lets users review waveform data, run intensity prediction, and compare against ground truth"
-
- ## User Scenarios & Testing (required)
-
- <!--
- Note: user stories are ordered by priority (P1, P2, ...).
- Each story must be independently testable; implementing any single one should still yield a demonstrable MVP.
- -->
-
- ### User Story 1 — Review and confirm waveform data (Priority: P1)
-
- As a researcher or demo user, I want to visually inspect the seismic waveform data before running a prediction, so I can confirm data quality and understand the input.
-
- Why this priority: it is the first critical step of the workflow; users need confidence in and understanding of the input data first.
-
- Independent test: enter epicenter coordinates and a time range, click the "Load Waveform" button, and verify that station waveforms are displayed with the time window highlighted.
-
- Acceptance scenarios:
-
- 1. Given the user selects a preloaded major Taiwan earthquake event, when the event is selected, then the system immediately shows that event's ground truth isoseismal map for reference.
- 2. Given an available event, when the user enters the epicenter latitude/longitude and a "start + duration" time window and clicks "Load Waveform", then the system shows the selected stations' waveforms, sorted by distance.
- 3. Given the selected duration is less than 30 seconds, when waveforms are loaded, then the missing tail is zero-masked up to 30 seconds to simulate real-time data that has not yet arrived.
- 4. Given waveforms are loaded, when the user inspects the distance-time plot, then the selected time range is clearly highlighted.
- 5. Given stations are selected, when the map is shown, then all available stations are displayed with the ones chosen for prediction highlighted (by default the 25 nearest by epicentral distance; this demo does not sort by P-wave arrival).
-
- ---
-
- ### User Story 2 — Run intensity prediction (Priority: P1)
-
- As a demo user, I want to run the prediction model after confirming the waveforms, producing seismic intensity (converted from PGA) predictions for target locations.
-
- Why this priority: this is the core capability delivering the main value — turning waveform data into actionable intensity predictions.
-
- Independent test: after confirming waveforms, click "Execute Prediction" and verify that predicted intensities are computed and displayed for each target station.
-
- Acceptance scenarios:
-
- 1. Given waveform data is loaded and confirmed, when the user clicks "Execute Prediction", then the system runs the waveforms through the model and produces PGA predictions for all targets.
- 2. Given the prediction completes, when results are shown, then each target county reference coordinate (from eew_target.csv) shows its predicted intensity level, color-coded on the map.
- 3. Given multiple target points, when the prediction runs, then the system processes all of them and shows a completion status.
-
- ---
-
- ### User Story 3 — Compare against observations (Ground Truth) (Priority: P2)
-
- As a researcher validating model performance, I want to compare the predicted intensity map with the actually observed one, to visually assess prediction accuracy.
-
- Why this priority: demonstrating model credibility matters a great deal in a demo setting.
-
- Independent test: after completing one prediction, verify that the predicted and observed (ground truth) visualizations can be shown side by side.
-
- Acceptance scenarios:
-
- 1. Given a prediction has completed, when ground truth data exists for the event, then the system shows the observed intensity map next to the prediction.
- 2. Given both maps are shown, when the user compares them, then both use a consistent color scale.
- 3. Given prediction statistics, when results are shown, then key metrics appear, such as maximum predicted intensity, input station count, and target point count.
-
- ---
-
- ### Edge Cases
-
- - What happens when fewer than 25 stations are available near the epicenter?
- - How does the system handle missing or corrupted waveform data for some selected stations?
- - What happens when the selected time range falls outside the available waveform data?
- - What is the behavior when the selected event lacks ground truth data or its resolution differs?
- - What is the response when the prediction model fails to load or fails during processing?
- - How are incomplete waveforms (missing components) or windows under 30 seconds requiring heavy masking handled?
- - What happens when eew_target.csv is missing individual county reference coordinates?
-
- ## Requirements (required)
-
- ### Functional Requirements
-
- Waveform review stage:
- - FR-001: The system MUST let users pick an event from a preloaded list of major Taiwan earthquakes.
- - FR-002: The system MUST accept numeric epicenter coordinates (latitude, longitude).
- - FR-003: The system MUST let users specify the time range as "start time + duration".
- - FR-004: After the user clicks "Load Waveform", the system MUST load the seismic waveform data.
- - FR-005: The system MUST automatically select the 25 stations nearest to the epicenter (this demo does not sort by P-wave arrival).
- - FR-006: The system MUST show a distance-time waveform plot of all selected stations.
- - FR-007: The system MUST highlight the user's selected time range on the waveform plot.
- - FR-008: The system MUST show a map of all available stations with the selected ones highlighted.
- - FR-009: The system MUST enable the "Execute Prediction" button only after waveforms load successfully.
-
- Prediction stage:
- - FR-010: When the user clicks "Execute Prediction", the system MUST process the waveforms with the TTSAM model.
- - FR-011: The system MUST apply signal processing (detrend and filter) to the raw waveforms.
- - FR-012: The system MUST obtain or use default Vs30 (soil shear-wave velocity) data for all stations.
- - FR-013: The system MUST produce PGA (Peak Ground Acceleration) predictions for all target points.
- - FR-014: The system MUST convert PGA values to seismic intensity levels.
- - FR-015: The system MUST display predicted intensity, color-coded, on an interactive map.
- - FR-016: The system MUST show an intensity legend using a standard color scale.
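FR-014 leaves the exact conversion table open. As one concrete illustration, the pre-2020 CWB PGA-based intensity scale can be implemented as a threshold lookup; the bounds below come from that published scale and are used only as an assumption about what such a conversion could look like, not necessarily the demo's actual table:

```python
import bisect

# Lower bounds (gal) of intensity levels 1-7 in the pre-2020 CWB
# PGA-based scale; anything below 0.8 gal maps to intensity 0.
PGA_BOUNDS_GAL = [0.8, 2.5, 8.0, 25.0, 80.0, 250.0, 400.0]

def pga_to_intensity(pga_gal: float) -> int:
    """Map a PGA value (gal) to an intensity level 0-7."""
    return bisect.bisect_right(PGA_BOUNDS_GAL, pga_gal)

print(pga_to_intensity(30.0))  # → 4 (30 gal falls in the 25-80 bin)
```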
-
- Comparison & presentation:
- - FR-017: The system MUST load ground truth intensity data when available.
- - FR-018: The system MUST show ground truth and predicted intensity side by side.
- - FR-019: The system MUST show prediction statistics, including maximum predicted intensity and station counts.
- - FR-020: The system MUST use a consistent color scale between the prediction and ground truth visualizations.
-
- Demo-specific requirements:
- - FR-021: The system MUST show the ground truth isoseismal map first, as a reference, once the user selects an event.
- - FR-022: The system MUST support "start time + duration" windowing; when the duration is < 30 seconds, it MUST zero-mask-pad to 30 seconds to simulate the real-time case.
- - FR-023: The system MUST explicitly sort stations by epicentral distance and take the nearest 25 as input (no P-wave-arrival sorting).
- - FR-024: The system MUST show predicted intensity on the map at each county reference coordinate (from `station/eew_target.csv`).
- - FR-025: The system MUST distinguish "input stations" from "target reference coordinates" on the map, as separate layers or markers.
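FR-023's selection rule is simple enough to sketch. The station tuple layout and helper name below are made up for illustration — the real app works from its own station table — and the distance is the planar approximation the tasks file mentions, written here as an equirectangular projection:

```python
import math

def select_nearest_stations(stations, epicenter, k=25):
    """Pick the k stations closest to the epicenter (FR-023).

    `stations` is assumed to be a list of (code, lat, lon) tuples.
    A planar equirectangular distance is adequate for ranking at
    regional scale; no P-wave arrival times are used.
    """
    e_lat, e_lon = epicenter

    def dist(st):
        _, lat, lon = st
        dx = (lon - e_lon) * math.cos(math.radians((lat + e_lat) / 2))
        dy = lat - e_lat
        return math.hypot(dx, dy)

    return sorted(stations, key=dist)[:k]

stations = [("A", 23.5, 121.0), ("B", 24.0, 122.0), ("C", 23.6, 121.1)]
print([s[0] for s in select_nearest_stations(stations, (23.5, 121.0), k=2)])
# → ['A', 'C']
```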
-
- ### Key Entities
-
- - Earthquake Event: a historical event with a date identifier, epicenter info, and associated waveform files (preloaded major Taiwan earthquakes).
- - Input Station: a seismic monitoring station providing waveform data, with coordinates (latitude, longitude, elevation) and a station code.
- - Waveform Data: three-component (Z, N, E) seismic time series at a specific sampling rate.
- - Target Location: county reference coordinates (eew_target) used as display points for predictions.
- - Intensity Prediction: predicted ground-shaking intensity at a target location, derived from PGA values.
- - Ground Truth: the actually observed intensity distribution, used for validation and comparison.
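For readers who prefer types, two of the entities above might be modeled roughly as dataclasses. Every field name here is illustrative, not the app's actual data model:

```python
from dataclasses import dataclass

@dataclass
class InputStation:
    code: str          # station code
    lat: float
    lon: float
    elevation_m: float

@dataclass
class IntensityPrediction:
    target_name: str   # county reference point from eew_target.csv
    pga_gal: float     # predicted Peak Ground Acceleration
    intensity: int     # level derived from PGA

station = InputStation("TAP", 25.04, 121.51, 10.0)
```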
-
- ## Success Criteria (required)
-
- ### Measurable Outcomes
-
- - SC-001: Users can complete the full workflow (load waveforms → review → predict → compare) within 3 minutes.
- - SC-002: The waveform plot and station map load and render within 10 seconds of the user action.
- - SC-003: Predictions for all target points complete within 2 minutes.
- - SC-004: 95% of demo users successfully complete at least one full prediction run.
- - SC-005: The visual comparison of predicted vs. ground truth maps is understandable without extra explanation.
- - SC-006: The system successfully processes and displays results for at least 90% of the selected stations.
-
- ## Dependencies & Assumptions
-
- - Preloaded event data includes: matching waveform files, epicenter info, and the Central Weather Administration isoseismal map (ground truth).
- - `station/eew_target.csv` provides county reference coordinates as display points; missing entries are handled via a nearby known point or an explicit notice.
- - Vs30 falls back to a default or regional average when no station-specific value exists.
- - Real-time simulation uses a 30-second window baseline: when the selected window is shorter, the tail is zero-masked (masking only, no interpolation).
- - This demo does not compute P-wave arrivals; input stations are the 25 nearest by epicentral distance.
- - Maps and coordinates assume WGS84 (or an internally unified CRS); color scales stay consistent across layers.
specs/001-hf-demo-workflow/tasks.md DELETED
@@ -1,73 +0,0 @@
- # Tasks: HF Demo (Gradio + Spaces + Docker)
-
- **Input**: Design documents from `/specs/001-hf-demo-workflow/`
- **Prerequisites**: plan.md (required), spec.md, research.md, data-model.md, contracts/
-
- ## Gap List (app.py vs. spec)
-
- 1) Show Ground Truth right after event selection (FR-021)
- - Current: the Ground Truth image appears only after prediction completes.
- - Gap: the answer map is not shown at event-selection time.
-
- 2) Time-window selection (FR-022)
- - Current: the UI uses two sliders, "start time + end time".
- - Gap: the spec requires "start time + duration", with zero-mask padding to 30 s when the duration is < 30 s (the zero-padding logic exists; the UI and its semantics need aligning).
-
- 3) Final prediction map vs. input-station map (FR-025)
- - Current: input stations are shown on a separate map; the prediction map shows only the target reference coordinates.
- - Decision: keep two maps rather than merging them (fits the booth-demo use case).
-
- 4) Map size consistency with the Ground Truth panel
- - Current: the prediction map height is fixed at '600px'; left and right panels differ in height.
- - Decision: fix both panels at a height of 800 (Ground Truth image and prediction map both set to 800; width adapts to the layout).
-
- 5) Color-scale consistency for comparison (FR-020)
- - Current: users compare the two maps by eye; colors are preconfigured.
- - Decision: no extra program-side processing or legend-alignment task needed.
-
- 6) Other alignment points (largely satisfied)
- - Nearest 25 stations by epicentral distance (OK; uses planar distance)
- - Distance-time plot with highlighted time window (OK)
- - Data processing (detrend + low-pass) and Vs30 lookup (OK)
- - eew_target target-point prediction and display (OK)
-
- ---
-
- ## Phase 1: UI and state (US1 P1)
-
- - [x] T001 [US1] Change the UI to "start time (start) + duration"; remove the end-time slider (edit `app.py`)
- - [x] T002 [US1] On event selection: immediately load the Ground Truth image and show it on the left at a fixed height of 800 (edit `app.py`)
- - [x] T003 [US1] Keep the "Load Waveform" step; on click, show the distance-time plot and the input-station map; verify the highlighted window (start → start+duration) (edit `app.py`)
-
- ## Phase 2: Data slicing and masking (US1 P1)
-
- - [x] T004 [US1] `extract_waveforms_from_stream` accepts start/duration and computes end_time internally; windows under 30 s get trailing zero-mask padding (add comments and logic checks) (edit `app.py`)
- - [x] T005 [US1] Substitute missing components (N/E) with Z and record the count of stations with missing components in the status message (edit `app.py`)
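The masking rule that T004 adds to `extract_waveforms_from_stream` can be sketched on its own. The function below is a simplified stand-in for the real slicing code in `app.py`; it only shows the rule that a window shorter than 30 s is extended with trailing zeros (a mask, not interpolation), simulating data that has not yet arrived:

```python
import numpy as np

TARGET_SEC = 30  # real-time simulation baseline from the spec

def pad_window(trace: np.ndarray, sampling_rate: float) -> np.ndarray:
    """Zero-mask-pad a sliced waveform window to 30 s."""
    target_len = int(TARGET_SEC * sampling_rate)
    if len(trace) >= target_len:
        return trace[:target_len]
    out = np.zeros(target_len, dtype=trace.dtype)
    out[: len(trace)] = trace  # keep recorded samples, zero the tail
    return out

# A 12 s window at 100 Hz becomes 3000 samples with a zero-masked tail:
padded = pad_window(np.ones(1200), sampling_rate=100.0)
print(len(padded), int(padded.sum()))  # → 3000 1200
```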
-
- ## Phase 3: Maps and height settings (US2 P1 / FR-024)
-
- - [x] T006 [US2] `create_intensity_map`: fix the folium map's outer container height at 800 (width 100%); stop sizing it dynamically from the Ground Truth image (edit `app.py`)
- - [x] T007 [US2] `create_input_station_map`: fix the input-station map at a height of 800 as well (width 100%) (edit `app.py`)
-
- ## Phase 4: Prediction flow (US2 P1)
-
- - [x] T008 [US2] Change the `predict_intensity` interface to start+duration and output maps at the fixed height of 800 (do not pass map_size_state) (edit `app.py`)
- - [x] T009 [US2] Keep batched target-point processing (25 per batch); add the target point count to the statistics (edit `app.py`)
- - [x] T010 [US2] On completion, return: the Ground Truth image (filepath; left panel at height 800), the prediction map (HTML; right panel at height 800), and the statistics (edit `app.py`)
-
- ## Phase 5: Edge cases and error handling (P2)
-
- - [x] T011 [P] When the Ground Truth image is missing: show a notice and a blank placeholder at the default height of 800 (edit `app.py`)
- - [x] T012 [P] Error messages and skip strategy for missing columns in `site_info.csv` or `eew_target.csv` (edit `app.py`)
- - [x] T013 [P] When fewer than 25 stations are available: show the actual station count in the UI and allow continuing (edit `app.py`)
-
- ## Phase 6: Local runs and deployment (P2)
-
- - [x] T014 [P] Verify local behavior by launching via Docker; if needed, update the launch notes in `specs/001-hf-demo-workflow/quickstart.md` (uses `Dockerfile` and `specs/001-hf-demo-workflow/quickstart.md`)
- - [x] T015 [P] Add notes on the "fixed-height-800 two-panel layout" and "show the answer first" behavior to `specs/001-hf-demo-workflow/quickstart.md` (edit `specs/001-hf-demo-workflow/quickstart.md`)
-
- ---
-
- ## Notes
- - "P" marks tasks that can run in parallel (different files, no mutual dependencies)
- - If GPU support is needed, provide a docker run reference separately (no change to the main flow)
specs/002-epicenter-storage/checklists/requirements.md DELETED
File without changes