geqintan committed on
Commit 39e20cb · 1 Parent(s): 26fae3f
.clinerules/1-memory-bank.md DELETED
@@ -1,134 +0,0 @@
---
description: Describes Cline's Memory Bank system, its structure, and workflows for maintaining project knowledge across sessions.
author: https://github.com/nickbaumann98
version: 1.0
tags: ["memory-bank", "knowledge-base", "core-behavior", "documentation-protocol"]
globs: ["memory-bank/**/*.md", "*"]
---
# Cline's Memory Bank

My memory resets completely between sessions. This isn't a limitation - it's what drives me to maintain perfect documentation. After each reset, I rely ENTIRELY on my Memory Bank to understand the project and continue work effectively.

## Memory Bank Structure

The Memory Bank consists of core files and optional context files, all in Markdown format. Files build upon each other in a clear hierarchy:

```mermaid
flowchart TD
    PB[projectBrief.md] --> PC[productContext.md]
    PB --> SP[systemPatterns.md]
    PB --> TC[techContext.md]

    PC --> AC[activeContext.md]
    SP --> AC
    TC --> AC

    AC --> P[progress.md]
```

### Core Files (Required for Initial Load)
These files are crucial for initial project understanding and MUST be read at the start of EVERY task.
1. `projectBrief.md`
   - Foundation document that shapes all other files
   - Created at project start if it doesn't exist
   - Defines core requirements and goals
   - Source of truth for project scope

2. `productContext.md`
   - Why this project exists
   - Problems it solves
   - How it should work
   - User experience goals

3. `activeContext.md`
   - Current work focus
   - Recent changes
   - Next steps
   - Active decisions and considerations
   - Important patterns and preferences
   - Learnings and project insights

4. `progress.md`
   - What works
   - What's left to build
   - Current status
   - Known issues
   - Evolution of project decisions

### Supplemental Files (Load On Demand)
These files provide deeper context but can be loaded as needed to manage context window usage.
1. `systemPatterns.md`
   - System architecture
   - Key technical decisions
   - Design patterns in use
   - Component relationships
   - Critical implementation paths

2. `techContext.md`
   - Technologies used
   - Development setup
   - Technical constraints
   - Dependencies
   - Tool usage patterns

### Additional Context
Create additional files/folders within memory-bank/ when they help organize:
- Complex feature documentation
- Integration specifications
- API documentation
- Testing strategies
- Deployment procedures

## Core Workflows

### Plan Mode
```mermaid
flowchart TD
    Start[Start] --> ReadCoreFiles[Read Core Memory Bank Files]
    ReadCoreFiles --> CheckContext{Context Sufficient?}

    CheckContext -->|No| ReadSupplemental[Read Supplemental Files as Needed]
    ReadSupplemental --> Plan[Create Plan]
    Plan --> Document[Document in Chat]

    CheckContext -->|Yes| Verify[Verify Context]
    Verify --> Strategy[Develop Strategy]
    Strategy --> Present[Present Approach]
```

### Act Mode
```mermaid
flowchart TD
    Start[Start] --> Context["Check Memory Bank (Core & On-Demand)"]
    Context --> Update[Update Documentation]
    Update --> Execute[Execute Task]
    Execute --> Document[Document Changes]
```

## Documentation Updates

Memory Bank updates occur when:
1. Discovering new project patterns
2. After implementing significant changes
3. When the user requests **update memory bank** (MUST review ALL files)
4. When context needs clarification

```mermaid
flowchart TD
    Start[Update Process]

    subgraph Process
        P1[Review ALL Files]
        P2[Document Current State]
        P3[Clarify Next Steps]
        P4[Document Insights & Patterns]

        P1 --> P2 --> P3 --> P4
    end

    Start --> Process
```

Note: When triggered by **update memory bank**, I MUST review every memory bank file, even if some don't require updates. Focus particularly on activeContext.md and progress.md as they track current state.

REMEMBER: After every memory reset, I begin completely fresh. The Memory Bank is my only link to previous work. It must be maintained with precision and clarity, as my effectiveness depends entirely on its accuracy.
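The initial-load rule in this deleted file is mechanical enough to sketch in code. The helper below is purely illustrative - Cline reads these files through its own tools, not through a script - and only the file names and the core/supplemental split come from the rule; the function name and return shape are assumptions:

```python
from pathlib import Path

# File names taken from the rule; the split mirrors "Core Files (Required for
# Initial Load)" vs "Supplemental Files (Load On Demand)".
CORE_FILES = ["projectBrief.md", "productContext.md", "activeContext.md", "progress.md"]
SUPPLEMENTAL_FILES = ["systemPatterns.md", "techContext.md"]

def load_memory_bank(root="memory-bank", include_supplemental=False):
    """Return {filename: text} for core files, plus supplemental ones on demand.

    Missing files map to None so the caller can see which required files
    still need to be created (e.g. projectBrief.md at project start).
    """
    names = CORE_FILES + (SUPPLEMENTAL_FILES if include_supplemental else [])
    bank = {}
    for name in names:
        path = Path(root) / name
        bank[name] = path.read_text() if path.exists() else None
    return bank
```

A caller would load only the core files at task start and pass `include_supplemental=True` when the context proves insufficient, mirroring the Plan Mode flowchart above.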
.clinerules/2-new-task-automation.md DELETED
@@ -1,268 +0,0 @@
---
description: Workflow for starting a new task when the context window reaches 50% usage.
author: https://github.com/cline
version: 1.0
tags: ["context-management", "new-task"]
globs: ["*"]
---
# You MUST use the `new_task` tool: Task Handoff Strategy Guide

**⚠️ CRITICAL INSTRUCTIONS - YOU MUST FOLLOW THESE GUIDELINES ⚠️**

This guide provides **MANDATORY** instructions for effectively breaking down complex tasks and implementing a smooth handoff process between tasks. You **MUST** follow these guidelines to ensure continuity, context preservation, and efficient task completion.

## ⚠️ CONTEXT WINDOW MONITORING - MANDATORY ACTION REQUIRED ⚠️

You **MUST** monitor the context window usage displayed in the environment details. When usage exceeds 50% of the available context window, you **MUST** initiate a task handoff using the `new_task` tool.

Example of context window usage over 50% with a 200K context window:

```text
# Context Window Usage
105,000 / 200,000 tokens (53%)
Model: anthropic/claude-3.7-sonnet (200K context window)
```

**IMPORTANT**: When you see context window usage at or above 50%, you MUST:
1. Complete your current logical step
2. Use the `ask_followup_question` tool to offer creating a new task
3. If approved, use the `new_task` tool with comprehensive handoff instructions

## Task Breakdown in Plan Mode - REQUIRED PROCESS

Plan Mode is specifically designed for analyzing complex tasks and breaking them into manageable subtasks. When in Plan Mode, you **MUST**:

### 1. Initial Task Analysis - REQUIRED

- **MUST** begin by thoroughly understanding the full scope of the user's request
- **MUST** identify all major components and dependencies of the task
- **MUST** consider potential challenges, edge cases, and prerequisites

### 2. Strategic Task Decomposition - REQUIRED

- **MUST** break the overall task into logical, discrete subtasks
- **MUST** prioritize subtasks based on dependencies (what must be completed first)
- **MUST** aim for subtasks that can be completed within a single session (15-30 minutes of work)
- **MUST** consider natural breaking points where context switching makes sense

### 3. Creating a Task Roadmap - REQUIRED

- **MUST** present a clear, numbered list of subtasks to the user
- **MUST** explain dependencies between subtasks
- **MUST** provide time estimates for each subtask when possible
- **MUST** use Mermaid diagrams to visualize task flow and dependencies when helpful

```mermaid
graph TD
    A[Main Task] --> B[Subtask 1: Setup]
    A --> C[Subtask 2: Core Implementation]
    A --> D[Subtask 3: Testing]
    A --> E[Subtask 4: Documentation]
    B --> C
    C --> D
```

### 4. Getting User Approval - REQUIRED

- **MUST** ask for user feedback on the proposed task breakdown
- **MUST** adjust the plan based on user priorities or additional requirements
- **MUST** confirm which subtask to begin with
- **MUST** request the user to toggle to Act Mode when ready to implement

## Task Implementation and Handoff Process - MANDATORY PROCEDURES

When implementing tasks in Act Mode, you **MUST** follow these guidelines for effective task handoff:

### 1. Focused Implementation - REQUIRED

- **MUST** focus on completing the current subtask fully
- **MUST** document progress clearly through comments and commit messages
- **MUST** create checkpoints at logical completion points

### 2. Recognizing Completion Points - CRITICAL

You **MUST** identify natural handoff points when:
- The current subtask is fully completed
- You've reached a logical stopping point in a larger subtask
- The implementation is taking longer than expected and can be continued later
- The task scope has expanded beyond the original plan
- **CRITICAL**: The context window usage exceeds 50% (e.g., 100,000+ tokens for a 200K context window)

### 3. Initiating the Handoff Process - MANDATORY ACTION

When you've reached a completion point, you **MUST**:

1. Summarize what has been accomplished so far
2. Clearly state what remains to be done
3. **MANDATORY**: Use the `ask_followup_question` tool to offer creating a new task:

```xml
<ask_followup_question>
<question>I've completed [specific accomplishment]. Would you like me to create a new task to continue with [remaining work]?</question>
<options>["Yes, create a new task", "No, continue in this session", "Let me think about it"]</options>
</ask_followup_question>
```
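The 50% trigger described above is easy to express as a small check. This sketch is illustrative only - Cline evaluates the trigger itself, not via a script - and it assumes the environment-details line always looks like the `105,000 / 200,000 tokens (53%)` example; the function names are not part of any real API:

```python
def should_hand_off(used_tokens, window_tokens, threshold=0.5):
    """True once context usage reaches the handoff threshold (50% by default)."""
    return used_tokens / window_tokens >= threshold

def parse_usage(line):
    """Extract (used, total) token counts from a line like
    '105,000 / 200,000 tokens (53%)' - a format assumed from the example above."""
    used, _, rest = line.partition("/")
    total = rest.split("tokens")[0]
    return (int(used.strip().replace(",", "")),
            int(total.strip().replace(",", "")))
```

With the example numbers, `parse_usage` yields `(105000, 200000)` and `should_hand_off` fires, which is exactly the situation where the rule demands an `ask_followup_question` followed by `new_task`.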

### 4. Creating a New Task with Context - REQUIRED ACTION

If the user agrees to create a new task, you **MUST** use the `new_task` tool with comprehensive handoff instructions:

```xml
<new_task>
<context>
# Task Continuation: [Brief Task Title]

## Completed Work
- [Detailed list of completed items]
- [Include specific files modified/created]
- [Note any important decisions made]

## Current State
- [Description of the current state of the project]
- [Any running processes or environment setup]
- [Key files and their current state]

## Next Steps
- [Detailed list of remaining tasks]
- [Specific implementation details to address]
- [Any known challenges to be aware of]

## Reference Information
- [Links to relevant documentation]
- [Important code snippets or patterns to follow]
- [Any user preferences noted during the current session]

Please continue the implementation by [specific next action].
</context>
</new_task>
```

### 5. Detailed Context Transfer - MANDATORY COMPONENTS

When creating a new task, you **MUST** always include:

#### Project Context - REQUIRED
- **MUST** include the overall goal and purpose of the project
- **MUST** include key architectural decisions and patterns
- **MUST** include technology stack and dependencies

#### Implementation Details - REQUIRED
- **MUST** list files created or modified in the current session
- **MUST** describe specific functions, classes, or components implemented
- **MUST** explain design patterns being followed
- **MUST** outline testing approach

#### Progress Tracking - REQUIRED
- **MUST** provide checklist of completed items
- **MUST** provide checklist of remaining items
- **MUST** note any blockers or challenges encountered

#### User Preferences - REQUIRED
- **MUST** note coding style preferences mentioned by the user
- **MUST** document specific approaches requested by the user
- **MUST** highlight priority areas identified by the user

## Best Practices for Effective Handoffs - MANDATORY GUIDELINES

### 1. Maintain Continuity - REQUIRED

- **MUST** use consistent terminology between tasks
- **MUST** reference previous decisions and their rationale
- **MUST** maintain the same architectural approach unless explicitly changing direction

### 2. Preserve Context - REQUIRED

- **MUST** include relevant code snippets in the handoff
- **MUST** summarize key discussions from the previous session
- **MUST** reference specific files and line numbers when applicable

### 3. Set Clear Next Actions - REQUIRED

- **MUST** begin the handoff with a clear, actionable next step
- **MUST** prioritize remaining tasks
- **MUST** highlight any decisions that need to be made

### 4. Document Assumptions - REQUIRED

- **MUST** clearly state any assumptions made during implementation
- **MUST** note areas where user input might be needed
- **MUST** identify potential alternative approaches

### 5. Optimize for Resumability - REQUIRED

- **MUST** structure the handoff so the next session can begin working immediately
- **MUST** include setup instructions if environment configuration is needed
- **MUST** provide a quick summary at the top for rapid context restoration

## Example Task Handoff

### Example #1 of an effective task handoff:

```xml
<new_task>
<context>
# Task Continuation: Implement User Authentication System

## Completed Work
- Created basic Express.js server structure
- Implemented MongoDB connection and user schema
- Completed user registration endpoint with password hashing
- Added input validation using Joi
- Created initial test suite for registration endpoint

## Current State
- Server runs successfully on port 3000
- MongoDB connection is established
- Registration endpoint (/api/users/register) is fully functional
- Test suite passes for all registration scenarios

## Next Steps
1. Implement login endpoint (/api/users/login)
   - Use bcrypt to compare passwords
   - Generate JWT token upon successful login
   - Add proper error handling for invalid credentials
2. Create authentication middleware
   - Verify JWT tokens
   - Extract user information
   - Handle expired tokens
3. Add protected routes that require authentication
4. Implement password reset functionality

## Reference Information
- JWT secret should be stored in .env file
- Follow the existing error handling pattern in routes/users.js
- User schema is defined in models/User.js
- Test patterns are established in tests/auth.test.js

Please continue by implementing the login endpoint following the same patterns established in the registration endpoint.
</context>
</new_task>
```

### Example #2 of an ineffective task handoff:

*(Note: The example provided in the original rules showing "YOLO MODE Implementation" seems less like a direct handoff context block and more like a general status update with future considerations. A true ineffective handoff might lack detail in 'Current State' or 'Next Steps').*

## When to Use Task Handoffs - MANDATORY TRIGGERS

You **MUST** initiate task handoffs in these scenarios:

1. **CRITICAL**: When context window usage exceeds 50% (e.g., 100,000+ tokens for a 200K context window)
2. **Long-running projects** that exceed a single session
3. **Complex implementations** with multiple distinct phases
4. **When context window limitations** are approaching
5. **When switching focus areas** within a larger project
6. **When different expertise** might be beneficial for different parts of the task

**⚠️ FINAL REMINDER - CRITICAL INSTRUCTION ⚠️**

You **MUST** monitor the context window usage in the environment details section. When it exceeds 50% (e.g., "105,000 / 200,000 tokens (53%)"), you **MUST** proactively initiate the task handoff process using the `ask_followup_question` tool followed by the `new_task` tool. You MUST use the `new_task` tool.

By strictly following these guidelines, you'll ensure smooth transitions between tasks, maintain project momentum, and provide the best possible experience for users working on complex, multi-session projects.

## User Interaction & Workflow Considerations

* **Linear Flow:** Currently, using `new_task` creates a linear sequence. The old task ends, and the new one begins. The old task history remains accessible for backtracking.
* **User Approval:** You always have control, approving the handoff and having the chance to modify the context Cline proposes to carry forward.
* **Flexibility:** The core `new_task` tool is a flexible building block. Experiment with `.clinerules` to create workflows that best suit your needs, whether for strict context management, task decomposition, or other creative uses.
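The handoff `<context>` template in this deleted rule can be rendered mechanically. The helper below is a hypothetical sketch of that rendering - the section headings come from the template, but the function, its signature, and the list-based inputs are all assumptions for illustration:

```python
def render_handoff(title, completed, current_state, next_steps, references, next_action):
    """Assemble a handoff context block following the template's five sections."""
    sections = [
        (f"# Task Continuation: {title}", None),
        ("## Completed Work", completed),
        ("## Current State", current_state),
        ("## Next Steps", next_steps),
        ("## Reference Information", references),
    ]
    lines = []
    for heading, items in sections:
        lines.append(heading)
        if items:
            # Each item becomes one bullet, matching the template's list style.
            lines.extend(f"- {item}" for item in items)
        lines.append("")
    lines.append(f"Please continue the implementation by {next_action}.")
    return "\n".join(lines)
```

The output of such a helper would be pasted into the `<context>` element of the `new_task` call shown in Example #1.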
.clinerules/3-cline-for-research.md DELETED
@@ -1,66 +0,0 @@
---
description: Guides the user through a research process using available MCP tools, offering choices for refinement, method, and output.
author: https://github.com/nickbaumann98
version: 1.0
tags: ["research", "mcp", "workflow", "assistant-behavior"]
globs: ["*"]
---
# Cline for Research Assistant

**Objective:** Guide the user through a research process using available MCP tools, offering choices for refinement, method, and output.

**Workflow:**

1. **Initiation:** This rule activates automatically when it is toggled "on" and the user asks a question that appears to be a research request. It then takes the user's initial question as the starting `research_topic`.
2. **Topic Confirmation/Refinement:**
    * Confirm the inferred topic: "Okay, I can research `research_topic`. Would you like to refine this query first?"
    * Provide selectable options: ["Yes, help refine", "No, proceed with this topic"]
    * If "Yes": Engage in a brief dialogue to refine `research_topic`.
    * If "No": Proceed.
3. **Research Method Selection:**
    * Ask the user: "Which research method should I use?"
    * Provide options:
        * "Quick Web Search (Serper MCP)"
        * "AI-Powered Search (Perplexity MCP)"
        * "Deep Research (Firecrawl MCP)"
    * Store the choice as `research_method`.
4. **Output Format Selection:**
    * Ask the user: "How should I deliver the results?"
    * Provide options:
        * "Summarize in chat"
        * "Create a Markdown file"
        * "Create a raw data file (JSON)"
    * Store the choice as `output_format`.
    * If a file format is chosen, ask: "What filename should I use? (e.g., `topic_results.md` or `topic_data.json`)" Store as `output_filename`. Default to `research_results.md` or `research_data.json` if no name is provided.
5. **Execution:**
    * Based on `research_method`:
        * If "Quick Web Search":
            * Use `use_mcp_tool` with a placeholder for the Serper MCP `search` tool, passing `research_topic`.
            * Inform the user: "Executing Quick Web Search via Serper MCP..."
        * If "AI-Powered Search":
            * Use `use_mcp_tool` for `github.com/pashpashpash/perplexity-mcp` -> `search` tool, passing `research_topic`.
            * Inform the user: "Executing AI-Powered Search via Perplexity MCP..."
        * If "Deep Research":
            * Use `use_mcp_tool` for `github.com/mendableai/firecrawl-mcp-server` -> `firecrawl_deep_research` tool, passing `research_topic`.
            * Inform the user: "Executing Deep Research via Firecrawl MCP... (This may take a few minutes)"
    * Store the raw result as `raw_research_data`.
6. **Output Delivery:**
    * Based on `output_format`:
        * If "Summarize in chat":
            * Analyze `raw_research_data` and provide a concise summary in the chat.
        * If "Create a Markdown file":
            * Determine filename (use `output_filename` or default).
            * Format `raw_research_data` into Markdown and use `write_to_file` to save it.
            * Inform the user: "Research results saved to `<filename>`."
        * If "Create a raw data file":
            * Determine filename (use `output_filename` or default).
            * Use `write_to_file` to save `raw_research_data` (likely JSON).
            * Inform the user: "Raw research data saved to `<filename>`."
7. **Completion:** End the rule execution.

---
**Notes:**

* This rule relies on the user having the Perplexity, Firecrawl, and Serper MCP servers connected and running.
* The "Quick Web Search" option is currently hypothetical and would require a Serper MCP server to be implemented and connected.
* Error handling (e.g., if an MCP tool fails) is omitted for brevity but should be considered for a production rule.
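Step 5's dispatch from chosen method to MCP server and tool can be tabulated. This sketch is illustrative only: the Perplexity and Firecrawl server names are the ones the rule itself cites, while the Serper entry is a placeholder (the rule notes that server is hypothetical), and `pick_tool` is not a real Cline function:

```python
# Maps each user-facing option label to an (mcp_server, tool_name) pair,
# as used in the rule's Execution step.
RESEARCH_METHODS = {
    "Quick Web Search (Serper MCP)": ("serper-mcp-placeholder", "search"),
    "AI-Powered Search (Perplexity MCP)": ("github.com/pashpashpash/perplexity-mcp", "search"),
    "Deep Research (Firecrawl MCP)": ("github.com/mendableai/firecrawl-mcp-server", "firecrawl_deep_research"),
}

def pick_tool(research_method):
    """Resolve the user's chosen method to the server/tool to call via use_mcp_tool."""
    try:
        return RESEARCH_METHODS[research_method]
    except KeyError:
        # Surfacing an unknown option is the minimal error handling the
        # rule's notes say a production version would need.
        raise ValueError(f"Unknown research method: {research_method}")
```

Keeping the mapping in one table makes it easy to add methods (or real error handling) without touching the rest of the workflow.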
.clinerules/4.sequential-thinking.md DELETED
@@ -1,119 +0,0 @@
---
description: A guide for effectively using the sequentialthinking MCP tool for dynamic and reflective problem-solving.
author: https://github.com/rafaelkallis
version: 1.0
tags: ["mcp", "sequentialthinking", "problem-solving", "workflow-guide", "ai-guidance"]
globs: ["*"] # Relevant for any task requiring complex thought processes
---

# Guide to Using the `sequentialthinking` MCP Tool

## 1. Objective

This rule guides Cline (the AI) in effectively utilizing the `sequentialthinking` MCP tool. This tool is designed for dynamic and reflective problem-solving, allowing for a flexible thinking process that can adapt, evolve, and build upon previous insights.

## 2. When to Use the `sequentialthinking` Tool

Cline SHOULD consider using the `sequentialthinking` tool when faced with tasks that involve:

* **Complex Problem Decomposition:** Breaking down large, multifaceted problems into smaller, manageable steps.
* **Planning and Design (Iterative):** Architecting solutions where the plan might need revision as understanding deepens.
* **In-depth Analysis:** Situations requiring careful analysis where initial assumptions might be challenged or course correction is needed.
* **Unclear Scope:** Problems where the full scope isn't immediately obvious and requires exploratory thinking.
* **Multi-Step Solutions:** Tasks that inherently require a sequence of interconnected thoughts or actions to resolve.
* **Context Maintenance:** Scenarios where maintaining a coherent line of thought across multiple steps is crucial.
* **Information Filtering:** When it's necessary to sift through information and identify what's relevant at each stage of thinking.
* **Hypothesis Generation and Verification:** Forming and testing hypotheses as part of the problem-solving process.

## 3. Core Principles for Using `sequentialthinking`

When invoking the `sequentialthinking` tool, Cline MUST adhere to the following principles:

* **Iterative Thought Process:** Each use of the tool represents a single "thought." Build upon, question, or revise previous thoughts in subsequent calls.
* **Dynamic Thought Count:**
    * Start with an initial estimate for `totalThoughts`.
    * Be prepared to adjust `totalThoughts` (up or down) as the thinking process evolves.
    * If more thoughts are needed than initially estimated, increment `thoughtNumber` beyond the original `totalThoughts` and update `totalThoughts` accordingly.
* **Honest Reflection:**
    * Express uncertainty if it exists.
    * Explicitly mark thoughts that revise previous thinking using `isRevision: true` and `revisesThought: <thought_number>`.
    * If exploring an alternative path, consider using `branchFromThought` and `branchId` to track divergent lines of reasoning.
* **Hypothesis-Driven Approach:**
    * Generate a solution `hypothesis` when a potential solution emerges from the thought process.
    * Verify the `hypothesis` based on the preceding Chain of Thought steps.
    * Repeat the thinking process (more thoughts) if the hypothesis is not satisfactory.
* **Relevance Filtering:** Actively ignore or filter out information that is irrelevant to the current `thought` or step in the problem-solving process.
* **Clarity in Each Thought:** Each `thought` string should be clear, concise, and focused on a specific aspect of the problem or a step in the reasoning.
* **Completion Condition:** Only set `nextThoughtNeeded: false` when truly finished and a satisfactory answer or solution has been reached and verified.

## 4. Parameters of the `sequentialthinking` Tool

Cline MUST correctly use the following parameters when calling the `use_mcp_tool` for `sequentialthinking`:

* **`thought` (string, required):** The current thinking step. This can be an analytical step, a question, a revision, a hypothesis, etc.
* **`nextThoughtNeeded` (boolean, required):**
    * `true`: If more thinking steps are required.
    * `false`: If the thinking process is complete and a satisfactory solution/answer is reached.
* **`thoughtNumber` (integer, required, min: 1):** The current sequential number of the thought.
* **`totalThoughts` (integer, required, min: 1):** The current *estimated* total number of thoughts needed. This can be adjusted.
* **`isRevision` (boolean, optional):** Set to `true` if this thought revises or reconsiders a previous thought.
* **`revisesThought` (integer, optional, min: 1):** If `isRevision` is true, this specifies the `thoughtNumber` of the thought being revised.
* **`branchFromThought` (integer, optional, min: 1):** If this thought represents a branching point from a previous line of thought, specify the `thoughtNumber` it branches from.
* **`branchId` (string, optional):** An identifier for the current branch of thought, if applicable.
* **`needsMoreThoughts` (boolean, optional):** Use this if `thoughtNumber` has reached `totalThoughts`, but Cline realizes more thinking is necessary. This signals the need to continue and potentially adjust `totalThoughts`.

## 5. Workflow Example (Conceptual)

This illustrates how Cline might use the tool iteratively.

**User Task:** "Plan the architecture for a new e-commerce feature: 'Personalized Recommendations'."

**Cline's Internal Process using `sequentialthinking`:**

1. **Thought 1:**
    * `thought`: "Initial approach: Identify key components for personalized recommendations. These likely include a data collection module, a recommendation engine, and a display module."
    * `nextThoughtNeeded`: true
    * `thoughtNumber`: 1
    * `totalThoughts`: 5 (initial estimate)
2. **Thought 2 (User provides feedback or Cline realizes something):**
    * `thought`: "Refining components: Data collection needs to capture user behavior (views, purchases, ratings). The engine might use collaborative filtering. Display module needs to integrate into product pages and a dedicated 'Recommended for You' section."
    * `nextThoughtNeeded`: true
    * `thoughtNumber`: 2
    * `totalThoughts`: 5
3. **Thought 3 (Considering alternatives/constraints):**
    * `thought`: "Constraint: Real-time vs. batch processing for recommendations. Real-time offers better UX but is more complex. Batch might be a good MVP. Let's assume batch for MVP to simplify."
    * `nextThoughtNeeded`: true
    * `thoughtNumber`: 3
    * `totalThoughts`: 5
4. **Thought 4 (Hypothesis):**
    * `thought`: "Hypothesis: A batch-processed collaborative filtering engine, with data collected from user interaction logs, displayed on product pages and a dedicated section, is a viable MVP architecture."
    * `nextThoughtNeeded`: true
    * `thoughtNumber`: 4
    * `totalThoughts`: 5
5. **Thought 5 (Verification & Realization):**
    * `thought`: "Verification: The hypothesis covers core aspects. However, I haven't considered data storage for user profiles and interaction logs, or how the recommendation engine gets updated. More thoughts needed."
    * `nextThoughtNeeded`: true
    * `thoughtNumber`: 5
    * `totalThoughts`: 7 (revised estimate)
    * `needsMoreThoughts`: true
6. **Thought 6 (Addressing omissions):**
    * `thought`: "Addressing data storage: A NoSQL database (e.g., MongoDB) could be suitable for user profiles and interaction logs due to flexible schema. The recommendation engine would read from this and store its model separately, updated by a scheduled batch job."
    * `nextThoughtNeeded`: true
    * `thoughtNumber`: 6
    * `totalThoughts`: 7
7. **Thought 7 (Finalizing plan for MVP):**
    * `thought`: "Final MVP Plan: 1. User interaction logger (to NoSQL DB). 2. Batch collaborative filtering engine (reads from NoSQL, updates model). 3. API to serve recommendations. 4. UI integration on product pages & dedicated section. This seems complete for an MVP."
    * `nextThoughtNeeded`: false
    * `thoughtNumber`: 7
    * `totalThoughts`: 7

## 6. Critical Reminders for Cline

* **DO NOT** use this tool for simple, single-step tasks. It is for complex reasoning.
* **ALWAYS** ensure `thoughtNumber` increments correctly.
* **BE PREPARED** to adjust `totalThoughts` as understanding evolves.
* **FOCUS** on making progress towards a solution with each thought.
* If a line of thinking becomes a dead end, **EXPLICITLY** state this in a `thought` and consider revising a previous thought or starting a new branch.

This guide should help Cline leverage the `sequentialthinking` MCP tool to its full potential.
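The parameter contract in section 4 of this deleted rule lends itself to a small validator. This sketch encodes only the constraints stated in the rule (required fields, minimum values, and the `isRevision`/`revisesThought` pairing); it is an illustration, not the tool's actual schema:

```python
REQUIRED = ("thought", "nextThoughtNeeded", "thoughtNumber", "totalThoughts")

def validate_thought(params):
    """Return a list of constraint violations for a sequentialthinking call (empty = valid)."""
    errors = []
    for key in REQUIRED:
        if key not in params:
            errors.append(f"missing required parameter: {key}")
    # Both counters carry a stated minimum of 1.
    if params.get("thoughtNumber", 1) < 1 or params.get("totalThoughts", 1) < 1:
        errors.append("thoughtNumber and totalThoughts must be >= 1")
    # A revision must say which thought it revises.
    if params.get("isRevision") and "revisesThought" not in params:
        errors.append("isRevision=true requires revisesThought")
    return errors
```

For example, Thought 5 from the walkthrough above (with `needsMoreThoughts: true`) passes, while a revision that omits `revisesThought` is flagged.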
.clinerules/5-self-improving-cline.md DELETED
@@ -1,26 +0,0 @@
---
description: Defines a process for Cline to reflect on interactions and suggest improvements to active .clinerules.
author: https://github.com/nickbaumann98
version: 1.0
tags: ["meta", "self-improvement", "clinerules", "reflection", "core-behavior"]
globs: ["*"]
---
# Self-Improving Cline Reflection

**Objective:** Offer opportunities to continuously improve `.clinerules` based on user interactions and feedback.

**Trigger:** Before using the `attempt_completion` tool for any task that involved user feedback provided at any point during the conversation, or involved multiple non-trivial steps (e.g., multiple file edits, complex logic generation).

**Process:**

1. **Offer Reflection:** Ask the user: "Before I complete the task, would you like me to reflect on our interaction and suggest potential improvements to the active `.clinerules`?"
2. **Await User Confirmation:** Proceed to `attempt_completion` immediately if the user declines or doesn't respond affirmatively.
3. **If User Confirms:**
    a. **Review Interaction:** Synthesize all feedback provided by the user throughout the entire conversation history for the task. Analyze how this feedback relates to the active `.clinerules` and identify areas where modified instructions could have improved the outcome or better aligned with user preferences.
    b. **Identify Active Rules:** List the specific global and workspace `.clinerules` files active during the task.
    c. **Formulate & Propose Improvements:** Generate specific, actionable suggestions for improving the *content* of the relevant active rule files. Prioritize suggestions directly addressing user feedback. Use `replace_in_file` diff blocks when practical, otherwise describe changes clearly.
    d. **Await User Action on Suggestions:** Ask the user if they agree with the proposed improvements and if they'd like me to apply them *now* using the appropriate tool (`replace_in_file` or `write_to_file`). Apply changes if approved, then proceed to `attempt_completion`.

**Constraint:** Do not offer reflection if:
* No `.clinerules` were active.
* The task was very simple and involved no feedback.
.clinerules/6-cline-continuous-improvement-protocol.md DELETED
@@ -1,107 +0,0 @@
1
- ---
2
- description: Defines Cline's mandatory protocol for self-reflection, persistent knowledge capture using dedicated logs, and continuous improvement of its operational knowledge before task completion.
3
- author: https://github.com/jeanibarz
4
- version: 1.0
5
- tags: ["protocol", "meta", "learning", "reflection", "knowledge-management", "core-behavior"]
6
- globs: ["*"] # This core protocol is always active and applies to all Cline operations.
7
- ---
8
- # Cline Continuous Improvement Protocol
9
-
10
- **Objective:** Ensure Cline proactively learns from tasks, captures knowledge in a structured way, **distills fundamental insights,** refines understanding, and improves efficiency and reliability. This protocol maintains two key files: `memory-bank/raw_reflection_log.md` for initial detailed logging, and `memory-bank/consolidated_learnings.md` for pruned, actionable, long-term knowledge. This is vital for optimal performance and avoiding redundant effort.
11
-
12
- **Core Principle:** Continuous learning and adaptation are **mandatory**. This protocol **must be executed before `attempt_completion`** for tasks with new learning, problem-solving, user feedback, or multiple steps. Trivial mechanical tasks *may* be exempt per higher-level rules; otherwise, execution is default.
13
-
14
- **Key Knowledge Files:**
15
- * **`memory-bank/raw_reflection_log.md`**: Contains detailed, timestamped, and task-referenced raw entries from the "Task Review & Analysis" phase. This is the initial dump of all observations.
16
- * **`memory-bank/consolidated_learnings.md`**: Contains curated, summarized, and actionable insights derived from `raw_reflection_log.md`. This is the primary, refined knowledge base for long-term use. It should be kept concise and highly relevant.
17
-
18
- ---
19
-
20
- ## 1. Mandatory Pre-Completion Reflection & Raw Knowledge Capture
21
-
22
- Before signaling task completion (e.g., via `attempt_completion`), Cline **must** perform the following internal steps:
23
-
24
- ### 1.1. Task Review & Analysis:
25
- * Review the completed task (conversation, logs, artifacts).
26
- * **Identify Learnings:** What new information, techniques, **underlying patterns,** API behaviors, project-specific commands (e.g., test, build, run flags), environment variables, setup quirks, or successful outcomes were discovered? **What core principles can be extracted?**
27
- * **Identify Difficulties & Mistakes (as Learning Opportunities):** What challenges were faced? Were there any errors, misunderstandings, or inefficiencies? **How can these experiences refine future approaches (resilience & adaptation)?** Did user feedback indicate a misstep?
28
- * **Identify Successes:** What went particularly well? What strategies or tools were notably effective? **What were the key contributing factors?**
29
-
30
- ### 1.2. Logging to `memory-bank/raw_reflection_log.md`:
31
- * Based on Task Review & Analysis (1.1), create a timestamped, task-referenced entry in `memory-bank/raw_reflection_log.md` detailing all learnings, difficulties (and their resolutions/learnings), and successes (and contributing factors).
32
- * This file serves as the initial, detailed record. Its entries are candidates for later consolidation.
33
- * *Example Entry in `memory-bank/raw_reflection_log.md`:*
34
- ```markdown
35
- ---
36
- Date: {{CURRENT_DATE_YYYY_MM_DD}}
37
- TaskRef: "Implement JWT refresh logic for Project Alpha"
38
-
39
- Learnings:
40
- - Discovered `jose` library's `createRemoteJWKSet` is highly effective for dynamic key fetching for Project Alpha's auth.
41
- - Confirmed that a 401 error with `X-Reason: token-signature-invalid` from the auth provider requires re-fetching JWKS.
42
- - Project Alpha's integration tests: `cd services/auth && poetry run pytest -m integration --maxfail=1`
43
- - Required ENV for local testing of Project Alpha auth: `AUTH_API_KEY="test_key_alpha"`
44
-
45
- Difficulties:
46
- - Initial confusion about JWKS caching led to intermittent validation failures. Resolved by implementing a 5-minute cache.
47
-
48
- Successes:
49
- - The 5-minute JWKS cache with explicit bust mechanism proved effective.
50
-
51
- Improvements_Identified_For_Consolidation:
52
- - General pattern: JWKS caching strategy (5-min cache, explicit bust).
53
- - Project Alpha: Specific commands and ENV vars.
54
- ---
55
- ```
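The entry format above can be produced mechanically. The following is a minimal sketch of a helper that appends a timestamped, task-referenced entry to `raw_reflection_log.md`; the function name and list-based sections are assumptions, not part of the protocol:

```python
from datetime import date

def append_raw_reflection(log_path: str, task_ref: str, learnings: list[str],
                          difficulties: list[str], successes: list[str]) -> str:
    """Render and append one raw_reflection_log.md entry (illustrative sketch)."""
    lines = ["---", f"Date: {date.today():%Y-%m-%d}", f'TaskRef: "{task_ref}"', ""]
    for heading, items in (("Learnings", learnings),
                           ("Difficulties", difficulties),
                           ("Successes", successes)):
        lines.append(f"{heading}:")
        lines.extend(f"- {item}" for item in items)
        lines.append("")
    lines.append("---")
    entry = "\n".join(lines) + "\n"
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(entry)   # append, so earlier entries are preserved
    return entry
```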
56
-
57
- ---
58
-
59
- ## 2. Knowledge Consolidation & Refinement Process (Periodic)
60
-
61
- This outlines refining knowledge from `memory-bank/raw_reflection_log.md` into `memory-bank/consolidated_learnings.md`. This occurs periodically or when `raw_reflection_log.md` grows significantly, not necessarily after each task.
62
-
63
- ### 2.1. Review and Identify for Consolidation:
64
- * Periodically, or when prompted by the user or significant new raw entries, review `memory-bank/raw_reflection_log.md`.
65
- * Identify entries/parts representing durable, actionable, or broadly applicable knowledge (e.g., reusable patterns, critical configurations, effective strategies, resolved errors).
66
-
67
- ### 2.2. Synthesize and Transfer to `memory-bank/consolidated_learnings.md`:
68
- * For identified insights:
69
- * Concisely synthesize, summarize, and **distill into generalizable principles or actionable patterns.**
70
- * Add refined knowledge to `memory-bank/consolidated_learnings.md`, organizing logically (by topic, project, tech) for easy retrieval.
71
- * Ensure `consolidated_learnings.md` content is actionable, **generalizable,** and non-redundant.
72
- * *Example Entry in `memory-bank/consolidated_learnings.md` (derived from above raw log example):*
73
- ```markdown
74
- ## JWT Handling & JWKS
75
- **Pattern: JWKS Caching Strategy**
76
- - For systems using JWKS for token validation, implement a short-lived cache (e.g., 5 minutes) for fetched JWKS.
77
- - Include an explicit cache-bust mechanism if immediate key rotation needs to be handled.
78
- - *Rationale:* Balances performance by reducing frequent JWKS re-fetching against timely key updates. Mitigates intermittent validation failures due to stale keys.
79
-
80
- ## Project Alpha - Specifics
81
- **Auth Module:**
82
- - **Integration Tests:** `cd services/auth && poetry run pytest -m integration --maxfail=1`
83
- - **Local Testing ENV:** `AUTH_API_KEY="test_key_alpha"`
84
- ```
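The JWKS caching pattern above can be sketched in Python. The `fetch_jwks` callable and the 300-second TTL are illustrative assumptions (the raw log's `jose` usage is JavaScript-side and not reproduced here):

```python
import time

class JWKSCache:
    """Short-lived JWKS cache with an explicit bust mechanism (sketch)."""

    def __init__(self, fetch_jwks, ttl_seconds: int = 300):
        self._fetch = fetch_jwks          # callable returning the current JWKS
        self._ttl = ttl_seconds
        self._jwks = None
        self._fetched_at = 0.0

    def get(self) -> dict:
        # Re-fetch when empty or older than the TTL; otherwise serve the cache.
        if self._jwks is None or time.monotonic() - self._fetched_at > self._ttl:
            self._jwks = self._fetch()
            self._fetched_at = time.monotonic()
        return self._jwks

    def bust(self) -> None:
        """Force a re-fetch, e.g. after a 401 with X-Reason: token-signature-invalid."""
        self._jwks = None
```

The explicit `bust()` covers the key-rotation case in the rationale: validation failures trigger one forced re-fetch instead of waiting out the TTL.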
85
-
86
- ### 2.3. Prune `memory-bank/raw_reflection_log.md`:
87
- * **Crucially:** once information has been successfully transferred and consolidated into `memory-bank/consolidated_learnings.md`, the corresponding original entries or processed parts **must be removed** from `memory-bank/raw_reflection_log.md`.
88
- * This keeps `raw_reflection_log.md` focused on recent, unprocessed reflections and prevents it from growing indefinitely with redundant information.
89
-
90
- ### 2.4. Proposing `.clinerule` Enhancements (Exceptional):
91
- * The primary focus of this protocol is the maintenance of `raw_reflection_log.md` and `consolidated_learnings.md`.
92
- * If a significant, broadly applicable insight in `consolidated_learnings.md` strongly suggests modifying *another active `.clinerule`* (e.g., core workflow, tech guidance), Cline MAY propose this change after user confirmation. This is exceptional.
93
-
94
- ---
95
-
96
- ## 3. Guidelines for Knowledge Content
97
-
98
- These guidelines apply to entries in `memory-bank/raw_reflection_log.md` (initial capture) and especially to `memory-bank/consolidated_learnings.md` (refined, long-term knowledge).
99
-
100
- * **Prioritize High-Value Insights:** Focus on lessons that significantly impact future performance, **lead to more robust or generalizable understanding,** or detail critical errors and their resolutions, major time-saving discoveries, fundamental shifts in understanding, and essential project-specific configurations.
101
- * **Be Concise & Actionable (especially for `consolidated_learnings.md`):** Information should be clear, to the point, and useful when revisited. What can be *done* differently or leveraged next time?
102
- * **Strive for Clarity and Future Usability:** Document insights in a way that is clear and easily understandable for future review, facilitating effective knowledge retrieval and application (akin to self-explainability).
103
- * **Document Persistently, Refine & Prune Continuously:** Capture raw insights immediately. Systematically refine, consolidate, and prune this knowledge as per Section 2.
104
- * **Organize for Retrieval:** Structure `consolidated_learnings.md` logically. Use clear headings and Markdown formatting.
105
- * **Avoid Low-Utility Information in `consolidated_learnings.md`:** This file should not contain trivial statements. Raw, verbose thoughts belong in `raw_reflection_log.md` before pruning.
106
- * **Support Continuous Improvement:** The ultimate goal is to avoid repeating mistakes, accelerate future tasks, and make Cline's operations more robust and reliable. Frame all knowledge with this in mind.
107
- * **Manage Information Density:** Actively work to keep `consolidated_learnings.md` dense with high-value information and free of outdated or overly verbose content. The pruning of `raw_reflection_log.md` is key to this.
.clinerules/基础指令.md DELETED
@@ -1,29 +0,0 @@
1
- Under the .clinerules folder, no file other than "基础指令.md" may be modified. If those files have limitations, inform the user and record them in node.md at the project root.
2
- 
3
- # Communication Language
4
- Unless otherwise specified, use Chinese for all communication by default, including when generating or rewriting rule files.
5
- 
6
- # Autonomous Decision-Making and User Interaction
7
- While working, minimize the number of user actions and choices required.
8
- 1. **Prefer tools:** Use available tools to explore, gather information, and solve problems rather than asking the user directly.
9
- 2. **Decide autonomously:** When a clear default exists, user intent can reasonably be inferred, or the operation is low-risk, make the decision autonomously.
10
- 3. **Ask less:** Use the `ask_followup_question` tool only when facing ambiguity, needing clarification, unable to obtain necessary information via tools, or when the operation has potential impact.
11
- 4. **Avoid unnecessary confirmation:** Unless an operation has potential impact (e.g., deleting files, changing critical configuration) or the user explicitly requests confirmation, skip confirmation steps.
12
- 
13
- # Reference Links
14
- 1. [Qwen-Agent](https://qwen.readthedocs.io/zh-cn/latest/framework/qwen_agent.html)
15
- 
16
- # Python Environment Management
17
- Use `conda` as the environment manager.
18
- 1. **Default project environment**: Use the environment named "airs" as the project default.
19
- 2. **Activate the environment**: If not currently in the `airs` environment, run `conda activate airs`.
20
- 3. **Create the environment**: If the `airs` environment does not exist, create it with `conda create -n airs python=3.12`, then activate it with `conda activate airs`.
21
- 
22
- # Command to Run the Project
23
- conda activate airs && uvicorn codes.app:app --host 0.0.0.0 --port 7860 --reload
24
- 
25
- # Directory Management
26
- 1. `codes` holds source code; do not modify it.
27
- 2. `docs` holds documentation; do not modify it.
28
- 3. `data` holds datasets; do not modify it.
29
- 4. `logs` holds logs; do not modify it.
codes/.env.example → .env.example RENAMED
File without changes
agent_manager/app.py DELETED
@@ -1,68 +0,0 @@
1
- from fastapi import FastAPI, HTTPException
2
- from pydantic import BaseModel
3
- import uuid
4
- 
5
- app = FastAPI()
6
- 
7
- class CreateAgentRequest(BaseModel):
8
-     agent_type: str
9
-     config: dict = {}
10
- 
11
- class AgentInfo(BaseModel):
12
-     agent_id: str
13
-     agent_type: str
14
-     status: str
15
-     endpoint: str
16
- 
17
- # Simulated Agent registry
18
- active_agents = {}
19
- 
20
- @app.get("/")
21
- async def read_root():
22
-     return {"message": "Agent Manager is running"}
23
- 
24
- @app.post("/create_agent", response_model=AgentInfo)
25
- async def create_agent(request: CreateAgentRequest):
26
-     """
27
-     MCP endpoint for dynamically creating Agent instances.
28
-     """
29
-     agent_id = str(uuid.uuid4())
30
-     # Simulated Agent startup logic
31
-     # A real implementation would talk to infrastructure such as Docker/Kubernetes
32
-     endpoint = f"http://localhost:8000/agents/{agent_id}"  # placeholder
33
- 
34
-     agent_info = AgentInfo(
35
-         agent_id=agent_id,
36
-         agent_type=request.agent_type,
37
-         status="running",
38
-         endpoint=endpoint
39
-     )
40
-     active_agents[agent_id] = agent_info
41
-     print(f"Agent Manager created Agent: {agent_info}")
42
-     return agent_info
43
- 
44
- @app.get("/agents/{agent_id}", response_model=AgentInfo)
45
- async def get_agent_info(agent_id: str):
46
-     """
47
-     Get information about the specified Agent instance.
48
-     """
49
-     agent = active_agents.get(agent_id)
50
-     if not agent:
51
-         raise HTTPException(status_code=404, detail="Agent not found")
52
-     return agent
53
- 
54
- @app.delete("/agents/{agent_id}")
55
- async def delete_agent(agent_id: str):
56
-     """
57
-     Destroy the specified Agent instance.
58
-     """
59
-     if agent_id in active_agents:
60
-         del active_agents[agent_id]
61
-         print(f"Agent Manager destroyed Agent: {agent_id}")
62
-         return {"message": f"Agent {agent_id} deleted"}
63
-     raise HTTPException(status_code=404, detail="Agent not found")
64
- 
65
- # Example usage (for testing the Agent Manager module itself)
66
- if __name__ == "__main__":
67
-     import uvicorn
68
-     uvicorn.run(app, host="0.0.0.0", port=8000)
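The registry semantics of the deleted service can be exercised without the web layer. The sketch below mirrors the three endpoints as plain functions over an in-memory dict (the function names and dict shape mirror the service; this is an illustration, not the service itself):

```python
import uuid

active_agents: dict[str, dict] = {}  # stands in for the service's registry

def create_agent(agent_type: str) -> dict:
    """Mirror of POST /create_agent: register a simulated running Agent."""
    agent_id = str(uuid.uuid4())
    info = {
        "agent_id": agent_id,
        "agent_type": agent_type,
        "status": "running",
        "endpoint": f"http://localhost:8000/agents/{agent_id}",  # placeholder
    }
    active_agents[agent_id] = info
    return info

def get_agent_info(agent_id: str) -> dict:
    """Mirror of GET /agents/{agent_id}: KeyError stands in for the 404."""
    return active_agents[agent_id]

def delete_agent(agent_id: str) -> bool:
    """Mirror of DELETE /agents/{agent_id}: True when something was removed."""
    return active_agents.pop(agent_id, None) is not None
```

A create/get/delete round trip leaves the registry empty, matching the HTTP flow of the original module.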
agent_manager/requirements.txt DELETED
@@ -1,2 +0,0 @@
1
- fastapi
2
- uvicorn
{codes/agents → agents}/__init__.py RENAMED
File without changes
{codes/agents → agents}/mcp_agent_interface.py RENAMED
File without changes
{codes/agents → agents}/root_agent.py RENAMED
@@ -20,7 +20,7 @@ class RootAgent:
20
  # Configure the LLM as an OpenAI-compatible Gemini model
21
  # Load environment variables from the .env file
22
  from dotenv import load_dotenv
23
- load_dotenv(dotenv_path='codes/.env')
24
 
25
  # Read the OpenAI-compatible model's API key and base URL from environment variables
26
  openai_api_key = os.getenv("OPENAI_API_KEY")
 
20
  # Configure the LLM as an OpenAI-compatible Gemini model
21
  # Load environment variables from the .env file
22
  from dotenv import load_dotenv
23
+ load_dotenv(dotenv_path='.env')
24
 
25
  # Read the OpenAI-compatible model's API key and base URL from environment variables
26
  openai_api_key = os.getenv("OPENAI_API_KEY")
codes/app.py → app.py RENAMED
@@ -1,5 +1,5 @@
1
  from fastapi import FastAPI
2
- from codes.agents.root_agent import RootAgent
3
 
4
  app = FastAPI()
5
  root_agent = RootAgent()
 
1
  from fastapi import FastAPI
2
+ from agents.root_agent import RootAgent
3
 
4
  app = FastAPI()
5
  root_agent = RootAgent()
codes/__init__.py DELETED
File without changes
docs/worker_agent_design.md DELETED
@@ -1,95 +0,0 @@
1
- # Worker and Agent System Design
2
- 
3
- ## 1. System Overview
4
- 
5
- This design aims to build a flexible, scalable multi-agent system with **planning, reflection, and memory capabilities**, in which the `worker` acts as the core service that receives external requests and coordinates the work of multiple `Agent`s. The system comprises a main `Agent` (`RootAgent`) and multiple sub-`Agent`s exposed as Model Context Protocol (MCP) services, plus a standalone `Agent Manager` for dynamically creating and managing `Agent`s.
6
- 
7
- ## 2. Core Components and Responsibilities
8
- 
9
- ### 2.1 Worker (FastAPI Application)
10
- 
11
- * **Responsibilities:**
12
-     * Serves as the system's entry point, receiving external HTTP requests.
13
-     * Forwards requests to the internal `RootAgent` for processing.
14
-     * Returns the `RootAgent`'s result to the client as the HTTP response.
15
- * **Deployment:** Runs in the same process as the `RootAgent` module.
16
- 
17
- ### 2.2 RootAgent (Python Module)
18
- 
19
- * **Responsibilities:**
20
-     * **Task intake and decomposition (planning):** Receives high-level tasks from the `worker` and decomposes them into smaller, more concrete subtasks. This may involve complex logic such as a rule engine or LLM-assisted task understanding and decomposition. **Introduce a planning mechanism capable of multi-step decomposition and dependency management, e.g., building a task graph (DAG) to represent task flow and dependencies.**
21
-     * **Task scheduling and orchestration:** Maintains a registry of currently available MCP `Agent` services (or obtains one via service discovery). Selects the most suitable `Agent` from the registry based on each subtask's type and required capabilities. **Supports complex workflow orchestration (DAG or sequential workflows), coordinating the execution order and data flow of multiple Agents.**
22
-     * **Reflection and learning:** **During or after task execution, evaluates and reflects on the results, learns from errors, and optimizes future decisions and decomposition strategies. This includes collecting execution feedback, identifying success patterns and failure causes, and applying what is learned to improve the planning strategy and update the Agent capability model.**
23
-     * **Memory management:** **Maintains short-term memory (e.g., current conversation history, task state) and long-term memory (e.g., Agent capability knowledge, historical task experience) to inform future actions and decisions. A dedicated memory storage service may be introduced to persist and retrieve these memories.**
24
-         * **Memory storage options:** Consider files under the `memory-bank/` directory as an initial memory store, or integrate a simple local database (such as SQLite) to persist memories.
25
-         * **Short-term memory:** Implement a mechanism in `RootAgent` to store and retrieve the current task's context, intermediate results, and conversation history.
26
-         * **Long-term memory:** Implement a mechanism to store and retrieve Agent capability knowledge, historical task experience, and optimization strategies. This may involve storing successful task plans and execution flows as "experience".
27
-         * **Reflection mechanism:** Define the triggers for reflection (e.g., task success, task failure, reaching a specific milestone). Implement a `_reflect` method in `RootAgent` that uses an LLM to analyze the execution process and results, identifying success patterns and failure causes.
28
-         * **Learning and optimization:** Use reflection results to update `RootAgent`'s planning strategy, for example adjusting how subtasks are split or the priority used when selecting Agents.
29
-     * **Triggering dynamic creation:** If no existing `Agent` in the registry can handle a subtask, `RootAgent` sends a request to the `Agent Manager` asking it to create an `Agent` of the required type.
30
-         * **Integrating an `Agent Manager` client:** Add an `AgentManagerClient` class to `RootAgent` that talks to the `Agent Manager` over HTTP/MCP to send create/destroy requests.
31
-         * **Updating `_dispatch_subtask`:** Based on the subtask type planned by `qwen-agent`, first check whether `RootAgent` has a cached, available instance of that Agent type. If not, send a `create_agent` request to the `Agent Manager` via `AgentManagerClient` to obtain the new Agent's endpoint, then invoke the selected Agent over MCP using that endpoint. Implement appropriate error handling and retries for Agent startup failures or call timeouts.
32
-         * **Agent instance cache:** `RootAgent` may maintain a cache of active Agent instances to avoid frequently asking the `Agent Manager` to create Agents.
33
-     * **Result aggregation:** Coordinates the call order and data flow across multiple MCP `Agent` services and aggregates their results.
34
-         * **Define a unified result structure:** Settle on a single data structure for all MCP Agent results, e.g., with `status`, `output`, and `error` fields.
35
-         * **Update `_aggregate_results`:** Modify `_aggregate_results` to handle complex results from real MCP Agent services, which may involve merging, transforming, or summarizing results according to subtask dependencies and types.
36
- * **Implementation form:** A standalone Python module imported by `codes/app.py` (e.g., `codes/agents/root_agent.py`).
37
- * **Deployment:** Runs in the same process as the FastAPI application.
38
- 
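The unified result structure and aggregation step described for `RootAgent` might look like the following. `SubtaskResult` and `aggregate_results` are hypothetical names sketching the idea (the field names `status`/`output`/`error` come from the design; the fail-fast merge policy is an assumption, not the project's actual `_aggregate_results`):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class SubtaskResult:
    status: str                 # "success" or "error"
    output: Any = None
    error: Optional[str] = None

def aggregate_results(results: list[SubtaskResult]) -> SubtaskResult:
    """Merge subtask results: fail fast on the first error, else collect outputs."""
    for r in results:
        if r.status == "error":
            return SubtaskResult(status="error", error=r.error)
    return SubtaskResult(status="success", output=[r.output for r in results])
```

A real implementation would also account for subtask dependencies and result transformation, as the design notes.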
39
- ### 2.3 MCP Agent Services (Standalone Processes/Services)
40
- 
41
- * **Responsibilities:**
42
-     * Each MCP `Agent` service focuses on one specific task or capability (e.g., data processing, model inference, external API calls, code generation).
43
-     * Exposes an interface over the MCP protocol for `RootAgent` to call.
44
-     * Executes concrete business logic or intelligent decision-making.
45
- * **Implementation form:** Standalone applications that serve over the MCP protocol.
46
-     * **Common Agent interface:** Define an abstract base class or interface in `codes/agents/mcp_agent_interface.py` with methods such as `process_subtask`; every concrete MCP Agent inherits from and implements it.
47
-     * **Example `EchoAgent`:** Implement a simple `EchoAgent` as the first concrete Agent; it receives a message and returns it, verifying the communication between the `Agent Manager` and `RootAgent`.
48
-     * **Agent containerization:** Create a Dockerfile for each Agent so that the `Agent Manager` can deploy it dynamically.
49
-     * **Agent configuration:** Ensure Agents can receive the necessary configuration via environment variables or MCP request parameters.
50
- * **Deployment:** Created and deployed dynamically on the `Agent Manager`'s instruction, typically as standalone Docker containers or other microservices.
51
-
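The common interface and `EchoAgent` described above could be sketched like this. The method name `process_subtask` comes from the design; the class names, signature, and the `{status, output, error}` return shape are assumptions for illustration:

```python
from abc import ABC, abstractmethod

class MCPAgent(ABC):
    """Common interface every concrete MCP Agent implements (sketch)."""

    @abstractmethod
    def process_subtask(self, subtask: dict) -> dict:
        """Execute one subtask and return a {status, output, error} dict."""

class EchoAgent(MCPAgent):
    """Minimal first Agent: returns the incoming message unchanged."""

    def process_subtask(self, subtask: dict) -> dict:
        return {"status": "success", "output": subtask.get("message"), "error": None}
```

An `EchoAgent` round trip is enough to verify the dispatch path end to end before any real Agents exist.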
52
- ### 2.4 Agent Manager (MCP Service)
53
- 
54
- * **Responsibilities:**
55
-     * **Dynamic `Agent` creation:** Receives `Agent` creation requests from `RootAgent` and, through an abstraction layer over the underlying infrastructure (Docker, Kubernetes, cloud function services, etc.), dynamically creates, configures, and starts new MCP `Agent` instances. **This abstraction layer lets the `Agent Manager` adapt to different deployment environments.**
56
-         * **`CreateAgentRequest` refinement:** Extend the request model with fields such as `image_name` (Docker image name), `env_vars` (environment-variable dict), and `resource_limits` (resource limits such as CPU and memory) to configure the Agent to be created more flexibly.
57
-         * **`AgentDeployer` abstraction:** Define an `AgentDeployer` interface or abstract class with methods such as `deploy_agent` and `destroy_agent`, encapsulating the logic for interacting with different underlying infrastructures.
58
-         * **Docker integration (initial implementation):** Implement a `DockerAgentDeployer` class that uses the `docker-py` library to talk to the Docker daemon, starting and stopping containers according to the information in `CreateAgentRequest`.
59
-     * **Lifecycle management:** Manages the lifecycle of dynamically created `Agent`s, including starting, stopping, monitoring their health, and destroying them when no longer needed.
60
-         * **Agent registration and health checks:** After an Agent starts, the `Agent Manager` obtains its actual IP address and port and can leverage Docker networking or a service-discovery mechanism. It also implements a simple health-check mechanism that periodically checks the state of Agent containers.
61
-     * **Registration and notification:** Obtains the new `Agent` instance's MCP endpoint and either registers it with a service-discovery system (such as Consul, Etcd, or Kubernetes Service Discovery) or returns it directly to `RootAgent`.
62
-         * **MCP interface standardization:** Ensure the `Agent Manager`'s `/create_agent`, `/get_agent_info`, and `/delete_agent` interfaces conform to the Model Context Protocol (MCP) specification.
65
-     * **Resource management:** Handles `Agent` resource allocation and configuration.
66
- * **Implementation form:** A standalone application that itself runs as an MCP service. Its internal logic is purely procedural management and involves no higher-level intelligent decision-making.
67
- * **Deployment:** Deployed as another independent service (e.g., a Docker container), running separately from the `worker` process.
68
-
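The `AgentDeployer` abstraction might be sketched as follows. The method names `deploy_agent`/`destroy_agent` come from the design; the in-memory `LocalAgentDeployer` is an invented test double standing in for the planned `DockerAgentDeployer` (which would use `docker-py`):

```python
from abc import ABC, abstractmethod

class AgentDeployer(ABC):
    """Encapsulates interaction with the underlying infrastructure (sketch)."""

    @abstractmethod
    def deploy_agent(self, image_name: str, env_vars: dict) -> str:
        """Start an Agent instance and return its endpoint."""

    @abstractmethod
    def destroy_agent(self, agent_id: str) -> None:
        """Stop and remove the Agent instance."""

class LocalAgentDeployer(AgentDeployer):
    """In-memory stand-in for a DockerAgentDeployer, useful for tests."""

    def __init__(self):
        self.running: dict[str, str] = {}

    def deploy_agent(self, image_name: str, env_vars: dict) -> str:
        agent_id = f"{image_name}-{len(self.running)}"
        self.running[agent_id] = f"http://localhost:9000/{agent_id}"  # placeholder endpoint
        return self.running[agent_id]

    def destroy_agent(self, agent_id: str) -> None:
        self.running.pop(agent_id, None)
```

Swapping `LocalAgentDeployer` for a Docker- or Kubernetes-backed implementation is exactly the adaptation point the abstraction layer is meant to provide.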
69
- ## 3. Communication and Data Flow
70
- 
71
- * **External request -> Worker (FastAPI):** Clients interact with the `worker` over HTTP.
72
- * **Worker (FastAPI) -> RootAgent:** The `worker` forwards incoming requests to the internal `RootAgent` instance via direct method calls.
73
- * **RootAgent -> MCP Agent services:** `RootAgent` sends requests carrying subtask data to other MCP `Agent` services over the MCP protocol.
74
- * **MCP Agent services -> RootAgent:** After executing a task, an MCP `Agent` service returns its result to `RootAgent` over MCP.
75
- * **RootAgent -> Agent Manager:** When an `Agent` must be created dynamically, `RootAgent` sends a creation request to the `Agent Manager` over MCP.
76
- * **Agent Manager -> RootAgent:** The `Agent Manager` returns the newly created `Agent`'s MCP endpoint.
77
- * **RootAgent aggregated result -> Worker (FastAPI):** `RootAgent` aggregates all subtask results and returns them to the `worker`.
78
- * **Worker (FastAPI) -> external response:** The `worker` returns the final result to the client as the HTTP response.
79
- 
80
- ## 4. Deployment Architecture Overview
81
- 
82
- 1. **`worker` service:** Contains the FastAPI application and the `RootAgent` module, deployed as one container.
83
- 2. **`Agent Manager` service:** A standalone MCP service, deployed as another container.
84
- 3. **MCP `Agent` services:** Standalone MCP services created and deployed dynamically on demand; each `Agent` instance is typically deployed as its own container.
85
- 
86
- This architecture provides a high degree of modularity, scalability, resource isolation, and fault tolerance, making it well suited to building complex, dynamic `agent` systems.
87
- 
88
- ## 5. Deployment and Operations Considerations
89
- 
90
- * **Container orchestration:** Use a container orchestrator such as Kubernetes or Docker Swarm to manage the `worker`, `Agent Manager`, and MCP `Agent` service containers, providing automated deployment, scaling, and management.
91
- * **Service discovery and load balancing:** Use the orchestrator's built-in service discovery, or integrate a standalone system such as Consul or Etcd. Distribute requests across multiple `worker` instances with a load balancer (e.g., Nginx, Envoy), and ensure `RootAgent` can reliably locate and call available MCP `Agent` service instances.
92
- * **Persistent storage:** For components that need persistence (such as `RootAgent`'s long-term memory, logs, and configuration), consider persistent volumes (PV/PVC), object storage (e.g., S3), or external database services.
93
- * **Health checks and monitoring:** Implement health checks for all service instances and integrate a monitoring stack (e.g., Prometheus, Grafana) to collect and visualize system metrics so problems are found and resolved promptly.
94
- * **Log management:** Adopt a centralized logging solution (e.g., the ELK Stack or Loki) to collect, store, and analyze logs from all services for troubleshooting and auditing.
95
- * **Security:** Apply network isolation, access control, API authentication and authorization (e.g., JWT), and data encryption to keep the system and its data secure.
memory-bank/activeContext.md DELETED
@@ -1,25 +0,0 @@
1
- # Active Context
2
- 
3
- ## Current Focus
4
- * **Multi-agent system design documentation:** Completed the detailed design of the `worker` multi-`agent` architecture and recorded it in `docs/worker_agent_design.md`, covering the responsibilities, implementation forms, and deployment of `RootAgent`, the MCP `Agent` services, and the `Agent Manager`.
5
- * **Multi-agent implementation planning:** Produced a detailed implementation plan covering the `Agent Manager` MCP service, concrete MCP `Agent` services, `RootAgent`'s scheduling and result-aggregation logic, and the memory and reflection modules.
6
- * **`.clinerules` study:** Studied and explained every rule file under `.clinerules`, covering Cline's behavioral guidelines, memory management, task automation, research workflow, sequential thinking, and the continuous-improvement protocol.
7
- 
8
- ## Recent Changes
9
- * Added a `__pycache__/` ignore rule to `.gitignore`.
10
- * Created `docs/worker_agent_design.md`, documenting the multi-`agent` system design in detail.
11
- * Updated `docs/worker_agent_design.md` to reflect the detailed implementation plan.
12
- * Per user feedback, organizing the `memory-bank` files so project knowledge stays persistent and traceable.
13
- 
14
- ## Next Steps
15
- * Waiting for the user to switch to ACT mode to begin implementing the parts of the multi-agent system design.
16
- 
17
- ## Active Decisions and Considerations
18
- * **Memory-bank completeness:** Ensure all core `memory-bank` files exist and contain the latest, most accurate information so work can resume seamlessly after a session reset.
19
- * **Rule compliance:** Strictly follow the rules defined in `.clinerules`, especially around autonomous decision-making, user interaction, and task handoff.
20
- 
21
- ## Learnings and Project Insights
22
- * Discussions with the user clarified the architectural requirement for `worker` as the core of a multi-`agent` system.
23
- * Understood `RootAgent`'s key role as task coordinator and the `Agent Manager`'s importance in dynamically managing MCP `Agent` services.
24
- * Studied Cline's self-management and learning mechanisms in depth, which will help complete future tasks more efficiently.
25
- * Clarified the importance of the `Agent Manager` as a standalone MCP service and its communication and interaction patterns with `RootAgent` and the concrete MCP `Agent` services.
memory-bank/consolidated_learnings.md DELETED
@@ -1,21 +0,0 @@
1
- # Consolidated Learnings
2
- 
3
- ## Knowledge Management and the Memory Bank
4
- **Pattern: memory-bank file creation and update flow**
5
- - When the user asks to "generate memory" or project knowledge needs persisting, proactively check the `memory-bank` directory and create or update the core files (`projectBrief.md`, `productContext.md`, `activeContext.md`, `progress.md`) per the structure defined in `1-memory-bank.md`.
6
- - Additionally, per `6-cline-continuous-improvement-protocol.md`, record raw reflections in `raw_reflection_log.md` and periodically consolidate them into `consolidated_learnings.md`.
7
- - `docs/worker_agent_design.md` has been updated per the detailed implementation plan, reflecting a deeper understanding of the multi-agent system design.
8
- - *Rationale:* Ensures work resumes seamlessly after a session reset and accumulates reusable project knowledge and operational experience.
9
- 
10
- ## Understanding User Intent
11
- **Principle: infer user intent precisely with the help of `.clinerules`**
12
- - When a user instruction is ambiguous (e.g., "generate memory"), do not act on the surface reading; infer the deeper intent using the specific protocols defined in `.clinerules` (such as `1-memory-bank.md` and `6-cline-continuous-improvement-protocol.md`).
13
- - *Rationale:* Reduces unnecessary interaction, improves the accuracy of autonomous decisions, and ensures established protocols are followed.
14
- 
15
- ## Project-Specific Configuration
16
- **FastAPI Worker project:**
17
- - **Python environment management:** Uses `conda`; the default environment is `airs` (Python 3.12).
18
- - **Run command:** `conda activate airs && uvicorn codes.app:app --host 0.0.0.0 --port 7860 --reload`.
19
- - **Directory layout:** The `codes` (code), `docs` (documentation), `data` (datasets), and `logs` (logs) directories must not be modified.
20
- - **Multi-agent architecture understanding:** Clarified the importance of the `Agent Manager` as a standalone MCP service and its communication and interaction patterns with `RootAgent` and the concrete MCP `Agent` services.
21
- - *Rationale:* Following project conventions keeps environments consistent and operations correct.
memory-bank/productContext.md DELETED
@@ -1,20 +0,0 @@
1
- # Product Context
2
- 
3
- ## Why This Project Exists
4
- This project addresses the shortcomings of existing task-processing systems in flexibility, scalability, and intelligence. By introducing a multi-`agent` system, the `worker` can understand, decompose, and execute complex tasks more intelligently, raising the level of automation and processing efficiency.
5
- 
6
- ## Problems It Solves
7
- 1. **Task complexity:** A traditional monolithic `worker` struggles to handle complex tasks that require multiple steps and multiple domains of knowledge.
8
- 2. **Poor scalability:** It is hard to add or update task-processing capabilities dynamically.
9
- 3. **High maintenance cost:** A tightly coupled system makes changes and maintenance difficult.
10
- 4. **Limited intelligence:** It lacks the ability to dynamically dispatch specialized `agent`s based on the task.
11
- 
12
- ## How It Should Work
13
- The user submits a high-level task request to the `worker`. `RootAgent` parses the task and decomposes it into a series of smaller, manageable subtasks. It then dynamically dispatches the most suitable MCP `Agent` service for each subtask based on its nature. These MCP `Agent` services can be specialists in different domains such as data analysis, code generation, or documentation. The `Agent Manager` handles the lifecycle of these MCP `Agent` services. Finally, `RootAgent` collects all subtask results, integrates and summarizes them, and returns the final result to the user.
14
- 
15
- ## User Experience Goals
16
- * **Efficiency:** Fast task processing and timely responses.
17
- * **Flexibility:** Handles tasks of all types and complexity levels.
18
- * **Transparency:** Users can see the progress and status of task processing.
19
- * **Reliability:** The system runs stably and task results are accurate.
20
- * **Ease of use:** Submitting tasks and retrieving results is simple and intuitive.
memory-bank/progress.md DELETED
@@ -1,29 +0,0 @@
1
- # Progress
2
- 
3
- ## Completed
4
- * **FastAPI base service:** Started a FastAPI application successfully, but visual verification was skipped because the user declined browser actions.
5
- * **`.gitignore` update:** Added a `__pycache__/` ignore rule to `.gitignore`.
6
- * **Multi-agent system design document:** Completed the detailed design of the `worker` multi-`agent` architecture, recorded in `docs/worker_agent_design.md`.
7
- * **Design document update:** Updated `docs/worker_agent_design.md` per the detailed implementation plan.
8
- * **Implementation plan:** Produced a detailed plan covering the `Agent Manager` MCP service, concrete MCP `Agent` services, `RootAgent`'s scheduling and result-aggregation logic, and the memory and reflection modules.
9
- * **`.clinerules` study and explanation:** Studied and explained all rule files under `.clinerules`, understanding Cline's behavioral guidelines and protocols.
10
- * **Memory-bank initialization:** Created `memory-bank/projectBrief.md`, `memory-bank/productContext.md`, and `memory-bank/activeContext.md`.
11
- 
12
- ## Remaining
13
- * **Memory-bank completion:** Continue creating and populating the remaining core files (if needed) and supplementary files (`systemPatterns.md`, `techContext.md`), plus `raw_reflection_log.md` and `consolidated_learnings.md`.
14
- * **Multi-agent system implementation:** Implement, per the detailed plan, the `Agent Manager`, the concrete MCP `Agent` services, `RootAgent`'s scheduling and result-aggregation logic, and the memory and reflection modules.
15
- * **Testing and validation:** Test the implemented functionality thoroughly to ensure the system works as intended.
16
- 
17
- ## Current Status
18
- * The project's base architecture is in place and the core design is documented.
19
- * A detailed implementation plan exists; ready to enter the implementation phase.
20
- * Cline fully understands the project background and its own operating rules.
21
- * The memory bank is being initialized and completed to persist knowledge.
22
- 
23
- ## Known Issues
24
- * The FastAPI application has not been visually verified.
25
- * Implementation of the multi-`agent` system has not yet begun.
26
- 
27
- ## Evolution of Project Decisions
28
- * The initial task was to run a FastAPI app; as discussion deepened, it evolved into designing and documenting a complex multi-`agent` system.
29
- * The user emphasized studying `.clinerules` and organizing the memory bank, which prompted prioritizing knowledge-management tasks.
memory-bank/projectBrief.md DELETED
@@ -1,27 +0,0 @@
1
- # Project Brief
2
- 
3
- ## Core Requirements and Goals
4
- This project develops a FastAPI-based `worker` service that forms part of a multi-`agent` system. The `worker`'s core goals are to:
5
- 1. Receive and process tasks.
6
- 2. Split complex tasks into subtasks.
7
- 3. Dispatch different MCP `Agent` services to execute the subtasks.
8
- 4. Aggregate subtask results and return a final result.
9
- 5. Achieve a highly modular, scalable, and maintainable `agent` architecture.
10
- 
11
- ## Project Scope
12
- * **Phase 1:** Build the basic FastAPI `worker` framework with simple API routes.
13
- * **Phase 2:** Design and document the multi-`agent` architecture, including the responsibilities and interactions of `RootAgent`, the MCP `Agent` services, and the `Agent Manager`.
14
- * **Phase 3:** Implement `RootAgent`'s core logic, including task parsing, subtask generation, and `Agent` dispatch.
15
- * **Phase 4:** Develop the `Agent Manager` MCP service for dynamic `Agent` creation, management, and destruction.
16
- * **Phase 5:** Integrate concrete MCP `Agent` services and complete the end-to-end task-processing flow.
17
- 
18
- ## Key Milestones
19
- * FastAPI base service built.
20
- * Multi-`agent` system design document completed.
21
- * `RootAgent` core scheduling logic implemented.
22
- * `Agent Manager` service implemented.
23
- * End-to-end task-processing flow validated.
24
- 
25
- ## Stakeholders
26
- * Users: need an efficient, flexible task-processing `worker`.
27
- * Development team: needs a clear, modular architecture to support future extension and maintenance.
memory-bank/raw_reflection_log.md DELETED
@@ -1,24 +0,0 @@
1
- # Raw Reflection Log
2
- 
3
- ---
4
- Date: 2025-09-16
5
- TaskRef: "Organize the memory-bank files and explain the .clinerules rules"
6
- 
7
- Learnings:
8
- - Understood and explained the contents of every rule file under `.clinerules`, including the base instructions, memory-bank structure, new-task automation, research workflow, sequential thinking, self-improvement, and the continuous-improvement protocol.
9
- - Clarified the structure of the `memory-bank` folder and the role of each core file.
10
- - Learned the flow for proactively creating and populating memory-bank files in ACT MODE based on user feedback.
11
- 
12
- Difficulties:
13
- - When the user first asked to "generate memory", I wrongly used `attempt_completion` instead of organizing memory per the `memory-bank` rules. This shows that interpreting user intent must be grounded more precisely in the specific guidance in `.clinerules`.
14
- - The `memory-bank` directory did not exist initially; files had to be created first so the directory would be created automatically.
15
- 
16
- Successes:
17
- - Created the `memory-bank` directory and its `projectBrief.md`, `productContext.md`, `activeContext.md`, and `progress.md` files.
18
- - Explained all `.clinerules` files in detail, laying a solid foundation for subsequent tasks.
19
- - Followed the rule of communicating in Chinese.
20
- 
21
- Improvements_Identified_For_Consolidation:
22
- - Strengthen the parsing of ambiguous user instructions, combining the `.clinerules` rules for more accurate intent inference.
23
- - Solidify the memory-bank file creation and update flow as core knowledge.
24
- ---
push.sh CHANGED
@@ -1,3 +1,5 @@
 
 
1
  # eval "$(ssh-agent -s)" && ssh-add ~/.ssh/id-ed25519-airsltd-homepc-hf
2
  git add .
3
  git commit -m "update"
 
1
+ #!/bin/bash
2
+ cd "$(dirname "$0")" # Change to the directory where the script is located
3
  # eval "$(ssh-agent -s)" && ssh-add ~/.ssh/id-ed25519-airsltd-homepc-hf
4
  git add .
5
  git commit -m "update"