ADAPT-Chase committed
Commit af1432a · verified
1 parent: 6a911c8

Add files using upload-large-folder tool

projects/ui/DeepCode/.github/ISSUE_TEMPLATE/bug_report.yml ADDED
@@ -0,0 +1,61 @@
+name: Bug Report
+description: File a bug report
+title: "[Bug]:"
+labels: ["bug", "triage"]
+
+body:
+  - type: checkboxes
+    id: existingcheck
+    attributes:
+      label: Do you need to file an issue?
+      description: Please help us manage our time by avoiding duplicates and common bugs with the steps below.
+      options:
+        - label: I have searched the existing issues and this bug is not already filed.
+        - label: I believe this is a legitimate bug, not just a question or feature request.
+  - type: textarea
+    id: description
+    attributes:
+      label: Describe the bug
+      description: A clear and concise description of what the bug is.
+      placeholder: What went wrong?
+  - type: textarea
+    id: reproduce
+    attributes:
+      label: Steps to reproduce
+      description: Steps to reproduce the behavior.
+      placeholder: How can we replicate the issue?
+  - type: textarea
+    id: expected_behavior
+    attributes:
+      label: Expected Behavior
+      description: A clear and concise description of what you expected to happen.
+      placeholder: What should have happened?
+  - type: textarea
+    id: configused
+    attributes:
+      label: DeepCode Config Used
+      description: The DeepCode configuration used for the run.
+      placeholder: The settings content or DeepCode configuration
+      value: |
+        # Paste your config here
+  - type: textarea
+    id: screenshotslogs
+    attributes:
+      label: Logs and screenshots
+      description: If applicable, add screenshots and logs to help explain your problem.
+      placeholder: Add logs and screenshots here
+  - type: textarea
+    id: additional_information
+    attributes:
+      label: Additional Information
+      description: |
+        - DeepCode Version: e.g., v0.1.1
+        - Operating System: e.g., Windows 10, Ubuntu 20.04
+        - Python Version: e.g., 3.8
+        - Related Issues: e.g., #1
+        - Any other relevant information.
+      value: |
+        - DeepCode Version:
+        - Operating System:
+        - Python Version:
+        - Related Issues:
projects/ui/DeepCode/.github/ISSUE_TEMPLATE/config.yml ADDED
@@ -0,0 +1 @@
+blank_issues_enabled: false
projects/ui/DeepCode/.github/ISSUE_TEMPLATE/feature_request.yml ADDED
@@ -0,0 +1,26 @@
+name: Feature Request
+description: File a feature request
+labels: ["enhancement"]
+title: "[Feature Request]:"
+
+body:
+  - type: checkboxes
+    id: existingcheck
+    attributes:
+      label: Do you need to file a feature request?
+      description: Please help us manage our time by avoiding duplicates and common feature requests with the steps below.
+      options:
+        - label: I have searched the existing feature requests and this feature request is not already filed.
+        - label: I believe this is a legitimate feature request, not just a question or bug.
+  - type: textarea
+    id: feature_request_description
+    attributes:
+      label: Feature Request Description
+      description: A clear and concise description of the feature you would like.
+      placeholder: What would this feature add or improve?
+  - type: textarea
+    id: additional_context
+    attributes:
+      label: Additional Context
+      description: Add any other context or screenshots about the feature request here.
+      placeholder: Any additional information
projects/ui/DeepCode/.github/ISSUE_TEMPLATE/question.yml ADDED
@@ -0,0 +1,26 @@
+name: Question
+description: Ask a general question
+labels: ["question"]
+title: "[Question]:"
+
+body:
+  - type: checkboxes
+    id: existingcheck
+    attributes:
+      label: Do you need to ask a question?
+      description: Please help us manage our time by avoiding duplicates and common questions with the steps below.
+      options:
+        - label: I have searched the existing questions and discussions and this question is not already answered.
+        - label: I believe this is a legitimate question, not just a bug or feature request.
+  - type: textarea
+    id: question
+    attributes:
+      label: Your Question
+      description: A clear and concise description of your question.
+      placeholder: What is your question?
+  - type: textarea
+    id: context
+    attributes:
+      label: Additional Context
+      description: Provide any additional context or details that might help us understand your question better.
+      placeholder: Add any relevant information here
projects/ui/DeepCode/.github/workflows/linting.yaml ADDED
@@ -0,0 +1,30 @@
+name: Linting and Formatting
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    branches:
+      - main
+
+jobs:
+  lint-and-format:
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v2
+
+      - name: Set up Python
+        uses: actions/setup-python@v2
+        with:
+          python-version: '3.x'
+
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install pre-commit
+
+      - name: Run pre-commit
+        run: pre-commit run --all-files --show-diff-on-failure
projects/ui/DeepCode/.github/workflows/pypi-publish.yml ADDED
@@ -0,0 +1,52 @@
+name: Upload DeepCode Package
+
+on:
+  release:
+    types: [published]
+
+permissions:
+  contents: read
+
+jobs:
+  release-build:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/setup-python@v5
+        with:
+          python-version: "3.x"
+
+      - name: Build release distributions
+        run: |
+          python -m pip install build
+          python -m build
+
+      - name: Upload distributions
+        uses: actions/upload-artifact@v4
+        with:
+          name: release-dists
+          path: dist/
+
+  pypi-publish:
+    runs-on: ubuntu-latest
+    needs:
+      - release-build
+    permissions:
+      id-token: write
+
+    environment:
+      name: pypi
+
+    steps:
+      - name: Retrieve release distributions
+        uses: actions/download-artifact@v4
+        with:
+          name: release-dists
+          path: dist/
+
+      - name: Publish release distributions to PyPI
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          packages-dir: dist/
projects/ui/DeepCode/cli/workflows/__init__.py ADDED
@@ -0,0 +1,11 @@
+"""
+CLI-specific Workflow Adapters
+
+This module provides CLI-optimized versions of workflow components that are
+specifically adapted for command-line interface usage patterns.
+"""
+
+from .cli_workflow_adapter import CLIWorkflowAdapter
+
+__all__ = ["CLIWorkflowAdapter"]
projects/ui/DeepCode/cli/workflows/cli_workflow_adapter.py ADDED
@@ -0,0 +1,336 @@
+"""
+CLI Workflow Adapter for Agent Orchestration Engine
+
+This adapter provides a CLI-optimized interface to the latest agent orchestration
+engine, with enhanced progress reporting, error handling, and CLI-specific
+optimizations.
+"""
+
+import os
+from typing import Callable, Dict, Any
+from mcp_agent.app import MCPApp
+
+
+class CLIWorkflowAdapter:
+    """
+    CLI-optimized workflow adapter for the intelligent agent orchestration engine.
+
+    This adapter provides:
+    - Enhanced CLI progress reporting
+    - Optimized error handling for CLI environments
+    - A streamlined interface for command-line usage
+    - Integration with the latest agent orchestration engine
+    """
+
+    def __init__(self, cli_interface=None):
+        """
+        Initialize the CLI workflow adapter.
+
+        Args:
+            cli_interface: CLI interface instance for progress reporting
+        """
+        self.cli_interface = cli_interface
+        self.app = None
+        self.logger = None
+        self.context = None
+
+    async def initialize_mcp_app(self) -> Dict[str, Any]:
+        """
+        Initialize the MCP application for CLI usage.
+
+        Returns:
+            dict: Initialization result
+        """
+        try:
+            if self.cli_interface:
+                self.cli_interface.show_spinner(
+                    "🚀 Initializing Agent Orchestration Engine", 2.0
+                )
+
+            # Initialize MCP application
+            self.app = MCPApp(name="cli_agent_orchestration")
+            self.app_context = self.app.run()
+            agent_app = await self.app_context.__aenter__()
+
+            self.logger = agent_app.logger
+            self.context = agent_app.context
+
+            # Configure filesystem access
+            self.context.config.mcp.servers["filesystem"].args.extend([os.getcwd()])
+
+            if self.cli_interface:
+                self.cli_interface.print_status(
+                    "🧠 Agent Orchestration Engine initialized successfully", "success"
+                )
+
+            return {
+                "status": "success",
+                "message": "MCP application initialized successfully",
+            }
+
+        except Exception as e:
+            error_msg = f"Failed to initialize MCP application: {str(e)}"
+            if self.cli_interface:
+                self.cli_interface.print_status(error_msg, "error")
+            return {"status": "error", "message": error_msg}
+
+    async def cleanup_mcp_app(self):
+        """Clean up MCP application resources."""
+        if hasattr(self, "app_context"):
+            try:
+                await self.app_context.__aexit__(None, None, None)
+                if self.cli_interface:
+                    self.cli_interface.print_status(
+                        "🧹 Resources cleaned up successfully", "info"
+                    )
+            except Exception as e:
+                if self.cli_interface:
+                    self.cli_interface.print_status(
+                        f"⚠️ Cleanup warning: {str(e)}", "warning"
+                    )
+
+    def create_cli_progress_callback(self) -> Callable:
+        """
+        Create a CLI-optimized progress callback function.
+
+        Returns:
+            Callable: Progress callback function
+        """
+
+        def progress_callback(progress: int, message: str):
+            if self.cli_interface:
+                # Map progress to CLI stages
+                if progress <= 10:
+                    self.cli_interface.display_processing_stages(1)
+                elif progress <= 25:
+                    self.cli_interface.display_processing_stages(2)
+                elif progress <= 40:
+                    self.cli_interface.display_processing_stages(3)
+                elif progress <= 50:
+                    self.cli_interface.display_processing_stages(4)
+                elif progress <= 60:
+                    self.cli_interface.display_processing_stages(5)
+                elif progress <= 70:
+                    self.cli_interface.display_processing_stages(6)
+                elif progress <= 85:
+                    self.cli_interface.display_processing_stages(7)
+                else:
+                    self.cli_interface.display_processing_stages(8)
+
+                # Display status message
+                self.cli_interface.print_status(message, "processing")
+
+        return progress_callback
+
+    async def execute_full_pipeline(
+        self, input_source: str, enable_indexing: bool = True
+    ) -> Dict[str, Any]:
+        """
+        Execute the complete intelligent multi-agent research orchestration pipeline.
+
+        Args:
+            input_source: Research input source (file path, URL, or preprocessed analysis)
+            enable_indexing: Whether to enable advanced intelligence analysis
+
+        Returns:
+            dict: Comprehensive pipeline execution result
+        """
+        try:
+            # Import the latest agent orchestration engine
+            from workflows.agent_orchestration_engine import (
+                execute_multi_agent_research_pipeline,
+            )
+
+            # Create CLI progress callback
+            progress_callback = self.create_cli_progress_callback()
+
+            # Display pipeline start
+            if self.cli_interface:
+                mode = "comprehensive" if enable_indexing else "optimized"
+                self.cli_interface.print_status(
+                    f"🚀 Starting {mode} agent orchestration pipeline...", "processing"
+                )
+                self.cli_interface.display_processing_stages(0)
+
+            # Execute the pipeline
+            result = await execute_multi_agent_research_pipeline(
+                input_source=input_source,
+                logger=self.logger,
+                progress_callback=progress_callback,
+                enable_indexing=enable_indexing,
+            )
+
+            # Display completion
+            if self.cli_interface:
+                self.cli_interface.display_processing_stages(8)
+                self.cli_interface.print_status(
+                    "🎉 Agent orchestration pipeline completed successfully!",
+                    "complete",
+                )
+
+            return {
+                "status": "success",
+                "result": result,
+                "pipeline_mode": "comprehensive" if enable_indexing else "optimized",
+            }
+
+        except Exception as e:
+            error_msg = f"Pipeline execution failed: {str(e)}"
+            if self.cli_interface:
+                self.cli_interface.print_status(error_msg, "error")
+
+            return {
+                "status": "error",
+                "error": error_msg,
+                "pipeline_mode": "comprehensive" if enable_indexing else "optimized",
+            }
+
+    async def execute_chat_pipeline(self, user_input: str) -> Dict[str, Any]:
+        """
+        Execute the chat-based planning and implementation pipeline.
+
+        Args:
+            user_input: User's coding requirements and description
+
+        Returns:
+            dict: Chat pipeline execution result
+        """
+        try:
+            # Import the chat-based pipeline
+            from workflows.agent_orchestration_engine import (
+                execute_chat_based_planning_pipeline,
+            )
+
+            # Create CLI progress callback for chat mode
+            def chat_progress_callback(progress: int, message: str):
+                if self.cli_interface:
+                    # Map progress to CLI stages for chat mode
+                    if progress <= 5:
+                        self.cli_interface.display_processing_stages(
+                            0, chat_mode=True
+                        )  # Initialize
+                    elif progress <= 30:
+                        self.cli_interface.display_processing_stages(
+                            1, chat_mode=True
+                        )  # Planning
+                    elif progress <= 50:
+                        self.cli_interface.display_processing_stages(
+                            2, chat_mode=True
+                        )  # Setup
+                    elif progress <= 70:
+                        self.cli_interface.display_processing_stages(
+                            3, chat_mode=True
+                        )  # Save Plan
+                    else:
+                        self.cli_interface.display_processing_stages(
+                            4, chat_mode=True
+                        )  # Implement
+
+                    # Display status message
+                    self.cli_interface.print_status(message, "processing")
+
+            # Display pipeline start
+            if self.cli_interface:
+                self.cli_interface.print_status(
+                    "🚀 Starting chat-based planning pipeline...", "processing"
+                )
+                self.cli_interface.display_processing_stages(0, chat_mode=True)
+
+            # Execute the chat pipeline with indexing enabled for enhanced code understanding
+            result = await execute_chat_based_planning_pipeline(
+                user_input=user_input,
+                logger=self.logger,
+                progress_callback=chat_progress_callback,
+                enable_indexing=True,  # Enable indexing for better code implementation
+            )
+
+            # Display completion
+            if self.cli_interface:
+                self.cli_interface.display_processing_stages(
+                    4, chat_mode=True
+                )  # Final stage for chat mode
+                self.cli_interface.print_status(
+                    "🎉 Chat-based planning pipeline completed successfully!",
+                    "complete",
+                )
+
+            return {"status": "success", "result": result, "pipeline_mode": "chat"}
+
+        except Exception as e:
+            error_msg = f"Chat pipeline execution failed: {str(e)}"
+            if self.cli_interface:
+                self.cli_interface.print_status(error_msg, "error")
+
+            return {"status": "error", "error": error_msg, "pipeline_mode": "chat"}
+
+    async def process_input_with_orchestration(
+        self, input_source: str, input_type: str, enable_indexing: bool = True
+    ) -> Dict[str, Any]:
+        """
+        Process input using the intelligent agent orchestration engine.
+
+        This is the main CLI interface to the latest agent orchestration capabilities.
+
+        Args:
+            input_source: Input source (file path or URL)
+            input_type: Type of input ('file', 'url', or 'chat')
+            enable_indexing: Whether to enable advanced intelligence analysis
+
+        Returns:
+            dict: Processing result with status and details
+        """
+        pipeline_result = None
+
+        try:
+            # Initialize MCP app
+            init_result = await self.initialize_mcp_app()
+            if init_result["status"] != "success":
+                return init_result
+
+            # Process file:// URLs for traditional file/URL inputs
+            if input_source.startswith("file://"):
+                file_path = input_source[7:]
+                if os.name == "nt" and file_path.startswith("/"):
+                    file_path = file_path.lstrip("/")
+                input_source = file_path
+
+            # Execute the appropriate pipeline based on input type
+            if input_type == "chat":
+                # Use the chat-based planning pipeline for user requirements
+                pipeline_result = await self.execute_chat_pipeline(input_source)
+            else:
+                # Use the traditional multi-agent research pipeline for files/URLs
+                pipeline_result = await self.execute_full_pipeline(
+                    input_source, enable_indexing=enable_indexing
+                )
+
+            return {
+                "status": pipeline_result["status"],
+                "analysis_result": "Integrated into agent orchestration pipeline",
+                "download_result": "Integrated into agent orchestration pipeline",
+                "repo_result": pipeline_result.get("result", ""),
+                "pipeline_mode": pipeline_result.get("pipeline_mode", "comprehensive"),
+                "error": pipeline_result.get("error"),
+            }
+
+        except Exception as e:
+            error_msg = f"Error during orchestrated processing: {str(e)}"
+            if self.cli_interface:
+                self.cli_interface.print_status(error_msg, "error")
+
+            return {
+                "status": "error",
+                "error": error_msg,
+                "analysis_result": "",
+                "download_result": "",
+                "repo_result": "",
+                "pipeline_mode": "comprehensive" if enable_indexing else "optimized",
+            }
+
+        finally:
+            # Clean up resources
+            await self.cleanup_mcp_app()
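
The `file://` handling in `process_input_with_orchestration` above can be sketched as a small standalone helper. This is an illustrative extraction, not code from the commit; the name `normalize_file_url` is hypothetical, and the `windows` flag is added so the Windows branch can be exercised on any platform:

```python
import os


def normalize_file_url(input_source: str, windows: bool = (os.name == "nt")) -> str:
    """Strip a file:// scheme; on Windows also drop the leading slash of /C:/... paths."""
    if not input_source.startswith("file://"):
        return input_source
    file_path = input_source[7:]  # drop the 7-character "file://" prefix, as the adapter does
    if windows and file_path.startswith("/"):
        file_path = file_path.lstrip("/")
    return file_path
```

Non-`file://` inputs (URLs, plain paths, chat text) pass through unchanged, which is why the adapter can apply the normalization unconditionally before dispatching.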
projects/ui/DeepCode/workflows/agents/__init__.py ADDED
@@ -0,0 +1,12 @@
+"""
+Agents Package for Code Implementation Workflow
+
+This package contains specialized agents for different aspects of code implementation:
+- CodeImplementationAgent: Handles file-by-file code generation
+- ConciseMemoryAgent: Manages memory optimization and consistency across phases
+"""
+
+from .code_implementation_agent import CodeImplementationAgent
+from .memory_agent_concise import ConciseMemoryAgent as MemoryAgent
+
+__all__ = ["CodeImplementationAgent", "MemoryAgent"]
projects/ui/DeepCode/workflows/agents/code_implementation_agent.py ADDED
@@ -0,0 +1,1118 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Code Implementation Agent for File-by-File Development
3
+
4
+ Handles systematic code implementation with progress tracking and
5
+ memory optimization for long-running development sessions.
6
+ """
7
+
8
+ import json
9
+ import time
10
+ import logging
11
+ from typing import Dict, Any, List, Optional
12
+
13
+ # Import tiktoken for token calculation
14
+ try:
15
+ import tiktoken
16
+
17
+ TIKTOKEN_AVAILABLE = True
18
+ except ImportError:
19
+ TIKTOKEN_AVAILABLE = False
20
+
21
+ # Import prompts from code_prompts
22
+ import sys
23
+ import os
24
+
25
+ sys.path.insert(
26
+ 0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
27
+ )
28
+ from prompts.code_prompts import (
29
+ GENERAL_CODE_IMPLEMENTATION_SYSTEM_PROMPT,
30
+ )
31
+
32
+
33
+ class CodeImplementationAgent:
34
+ """
35
+ Code Implementation Agent for systematic file-by-file development
36
+
37
+ Responsibilities:
38
+ - Track file implementation progress
39
+ - Execute MCP tool calls for code generation
40
+ - Monitor implementation status
41
+ - Coordinate with Summary Agent for memory optimization
42
+ - Calculate token usage for context management
43
+ """
44
+
45
+ def __init__(
46
+ self,
47
+ mcp_agent,
48
+ logger: Optional[logging.Logger] = None,
49
+ enable_read_tools: bool = True,
50
+ ):
51
+ """
52
+ Initialize Code Implementation Agent
53
+
54
+ Args:
55
+ mcp_agent: MCP agent instance for tool calls
56
+ logger: Logger instance for tracking operations
57
+ enable_read_tools: Whether to enable read_file and read_code_mem tools (default: True)
58
+ """
59
+ self.mcp_agent = mcp_agent
60
+ self.logger = logger or self._create_default_logger()
61
+ self.enable_read_tools = enable_read_tools # Control read tools execution
62
+
63
+ self.implementation_summary = {
64
+ "completed_files": [],
65
+ "technical_decisions": [],
66
+ "important_constraints": [],
67
+ "architecture_notes": [],
68
+ "dependency_analysis": [], # Track dependency analysis and file reads
69
+ }
70
+ self.files_implemented_count = 0
71
+ self.implemented_files_set = (
72
+ set()
73
+ ) # Track unique file paths to avoid duplicate counting
74
+ self.files_read_for_dependencies = (
75
+ set()
76
+ ) # Track files read for dependency analysis
77
+ self.last_summary_file_count = (
78
+ 0 # Track the file count when last summary was triggered
79
+ )
80
+
81
+ # Token calculation settings
82
+ self.max_context_tokens = (
83
+ 200000 # Default max context tokens for Claude-3.5-Sonnet
84
+ )
85
+ self.token_buffer = 10000 # Safety buffer before reaching max
86
+ self.summary_trigger_tokens = (
87
+ self.max_context_tokens - self.token_buffer
88
+ ) # Trigger summary when approaching limit
89
+ self.last_summary_token_count = (
90
+ 0 # Track token count when last summary was triggered
91
+ )
92
+
93
+ # Initialize tokenizer
94
+ if TIKTOKEN_AVAILABLE:
95
+ try:
96
+ # Use Claude-3 tokenizer (approximation with OpenAI's o200k_base)
97
+ self.tokenizer = tiktoken.get_encoding("o200k_base")
98
+ self.logger.info("Token calculation enabled with o200k_base encoding")
99
+ except Exception as e:
100
+ self.tokenizer = None
101
+ self.logger.warning(f"Failed to initialize tokenizer: {e}")
102
+ else:
103
+ self.tokenizer = None
104
+ self.logger.warning(
105
+ "tiktoken not available, token-based summary triggering disabled"
106
+ )
107
+
108
+ # Analysis loop detection
109
+ self.recent_tool_calls = [] # Track recent tool calls to detect analysis loops
110
+ self.max_read_without_write = 5 # Max read_file calls without write_file
111
+
112
+ # Memory agent integration
113
+ self.memory_agent = None # Will be set externally
114
+ self.llm_client = None # Will be set externally
115
+ self.llm_client_type = None # Will be set externally
116
+
117
+ # Log read tools configuration
118
+ read_tools_status = "ENABLED" if self.enable_read_tools else "DISABLED"
119
+ self.logger.info(
120
+ f"🔧 Code Implementation Agent initialized - Read tools: {read_tools_status}"
121
+ )
122
+ if not self.enable_read_tools:
123
+ self.logger.info(
124
+ "🚫 Testing mode: read_file and read_code_mem will be skipped when called"
125
+ )
126
+
127
+ def _create_default_logger(self) -> logging.Logger:
128
+ """Create default logger if none provided"""
129
+ logger = logging.getLogger(f"{__name__}.CodeImplementationAgent")
130
+ # Don't add handlers to child loggers - let them propagate to root
131
+ logger.setLevel(logging.INFO)
132
+ return logger
133
+
134
+ def get_system_prompt(self) -> str:
135
+ """
136
+ Get the system prompt for code implementation
137
+ """
138
+ return GENERAL_CODE_IMPLEMENTATION_SYSTEM_PROMPT
139
+
140
+ def set_memory_agent(self, memory_agent, llm_client=None, llm_client_type=None):
141
+ """
142
+ Set memory agent for code summary generation
143
+
144
+ Args:
145
+ memory_agent: Memory agent instance
146
+ llm_client: LLM client for summary generation
147
+ llm_client_type: Type of LLM client ("anthropic" or "openai")
148
+ """
149
+ self.memory_agent = memory_agent
150
+ self.llm_client = llm_client
151
+ self.llm_client_type = llm_client_type
152
+ self.logger.info("Memory agent integration configured")
153
+
154
+ async def execute_tool_calls(self, tool_calls: List[Dict]) -> List[Dict]:
155
+ """
156
+ Execute MCP tool calls and track implementation progress
157
+
158
+ Args:
159
+ tool_calls: List of tool calls to execute
160
+
161
+ Returns:
162
+ List of tool execution results
163
+ """
164
+ results = []
165
+
166
+ for tool_call in tool_calls:
167
+ tool_name = tool_call["name"]
168
+ tool_input = tool_call["input"]
169
+
170
+ self.logger.info(f"Executing MCP tool: {tool_name}")
171
+
172
+ try:
173
+ # Check if read tools are disabled
174
+ if not self.enable_read_tools and tool_name in [
175
+ "read_file",
176
+ "read_code_mem",
177
+ ]:
178
+ # self.logger.info(f"🚫 SKIPPING {tool_name} - Read tools disabled for testing")
179
+ # Return a mock result indicating the tool was skipped
180
+ mock_result = json.dumps(
181
+ {
182
+ "status": "skipped",
183
+ "message": f"{tool_name} tool disabled for testing",
184
+ "tool_disabled": True,
185
+ "original_input": tool_input,
186
+ },
187
+ ensure_ascii=False,
188
+ )
189
+
190
+ results.append(
191
+ {
192
+ "tool_id": tool_call["id"],
193
+ "tool_name": tool_name,
194
+ "result": mock_result,
195
+ }
196
+ )
197
+ continue
198
+
199
+ # read_code_mem is now a proper MCP tool, no special handling needed
200
+
201
+ # INTERCEPT read_file calls - redirect to read_code_mem first if memory agent is available
202
+ if tool_name == "read_file":
203
+ file_path = tool_call["input"].get("file_path", "unknown")
204
+ self.logger.info(f"🔍 READ_FILE CALL DETECTED: {file_path}")
205
+ self.logger.info(
206
+ f"📊 Files implemented count: {self.files_implemented_count}"
207
+ )
208
+ self.logger.info(
209
+ f"🧠 Memory agent available: {self.memory_agent is not None}"
210
+ )
211
+
212
+ # Enable optimization if memory agent is available (more aggressive approach)
213
+ if self.memory_agent is not None:
214
+ self.logger.info(
215
+ f"🔄 INTERCEPTING read_file call for {file_path} (memory agent available)"
216
+ )
217
+ result = await self._handle_read_file_with_memory_optimization(
218
+ tool_call
219
+ )
220
+ results.append(result)
221
+ continue
222
+ else:
223
+ self.logger.info(
224
+ "📁 NO INTERCEPTION: no memory agent available"
225
+ )
226
+
227
+ if self.mcp_agent:
228
+ # Execute tool call through MCP protocol
229
+ result = await self.mcp_agent.call_tool(tool_name, tool_input)
230
+
231
+ # Track file implementation progress
232
+ if tool_name == "write_file":
233
+ await self._track_file_implementation_with_summary(
234
+ tool_call, result
235
+ )
236
+ elif tool_name == "read_file":
237
+ self._track_dependency_analysis(tool_call, result)
238
+
239
+ # Track tool calls for analysis loop detection
240
+ self._track_tool_call_for_loop_detection(tool_name)
241
+
242
+ results.append(
243
+ {
244
+ "tool_id": tool_call["id"],
245
+ "tool_name": tool_name,
246
+ "result": result,
247
+ }
248
+ )
249
+ else:
250
+ results.append(
251
+ {
252
+ "tool_id": tool_call["id"],
253
+ "tool_name": tool_name,
254
+ "result": json.dumps(
255
+ {
256
+ "status": "error",
257
+ "message": "MCP agent not initialized",
258
+ },
259
+ ensure_ascii=False,
260
+ ),
261
+ }
262
+ )
263
+
264
+ except Exception as e:
265
+ self.logger.error(f"MCP tool execution failed: {e}")
266
+ results.append(
267
+ {
268
+ "tool_id": tool_call["id"],
269
+ "tool_name": tool_name,
270
+ "result": json.dumps(
271
+ {"status": "error", "message": str(e)}, ensure_ascii=False
272
+ ),
273
+ }
274
+ )
275
+
276
+ return results
277
+
+     # _handle_read_code_mem method removed - read_code_mem is now a proper MCP tool
+ 
+     async def _handle_read_file_with_memory_optimization(self, tool_call: Dict) -> Dict:
+         """
+         Intercept read_file calls and redirect to read_code_mem if a summary exists.
+         This prevents unnecessary file reads if the summary is already available.
+         """
+         file_path = tool_call["input"].get("file_path")
+         if not file_path:
+             return {
+                 "tool_id": tool_call["id"],
+                 "tool_name": "read_file",
+                 "result": json.dumps(
+                     {"status": "error", "message": "file_path parameter is required"},
+                     ensure_ascii=False,
+                 ),
+             }
+ 
+         # Check if a summary exists for this file using the read_code_mem MCP tool
+         should_use_summary = False
+         if self.memory_agent and self.mcp_agent:
+             try:
+                 # Pass the file path as a list, as expected by read_code_mem
+                 read_code_mem_result = await self.mcp_agent.call_tool(
+                     "read_code_mem", {"file_paths": [file_path]}
+                 )
+ 
+                 # Parse the result to check whether a summary was found
+                 if isinstance(read_code_mem_result, str):
+                     try:
+                         result_data = json.loads(read_code_mem_result)
+                         # Check if any summaries were found in the results
+                         should_use_summary = (
+                             result_data.get("status")
+                             in ["all_summaries_found", "partial_summaries_found"]
+                             and result_data.get("summaries_found", 0) > 0
+                         )
+                     except json.JSONDecodeError:
+                         should_use_summary = False
+             except Exception as e:
+                 self.logger.debug(f"read_code_mem check failed for {file_path}: {e}")
+                 should_use_summary = False
+ 
+         if should_use_summary:
+             self.logger.info(f"🔄 READ_FILE INTERCEPTED: Using summary for {file_path}")
+ 
+             # Use the MCP agent to call the read_code_mem tool
+             if self.mcp_agent:
+                 result = await self.mcp_agent.call_tool(
+                     "read_code_mem", {"file_paths": [file_path]}
+                 )
+ 
+                 # Modify the result to indicate it was originally a read_file call
+                 try:
+                     result_data = (
+                         json.loads(result) if isinstance(result, str) else result
+                     )
+                     if isinstance(result_data, dict):
+                         # Extract the result for the single file we requested
+                         file_results = result_data.get("results", [])
+                         if file_results and len(file_results) > 0:
+                             specific_result = file_results[0]  # First (and only) result
+                             # Transform to the old single-file format for backward compatibility
+                             transformed_result = {
+                                 "status": specific_result.get("status", "no_summary"),
+                                 "file_path": specific_result.get(
+                                     "file_path", file_path
+                                 ),
+                                 "summary_content": specific_result.get(
+                                     "summary_content"
+                                 ),
+                                 "message": specific_result.get("message", ""),
+                                 "original_tool": "read_file",
+                                 "optimization": "redirected_to_read_code_mem",
+                             }
+                             final_result = json.dumps(
+                                 transformed_result, ensure_ascii=False
+                             )
+                         else:
+                             # Fallback if no results
+                             result_data["original_tool"] = "read_file"
+                             result_data["optimization"] = "redirected_to_read_code_mem"
+                             final_result = json.dumps(result_data, ensure_ascii=False)
+                     else:
+                         final_result = result
+                 except (json.JSONDecodeError, TypeError):
+                     final_result = result
+ 
+                 return {
+                     "tool_id": tool_call["id"],
+                     "tool_name": "read_file",  # Keep original tool name for tracking
+                     "result": final_result,
+                 }
+             else:
+                 self.logger.warning(
+                     "MCP agent not available for read_code_mem optimization"
+                 )
+         else:
+             self.logger.info(
+                 f"📁 READ_FILE: No summary for {file_path}, using actual file"
+             )
+ 
+         # Execute the original read_file call
+         if self.mcp_agent:
+             result = await self.mcp_agent.call_tool("read_file", tool_call["input"])
+ 
+             # Track dependency analysis for the actual file read
+             self._track_dependency_analysis(tool_call, result)
+ 
+             # Track tool calls for analysis loop detection
+             self._track_tool_call_for_loop_detection("read_file")
+ 
+             return {
+                 "tool_id": tool_call["id"],
+                 "tool_name": "read_file",
+                 "result": result,
+             }
+         else:
+             return {
+                 "tool_id": tool_call["id"],
+                 "tool_name": "read_file",
+                 "result": json.dumps(
+                     {"status": "error", "message": "MCP agent not initialized"},
+                     ensure_ascii=False,
+                 ),
+             }
+ 
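The summary-first interception above can be sketched in isolation. The snippet below is a minimal stand-in, not DeepCode's actual API: `SUMMARIES`, `FILES`, and both function names are hypothetical, and `read_code_mem` here mimics the JSON shape the method checks (`status`, `summaries_found`, `results`).

```python
import json

# Hypothetical in-memory stand-ins for the MCP tools; names are illustrative.
SUMMARIES = {"config.py": "Defines Config dataclass with model/api_key fields."}
FILES = {"config.py": "class Config: ...", "main.py": "def main(): ..."}

def read_code_mem(file_paths):
    """Mimic the read_code_mem result shape checked by the interceptor."""
    results = [
        {"file_path": p, "summary_content": SUMMARIES.get(p)} for p in file_paths
    ]
    found = sum(1 for r in results if r["summary_content"])
    if found == len(results):
        status = "all_summaries_found"
    elif found:
        status = "partial_summaries_found"
    else:
        status = "no_summaries_found"
    return json.dumps({"status": status, "summaries_found": found, "results": results})

def read_file_with_memory(file_path):
    """Return a cached summary when one exists, else the real file content."""
    check = json.loads(read_code_mem([file_path]))
    if check["summaries_found"] > 0:
        entry = check["results"][0]
        return {"result": entry["summary_content"],
                "optimization": "redirected_to_read_code_mem"}
    return {"result": FILES[file_path], "optimization": None}
```

The key design choice mirrored here is that the caller still sees a `read_file`-shaped result, so downstream tracking is unaffected by the redirect.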
+     async def _track_file_implementation_with_summary(
+         self, tool_call: Dict, result: Any
+     ):
+         """
+         Track file implementation and create a code summary.
+ 
+         Args:
+             tool_call: The write_file tool call
+             result: Result of the tool execution
+         """
+         # First do the regular tracking
+         self._track_file_implementation(tool_call, result)
+ 
+         # Then create and save a code summary if a memory agent is available
+         if self.memory_agent and self.llm_client and self.llm_client_type:
+             try:
+                 file_path = tool_call["input"].get("file_path")
+                 file_content = tool_call["input"].get("content", "")
+ 
+                 if file_path and file_content:
+                     # Create code implementation summary
+                     summary = await self.memory_agent.create_code_implementation_summary(
+                         self.llm_client,
+                         self.llm_client_type,
+                         file_path,
+                         file_content,
+                         self.get_files_implemented_count(),  # Pass the current file count
+                     )
+ 
+                     self.logger.info(
+                         f"Created code summary for implemented file: {file_path}, summary: {summary[:100]}..."
+                     )
+                 else:
+                     self.logger.warning(
+                         "Missing file path or content for summary generation"
+                     )
+ 
+             except Exception as e:
+                 self.logger.error(f"Failed to create code summary: {e}")
+ 
+     def _track_file_implementation(self, tool_call: Dict, result: Any):
+         """
+         Track file implementation progress.
+         """
+         try:
+             # Handle different result types from MCP
+             result_data = None
+ 
+             # Check if result is a CallToolResult object
+             if hasattr(result, "content"):
+                 # Extract content from CallToolResult
+                 if hasattr(result.content, "text"):
+                     result_content = result.content.text
+                 else:
+                     result_content = str(result.content)
+ 
+                 # Try to parse as JSON
+                 try:
+                     result_data = json.loads(result_content)
+                 except json.JSONDecodeError:
+                     # If not JSON, create a structure
+                     result_data = {
+                         "status": "success",
+                         "file_path": tool_call["input"].get("file_path", "unknown"),
+                     }
+             elif isinstance(result, str):
+                 # Try to parse string result
+                 try:
+                     result_data = json.loads(result)
+                 except json.JSONDecodeError:
+                     result_data = {
+                         "status": "success",
+                         "file_path": tool_call["input"].get("file_path", "unknown"),
+                     }
+             elif isinstance(result, dict):
+                 # Direct dictionary result
+                 result_data = result
+             else:
+                 # Fallback: assume success and extract file path from input
+                 result_data = {
+                     "status": "success",
+                     "file_path": tool_call["input"].get("file_path", "unknown"),
+                 }
+ 
+             # Extract file path for tracking
+             file_path = None
+             if result_data and result_data.get("status") == "success":
+                 file_path = result_data.get(
+                     "file_path", tool_call["input"].get("file_path", "unknown")
+                 )
+             else:
+                 file_path = tool_call["input"].get("file_path")
+ 
+             # Only count unique files, not repeated tool calls on the same file
+             if file_path and file_path not in self.implemented_files_set:
+                 # This is a new file implementation
+                 self.implemented_files_set.add(file_path)
+                 self.files_implemented_count += 1
+ 
+                 # Add to completed files list
+                 self.implementation_summary["completed_files"].append(
+                     {
+                         "file": file_path,
+                         "iteration": self.files_implemented_count,
+                         "timestamp": time.time(),
+                         "size": result_data.get("size", 0) if result_data else 0,
+                     }
+                 )
+             elif file_path and file_path in self.implemented_files_set:
+                 # This file was already implemented (duplicate tool call)
+                 self.logger.debug(
+                     f"File already tracked, skipping duplicate count: {file_path}"
+                 )
+             else:
+                 # No valid file path found
+                 self.logger.warning("No valid file path found for tracking")
+ 
+         except Exception as e:
+             self.logger.warning(f"Failed to track file implementation: {e}")
+             # Even if tracking fails, count based on tool input (still checking for duplicates)
+             file_path = tool_call["input"].get("file_path")
+             if file_path and file_path not in self.implemented_files_set:
+                 self.implemented_files_set.add(file_path)
+                 self.files_implemented_count += 1
+                 self.logger.info(
+                     f"File implementation counted (emergency fallback): count={self.files_implemented_count}, file={file_path}"
+                 )
+ 
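The tracking method above does two independent things: normalize heterogeneous tool results (object, JSON string, or dict) into one dict shape, and deduplicate counting with a set so repeated `write_file` calls on the same path are counted once. A minimal sketch of both, with an illustrative class name not taken from DeepCode:

```python
import json
import time

class ImplementationTracker:
    """Sketch of the normalize-then-deduplicate tracking logic."""

    def __init__(self):
        self.implemented_files = set()
        self.completed = []

    @staticmethod
    def normalize_result(result, fallback_path):
        # Accept dicts, JSON strings, or anything else; always return a dict.
        if isinstance(result, dict):
            return result
        if isinstance(result, str):
            try:
                return json.loads(result)
            except json.JSONDecodeError:
                pass
        return {"status": "success", "file_path": fallback_path}

    def track(self, file_path, result):
        data = self.normalize_result(result, file_path)
        path = data.get("file_path", file_path)
        if path in self.implemented_files:
            return False  # duplicate write_file call, not counted again
        self.implemented_files.add(path)
        self.completed.append({"file": path, "timestamp": time.time()})
        return True
```

Because membership is checked before incrementing, the count tracks unique files rather than raw tool-call volume.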
+     def _track_dependency_analysis(self, tool_call: Dict, result: Any):
+         """
+         Track dependency analysis through read_file calls.
+         """
+         try:
+             file_path = tool_call["input"].get("file_path")
+             if file_path:
+                 # Track unique files read for dependency analysis
+                 if file_path not in self.files_read_for_dependencies:
+                     self.files_read_for_dependencies.add(file_path)
+ 
+                     # Add to dependency analysis summary
+                     self.implementation_summary["dependency_analysis"].append(
+                         {
+                             "file_read": file_path,
+                             "timestamp": time.time(),
+                             "purpose": "dependency_analysis",
+                         }
+                     )
+ 
+                     self.logger.info(
+                         f"Dependency analysis tracked: file_read={file_path}"
+                     )
+ 
+         except Exception as e:
+             self.logger.warning(f"Failed to track dependency analysis: {e}")
+ 
+     def calculate_messages_token_count(self, messages: List[Dict]) -> int:
+         """
+         Calculate the total token count for a list of messages.
+ 
+         Args:
+             messages: List of chat messages with 'role' and 'content' keys
+ 
+         Returns:
+             Total token count
+         """
+         if not self.tokenizer:
+             # Fallback: rough estimation based on character count
+             total_chars = sum(len(str(msg.get("content", ""))) for msg in messages)
+             # Rough approximation: 1 token ≈ 4 characters
+             return total_chars // 4
+ 
+         try:
+             total_tokens = 0
+             for message in messages:
+                 content = str(message.get("content", ""))
+                 role = message.get("role", "")
+ 
+                 # Count tokens for content
+                 if content:
+                     content_tokens = len(
+                         self.tokenizer.encode(content, disallowed_special=())
+                     )
+                     total_tokens += content_tokens
+ 
+                 # Add tokens for role and message structure
+                 role_tokens = len(self.tokenizer.encode(role, disallowed_special=()))
+                 total_tokens += role_tokens + 4  # Extra tokens for message formatting
+ 
+             return total_tokens
+ 
+         except Exception as e:
+             self.logger.warning(f"Token calculation failed: {e}")
+             # Fallback estimation
+             total_chars = sum(len(str(msg.get("content", ""))) for msg in messages)
+             return total_chars // 4
+ 
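The token-counting method above falls back to a chars/4 heuristic when no tokenizer is present. A standalone sketch of that logic; the function name is illustrative, and `tokenizer` is assumed to expose a tiktoken-style `encode(text) -> list[int]` method:

```python
def estimate_message_tokens(messages, tokenizer=None):
    """Estimate total tokens for a list of chat messages.

    Without a tokenizer, approximate 1 token per 4 characters; with one,
    count content and role tokens plus a fixed per-message framing overhead.
    """
    if tokenizer is None:
        total_chars = sum(len(str(m.get("content", ""))) for m in messages)
        return total_chars // 4
    total = 0
    for m in messages:
        total += len(tokenizer.encode(str(m.get("content", ""))))
        total += len(tokenizer.encode(m.get("role", ""))) + 4  # message framing
    return total
```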
+     def should_trigger_summary_by_tokens(self, messages: List[Dict]) -> bool:
+         """
+         Check if a summary should be triggered based on token count.
+ 
+         Args:
+             messages: Current conversation messages
+ 
+         Returns:
+             True if a summary should be triggered based on token count
+         """
+         if not messages:
+             return False
+ 
+         # Calculate current token count
+         current_token_count = self.calculate_messages_token_count(messages)
+ 
+         # Trigger above the threshold, with a minimum of 10k tokens between summaries
+         should_trigger = (
+             current_token_count > self.summary_trigger_tokens
+             and current_token_count > self.last_summary_token_count + 10000
+         )
+ 
+         if should_trigger:
+             self.logger.info(
+                 f"Token-based summary trigger: current={current_token_count:,}, "
+                 f"threshold={self.summary_trigger_tokens:,}, "
+                 f"last_summary={self.last_summary_token_count:,}"
+             )
+ 
+         return should_trigger
+ 
+     def should_trigger_summary(
+         self, summary_trigger: int = 5, messages: List[Dict] = None
+     ) -> bool:
+         """
+         Check if a summary should be triggered, based on token count (preferred)
+         or file count (fallback).
+ 
+         Args:
+             summary_trigger: Number of files after which to trigger a summary (fallback)
+             messages: Current conversation messages for token calculation
+ 
+         Returns:
+             True if a summary should be triggered
+         """
+         # Primary: token-based triggering
+         if messages and self.tokenizer:
+             return self.should_trigger_summary_by_tokens(messages)
+ 
+         # Fallback: file-based triggering (original logic)
+         self.logger.info("Using fallback file-based summary triggering")
+         should_trigger = (
+             self.files_implemented_count > 0
+             and self.files_implemented_count % summary_trigger == 0
+             and self.files_implemented_count > self.last_summary_file_count
+         )
+ 
+         return should_trigger
+ 
+     def mark_summary_triggered(self, messages: List[Dict] = None):
+         """
+         Mark that a summary has been triggered for the current state.
+ 
+         Args:
+             messages: Current conversation messages for token tracking
+         """
+         # Update file-based tracking
+         self.last_summary_file_count = self.files_implemented_count
+ 
+         # Update token-based tracking
+         if messages and self.tokenizer:
+             self.last_summary_token_count = self.calculate_messages_token_count(
+                 messages
+             )
+             self.logger.info(
+                 f"Summary marked as triggered - file_count: {self.files_implemented_count}, "
+                 f"token_count: {self.last_summary_token_count:,}"
+             )
+         else:
+             self.logger.info(
+                 f"Summary marked as triggered for file count: {self.files_implemented_count}"
+             )
+ 
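The token-based trigger above combines an absolute threshold with a minimum gap since the last summary, so summaries cannot fire back-to-back. A condensed sketch; the threshold and gap defaults here are illustrative, not DeepCode's actual configuration values:

```python
def should_summarize(current_tokens, last_summary_tokens,
                     threshold=80_000, min_gap=10_000):
    """Fire past `threshold`, but never within `min_gap` tokens of the
    previous summary (simple hysteresis against repeated triggering)."""
    return (current_tokens > threshold
            and current_tokens > last_summary_tokens + min_gap)
```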
+     def get_implementation_summary(self) -> Dict[str, Any]:
+         """
+         Get the current implementation summary.
+         """
+         return self.implementation_summary.copy()
+ 
+     def get_files_implemented_count(self) -> int:
+         """
+         Get the number of files implemented so far.
+         """
+         return self.files_implemented_count
+ 
+     def get_read_tools_status(self) -> Dict[str, Any]:
+         """
+         Get the read tools configuration status.
+ 
+         Returns:
+             Dictionary with read tools status information
+         """
+         return {
+             "read_tools_enabled": self.enable_read_tools,
+             "status": "ENABLED" if self.enable_read_tools else "DISABLED",
+             "tools_affected": ["read_file", "read_code_mem"],
+             "description": "Read tools configuration for testing purposes",
+         }
+ 
+     def add_technical_decision(self, decision: str, context: str = ""):
+         """
+         Add a technical decision to the implementation summary.
+ 
+         Args:
+             decision: Description of the technical decision
+             context: Additional context for the decision
+         """
+         self.implementation_summary["technical_decisions"].append(
+             {"decision": decision, "context": context, "timestamp": time.time()}
+         )
+         self.logger.info(f"Technical decision recorded: {decision}")
+ 
+     def add_constraint(self, constraint: str, impact: str = ""):
+         """
+         Add an important constraint to the implementation summary.
+ 
+         Args:
+             constraint: Description of the constraint
+             impact: Impact of the constraint on implementation
+         """
+         self.implementation_summary["important_constraints"].append(
+             {"constraint": constraint, "impact": impact, "timestamp": time.time()}
+         )
+         self.logger.info(f"Constraint recorded: {constraint}")
+ 
+     def add_architecture_note(self, note: str, component: str = ""):
+         """
+         Add an architecture note to the implementation summary.
+ 
+         Args:
+             note: Architecture note description
+             component: Related component or module
+         """
+         self.implementation_summary["architecture_notes"].append(
+             {"note": note, "component": component, "timestamp": time.time()}
+         )
+         self.logger.info(f"Architecture note recorded: {note}")
+ 
+     def get_implementation_statistics(self) -> Dict[str, Any]:
+         """
+         Get comprehensive implementation statistics.
+         """
+         return {
+             "total_files_implemented": self.files_implemented_count,
+             "files_implemented_count": self.files_implemented_count,
+             "technical_decisions_count": len(
+                 self.implementation_summary["technical_decisions"]
+             ),
+             "constraints_count": len(
+                 self.implementation_summary["important_constraints"]
+             ),
+             "architecture_notes_count": len(
+                 self.implementation_summary["architecture_notes"]
+             ),
+             "dependency_analysis_count": len(
+                 self.implementation_summary["dependency_analysis"]
+             ),
+             "files_read_for_dependencies": len(self.files_read_for_dependencies),
+             "unique_files_implemented": len(self.implemented_files_set),
+             "completed_files_list": [
+                 f["file"] for f in self.implementation_summary["completed_files"]
+             ],
+             "dependency_files_read": list(self.files_read_for_dependencies),
+             "last_summary_file_count": self.last_summary_file_count,
+             "read_tools_status": self.get_read_tools_status(),  # Include read tools configuration
+         }
+ 
+     def force_enable_optimization(self):
+         """
+         Force-enable optimization for testing purposes.
+         """
+         self.files_implemented_count = 1
+         self.logger.info(
+             f"🔧 OPTIMIZATION FORCE ENABLED: files_implemented_count set to {self.files_implemented_count}"
+         )
+         print(
+             f"🔧 OPTIMIZATION FORCE ENABLED: files_implemented_count set to {self.files_implemented_count}"
+         )
+ 
+     def reset_implementation_tracking(self):
+         """
+         Reset implementation tracking (useful for new sessions).
+         """
+         self.implementation_summary = {
+             "completed_files": [],
+             "technical_decisions": [],
+             "important_constraints": [],
+             "architecture_notes": [],
+             "dependency_analysis": [],  # Reset dependency analysis and file reads
+         }
+         self.files_implemented_count = 0
+         self.implemented_files_set = set()  # Reset the unique files set
+         self.files_read_for_dependencies = set()  # Reset files read for dependency analysis
+         self.last_summary_file_count = 0  # File count when the last summary was triggered
+         self.last_summary_token_count = 0  # Token count when the last summary was triggered
+         self.logger.info("Implementation tracking reset")
+ 
+         # Reset analysis loop detection
+         self.recent_tool_calls = []
+         self.logger.info("Analysis loop detection reset")
+ 
+     def _track_tool_call_for_loop_detection(self, tool_name: str):
+         """
+         Track tool calls for analysis loop detection.
+ 
+         Args:
+             tool_name: Name of the tool called
+         """
+         self.recent_tool_calls.append(tool_name)
+         if len(self.recent_tool_calls) > self.max_read_without_write:
+             self.recent_tool_calls.pop(0)
+ 
+         # Only warn once the window is full; a single repeated call is not a loop
+         if (
+             len(self.recent_tool_calls) >= self.max_read_without_write
+             and len(set(self.recent_tool_calls)) == 1
+         ):
+             self.logger.warning("Analysis loop detected")
+ 
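The manual append/pop window above can be expressed more idiomatically with `collections.deque(maxlen=...)`, which evicts old entries automatically. A sketch under the assumption that `window` plays the role of `max_read_without_write` (the default of 5 is illustrative):

```python
from collections import deque

# Tool names treated as "analysis only", mirroring the set used above.
ANALYSIS_TOOLS = {"read_file", "search_reference_code", "get_all_available_references"}

class LoopDetector:
    """Sliding-window sketch of the analysis-loop detection logic."""

    def __init__(self, window=5):
        self.window = window
        self.recent = deque(maxlen=window)  # deque drops old calls automatically

    def record(self, tool_name):
        self.recent.append(tool_name)

    def in_analysis_loop(self):
        # A loop = a full window consisting solely of read/analysis tools.
        return (len(self.recent) == self.window
                and set(self.recent).issubset(ANALYSIS_TOOLS))
```

Any write or non-analysis tool entering the window immediately clears the loop condition, which is the behavior the guidance message relies on.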
+     def is_in_analysis_loop(self) -> bool:
+         """
+         Check if the agent is in an analysis loop (only reading files, not writing).
+ 
+         Returns:
+             True if in an analysis loop
+         """
+         if len(self.recent_tool_calls) < self.max_read_without_write:
+             return False
+ 
+         # Check if the recent calls are all read_file or search_reference_code
+         analysis_tools = {
+             "read_file",
+             "search_reference_code",
+             "get_all_available_references",
+         }
+         recent_calls_set = set(self.recent_tool_calls)
+ 
+         # If all recent calls are analysis tools, we're in an analysis loop
+         in_loop = (
+             recent_calls_set.issubset(analysis_tools) and len(recent_calls_set) >= 1
+         )
+ 
+         if in_loop:
+             self.logger.warning(
+                 f"Analysis loop detected! Recent calls: {self.recent_tool_calls}"
+             )
+ 
+         return in_loop
+ 
+     def get_analysis_loop_guidance(self) -> str:
+         """
+         Get guidance to break out of an analysis loop.
+ 
+         Returns:
+             Guidance message to encourage implementation
+         """
+         return f"""🚨 **ANALYSIS LOOP DETECTED - IMMEDIATE ACTION REQUIRED**
+ 
+ **Problem**: You've been reading/analyzing files for {len(self.recent_tool_calls)} consecutive calls without writing code.
+ **Recent tool calls**: {' → '.join(self.recent_tool_calls)}
+ 
+ **SOLUTION - IMPLEMENT CODE NOW**:
+ 1. **STOP ANALYZING** - You have enough information
+ 2. **Use write_file** to create the next code file according to the implementation plan
+ 3. **Choose ANY file** from the plan that hasn't been implemented yet
+ 4. **Write complete, working code** - don't ask for permission or clarification
+ 
+ **Files implemented so far**: {self.files_implemented_count}
+ **Your goal**: Implement MORE files, not analyze existing ones!
+ 
+ **CRITICAL**: Your next response MUST use write_file to create a new code file!"""
+ 
+     async def test_summary_functionality(self, test_file_path: str = None):
+         """
+         Test whether the code summary functionality is working correctly.
+ 
+         Args:
+             test_file_path: Specific file to test; if None, tests implemented files
+         """
+         if not self.memory_agent:
+             self.logger.warning("No memory agent available for testing")
+             return
+ 
+         if test_file_path:
+             files_to_test = [test_file_path]
+         else:
+             # Use implemented files from tracking, limited to the first 3 files
+             files_to_test = list(self.implemented_files_set)[:3]
+ 
+         if not files_to_test:
+             self.logger.warning("No implemented files to test")
+             return
+ 
+         # Test each file silently
+         summary_files_found = 0
+ 
+         for file_path in files_to_test:
+             if self.mcp_agent:
+                 try:
+                     result = await self.mcp_agent.call_tool(
+                         "read_code_mem", {"file_paths": [file_path]}
+                     )
+ 
+                     # Parse the result to check whether a summary was found
+                     result_data = (
+                         json.loads(result) if isinstance(result, str) else result
+                     )
+ 
+                     if (
+                         result_data.get("status")
+                         in ["all_summaries_found", "partial_summaries_found"]
+                         and result_data.get("summaries_found", 0) > 0
+                     ):
+                         summary_files_found += 1
+                 except Exception as e:
+                     self.logger.warning(
+                         f"Failed to test read_code_mem for {file_path}: {e}"
+                     )
+             else:
+                 self.logger.warning("MCP agent not available for testing")
+ 
+         self.logger.info(
+             f"📋 Summary testing: {summary_files_found}/{len(files_to_test)} files have summaries"
+         )
+ 
+     async def test_automatic_read_file_optimization(self):
+         """
+         Test the automatic read_file optimization that redirects to read_code_mem.
+         """
+         print("=" * 80)
+         print("🔄 TESTING AUTOMATIC READ_FILE OPTIMIZATION")
+         print("=" * 80)
+ 
+         # Simulate that at least one file has been implemented (to trigger optimization)
+         self.files_implemented_count = 1
+ 
+         # Test with a generic config file that should have a summary
+         test_file = "config.py"
+ 
+         print(f"📁 Testing automatic optimization for: {test_file}")
+         print(f"📊 Files implemented count: {self.files_implemented_count}")
+         print(
+             f"🔧 Optimization should be: {'ENABLED' if self.files_implemented_count > 0 else 'DISABLED'}"
+         )
+ 
+         # Create a simulated read_file tool call
+         simulated_read_file_call = {
+             "id": "test_read_file_optimization",
+             "name": "read_file",
+             "input": {"file_path": test_file},
+         }
+ 
+         print("\n🔄 Simulating read_file call:")
+         print(f"   Tool: {simulated_read_file_call['name']}")
+         print(f"   File: {simulated_read_file_call['input']['file_path']}")
+ 
+         # Execute the tool call (this should trigger automatic optimization)
+         results = await self.execute_tool_calls([simulated_read_file_call])
+ 
+         if results:
+             result = results[0]
+             print("\n✅ Tool execution completed:")
+             print(f"   Tool name: {result.get('tool_name', 'N/A')}")
+             print(f"   Tool ID: {result.get('tool_id', 'N/A')}")
+ 
+             # Parse the result to check whether the optimization occurred
+             try:
+                 result_data = json.loads(result.get("result", "{}"))
+                 if result_data.get("optimization") == "redirected_to_read_code_mem":
+                     print("🎉 SUCCESS: read_file was automatically optimized!")
+                     print(
+                         f"   Original tool: {result_data.get('original_tool', 'N/A')}"
+                     )
+                     print(f"   Status: {result_data.get('status', 'N/A')}")
+                 elif result_data.get("status") == "summary_found":
+                     print("🎉 SUCCESS: Summary was found and returned!")
+                 else:
+                     print("ℹ️ INFO: No optimization occurred (no summary available)")
+             except json.JSONDecodeError:
+                 print("⚠️ WARNING: Could not parse result as JSON")
+         else:
+             print("❌ ERROR: No results returned from tool execution")
+ 
+         print("\n" + "=" * 80)
+         print("🔄 AUTOMATIC READ_FILE OPTIMIZATION TEST COMPLETE")
+         print("=" * 80)
+ 
+     async def test_summary_optimization(self, test_file_path: str = "config.py"):
+         """
+         Test the summary optimization functionality with a specific file.
+ 
+         Args:
+             test_file_path: File path to test (default: config.py, which should be in the summary)
+         """
+         if not self.mcp_agent:
+             return False
+ 
+         try:
+             # Use the MCP agent to call the read_code_mem tool
+             result = await self.mcp_agent.call_tool(
+                 "read_code_mem", {"file_paths": [test_file_path]}
+             )
+ 
+             # Parse the result to check whether a summary was found
+             result_data = json.loads(result) if isinstance(result, str) else result
+ 
+             return (
+                 result_data.get("status")
+                 in ["all_summaries_found", "partial_summaries_found"]
+                 and result_data.get("summaries_found", 0) > 0
+             )
+         except Exception as e:
+             self.logger.warning(f"Failed to test read_code_mem optimization: {e}")
+             return False
+ 
+     async def test_read_tools_configuration(self):
+         """
+         Test the read tools configuration to verify that enabling/disabling works correctly.
+         """
+         print("=" * 60)
+         print("🧪 TESTING READ TOOLS CONFIGURATION")
+         print("=" * 60)
+ 
+         status = self.get_read_tools_status()
+         print(f"Read tools enabled: {status['read_tools_enabled']}")
+         print(f"Status: {status['status']}")
+         print(f"Tools affected: {status['tools_affected']}")
+ 
+         # Test with mock tool calls (read_code_mem takes a file_paths list)
+         test_tools = [
+             {
+                 "id": "test_read_file",
+                 "name": "read_file",
+                 "input": {"file_path": "test.py"},
+             },
+             {
+                 "id": "test_read_code_mem",
+                 "name": "read_code_mem",
+                 "input": {"file_paths": ["test.py"]},
+             },
+             {
+                 "id": "test_write_file",
+                 "name": "write_file",
+                 "input": {"file_path": "test.py", "content": "# test"},
+             },
+         ]
+ 
+         print(
+             f"\n🔄 Testing tool execution with read_tools_enabled={self.enable_read_tools}"
+         )
+ 
+         for tool_call in test_tools:
+             tool_name = tool_call["name"]
+             if not self.enable_read_tools and tool_name in [
+                 "read_file",
+                 "read_code_mem",
+             ]:
+                 print(f"🚫 {tool_name}: Would be SKIPPED (disabled)")
+             else:
+                 print(f"✅ {tool_name}: Would be EXECUTED")
+ 
+         print("=" * 60)
+         print("🧪 READ TOOLS CONFIGURATION TEST COMPLETE")
+         print("=" * 60)
+ 
+         return status
projects/ui/DeepCode/workflows/agents/document_segmentation_agent.py ADDED
@@ -0,0 +1,353 @@
+ """
+ Document Segmentation Agent
+ 
+ A lightweight agent that coordinates with the document segmentation MCP server
+ to analyze document structure and prepare segments for other agents.
+ """
+ 
+ import os
+ import logging
+ from typing import Dict, Any, Optional
+ 
+ from mcp_agent.agents.agent import Agent
+ from utils.llm_utils import get_preferred_llm_class
+ 
+ 
+ class DocumentSegmentationAgent:
+     """
+     Intelligent document segmentation agent with semantic analysis capabilities.
+ 
+     This enhanced agent provides:
+     1. **Semantic Document Classification**: Content-based document type identification
+     2. **Adaptive Segmentation Strategy**: Algorithm integrity and semantic coherence preservation
+     3. **Planning Agent Optimization**: Segment preparation specifically optimized for downstream agents
+     4. **Quality Intelligence Validation**: Advanced metrics for completeness and technical accuracy
+     5. **Algorithm Completeness Protection**: Ensures critical algorithms and formulas remain intact
+ 
+     Key improvements over traditional segmentation:
+     - Semantic content analysis vs mechanical structure splitting
+     - Dynamic character limits based on content complexity
+     - Enhanced relevance scoring for planning agents
+     - Algorithm and formula integrity preservation
+     - Content type-aware segmentation strategies
+     """
+ 
+     def __init__(self, logger: Optional[logging.Logger] = None):
+         self.logger = logger or self._create_default_logger()
+         self.mcp_agent = None
+ 
+     def _create_default_logger(self) -> logging.Logger:
+         """Create a default logger if none is provided"""
+         logger = logging.getLogger(f"{__name__}.DocumentSegmentationAgent")
+         logger.setLevel(logging.INFO)
+         return logger
+ 
+     async def __aenter__(self):
+         """Async context manager entry"""
+         await self.initialize()
+         return self
+ 
+     async def __aexit__(self, exc_type, exc_val, exc_tb):
+         """Async context manager exit"""
+         await self.cleanup()
+ 
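The `__aenter__`/`__aexit__` pair above makes the agent usable with `async with`, guaranteeing cleanup even if analysis raises. A minimal self-contained sketch of the same pattern; the class name and the bodies of `initialize`/`cleanup` are placeholders, not DeepCode code:

```python
import asyncio

class AsyncManaged:
    """Stand-in showing the async context manager lifecycle pattern."""

    def __init__(self):
        self.initialized = False

    async def initialize(self):
        self.initialized = True  # e.g. connect to the MCP server

    async def cleanup(self):
        self.initialized = False  # e.g. close the MCP connection

    async def __aenter__(self):
        await self.initialize()
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.cleanup()

async def main():
    async with AsyncManaged() as agent:
        assert agent.initialized  # live inside the block
    return agent  # cleanup has already run on exit

agent = asyncio.run(main())
```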
+     async def initialize(self):
+         """Initialize the MCP agent connection"""
+         try:
+             self.mcp_agent = Agent(
+                 name="DocumentSegmentationCoordinator",
+                 instruction="""You are an intelligent document segmentation coordinator that leverages advanced semantic analysis for optimal document processing.
+ 
+ Your enhanced capabilities include:
+ 1. **Semantic Content Analysis**: Coordinate intelligent document type classification based on content semantics rather than structural patterns
+ 2. **Algorithm Integrity Protection**: Ensure algorithm blocks, formulas, and related content maintain logical coherence
+ 3. **Adaptive Segmentation Strategy**: Select optimal segmentation approaches (semantic_research_focused, algorithm_preserve_integrity, concept_implementation_hybrid, etc.)
+ 4. **Quality Intelligence Validation**: Assess segmentation quality using enhanced metrics for completeness, relevance, and technical accuracy
+ 5. **Planning Agent Optimization**: Ensure segments are specifically optimized for ConceptAnalysisAgent, AlgorithmAnalysisAgent, and CodePlannerAgent needs
+ 
+ **Key Principles**:
+ - Prioritize content semantics over mechanical structure
+ - Preserve algorithm and formula completeness
+ - Optimize for downstream agent token efficiency
+ - Ensure technical content integrity
+ - Provide actionable quality assessments
+ 
+ Use the enhanced document-segmentation tools to deliver superior segmentation results that significantly improve planning agent performance.""",
+                 server_names=["document-segmentation", "filesystem"],
+             )
+ 
+             # Initialize the agent context
+             await self.mcp_agent.__aenter__()
+ 
+             # Attach LLM
+             self.llm = await self.mcp_agent.attach_llm(get_preferred_llm_class())
+ 
+             self.logger.info("DocumentSegmentationAgent initialized successfully")
+ 
+         except Exception as e:
+             self.logger.error(f"Failed to initialize DocumentSegmentationAgent: {e}")
+             raise
+ 
+     async def cleanup(self):
+         """Cleanup resources"""
+         if self.mcp_agent:
+             try:
+                 await self.mcp_agent.__aexit__(None, None, None)
+             except Exception as e:
+                 self.logger.warning(f"Error during cleanup: {e}")
+ 
+     async def analyze_and_prepare_document(
+         self, paper_dir: str, force_refresh: bool = False
+     ) -> Dict[str, Any]:
+         """
+         Perform intelligent semantic analysis and create optimized document segments.
+ 
+         This method coordinates with the enhanced document segmentation server to:
+         - Classify the document type using semantic content analysis
+         - Select the optimal segmentation strategy (semantic_research_focused, algorithm_preserve_integrity, etc.)
+         - Preserve algorithm and formula integrity
+         - Optimize segments for downstream planning agents
+ 
+         Args:
+             paper_dir: Path to the paper directory
+             force_refresh: Whether to force re-analysis with the latest algorithms
+ 
+         Returns:
+             Dict containing enhanced analysis results and intelligent segment information
+         """
+         try:
+             self.logger.info(f"Starting document analysis for: {paper_dir}")
+ 
+             # Check if a markdown file exists
+             md_files = [f for f in os.listdir(paper_dir) if f.endswith(".md")]
+             if not md_files:
+                 raise ValueError(f"No markdown file found in {paper_dir}")
125
+
126
+ # Use the enhanced document segmentation tool
127
+ message = f"""Please perform intelligent semantic analysis and segmentation for the document in directory: {paper_dir}
128
+
129
+ Use the analyze_and_segment_document tool with these parameters:
130
+ - paper_dir: {paper_dir}
131
+ - force_refresh: {force_refresh}
132
+
133
+ **Focus on these enhanced objectives**:
134
+ 1. **Semantic Document Classification**: Identify document type using content semantics (research_paper, algorithm_focused, technical_doc, etc.)
135
+ 2. **Intelligent Segmentation Strategy**: Select the optimal strategy based on content analysis:
136
+ - `semantic_research_focused` for research papers with high algorithm density
137
+ - `algorithm_preserve_integrity` for algorithm-heavy documents
138
+ - `concept_implementation_hybrid` for mixed concept/implementation content
139
+ 3. **Algorithm Completeness**: Ensure algorithm blocks, formulas, and related descriptions remain logically connected
140
+ 4. **Planning Agent Optimization**: Create segments that maximize effectiveness for ConceptAnalysisAgent, AlgorithmAnalysisAgent, and CodePlannerAgent
141
+
142
+ After segmentation, get a document overview and provide:
143
+ - Quality assessment of semantic segmentation approach
144
+ - Algorithm/formula integrity verification
145
+ - Recommendations for planning agent optimization
146
+ - Technical content completeness evaluation"""
147
+
148
+ result = await self.llm.generate_str(message=message)
149
+
150
+ self.logger.info("Document analysis completed successfully")
151
+
152
+ # Parse the result and return structured information
153
+ return {
154
+ "status": "success",
155
+ "paper_dir": paper_dir,
156
+ "analysis_result": result,
157
+ "segments_available": True,
158
+ }
159
+
160
+ except Exception as e:
161
+ self.logger.error(f"Error in document analysis: {e}")
162
+ return {
163
+ "status": "error",
164
+ "paper_dir": paper_dir,
165
+ "error_message": str(e),
166
+ "segments_available": False,
167
+ }
168
+
169
+ async def get_document_overview(self, paper_dir: str) -> Dict[str, Any]:
170
+ """
171
+ Get overview of document structure and segments.
172
+
173
+ Args:
174
+ paper_dir: Path to the paper directory
175
+
176
+ Returns:
177
+ Dict containing document overview information
178
+ """
179
+ try:
180
+ message = f"""Please provide an intelligent overview of the enhanced document segmentation for: {paper_dir}
181
+
182
+ Use the get_document_overview tool to retrieve:
183
+ - **Semantic Document Classification**: Document type and confidence score
184
+ - **Adaptive Segmentation Strategy**: Strategy used and reasoning
185
+ - **Segment Intelligence**: Total segments with enhanced metadata
186
+ - **Content Type Distribution**: Breakdown by algorithm, concept, formula, implementation content
187
+ - **Quality Intelligence Assessment**: Completeness, coherence, and planning agent optimization
188
+
189
+ Provide a comprehensive analysis focusing on:
190
+ 1. Semantic vs structural segmentation quality
191
+ 2. Algorithm and formula integrity preservation
192
+ 3. Segment relevance for downstream planning agents
193
+ 4. Technical content distribution and completeness"""
194
+
195
+ result = await self.llm.generate_str(message=message)
196
+
197
+ return {
198
+ "status": "success",
199
+ "paper_dir": paper_dir,
200
+ "overview_result": result,
201
+ }
202
+
203
+ except Exception as e:
204
+ self.logger.error(f"Error getting document overview: {e}")
205
+ return {"status": "error", "paper_dir": paper_dir, "error_message": str(e)}
206
+
207
+ async def validate_segmentation_quality(self, paper_dir: str) -> Dict[str, Any]:
208
+ """
209
+ Validate the quality of document segmentation.
210
+
211
+ Args:
212
+ paper_dir: Path to the paper directory
213
+
214
+ Returns:
215
+ Dict containing validation results
216
+ """
217
+ try:
218
+ # Get overview first
219
+ overview_result = await self.get_document_overview(paper_dir)
220
+
221
+ if overview_result["status"] != "success":
222
+ return overview_result
223
+
224
+ # Analyze enhanced segmentation quality
225
+ message = f"""Based on the intelligent document overview for {paper_dir}, please evaluate the enhanced segmentation quality using advanced criteria.
226
+
227
+ **Enhanced Quality Assessment Factors**:
228
+ 1. **Semantic Coherence**: Do segments maintain logical content boundaries vs mechanical structural splits?
229
+ 2. **Algorithm Integrity**: Are algorithm blocks, formulas, and related explanations kept together?
230
+ 3. **Content Type Optimization**: Are different content types (algorithm, concept, formula, implementation) properly identified and scored?
231
+ 4. **Planning Agent Effectiveness**: Will ConceptAnalysisAgent, AlgorithmAnalysisAgent, and CodePlannerAgent receive optimal information?
232
+ 5. **Dynamic Sizing**: Are segments adaptively sized based on content complexity rather than fixed limits?
233
+ 6. **Technical Completeness**: Are critical technical details preserved without fragmentation?
234
+
235
+ **Provide specific recommendations for**:
236
+ - Semantic segmentation improvements
237
+ - Algorithm/formula integrity enhancements
238
+ - Planning agent optimization opportunities
239
+ - Content distribution balance adjustments"""
240
+
241
+ validation_result = await self.llm.generate_str(message=message)
242
+
243
+ return {
244
+ "status": "success",
245
+ "paper_dir": paper_dir,
246
+ "validation_result": validation_result,
247
+ "overview_data": overview_result,
248
+ }
249
+
250
+ except Exception as e:
251
+ self.logger.error(f"Error validating segmentation quality: {e}")
252
+ return {"status": "error", "paper_dir": paper_dir, "error_message": str(e)}
253
+
254
+
255
+ async def run_document_segmentation_analysis(
256
+ paper_dir: str, logger: Optional[logging.Logger] = None, force_refresh: bool = False
257
+ ) -> Dict[str, Any]:
258
+ """
259
+ Convenience function to run document segmentation analysis.
260
+
261
+ Args:
262
+ paper_dir: Path to the paper directory
263
+ logger: Optional logger instance
264
+ force_refresh: Whether to force re-analysis
265
+
266
+ Returns:
267
+ Dict containing analysis results
268
+ """
269
+ async with DocumentSegmentationAgent(logger=logger) as agent:
270
+ # Analyze and prepare document
271
+ analysis_result = await agent.analyze_and_prepare_document(
272
+ paper_dir, force_refresh=force_refresh
273
+ )
274
+
275
+ if analysis_result["status"] == "success":
276
+ # Validate segmentation quality
277
+ validation_result = await agent.validate_segmentation_quality(paper_dir)
278
+ analysis_result["validation"] = validation_result
279
+
280
+ return analysis_result
281
+
282
+
283
+ # Utility function for integration with existing workflow
284
+ async def prepare_document_segments(
285
+ paper_dir: str, logger: Optional[logging.Logger] = None
286
+ ) -> Dict[str, Any]:
287
+ """
288
+ Prepare intelligent document segments optimized for planning agents.
289
+
290
+ This enhanced function leverages semantic analysis to create segments that:
291
+ - Preserve algorithm and formula integrity
292
+ - Optimize for ConceptAnalysisAgent, AlgorithmAnalysisAgent, and CodePlannerAgent
293
+ - Use adaptive character limits based on content complexity
294
+ - Maintain technical content completeness
295
+
296
+ Called from the orchestration engine (Phase 3.5) to prepare documents
297
+ before the planning phase with superior segmentation quality.
298
+
299
+ Args:
300
+ paper_dir: Path to the paper directory containing markdown file
301
+ logger: Optional logger instance for tracking
302
+
303
+ Returns:
304
+ Dict containing enhanced preparation results and intelligent metadata
305
+ """
306
+ logger = logger or logging.getLogger(__name__)
+ try:
+ logger.info(f"Preparing document segments for: {paper_dir}")
309
+
310
+ # Run analysis
311
+ result = await run_document_segmentation_analysis(
312
+ paper_dir=paper_dir,
313
+ logger=logger,
314
+ force_refresh=False, # Use cached analysis if available
315
+ )
316
+
317
+ if result["status"] == "success":
318
+ logger.info("Document segments prepared successfully")
319
+
320
+ # Create metadata for downstream agents
321
+ segments_dir = os.path.join(paper_dir, "document_segments")
322
+
323
+ return {
324
+ "status": "success",
325
+ "paper_dir": paper_dir,
326
+ "segments_dir": segments_dir,
327
+ "segments_ready": True,
328
+ "analysis_summary": result.get("analysis_result", ""),
329
+ "validation_summary": result.get("validation", {}).get(
330
+ "validation_result", ""
331
+ ),
332
+ }
333
+ else:
334
+ logger.error(
335
+ f"Document segmentation failed: {result.get('error_message', 'Unknown error')}"
336
+ )
337
+ return {
338
+ "status": "error",
339
+ "paper_dir": paper_dir,
340
+ "segments_ready": False,
341
+ "error_message": result.get(
342
+ "error_message", "Document segmentation failed"
343
+ ),
344
+ }
345
+
346
+ except Exception as e:
347
+ logger.error(f"Error preparing document segments: {e}")
348
+ return {
349
+ "status": "error",
350
+ "paper_dir": paper_dir,
351
+ "segments_ready": False,
352
+ "error_message": str(e),
353
+ }
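For illustration, a minimal sketch of how an orchestration step might branch on the status dict returned by `prepare_document_segments`. The `fake_prepare` stub and the `phase_3_5` name are hypothetical stand-ins; the real call would await `prepare_document_segments` itself.

```python
import asyncio

# Hypothetical stand-in for prepare_document_segments, used only to show
# the status-dict contract a caller can rely on.
async def fake_prepare(paper_dir: str) -> dict:
    return {
        "status": "success",
        "paper_dir": paper_dir,
        "segments_dir": paper_dir + "/document_segments",
        "segments_ready": True,
    }

async def phase_3_5(paper_dir: str) -> str:
    result = await fake_prepare(paper_dir)
    # Fall back to feeding the full document when segmentation is unavailable.
    return result["segments_dir"] if result.get("segments_ready") else paper_dir

print(asyncio.run(phase_3_5("./papers/1")))
```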
projects/ui/DeepCode/workflows/agents/memory_agent_concise.py ADDED
@@ -0,0 +1,1489 @@
1
+ """
2
+ Concise Memory Agent for Code Implementation Workflow
3
+
4
+ This memory agent implements a focused approach:
5
+ 1. Before first file: Normal conversation flow
6
+ 2. After first file: Keep only system_prompt + initial_plan + current round tool results
7
+ 3. Clean slate for each new code file generation
8
+
9
+ Key Features:
10
+ - Preserves system prompt and initial plan always
11
+ - After first file generation, discards previous conversation history
12
+ - Keeps only current round tool results from essential tools:
13
+ * read_code_mem, read_file, write_file
14
+ * execute_python, execute_bash
15
+ * search_code, search_reference_code, get_file_structure
16
+ - Provides clean, focused input for next write_file operation
17
+ """
18
+
19
+ import json
20
+ import logging
21
+ import os
22
+ import time
23
+ from datetime import datetime
24
+ from typing import Dict, Any, List, Optional
25
+
26
+
27
+ class ConciseMemoryAgent:
28
+ """
29
+ Concise Memory Agent - Focused Information Retention
30
+
31
+ Core Philosophy:
32
+ - Preserve essential context (system prompt + initial plan)
33
+ - After first file generation, use clean slate approach
34
+ - Keep only current round tool results from all essential MCP tools
35
+ - Remove conversational clutter and previous tool calls
36
+
37
+ Essential Tools Tracked:
38
+ - File Operations: read_code_mem, read_file, write_file
39
+ - Code Analysis: search_code, search_reference_code, get_file_structure
40
+ - Execution: execute_python, execute_bash
41
+ """
42
+
43
+ def __init__(
44
+ self,
45
+ initial_plan_content: str,
46
+ logger: Optional[logging.Logger] = None,
47
+ target_directory: Optional[str] = None,
48
+ default_models: Optional[Dict[str, str]] = None,
49
+ ):
50
+ """
51
+ Initialize Concise Memory Agent
52
+
53
+ Args:
54
+ initial_plan_content: Content of initial_plan.txt
55
+ logger: Logger instance
56
+ target_directory: Target directory for saving summaries
57
+ default_models: Default models configuration from workflow
58
+ """
59
+ self.logger = logger or self._create_default_logger()
60
+ self.initial_plan = initial_plan_content
61
+
62
+ # Store default models configuration
63
+ self.default_models = default_models or {
64
+ "anthropic": "claude-sonnet-4-20250514",
65
+ "openai": "gpt-4o",
66
+ }
67
+
68
+ # Memory state tracking - new logic: trigger after each write_file
69
+ self.last_write_file_detected = (
70
+ False # Track if write_file was called in current iteration
71
+ )
72
+ self.should_clear_memory_next = False # Flag to clear memory in next round
73
+ self.current_round = 0
74
+
75
+ # Parse phase structure from initial plan
76
+ self.phase_structure = self._parse_phase_structure()
77
+
78
+ # Extract all files from file structure in initial plan
79
+ self.all_files_list = self._extract_all_files_from_plan()
80
+
81
+ # Memory configuration
82
+ if target_directory:
83
+ self.save_path = target_directory
84
+ else:
85
+ self.save_path = "./deepcode_lab/papers/1/"
86
+
87
+ # Code summary file path
88
+ self.code_summary_path = os.path.join(
89
+ self.save_path, "implement_code_summary.md"
90
+ )
91
+
92
+ # Current round tool results storage
93
+ self.current_round_tool_results = []
94
+
95
+ # Track all implemented files
96
+ self.implemented_files = []
97
+
98
+ # Store Next Steps information temporarily (not saved to file)
99
+ self.current_next_steps = ""
100
+
101
+ self.logger.info(
102
+ f"Concise Memory Agent initialized with target directory: {self.save_path}"
103
+ )
104
+ self.logger.info(f"Code summary will be saved to: {self.code_summary_path}")
105
+ # self.logger.info(f"🤖 Using models - Anthropic: {self.default_models['anthropic']}, OpenAI: {self.default_models['openai']}")
106
+ self.logger.info(
107
+ "📝 NEW LOGIC: Memory clearing triggered after each write_file call"
108
+ )
109
+
110
+ def _create_default_logger(self) -> logging.Logger:
111
+ """Create default logger"""
112
+ logger = logging.getLogger(f"{__name__}.ConciseMemoryAgent")
113
+ logger.setLevel(logging.INFO)
114
+ return logger
115
+
116
+ def _parse_phase_structure(self) -> Dict[str, List[str]]:
117
+ """Parse implementation phases from initial plan"""
118
+ try:
119
+ phases = {}
120
+ lines = self.initial_plan.split("\n")
121
+ current_phase = None
122
+
123
+ for line in lines:
124
+ if "Phase" in line and ":" in line:
125
+ # Extract phase name
126
+ phase_parts = line.split(":")
127
+ if len(phase_parts) >= 2:
128
+ current_phase = phase_parts[0].strip()
129
+ phases[current_phase] = []
130
+ elif current_phase and line.strip().startswith("-"):
131
+ # This is a file in the current phase
132
+ file_line = line.strip()[1:].strip()
133
+ if file_line.startswith("`") and file_line.endswith("`"):
134
+ file_name = file_line[1:-1]
135
+ phases[current_phase].append(file_name)
136
+ elif current_phase and not line.strip():
137
+ # Blank lines inside a phase are skipped, not treated as a phase boundary
138
+ continue
139
+ elif current_phase and line.strip().startswith("###"):
140
+ # New section, end current phase
141
+ current_phase = None
142
+
143
+ return phases
144
+
145
+ except Exception as e:
146
+ self.logger.warning(f"Failed to parse phase structure: {e}")
147
+ return {}
148
+
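For illustration, a standalone re-implementation of the phase-parsing loop above, run on a hypothetical plan snippet (the phase names and file paths are invented) to show the input format it expects and the dict it produces:

```python
# Simplified sketch of _parse_phase_structure: "Phase ...:" headings open a
# phase, backticked "- `path`" bullets add files, "###" sections close it.
def parse_phases(plan: str) -> dict:
    phases, current = {}, None
    for line in plan.split("\n"):
        if "Phase" in line and ":" in line:
            current = line.split(":")[0].strip()
            phases[current] = []
        elif current and line.strip().startswith("-"):
            item = line.strip()[1:].strip()
            if item.startswith("`") and item.endswith("`"):
                phases[current].append(item[1:-1])
        elif current and line.strip().startswith("###"):
            current = None
    return phases

sample_plan = """### Phase 1: Core Modules
- `core/model.py`
- `core/trainer.py`

### Phase 2: Evaluation
- `evaluation/metrics.py`
"""
print(parse_phases(sample_plan))
```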
149
+ def _extract_all_files_from_plan(self) -> List[str]:
150
+ """
151
+ Extract all file paths from the file_structure section in initial plan
152
+ Handles multiple formats: tree structure, YAML, and simple lists
153
+
154
+ Returns:
155
+ List of all file paths that should be implemented
156
+ """
157
+ try:
158
+ lines = self.initial_plan.split("\n")
159
+ files = []
160
+
161
+ # Method 1: Try to extract from tree structure in file_structure section
162
+ files.extend(self._extract_from_tree_structure(lines))
163
+
164
+ # Method 2: If no files found, try to extract from simple list format
165
+ if not files:
166
+ files.extend(self._extract_from_simple_list(lines))
167
+
168
+ # Method 3: If still no files, try to extract from anywhere in the plan
169
+ if not files:
170
+ files.extend(self._extract_from_plan_content(lines))
171
+
172
+ # Clean and validate file paths
173
+ cleaned_files = self._clean_and_validate_files(files)
174
+
175
+ # Log the extracted files
176
+ self.logger.info(
177
+ f"📁 Extracted {len(cleaned_files)} files from initial plan"
178
+ )
179
+ if cleaned_files:
180
+ self.logger.info(f"📁 Sample files: {cleaned_files[:3]}...")
181
+
182
+ return cleaned_files
183
+
184
+ except Exception as e:
185
+ self.logger.error(f"Failed to extract files from initial plan: {e}")
186
+ return []
187
+
188
+ def _extract_from_tree_structure(self, lines: List[str]) -> List[str]:
189
+ """Extract files from tree structure format - only from file_structure section"""
190
+ files = []
191
+ in_file_structure = False
192
+ path_stack = []
193
+
194
+ for line in lines:
195
+ # Check if we're in the file_structure section
196
+ if "file_structure:" in line or "file_structure |" in line:
197
+ in_file_structure = True
198
+ continue
199
+ # Check for end of file_structure section (next YAML key)
200
+ elif (
201
+ in_file_structure
202
+ and line.strip()
203
+ and not line.startswith(" ")
204
+ and ":" in line
205
+ ):
206
+ # This looks like a new YAML section, stop parsing
207
+ break
208
+ elif not in_file_structure:
209
+ continue
210
+
211
+ if not line.strip():
212
+ continue
213
+
214
+ # Skip lines that look like YAML keys (contain ":" but not file paths)
215
+ if ":" in line and not ("." in line and "/" in line):
216
+ continue
217
+
218
+ stripped_line = line.strip()
219
+
220
+ # Detect root directory (directory name ending with / at minimal indentation)
221
+ if (
222
+ stripped_line.endswith("/")
223
+ and len(line) - len(line.lstrip())
224
+ <= 4 # Minimal indentation (0-4 spaces)
225
+ and not any(char in line for char in ["├", "└", "│", "─"])
226
+ ): # No tree characters
227
+ root_directory = stripped_line.rstrip("/")
228
+ path_stack = [root_directory]
229
+ continue
230
+
231
+ # Only process lines that have tree structure
232
+ if not any(char in line for char in ["├", "└", "│", "─"]):
233
+ continue
234
+
235
+ # Parse tree structure depth by analyzing the line structure
236
+ # Count │ characters before the actual item, or use indentation as fallback
237
+ pipe_count = 0
238
+
239
+ for i, char in enumerate(line):
240
+ if char == "│":
241
+ pipe_count += 1
242
+ elif char in ["├", "└"]:
243
+ break
244
+
245
+ # Calculate depth: use pipe count if available, otherwise use indentation
246
+ if pipe_count > 0:
247
+ depth = pipe_count + 1 # +1 because the actual item is one level deeper
248
+ else:
249
+ # Use indentation to determine depth (every 4 spaces = 1 level)
250
+ indent_spaces = len(line) - len(line.lstrip())
251
+ depth = max(1, indent_spaces // 4) # At least depth 1
252
+
253
+ # Clean the line to get the item name
254
+ clean_line = line
255
+ for char in ["├──", "└──", "├", "└", "│", "─"]:
256
+ clean_line = clean_line.replace(char, "")
257
+ clean_line = clean_line.strip()
258
+
259
+ if not clean_line or ":" in clean_line:
260
+ continue
261
+
262
+ # Extract filename (remove comments)
263
+ if "#" in clean_line:
264
+ filename = clean_line.split("#")[0].strip()
265
+ else:
266
+ filename = clean_line.strip()
267
+
268
+ # Skip empty filenames
269
+ if not filename:
270
+ continue
271
+
272
+ # Adjust path stack to current depth
273
+ while len(path_stack) < depth:
274
+ path_stack.append("")
275
+ path_stack = path_stack[:depth]
276
+
277
+ # Determine if it's a directory or file
278
+ is_directory = (
279
+ filename.endswith("/")
280
+ or (
281
+ "." not in filename
282
+ and filename not in ["README", "requirements.txt", "setup.py"]
283
+ )
284
+ or filename
285
+ in [
286
+ "core",
287
+ "networks",
288
+ "environments",
289
+ "baselines",
290
+ "evaluation",
291
+ "experiments",
292
+ "utils",
293
+ "src",
294
+ "lib",
295
+ "app",
296
+ ]
297
+ )
298
+
299
+ if is_directory:
300
+ directory_name = filename.rstrip("/")
301
+ if directory_name and ":" not in directory_name:
302
+ path_stack.append(directory_name)
303
+ else:
304
+ # It's a file, construct full path
305
+ if path_stack:
306
+ full_path = "/".join(path_stack) + "/" + filename
307
+ else:
308
+ full_path = filename
309
+ files.append(full_path)
310
+
311
+ return files
312
+
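For illustration, a condensed sketch of the tree-parsing idea above: depth comes from the count of `│` characters, directories push onto a path stack, and files join the stack into a full path. The sample tree is hypothetical, and this sketch omits the indentation fallback and the named-directory heuristics of the full method.

```python
# Minimal sketch of _extract_from_tree_structure's core logic.
def extract_tree_files(text: str) -> list:
    files, stack = [], []
    for line in text.split("\n"):
        s = line.strip()
        if s.endswith("/") and not any(c in line for c in "├└│─"):
            stack = [s.rstrip("/")]  # root directory line
            continue
        if not any(c in line for c in "├└│─"):
            continue
        depth = line.count("│") + 1  # pipes before the item give its depth
        name = s
        for c in ("├──", "└──", "│"):
            name = name.replace(c, "")
        name = name.split("#")[0].strip()  # drop trailing comments
        if not name:
            continue
        stack = stack[:depth]
        if name.endswith("/") or "." not in name:
            stack.append(name.rstrip("/"))  # directory: extend the path
        else:
            files.append("/".join(stack + [name]))  # file: emit full path
    return files

tree = """project/
├── core/
│   ├── model.py
│   └── trainer.py
└── README.md
"""
print(extract_tree_files(tree))
```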
313
+ def _extract_from_simple_list(self, lines: List[str]) -> List[str]:
314
+ """Extract files from simple list format (- filename)"""
315
+ files = []
316
+
317
+ for line in lines:
318
+ line = line.strip()
319
+ if line.startswith("- ") and not line.startswith('- "'):
320
+ # Remove leading "- " and clean up
321
+ filename = line[2:].strip()
322
+
323
+ # Remove quotes if present
324
+ if filename.startswith('"') and filename.endswith('"'):
325
+ filename = filename[1:-1]
326
+
327
+ # Check if it looks like a file (has extension)
328
+ if "." in filename and "/" in filename:
329
+ files.append(filename)
330
+
331
+ return files
332
+
333
+ def _extract_from_plan_content(self, lines: List[str]) -> List[str]:
334
+ """Extract files from anywhere in the plan content"""
335
+ files = []
336
+
337
+ # Look for common file patterns
338
+ import re
339
+
340
+ file_patterns = [
341
+ r"([a-zA-Z0-9_\-/]+\.[a-zA-Z0-9]+)", # filename.ext
342
+ r'"([a-zA-Z0-9_\-/]+\.[a-zA-Z0-9]+)"', # "filename.ext"
343
+ ]
344
+
345
+ for line in lines:
346
+ for pattern in file_patterns:
347
+ matches = re.findall(pattern, line)
348
+ for match in matches:
349
+ # Only include if it looks like a code file (exclude media files)
350
+ if "/" in match and any(
351
+ ext in match
352
+ for ext in [
353
+ ".py",
354
+ ".js",
355
+ ".html",
356
+ ".css",
357
+ ".md",
358
+ ".txt",
359
+ ".json",
360
+ ".yaml",
361
+ ".yml",
362
+ ".xml",
363
+ ".sql",
364
+ ".sh",
365
+ ".ts",
366
+ ".jsx",
367
+ ".tsx",
368
+ ]
369
+ ):
370
+ files.append(match)
371
+
372
+ return files
373
+
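For illustration, the two filename regexes used above applied to a hypothetical plan line. Note that the bare pattern also matches inside quotes, so a quoted path can appear twice; deduplication happens later in `_clean_and_validate_files`.

```python
import re

# The two patterns from _extract_from_plan_content.
patterns = [
    r"([a-zA-Z0-9_\-/]+\.[a-zA-Z0-9]+)",    # filename.ext
    r'"([a-zA-Z0-9_\-/]+\.[a-zA-Z0-9]+)"',  # "filename.ext"
]
line = 'Implement "src/utils/config.py" before src/main.py is wired up.'
matches = [m for p in patterns for m in re.findall(p, line)]
# Keep only path-like matches, mirroring the "/" check above.
matches = [m for m in matches if "/" in m]
print(matches)
```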
374
+ def _clean_and_validate_files(self, files: List[str]) -> List[str]:
375
+ """Clean and validate extracted file paths - only keep code files"""
376
+ cleaned_files = []
377
+
378
+ # Define code file extensions we want to track
379
+ code_extensions = [
380
+ ".py",
381
+ ".js",
382
+ ".html",
383
+ ".css",
384
+ ".md",
385
+ ".txt",
386
+ ".json",
387
+ ".yaml",
388
+ ".yml",
389
+ ".xml",
390
+ ".sql",
391
+ ".sh",
392
+ ".bat",
393
+ ".dockerfile",
394
+ ".env",
395
+ ".gitignore",
396
+ ".ts",
397
+ ".jsx",
398
+ ".tsx",
399
+ ".vue",
400
+ ".php",
401
+ ".rb",
402
+ ".go",
403
+ ".rs",
404
+ ".cpp",
405
+ ".c",
406
+ ".h",
407
+ ".hpp",
408
+ ".java",
409
+ ".kt",
410
+ ".swift",
411
+ ".dart",
412
+ ]
413
+
414
+ for file_path in files:
415
+ # Clean the path
416
+ cleaned_path = file_path.strip().strip('"').strip("'")
417
+
418
+ # Skip if empty
419
+ if not cleaned_path:
420
+ continue
421
+
422
+ # Skip directories (no file extension)
423
+ if "." not in cleaned_path.split("/")[-1]:
424
+ continue
425
+
426
+ # Only include files with code extensions
427
+ has_code_extension = any(
428
+ cleaned_path.lower().endswith(ext) for ext in code_extensions
429
+ )
430
+ if not has_code_extension:
431
+ continue
432
+
433
+ # Skip files that look like YAML keys or config entries
434
+ if (
435
+ ":" in cleaned_path
436
+ and not cleaned_path.endswith(".yaml")
437
+ and not cleaned_path.endswith(".yml")
438
+ ):
439
+ continue
440
+
441
+ # Skip paths that contain invalid characters for file paths
442
+ if any(invalid_char in cleaned_path for invalid_char in ['"', "'", "|"]):
443
+ continue
444
+
445
+ # Add to cleaned list if not already present
446
+ if cleaned_path not in cleaned_files:
447
+ cleaned_files.append(cleaned_path)
448
+
449
+ return sorted(cleaned_files)
450
+
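For illustration, a condensed sketch of the validation above run on hypothetical candidate paths, showing which entries the filter keeps and drops. The `code_extensions` list here is a small subset of the full list in the method.

```python
# Simplified sketch of _clean_and_validate_files' per-path checks.
code_extensions = [".py", ".md", ".yaml", ".sh"]  # subset for illustration

def clean(path):
    p = path.strip().strip('"').strip("'")
    if not p or "." not in p.split("/")[-1]:
        return None  # empty, or a directory-like entry with no extension
    if not any(p.lower().endswith(ext) for ext in code_extensions):
        return None  # untracked extensions (e.g. images) are excluded
    if ":" in p and not p.endswith((".yaml", ".yml")):
        return None  # looks like a YAML key, not a file path
    return p

candidates = ["src/app.py", "docs/", '"README.md"', "img/logo.png", "config: value"]
kept = sorted({c for c in (clean(x) for x in candidates) if c})
print(kept)
```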
451
+ def record_file_implementation(
452
+ self, file_path: str, implementation_content: str = ""
453
+ ):
454
+ """
455
+ Record a newly implemented file (simplified version)
456
+ NEW LOGIC: File implementation is tracked via write_file tool detection
457
+
458
+ Args:
459
+ file_path: Path of the implemented file
460
+ implementation_content: Content of the implemented file
461
+ """
462
+ # Add file to implemented files list if not already present
463
+ if file_path not in self.implemented_files:
464
+ self.implemented_files.append(file_path)
465
+
466
+ self.logger.info(f"📝 File implementation recorded: {file_path}")
467
+
468
+ async def create_code_implementation_summary(
469
+ self,
470
+ client,
471
+ client_type: str,
472
+ file_path: str,
473
+ implementation_content: str,
474
+ files_implemented: int,
475
+ ) -> str:
476
+ """
477
+ Create LLM-based code implementation summary after writing a file
478
+ Uses LLM to analyze and summarize the implemented code
479
+
480
+ Args:
481
+ client: LLM client instance
482
+ client_type: Type of LLM client ("anthropic" or "openai")
483
+ file_path: Path of the implemented file
484
+ implementation_content: Content of the implemented file
485
+ files_implemented: Number of files implemented so far
486
+
487
+ Returns:
488
+ LLM-generated formatted code implementation summary
489
+ """
490
+ try:
491
+ # Record the file implementation first
492
+ self.record_file_implementation(file_path, implementation_content)
493
+
494
+ # Create prompt for LLM summary
495
+ summary_prompt = self._create_code_summary_prompt(
496
+ file_path, implementation_content, files_implemented
497
+ )
498
+ summary_messages = [{"role": "user", "content": summary_prompt}]
499
+
500
+ # Get LLM-generated summary
501
+ llm_response = await self._call_llm_for_summary(
502
+ client, client_type, summary_messages
503
+ )
504
+ llm_summary = llm_response.get("content", "")
505
+
506
+ # Extract different sections from LLM summary
507
+ sections = self._extract_summary_sections(llm_summary)
508
+
509
+ # Store Next Steps in temporary variable (not saved to file)
510
+ self.current_next_steps = sections.get("next_steps", "")
511
+ if self.current_next_steps:
512
+ self.logger.info("📝 Next Steps stored temporarily (not saved to file)")
513
+
514
+ # Format summary with only Implementation Progress and Dependencies for file saving
515
+ file_summary_content = ""
516
+ if sections.get("core_purpose"):
517
+ file_summary_content += sections["core_purpose"] + "\n\n"
518
+ if sections.get("public_interface"):
519
+ file_summary_content += sections["public_interface"] + "\n\n"
520
+ if sections.get("internal_dependencies"):
521
+ file_summary_content += sections["internal_dependencies"] + "\n\n"
522
+ if sections.get("external_dependencies"):
523
+ file_summary_content += sections["external_dependencies"] + "\n\n"
524
+ if sections.get("implementation_notes"):
525
+ file_summary_content += sections["implementation_notes"] + "\n\n"
526
+
527
+ # Create the formatted summary for file saving (without Next Steps)
528
+ formatted_summary = self._format_code_implementation_summary(
529
+ file_path, file_summary_content.strip(), files_implemented
530
+ )
531
+
532
+ # Save to implement_code_summary.md (append mode) - only Implementation Progress and Dependencies
533
+ await self._save_code_summary_to_file(formatted_summary, file_path)
534
+
535
+ self.logger.info(f"Created and saved code summary for: {file_path}")
536
+ return formatted_summary
537
+
538
+ except Exception as e:
539
+ self.logger.error(
540
+ f"Failed to create LLM-based code implementation summary: {e}"
541
+ )
542
+ # Fallback to simple summary
543
+ return self._create_fallback_code_summary(
544
+ file_path, implementation_content, files_implemented
545
+ )
546
+
547
+ def _create_code_summary_prompt(
548
+ self, file_path: str, implementation_content: str, files_implemented: int
549
+ ) -> str:
550
+ """
551
+ Create prompt for LLM to generate code implementation summary
552
+
553
+ Args:
554
+ file_path: Path of the implemented file
555
+ implementation_content: Content of the implemented file
556
+ files_implemented: Number of files implemented so far
557
+
558
+ Returns:
559
+ Prompt for LLM summarization
560
+ """
561
+ current_round = self.current_round
562
+
563
+ # Get formatted file lists
564
+ file_lists = self.get_formatted_files_lists()
565
+ implemented_files_list = file_lists["implemented"]
566
+ unimplemented_files_list = file_lists["unimplemented"]
567
+
568
+ prompt = f"""You are an expert code implementation summarizer. Analyze the implemented code file and create a structured summary.
569
+
570
+ **🚨 CRITICAL: The files listed below are ALREADY IMPLEMENTED - DO NOT suggest them in Next Steps! 🚨**
571
+
572
+ **All Previously Implemented Files:**
573
+ {implemented_files_list}
574
+
575
+ **Remaining Unimplemented Files (choose ONLY from these for Next Steps):**
576
+ {unimplemented_files_list}
577
+
578
+ **Current Implementation Context:**
579
+ - **File Implemented**: {file_path}
580
+ - **Current Round**: {current_round}
581
+ - **Total Files Implemented**: {files_implemented}
582
+
583
+
584
+ **Initial Plan Reference:**
585
+ {self.initial_plan}
586
+
587
+ **Implemented Code Content:**
588
+ ```
589
+ {implementation_content}
590
+ ```
591
+
592
+ **Required Summary Format:**
593
+
594
+ **Core Purpose** (provide a general overview of the file's main responsibility):
595
+ - {{1-2 sentence description of file's main responsibility}}
596
+
597
+ **Public Interface** (what other files can use, if any):
598
+ - Class {{ClassName}}: {{purpose}} | Key methods: {{method_names}} | Constructor params: {{params}}
599
+ - Function {{function_name}}({{params}}) -> {{return_type}}: {{purpose}}
600
+ - Constants/Types: {{name}}: {{value/description}}
601
+
602
+ **Internal Dependencies** (what this file imports/requires, if any):
603
+ - From {{module/file}}: {{specific_imports}}
604
+ - External packages: {{package_name}} - {{usage_context}}
605
+
606
+ **External Dependencies** (what depends on this file, if any):
607
+ - Expected to be imported by: {{likely_consumer_files}}
608
+ - Key exports used elsewhere: {{main_interfaces}}
609
+
610
+ **Implementation Notes** (if any):
611
+ - Architecture decisions: {{key_choices_made}}
612
+ - Cross-File Relationships: {{how_files_work_together}}
613
+
614
+ **Next Steps**: List the code file (ONLY ONE) that will be implemented in the next round (MUST choose from "Remaining Unimplemented Files" above)
615
+ Format: Code will be implemented: {{file_path}}
616
+ **NEVER suggest any file from the "All Previously Implemented Files" list!**
617
+
618
+ **Instructions:**
619
+ - Be precise and concise
620
+ - Focus on function interfaces that other files will need
621
+ - Extract actual function signatures from the code
622
+ - **CRITICAL: For Next Steps, ONLY choose ONE file from the "Remaining Unimplemented Files" list above**
623
+ - **NEVER suggest implementing a file that is already in the implemented files list**
624
+ - Choose the next file based on logical dependencies and implementation order
625
+ - Use the exact format specified above
626
+
627
+ **Summary:**"""
628
+
629
+ return prompt
630
+
631
+ # TODO: The prompt is not good and needs to be improved
632
+ # **Implementation Progress**: List the code file completed in current round and core implementation ideas
633
+ # Format: {{file_path}}: {{core implementation ideas}}
634
+
635
+ # **Dependencies**: According to the File Structure and initial plan, list functions that may be called by other files
636
+ # Format: {{file_path}}: Function {{function_name}}: core ideas--{{ideas}}; Required parameters--{{params}}; Return parameters--{{returns}}
637
+ # Required packages: {{packages}}
638
+
639
+ def _extract_summary_sections(self, llm_summary: str) -> Dict[str, str]:
640
+ """
641
+ Extract different sections from LLM-generated summary
642
+
643
+ Args:
644
+ llm_summary: Raw LLM-generated summary text
645
+
646
+ Returns:
647
+ Dictionary with extracted sections: core_purpose, public_interface, internal_dependencies,
648
+ external_dependencies, implementation_notes, next_steps
649
+ """
650
+ sections = {
651
+ "core_purpose": "",
652
+ "public_interface": "",
653
+ "internal_dependencies": "",
654
+ "external_dependencies": "",
655
+ "implementation_notes": "",
656
+ "next_steps": "",
657
+ }
658
+
659
+ try:
660
+ lines = llm_summary.split("\n")
661
+ current_section = None
662
+ current_content = []
663
+
664
+ # Map header phrases to their section keys
+ header_to_key = {
+ "core purpose": "core_purpose",
+ "public interface": "public_interface",
+ "internal dependencies": "internal_dependencies",
+ "external dependencies": "external_dependencies",
+ "implementation notes": "implementation_notes",
+ "next steps": "next_steps",
+ }
+
+ for line in lines:
+ line_lower = line.lower().strip()
+
+ # Check for section headers
+ matched_key = next(
+ (key for phrase, key in header_to_key.items() if phrase in line_lower),
+ None,
+ )
+ if matched_key:
+ # Save the previous section before starting the new one
+ if current_section and current_content:
+ sections[current_section] = "\n".join(current_content).strip()
+ current_section = matched_key
+ current_content = [line]  # Include the header
+ else:
+ # Add content to the current section
+ if current_section:
+ current_content.append(line)
702
+
703
+ # Don't forget the last section
704
+ if current_section and current_content:
705
+ sections[current_section] = "\n".join(current_content).strip()
706
+
707
+ self.logger.info(f"📋 Extracted sections: {[k for k, v in sections.items() if v]}")
708
+
709
+ except Exception as e:
710
+ self.logger.error(f"Failed to extract summary sections: {e}")
711
+ # Fallback: put everything in core_purpose
712
+ sections["core_purpose"] = llm_summary
713
+
714
+ return sections
715
+
716
+ def _format_code_implementation_summary(
717
+ self, file_path: str, llm_summary: str, files_implemented: int
718
+ ) -> str:
719
+ """
720
+ Format the LLM-generated summary into the final structure
721
+
722
+ Args:
723
+ file_path: Path of the implemented file
724
+ llm_summary: LLM-generated summary content
725
+ files_implemented: Number of files implemented so far
726
+
727
+ Returns:
728
+ Formatted summary
729
+ """
730
+ timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
731
+
732
+ # # Create formatted list of implemented files
733
+ # implemented_files_list = (
734
+ # "\n".join([f"- {file}" for file in self.implemented_files])
735
+ # if self.implemented_files
736
+ # else "- None yet"
737
+ # )
738
+
739
+ # formatted_summary = f"""# Code Implementation Summary
740
+ # **All Previously Implemented Files:**
741
+ # {implemented_files_list}
742
+ # **Generated**: {timestamp}
743
+ # **File Implemented**: {file_path}
744
+ # **Total Files Implemented**: {files_implemented}
745
+
746
+ # {llm_summary}
747
+
748
+ # ---
749
+ # *Auto-generated by Memory Agent*
750
+ # """
751
+ formatted_summary = f"""# Code Implementation Summary
752
+ **Generated**: {timestamp}
753
+ **File Implemented**: {file_path}
754
+
755
+ {llm_summary}
756
+
757
+ ---
758
+ *Auto-generated by Memory Agent*
759
+ """
760
+ return formatted_summary
761
+
762
+ def _create_fallback_code_summary(
763
+ self, file_path: str, implementation_content: str, files_implemented: int
764
+ ) -> str:
765
+ """
766
+ Create fallback summary when LLM is unavailable
767
+
768
+ Args:
769
+ file_path: Path of the implemented file
770
+ implementation_content: Content of the implemented file
771
+ files_implemented: Number of files implemented so far
772
+
773
+ Returns:
774
+ Fallback summary
775
+ """
776
+ # Create formatted list of implemented files
777
+ implemented_files_list = (
778
+ "\n".join([f"- {file}" for file in self.implemented_files])
779
+ if self.implemented_files
780
+ else "- None yet"
781
+ )
782
+
783
+ summary = f"""# Code Implementation Summary
784
+ **All Previously Implemented Files:**
785
+ {implemented_files_list}
786
+ **Generated**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
787
+ **File Implemented**: {file_path}
788
+ **Total Files Implemented**: {files_implemented}
789
+ **Summary generation failed; fallback entry created without LLM analysis.**
790
+
791
+ ---
792
+ *Auto-generated by Concise Memory Agent (Fallback Mode)*
793
+ """
794
+ return summary
795
+
796
+ async def _save_code_summary_to_file(self, new_summary: str, file_path: str):
797
+ """
798
+ Append code implementation summary to implement_code_summary.md
799
+ Accumulates all implementations with clear separators
800
+
801
+ Args:
802
+ new_summary: New summary content to append
803
+ file_path: Path of the file for which the summary was generated
804
+ """
805
+ try:
806
+ # Create directory if it doesn't exist
807
+ os.makedirs(os.path.dirname(self.code_summary_path), exist_ok=True)
808
+
809
+ # Check if file exists to determine if we need header
810
+ file_exists = os.path.exists(self.code_summary_path)
811
+
812
+ # Open in append mode to accumulate all implementations
813
+ with open(self.code_summary_path, "a", encoding="utf-8") as f:
814
+ if not file_exists:
815
+ # Write header for new file
816
+ f.write("# Code Implementation Progress Summary\n")
817
+ f.write("*Accumulated implementation progress for all files*\n\n")
818
+
819
+ # Add clear separator between implementations
820
+ f.write("\n" + "=" * 80 + "\n")
821
+ f.write(
822
+ f"## IMPLEMENTATION File {file_path}; ROUND {self.current_round} \n"
823
+ )
824
+ f.write("=" * 80 + "\n\n")
825
+
826
+ # Write the new summary
827
+ f.write(new_summary)
828
+ f.write("\n\n")
829
+
830
+ self.logger.info(
831
+ f"Appended LLM-based code implementation summary to: {self.code_summary_path}"
832
+ )
833
+
834
+ except Exception as e:
835
+ self.logger.error(f"Failed to save code implementation summary: {e}")
836
+
837
+ async def _call_llm_for_summary(
838
+ self, client, client_type: str, summary_messages: List[Dict]
839
+ ) -> Dict[str, Any]:
840
+ """
841
+ Call LLM for code implementation summary generation ONLY
842
+
843
+ This method is used only for creating code implementation summaries,
844
+ NOT for conversation summarization which has been removed.
845
+ """
846
+ if client_type == "anthropic":
847
+ response = await client.messages.create(
848
+ model=self.default_models["anthropic"],
849
+ system="You are an expert code implementation summarizer. Create structured summaries of implemented code files that preserve essential information about functions, dependencies, and implementation approaches.",
850
+ messages=summary_messages,
851
+ max_tokens=5000,
852
+ temperature=0.2,
853
+ )
854
+
855
+ content = ""
856
+ for block in response.content:
857
+ if block.type == "text":
858
+ content += block.text
859
+
860
+ return {"content": content}
861
+
862
+ elif client_type == "openai":
863
+ openai_messages = [
864
+ {
865
+ "role": "system",
866
+ "content": "You are an expert code implementation summarizer. Create structured summaries of implemented code files that preserve essential information about functions, dependencies, and implementation approaches.",
867
+ }
868
+ ]
869
+ openai_messages.extend(summary_messages)
870
+
871
+ # Try max_tokens and temperature first, fallback to max_completion_tokens without temperature if unsupported
872
+ try:
873
+ response = await client.chat.completions.create(
874
+ model=self.default_models["openai"],
875
+ messages=openai_messages,
876
+ max_tokens=5000,
877
+ temperature=0.2,
878
+ )
879
+ except Exception as e:
880
+ if "max_tokens" in str(e) and "max_completion_tokens" in str(e):
881
+ # Retry with max_completion_tokens and no temperature for models that require it
882
+ response = await client.chat.completions.create(
883
+ model=self.default_models["openai"],
884
+ messages=openai_messages,
885
+ max_completion_tokens=5000,
886
+ )
887
+ else:
888
+ raise
889
+
890
+ return {"content": response.choices[0].message.content or ""}
891
+
892
+ else:
893
+ raise ValueError(f"Unsupported client type: {client_type}")
894
+
895
+ def start_new_round(self, iteration: Optional[int] = None):
896
+ """Start a new dialogue round and reset tool results
897
+
898
+ Args:
899
+ iteration: Optional iteration number from workflow to sync with current_round
900
+ """
901
+ if iteration is not None:
902
+ # Sync with workflow iteration
903
+ self.current_round = iteration
904
+ # self.logger.info(f"🔄 Synced round with workflow iteration {iteration}")
905
+ else:
906
+ # Default behavior: increment round counter
907
+ self.current_round += 1
908
+ self.logger.info(f"🔄 Started new round {self.current_round}")
909
+
910
+ self.current_round_tool_results = [] # Clear previous round results
911
+ # Note: Don't reset last_write_file_detected and should_clear_memory_next here
912
+ # These flags persist across rounds until memory optimization is applied
913
+ # self.logger.info(f"🔄 Round {self.current_round} - Tool results cleared, memory flags preserved")
914
+
915
+ def record_tool_result(
916
+ self, tool_name: str, tool_input: Dict[str, Any], tool_result: Any
917
+ ):
918
+ """
919
+ Record tool result for current round and detect write_file calls
920
+
921
+ Args:
922
+ tool_name: Name of the tool called
923
+ tool_input: Input parameters for the tool
924
+ tool_result: Result returned by the tool
925
+ """
926
+ # Detect write_file calls to trigger memory clearing
927
+ if tool_name == "write_file":
928
+ self.last_write_file_detected = True
929
+ self.should_clear_memory_next = True
930
+
931
+ # self.logger.info(f"🔄 WRITE_FILE DETECTED: {file_path} - Memory will be cleared in next round")
932
+
933
+ # Only record specific tools that provide essential information
934
+ essential_tools = [
935
+ "read_code_mem", # Read code summary from implement_code_summary.md
936
+ "read_file", # Read file contents
937
+ "write_file", # Write file contents (important for tracking implementations)
938
+ "execute_python", # Execute Python code (for testing/validation)
939
+ "execute_bash", # Execute bash commands (for build/execution)
940
+ "search_code", # Search code patterns
941
+ "search_reference_code", # Search reference code (if available)
942
+ "get_file_structure", # Get file structure (for understanding project layout)
943
+ ]
944
+
945
+ if tool_name in essential_tools:
946
+ tool_record = {
947
+ "tool_name": tool_name,
948
+ "tool_input": tool_input,
949
+ "tool_result": tool_result,
950
+ "timestamp": time.time(),
951
+ }
952
+ self.current_round_tool_results.append(tool_record)
953
+ # self.logger.info(f"📊 Essential tool result recorded: {tool_name} ({len(self.current_round_tool_results)} total)")
954
+
955
+ def should_use_concise_mode(self) -> bool:
956
+ """
957
+ Check if concise memory mode should be used
958
+
959
+ Returns:
960
+ True if first file has been generated and concise mode should be active
961
+ """
962
+ return self.last_write_file_detected
963
+
964
+ def create_concise_messages(
965
+ self,
966
+ system_prompt: str,
967
+ messages: List[Dict[str, Any]],
968
+ files_implemented: int,
969
+ ) -> List[Dict[str, Any]]:
970
+ """
971
+ Create concise message list for LLM input
972
+ NEW LOGIC: Always clear after write_file, keep system_prompt + initial_plan + current round tools
973
+
974
+ Args:
975
+ system_prompt: Current system prompt
976
+ messages: Original message list
977
+ files_implemented: Number of files implemented so far
978
+
979
+ Returns:
980
+ Concise message list containing only essential information
981
+ """
982
+ if not self.last_write_file_detected:
983
+ # Before any write_file, use normal flow
984
+ self.logger.info(
985
+ "🔄 Using normal conversation flow (before any write_file)"
986
+ )
987
+ return messages
988
+
989
+ # After write_file detection, use concise approach with clean slate
990
+ self.logger.info(
991
+ f"🎯 Using CONCISE memory mode - Clear slate after write_file, Round {self.current_round}"
992
+ )
993
+
994
+ concise_messages = []
995
+
996
+ # Get formatted file lists
997
+ file_lists = self.get_formatted_files_lists()
998
+ implemented_files_list = file_lists["implemented"]
999
+
1000
+ # 1. Add initial plan message (always preserved)
1001
+ initial_plan_message = {
1002
+ "role": "user",
1003
+ "content": f"""**Task: Implement code based on the following reproduction plan**
1004
+
1005
+ **Code Reproduction Plan:**
1006
+ {self.initial_plan}
1007
+
1008
+ **Working Directory:** Current workspace
1009
+
1010
+ **All Previously Implemented Files:**
1011
+ {implemented_files_list}
1012
+
1013
+ **Current Status:** {files_implemented} files implemented
1014
+
1015
+ **Objective:** Continue implementation by analyzing dependencies and implementing the next required file according to the plan's priority order.""",
1016
+ }
1017
+
1018
+ # Append Next Steps information if available
1019
+ if self.current_next_steps.strip():
1020
+ initial_plan_message["content"] += (
1021
+ f"\n\n**Next Steps (from previous analysis):**\n{self.current_next_steps}"
1022
+ )
1023
+
1024
+ # Debug output for unimplemented files (clean format without dashes)
1025
+ unimplemented_files = self.get_unimplemented_files()
1026
+ print("✅ Unimplemented Files:")
1027
+ for file_path in unimplemented_files:
1028
+ print(file_path)
1029
+ if self.current_next_steps.strip():
1030
+ print(f"\n📋 {self.current_next_steps}")
1031
+
1032
+ concise_messages.append(initial_plan_message)
1033
+
1034
+ # 2. Add Knowledge Base
1035
+ knowledge_base_message = {
1036
+ "role": "user",
1037
+ "content": f"""**Below is the Knowledge Base of the LATEST implemented code file:**
1038
+ {self._read_code_knowledge_base()}
1039
+
1040
+ **Development Cycle - START HERE:**
1041
+
1042
+ **For NEW file implementation:**
+ 1. **Call read_code_mem(already_implemented_file_path)** to review existing implementations and dependencies - choose relevant ALREADY IMPLEMENTED file paths for reference, NOT the new file you are about to create
+ 2. Use write_file to implement the new component
+ 3. Finally, use execute_python or execute_bash for testing (if needed)
+
+ **When all files are implemented:**
+ **Use execute_python or execute_bash** to test the complete implementation""",
1049
+ }
1050
+ concise_messages.append(knowledge_base_message)
1051
+
1052
+ # 3. Add current tool results (essential information for next file generation)
1053
+ if self.current_round_tool_results:
1054
+ tool_results_content = self._format_tool_results()
1055
+
1056
+ # # Append Next Steps information if available
1057
+ # if self.current_next_steps.strip():
1058
+ # tool_results_content += f"\n\n**Next Steps (from previous analysis):**\n{self.current_next_steps}"
1059
+
1060
+ tool_results_message = {
1061
+ "role": "user",
1062
+ "content": f"""**Current Tool Results:**
1063
+ {tool_results_content}""",
1064
+ }
1065
+ concise_messages.append(tool_results_message)
1066
+ else:
1067
+ # If no tool results yet, add guidance for next steps
1068
+ guidance_content = f"""**Current Round:** {self.current_round}
1069
+
1070
+ **Development Cycle - START HERE:**
1071
+
1072
+ **For NEW file implementation:**
+ 1. **Call read_code_mem(already_implemented_file_path)** to review existing implementations and dependencies - choose relevant ALREADY IMPLEMENTED file paths for reference, NOT the new file you are about to create
+ 2. Use write_file to implement the new component
+ 3. Finally, use execute_python or execute_bash for testing (if needed)
+
+ **When all files are implemented:**
+ 1. **Use execute_python or execute_bash** to test the complete implementation"""
1079
+
1080
+ # # Append Next Steps information if available (even when no tool results)
1081
+ # if self.current_next_steps.strip():
1082
+ # guidance_content += f"\n\n**Next Steps (from previous analysis):**\n{self.current_next_steps}"
1083
+
1084
+ guidance_message = {
1085
+ "role": "user",
1086
+ "content": guidance_content,
1087
+ }
1088
+ concise_messages.append(guidance_message)
1089
+ # **Available Essential Tools:** read_code_mem, write_file, execute_python, execute_bash
1090
+ # **Remember:** Start with read_code_mem when implementing NEW files to understand existing code. When all files are implemented, focus on testing and completion. Implement according to the original paper's specifications - any reference code is for inspiration only."""
1091
+ # self.logger.info(f"✅ Concise messages created: {len(concise_messages)} messages (original: {len(messages)})")
1092
+ return concise_messages
1093
+
1094
+ def _read_code_knowledge_base(self) -> Optional[str]:
1095
+ """
1096
+ Read the implement_code_summary.md file as code knowledge base
1097
+ Returns only the final/latest implementation entry, not all historical entries
1098
+
1099
+ Returns:
1100
+ Content of the latest implementation entry if it exists, None otherwise
1101
+ """
1102
+ try:
1103
+ if os.path.exists(self.code_summary_path):
1104
+ with open(self.code_summary_path, "r", encoding="utf-8") as f:
1105
+ content = f.read().strip()
1106
+
1107
+ if content:
1108
+ # Extract only the final/latest implementation entry
1109
+ return self._extract_latest_implementation_entry(content)
1110
+ else:
1111
+ return None
1112
+ else:
1113
+ return None
1114
+
1115
+ except Exception as e:
1116
+ self.logger.error(f"Failed to read code knowledge base: {e}")
1117
+ return None
1118
+
1119
+ def _extract_latest_implementation_entry(self, content: str) -> Optional[str]:
1120
+ """
1121
+ Extract the latest/final implementation entry from the implement_code_summary.md content
1122
+ Uses a simpler approach to find the last implementation section
1123
+
1124
+ Args:
1125
+ content: Full content of implement_code_summary.md
1126
+
1127
+ Returns:
1128
+ Latest implementation entry content, or None if not found
1129
+ """
1130
+ try:
1131
+ import re
1132
+
1133
+ # Pattern to match the start of implementation sections
1134
+ section_pattern = (
1135
+ r"={80}\s*\n## IMPLEMENTATION File .+?; ROUND \d+\s*\n={80}"
1136
+ )
1137
+
1138
+ # Find all implementation section starts
1139
+ matches = list(re.finditer(section_pattern, content))
1140
+
1141
+ if not matches:
1142
+ # No implementation sections found
1143
+ lines = content.split("\n")
1144
+ fallback_content = (
1145
+ "\n".join(lines[:10]) + "\n... (truncated for brevity)"
1146
+ if len(lines) > 10
1147
+ else content
1148
+ )
1149
+ self.logger.info(
1150
+ "📖 No implementation sections found, using fallback content"
1151
+ )
1152
+ return fallback_content
1153
+
1154
+ # Get the start position of the last implementation section
1155
+ last_match = matches[-1]
1156
+ start_pos = last_match.start()
1157
+
1158
+ # Take everything from the last section start to the end of content
1159
+ latest_entry = content[start_pos:].strip()
1160
+
1161
+ # self.logger.info(f"📖 Extracted latest implementation entry from knowledge base")
1162
+ # print(f"DEBUG: Extracted content length: {len(latest_entry)}")
1163
+ # print(f"DEBUG: First 200 chars: {latest_entry[:]}")
1164
+
1165
+ return latest_entry
1166
+
1167
+ except Exception as e:
1168
+ self.logger.error(f"Failed to extract latest implementation entry: {e}")
1169
+ # Return last 1000 characters as fallback
1170
+ return content[-500:] if len(content) > 500 else content
1171
+
1172
+ def _format_tool_results(self) -> str:
1173
+ """
1174
+ Format current round tool results for LLM input
1175
+
1176
+ Returns:
1177
+ Formatted string of tool results
1178
+ """
1179
+ if not self.current_round_tool_results:
1180
+ return "No tool results in current round."
1181
+
1182
+ formatted_results = []
1183
+
1184
+ for result in self.current_round_tool_results:
1185
+ tool_name = result["tool_name"]
1186
+ tool_input = result["tool_input"]
1187
+ tool_result = result["tool_result"]
1188
+
1189
+ # Format based on tool type
1190
+ if tool_name == "read_code_mem":
1191
+ file_path = tool_input.get("file_path", "unknown")
1192
+ formatted_results.append(f"""
1193
+ **read_code_mem Result for {file_path}:**
1194
+ {self._format_tool_result_content(tool_result)}
1195
+ """)
1196
+ elif tool_name == "read_file":
1197
+ file_path = tool_input.get("file_path", "unknown")
1198
+ formatted_results.append(f"""
1199
+ **read_file Result for {file_path}:**
1200
+ {self._format_tool_result_content(tool_result)}
1201
+ """)
1202
+ elif tool_name == "write_file":
1203
+ file_path = tool_input.get("file_path", "unknown")
1204
+ formatted_results.append(f"""
1205
+ **write_file Result for {file_path}:**
1206
+ {self._format_tool_result_content(tool_result)}
1207
+ """)
1208
+ elif tool_name == "execute_python":
1209
+ code_snippet = (
1210
+ tool_input.get("code", "")[:50] + "..."
1211
+ if len(tool_input.get("code", "")) > 50
1212
+ else tool_input.get("code", "")
1213
+ )
1214
+ formatted_results.append(f"""
1215
+ **execute_python Result (code: {code_snippet}):**
1216
+ {self._format_tool_result_content(tool_result)}
1217
+ """)
1218
+ elif tool_name == "execute_bash":
1219
+ command = tool_input.get("command", "unknown")
1220
+ formatted_results.append(f"""
1221
+ **execute_bash Result (command: {command}):**
1222
+ {self._format_tool_result_content(tool_result)}
1223
+ """)
1224
+ elif tool_name == "search_code":
1225
+ pattern = tool_input.get("pattern", "unknown")
1226
+ file_pattern = tool_input.get("file_pattern", "")
1227
+ formatted_results.append(f"""
1228
+ **search_code Result (pattern: {pattern}, files: {file_pattern}):**
1229
+ {self._format_tool_result_content(tool_result)}
1230
+ """)
1231
+ elif tool_name == "search_reference_code":
1232
+ target_file = tool_input.get("target_file", "unknown")
1233
+ keywords = tool_input.get("keywords", "")
1234
+ formatted_results.append(f"""
1235
+ **search_reference_code Result for {target_file} (keywords: {keywords}):**
1236
+ {self._format_tool_result_content(tool_result)}
1237
+ """)
1238
+ elif tool_name == "get_file_structure":
1239
+ directory = tool_input.get(
1240
+ "directory_path", tool_input.get("path", "current")
1241
+ )
1242
+ formatted_results.append(f"""
1243
+ **get_file_structure Result for {directory}:**
1244
+ {self._format_tool_result_content(tool_result)}
1245
+ """)
1246
+
1247
+ return "\n".join(formatted_results)
1248
+
1249
+ def _format_tool_result_content(self, tool_result: Any) -> str:
1250
+ """
1251
+ Format tool result content for display
1252
+
1253
+ Args:
1254
+ tool_result: Tool result to format
1255
+
1256
+ Returns:
1257
+ Formatted string representation
1258
+ """
1259
+ if isinstance(tool_result, str):
1260
+ # Try to parse as JSON for better formatting
1261
+ try:
1262
+ result_data = json.loads(tool_result)
1263
+ if isinstance(result_data, dict):
1264
+ # Format key information
1265
+ if result_data.get("status") == "summary_found":
1266
+ return (
1267
+ f"Summary found:\n{result_data.get('summary_content', '')}"
1268
+ )
1269
+ elif result_data.get("status") == "no_summary":
1270
+ return "No summary available"
1271
+ else:
1272
+ return json.dumps(result_data, indent=2)
1273
+ else:
1274
+ return str(result_data)
1275
+ except json.JSONDecodeError:
1276
+ return tool_result
1277
+ else:
1278
+ return str(tool_result)
1279
+
1280
+ def get_memory_statistics(self, files_implemented: int = 0) -> Dict[str, Any]:
1281
+ """Get memory agent statistics"""
1282
+ unimplemented_files = self.get_unimplemented_files()
1283
+ return {
1284
+ "last_write_file_detected": self.last_write_file_detected,
1285
+ "should_clear_memory_next": self.should_clear_memory_next,
1286
+ "current_round": self.current_round,
1287
+ "concise_mode_active": self.should_use_concise_mode(),
1288
+ "current_round_tool_results": len(self.current_round_tool_results),
1289
+ "essential_tools_recorded": [
1290
+ r["tool_name"] for r in self.current_round_tool_results
1291
+ ],
1292
+ "implemented_files_tracked": files_implemented,
1293
+ "implemented_files_list": self.implemented_files.copy(),
1294
+ "phases_parsed": len(self.phase_structure),
1295
+ "next_steps_available": bool(self.current_next_steps.strip()),
1296
+ "next_steps_length": len(self.current_next_steps.strip())
1297
+ if self.current_next_steps
1298
+ else 0,
1299
+ # File tracking statistics
1300
+ "total_files_in_plan": len(self.all_files_list),
1301
+ "files_implemented_count": len(self.implemented_files),
1302
+ "files_remaining_count": len(unimplemented_files),
1303
+ "all_files_list": self.all_files_list.copy(),
1304
+ "unimplemented_files_list": unimplemented_files,
1305
+ "implementation_progress_percent": (
1306
+ len(self.implemented_files) / len(self.all_files_list) * 100
1307
+ )
1308
+ if self.all_files_list
1309
+ else 0,
1310
+ }
1311
+
1312
+ def get_implemented_files(self) -> List[str]:
1313
+ """Get list of all implemented files"""
1314
+ return self.implemented_files.copy()
1315
+
1316
+ def get_all_files_list(self) -> List[str]:
1317
+ """Get list of all files that should be implemented according to the plan"""
1318
+ return self.all_files_list.copy()
1319
+
1320
+    def get_unimplemented_files(self) -> List[str]:
+        """
+        Get the list of files that have not been implemented yet
+
+        Returns:
+            List of file paths that still need to be implemented
+        """
+        implemented_set = set(self.implemented_files)
+        unimplemented = [f for f in self.all_files_list if f not in implemented_set]
+        return unimplemented
+
+    def get_formatted_files_lists(self) -> Dict[str, str]:
+        """
+        Get formatted strings for implemented and unimplemented files
+
+        Returns:
+            Dictionary with 'implemented' and 'unimplemented' formatted lists
+        """
+        implemented_list = (
+            "\n".join([f"- {file}" for file in self.implemented_files])
+            if self.implemented_files
+            else "- None yet"
+        )
+
+        unimplemented_files = self.get_unimplemented_files()
+        unimplemented_list = (
+            "\n".join([f"- {file}" for file in unimplemented_files])
+            if unimplemented_files
+            else "- All files implemented!"
+        )
+
+        return {"implemented": implemented_list, "unimplemented": unimplemented_list}
+
+    def get_current_next_steps(self) -> str:
+        """Get the current Next Steps information"""
+        return self.current_next_steps
+
+    def clear_next_steps(self):
+        """Clear the stored Next Steps information"""
+        if self.current_next_steps.strip():
+            self.logger.info("🧹 Next Steps information cleared")
+        self.current_next_steps = ""
+
+    def set_next_steps(self, next_steps: str):
+        """Manually set Next Steps information"""
+        self.current_next_steps = next_steps
+        self.logger.info(
+            f"📝 Next Steps manually set ({len(next_steps.strip())} chars)"
+        )
+
+    def should_trigger_memory_optimization(
+        self, messages: List[Dict[str, Any]], files_implemented: int = 0
+    ) -> bool:
+        """
+        Check whether memory optimization should be triggered.
+        New logic: trigger after a write_file call has been detected.
+
+        Args:
+            messages: Current message list
+            files_implemented: Number of files implemented so far
+
+        Returns:
+            True if concise mode should be applied
+        """
+        # Trigger if we detected write_file and should clear memory
+        if self.should_clear_memory_next:
+            return True
+
+        # No optimization before any write_file
+        return False
+
+    def apply_memory_optimization(
+        self, system_prompt: str, messages: List[Dict[str, Any]], files_implemented: int
+    ) -> List[Dict[str, Any]]:
+        """
+        Apply memory optimization using the concise approach.
+        New logic: clear all history after write_file, keeping only
+        system_prompt + initial_plan + current round tool results.
+
+        Args:
+            system_prompt: Current system prompt
+            messages: Original message list
+            files_implemented: Number of files implemented so far
+
+        Returns:
+            Optimized message list
+        """
+        if not self.should_clear_memory_next:
+            # Before any write_file, return the original messages
+            return messages
+
+        # Apply concise memory optimization after write_file detection
+        optimized_messages = self.create_concise_messages(
+            system_prompt, messages, files_implemented
+        )
+
+        # Clear the flag after applying optimization
+        self.should_clear_memory_next = False
+
+        compression_ratio = (
+            ((len(messages) - len(optimized_messages)) / len(messages) * 100)
+            if messages
+            else 0
+        )
+        self.logger.info(
+            f"🎯 CONCISE optimization applied: {len(messages)} → {len(optimized_messages)} messages ({compression_ratio:.1f}% compression)"
+        )
+
+        return optimized_messages
+
+    def clear_current_round_tool_results(self):
+        """Clear current round tool results (called when starting a new round)"""
+        self.current_round_tool_results = []
+        self.logger.info("🧹 Current round tool results cleared")
+
+    def debug_concise_state(self, files_implemented: int = 0):
+        """Debug method to show the current concise memory state"""
+        stats = self.get_memory_statistics(files_implemented)
+
+        print("=" * 60)
+        print("🎯 CONCISE MEMORY AGENT STATE (Write-File-Based)")
+        print("=" * 60)
+        print(f"Last write_file detected: {stats['last_write_file_detected']}")
+        print(f"Should clear memory next: {stats['should_clear_memory_next']}")
+        print(f"Files implemented: {stats['implemented_files_tracked']}")
+        print(f"Current round: {stats['current_round']}")
+        print(f"Concise mode active: {stats['concise_mode_active']}")
+        print(f"Current round tool results: {stats['current_round_tool_results']}")
+        print(f"Essential tools recorded: {stats['essential_tools_recorded']}")
+        print(f"Implemented files tracked: {len(self.implemented_files)}")
+        print(f"Implemented files list: {self.implemented_files}")
+        print(f"Code summary file exists: {os.path.exists(self.code_summary_path)}")
+        print(f"Next Steps available: {stats['next_steps_available']}")
+        print(f"Next Steps length: {stats['next_steps_length']} chars")
+        if self.current_next_steps.strip():
+            print(f"Next Steps preview: {self.current_next_steps[:100]}...")
+        print("")
+        print("📋 FILE TRACKING:")
+        print(f"   Total files in plan: {stats['total_files_in_plan']}")
+        print(f"   Files implemented: {stats['files_implemented_count']}")
+        print(f"   Files remaining: {stats['files_remaining_count']}")
+        print(f"   Progress: {stats['implementation_progress_percent']:.1f}%")
+        if stats["unimplemented_files_list"]:
+            print(f"   Next possible files: {stats['unimplemented_files_list'][:3]}...")
+        print("")
+        print(
+            "📊 NEW LOGIC: write_file → clear memory → accumulate tools → next write_file"
+        )
+        print("📊 NEXT STEPS: Stored separately from file, included in tool results")
+        print(
+            "📊 FILE TRACKING: All files extracted from plan, unimplemented files guide LLM decisions"
+        )
+        print("📊 Essential Tools Tracked:")
+        essential_tools = [
+            "read_code_mem",
+            "read_file",
+            "write_file",
+            "execute_python",
+            "execute_bash",
+            "search_code",
+            "search_reference_code",
+            "get_file_structure",
+        ]
+        for tool in essential_tools:
+            tool_count = sum(
+                1 for r in self.current_round_tool_results if r["tool_name"] == tool
+            )
+            print(f"   - {tool}: {tool_count} calls")
+        print("=" * 60)
projects/ui/DeepCode/workflows/agents/memory_agent_concise_index.py ADDED
@@ -0,0 +1,1491 @@
+"""
+Concise Memory Agent for Code Implementation Workflow
+
+This memory agent implements a focused approach:
+1. Before the first file: normal conversation flow
+2. After the first file: keep only system_prompt + initial_plan + current round tool results
+3. Clean slate for each new code file generation
+
+Key Features:
+- Always preserves the system prompt and initial plan
+- After the first file generation, discards previous conversation history
+- Keeps only current round tool results from essential tools:
+  * read_code_mem, read_file, write_file
+  * execute_python, execute_bash
+  * search_code, search_reference_code, get_file_structure
+- Provides clean, focused input for the next write_file operation
+"""
+
+import json
+import logging
+import os
+import time
+from datetime import datetime
+from typing import Dict, Any, List, Optional
+
+
+class ConciseMemoryAgent:
+    """
+    Concise Memory Agent - Focused Information Retention
+
+    Core Philosophy:
+    - Preserve essential context (system prompt + initial plan)
+    - After the first file generation, use a clean-slate approach
+    - Keep only current round tool results from all essential MCP tools
+    - Remove conversational clutter and previous tool calls
+
+    Essential Tools Tracked:
+    - File Operations: read_code_mem, read_file, write_file
+    - Code Analysis: search_code, search_reference_code, get_file_structure
+    - Execution: execute_python, execute_bash
+    """
+
+    def __init__(
+        self,
+        initial_plan_content: str,
+        logger: Optional[logging.Logger] = None,
+        target_directory: Optional[str] = None,
+        default_models: Optional[Dict[str, str]] = None,
+    ):
+        """
+        Initialize the Concise Memory Agent
+
+        Args:
+            initial_plan_content: Content of initial_plan.txt
+            logger: Logger instance
+            target_directory: Target directory for saving summaries
+            default_models: Default models configuration from the workflow
+        """
+        self.logger = logger or self._create_default_logger()
+        self.initial_plan = initial_plan_content
+
+        # Store default models configuration
+        self.default_models = default_models or {
+            "anthropic": "claude-sonnet-4-20250514",
+            "openai": "gpt-4o",
+        }
+
+        # Memory state tracking - new logic: trigger after each write_file
+        self.last_write_file_detected = (
+            False  # Track whether write_file was called in the current iteration
+        )
+        self.should_clear_memory_next = False  # Flag to clear memory in the next round
+        self.current_round = 0
+
+        # Parse phase structure from the initial plan
+        self.phase_structure = self._parse_phase_structure()
+
+        # Extract all files from the file structure in the initial plan
+        self.all_files_list = self._extract_all_files_from_plan()
+
+        # Memory configuration
+        if target_directory:
+            self.save_path = target_directory
+        else:
+            self.save_path = "./deepcode_lab/papers/1/"
+
+        # Code summary file path
+        self.code_summary_path = os.path.join(
+            self.save_path, "implement_code_summary.md"
+        )
+
+        # Current round tool results storage
+        self.current_round_tool_results = []
+
+        # Track all implemented files
+        self.implemented_files = []
+
+        # Store Next Steps information temporarily (not saved to file)
+        self.current_next_steps = ""
+
+        self.logger.info(
+            f"Concise Memory Agent initialized with target directory: {self.save_path}"
+        )
+        self.logger.info(f"Code summary will be saved to: {self.code_summary_path}")
+        self.logger.info(
+            "📝 NEW LOGIC: Memory clearing triggered after each write_file call"
+        )
+
+    def _create_default_logger(self) -> logging.Logger:
+        """Create a default logger"""
+        logger = logging.getLogger(f"{__name__}.ConciseMemoryAgent")
+        logger.setLevel(logging.INFO)
+        return logger
+
+    def _parse_phase_structure(self) -> Dict[str, List[str]]:
+        """Parse implementation phases from the initial plan"""
+        try:
+            phases = {}
+            lines = self.initial_plan.split("\n")
+            current_phase = None
+
+            for line in lines:
+                if "Phase" in line and ":" in line:
+                    # Extract the phase name
+                    phase_parts = line.split(":")
+                    if len(phase_parts) >= 2:
+                        current_phase = phase_parts[0].strip()
+                        phases[current_phase] = []
+                elif current_phase and line.strip().startswith("-"):
+                    # This is a file in the current phase
+                    file_line = line.strip()[1:].strip()
+                    if file_line.startswith("`") and file_line.endswith("`"):
+                        file_name = file_line[1:-1]
+                        phases[current_phase].append(file_name)
+                elif current_phase and not line.strip():
+                    # An empty line might indicate the end of a phase
+                    continue
+                elif current_phase and line.strip().startswith("###"):
+                    # New section, end the current phase
+                    current_phase = None
+
+            return phases
+
+        except Exception as e:
+            self.logger.warning(f"Failed to parse phase structure: {e}")
+            return {}
+
+    def _extract_all_files_from_plan(self) -> List[str]:
+        """
+        Extract all file paths from the file_structure section in the initial plan.
+        Handles multiple formats: tree structure, YAML, and simple lists.
+
+        Returns:
+            List of all file paths that should be implemented
+        """
+        try:
+            lines = self.initial_plan.split("\n")
+            files = []
+
+            # Method 1: Try to extract from the tree structure in the file_structure section
+            files.extend(self._extract_from_tree_structure(lines))
+
+            # Method 2: If no files were found, try a simple list format
+            if not files:
+                files.extend(self._extract_from_simple_list(lines))
+
+            # Method 3: If still no files, try to extract from anywhere in the plan
+            if not files:
+                files.extend(self._extract_from_plan_content(lines))
+
+            # Clean and validate the file paths
+            cleaned_files = self._clean_and_validate_files(files)
+
+            # Log the extracted files
+            self.logger.info(
+                f"📁 Extracted {len(cleaned_files)} files from initial plan"
+            )
+            if cleaned_files:
+                self.logger.info(f"📁 Sample files: {cleaned_files[:3]}...")
+
+            return cleaned_files
+
+        except Exception as e:
+            self.logger.error(f"Failed to extract files from initial plan: {e}")
+            return []
+
+    def _extract_from_tree_structure(self, lines: List[str]) -> List[str]:
+        """Extract files from tree-structure format - only from the file_structure section"""
+        files = []
+        in_file_structure = False
+        path_stack = []
+
+        for line in lines:
+            # Check whether we're in the file_structure section
+            if "file_structure:" in line or "file_structure |" in line:
+                in_file_structure = True
+                continue
+            # Check for the end of the file_structure section (next YAML key)
+            elif (
+                in_file_structure
+                and line.strip()
+                and not line.startswith(" ")
+                and ":" in line
+            ):
+                # This looks like a new YAML section, stop parsing
+                break
+            elif not in_file_structure:
+                continue
+
+            if not line.strip():
+                continue
+
+            # Skip lines that look like YAML keys (contain ":" but not file paths)
+            if ":" in line and not ("." in line and "/" in line):
+                continue
+
+            stripped_line = line.strip()
+
+            # Detect the root directory (directory name ending with / at minimal indentation)
+            if (
+                stripped_line.endswith("/")
+                and len(line) - len(line.lstrip()) <= 4  # Minimal indentation (0-4 spaces)
+                and not any(char in line for char in ["├", "└", "│", "─"])
+            ):  # No tree characters
+                root_directory = stripped_line.rstrip("/")
+                path_stack = [root_directory]
+                continue
+
+            # Only process lines that have tree structure
+            if not any(char in line for char in ["├", "└", "│", "─"]):
+                continue
+
+            # Parse tree-structure depth by analyzing the line structure:
+            # count │ characters before the actual item, or use indentation as a fallback
+            pipe_count = 0
+
+            for i, char in enumerate(line):
+                if char == "│":
+                    pipe_count += 1
+                elif char in ["├", "└"]:
+                    break
+
+            # Calculate depth: use the pipe count if available, otherwise use indentation
+            if pipe_count > 0:
+                depth = pipe_count + 1  # +1 because the actual item is one level deeper
+            else:
+                # Use indentation to determine depth (every 4 spaces = 1 level)
+                indent_spaces = len(line) - len(line.lstrip())
+                depth = max(1, indent_spaces // 4)  # At least depth 1
+
+            # Clean the line to get the item name
+            clean_line = line
+            for char in ["├──", "└──", "├", "└", "│", "─"]:
+                clean_line = clean_line.replace(char, "")
+            clean_line = clean_line.strip()
+
+            if not clean_line or ":" in clean_line:
+                continue
+
+            # Extract the filename (remove comments)
+            if "#" in clean_line:
+                filename = clean_line.split("#")[0].strip()
+            else:
+                filename = clean_line.strip()
+
+            # Skip empty filenames
+            if not filename:
+                continue
+
+            # Adjust the path stack to the current depth
+            while len(path_stack) < depth:
+                path_stack.append("")
+            path_stack = path_stack[:depth]
+
+            # Determine whether it's a directory or a file
+            is_directory = (
+                filename.endswith("/")
+                or (
+                    "." not in filename
+                    and filename not in ["README", "requirements.txt", "setup.py"]
+                )
+                or filename
+                in [
+                    "core",
+                    "networks",
+                    "environments",
+                    "baselines",
+                    "evaluation",
+                    "experiments",
+                    "utils",
+                    "src",
+                    "lib",
+                    "app",
+                ]
+            )
+
+            if is_directory:
+                directory_name = filename.rstrip("/")
+                if directory_name and ":" not in directory_name:
+                    path_stack.append(directory_name)
+            else:
+                # It's a file; construct the full path
+                if path_stack:
+                    full_path = "/".join(path_stack) + "/" + filename
+                else:
+                    full_path = filename
+                files.append(full_path)
+
+        return files
+
+    def _extract_from_simple_list(self, lines: List[str]) -> List[str]:
+        """Extract files from a simple list format (- filename)"""
+        files = []
+
+        for line in lines:
+            line = line.strip()
+            if line.startswith("- ") and not line.startswith('- "'):
+                # Remove the leading "- " and clean up
+                filename = line[2:].strip()
+
+                # Remove quotes if present
+                if filename.startswith('"') and filename.endswith('"'):
+                    filename = filename[1:-1]
+
+                # Check whether it looks like a file (has an extension)
+                if "." in filename and "/" in filename:
+                    files.append(filename)
+
+        return files
+
+    def _extract_from_plan_content(self, lines: List[str]) -> List[str]:
+        """Extract files from anywhere in the plan content"""
+        import re
+
+        files = []
+
+        # Look for common file path patterns
+        file_patterns = [
+            r"([a-zA-Z0-9_\-/]+\.[a-zA-Z0-9]+)",  # filename.ext
+            r'"([a-zA-Z0-9_\-/]+\.[a-zA-Z0-9]+)"',  # "filename.ext"
+        ]
+
+        code_extensions = [
+            ".py", ".js", ".html", ".css", ".md", ".txt", ".json", ".yaml",
+            ".yml", ".xml", ".sql", ".sh", ".ts", ".jsx", ".tsx",
+        ]
+
+        for line in lines:
+            for pattern in file_patterns:
+                for match in re.findall(pattern, line):
+                    # Only include if it looks like a code file (exclude media files)
+                    if "/" in match and any(ext in match for ext in code_extensions):
+                        files.append(match)
+
+        return files
+
+    def _clean_and_validate_files(self, files: List[str]) -> List[str]:
+        """Clean and validate extracted file paths - only keep code files"""
+        cleaned_files = []
+
+        # Code file extensions we want to track
+        code_extensions = [
+            ".py", ".js", ".html", ".css", ".md", ".txt", ".json", ".yaml",
+            ".yml", ".xml", ".sql", ".sh", ".bat", ".dockerfile", ".env",
+            ".gitignore", ".ts", ".jsx", ".tsx", ".vue", ".php", ".rb",
+            ".go", ".rs", ".cpp", ".c", ".h", ".hpp", ".java", ".kt",
+            ".swift", ".dart",
+        ]
+
+        for file_path in files:
+            # Clean the path
+            cleaned_path = file_path.strip().strip('"').strip("'")
+
+            # Skip if empty
+            if not cleaned_path:
+                continue
+
+            # Skip directories (no file extension)
+            if "." not in cleaned_path.split("/")[-1]:
+                continue
+
+            # Only include files with code extensions
+            if not any(cleaned_path.lower().endswith(ext) for ext in code_extensions):
+                continue
+
+            # Skip entries that look like YAML keys or config entries rather than paths
+            if (
+                ":" in cleaned_path
+                and not cleaned_path.endswith(".yaml")
+                and not cleaned_path.endswith(".yml")
+            ):
+                continue
+
+            # Skip paths that contain characters invalid in file paths
+            if any(invalid_char in cleaned_path for invalid_char in ['"', "'", "|"]):
+                continue
+
+            # Add to the cleaned list if not already present
+            if cleaned_path not in cleaned_files:
+                cleaned_files.append(cleaned_path)
+
+        return sorted(cleaned_files)
+
+    def record_file_implementation(
+        self, file_path: str, implementation_content: str = ""
+    ):
+        """
+        Record a newly implemented file (simplified version).
+        New logic: file implementation is tracked via write_file tool detection.
+
+        Args:
+            file_path: Path of the implemented file
+            implementation_content: Content of the implemented file
+        """
+        # Add the file to the implemented files list if not already present
+        if file_path not in self.implemented_files:
+            self.implemented_files.append(file_path)
+
+        self.logger.info(f"📝 File implementation recorded: {file_path}")
+
+    async def create_code_implementation_summary(
+        self,
+        client,
+        client_type: str,
+        file_path: str,
+        implementation_content: str,
+        files_implemented: int,
+    ) -> str:
+        """
+        Create an LLM-based code implementation summary after writing a file.
+        Uses the LLM to analyze and summarize the implemented code.
+
+        Args:
+            client: LLM client instance
+            client_type: Type of LLM client ("anthropic" or "openai")
+            file_path: Path of the implemented file
+            implementation_content: Content of the implemented file
+            files_implemented: Number of files implemented so far
+
+        Returns:
+            LLM-generated formatted code implementation summary
+        """
+        try:
+            # Record the file implementation first
+            self.record_file_implementation(file_path, implementation_content)
+
+            # Create the prompt for the LLM summary
+            summary_prompt = self._create_code_summary_prompt(
+                file_path, implementation_content, files_implemented
+            )
+            summary_messages = [{"role": "user", "content": summary_prompt}]
+
+            # Get the LLM-generated summary
+            llm_response = await self._call_llm_for_summary(
+                client, client_type, summary_messages
+            )
+            llm_summary = llm_response.get("content", "")
+
+            # Extract the different sections from the LLM summary
+            sections = self._extract_summary_sections(llm_summary)
+
+            # Store Next Steps in a temporary variable (not saved to file)
+            self.current_next_steps = sections.get("next_steps", "")
+            if self.current_next_steps:
+                self.logger.info("📝 Next Steps stored temporarily (not saved to file)")
+
+            # Assemble only the sections intended for file saving
+            file_summary_content = ""
+            if sections.get("core_purpose"):
+                file_summary_content += sections["core_purpose"] + "\n\n"
+            if sections.get("public_interface"):
+                file_summary_content += sections["public_interface"] + "\n\n"
+            if sections.get("internal_dependencies"):
+                file_summary_content += sections["internal_dependencies"] + "\n\n"
+            if sections.get("external_dependencies"):
+                file_summary_content += sections["external_dependencies"] + "\n\n"
+            if sections.get("implementation_notes"):
+                file_summary_content += sections["implementation_notes"] + "\n\n"
+
+            # Create the formatted summary for file saving (without Next Steps)
+            formatted_summary = self._format_code_implementation_summary(
+                file_path, file_summary_content.strip(), files_implemented
+            )
+
+            # Save to implement_code_summary.md (append mode)
+            await self._save_code_summary_to_file(formatted_summary, file_path)
+
+            self.logger.info(f"Created and saved code summary for: {file_path}")
+            return formatted_summary
+
+        except Exception as e:
+            self.logger.error(
+                f"Failed to create LLM-based code implementation summary: {e}"
+            )
+            # Fall back to a simple summary
+            return self._create_fallback_code_summary(
+                file_path, implementation_content, files_implemented
+            )
+
+    def _create_code_summary_prompt(
+        self, file_path: str, implementation_content: str, files_implemented: int
+    ) -> str:
+        """
+        Create the prompt for the LLM to generate a code implementation summary
+
+        Args:
+            file_path: Path of the implemented file
+            implementation_content: Content of the implemented file
+            files_implemented: Number of files implemented so far
+
+        Returns:
+            Prompt for LLM summarization
+        """
+        current_round = self.current_round
+
+        # Get the formatted file lists
+        file_lists = self.get_formatted_files_lists()
+        implemented_files_list = file_lists["implemented"]
+        unimplemented_files_list = file_lists["unimplemented"]
+
+        prompt = f"""You are an expert code implementation summarizer. Analyze the implemented code file and create a structured summary.
+
+**🚨 CRITICAL: The files listed below are ALREADY IMPLEMENTED - DO NOT suggest them in Next Steps! 🚨**
+
+**All Previously Implemented Files:**
+{implemented_files_list}
+
+**Remaining Unimplemented Files (choose ONLY from these for Next Steps):**
+{unimplemented_files_list}
+
+**Current Implementation Context:**
+- **File Implemented**: {file_path}
+- **Current Round**: {current_round}
+- **Total Files Implemented**: {files_implemented}
+
+**Initial Plan Reference:**
+{self.initial_plan}
+
+**Implemented Code Content:**
+```
+{implementation_content}
+```
+
+**Required Summary Format:**
+
+**Core Purpose** (provide a general overview of the file's main responsibility):
+- {{1-2 sentence description of file's main responsibility}}
+
+**Public Interface** (what other files can use, if any):
+- Class {{ClassName}}: {{purpose}} | Key methods: {{method_names}} | Constructor params: {{params}}
+- Function {{function_name}}({{params}}): {{purpose}} -> {{return_type}}
+- Constants/Types: {{name}}: {{value/description}}
+
+**Internal Dependencies** (what this file imports/requires, if any):
+- From {{module/file}}: {{specific_imports}}
+- External packages: {{package_name}} - {{usage_context}}
+
+**External Dependencies** (what depends on this file, if any):
+- Expected to be imported by: {{likely_consumer_files}}
+- Key exports used elsewhere: {{main_interfaces}}
+
+**Implementation Notes** (if any):
+- Architecture decisions: {{key_choices_made}}
+- Cross-File Relationships: {{how_files_work_together}}
+
+**Next Steps**: List the code file (ONLY ONE) that will be implemented in the next round (MUST choose from "Remaining Unimplemented Files" above)
+Format: Code will be implemented: {{file_path}}
+**NEVER suggest any file from the "All Previously Implemented Files" list!**
+
+**Instructions:**
+- Be precise and concise
+- Focus on function interfaces that other files will need
+- Extract actual function signatures from the code
+- **CRITICAL: For Next Steps, ONLY choose ONE file from the "Remaining Unimplemented Files" list above**
+- **NEVER suggest implementing a file that is already in the implemented files list**
+- Choose the next file based on logical dependencies and implementation order
+- Use the exact format specified above
+
+**Summary:**"""
+
+        return prompt
+
+    # TODO: The prompt is not good and needs to be improved
+    # **Implementation Progress**: List the code file completed in the current round and core implementation ideas
+    # Format: {{file_path}}: {{core implementation ideas}}
+
+    # **Dependencies**: According to the File Structure and initial plan, list functions that may be called by other files
+    # Format: {{file_path}}: Function {{function_name}}: core ideas--{{ideas}}; Required parameters--{{params}}; Return parameters--{{returns}}
+    # Required packages: {{packages}}
+
+ def _extract_summary_sections(self, llm_summary: str) -> Dict[str, str]:
640
+ """
641
+ Extract different sections from LLM-generated summary
642
+
643
+ Args:
644
+ llm_summary: Raw LLM-generated summary text
645
+
646
+ Returns:
647
+ Dictionary with extracted sections: core_purpose, public_interface, internal_dependencies,
648
+ external_dependencies, implementation_notes, next_steps
649
+ """
650
+ sections = {
651
+ "core_purpose": "",
652
+ "public_interface": "",
653
+ "internal_dependencies": "",
654
+ "external_dependencies": "",
655
+ "implementation_notes": "",
656
+ "next_steps": "",
657
+ }
658
+
659
+ try:
660
+ lines = llm_summary.split("\n")
661
+ current_section = None
662
+ current_content = []
663
+
664
+ for line in lines:
665
+ line_lower = line.lower().strip()
666
+
667
+ # Check for section headers
668
+ if "core purpose" in line_lower:
669
+ if current_section and current_content:
670
+ sections[current_section] = "\n".join(current_content).strip()
671
+ current_section = "core_purpose"
672
+ current_content = [line] # Include the header
673
+ elif "public interface" in line_lower:
674
+ if current_section and current_content:
675
+ sections[current_section] = "\n".join(current_content).strip()
676
+ current_section = "public_interface"
677
+ current_content = [line] # Include the header
678
+ elif "internal dependencies" in line_lower:
679
+ if current_section and current_content:
680
+ sections[current_section] = "\n".join(current_content).strip()
681
+ current_section = "internal_dependencies"
682
+ current_content = [line] # Include the header
683
+ elif "external dependencies" in line_lower:
684
+ if current_section and current_content:
685
+ sections[current_section] = "\n".join(current_content).strip()
686
+ current_section = "external_dependencies"
687
+ current_content = [line] # Include the header
688
+ elif "implementation notes" in line_lower:
689
+ if current_section and current_content:
690
+ sections[current_section] = "\n".join(current_content).strip()
691
+ current_section = "implementation_notes"
692
+ current_content = [line] # Include the header
693
+ elif "next steps" in line_lower:
694
+ if current_section and current_content:
695
+ sections[current_section] = "\n".join(current_content).strip()
696
+ current_section = "next_steps"
697
+ current_content = [line] # Include the header
698
+ else:
699
+ # Add content to current section
700
+ if current_section:
701
+ current_content.append(line)
702
+
703
+ # Don't forget the last section
704
+ if current_section and current_content:
705
+ sections[current_section] = "\n".join(current_content).strip()
706
+
707
+ self.logger.info(f"📋 Extracted sections: {list(sections.keys())}")
708
+
709
+ except Exception as e:
710
+ self.logger.error(f"Failed to extract summary sections: {e}")
711
+ # Fallback: put everything in core_purpose
712
+ sections["core_purpose"] = llm_summary
713
+
714
+ return sections
715
+
716
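The header-scan parsing above can be reduced to a small standalone function. A minimal sketch, assuming illustrative names (a subset of the real header set; `split_sections` is not part of the agent):

```python
# Header phrases mapped to canonical section keys (illustrative subset).
HEADER_MAP = {
    "core purpose": "core_purpose",
    "public interface": "public_interface",
    "next steps": "next_steps",
}

def split_sections(text: str) -> dict:
    """Split a summary into sections by case-insensitive header match."""
    sections, current, buf = {}, None, []
    for line in text.split("\n"):
        low = line.lower().strip()
        key = next((k for phrase, k in HEADER_MAP.items() if phrase in low), None)
        if key:
            if current and buf:
                sections[current] = "\n".join(buf).strip()
            current, buf = key, [line]  # include the header line
        elif current:
            buf.append(line)
    if current and buf:  # flush the final section
        sections[current] = "\n".join(buf).strip()
    return sections
```

Lines before the first recognized header are discarded, matching the behavior of the method above.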
+ def _format_code_implementation_summary(
717
+ self, file_path: str, llm_summary: str, files_implemented: int
718
+ ) -> str:
719
+ """
720
+ Format the LLM-generated summary into the final structure
721
+
722
+ Args:
723
+ file_path: Path of the implemented file
724
+ llm_summary: LLM-generated summary content
725
+ files_implemented: Number of files implemented so far
726
+
727
+ Returns:
728
+ Formatted summary
729
+ """
730
+ timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
731
+
732
+ # # Create formatted list of implemented files
733
+ # implemented_files_list = (
734
+ # "\n".join([f"- {file}" for file in self.implemented_files])
735
+ # if self.implemented_files
736
+ # else "- None yet"
737
+ # )
738
+
739
+ # formatted_summary = f"""# Code Implementation Summary
740
+ # **All Previously Implemented Files:**
741
+ # {implemented_files_list}
742
+ # **Generated**: {timestamp}
743
+ # **File Implemented**: {file_path}
744
+ # **Total Files Implemented**: {files_implemented}
745
+
746
+ # {llm_summary}
747
+
748
+ # ---
749
+ # *Auto-generated by Memory Agent*
750
+ # """
751
+ formatted_summary = f"""# Code Implementation Summary
752
+ **Generated**: {timestamp}
753
+ **File Implemented**: {file_path}
754
+
755
+ {llm_summary}
756
+
757
+ ---
758
+ *Auto-generated by Memory Agent*
759
+ """
760
+ return formatted_summary
761
+
762
+ def _create_fallback_code_summary(
763
+ self, file_path: str, implementation_content: str, files_implemented: int
764
+ ) -> str:
765
+ """
766
+ Create fallback summary when LLM is unavailable
767
+
768
+ Args:
769
+ file_path: Path of the implemented file
770
+ implementation_content: Content of the implemented file
771
+ files_implemented: Number of files implemented so far
772
+
773
+ Returns:
774
+ Fallback summary
775
+ """
776
+ # Create formatted list of implemented files
777
+ implemented_files_list = (
778
+ "\n".join([f"- {file}" for file in self.implemented_files])
779
+ if self.implemented_files
780
+ else "- None yet"
781
+ )
782
+
783
+ summary = f"""# Code Implementation Summary
784
+ **All Previously Implemented Files:**
785
+ {implemented_files_list}
786
+ **Generated**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
787
+ **File Implemented**: {file_path}
788
+ **Total Files Implemented**: {files_implemented}
789
+ **Summary failed to generate.**
790
+
791
+ ---
792
+ *Auto-generated by Concise Memory Agent (Fallback Mode)*
793
+ """
794
+ return summary
795
+
796
+ async def _save_code_summary_to_file(self, new_summary: str, file_path: str):
797
+ """
798
+ Append code implementation summary to implement_code_summary.md
799
+ Accumulates all implementations with clear separators
800
+
801
+ Args:
802
+ new_summary: New summary content to append
803
+ file_path: Path of the file for which the summary was generated
804
+ """
805
+ try:
806
+ # Create directory if it doesn't exist
807
+ os.makedirs(os.path.dirname(self.code_summary_path), exist_ok=True)
808
+
809
+ # Check if file exists to determine if we need header
810
+ file_exists = os.path.exists(self.code_summary_path)
811
+
812
+ # Open in append mode to accumulate all implementations
813
+ with open(self.code_summary_path, "a", encoding="utf-8") as f:
814
+ if not file_exists:
815
+ # Write header for new file
816
+ f.write("# Code Implementation Progress Summary\n")
817
+ f.write("*Accumulated implementation progress for all files*\n\n")
818
+
819
+ # Add clear separator between implementations
820
+ f.write("\n" + "=" * 80 + "\n")
821
+ f.write(
822
+ f"## IMPLEMENTATION File {file_path}; ROUND {self.current_round} \n"
823
+ )
824
+ f.write("=" * 80 + "\n\n")
825
+
826
+ # Write the new summary
827
+ f.write(new_summary)
828
+ f.write("\n\n")
829
+
830
+ self.logger.info(
831
+ f"Appended LLM-based code implementation summary to: {self.code_summary_path}"
832
+ )
833
+
834
+ except Exception as e:
835
+ self.logger.error(f"Failed to save code implementation summary: {e}")
836
+
837
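The append-mode logic above follows a common "header only on first creation" pattern. A minimal sketch under assumed names (`append_entry` is a hypothetical helper, not part of the agent):

```python
import os

def append_entry(path: str, entry: str, header: str = "# Progress Summary\n\n") -> None:
    """Append an entry to an accumulating log, writing the header only once."""
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)
    is_new = not os.path.exists(path)
    with open(path, "a", encoding="utf-8") as f:
        if is_new:
            f.write(header)
        # Separator line keeps entries easy to split apart later
        f.write("\n" + "=" * 80 + "\n" + entry + "\n")
```

Because the file is only ever appended to, earlier summaries survive across rounds and can later be scanned for the latest entry.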
+ async def _call_llm_for_summary(
838
+ self, client, client_type: str, summary_messages: List[Dict]
839
+ ) -> Dict[str, Any]:
840
+ """
841
+ Call LLM for code implementation summary generation ONLY
842
+
843
+ This method is used only for creating code implementation summaries,
844
+ NOT for conversation summarization which has been removed.
845
+ """
846
+ if client_type == "anthropic":
847
+ response = await client.messages.create(
848
+ model=self.default_models["anthropic"],
849
+ system="You are an expert code implementation summarizer. Create structured summaries of implemented code files that preserve essential information about functions, dependencies, and implementation approaches.",
850
+ messages=summary_messages,
851
+ max_tokens=5000,
852
+ temperature=0.2,
853
+ )
854
+
855
+ content = ""
856
+ for block in response.content:
857
+ if block.type == "text":
858
+ content += block.text
859
+
860
+ return {"content": content}
861
+
862
+ elif client_type == "openai":
863
+ openai_messages = [
864
+ {
865
+ "role": "system",
866
+ "content": "You are an expert code implementation summarizer. Create structured summaries of implemented code files that preserve essential information about functions, dependencies, and implementation approaches.",
867
+ }
868
+ ]
869
+ openai_messages.extend(summary_messages)
870
+
871
+ # Try max_tokens and temperature first, fallback to max_completion_tokens without temperature if unsupported
872
+ try:
873
+ response = await client.chat.completions.create(
874
+ model=self.default_models["openai"],
875
+ messages=openai_messages,
876
+ max_tokens=5000,
877
+ temperature=0.2,
878
+ )
879
+ except Exception as e:
880
+ if "max_tokens" in str(e) and "max_completion_tokens" in str(e):
881
+ # Retry with max_completion_tokens and no temperature for models that require it
882
+ response = await client.chat.completions.create(
883
+ model=self.default_models["openai"],
884
+ messages=openai_messages,
885
+ max_completion_tokens=5000,
886
+ )
887
+ else:
888
+ raise
889
+
890
+ return {"content": response.choices[0].message.content or ""}
891
+
892
+ else:
893
+ raise ValueError(f"Unsupported client type: {client_type}")
894
+
895
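The `max_tokens` → `max_completion_tokens` retry above generalizes to a small wrapper. A sketch, where `create` stands in for any async completion call (the wrapper itself is hypothetical, not part of the agent):

```python
async def call_with_param_fallback(create, **kwargs):
    """Call an async completion API; if the model rejects max_tokens in favor of
    max_completion_tokens, retry with the renamed parameter and no temperature."""
    try:
        return await create(**kwargs)
    except Exception as e:
        if "max_tokens" in str(e) and "max_completion_tokens" in str(e):
            kwargs["max_completion_tokens"] = kwargs.pop("max_tokens")
            kwargs.pop("temperature", None)  # some models reject custom temperature too
            return await create(**kwargs)
        raise
```

Inspecting the error string is brittle but mirrors the approach taken above; any error that does not mention both parameter names is re-raised untouched.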
+ def start_new_round(self, iteration: Optional[int] = None):
896
+ """Start a new dialogue round and reset tool results
897
+
898
+ Args:
899
+ iteration: Optional iteration number from workflow to sync with current_round
900
+ """
901
+ if iteration is not None:
902
+ # Sync with workflow iteration
903
+ self.current_round = iteration
904
+ # self.logger.info(f"🔄 Synced round with workflow iteration {iteration}")
905
+ else:
906
+ # Default behavior: increment round counter
907
+ self.current_round += 1
908
+ self.logger.info(f"🔄 Started new round {self.current_round}")
909
+
910
+ self.current_round_tool_results = [] # Clear previous round results
911
+ # Note: Don't reset last_write_file_detected and should_clear_memory_next here
912
+ # These flags persist across rounds until memory optimization is applied
913
+ # self.logger.info(f"🔄 Round {self.current_round} - Tool results cleared, memory flags preserved")
914
+
915
+ def record_tool_result(
916
+ self, tool_name: str, tool_input: Dict[str, Any], tool_result: Any
917
+ ):
918
+ """
919
+ Record tool result for current round and detect write_file calls
920
+
921
+ Args:
922
+ tool_name: Name of the tool called
923
+ tool_input: Input parameters for the tool
924
+ tool_result: Result returned by the tool
925
+ """
926
+ # Detect write_file calls to trigger memory clearing
927
+ if tool_name == "write_file":
928
+ self.last_write_file_detected = True
929
+ self.should_clear_memory_next = True
930
+
931
+ # self.logger.info(f"🔄 WRITE_FILE DETECTED: {file_path} - Memory will be cleared in next round")
932
+
933
+ # Only record specific tools that provide essential information
934
+ essential_tools = [
935
+ "read_code_mem", # Read code summary from implement_code_summary.md
936
+ "read_file", # Read file contents
937
+ "write_file", # Write file contents (important for tracking implementations)
938
+ "execute_python", # Execute Python code (for testing/validation)
939
+ "execute_bash", # Execute bash commands (for build/execution)
940
+ "search_code", # Search code patterns
941
+ "search_reference_code", # Search reference code (if available)
942
+ "get_file_structure", # Get file structure (for understanding project layout)
943
+ ]
944
+
945
+ if tool_name in essential_tools:
946
+ tool_record = {
947
+ "tool_name": tool_name,
948
+ "tool_input": tool_input,
949
+ "tool_result": tool_result,
950
+ "timestamp": time.time(),
951
+ }
952
+ self.current_round_tool_results.append(tool_record)
953
+ # self.logger.info(f"📊 Essential tool result recorded: {tool_name} ({len(self.current_round_tool_results)} total)")
954
+
955
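The write_file detection plus essential-tool filtering above reduces to a few lines. A standalone sketch with an abbreviated tool list (`ToolRecorder` is illustrative, not the agent class):

```python
import time

# Illustrative subset of the essential-tool whitelist used above
ESSENTIAL_TOOLS = {"read_code_mem", "read_file", "write_file",
                   "execute_python", "execute_bash", "search_code"}

class ToolRecorder:
    def __init__(self):
        self.current_round_tool_results = []
        self.last_write_file_detected = False

    def record(self, tool_name, tool_input, tool_result):
        if tool_name == "write_file":
            # Flag that triggers memory clearing on the next round
            self.last_write_file_detected = True
        if tool_name in ESSENTIAL_TOOLS:
            self.current_round_tool_results.append({
                "tool_name": tool_name,
                "tool_input": tool_input,
                "tool_result": tool_result,
                "timestamp": time.time(),
            })
```

Non-essential tool calls are silently dropped, keeping the per-round record small.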
+ def should_use_concise_mode(self) -> bool:
956
+ """
957
+ Check if concise memory mode should be used
958
+
959
+ Returns:
960
+ True if first file has been generated and concise mode should be active
961
+ """
962
+ return self.last_write_file_detected
963
+
964
+ def create_concise_messages(
965
+ self,
966
+ system_prompt: str,
967
+ messages: List[Dict[str, Any]],
968
+ files_implemented: int,
969
+ ) -> List[Dict[str, Any]]:
970
+ """
971
+ Create concise message list for LLM input
972
+ NEW LOGIC: Always clear after write_file, keep system_prompt + initial_plan + current round tools
973
+
974
+ Args:
975
+ system_prompt: Current system prompt
976
+ messages: Original message list
977
+ files_implemented: Number of files implemented so far
978
+
979
+ Returns:
980
+ Concise message list containing only essential information
981
+ """
982
+ if not self.last_write_file_detected:
983
+ # Before any write_file, use normal flow
984
+ self.logger.info(
985
+ "🔄 Using normal conversation flow (before any write_file)"
986
+ )
987
+ return messages
988
+
989
+ # After write_file detection, use concise approach with clean slate
990
+ self.logger.info(
991
+ f"🎯 Using CONCISE memory mode - Clear slate after write_file, Round {self.current_round}"
992
+ )
993
+
994
+ concise_messages = []
995
+
996
+ # Get formatted file lists
997
+ file_lists = self.get_formatted_files_lists()
998
+ implemented_files_list = file_lists["implemented"]
999
+
1000
+ # 1. Add initial plan message (always preserved)
1001
+ initial_plan_message = {
1002
+ "role": "user",
1003
+ "content": f"""**Task: Implement code based on the following reproduction plan**
1004
+
1005
+ **Code Reproduction Plan:**
1006
+ {self.initial_plan}
1007
+
1008
+ **Working Directory:** Current workspace
1009
+
1010
+ **All Previously Implemented Files:**
1011
+ {implemented_files_list}
1012
+
1013
+ **Current Status:** {files_implemented} files implemented
1014
+
1015
+ **Objective:** Continue implementation by analyzing dependencies and implementing the next required file according to the plan's priority order.""",
1016
+ }
1017
+
1018
+ # Append Next Steps information if available
1019
+ if self.current_next_steps.strip():
1020
+ initial_plan_message["content"] += (
1021
+ f"\n\n**Next Steps (from previous analysis):**\n{self.current_next_steps}"
1022
+ )
1023
+
1024
+ # Debug output for unimplemented files (clean format without dashes)
1025
+ unimplemented_files = self.get_unimplemented_files()
1026
+ print("✅ Unimplemented Files:")
1027
+ for file_path in unimplemented_files:
1028
+ print(f"{file_path}")
1029
+ if self.current_next_steps.strip():
1030
+ print(f"\n📋 {self.current_next_steps}")
1031
+
1032
+ concise_messages.append(initial_plan_message)
1033
+
1034
+ # 2. Add Knowledge Base
1035
+ knowledge_base_message = {
1036
+ "role": "user",
1037
+ "content": f"""**Below is the Knowledge Base of the LATEST implemented code file:**
1038
+ {self._read_code_knowledge_base() or 'No knowledge base available yet.'}
1039
+
1040
+ **Development Cycle - START HERE:**
1041
+
1042
+ **For NEW file implementation:**
1043
+ 1. **Call read_code_mem(already_implemented_file_path)** to understand existing implementations and dependencies. Choose relevant ALREADY IMPLEMENTED file paths for reference, NOT the path of the new file you are about to create
1044
+ 2. `search_code_references` → OPTIONALLY search reference patterns for inspiration (use for reference only, original paper specs take priority)
1045
+ 3. `write_file` → Create the complete code implementation based on original paper requirements
1046
+ 4. `execute_python` or `execute_bash` → Test the partial implementation if needed
1047
+
1048
+ **When all files implemented:**
1049
+ **Use execute_python or execute_bash** to test the complete implementation""",
1050
+ }
1051
+ concise_messages.append(knowledge_base_message)
1052
+
1053
+ # 3. Add current tool results (essential information for next file generation)
1054
+ if self.current_round_tool_results:
1055
+ tool_results_content = self._format_tool_results()
1056
+
1057
+ # # Append Next Steps information if available
1058
+ # if self.current_next_steps.strip():
1059
+ # tool_results_content += f"\n\n**Next Steps (from previous analysis):**\n{self.current_next_steps}"
1060
+
1061
+ tool_results_message = {
1062
+ "role": "user",
1063
+ "content": f"""**Current Tool Results:**
1064
+ {tool_results_content}""",
1065
+ }
1066
+ concise_messages.append(tool_results_message)
1067
+ else:
1068
+ # If no tool results yet, add guidance for next steps
1069
+ guidance_content = f"""**Current Round:** {self.current_round}
1070
+
1071
+ **Development Cycle - START HERE:**
1072
+
1073
+ **For NEW file implementation:**
1074
+ 1. **Call read_code_mem(already_implemented_file_path)** to understand existing implementations and dependencies. Choose relevant ALREADY IMPLEMENTED file paths for reference, NOT the path of the new file you are about to create
1075
+ 2. `search_code_references` → OPTIONALLY search reference patterns for inspiration (use for reference only, original paper specs take priority)
1076
+ 3. `write_file` → Implement the new component
1077
+ 4. `execute_python` or `execute_bash` → Test the implementation (if needed)
1078
+
1079
+ **When all files implemented:**
1080
+ 1. **Use execute_python or execute_bash** to test the complete implementation"""
1081
+
1082
+ # # Append Next Steps information if available (even when no tool results)
1083
+ # if self.current_next_steps.strip():
1084
+ # guidance_content += f"\n\n**Next Steps (from previous analysis):**\n{self.current_next_steps}"
1085
+
1086
+ guidance_message = {
1087
+ "role": "user",
1088
+ "content": guidance_content,
1089
+ }
1090
+ concise_messages.append(guidance_message)
1091
+ # **Available Essential Tools:** read_code_mem, write_file, execute_python, execute_bash
1092
+ # **Remember:** Start with read_code_mem when implementing NEW files to understand existing code. When all files are implemented, focus on testing and completion. Implement according to the original paper's specifications - any reference code is for inspiration only."""
1093
+ # self.logger.info(f"✅ Concise messages created: {len(concise_messages)} messages (original: {len(messages)})")
1094
+ return concise_messages
1095
+
1096
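Stripped of the prompt text, the clean-slate assembly above is just: plan message, knowledge-base message, then either tool results or guidance. A minimal sketch with hypothetical names:

```python
def build_concise_messages(plan, knowledge_base, tool_results_text=None):
    """After a write_file, prior history is discarded; only these messages survive."""
    messages = [
        {"role": "user", "content": f"**Code Reproduction Plan:**\n{plan}"},
        {"role": "user", "content": f"**Knowledge Base:**\n{knowledge_base}"},
    ]
    if tool_results_text:
        messages.append({"role": "user",
                         "content": f"**Current Tool Results:**\n{tool_results_text}"})
    else:
        # No tool output yet: nudge the model toward the first step of the cycle
        messages.append({"role": "user",
                         "content": "No tool results yet; start with read_code_mem."})
    return messages
```

The result is always a short, bounded message list regardless of how long the original conversation had grown.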
+ def _read_code_knowledge_base(self) -> Optional[str]:
1097
+ """
1098
+ Read the implement_code_summary.md file as code knowledge base
1099
+ Returns only the final/latest implementation entry, not all historical entries
1100
+
1101
+ Returns:
1102
+ Content of the latest implementation entry if it exists, None otherwise
1103
+ """
1104
+ try:
1105
+ if os.path.exists(self.code_summary_path):
1106
+ with open(self.code_summary_path, "r", encoding="utf-8") as f:
1107
+ content = f.read().strip()
1108
+
1109
+ if content:
1110
+ # Extract only the final/latest implementation entry
1111
+ return self._extract_latest_implementation_entry(content)
1112
+ else:
1113
+ return None
1114
+ else:
1115
+ return None
1116
+
1117
+ except Exception as e:
1118
+ self.logger.error(f"Failed to read code knowledge base: {e}")
1119
+ return None
1120
+
1121
+ def _extract_latest_implementation_entry(self, content: str) -> Optional[str]:
1122
+ """
1123
+ Extract the latest/final implementation entry from the implement_code_summary.md content
1124
+ Uses a simpler approach to find the last implementation section
1125
+
1126
+ Args:
1127
+ content: Full content of implement_code_summary.md
1128
+
1129
+ Returns:
1130
+ Latest implementation entry content, or None if not found
1131
+ """
1132
+ try:
1133
+ import re
1134
+
1135
+ # Pattern to match the start of implementation sections
1136
+ section_pattern = (
1137
+ r"={80}\s*\n## IMPLEMENTATION File .+?; ROUND \d+\s*\n={80}"
1138
+ )
1139
+
1140
+ # Find all implementation section starts
1141
+ matches = list(re.finditer(section_pattern, content))
1142
+
1143
+ if not matches:
1144
+ # No implementation sections found
1145
+ lines = content.split("\n")
1146
+ fallback_content = (
1147
+ "\n".join(lines[:10]) + "\n... (truncated for brevity)"
1148
+ if len(lines) > 10
1149
+ else content
1150
+ )
1151
+ self.logger.info(
1152
+ "📖 No implementation sections found, using fallback content"
1153
+ )
1154
+ return fallback_content
1155
+
1156
+ # Get the start position of the last implementation section
1157
+ last_match = matches[-1]
1158
+ start_pos = last_match.start()
1159
+
1160
+ # Take everything from the last section start to the end of content
1161
+ latest_entry = content[start_pos:].strip()
1162
+
1163
+ # self.logger.info(f"📖 Extracted latest implementation entry from knowledge base")
1164
+ # print(f"DEBUG: Extracted content length: {len(latest_entry)}")
1165
+ # print(f"DEBUG: First 200 chars: {latest_entry[:]}")
1166
+
1167
+ return latest_entry
1168
+
1169
+ except Exception as e:
1170
+ self.logger.error(f"Failed to extract latest implementation entry: {e}")
1171
+ # Return the last 500 characters as a fallback
1172
+ return content[-500:] if len(content) > 500 else content
1173
+
1174
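The "take everything from the last section header onward" logic above can be sketched in isolation; the pattern mirrors the one in the method, and `latest_entry` is an illustrative name:

```python
import re

# A section header is an 80-char '=' rule, the IMPLEMENTATION line, and another rule
SECTION_PATTERN = r"={80}\s*\n## IMPLEMENTATION File .+?; ROUND \d+\s*\n={80}"

def latest_entry(content: str):
    """Return everything from the last implementation header onward, or None."""
    matches = list(re.finditer(SECTION_PATTERN, content))
    if not matches:
        return None
    return content[matches[-1].start():].strip()
```

Slicing from the start of the final match keeps that section's header in the output, which gives the LLM context about which file and round the entry belongs to.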
+ def _format_tool_results(self) -> str:
1175
+ """
1176
+ Format current round tool results for LLM input
1177
+
1178
+ Returns:
1179
+ Formatted string of tool results
1180
+ """
1181
+ if not self.current_round_tool_results:
1182
+ return "No tool results in current round."
1183
+
1184
+ formatted_results = []
1185
+
1186
+ for result in self.current_round_tool_results:
1187
+ tool_name = result["tool_name"]
1188
+ tool_input = result["tool_input"]
1189
+ tool_result = result["tool_result"]
1190
+
1191
+ # Format based on tool type
1192
+ if tool_name == "read_code_mem":
1193
+ file_path = tool_input.get("file_path", "unknown")
1194
+ formatted_results.append(f"""
1195
+ **read_code_mem Result for {file_path}:**
1196
+ {self._format_tool_result_content(tool_result)}
1197
+ """)
1198
+ elif tool_name == "read_file":
1199
+ file_path = tool_input.get("file_path", "unknown")
1200
+ formatted_results.append(f"""
1201
+ **read_file Result for {file_path}:**
1202
+ {self._format_tool_result_content(tool_result)}
1203
+ """)
1204
+ elif tool_name == "write_file":
1205
+ file_path = tool_input.get("file_path", "unknown")
1206
+ formatted_results.append(f"""
1207
+ **write_file Result for {file_path}:**
1208
+ {self._format_tool_result_content(tool_result)}
1209
+ """)
1210
+ elif tool_name == "execute_python":
1211
+ code_snippet = (
1212
+ tool_input.get("code", "")[:50] + "..."
1213
+ if len(tool_input.get("code", "")) > 50
1214
+ else tool_input.get("code", "")
1215
+ )
1216
+ formatted_results.append(f"""
1217
+ **execute_python Result (code: {code_snippet}):**
1218
+ {self._format_tool_result_content(tool_result)}
1219
+ """)
1220
+ elif tool_name == "execute_bash":
1221
+ command = tool_input.get("command", "unknown")
1222
+ formatted_results.append(f"""
1223
+ **execute_bash Result (command: {command}):**
1224
+ {self._format_tool_result_content(tool_result)}
1225
+ """)
1226
+ elif tool_name == "search_code":
1227
+ pattern = tool_input.get("pattern", "unknown")
1228
+ file_pattern = tool_input.get("file_pattern", "")
1229
+ formatted_results.append(f"""
1230
+ **search_code Result (pattern: {pattern}, files: {file_pattern}):**
1231
+ {self._format_tool_result_content(tool_result)}
1232
+ """)
1233
+ elif tool_name == "search_reference_code":
1234
+ target_file = tool_input.get("target_file", "unknown")
1235
+ keywords = tool_input.get("keywords", "")
1236
+ formatted_results.append(f"""
1237
+ **search_reference_code Result for {target_file} (keywords: {keywords}):**
1238
+ {self._format_tool_result_content(tool_result)}
1239
+ """)
1240
+ elif tool_name == "get_file_structure":
1241
+ directory = tool_input.get(
1242
+ "directory_path", tool_input.get("path", "current")
1243
+ )
1244
+ formatted_results.append(f"""
1245
+ **get_file_structure Result for {directory}:**
1246
+ {self._format_tool_result_content(tool_result)}
1247
+ """)
1248
+
1249
+ return "\n".join(formatted_results)
1250
+
1251
+ def _format_tool_result_content(self, tool_result: Any) -> str:
1252
+ """
1253
+ Format tool result content for display
1254
+
1255
+ Args:
1256
+ tool_result: Tool result to format
1257
+
1258
+ Returns:
1259
+ Formatted string representation
1260
+ """
1261
+ if isinstance(tool_result, str):
1262
+ # Try to parse as JSON for better formatting
1263
+ try:
1264
+ result_data = json.loads(tool_result)
1265
+ if isinstance(result_data, dict):
1266
+ # Format key information
1267
+ if result_data.get("status") == "summary_found":
1268
+ return (
1269
+ f"Summary found:\n{result_data.get('summary_content', '')}"
1270
+ )
1271
+ elif result_data.get("status") == "no_summary":
1272
+ return "No summary available"
1273
+ else:
1274
+ return json.dumps(result_data, indent=2)
1275
+ else:
1276
+ return str(result_data)
1277
+ except json.JSONDecodeError:
1278
+ return tool_result
1279
+ else:
1280
+ return str(tool_result)
1281
+
1282
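The JSON-or-string formatting above can be condensed into one function. A sketch keeping the same status handling (the standalone `format_result` name is illustrative):

```python
import json

def format_result(tool_result):
    """Pretty-print JSON dict results; pass other strings/values through."""
    if not isinstance(tool_result, str):
        return str(tool_result)
    try:
        data = json.loads(tool_result)
    except json.JSONDecodeError:
        return tool_result  # plain text, not JSON
    if isinstance(data, dict):
        if data.get("status") == "no_summary":
            return "No summary available"
        return json.dumps(data, indent=2)
    return str(data)
```

Falling back to the raw string on `JSONDecodeError` means malformed tool output is still shown to the model rather than swallowed.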
+ def get_memory_statistics(self, files_implemented: int = 0) -> Dict[str, Any]:
1283
+ """Get memory agent statistics"""
1284
+ unimplemented_files = self.get_unimplemented_files()
1285
+ return {
1286
+ "last_write_file_detected": self.last_write_file_detected,
1287
+ "should_clear_memory_next": self.should_clear_memory_next,
1288
+ "current_round": self.current_round,
1289
+ "concise_mode_active": self.should_use_concise_mode(),
1290
+ "current_round_tool_results": len(self.current_round_tool_results),
1291
+ "essential_tools_recorded": [
1292
+ r["tool_name"] for r in self.current_round_tool_results
1293
+ ],
1294
+ "implemented_files_tracked": files_implemented,
1295
+ "implemented_files_list": self.implemented_files.copy(),
1296
+ "phases_parsed": len(self.phase_structure),
1297
+ "next_steps_available": bool(self.current_next_steps.strip()),
1298
+ "next_steps_length": len(self.current_next_steps.strip())
1299
+ if self.current_next_steps
1300
+ else 0,
1301
+ # File tracking statistics
1302
+ "total_files_in_plan": len(self.all_files_list),
1303
+ "files_implemented_count": len(self.implemented_files),
1304
+ "files_remaining_count": len(unimplemented_files),
1305
+ "all_files_list": self.all_files_list.copy(),
1306
+ "unimplemented_files_list": unimplemented_files,
1307
+ "implementation_progress_percent": (
1308
+ len(self.implemented_files) / len(self.all_files_list) * 100
1309
+ )
1310
+ if self.all_files_list
1311
+ else 0,
1312
+ }
1313
+
1314
+ def get_implemented_files(self) -> List[str]:
1315
+ """Get list of all implemented files"""
1316
+ return self.implemented_files.copy()
1317
+
1318
+ def get_all_files_list(self) -> List[str]:
1319
+ """Get list of all files that should be implemented according to the plan"""
1320
+ return self.all_files_list.copy()
1321
+
1322
+ def get_unimplemented_files(self) -> List[str]:
1323
+ """
1324
+ Get list of files that haven't been implemented yet
1325
+
1326
+ Returns:
1327
+ List of file paths that still need to be implemented
1328
+ """
1329
+ implemented_set = set(self.implemented_files)
1330
+ unimplemented = [f for f in self.all_files_list if f not in implemented_set]
1331
+ return unimplemented
1332
+
1333
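Computing the unimplemented files is an order-preserving set difference, which keeps the plan's priority order intact. A minimal sketch:

```python
def unimplemented_files(all_files, implemented):
    # Preserve plan order; set membership keeps the filter O(n)
    done = set(implemented)
    return [f for f in all_files if f not in done]
```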
+ def get_formatted_files_lists(self) -> Dict[str, str]:
1334
+ """
1335
+ Get formatted strings for implemented and unimplemented files
1336
+
1337
+ Returns:
1338
+ Dictionary with 'implemented' and 'unimplemented' formatted lists
1339
+ """
1340
+ implemented_list = (
1341
+ "\n".join([f"- {file}" for file in self.implemented_files])
1342
+ if self.implemented_files
1343
+ else "- None yet"
1344
+ )
1345
+
1346
+ unimplemented_files = self.get_unimplemented_files()
1347
+ unimplemented_list = (
1348
+ "\n".join([f"- {file}" for file in unimplemented_files])
1349
+ if unimplemented_files
1350
+ else "- All files implemented!"
1351
+ )
1352
+
1353
+ return {"implemented": implemented_list, "unimplemented": unimplemented_list}
1354
+
1355
+ def get_current_next_steps(self) -> str:
1356
+ """Get the current Next Steps information"""
1357
+ return self.current_next_steps
1358
+
1359
+ def clear_next_steps(self):
1360
+ """Clear the stored Next Steps information"""
1361
+ if self.current_next_steps.strip():
1362
+ self.logger.info("🧹 Next Steps information cleared")
1363
+ self.current_next_steps = ""
1364
+
1365
+ def set_next_steps(self, next_steps: str):
1366
+ """Manually set Next Steps information"""
1367
+ self.current_next_steps = next_steps
1368
+ self.logger.info(
1369
+ f"📝 Next Steps manually set ({len(next_steps.strip())} chars)"
1370
+ )
1371
+
1372
+ def should_trigger_memory_optimization(
1373
+ self, messages: List[Dict[str, Any]], files_implemented: int = 0
1374
+ ) -> bool:
1375
+ """
1376
+ Check if memory optimization should be triggered
1377
+ NEW LOGIC: Trigger after write_file has been detected
1378
+
1379
+ Args:
1380
+ messages: Current message list
1381
+ files_implemented: Number of files implemented so far
1382
+
1383
+ Returns:
1384
+ True if concise mode should be applied
1385
+ """
1386
+ # Trigger if we detected write_file and should clear memory
1387
+ if self.should_clear_memory_next:
1388
+ # self.logger.info(f"🎯 Triggering CONCISE memory optimization (write_file detected, files: {files_implemented})")
1389
+ return True
1390
+
1391
+ # No optimization before any write_file
1392
+ return False
1393
+
1394
+ def apply_memory_optimization(
1395
+ self, system_prompt: str, messages: List[Dict[str, Any]], files_implemented: int
1396
+ ) -> List[Dict[str, Any]]:
1397
+ """
1398
+ Apply memory optimization using concise approach
1399
+ NEW LOGIC: Clear all history after write_file, keep only system_prompt + initial_plan + current tools
1400
+
1401
+ Args:
1402
+ system_prompt: Current system prompt
1403
+ messages: Original message list
1404
+ files_implemented: Number of files implemented so far
1405
+
1406
+ Returns:
1407
+ Optimized message list
1408
+ """
1409
+ if not self.should_clear_memory_next:
1410
+ # Before any write_file, return original messages
1411
+ return messages
1412
+
1413
+ # Apply concise memory optimization after write_file detection
1414
+ # self.logger.info(f"🧹 CLEARING MEMORY after write_file - creating clean slate")
1415
+ optimized_messages = self.create_concise_messages(
1416
+ system_prompt, messages, files_implemented
1417
+ )
1418
+
1419
+ # Clear the flag after applying optimization
1420
+ self.should_clear_memory_next = False
1421
+
1422
+ compression_ratio = (
1423
+ ((len(messages) - len(optimized_messages)) / len(messages) * 100)
1424
+ if messages
1425
+ else 0
1426
+ )
1427
+ self.logger.info(
1428
+ f"🎯 CONCISE optimization applied: {len(messages)} → {len(optimized_messages)} messages ({compression_ratio:.1f}% compression)"
1429
+ )
1430
+
1431
+ return optimized_messages
1432
+
1433
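The compression ratio logged above is the percentage reduction in message count, guarded against an empty original list:

```python
def compression_ratio(original_count: int, optimized_count: int) -> float:
    """Percentage of messages removed by the concise optimization."""
    if not original_count:
        return 0.0
    return (original_count - optimized_count) / original_count * 100
```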
+ def clear_current_round_tool_results(self):
1434
+ """Clear current round tool results (called when starting new round)"""
1435
+ self.current_round_tool_results = []
1436
+ self.logger.info("🧹 Current round tool results cleared")
1437
+
1438
+ def debug_concise_state(self, files_implemented: int = 0):
1439
+ """Debug method to show current concise memory state"""
1440
+ stats = self.get_memory_statistics(files_implemented)
1441
+
1442
+ print("=" * 60)
1443
+ print("🎯 CONCISE MEMORY AGENT STATE (Write-File-Based)")
1444
+ print("=" * 60)
1445
+ print(f"Last write_file detected: {stats['last_write_file_detected']}")
1446
+ print(f"Should clear memory next: {stats['should_clear_memory_next']}")
1447
+ print(f"Files implemented: {stats['implemented_files_tracked']}")
1448
+ print(f"Current round: {stats['current_round']}")
1449
+ print(f"Concise mode active: {stats['concise_mode_active']}")
1450
+ print(f"Current round tool results: {stats['current_round_tool_results']}")
1451
+ print(f"Essential tools recorded: {stats['essential_tools_recorded']}")
1452
+ print(f"Implemented files tracked: {len(self.implemented_files)}")
1453
+ print(f"Implemented files list: {self.implemented_files}")
1454
+ print(f"Code summary file exists: {os.path.exists(self.code_summary_path)}")
1455
+ print(f"Next Steps available: {stats['next_steps_available']}")
1456
+ print(f"Next Steps length: {stats['next_steps_length']} chars")
1457
+ if self.current_next_steps.strip():
1458
+ print(f"Next Steps preview: {self.current_next_steps[:100]}...")
1459
+ print("")
1460
+ print("📋 FILE TRACKING:")
1461
+ print(f" Total files in plan: {stats['total_files_in_plan']}")
1462
+ print(f" Files implemented: {stats['files_implemented_count']}")
1463
+ print(f" Files remaining: {stats['files_remaining_count']}")
1464
+ print(f" Progress: {stats['implementation_progress_percent']:.1f}%")
1465
+ if stats["unimplemented_files_list"]:
1466
+ print(f" Next possible files: {stats['unimplemented_files_list'][:3]}...")
1467
+ print("")
1468
+ print(
1469
+ "📊 NEW LOGIC: write_file → clear memory → accumulate tools → next write_file"
1470
+ )
1471
+ print("📊 NEXT STEPS: Stored separately from file, included in tool results")
1472
+ print(
1473
+ "📊 FILE TRACKING: All files extracted from plan, unimplemented files guide LLM decisions"
1474
+ )
1475
+ print("📊 Essential Tools Tracked:")
1476
+ essential_tools = [
1477
+ "read_code_mem",
1478
+ "read_file",
1479
+ "write_file",
1480
+ "execute_python",
1481
+ "execute_bash",
1482
+ "search_code",
1483
+ "search_reference_code",
1484
+ "get_file_structure",
1485
+ ]
1486
+ for tool in essential_tools:
1487
+ tool_count = sum(
1488
+ 1 for r in self.current_round_tool_results if r["tool_name"] == tool
1489
+ )
1490
+ print(f" - {tool}: {tool_count} calls")
1491
+ print("=" * 60)
projects/ui/DeepCode/workflows/agents/memory_agent_concise_multi.py ADDED
@@ -0,0 +1,1659 @@
+ """
+ Concise Memory Agent for Code Implementation Workflow - Multi-File Only Support
+
+ This memory agent implements a focused approach with ONLY multi-file capabilities:
+ 1. Before first batch: Normal conversation flow
+ 2. After first batch: Keep only system_prompt + initial_plan + current round tool results
+ 3. Clean slate for each new code batch generation
+ 4. MULTI-FILE ONLY: Support for summarizing multiple files simultaneously (max 5)
+
+ Key Features:
+ - Preserves system prompt and initial plan always
+ - After first batch generation, discards previous conversation history
+ - Keeps only current round tool results from essential tools:
+ * read_multiple_files, write_multiple_files
+ * execute_python, execute_bash
+ * search_code, search_reference_code, get_file_structure
+ - Provides clean, focused input for next write_multiple_files operation
+ - MULTI-FILE ONLY: No single file support
+ - FILE TRACKING: Gets ALL file information from workflow, no internal tracking
+ """
+
+ import json
+ import logging
+ import os
+ import time
+ from datetime import datetime
+ from typing import Dict, Any, List, Optional
+
+
+ class ConciseMemoryAgent:
+ """
+ Concise Memory Agent - Focused Information Retention with MULTI-FILE ONLY Support
+
+ Core Philosophy:
+ - Preserve essential context (system prompt + initial plan)
+ - After first batch generation, use clean slate approach
+ - Keep only current round tool results from multi-file MCP tools
+ - Remove conversational clutter and previous tool calls
+ - MULTI-FILE ONLY: Support for multiple file implementations in single operation
+ - FILE TRACKING: Receives ALL file information from workflow (no internal tracking)
+
+ Essential Tools Tracked:
+ - Multi-File Operations: read_multiple_files, write_multiple_files
+ - Code Analysis: search_code, search_reference_code, get_file_structure
+ - Execution: execute_python, execute_bash
+ """
+
+ def __init__(
+ self,
+ initial_plan_content: str,
+ logger: Optional[logging.Logger] = None,
+ target_directory: Optional[str] = None,
+ default_models: Optional[Dict[str, str]] = None,
+ max_files_per_batch: int = 3,
+ ):
+ """
+ Initialize Concise Memory Agent with MULTI-FILE ONLY support
+
+ Args:
+ initial_plan_content: Content of initial_plan.txt
+ logger: Logger instance
+ target_directory: Target directory for saving summaries
+ default_models: Default models configuration from workflow
+ max_files_per_batch: Maximum number of files to implement simultaneously (default: 3)
+ """
+ self.logger = logger or self._create_default_logger()
+ self.initial_plan = initial_plan_content
+ self.max_files_per_batch = max_files_per_batch
+
+ # Store default models configuration
+ self.default_models = default_models or {
+ "anthropic": "claude-sonnet-4-20250514",
+ "openai": "gpt-4o",
+ }
+
+ # Memory state tracking - new logic: trigger after each write_multiple_files
+ self.last_write_multiple_files_detected = (
+ False # Track if write_multiple_files was called in current iteration
+ )
+ self.should_clear_memory_next = False # Flag to clear memory in next round
+ self.current_round = 0
+
+ # self.phase_structure = self._parse_phase_structure()
+
+ # Memory configuration
+ if target_directory:
+ self.save_path = target_directory
+ else:
+ self.save_path = "./deepcode_lab/papers/1/"
+
+ # Code summary file path
+ self.code_summary_path = os.path.join(
+ self.save_path, "implement_code_summary.md"
+ )
+
+ # Current round tool results storage
+ self.current_round_tool_results = []
+
+ self.logger.info(
+ f"Concise Memory Agent initialized with target directory: {self.save_path}"
+ )
+ self.logger.info(f"Code summary will be saved to: {self.code_summary_path}")
+ self.logger.info(f"Max files per batch: {self.max_files_per_batch}")
+ self.logger.info(
+ "📝 MULTI-FILE LOGIC: Memory clearing triggered after each write_multiple_files call"
+ )
+ self.logger.info(
+ "🆕 MULTI-FILE ONLY: No single file support - batch operations only"
+ )
+ self.logger.info(
+ "📊 FILE TRACKING: ALL file information received from workflow (no internal tracking)"
+ )
+
+ def _create_default_logger(self) -> logging.Logger:
+ """Create default logger"""
+ logger = logging.getLogger(f"{__name__}.ConciseMemoryAgent")
+ logger.setLevel(logging.INFO)
+ return logger
+
+ async def create_multi_code_implementation_summary(
+ self,
+ client,
+ client_type: str,
+ file_implementations: Dict[str, str],
+ files_implemented: int,
+ implemented_files: List[str], # Receive from workflow
+ ) -> str:
+ """
+ Create LLM-based code implementation summary for multiple files
+ ONLY AVAILABLE METHOD: Handles multiple files simultaneously with separate summaries for each
+
+ Args:
+ client: LLM client instance
+ client_type: Type of LLM client ("anthropic" or "openai")
+ file_implementations: Dictionary mapping file_path to implementation_content
+ files_implemented: Number of files implemented so far
+ implemented_files: List of all implemented files (from workflow)
+
+ Returns:
+ LLM-generated formatted code implementation summaries for all files
+ """
+ try:
+ # Validate input
+ if not file_implementations:
+ raise ValueError("No file implementations provided")
+
+ if len(file_implementations) > self.max_files_per_batch:
+ raise ValueError(
+ f"Too many files provided ({len(file_implementations)}), max is {self.max_files_per_batch}"
+ )
+
+ # Create prompt for LLM summary of multiple files
+ summary_prompt = self._create_multi_code_summary_prompt(
+ file_implementations, files_implemented, implemented_files
+ )
+ summary_messages = [{"role": "user", "content": summary_prompt}]
+
+ # Get LLM-generated summary
+ llm_response = await self._call_llm_for_summary(
+ client, client_type, summary_messages
+ )
+ llm_summary = llm_response.get("content", "")
+
+ # Extract sections for each file and next steps
+ multi_sections = self._extract_multi_summary_sections(
+ llm_summary, file_implementations.keys()
+ )
+
+ # Format and save summary for each file (WITHOUT Next Steps)
+ all_formatted_summaries = []
+
+ for file_path in file_implementations.keys():
+ file_sections = multi_sections.get("files", {}).get(file_path, {})
+
+ # Format summary with ONLY Implementation Progress and Dependencies for file saving
+ file_summary_content = ""
+ if file_sections.get("core_purpose"):
+ file_summary_content += file_sections["core_purpose"] + "\n\n"
+ if file_sections.get("public_interface"):
+ file_summary_content += file_sections["public_interface"] + "\n\n"
+ if file_sections.get("internal_dependencies"):
+ file_summary_content += (
+ file_sections["internal_dependencies"] + "\n\n"
+ )
+ if file_sections.get("external_dependencies"):
+ file_summary_content += (
+ file_sections["external_dependencies"] + "\n\n"
+ )
+ if file_sections.get("implementation_notes"):
+ file_summary_content += (
+ file_sections["implementation_notes"] + "\n\n"
+ )
+
+ # Create the formatted summary for file saving (WITHOUT Next Steps)
+ formatted_summary = self._format_code_implementation_summary(
+ file_path, file_summary_content.strip(), files_implemented
+ )
+
+ all_formatted_summaries.append(formatted_summary)
+
+ # Save to implement_code_summary.md (append mode) - ONLY Implementation Progress and Dependencies
+ await self._save_code_summary_to_file(formatted_summary, file_path)
+
+ # Combine all summaries for return
+ combined_summary = "\n".join(all_formatted_summaries)
+
+ self.logger.info(
+ f"Created and saved multi-file code summaries for {len(file_implementations)} files"
+ )
+
+ return combined_summary
+
+ except Exception as e:
+ self.logger.error(
+ f"Failed to create LLM-based multi-file code implementation summary: {e}"
+ )
+ # Fallback to simple summary for each file
+ return self._create_fallback_multi_code_summary(
+ file_implementations, files_implemented
+ )
+
+ def _create_multi_code_summary_prompt(
+ self,
+ file_implementations: Dict[str, str],
+ files_implemented: int,
+ implemented_files: List[str],
+ ) -> str:
+ """
+ Create prompt for LLM to generate multi-file code implementation summary
+
+ Args:
+ file_implementations: Dictionary mapping file_path to implementation_content
+ files_implemented: Number of files implemented so far
+ implemented_files: List of all implemented files (from workflow)
+
+ Returns:
+ Prompt for LLM multi-file summarization
+ """
+
+ # Format file lists using workflow data
+ implemented_files_list = (
+ "\n".join([f"- {file}" for file in implemented_files])
+ if implemented_files
+ else "- None yet"
+ )
+
+ # Note: We don't have unimplemented files list anymore - workflow will provide when needed
+
+ # Format file implementations for the prompt
+ implementation_sections = []
+ for file_path, content in file_implementations.items():
+ implementation_sections.append(f"""
+ **File: {file_path}**
+ {content}
+ """)
+
+ files_list = list(file_implementations.keys())
+ files_count = len(files_list)
+
+ prompt = f"""You are an expert code implementation summarizer. Analyze the {files_count} implemented code files and create structured summaries for each.
+
+ **All Previously Implemented Files:**
+ {implemented_files_list}
+
+ **Current Implementation Context:**
+ - **Files Implemented**: {', '.join(files_list)}
+ - **Total Files Implemented**: {files_implemented}
+ - **Files in This Batch**: {files_count}
+
+ **Initial Plan Reference:**
+ {self.initial_plan}
+
+ **Implemented Code Content:**
+ {''.join(implementation_sections)}
+
+ **Required Summary Format:**
+
+ **FOR EACH FILE, provide separate sections:**
+
+ **File: {{file_path}}**
+ **Core Purpose** (provide a general overview of the file's main responsibility):
+ - {{1-2 sentence description of file's main responsibility}}
+
+ **Public Interface** (what other files can use, if any):
+ - Class {{ClassName}}: {{purpose}} | Key methods: {{method_names}} | Constructor params: {{params}}
+ - Function {{function_name}}({{params}}): {{purpose}} -> {{return_type}}: {{purpose}}
+ - Constants/Types: {{name}}: {{value/description}}
+
+ **Internal Dependencies** (what this file imports/requires, if any):
+ - From {{module/file}}: {{specific_imports}}
+ - External packages: {{package_name}} - {{usage_context}}
+
+ **External Dependencies** (what depends on this file, if any):
+ - Expected to be imported by: {{likely_consumer_files}}
+ - Key exports used elsewhere: {{main_interfaces}}
+
+ **Implementation Notes**: (if any)
+ - Architecture decisions: {{key_choices_made}}
+ - Cross-File Relationships: {{how_files_work_together}}
+
+ [Repeat for all {files_count} files...]
+
+ **Instructions:**
+ - Provide separate Implementation Progress and Dependencies sections for each of the {files_count} files
+ - Be precise and concise for each file
+ - Focus on function interfaces that other files will need
+ - Extract actual function signatures from the code
+ - Use the exact format specified above
+
+ **Summary:**"""
+
+ return prompt
+
+ def _extract_multi_summary_sections(
+ self, llm_summary: str, file_paths: List[str]
+ ) -> Dict[str, Any]:
+ """
+ Extract different sections from LLM-generated multi-file summary
+ """
+ result = {
+ "files": {},
+ }
+
+ try:
+ # Convert dict_keys to list if needed
+ if hasattr(file_paths, "keys"):
+ file_paths = list(file_paths)
+ elif not isinstance(file_paths, list):
+ file_paths = list(file_paths)
+
+ lines = llm_summary.split("\n")
+ current_file = None
+ current_section = None
+ current_content = []
+ file_sections = {}
+
+ for i, line in enumerate(lines):
+ line_lower = line.lower().strip()
+ original_line = line.strip()
+
+ # Skip empty lines
+ if not original_line:
+ if current_section:
+ current_content.append(line)
+ continue
+
+ # File header detection
+ if (
+ "**file:" in line_lower or "file:" in line_lower
+ ) and "**" in original_line:
+ # Save previous section
+ if current_file and current_section and current_content:
+ if current_file not in file_sections:
+ file_sections[current_file] = {}
+ file_sections[current_file][current_section] = "\n".join(
+ current_content
+ ).strip()
+
+ # Extract file path
+ file_header = original_line.lower()
+ if "**file:" in file_header:
+ file_header = original_line[
+ original_line.lower().find("file:") + 5 :
+ ]
+ if "**" in file_header:
+ file_header = file_header[: file_header.find("**")]
+ else:
+ file_header = original_line[
+ original_line.lower().find("file:") + 5 :
+ ]
+
+ file_header = file_header.strip()
+ current_file = None
+
+ # File matching
+ for file_path in file_paths:
+ file_name = file_path.split("/")[-1]
+ if (
+ file_path in file_header
+ or file_header in file_path
+ or file_name in file_header
+ or file_header in file_name
+ ):
+ current_file = file_path
+ break
+
+ current_section = None
+ current_content = []
+ continue
+
+ # Section detection within files
+ if current_file:
+ section_matched = False
+
+ if "core purpose" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ if current_file not in file_sections:
+ file_sections[current_file] = {}
+ file_sections[current_file][current_section] = "\n".join(
+ current_content
+ ).strip()
+ current_section = "core_purpose"
+ current_content = []
+ section_matched = True
+ elif "public interface" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ if current_file not in file_sections:
+ file_sections[current_file] = {}
+ file_sections[current_file][current_section] = "\n".join(
+ current_content
+ ).strip()
+ current_section = "public_interface"
+ current_content = []
+ section_matched = True
+ elif (
+ "internal dependencies" in line_lower and "**" in original_line
+ ):
+ if current_section and current_content:
+ if current_file not in file_sections:
+ file_sections[current_file] = {}
+ file_sections[current_file][current_section] = "\n".join(
+ current_content
+ ).strip()
+ current_section = "internal_dependencies"
+ current_content = []
+ section_matched = True
+ elif (
+ "external dependencies" in line_lower and "**" in original_line
+ ):
+ if current_section and current_content:
+ if current_file not in file_sections:
+ file_sections[current_file] = {}
+ file_sections[current_file][current_section] = "\n".join(
+ current_content
+ ).strip()
+ current_section = "external_dependencies"
+ current_content = []
+ section_matched = True
+ elif "implementation notes" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ if current_file not in file_sections:
+ file_sections[current_file] = {}
+ file_sections[current_file][current_section] = "\n".join(
+ current_content
+ ).strip()
+ current_section = "implementation_notes"
+ current_content = []
+ section_matched = True
+
+ # If no section header matched, add to current content
+ if not section_matched and current_section:
+ current_content.append(line)
+
+ # Save the final section
+ if current_file and current_section and current_content:
+ if current_file not in file_sections:
+ file_sections[current_file] = {}
+ file_sections[current_file][current_section] = "\n".join(
+ current_content
+ ).strip()
+
+ # Build final result
+ for file_path in file_paths:
+ sections = file_sections.get(file_path, {})
+ result["files"][file_path] = {}
+ if "core_purpose" in sections:
+ result["files"][file_path]["core_purpose"] = (
+ "**Core Purpose**:\n" + sections["core_purpose"]
+ )
+ if "public_interface" in sections:
+ result["files"][file_path]["public_interface"] = (
+ "**Public Interface**:\n" + sections["public_interface"]
+ )
+ if "implementation_notes" in sections:
+ result["files"][file_path]["implementation_notes"] = (
+ "**Implementation Notes**:\n" + sections["implementation_notes"]
+ )
+ if "internal_dependencies" in sections:
+ result["files"][file_path]["internal_dependencies"] = (
+ "**Internal Dependencies**:\n"
+ + sections["internal_dependencies"]
+ )
+ if "external_dependencies" in sections:
+ result["files"][file_path]["external_dependencies"] = (
+ "**External Dependencies**:\n"
+ + sections["external_dependencies"]
+ )
+
+ self.logger.info(
+ f"📋 Extracted multi-file sections for {len(result['files'])} files"
+ )
+
+ except Exception as e:
+ self.logger.error(f"Failed to extract multi-file summary sections: {e}")
+ self.logger.error(f"📋 file_paths type: {type(file_paths)}")
+ self.logger.error(f"📋 file_paths value: {file_paths}")
+ self.logger.error(f"📋 file_paths length: {len(file_paths)}")
+ for file_path in file_paths:
+ result["files"][file_path] = {
+ "core_purpose": f"**Core Purpose**: {file_path} completed.",
+ "public_interface": "**Public Interface**: Public interface needs manual review.",
+ "internal_dependencies": "**Internal Dependencies**: Internal dependencies need manual review.",
+ "external_dependencies": "**External Dependencies**: External dependencies need manual review.",
+ "implementation_notes": "**Implementation Notes**: Implementation notes need manual review.",
+ }
+
+ return result
+
+ def _format_code_implementation_summary(
+ self, file_path: str, llm_summary: str, files_implemented: int
+ ) -> str:
+ """
+ Format the LLM-generated summary into the final structure
+
+ Args:
+ file_path: Path of the implemented file
+ llm_summary: LLM-generated summary content
+ files_implemented: Number of files implemented so far
+
+ Returns:
+ Formatted summary
+ """
+ timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+ formatted_summary = f"""# Code Implementation Summary
+ **Generated**: {timestamp}
+ **File Implemented**: {file_path}
+
+ {llm_summary}
+
+ ---
+ *Auto-generated by Memory Agent*
+ """
+ return formatted_summary
+
+ def _create_fallback_multi_code_summary(
+ self, file_implementations: Dict[str, str], files_implemented: int
+ ) -> str:
+ """
+ Create fallback multi-file summary when LLM is unavailable
+
+ Args:
+ file_implementations: Dictionary mapping file_path to implementation_content
+ files_implemented: Number of files implemented so far
+
+ Returns:
+ Fallback multi-file summary
+ """
+ # Create fallback summaries for each file
+ fallback_summaries = []
+ timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+ for file_path in file_implementations.keys():
+ fallback_summary = f"""# Code Implementation Summary
+ **Generated**: {timestamp}
+ **File Implemented**: {file_path}
+ **Multi-file batch summary failed to generate.**
+
+ ---
+ *Auto-generated by Concise Memory Agent (Multi-File Fallback Mode)*
+ """
+ fallback_summaries.append(fallback_summary)
+
+ return "\n".join(fallback_summaries)
+
+ async def _save_code_summary_to_file(self, new_summary: str, file_path: str):
+ """
+ Append code implementation summary to implement_code_summary.md
+ Accumulates all implementations with clear separators
+
+ Args:
+ new_summary: New summary content to append
+ file_path: Path of the file for which the summary was generated
+ """
+ try:
+ # Create directory if it doesn't exist
+ os.makedirs(os.path.dirname(self.code_summary_path), exist_ok=True)
+
+ # Check if file exists to determine if we need header
+ file_exists = os.path.exists(self.code_summary_path)
+
+ # Open in append mode to accumulate all implementations
+ with open(self.code_summary_path, "a", encoding="utf-8") as f:
+ if not file_exists:
+ # Write header for new file
+ f.write("# Code Implementation Progress Summary\n")
+ f.write("*Accumulated implementation progress for all files*\n\n")
+
+ # Add clear separator between implementations
+ f.write("\n" + "=" * 80 + "\n")
+ f.write(f"## IMPLEMENTATION File {file_path}\n")
+ f.write("=" * 80 + "\n\n")
+
+ # Write the new summary
+ f.write(new_summary)
+ f.write("\n\n")
+
+ self.logger.info(
+ f"Appended LLM-based code implementation summary to: {self.code_summary_path}"
+ )
+
+ except Exception as e:
+ self.logger.error(f"Failed to save code implementation summary: {e}")
+
+ async def _call_llm_for_summary(
+ self, client, client_type: str, summary_messages: List[Dict]
+ ) -> Dict[str, Any]:
+ """
+ Call LLM for code implementation summary generation ONLY
+
+ This method is used only for creating code implementation summaries,
+ NOT for conversation summarization which has been removed.
+ """
+ if client_type == "anthropic":
+ response = await client.messages.create(
+ model=self.default_models["anthropic"],
+ system="You are an expert code implementation summarizer. Create structured summaries of implemented code files that preserve essential information about functions, dependencies, and implementation approaches.",
+ messages=summary_messages,
+ max_tokens=8000, # Increased for multi-file support
+ temperature=0.2,
+ )
+
+ content = ""
+ for block in response.content:
+ if block.type == "text":
+ content += block.text
+
+ return {"content": content}
+
+ elif client_type == "openai":
+ openai_messages = [
+ {
+ "role": "system",
+ "content": "You are an expert code implementation summarizer. Create structured summaries of implemented code files that preserve essential information about functions, dependencies, and implementation approaches.",
+ }
+ ]
+ openai_messages.extend(summary_messages)
+
+ # Try max_tokens and temperature first, fallback to max_completion_tokens without temperature if unsupported
+ try:
+ response = await client.chat.completions.create(
+ model=self.default_models["openai"],
+ messages=openai_messages,
+ max_tokens=8000, # Increased for multi-file support
+ temperature=0.2,
+ )
+ except Exception as e:
+ if "max_tokens" in str(e) and "max_completion_tokens" in str(e):
+ # Retry with max_completion_tokens and no temperature for models that require it
+ response = await client.chat.completions.create(
+ model=self.default_models["openai"],
+ messages=openai_messages,
+ max_completion_tokens=8000, # Increased for multi-file support
+ )
+ else:
+ raise
+
+ return {"content": response.choices[0].message.content or ""}
+
+ else:
+ raise ValueError(f"Unsupported client type: {client_type}")
+
+ def start_new_round(self, iteration: Optional[int] = None):
+ """Start a new dialogue round and reset tool results
+
+ Args:
+ iteration: Optional iteration number from workflow to sync with current_round
+ """
+ if iteration is not None:
+ # Sync with workflow iteration
+ self.current_round = iteration
+ else:
+ # Default behavior: increment round counter
+ self.current_round += 1
+ self.logger.info(f"🔄 Started new round {self.current_round}")
+
+ self.current_round_tool_results = [] # Clear previous round results
+
+ def record_tool_result(
+ self, tool_name: str, tool_input: Dict[str, Any], tool_result: Any
+ ):
+ """
+ Record tool result for current round and detect write_multiple_files calls
+
+ Args:
+ tool_name: Name of the tool called
+ tool_input: Input parameters for the tool
+ tool_result: Result returned by the tool
+ """
+ # Detect write_multiple_files calls to trigger memory clearing
+ if tool_name == "write_multiple_files":
+ self.last_write_multiple_files_detected = True
+ self.should_clear_memory_next = True
+
+ # Only record specific tools that provide essential information
+ essential_tools = [
+ "read_multiple_files", # Read multiple file contents
+ "write_multiple_files", # Write multiple file contents (important for tracking implementations)
+ "execute_python", # Execute Python code (for testing/validation)
+ "execute_bash", # Execute bash commands (for build/execution)
+ "search_code", # Search code patterns
+ "search_reference_code", # Search reference code (if available)
+ "get_file_structure", # Get file structure (for understanding project layout)
+ ]
+
+ if tool_name in essential_tools:
+ tool_record = {
+ "tool_name": tool_name,
+ "tool_input": tool_input,
+ "tool_result": tool_result,
+ "timestamp": time.time(),
+ }
+ self.current_round_tool_results.append(tool_record)
+
+ def should_use_concise_mode(self) -> bool:
+ """
+ Check if concise memory mode should be used
+
+ Returns:
+ True if first batch has been generated and concise mode should be active
+ """
+ return self.last_write_multiple_files_detected
+
+ def create_concise_messages_revise(
+ self,
+ system_prompt: str,
+ messages: List[Dict[str, Any]],
+ files_implemented: int,
+ task_description: str,
+ file_batch: List[str],
+ is_first_batch: bool = True,
+ implemented_files: List[str] = None, # Receive from workflow
+ all_files: List[str] = None, # NEW: Receive all files from workflow
+ ) -> List[Dict[str, Any]]:
+ """
+ Create concise message list for LLM input specifically for revision execution
+ ALIGNED with _execute_multi_file_batch_revision in code_evaluation_workflow
+
+ Args:
+ system_prompt: Current system prompt
+ messages: Original message list
+ files_implemented: Number of files implemented so far
+ task_description: Description of the current task
+ file_batch: Files to implement in this batch
+ is_first_batch: Whether this is the first batch (use file_batch) or subsequent
+ implemented_files: List of all implemented files (from workflow)
+ all_files: List of all files that should be implemented (from workflow)
+
+ Returns:
+ Concise message list containing only essential information for revision
+ """
+ # Use empty lists if not provided
+ if implemented_files is None:
+ implemented_files = []
+ if all_files is None:
+ all_files = []
+
+ self.logger.info(
+ "🎯 Using CONCISE memory mode for revision - Clear slate after write_multiple_files"
+ )
+
+ concise_messages = []
+
+ # Format file lists using workflow data
+ implemented_files_list = (
+ "\n".join([f"- {file}" for file in implemented_files])
+ if implemented_files
+ else "- None yet"
+ )
+
+ # Calculate unimplemented files from workflow data
+
+ # Read initial plan and memory content
+ initial_plan_content = self.initial_plan
+ memory_content = (
+ self._read_code_knowledge_base()
+ or "No previous implementation memory available"
+ )
+
+ files_to_implement = file_batch
+ file_list = "\n".join([f"- {file_path}" for file_path in files_to_implement])
+
+ # Create revision-specific task message
+ task_message = f"""Task: {task_description}
+
+ Files to implement in this batch ({len(files_to_implement)} files):
+ {file_list}
+
+ MANDATORY JSON FORMAT REQUIREMENTS:
+ 1. Use write_multiple_files tool
+ 2. Parameter name: "file_implementations"
+ 3. Value must be a VALID JSON string with ESCAPED newlines
+ 4. Use \\n for newlines, \\t for tabs, \\" for quotes
+ 5. NO literal newlines in the JSON string
+
+ CORRECT JSON FORMAT EXAMPLE:
+ {{
+ "file1.py": "# Comment\\nclass MyClass:\\n def __init__(self):\\n pass\\n",
+ "file2.py": "import os\\n\\ndef main():\\n print('Hello')\\n"
+ }}
+
+ Initial Implementation Plan Context:
+ {initial_plan_content}
+
+ Previous Implementation Memory:
+ {memory_content}
+
+ **All Previously Implemented Files:**
+ {implemented_files_list}
+
+ **Current Status:** {files_implemented} files implemented
+
+ IMPLEMENTATION REQUIREMENTS:
+ - Create functional code for each file
+ - Use proper Python syntax and imports
+ - Include docstrings and comments
+ - Follow the existing patterns from memory
+
+ Files to implement: {files_to_implement}
+
+ Call write_multiple_files NOW with PROPERLY ESCAPED JSON containing all {len(files_to_implement)} files."""
+
+ concise_messages.append({"role": "user", "content": task_message})
+
+ # Debug output for files to implement
+ print("✅ Files to implement:")
+ for file_path in files_to_implement:
+ print(f"{file_path}")
+
+ return concise_messages
+
832
+ def _calculate_message_statistics(
833
+ self, messages: List[Dict[str, Any]], label: str
834
+ ) -> Dict[str, Any]:
835
+ """
836
+ Calculate statistics for a message list
837
+
838
+ Args:
839
+ messages: List of messages to analyze
840
+ label: Label for logging
841
+
842
+ Returns:
843
+ Dictionary with statistics
844
+ """
845
+ total_chars = 0
846
+ total_words = 0
847
+
848
+ for msg in messages:
849
+ content = msg.get("content", "")
850
+ total_chars += len(content)
851
+ total_words += len(content.split())
852
+
853
+ # Estimate tokens (rough approximation: ~4 characters per token)
854
+ estimated_tokens = total_chars // 4
855
+
856
+ stats = {
857
+ "message_count": len(messages),
858
+ "total_characters": total_chars,
859
+ "total_words": total_words,
860
+ "estimated_tokens": estimated_tokens,
861
+ "summary": f"{len(messages)} msgs, {total_chars:,} chars, ~{estimated_tokens:,} tokens",
862
+ }
863
+
864
+ return stats
865
+
+ def _calculate_memory_savings(
+ self, original_stats: Dict[str, Any], optimized_stats: Dict[str, Any]
+ ) -> Dict[str, Any]:
+ """
+ Calculate memory savings between original and optimized messages
+
+ Args:
+ original_stats: Statistics for original messages
+ optimized_stats: Statistics for optimized messages
+
+ Returns:
+ Dictionary with savings calculations
+ """
+ messages_saved = (
+ original_stats["message_count"] - optimized_stats["message_count"]
+ )
+ chars_saved = (
+ original_stats["total_characters"] - optimized_stats["total_characters"]
+ )
+ tokens_saved_estimate = (
+ original_stats["estimated_tokens"] - optimized_stats["estimated_tokens"]
+ )
+
+ # Calculate percentages (avoid division by zero)
+ messages_saved_percent = (
+ messages_saved / max(original_stats["message_count"], 1)
+ ) * 100
+ chars_saved_percent = (
+ chars_saved / max(original_stats["total_characters"], 1)
+ ) * 100
+ tokens_saved_percent = (
+ tokens_saved_estimate / max(original_stats["estimated_tokens"], 1)
+ ) * 100
+
+ return {
+ "messages_saved": messages_saved,
+ "chars_saved": chars_saved,
+ "tokens_saved_estimate": tokens_saved_estimate,
+ "messages_saved_percent": messages_saved_percent,
+ "chars_saved_percent": chars_saved_percent,
+ "tokens_saved_percent": tokens_saved_percent,
+ }
+
+ def _read_code_knowledge_base(self) -> Optional[str]:
+ """
+ Read the implement_code_summary.md file as code knowledge base
+
+ Returns:
+ Full content of the summary file if it exists, None otherwise
+ """
+ try:
+ if os.path.exists(self.code_summary_path):
+ with open(self.code_summary_path, "r", encoding="utf-8") as f:
+ content = f.read().strip()
+ return content
+ else:
+ return None
+
+ except Exception as e:
+ self.logger.error(f"Failed to read code knowledge base: {e}")
+ return None
+
+ def _extract_latest_implementation_entry(self, content: str) -> Optional[str]:
+ """
+ Extract the latest/final implementation entry from the implement_code_summary.md content
+ Uses a simpler approach to find the last implementation section
+
+ Args:
+ content: Full content of implement_code_summary.md
+
+ Returns:
+ Latest implementation entry content, or None if not found
+ """
+ try:
+ import re
+
+ # Pattern to match the start of implementation sections
+ section_pattern = r"={80}\s*\n## IMPLEMENTATION File .+?"
+
+ # Find all implementation section starts
+ matches = list(re.finditer(section_pattern, content))
+
+ if not matches:
+ # No implementation sections found
+ lines = content.split("\n")
+ fallback_content = (
+ "\n".join(lines[:10]) + "\n... (truncated for brevity)"
+ if len(lines) > 10
+ else content
+ )
+ self.logger.info(
+ "📖 No implementation sections found, using fallback content"
+ )
+ return fallback_content
+
+ # Get the start position of the last implementation section
+ last_match = matches[-1]
+ start_pos = last_match.start()
+
+ # Take everything from the last section start to the end of content
+ latest_entry = content[start_pos:].strip()
+
+ return latest_entry
+
+ except Exception as e:
+ self.logger.error(f"Failed to extract latest implementation entry: {e}")
+ # Return last 500 characters as fallback
+ return content[-500:] if len(content) > 500 else content
975
+
+ def _format_tool_results(self) -> str:
+ """
+ Format current round tool results for LLM input
+
+ Returns:
+ Formatted string of tool results
+ """
+ if not self.current_round_tool_results:
+ return "No tool results in current round."
+
+ formatted_results = []
+
+ for result in self.current_round_tool_results:
+ tool_name = result["tool_name"]
+ tool_input = result["tool_input"]
+ tool_result = result["tool_result"]
+
+ # Format based on tool type
+ if tool_name == "read_multiple_files":
+ file_requests = tool_input.get("file_requests", "unknown")
+ formatted_results.append(f"""
+ **read_multiple_files Result for {file_requests}:**
+ {self._format_tool_result_content(tool_result)}
+ """)
+ elif tool_name == "write_multiple_files":
+ formatted_results.append(f"""
+ **write_multiple_files Result for batch:**
+ {self._format_tool_result_content(tool_result)}
+ """)
+ elif tool_name == "execute_python":
+ code_snippet = (
+ tool_input.get("code", "")[:50] + "..."
+ if len(tool_input.get("code", "")) > 50
+ else tool_input.get("code", "")
+ )
+ formatted_results.append(f"""
+ **execute_python Result (code: {code_snippet}):**
+ {self._format_tool_result_content(tool_result)}
+ """)
+ elif tool_name == "execute_bash":
+ command = tool_input.get("command", "unknown")
+ formatted_results.append(f"""
+ **execute_bash Result (command: {command}):**
+ {self._format_tool_result_content(tool_result)}
+ """)
+ elif tool_name == "search_code":
+ pattern = tool_input.get("pattern", "unknown")
+ file_pattern = tool_input.get("file_pattern", "")
+ formatted_results.append(f"""
+ **search_code Result (pattern: {pattern}, files: {file_pattern}):**
+ {self._format_tool_result_content(tool_result)}
+ """)
+ elif tool_name == "search_reference_code":
+ target_file = tool_input.get("target_file", "unknown")
+ keywords = tool_input.get("keywords", "")
+ formatted_results.append(f"""
+ **search_reference_code Result for {target_file} (keywords: {keywords}):**
+ {self._format_tool_result_content(tool_result)}
+ """)
+ elif tool_name == "get_file_structure":
+ directory = tool_input.get(
+ "directory_path", tool_input.get("path", "current")
+ )
+ formatted_results.append(f"""
+ **get_file_structure Result for {directory}:**
+ {self._format_tool_result_content(tool_result)}
+ """)
+
+ return "\n".join(formatted_results)
+
+ def _format_tool_result_content(self, tool_result: Any) -> str:
+ """
+ Format tool result content for display
+
+ Args:
+ tool_result: Tool result to format
+
+ Returns:
+ Formatted string representation
+ """
+ if isinstance(tool_result, str):
+ # Try to parse as JSON for better formatting
+ try:
+ result_data = json.loads(tool_result)
+ if isinstance(result_data, dict):
+ return json.dumps(result_data, indent=2)
+ else:
+ return str(result_data)
+ except json.JSONDecodeError:
+ return tool_result
+ else:
+ return str(tool_result)
1072
+
+ def get_memory_statistics(
+ self, all_files: List[str] = None, implemented_files: List[str] = None
+ ) -> Dict[str, Any]:
+ """
+ Get memory agent statistics for multi-file operations
+
+ Args:
+ all_files: List of all files that should be implemented (from workflow)
+ implemented_files: List of all implemented files (from workflow)
+ """
+ if all_files is None:
+ all_files = []
+ if implemented_files is None:
+ implemented_files = []
+
+ # Calculate unimplemented files from workflow data
+ unimplemented_files = [f for f in all_files if f not in implemented_files]
+
+ return {
+ "last_write_multiple_files_detected": self.last_write_multiple_files_detected,
+ "should_clear_memory_next": self.should_clear_memory_next,
+ "current_round": self.current_round,
+ "concise_mode_active": self.should_use_concise_mode(),
+ "current_round_tool_results": len(self.current_round_tool_results),
+ "essential_tools_recorded": [
+ r["tool_name"] for r in self.current_round_tool_results
+ ],
+ # File tracking statistics (from workflow)
+ "total_files_in_plan": len(all_files),
+ "files_implemented_count": len(implemented_files),
+ "files_remaining_count": len(unimplemented_files),
+ "all_files_list": all_files.copy(),
+ "implemented_files_list": implemented_files.copy(),
+ "unimplemented_files_list": unimplemented_files,
+ "implementation_progress_percent": (
+ len(implemented_files) / len(all_files) * 100
+ )
+ if all_files
+ else 0,
+ # Multi-file support statistics
+ "max_files_per_batch": self.max_files_per_batch,
+ "multi_file_support": True,
+ "single_file_support": False,  # Explicitly disabled
+ }
+
+ def record_multi_file_implementation(self, file_implementations: Dict[str, str]):
+ """
+ Record multi-file implementation (for compatibility with workflow)
+ NOTE: This method doesn't track files internally - workflow manages file tracking
+
+ Args:
+ file_implementations: Dictionary mapping file_path to implementation_content
+ """
+ self.logger.info(
+ f"📝 Recorded multi-file implementation batch: {len(file_implementations)} files"
+ )
+ # Note: We don't track files internally anymore - workflow handles this
+
+ # ===== ENHANCED MEMORY SYNCHRONIZATION METHODS (Phase 4+) =====
+
+ async def synchronize_revised_file_memory(
+ self,
+ client,
+ client_type: str,
+ revised_file_path: str,
+ diff_content: str,
+ new_content: str,
+ revision_type: str = "targeted_fix",
+ ) -> str:
+ """
+ Synchronize memory for a single revised file with diff information
+
+ Args:
+ client: LLM client instance
+ client_type: Type of LLM client ("anthropic" or "openai")
+ revised_file_path: Path of the revised file
+ diff_content: Unified diff showing changes made
+ new_content: Complete new content of the file
+ revision_type: Type of revision ("targeted_fix", "comprehensive_revision", etc.)
+
+ Returns:
+ Updated memory summary for the revised file
+ """
+ try:
+ self.logger.info(
+ f"🔄 Synchronizing memory for revised file: {revised_file_path}"
+ )
+
+ # Create revision-specific summary prompt
+ revision_prompt = self._create_file_revision_summary_prompt(
+ revised_file_path, diff_content, new_content, revision_type
+ )
+
+ summary_messages = [{"role": "user", "content": revision_prompt}]
+
+ # Get LLM-generated revision summary
+ llm_response = await self._call_llm_for_summary(
+ client, client_type, summary_messages
+ )
+ llm_summary = llm_response.get("content", "")
+
+ # Extract summary sections
+ revision_sections = self._extract_revision_summary_sections(llm_summary)
+
+ # Format revision summary
+ formatted_summary = self._format_file_revision_summary(
+ revised_file_path, revision_sections, diff_content, revision_type
+ )
+
+ # Save the revision summary (replace old summary)
+ await self._save_revised_file_summary(formatted_summary, revised_file_path)
+
+ self.logger.info(
+ f"✅ Memory synchronized for revised file: {revised_file_path}"
+ )
+
+ return formatted_summary
+
+ except Exception as e:
+ self.logger.error(
+ f"Failed to synchronize memory for revised file {revised_file_path}: {e}"
+ )
+
+ # Fallback to simple revision summary
+ return self._create_fallback_revision_summary(
+ revised_file_path, revision_type
+ )
+
1201
+ async def synchronize_multiple_revised_files(
+ self, client, client_type: str, revision_results: List[Dict[str, Any]]
+ ) -> Dict[str, str]:
+ """
+ Synchronize memory for multiple revised files based on revision results
+
+ Args:
+ client: LLM client instance
+ client_type: Type of LLM client
+ revision_results: List of revision results with file paths, diffs, and new content
+
+ Returns:
+ Dictionary mapping file paths to updated memory summaries
+ """
+ try:
+ self.logger.info(
+ f"🔄 Synchronizing memory for {len(revision_results)} revised files"
+ )
+
+ synchronized_summaries = {}
+
+ for revision_result in revision_results:
+ file_path = revision_result.get("file_path", "")
+ diff_content = revision_result.get("diff", "")
+ new_content = revision_result.get("new_content", "")
+ revision_type = revision_result.get("revision_type", "targeted_fix")
+
+ if file_path and revision_result.get("success", False):
+ summary = await self.synchronize_revised_file_memory(
+ client,
+ client_type,
+ file_path,
+ diff_content,
+ new_content,
+ revision_type,
+ )
+ synchronized_summaries[file_path] = summary
+ else:
+ self.logger.warning(
+ f"⚠️ Skipping memory sync for failed revision: {file_path}"
+ )
+
+ self.logger.info(
+ f"✅ Memory synchronized for {len(synchronized_summaries)} successfully revised files"
+ )
+
+ return synchronized_summaries
+
+ except Exception as e:
+ self.logger.error(
+ f"Failed to synchronize memory for multiple revised files: {e}"
+ )
+ return {}
+
+ def _create_file_revision_summary_prompt(
+ self, file_path: str, diff_content: str, new_content: str, revision_type: str
+ ) -> str:
+ """
+ Create prompt for LLM to generate file revision summary
+
+ Args:
+ file_path: Path of the revised file
+ diff_content: Unified diff showing changes
+ new_content: Complete new content of the file
+ revision_type: Type of revision performed
+
+ Returns:
+ Prompt for LLM revision summarization
+ """
+ # Truncate content if too long for prompt
+ content_preview = (
+ new_content[:2000] + "..." if len(new_content) > 2000 else new_content
+ )
+ diff_preview = (
+ diff_content[:1000] + "..." if len(diff_content) > 1000 else diff_content
+ )
+
+ prompt = f"""You are an expert code revision summarizer. A file has been REVISED with targeted changes. Create a structured summary of the revision.
+
+ **File Revised**: {file_path}
+ **Revision Type**: {revision_type}
+
+ **Changes Made (Diff):**
+ ```diff
+ {diff_preview}
+ ```
+
+ **Updated File Content:**
+ ```python
+ {content_preview}
+ ```
+
+ **Required Summary Format:**
+
+ **Revision Summary**:
+ - Brief description of what was changed and why
+
+ **Changes Made**:
+ - Specific modifications applied (line-level changes)
+ - Functions/classes affected
+ - New functionality added or bugs fixed
+
+ **Impact Assessment**:
+ - How the changes affect the file's behavior
+ - Dependencies that might be affected
+ - Integration points that need attention
+
+ **Quality Improvements**:
+ - Code quality enhancements made
+ - Error handling improvements
+ - Performance or maintainability gains
+
+ **Post-Revision Status**:
+ - Current functionality of the file
+ - Key interfaces and exports
+ - Dependencies and imports
+
+ **Instructions:**
+ - Focus on the CHANGES made, not just the final state
+ - Highlight the specific improvements and fixes applied
+ - Be concise but comprehensive about the revision impact
+ - Use the exact format specified above
+
+ **Summary:**"""
+
+ return prompt
1327
+
+ def _extract_revision_summary_sections(self, llm_summary: str) -> Dict[str, str]:
+ """
+ Extract different sections from LLM-generated revision summary
+
+ Args:
+ llm_summary: Raw LLM response containing revision summary
+
+ Returns:
+ Dictionary with extracted sections
+ """
+ sections = {
+ "revision_summary": "",
+ "changes_made": "",
+ "impact_assessment": "",
+ "quality_improvements": "",
+ "post_revision_status": "",
+ }
+
+ try:
+ lines = llm_summary.split("\n")
+ current_section = None
+ current_content = []
+
+ for line in lines:
+ line_lower = line.lower().strip()
+ original_line = line.strip()
+
+ # Skip empty lines
+ if not original_line:
+ if current_section:
+ current_content.append(line)
+ continue
+
+ # Section detection
+ section_matched = False
+
+ if "revision summary" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ sections[current_section] = "\n".join(current_content).strip()
+ current_section = "revision_summary"
+ current_content = []
+ section_matched = True
+ elif "changes made" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ sections[current_section] = "\n".join(current_content).strip()
+ current_section = "changes_made"
+ current_content = []
+ section_matched = True
+ elif "impact assessment" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ sections[current_section] = "\n".join(current_content).strip()
+ current_section = "impact_assessment"
+ current_content = []
+ section_matched = True
+ elif "quality improvements" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ sections[current_section] = "\n".join(current_content).strip()
+ current_section = "quality_improvements"
+ current_content = []
+ section_matched = True
+ elif "post-revision status" in line_lower and "**" in original_line:
+ if current_section and current_content:
+ sections[current_section] = "\n".join(current_content).strip()
+ current_section = "post_revision_status"
+ current_content = []
+ section_matched = True
+
+ # If no section header matched, add to current content
+ if not section_matched and current_section:
+ current_content.append(line)
+
+ # Save the final section
+ if current_section and current_content:
+ sections[current_section] = "\n".join(current_content).strip()
+
+ self.logger.info(
+ f"📋 Extracted {len([s for s in sections.values() if s])} revision summary sections"
+ )
+
+ except Exception as e:
+ self.logger.error(f"Failed to extract revision summary sections: {e}")
+ # Provide fallback content
+ sections["revision_summary"] = "File revision completed"
+ sections["changes_made"] = (
+ "Targeted changes applied based on error analysis"
+ )
+ sections["impact_assessment"] = (
+ "Changes should improve code functionality and reduce errors"
+ )
+ sections["quality_improvements"] = (
+ "Code quality enhanced through targeted fixes"
+ )
+ sections["post_revision_status"] = "File functionality updated and improved"
+
+ return sections
1423
+
+ def _format_file_revision_summary(
+ self,
+ file_path: str,
+ revision_sections: Dict[str, str],
+ diff_content: str,
+ revision_type: str,
+ ) -> str:
+ """
+ Format the revision summary into the final structure
+
+ Args:
+ file_path: Path of the revised file
+ revision_sections: Extracted sections from LLM summary
+ diff_content: Unified diff content
+ revision_type: Type of revision performed
+
+ Returns:
+ Formatted revision summary
+ """
+ timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+ # Format sections with fallbacks
+ revision_summary = revision_sections.get(
+ "revision_summary", "File revision completed"
+ )
+ changes_made = revision_sections.get("changes_made", "Targeted changes applied")
+ impact_assessment = revision_sections.get(
+ "impact_assessment", "Changes should improve functionality"
+ )
+ quality_improvements = revision_sections.get(
+ "quality_improvements", "Code quality enhanced"
+ )
+ post_revision_status = revision_sections.get(
+ "post_revision_status", "File updated successfully"
+ )
+
+ formatted_summary = f"""# File Revision Summary (UPDATED)
+ **Generated**: {timestamp}
+ **File Revised**: {file_path}
+ **Revision Type**: {revision_type}
+
+ ## Revision Summary
+ {revision_summary}
+
+ ## Changes Made
+ {changes_made}
+
+ ## Impact Assessment
+ {impact_assessment}
+
+ ## Quality Improvements
+ {quality_improvements}
+
+ ## Post-Revision Status
+ {post_revision_status}
+
+ ## Technical Details
+ **Diff Applied:**
+ ```diff
+ {diff_content[:500]}{"..." if len(diff_content) > 500 else ""}
+ ```
+
+ ---
+ *Auto-generated by Enhanced Memory Agent (Revision Mode)*
+ """
+ return formatted_summary
+
+ def _create_fallback_revision_summary(
+ self, file_path: str, revision_type: str
+ ) -> str:
+ """
+ Create fallback revision summary when LLM is unavailable
+
+ Args:
+ file_path: Path of the revised file
+ revision_type: Type of revision performed
+
+ Returns:
+ Fallback revision summary
+ """
+ timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+ fallback_summary = f"""# File Revision Summary (UPDATED)
+ **Generated**: {timestamp}
+ **File Revised**: {file_path}
+ **Revision Type**: {revision_type}
+
+ ## Revision Summary
+ File has been revised with targeted changes. LLM summary generation failed.
+
+ ## Changes Made
+ - Targeted modifications applied based on error analysis
+ - Specific line-level changes implemented
+ - Code functionality updated
+
+ ## Impact Assessment
+ - File behavior should be improved
+ - Error conditions addressed
+ - Integration points maintained
+
+ ## Quality Improvements
+ - Code quality enhanced through precise fixes
+ - Error handling improved
+ - Maintainability increased
+
+ ## Post-Revision Status
+ - File successfully updated
+ - Functionality preserved and enhanced
+ - Ready for integration testing
+
+ ---
+ *Auto-generated by Enhanced Memory Agent (Revision Fallback Mode)*
+ """
+ return fallback_summary
1538
+
+ async def _save_revised_file_summary(self, revision_summary: str, file_path: str):
+ """
+ Save or update the revision summary for a file (replaces old summary)
+
+ Args:
+ revision_summary: New revision summary content
+ file_path: Path of the file for which the summary was generated
+ """
+ try:
+ # For revised files, we replace the existing summary rather than append
+ # Read existing content to find and replace the specific file's summary
+ file_exists = os.path.exists(self.code_summary_path)
+
+ if file_exists:
+ with open(self.code_summary_path, "r", encoding="utf-8") as f:
+ existing_content = f.read()
+
+ # Look for existing summary for this file and replace it
+ import re
+
+ # Pattern to match existing implementation section for this file
+ # (braces doubled so the f-string emits the literal regex quantifier {80})
+ file_pattern = re.escape(file_path)
+ section_pattern = rf"={{80}}\s*\n## IMPLEMENTATION File {file_pattern}\n={{80}}.*?(?=\n={{80}}|\Z)"
+
+ # Check if this file already has a summary
+ if re.search(section_pattern, existing_content, re.DOTALL):
+ # Replace existing summary
+ new_section = f"\n{'=' * 80}\n## IMPLEMENTATION File {file_path} (REVISED)\n{'=' * 80}\n\n{revision_summary}\n\n"
+ updated_content = re.sub(
+ section_pattern,
+ lambda _match: new_section.strip(),  # callable repl avoids backslash-escape issues
+ existing_content,
+ flags=re.DOTALL,
+ )
+
+ with open(self.code_summary_path, "w", encoding="utf-8") as f:
+ f.write(updated_content)
+
+ self.logger.info(
+ f"Updated existing summary for revised file: {file_path}"
+ )
+ else:
+ # Append new summary for this file
+ with open(self.code_summary_path, "a", encoding="utf-8") as f:
+ f.write("\n" + "=" * 80 + "\n")
+ f.write(f"## IMPLEMENTATION File {file_path} (REVISED)\n")
+ f.write("=" * 80 + "\n\n")
+ f.write(revision_summary)
+ f.write("\n\n")
+
+ self.logger.info(
+ f"Appended new summary for revised file: {file_path}"
+ )
+ else:
+ # Create new file with header
+ os.makedirs(os.path.dirname(self.code_summary_path), exist_ok=True)
+
+ with open(self.code_summary_path, "w", encoding="utf-8") as f:
+ f.write("# Code Implementation Progress Summary\n")
+ f.write("*Accumulated implementation progress for all files*\n\n")
+ f.write("\n" + "=" * 80 + "\n")
+ f.write(f"## IMPLEMENTATION File {file_path} (REVISED)\n")
+ f.write("=" * 80 + "\n\n")
+ f.write(revision_summary)
+ f.write("\n\n")
+
+ self.logger.info(
+ f"Created new summary file with revised file: {file_path}"
+ )
+
+ except Exception as e:
+ self.logger.error(
+ f"Failed to save revised file summary for {file_path}: {e}"
+ )
+
+ def get_revision_memory_statistics(
+ self, revised_files: List[str]
+ ) -> Dict[str, Any]:
+ """
+ Get memory statistics for revised files
+
+ Args:
+ revised_files: List of file paths that have been revised
+
+ Returns:
+ Dictionary with revision memory statistics
+ """
+ try:
+ total_revisions = len(revised_files)
+
+ # Count how many files have updated summaries
+ summaries_updated = 0
+ if os.path.exists(self.code_summary_path):
+ with open(self.code_summary_path, "r", encoding="utf-8") as f:
+ content = f.read()
+
+ for file_path in revised_files:
+ if f"File {file_path} (REVISED)" in content:
+ summaries_updated += 1
+
+ return {
+ "total_revised_files": total_revisions,
+ "summaries_updated": summaries_updated,
+ "memory_sync_rate": (summaries_updated / total_revisions * 100)
+ if total_revisions > 0
+ else 0,
+ "revised_files_list": revised_files.copy(),
+ "memory_summary_path": self.code_summary_path,
+ "revision_memory_mode": "active",
+ }
+
+ except Exception as e:
+ self.logger.error(f"Failed to get revision memory statistics: {e}")
+ return {
+ "total_revised_files": len(revised_files),
+ "summaries_updated": 0,
+ "memory_sync_rate": 0,
+ "revised_files_list": revised_files.copy(),
+ "memory_summary_path": self.code_summary_path,
+ "revision_memory_mode": "error",
+ }
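The `_calculate_message_statistics` and `_calculate_memory_savings` helpers above rely on a rough heuristic of ~4 characters per token. A minimal standalone sketch of that estimate (the function name and sample messages are illustrative, not part of the class):

```python
def estimate_tokens(messages):
    """Rough token estimate: ~4 characters per token (a heuristic, not a tokenizer)."""
    total_chars = sum(len(m.get("content", "")) for m in messages)
    return total_chars // 4

# Savings are just the difference between the two estimates.
original = [{"role": "user", "content": "x" * 800}]
optimized = [{"role": "user", "content": "x" * 200}]
saved = estimate_tokens(original) - estimate_tokens(optimized)
print(saved)  # 150
```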
projects/ui/DeepCode/workflows/codebase_index_workflow.py ADDED
@@ -0,0 +1,732 @@
+ """
+ Codebase Index Workflow
+
+ This workflow integrates the functionality of run_indexer.py and code_indexer.py
+ to build intelligent relationships between existing codebase and target structure.
+
+ Features:
+ - Extract target file structure from initial_plan.txt
+ - Analyze codebase and build indexes
+ - Generate relationship mappings and statistical reports
+ - Provide reference basis for code reproduction
+ """
+
+ import asyncio
+ import json
+ import logging
+ import os
+ import re
+ import sys
+ from pathlib import Path
+ from typing import Dict, Any, Optional
+ import yaml
+
+ # Add tools directory to path
+ sys.path.append(str(Path(__file__).parent.parent / "tools"))
+
+ from tools.code_indexer import CodeIndexer
+
+
+ class CodebaseIndexWorkflow:
+ """Codebase Index Workflow Class"""
+
+ def __init__(self, logger=None):
+ """
+ Initialize workflow
+
+ Args:
+ logger: Logger instance
+ """
+ self.logger = logger or self._setup_default_logger()
+ self.indexer = None
+
+ def _setup_default_logger(self) -> logging.Logger:
+ """Setup default logger"""
+ logger = logging.getLogger("CodebaseIndexWorkflow")
+ logger.setLevel(logging.INFO)
+
+ if not logger.handlers:
+ handler = logging.StreamHandler()
+ formatter = logging.Formatter(
+ "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
+ )
+ handler.setFormatter(formatter)
+ logger.addHandler(handler)
+
+ return logger
+
+ def extract_file_tree_from_plan(self, plan_content: str) -> Optional[str]:
+ """
+ Extract file tree structure from initial_plan.txt content
+
+ Args:
+ plan_content: Content of the initial_plan.txt file
+
+ Returns:
+ Extracted file tree structure as string
+ """
+ # Look for file structure section, specifically "## File Structure" format
+ file_structure_pattern = r"## File Structure[^\n]*\n```[^\n]*\n(.*?)\n```"
+
+ match = re.search(file_structure_pattern, plan_content, re.DOTALL)
+ if match:
+ file_tree = match.group(1).strip()
+ lines = file_tree.split("\n")
+
+ # Clean tree structure - remove empty lines and comments not part of structure
+ cleaned_lines = []
+ for line in lines:
+ # Keep tree structure lines
+ if line.strip() and (
+ any(char in line for char in ["├──", "└──", "│"])
+ or line.strip().endswith("/")
+ or "." in line.split("/")[-1]  # has file extension
+ or line.strip().endswith(".py")
+ or line.strip().endswith(".txt")
+ or line.strip().endswith(".md")
+ or line.strip().endswith(".yaml")
+ ):
+ cleaned_lines.append(line)
+
+ if len(cleaned_lines) >= 5:
+ file_tree = "\n".join(cleaned_lines)
+ self.logger.info(
+ f"📊 Extracted file tree structure from ## File Structure section ({len(cleaned_lines)} lines)"
+ )
+ return file_tree
+
+ # Fallback: look for any code block containing project structure
+ code_block_patterns = [
+ r"```[^\n]*\n(project/.*?(?:├──|└──).*?)\n```",
101
+ r"```[^\n]*\n(src/.*?(?:├──|└──).*?)\n```",
102
+ r"```[^\n]*\n(core/.*?(?:├──|└──).*?)\n```",
103
+ r"```[^\n]*\n(.*?(?:├──|└──).*?(?:\.py|\.txt|\.md|\.yaml).*?)\n```",
104
+ ]
105
+
106
+ for pattern in code_block_patterns:
107
+ match = re.search(pattern, plan_content, re.DOTALL)
108
+ if match:
109
+ file_tree = match.group(1).strip()
110
+ lines = [line for line in file_tree.split("\n") if line.strip()]
111
+ if len(lines) >= 5:
112
+ self.logger.info(
113
+ f"📊 Extracted file tree structure from code block ({len(lines)} lines)"
114
+ )
115
+ return file_tree
116
+
117
+ # Final fallback: extract file paths from file mentions and create basic structure
118
+ self.logger.warning(
119
+ "⚠️ No standard file tree found, trying to extract from file mentions..."
120
+ )
121
+
122
+ # Search for file paths in backticks throughout the document
123
+ file_mentions = re.findall(
124
+ r"`([^`]*(?:\.py|\.txt|\.md|\.yaml|\.yml)[^`]*)`", plan_content
125
+ )
126
+
127
+ if file_mentions:
128
+ # Organize files into directory structure
129
+ dirs = set()
130
+ files_by_dir = {}
131
+
132
+ for file_path in file_mentions:
133
+ file_path = file_path.strip()
134
+ if "/" in file_path:
135
+ dir_path = "/".join(file_path.split("/")[:-1])
136
+ filename = file_path.split("/")[-1]
137
+ dirs.add(dir_path)
138
+ if dir_path not in files_by_dir:
139
+ files_by_dir[dir_path] = []
140
+ files_by_dir[dir_path].append(filename)
141
+ else:
142
+ if "root" not in files_by_dir:
143
+ files_by_dir["root"] = []
144
+ files_by_dir["root"].append(file_path)
145
+
146
+ # Create tree structure
147
+ structure_lines = []
148
+
149
+ # Determine root directory name from common patterns
150
+ if any("src/" in f for f in file_mentions):
151
+ root_name = "src"
152
+ elif any("core/" in f for f in file_mentions):
153
+ root_name = "core"
154
+ elif any("lib/" in f for f in file_mentions):
155
+ root_name = "lib"
156
+ else:
157
+ root_name = "project"
158
+ structure_lines.append(f"{root_name}/")
159
+
160
+ # Add directories and files
161
+ sorted_dirs = sorted(dirs) if dirs else []
162
+ for i, dir_path in enumerate(sorted_dirs):
163
+ is_last_dir = i == len(sorted_dirs) - 1
164
+ prefix = "└──" if is_last_dir else "├──"
165
+ structure_lines.append(f"{prefix} {dir_path}/")
166
+
167
+ if dir_path in files_by_dir:
168
+ files = sorted(files_by_dir[dir_path])
169
+ for j, filename in enumerate(files):
170
+ is_last_file = j == len(files) - 1
171
+ if is_last_dir:
172
+ file_prefix = " └──" if is_last_file else " ├──"
173
+ else:
174
+ file_prefix = "│ └──" if is_last_file else "│ ├──"
175
+ structure_lines.append(f"{file_prefix} {filename}")
176
+
177
+ # Add root files (if any)
178
+ if "root" in files_by_dir:
179
+ root_files = sorted(files_by_dir["root"])
180
+ for i, filename in enumerate(root_files):
181
+ is_last = i == len(root_files) - 1  # root files are appended last
182
+ prefix = "└──" if is_last else "├──"
183
+ structure_lines.append(f"{prefix} {filename}")
184
+
185
+ if len(structure_lines) >= 3:
186
+ file_tree = "\n".join(structure_lines)
187
+ self.logger.info(
188
+ f"📊 Generated file tree from file mentions ({len(structure_lines)} lines)"
189
+ )
190
+ return file_tree
191
+
192
+ # If no file tree found, return None
193
+ self.logger.warning("⚠️ No file tree structure found in initial plan")
194
+ return None
195
+
196
+ def load_target_structure_from_plan(self, plan_path: str) -> str:
197
+ """
198
+ Load target structure from initial_plan.txt and extract file tree
199
+
200
+ Args:
201
+ plan_path: Path to initial_plan.txt file
202
+
203
+ Returns:
204
+ Extracted file tree structure
205
+ """
206
+ try:
207
+ # Load complete plan content
208
+ with open(plan_path, "r", encoding="utf-8") as f:
209
+ plan_content = f.read()
210
+
211
+ self.logger.info(f"📄 Loaded initial plan ({len(plan_content)} characters)")
212
+
213
+ # Extract file tree structure
214
+ file_tree = self.extract_file_tree_from_plan(plan_content)
215
+
216
+ if file_tree:
217
+ self.logger.info(
218
+ "✅ Successfully extracted file tree from initial plan"
219
+ )
220
+ self.logger.info("📋 Extracted structure preview:")
221
+ # Show first few lines of extracted tree
222
+ tree_lines = file_tree.split("\n")
223
+ for line in tree_lines[:8]:
224
+ self.logger.info(f" {line}")
225
+ if len(tree_lines) > 8:
226
+ self.logger.info(
227
+ f" ... {len(tree_lines) - 8} more lines"
228
+ )
229
+ return file_tree
230
+ else:
231
+ self.logger.warning("⚠️ Unable to extract file tree from initial plan")
232
+ self.logger.info("🔄 Falling back to default target structure")
233
+ return self.get_default_target_structure()
234
+
235
+ except Exception as e:
236
+ self.logger.error(f"❌ Failed to load initial plan file {plan_path}: {e}")
237
+ self.logger.info("🔄 Falling back to default target structure")
238
+ return self.get_default_target_structure()
239
+
240
+ def get_default_target_structure(self) -> str:
241
+ """Get default target structure"""
242
+ return """
243
+ project/
244
+ ├── src/
245
+ │ ├── core/
246
+ │ │ ├── gcn.py # GCN encoder
247
+ │ │ ├── diffusion.py # forward/reverse processes
248
+ │ │ ├── denoiser.py # denoising MLP
249
+ │ │ └── fusion.py # fusion combiner
250
+ │ ├── models/ # model wrapper classes
251
+ │ │ └── recdiff.py
252
+ │ ├── utils/
253
+ │ │ ├── data.py # loading & preprocessing
254
+ │ │ ├── predictor.py # scoring functions
255
+ │ │ ├── loss.py # loss functions
256
+ │ │ ├── metrics.py # NDCG, Recall etc.
257
+ │ │ └── sched.py # beta/alpha schedule utils
258
+ │ └── configs/
259
+ │ └── default.yaml # hyperparameters, paths
260
+ ├── tests/
261
+ │ ├── test_gcn.py
262
+ │ ├── test_diffusion.py
263
+ │ ├── test_denoiser.py
264
+ │ ├── test_loss.py
265
+ │ └── test_pipeline.py
266
+ ├── docs/
267
+ │ ├── architecture.md
268
+ │ ├── api_reference.md
269
+ │ └── README.md
270
+ ├── experiments/
271
+ │ ├── run_experiment.py
272
+ │ └── notebooks/
273
+ │ └── analysis.ipynb
274
+ ├── requirements.txt
275
+ └── setup.py
276
+ """
277
+
278
+ def load_or_create_indexer_config(self, paper_dir: str) -> Dict[str, Any]:
279
+ """
280
+ Load or create indexer configuration
281
+
282
+ Args:
283
+ paper_dir: Paper directory path
284
+
285
+ Returns:
286
+ Configuration dictionary
287
+ """
288
+ # Try to load existing configuration file
289
+ config_path = Path(__file__).parent.parent / "tools" / "indexer_config.yaml"
290
+
291
+ try:
292
+ if config_path.exists():
293
+ with open(config_path, "r", encoding="utf-8") as f:
294
+ config = yaml.safe_load(f)
295
+
296
+ # Update path configuration to current paper directory
297
+ if "paths" not in config:
298
+ config["paths"] = {}
299
+ config["paths"]["code_base_path"] = os.path.join(paper_dir, "code_base")
300
+ config["paths"]["output_dir"] = os.path.join(paper_dir, "indexes")
301
+
302
+ # Adjust performance settings for workflow
303
+ if "performance" in config:
304
+ config["performance"]["enable_concurrent_analysis"] = (
305
+ False # Disable concurrency to avoid API limits
306
+ )
307
+ if "debug" in config:
308
+ config["debug"]["verbose_output"] = True # Enable verbose output
309
+ if "llm" in config:
310
+ config["llm"]["request_delay"] = 0.5 # Increase request delay
311
+
312
+ self.logger.info(f"Loaded configuration file: {config_path}")
313
+ return config
314
+
315
+ except Exception as e:
316
+ self.logger.warning(f"Failed to load configuration file: {e}")
317
+
318
+ # If loading fails, use default configuration
319
+ self.logger.info("Using default configuration")
320
+ default_config = {
321
+ "paths": {
322
+ "code_base_path": os.path.join(paper_dir, "code_base"),
323
+ "output_dir": os.path.join(paper_dir, "indexes"),
324
+ },
325
+ "llm": {
326
+ "model_provider": "anthropic",
327
+ "max_tokens": 4000,
328
+ "temperature": 0.3,
329
+ "request_delay": 0.5, # Increase request delay
330
+ "max_retries": 3,
331
+ "retry_delay": 1.0,
332
+ },
333
+ "file_analysis": {
334
+ "max_file_size": 1048576, # 1MB
335
+ "max_content_length": 3000,
336
+ "supported_extensions": [
337
+ ".py",
338
+ ".js",
339
+ ".ts",
340
+ ".java",
341
+ ".cpp",
342
+ ".c",
343
+ ".h",
344
+ ".hpp",
345
+ ".cs",
346
+ ".php",
347
+ ".rb",
348
+ ".go",
349
+ ".rs",
350
+ ".scala",
351
+ ".kt",
352
+ ".yaml",
353
+ ".yml",
354
+ ".json",
355
+ ".xml",
356
+ ".toml",
357
+ ".md",
358
+ ".txt",
359
+ ],
360
+ "skip_directories": [
361
+ "__pycache__",
362
+ "node_modules",
363
+ "target",
364
+ "build",
365
+ "dist",
366
+ "venv",
367
+ "env",
368
+ ".git",
369
+ ".svn",
370
+ "data",
371
+ "datasets",
372
+ ],
373
+ },
374
+ "relationships": {
375
+ "min_confidence_score": 0.3,
376
+ "high_confidence_threshold": 0.7,
377
+ "relationship_types": {
378
+ "direct_match": 1.0,
379
+ "partial_match": 0.8,
380
+ "reference": 0.6,
381
+ "utility": 0.4,
382
+ },
383
+ },
384
+ "performance": {
385
+ "enable_concurrent_analysis": False, # Disable concurrency to avoid API limits
386
+ "max_concurrent_files": 3,
387
+ "enable_content_caching": True,
388
+ "max_cache_size": 100,
389
+ },
390
+ "debug": {
391
+ "verbose_output": True,
392
+ "save_raw_responses": False,
393
+ "mock_llm_responses": False,
394
+ },
395
+ "output": {
396
+ "generate_summary": True,
397
+ "generate_statistics": True,
398
+ "include_metadata": True,
399
+ "json_indent": 2,
400
+ },
401
+ "logging": {"level": "INFO", "log_to_file": False},
402
+ }
403
+
404
+ return default_config
405
+
406
+ async def run_indexing_workflow(
407
+ self,
408
+ paper_dir: str,
409
+ initial_plan_path: Optional[str] = None,
410
+ config_path: str = "mcp_agent.secrets.yaml",
411
+ ) -> Dict[str, Any]:
412
+ """
413
+ Run the complete code indexing workflow
414
+
415
+ Args:
416
+ paper_dir: Paper directory path
417
+ initial_plan_path: Initial plan file path (optional)
418
+ config_path: API configuration file path
419
+
420
+ Returns:
421
+ Index result dictionary
422
+ """
423
+ try:
424
+ self.logger.info("🚀 Starting codebase index workflow...")
425
+
426
+ # Step 1: Determine initial plan file path
427
+ if not initial_plan_path:
428
+ initial_plan_path = os.path.join(paper_dir, "initial_plan.txt")
429
+
430
+ # Step 2: Load target structure
431
+ if os.path.exists(initial_plan_path):
432
+ self.logger.info(
433
+ f"📐 Loading target structure from {initial_plan_path}"
434
+ )
435
+ target_structure = self.load_target_structure_from_plan(
436
+ initial_plan_path
437
+ )
438
+ else:
439
+ self.logger.warning(
440
+ f"⚠️ Initial plan file does not exist: {initial_plan_path}"
441
+ )
442
+ self.logger.info("📐 Using default target structure")
443
+ target_structure = self.get_default_target_structure()
444
+
445
+ # Step 3: Check codebase path
446
+ code_base_path = os.path.join(paper_dir, "code_base")
447
+ if not os.path.exists(code_base_path):
448
+ self.logger.error(f"❌ Codebase path does not exist: {code_base_path}")
449
+ return {
450
+ "status": "error",
451
+ "message": f"Code base path does not exist: {code_base_path}",
452
+ "output_files": {},
453
+ }
454
+
455
+ # Step 4: Create output directory
456
+ output_dir = os.path.join(paper_dir, "indexes")
457
+ os.makedirs(output_dir, exist_ok=True)
458
+
459
+ # Step 5: Load configuration
460
+ indexer_config = self.load_or_create_indexer_config(paper_dir)
461
+
462
+ self.logger.info(f"📁 Codebase path: {code_base_path}")
463
+ self.logger.info(f"📤 Output directory: {output_dir}")
464
+
465
+ # Step 6: Create code indexer
466
+ self.indexer = CodeIndexer(
467
+ code_base_path=code_base_path,
468
+ target_structure=target_structure,
469
+ output_dir=output_dir,
470
+ config_path=config_path,
471
+ enable_pre_filtering=True,
472
+ )
473
+
474
+ # Apply configuration settings
475
+ self.indexer.indexer_config = indexer_config
476
+
477
+ # Directly set configuration attributes to indexer
478
+ if "file_analysis" in indexer_config:
479
+ file_config = indexer_config["file_analysis"]
480
+ self.indexer.supported_extensions = set(
481
+ file_config.get(
482
+ "supported_extensions", self.indexer.supported_extensions
483
+ )
484
+ )
485
+ self.indexer.skip_directories = set(
486
+ file_config.get("skip_directories", self.indexer.skip_directories)
487
+ )
488
+ self.indexer.max_file_size = file_config.get(
489
+ "max_file_size", self.indexer.max_file_size
490
+ )
491
+ self.indexer.max_content_length = file_config.get(
492
+ "max_content_length", self.indexer.max_content_length
493
+ )
494
+
495
+ if "llm" in indexer_config:
496
+ llm_config = indexer_config["llm"]
497
+ self.indexer.model_provider = llm_config.get(
498
+ "model_provider", self.indexer.model_provider
499
+ )
500
+ self.indexer.llm_max_tokens = llm_config.get(
501
+ "max_tokens", self.indexer.llm_max_tokens
502
+ )
503
+ self.indexer.llm_temperature = llm_config.get(
504
+ "temperature", self.indexer.llm_temperature
505
+ )
506
+ self.indexer.request_delay = llm_config.get(
507
+ "request_delay", self.indexer.request_delay
508
+ )
509
+ self.indexer.max_retries = llm_config.get(
510
+ "max_retries", self.indexer.max_retries
511
+ )
512
+ self.indexer.retry_delay = llm_config.get(
513
+ "retry_delay", self.indexer.retry_delay
514
+ )
515
+
516
+ if "relationships" in indexer_config:
517
+ rel_config = indexer_config["relationships"]
518
+ self.indexer.min_confidence_score = rel_config.get(
519
+ "min_confidence_score", self.indexer.min_confidence_score
520
+ )
521
+ self.indexer.high_confidence_threshold = rel_config.get(
522
+ "high_confidence_threshold", self.indexer.high_confidence_threshold
523
+ )
524
+ self.indexer.relationship_types = rel_config.get(
525
+ "relationship_types", self.indexer.relationship_types
526
+ )
527
+
528
+ if "performance" in indexer_config:
529
+ perf_config = indexer_config["performance"]
530
+ self.indexer.enable_concurrent_analysis = perf_config.get(
531
+ "enable_concurrent_analysis",
532
+ self.indexer.enable_concurrent_analysis,
533
+ )
534
+ self.indexer.max_concurrent_files = perf_config.get(
535
+ "max_concurrent_files", self.indexer.max_concurrent_files
536
+ )
537
+ self.indexer.enable_content_caching = perf_config.get(
538
+ "enable_content_caching", self.indexer.enable_content_caching
539
+ )
540
+ self.indexer.max_cache_size = perf_config.get(
541
+ "max_cache_size", self.indexer.max_cache_size
542
+ )
543
+
544
+ if "debug" in indexer_config:
545
+ debug_config = indexer_config["debug"]
546
+ self.indexer.verbose_output = debug_config.get(
547
+ "verbose_output", self.indexer.verbose_output
548
+ )
549
+ self.indexer.save_raw_responses = debug_config.get(
550
+ "save_raw_responses", self.indexer.save_raw_responses
551
+ )
552
+ self.indexer.mock_llm_responses = debug_config.get(
553
+ "mock_llm_responses", self.indexer.mock_llm_responses
554
+ )
555
+
556
+ if "output" in indexer_config:
557
+ output_config = indexer_config["output"]
558
+ self.indexer.generate_summary = output_config.get(
559
+ "generate_summary", self.indexer.generate_summary
560
+ )
561
+ self.indexer.generate_statistics = output_config.get(
562
+ "generate_statistics", self.indexer.generate_statistics
563
+ )
564
+ self.indexer.include_metadata = output_config.get(
565
+ "include_metadata", self.indexer.include_metadata
566
+ )
567
+
568
+ self.logger.info("🔧 Indexer configuration completed")
569
+ self.logger.info(f"🤖 Model provider: {self.indexer.model_provider}")
570
+ self.logger.info(
571
+ f"⚡ Concurrent analysis: {'Enabled' if self.indexer.enable_concurrent_analysis else 'Disabled'}"
572
+ )
573
+ self.logger.info(
574
+ f"🗄️ Content caching: {'Enabled' if self.indexer.enable_content_caching else 'Disabled'}"
575
+ )
576
+ self.logger.info(
577
+ f"🔍 Pre-filtering: {'Enabled' if self.indexer.enable_pre_filtering else 'Disabled'}"
578
+ )
579
+
580
+ self.logger.info("=" * 60)
581
+ self.logger.info("🚀 Starting code indexing process...")
582
+
583
+ # Step 7: Build all indexes
584
+ output_files = await self.indexer.build_all_indexes()
585
+
586
+ # Step 8: Generate summary report
587
+ if output_files:
588
+ summary_report = self.indexer.generate_summary_report(output_files)
589
+
590
+ self.logger.info("=" * 60)
591
+ self.logger.info("✅ Indexing completed successfully!")
592
+ self.logger.info(f"📊 Processed {len(output_files)} repositories")
593
+ self.logger.info("📁 Generated index files:")
594
+ for repo_name, file_path in output_files.items():
595
+ self.logger.info(f" 📄 {repo_name}: {file_path}")
596
+ self.logger.info(f"📋 Summary report: {summary_report}")
597
+
598
+ # Statistics (if enabled)
599
+ if self.indexer.generate_statistics:
600
+ self.logger.info("\n📈 Processing statistics:")
601
+ total_relationships = 0
602
+ high_confidence_relationships = 0
603
+
604
+ for file_path in output_files.values():
605
+ try:
606
+ with open(file_path, "r", encoding="utf-8") as f:
607
+ index_data = json.load(f)
608
+ relationships = index_data.get("relationships", [])
609
+ total_relationships += len(relationships)
610
+ high_confidence_relationships += len(
611
+ [
612
+ r
613
+ for r in relationships
614
+ if r.get("confidence_score", 0)
615
+ > self.indexer.high_confidence_threshold
616
+ ]
617
+ )
618
+ except Exception as e:
619
+ self.logger.warning(
620
+ f" ⚠️ Unable to load statistics from {file_path}: {e}"
621
+ )
622
+
623
+ self.logger.info(
624
+ f" 🔗 Total relationships found: {total_relationships}"
625
+ )
626
+ self.logger.info(
627
+ f" ⭐ High confidence relationships: {high_confidence_relationships}"
628
+ )
629
+ self.logger.info(
630
+ f" 📊 Average relationships per repository: {total_relationships / len(output_files) if output_files else 0:.1f}"
631
+ )
632
+
633
+ self.logger.info("\n🎉 Code indexing process completed successfully!")
634
+
635
+ return {
636
+ "status": "success",
637
+ "message": f"Successfully indexed {len(output_files)} repositories",
638
+ "output_files": output_files,
639
+ "summary_report": summary_report,
640
+ "statistics": {
641
+ "total_repositories": len(output_files),
642
+ "total_relationships": total_relationships,
643
+ "high_confidence_relationships": high_confidence_relationships,
644
+ }
645
+ if self.indexer.generate_statistics
646
+ else None,
647
+ }
648
+ else:
649
+ self.logger.warning("⚠️ No index files generated")
650
+ return {
651
+ "status": "warning",
652
+ "message": "No index files were generated",
653
+ "output_files": {},
654
+ }
655
+
656
+ except Exception as e:
657
+ self.logger.error(f"❌ Index workflow failed: {e}")
658
+ # Log the full traceback for easier debugging
659
+ import traceback
660
+
661
+ self.logger.error(f"Detailed error information: {traceback.format_exc()}")
662
+ return {"status": "error", "message": str(e), "output_files": {}}
663
+
664
+ def print_banner(self):
665
+ """Print application banner"""
666
+ banner = """
667
+ ╔═══════════════════════════════════════════════════════════════════════╗
668
+ ║ 🔍 Codebase Index Workflow v1.0 ║
669
+ ║ Intelligent Code Relationship Analysis Tool ║
670
+ ╠═══════════════════════════════════════════════════════════════════════╣
671
+ ║ 📁 Analyzes existing codebases ║
672
+ ║ 🔗 Builds intelligent relationships with target structure ║
673
+ ║ 🤖 Powered by LLM analysis ║
674
+ ║ 📊 Generates detailed JSON indexes ║
675
+ ║ 🎯 Provides reference for code reproduction ║
676
+ ╚═══════════════════════════════════════════════════════════════════════╝
677
+ """
678
+ print(banner)
679
+
680
+
681
+ # Convenience function for direct workflow invocation
682
+ async def run_codebase_indexing(
683
+ paper_dir: str,
684
+ initial_plan_path: Optional[str] = None,
685
+ config_path: str = "mcp_agent.secrets.yaml",
686
+ logger=None,
687
+ ) -> Dict[str, Any]:
688
+ """
689
+ Convenience function to run codebase indexing
690
+
691
+ Args:
692
+ paper_dir: Paper directory path
693
+ initial_plan_path: Initial plan file path (optional)
694
+ config_path: API configuration file path
695
+ logger: Logger instance (optional)
696
+
697
+ Returns:
698
+ Index result dictionary
699
+ """
700
+ workflow = CodebaseIndexWorkflow(logger=logger)
701
+ workflow.print_banner()
702
+
703
+ return await workflow.run_indexing_workflow(
704
+ paper_dir=paper_dir,
705
+ initial_plan_path=initial_plan_path,
706
+ config_path=config_path,
707
+ )
708
+
709
+
710
+ # Main function for testing
711
+ async def main():
712
+ """Main function for testing workflow"""
713
+ import logging
714
+
715
+ # Setup logging
716
+ logging.basicConfig(level=logging.INFO)
717
+ logger = logging.getLogger(__name__)
718
+
719
+ # Test parameters
720
+ paper_dir = "./deepcode_lab/papers/1"
721
+ initial_plan_path = os.path.join(paper_dir, "initial_plan.txt")
722
+
723
+ # Run workflow
724
+ result = await run_codebase_indexing(
725
+ paper_dir=paper_dir, initial_plan_path=initial_plan_path, logger=logger
726
+ )
727
+
728
+ logger.info(f"Index result: {result}")
729
+
730
+
731
+ if __name__ == "__main__":
732
+ asyncio.run(main())
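The plan parsing above hinges on a single DOTALL regex over the `## File Structure` section; the sketch below shows that pattern in isolation (the sample plan text is invented for illustration):

```python
import re

# Standalone sketch of the "## File Structure" extraction used by
# extract_file_tree_from_plan; the sample plan content is made up.
FENCE = "`" * 3  # avoid writing literal triple backticks in this example
plan = (
    "# Reproduction Plan\n\n"
    "## File Structure\n"
    + FENCE + "\n"
    "project/\n"
    "├── src/\n"
    "│   └── main.py\n"
    "└── README.md\n"
    + FENCE + "\n"
)

# Same shape as the workflow's pattern: heading, opening fence,
# lazily-captured body, closing fence.
pattern = r"## File Structure[^\n]*\n" + FENCE + r"[^\n]*\n(.*?)\n" + FENCE
match = re.search(pattern, plan, re.DOTALL)
file_tree = match.group(1).strip() if match else None
print(file_tree)
```

The lazy `(.*?)` stops at the first closing fence, so only the tree block between the fences is captured.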
projects/ui/serena-new/.devcontainer/devcontainer.json ADDED
@@ -0,0 +1,16 @@
1
+ {
2
+ "name": "serena Project",
3
+ "dockerFile": "../Dockerfile",
4
+ "workspaceFolder": "/workspaces/serena",
5
+ "settings": {
6
+ "terminal.integrated.shell.linux": "/bin/bash",
7
+ "python.pythonPath": "/usr/local/bin/python",
8
+ },
9
+ "extensions": [
10
+ "ms-python.python",
11
+ "ms-toolsai.jupyter",
12
+ "ms-python.vscode-pylance"
13
+ ],
14
+ "forwardPorts": [],
15
+ "remoteUser": "root",
16
+ }
projects/ui/serena-new/.github/FUNDING.yml ADDED
@@ -0,0 +1,3 @@
1
+ # These are supported funding model platforms
2
+
3
+ github: oraios
projects/ui/serena-new/.serena/project.yml ADDED
@@ -0,0 +1,66 @@
1
+ # language of the project (csharp, python, rust, java, typescript, javascript, go, cpp, or ruby)
2
+ # Special requirements:
3
+ # * csharp: Requires the presence of a .sln file in the project folder.
4
+ language: python
5
+
6
+ # whether to use the project's gitignore file to ignore files
7
+ # Added on 2025-04-07
8
+ ignore_all_files_in_gitignore: true
9
+ # list of additional paths to ignore
10
+ # same syntax as gitignore, so you can use * and **
11
+ # Was previously called `ignored_dirs`; please update your config if you are using that.
12
+ # Added (renamed) on 2025-04-07
13
+ ignored_paths: []
14
+
15
+ # whether the project is in read-only mode
16
+ # If set to true, all editing tools will be disabled and attempts to use them will result in an error
17
+ # Added on 2025-04-18
18
+ read_only: false
19
+
20
+
21
+ # list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
22
+ # Below is the complete list of tools for convenience.
23
+ # To make sure you have the latest list of tools, and to view their descriptions,
24
+ # execute `uv run scripts/print_tool_overview.py`.
25
+ #
26
+ # * `activate_project`: Activates a project by name.
27
+ # * `check_onboarding_performed`: Checks whether project onboarding was already performed.
28
+ # * `create_text_file`: Creates/overwrites a file in the project directory.
29
+ # * `delete_lines`: Deletes a range of lines within a file.
30
+ # * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
31
+ # * `execute_shell_command`: Executes a shell command.
32
+ # * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
33
+ # * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
34
+ # * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
35
+ # * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
36
+ # * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
37
+ # * `initial_instructions`: Gets the initial instructions for the current project.
38
+ # Should only be used in settings where the system prompt cannot be set,
39
+ # e.g. in clients you have no control over, like Claude Desktop.
40
+ # * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
41
+ # * `insert_at_line`: Inserts content at a given line in a file.
42
+ # * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
43
+ # * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
44
+ # * `list_memories`: Lists memories in Serena's project-specific memory store.
45
+ # * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
46
+ # * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
47
+ # * `read_file`: Reads a file within the project directory.
48
+ # * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
49
+ # * `remove_project`: Removes a project from the Serena configuration.
50
+ # * `replace_lines`: Replaces a range of lines within a file with new content.
51
+ # * `replace_symbol_body`: Replaces the full definition of a symbol.
52
+ # * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
53
+ # * `search_for_pattern`: Performs a search for a pattern in the project.
54
+ # * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
55
+ # * `switch_modes`: Activates modes by providing a list of their names
56
+ # * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
57
+ # * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
58
+ # * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
59
+ # * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
60
+ excluded_tools: []
61
+
62
+ # initial prompt for the project. It will always be given to the LLM upon activating the project
63
+ # (contrary to the memories, which are loaded on demand).
64
+ initial_prompt: ""
65
+
66
+ project_name: "serena"
projects/ui/serena-new/.vscode/settings.json ADDED
@@ -0,0 +1,14 @@
1
+ {
2
+ "cSpell.words": [
3
+ "agno",
4
+ "asyncio",
5
+ "genai",
6
+ "getpid",
7
+ "Gopls",
8
+ "langsrv",
9
+ "multilspy",
10
+ "pixi",
11
+ "sensai",
12
+ "vibing"
13
+ ],
14
+ }
projects/ui/serena-new/docs/custom_agent.md ADDED
@@ -0,0 +1,67 @@
1
+ # Custom Agents with Serena
2
+
3
+ As a reference implementation, we provide an integration with the [Agno](https://docs.agno.com/introduction/playground) agent framework.
4
+ Agno is a model-agnostic agent framework that allows you to turn Serena into an agent
5
+ (independent of the MCP technology) with a large number of underlying LLMs. While Agno has recently
6
+ added support for MCP servers out of the box, our Agno integration predates this and is a good illustration of how
7
+ easy it is to integrate Serena into an arbitrary agent framework.
8
+
9
+ Here's how it works:
10
+
11
+ 1. Download the agent-ui code with npx
12
+ ```shell
13
+ npx create-agent-ui@latest
14
+ ```
15
+ or, alternatively, clone it manually:
16
+ ```shell
17
+ git clone https://github.com/agno-agi/agent-ui.git
18
+ cd agent-ui
19
+ pnpm install
20
+ pnpm dev
21
+ ```
22
+
23
+ 2. Install serena with the optional requirements:
24
+ ```shell
25
+ # You can also select only agno,google or agno,anthropic instead of all-extras
26
+ uv pip install --all-extras -r pyproject.toml -e .
27
+ ```
28
+
29
+ 3. Copy `.env.example` to `.env` and fill in the API keys for the provider(s) you
30
+ intend to use.
31
+
32
+ 4. Start the agno agent app with
33
+ ```shell
34
+ uv run python scripts/agno_agent.py
35
+ ```
36
+ By default, the script uses Claude as the model, but you can choose any model
37
+ supported by Agno (which is essentially any existing model).
38
+
39
+ 5. In a new terminal, start the agno UI with
40
+ ```shell
41
+ cd agent-ui
42
+ pnpm dev
43
+ ```
44
+ Connect the UI to the agent you started above and start chatting. You will have
45
+ the same tools as in the MCP server version.
46
+
47
+
48
+ Here is a short demo of Serena performing a small analysis task with the newest Gemini model:
49
+
50
+ https://github.com/user-attachments/assets/ccfcb968-277d-4ca9-af7f-b84578858c62
51
+
52
+
53
+ ⚠️ IMPORTANT: In contrast to the MCP server approach, tool execution in the Agno UI does
54
+ not ask for the user's permission. The shell tool is particularly critical, as it can perform arbitrary code execution.
55
+ While we have never encountered any issues with
56
+ this in our testing with Claude, allowing this may not be entirely safe.
57
+ You may choose to disable certain tools for your setup in your Serena project's
58
+ configuration file (`.yml`).
59
+
60
+
61
+ ## Other Agent Frameworks
62
+
63
+ It should be straightforward to incorporate Serena into any
64
+ agent framework (like [pydantic-ai](https://ai.pydantic.dev/), [langgraph](https://langchain-ai.github.io/langgraph/tutorials/introduction/) or others).
65
+ Typically, you only need to write an adapter from Serena's tools to the tool representation in the framework of your choice,
66
+ as was done by us for Agno with [SerenaAgnoToolkit](/src/serena/agno.py).
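As a rough illustration of the adapter idea, the sketch below wraps a tool object (name, description, callable) into the plain callable-plus-metadata shape most agent frameworks accept. `SimpleTool` and `to_framework_tool` are hypothetical names for this sketch, not part of Serena or Agno:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class SimpleTool:
    """Hypothetical stand-in for a tool: a name, a description, and an apply()."""
    name: str
    description: str
    apply: Callable[..., Any]

def to_framework_tool(tool: SimpleTool) -> Dict[str, Any]:
    # Most frameworks ultimately want a callable plus metadata; a real
    # adapter would map these fields onto the framework's own tool class.
    return {"name": tool.name, "description": tool.description, "func": tool.apply}

# Usage: adapt a toy tool and call it through the adapted representation.
echo = SimpleTool("echo", "Returns its input unchanged.", lambda text: text)
adapted = to_framework_tool(echo)
print(adapted["func"]("hello"))
```

The only framework-specific work is deciding where the name, description, and callable go; the tool logic itself stays untouched.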
67
+