OnyxMunk committed
Commit 46b394e · 1 Parent(s): fb8d591

feat: Implement rate limiting, retry logic, input validation for API endpoints, add CI workflow, and enhance frontend UI with copy functionality and Tailwind CSS.
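The retry logic named in this commit message lands in the FastAPI backend; as a rough sketch of the retry-with-backoff pattern (the function name and parameters are illustrative assumptions, not the repository's actual code):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01, retry_on=(RuntimeError,)):
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
```

A call that fails a couple of times and then succeeds returns normally; one that fails on every attempt re-raises the last exception.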

.github/workflows/ci.yml ADDED
@@ -0,0 +1,118 @@
+ name: CI
+
+ on:
+   push:
+     branches: [ main, develop ]
+   pull_request:
+     branches: [ main, develop ]
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         python-version: ["3.9", "3.10", "3.11", "3.12"]
+
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Set up Python ${{ matrix.python-version }}
+         uses: actions/setup-python@v5
+         with:
+           python-version: ${{ matrix.python-version }}
+
+       - name: Cache pip packages
+         uses: actions/cache@v3
+         with:
+           path: ~/.cache/pip
+           key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}
+           restore-keys: |
+             ${{ runner.os }}-pip-
+
+       - name: Install dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install -r requirements.txt
+           pip install pytest pytest-cov mypy
+
+       - name: Run tests
+         run: |
+           pytest tests/ -v --cov=llm_agent_builder --cov=server --cov-report=xml --cov-report=term
+
+       - name: Type checking
+         run: |
+           mypy llm_agent_builder server --ignore-missing-imports
+
+       - name: Upload coverage to Codecov
+         uses: codecov/codecov-action@v3
+         with:
+           file: ./coverage.xml
+           flags: unittests
+           name: codecov-umbrella
+
+   lint:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Set up Python
+         uses: actions/setup-python@v5
+         with:
+           python-version: "3.11"
+
+       - name: Install linting tools
+         run: |
+           python -m pip install --upgrade pip
+           pip install flake8 black isort
+
+       - name: Run flake8
+         run: |
+           flake8 llm_agent_builder server tests --max-line-length=120 --exclude=__pycache__,*.pyc
+
+       - name: Check code formatting with black
+         run: |
+           black --check llm_agent_builder server tests
+
+       - name: Check import sorting with isort
+         run: |
+           isort --check-only llm_agent_builder server tests
+
+   frontend:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Set up Node.js
+         uses: actions/setup-node@v4
+         with:
+           node-version: '18'
+           cache: 'npm'
+           cache-dependency-path: frontend/package-lock.json
+
+       - name: Install dependencies
+         run: |
+           cd frontend
+           npm ci
+
+       - name: Run linter
+         run: |
+           cd frontend
+           npm run lint
+
+       - name: Build frontend
+         run: |
+           cd frontend
+           npm run build
+
+   docker:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+
+       - name: Set up Docker Buildx
+         uses: docker/setup-buildx-action@v3
+
+       - name: Build Docker image
+         run: |
+           docker build -t llm-agent-builder:test .
Dockerfile CHANGED
@@ -1,20 +1,28 @@
  # Stage 1: Build Frontend
- FROM node:18-alpine as frontend-build
+ FROM node:22-alpine as frontend-build
  WORKDIR /app/frontend
  COPY frontend/package*.json ./
  RUN npm install
  COPY frontend/ ./
  RUN npm run build

- # Stage 2: Build Backend & Serve
- FROM python:3.9-slim
- WORKDIR /app
-
- # Install system dependencies if needed (e.g. for some python packages)
+ # Stage 2: Base Python image with git (for Hugging Face Spaces dev-mode compatibility)
+ # Using python:3.10 (not slim) which includes git by default
+ FROM python:3.10 as base
+ # Ensure git and build tools are available
  RUN apt-get update && apt-get install -y --no-install-recommends \
+     git \
      build-essential \
      && rm -rf /var/lib/apt/lists/*

+ # Stage 3: Build Backend & Serve
+ FROM base
+ WORKDIR /app
+
+ # Ensure git is available for Hugging Face Spaces dev-mode stages
+ # (git is already in base, but this ensures it's present when HF Spaces wraps this stage)
+ RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
+
  # Copy requirements and install
  COPY requirements.txt .
  RUN pip install --no-cache-dir -r requirements.txt
@@ -22,6 +30,7 @@ RUN pip install --no-cache-dir -r requirements.txt
  # Copy backend code
  COPY llm_agent_builder/ ./llm_agent_builder/
  COPY server/ ./server/
+ COPY main.py .
  # Create empty init for server if not exists (though we have it)
  # COPY server/__init__.py ./server/
README.md CHANGED
@@ -6,163 +6,389 @@ colorTo: indigo
  sdk: docker
  app_port: 8000
  ---
  # LLM Agent Builder

- This project is a PyCharm application that contains an LLM agent capable of building other LLM agents.

- ## Getting Started

  ### Prerequisites

- - Python 3.8 or higher
  - pip

  ### Installation

- 1. Create and activate a virtual environment (recommended):

- ```bash
- python3 -m venv venv
- source venv/bin/activate  # On Windows: venv\Scripts\activate
- ```

- 2. Install the package in development mode:

- ```bash
- pip install -e .
- ```

- Or install dependencies directly:

- ```bash
- pip install -r requirements.txt
- ```

- 3. Set up your Anthropic API key as an environment variable:

- **For Testing (Mock Key):**

- ```bash
- export ANTHROPIC_API_KEY="sk-ant-test-mock-key-for-testing-purposes-1234567890abcdef"
- ```

- **For Production (Real Key):**

- ```bash
- export ANTHROPIC_API_KEY="your-actual-api-key-here"
- ```

- > **Note:** The mock key above is for testing code structure only. It will not work for actual API calls. Replace it with your real Anthropic API key for production use.

- You can also configure the model by setting the `ANTHROPIC_MODEL` environment variable in your `.env` file.
- Available models include:
- - `claude-3-5-sonnet-20241022` (Default)
- - `claude-3-opus-20240229`
- - `claude-3-sonnet-20240229`
- - `claude-3-haiku-20240307`

- 4. Run the `main.py` script to generate a new agent:

- **Basic Usage:**

- ```bash
- python main.py
- ```

- ## Web Interface (New!)

- You can also use the modern web interface to generate agents.

- ### Prerequisites

- - Node.js installed
- - Python dependencies installed (`pip install -r requirements.txt`)

- ### Running the Web App

  1. **Start the Backend Server:**

- ```bash
- uvicorn server.main:app --reload
- ```

- The API will be available at `http://localhost:8000`.

  2. **Start the Frontend:**
- Open a new terminal:

- ```bash
- cd frontend
- npm run dev
- ```

- Open your browser to `http://localhost:5173`.

- ## Deployment (Hugging Face Spaces)

- This project is configured for deployment to Hugging Face Spaces using Docker.

- 1. Create a new Space on Hugging Face.
- 2. Select **Docker** as the SDK.
- 3. Push the entire repository to the Space.
-    - The `Dockerfile` will automatically build the React frontend and serve it via the FastAPI backend.
-    - The application is stateless: generated agents will be downloaded to your local machine.

- **Advanced Usage (CLI):**
- You can customize the agent generation using command-line arguments:

- ```bash
- llm-agent-builder --name "DataAnalyst" \
-     --prompt "You are a data analyst expert in Pandas." \
-     --task "Analyze this CSV file and provide summary statistics." \
-     --model "claude-3-opus-20240229"
- ```

- **Interactive Mode:**
- If you run the command without arguments, it will launch in interactive mode:

- ```bash
- llm-agent-builder
- ```

- **Available Arguments:**
- - `--name`: Name of the agent (default: "MyAwesomeAgent")
- - `--prompt`: System prompt for the agent
- - `--task`: Example task for the agent
- - `--output`: Output directory (default: "generated_agents")
- - `--model`: Anthropic model to use (overrides `.env`)
- - `--interactive`: Force interactive mode

- ## Development

- ### Testing

- Run unit tests using `pytest`:

  ```bash
  pytest
  ```

  ### Type Checking

- Run static type checking using `mypy`:

  ```bash
- mypy llm_agent_builder
  ```

- ### CI/CD

- This project uses GitHub Actions for Continuous Integration. Tests are automatically run on every push and pull request to the `main` branch.

- ## Project Structure

- - `llm_agent_builder/` - Main package containing the agent builder
-   - `agent_builder.py` - Core AgentBuilder class
-   - `cli.py` - Command-line interface logic
-   - `templates/` - Jinja2 templates for agent generation
- - `main.py` - Entry point script (calls `cli.main`)
- - `test_with_mock_key.sh` - Test script using mock API key for testing
- - `.env.example` - Example environment file with mock API key
- - `generated_agents/` - Output directory for generated agents (created automatically)
- - `tests/` - Unit tests
  sdk: docker
  app_port: 8000
  ---
+
  # LLM Agent Builder

+ > **A powerful, production-ready tool for generating custom LLM agents via CLI, web UI, and Hugging Face Spaces**
+
+ LLM Agent Builder is a comprehensive Python application that enables developers to quickly scaffold and generate AI agents using Anthropic's Claude models or Hugging Face models. Built with FastAPI, React 19, and modern Python tooling.
+
+ ## ✨ Features

+ - 🚀 **Multi-Provider Support**: Generate agents for Anthropic Claude or Hugging Face models
+ - 🎨 **Modern Web UI**: Beautiful React 19 interface with dark/light theme toggle
+ - 💻 **Powerful CLI**: Interactive mode, batch generation, agent testing, and listing
+ - 🔧 **Tool Integration**: Built-in support for tool calling and multi-step workflows
+ - 🛡️ **Production Ready**: Rate limiting, retry logic, input validation, and sandboxed execution
+ - 📦 **Easy Deployment**: Docker-ready for Hugging Face Spaces
+ - 🧪 **Comprehensive Testing**: Full test coverage with pytest and CI/CD
+
+ ## 🚀 Quick Start

  ### Prerequisites

+ - Python 3.9 or higher
+ - Node.js 18+ (for web UI)
  - pip

  ### Installation

+ 1. **Clone the repository:**

+ ```bash
+ git clone https://github.com/kwizzlesurp10-ctrl/LLMAgentbuilder.git
+ cd LLMAgentbuilder
+ ```

+ 2. **Create and activate a virtual environment:**

+ ```bash
+ python3 -m venv venv
+ source venv/bin/activate  # On Windows: venv\Scripts\activate
+ ```

+ 3. **Install the package:**

+ ```bash
+ pip install -e .
+ ```

+ Or install dependencies directly:

+ ```bash
+ pip install -r requirements.txt
+ ```

+ 4. **Set up your API key:**

+ Create a `.env` file:

+ ```bash
+ # For Anthropic
+ ANTHROPIC_API_KEY="your-anthropic-api-key-here"
+ ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"

+ # For Hugging Face (optional)
+ HUGGINGFACEHUB_API_TOKEN="your-hf-token-here"
+ ```

+ ## 📖 Usage

+ ### Command Line Interface

+ #### Generate an Agent

+ **Interactive Mode:**
+ ```bash
+ llm-agent-builder generate
+ # or simply
+ llm-agent-builder
+ ```

+ **Command-Line Mode:**
+ ```bash
+ llm-agent-builder generate \
+     --name "CodeReviewer" \
+     --prompt "You are an expert code reviewer specializing in Python." \
+     --task "Review this function for bugs and suggest improvements." \
+     --model "claude-3-5-sonnet-20241022" \
+     --provider "anthropic"
+ ```

+ #### List Generated Agents

+ ```bash
+ llm-agent-builder list
+ # or specify a custom output directory
+ llm-agent-builder list --output ./my_agents
+ ```
+
+ #### Test an Agent
+
+ ```bash
+ llm-agent-builder test generated_agents/codereviewer.py --task "Review this code: def add(a, b): return a + b"
+ ```
+
+ #### Batch Generation
+
+ Create a JSON config file (`agents.json`):
+
+ ```json
+ [
+   {
+     "name": "DataAnalyst",
+     "prompt": "You are a data analyst expert in Pandas and NumPy.",
+     "task": "Analyze this CSV file and provide summary statistics.",
+     "model": "claude-3-5-sonnet-20241022",
+     "provider": "anthropic"
+   },
+   {
+     "name": "CodeWriter",
+     "prompt": "You are a Python programming assistant.",
+     "task": "Write a function to calculate fibonacci numbers.",
+     "model": "claude-3-5-sonnet-20241022",
+     "provider": "anthropic"
+   }
+ ]
+ ```
+
+ Then run:

+ ```bash
+ llm-agent-builder batch agents.json
+ ```

+ ### Web Interface

  1. **Start the Backend Server:**

+ ```bash
+ uvicorn server.main:app --reload
+ ```

+ The API will be available at `http://localhost:8000`.

  2. **Start the Frontend:**

+ Open a new terminal:
+
+ ```bash
+ cd frontend
+ npm install
+ npm run dev
+ ```
+
+ Open your browser to `http://localhost:5173`.
+
+ ### Features in Web UI

+ - **Live Code Preview**: See generated code in real-time
+ - 🎨 **Theme Toggle**: Switch between dark and light themes
+ - 📋 **Copy to Clipboard**: One-click code copying
+ - 🧪 **Test Agent**: Execute agents directly in the browser (sandboxed)
+ - 📥 **Auto-Download**: Generated agents automatically download

+ ## 🏗️ Architecture
+
+ ### Project Structure
+
+ ```
+ LLMAgentbuilder/
+ ├── llm_agent_builder/          # Core package
+ │   ├── agent_builder.py        # AgentBuilder class with multi-step & tool support
+ │   ├── cli.py                  # CLI with subcommands (generate, list, test, batch)
+ │   └── templates/              # Jinja2 templates for agent generation
+ │       ├── agent_template.py.j2
+ │       └── agent_template_hf.py.j2
+ ├── server/                     # FastAPI backend
+ │   ├── main.py                 # API endpoints with rate limiting & retries
+ │   ├── models.py               # Pydantic models for validation
+ │   └── sandbox.py              # Sandboxed code execution
+ ├── frontend/                   # React 19 frontend
+ │   ├── src/
+ │   │   ├── App.jsx             # Main app with theme toggle
+ │   │   └── components/
+ │   │       ├── AgentForm.jsx   # Agent configuration form
+ │   │       └── CodePreview.jsx # Code preview with copy button
+ │   └── tailwind.config.js      # Tailwind CSS configuration
+ ├── tests/                      # Comprehensive test suite
+ │   ├── test_agent_builder.py
+ │   ├── test_cli.py
+ │   └── test_api.py
+ ├── .github/workflows/          # CI/CD workflows
+ │   └── ci.yml                  # GitHub Actions for testing & linting
+ ├── pyproject.toml              # Modern Python project configuration
+ ├── requirements.txt            # Python dependencies
+ └── Dockerfile                  # Docker configuration for deployment
+ ```

+ ## 🔧 Advanced Features

+ ### Multi-Step Workflows

+ Agents can be generated with multi-step workflow capabilities:

+ ```python
+ # In your generated agent
+ agent = MyAgent(api_key="your-key")
+ result = agent.run_multi_step("Complete this complex task", max_steps=5)
+ ```

+ ### Tool Integration
+
+ Generate agents with tool calling support:
+
+ ```python
+ builder = AgentBuilder()
+ code = builder.build_agent(
+     agent_name="ToolAgent",
+     prompt="You are an agent with tools",
+     example_task="Use tools to complete tasks",
+     tools=[
+         {
+             "name": "search_web",
+             "description": "Search the web",
+             "input_schema": {
+                 "type": "object",
+                 "properties": {
+                     "query": {"type": "string"}
+                 }
+             }
+         }
+     ],
+     enable_multi_step=True
+ )
+ ```

+ ### API Endpoints

+ The FastAPI backend provides:

+ - `POST /api/generate` - Generate a new agent (rate limited: 20/min)
+ - `POST /api/execute` - Execute agent code in sandbox (rate limited: 10/min)
+ - `GET /health` - Health check endpoint
+ - `GET /healthz` - Kubernetes health check
+ - `GET /metrics` - Prometheus metrics
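The per-minute budgets quoted for `/api/generate` and `/api/execute` are typically enforced with a sliding-window counter. A minimal sketch of that idea (illustrative only; the class name is hypothetical and this is not the server's actual limiter):

```python
import time

class SlidingWindowLimiter:
    """Allow at most max_calls within any rolling window of `period` seconds."""

    def __init__(self, max_calls, period):
        self.max_calls = max_calls
        self.period = period
        self._timestamps = []

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # drop timestamps that have fallen out of the window
        self._timestamps = [t for t in self._timestamps if now - t < self.period]
        if len(self._timestamps) < self.max_calls:
            self._timestamps.append(now)
            return True
        return False
```

A limiter built as `SlidingWindowLimiter(20, 60.0)` models the 20-requests-per-minute budget; requests over the budget are rejected until old timestamps age out of the window.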
 
+ ## 🧪 Testing

+ Run the test suite:

  ```bash
+ # All tests
  pytest
+
+ # With coverage
+ pytest --cov=llm_agent_builder --cov=server --cov-report=html
+
+ # Specific test file
+ pytest tests/test_cli.py -v
  ```

  ### Type Checking

+ ```bash
+ mypy llm_agent_builder server
+ ```
+
+ ### Linting
+
+ ```bash
+ # Install dev dependencies
+ pip install -e ".[dev]"
+
+ # Run linters
+ flake8 llm_agent_builder server tests
+ black --check llm_agent_builder server tests
+ isort --check-only llm_agent_builder server tests
+ ```
+
+ ## 🚢 Deployment
+
+ ### Hugging Face Spaces
+
+ 1. Create a new Space on Hugging Face
+ 2. Select **Docker** as the SDK
+ 3. Push the repository:
+
+ ```bash
+ git push https://huggingface.co/spaces/your-username/your-space
+ ```
+
+ The `Dockerfile` automatically builds the React frontend and serves it via FastAPI.
+
+ ### Docker
+
+ Build and run locally:
+
+ ```bash
+ docker build -t llm-agent-builder .
+ docker run -p 8000:8000 -e ANTHROPIC_API_KEY=your-key llm-agent-builder
+ ```
+
+ ## 📊 Supported Models
+
+ ### Anthropic Claude
+
+ - `claude-3-5-sonnet-20241022` (Default)
+ - `claude-3-5-haiku-20241022`
+ - `claude-3-opus-20240229`
+ - `claude-3-haiku-20240307`
+
+ ### Hugging Face
+
+ - `meta-llama/Meta-Llama-3-8B-Instruct`
+ - `mistralai/Mistral-7B-Instruct-v0.3`
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! Please follow these steps:
+
+ 1. Fork the repository
+ 2. Create a feature branch (`git checkout -b feature/amazing-feature`)
+ 3. Make your changes
+ 4. Add tests for new functionality
+ 5. Ensure all tests pass (`pytest`)
+ 6. Run linting (`black`, `isort`, `flake8`)
+ 7. Commit your changes (`git commit -m 'Add amazing feature'`)
+ 8. Push to the branch (`git push origin feature/amazing-feature`)
+ 9. Open a Pull Request
+
+ ### Development Setup

  ```bash
+ # Install in development mode with dev dependencies
+ pip install -e ".[dev]"
+
+ # Install frontend dependencies
+ cd frontend && npm install
+
+ # Run pre-commit hooks (if configured)
+ pre-commit install
  ```

+ ## 📝 License
+
+ This project is licensed under the MIT License - see the LICENSE file for details.
+
+ ## 🙏 Acknowledgments
+
+ - Built with [Anthropic Claude](https://www.anthropic.com/)
+ - Powered by [FastAPI](https://fastapi.tiangolo.com/) and [React](https://react.dev/)
+ - Deployed on [Hugging Face Spaces](https://huggingface.co/spaces)
+
+ ## 📚 Additional Resources

+ - [Anthropic API Documentation](https://docs.anthropic.com/)
+ - [Hugging Face Hub Documentation](https://huggingface.co/docs/hub/)
+ - [FastAPI Documentation](https://fastapi.tiangolo.com/)
+ - [React Documentation](https://react.dev/)

+ ## 🐛 Troubleshooting
+
+ ### Common Issues
+
+ **Issue**: `ANTHROPIC_API_KEY not found`
+ - **Solution**: Ensure your `.env` file is in the project root and contains `ANTHROPIC_API_KEY=your-key`
+
+ **Issue**: Frontend build fails
+ - **Solution**: Ensure Node.js 18+ is installed and run `npm install` in the `frontend/` directory
+
+ **Issue**: Rate limit errors
+ - **Solution**: The API has rate limiting (20 requests/min for generation, 10/min for execution). Wait a moment and retry.
+
+ **Issue**: Agent execution times out
+ - **Solution**: Check that your agent code is valid Python and doesn't have infinite loops. The sandbox has a 30-second timeout.
+
+ ## 📈 Roadmap
+
+ - [ ] Support for OpenAI models
+ - [ ] Agent marketplace/sharing
+ - [ ] Visual workflow builder
+ - [ ] Agent versioning
+ - [ ] Advanced tool library
+ - [ ] Multi-agent orchestration
+
+ ---

+ **Made with ❤️ by the LLM Agent Builder Team**
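The 30-second sandbox timeout mentioned under Troubleshooting can be approximated by running agent code in a separate interpreter with a hard deadline. A simplified sketch (the function name is hypothetical, not the `server/sandbox.py` API, and real sandboxing also needs filesystem and network restrictions):

```python
import os
import subprocess
import sys
import tempfile

def run_sandboxed(code, timeout=30.0):
    """Run untrusted code in a child interpreter, killed after `timeout` seconds."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=timeout
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "error: execution timed out"
    finally:
        os.unlink(path)  # clean up the temp file either way
```

The child process isolates crashes from the server, and `subprocess.run`'s `timeout` guarantees that an infinite loop in agent code cannot hang the API worker.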
frontend/package-lock.json CHANGED
@@ -16,10 +16,13 @@
        "@types/react": "^19.2.5",
        "@types/react-dom": "^19.2.3",
        "@vitejs/plugin-react": "^5.1.1",
+       "autoprefixer": "^10.4.22",
        "eslint": "^9.39.1",
        "eslint-plugin-react-hooks": "^7.0.1",
        "eslint-plugin-react-refresh": "^0.4.24",
        "globals": "^16.5.0",
+       "postcss": "^8.5.6",
+       "tailwindcss": "^4.1.17",
        "vite": "^7.2.4"
      }
    },
@@ -1484,6 +1487,44 @@
      "dev": true,
      "license": "Python-2.0"
    },
+   "node_modules/autoprefixer": {
+     "version": "10.4.22",
+     "resolved": "https://registry.npmjs.org/autoprefixer/-/autoprefixer-10.4.22.tgz",
+     "integrity": "sha512-ARe0v/t9gO28Bznv6GgqARmVqcWOV3mfgUPn9becPHMiD3o9BwlRgaeccZnwTpZ7Zwqrm+c1sUSsMxIzQzc8Xg==",
+     "dev": true,
+     "funding": [
+       {
+         "type": "opencollective",
+         "url": "https://opencollective.com/postcss/"
+       },
+       {
+         "type": "tidelift",
+         "url": "https://tidelift.com/funding/github/npm/autoprefixer"
+       },
+       {
+         "type": "github",
+         "url": "https://github.com/sponsors/ai"
+       }
+     ],
+     "license": "MIT",
+     "dependencies": {
+       "browserslist": "^4.27.0",
+       "caniuse-lite": "^1.0.30001754",
+       "fraction.js": "^5.3.4",
+       "normalize-range": "^0.1.2",
+       "picocolors": "^1.1.1",
+       "postcss-value-parser": "^4.2.0"
+     },
+     "bin": {
+       "autoprefixer": "bin/autoprefixer"
+     },
+     "engines": {
+       "node": "^10 || ^12 || >=14"
+     },
+     "peerDependencies": {
+       "postcss": "^8.1.0"
+     }
+   },
    "node_modules/balanced-match": {
      "version": "1.0.2",
      "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz",
@@ -2021,6 +2062,20 @@
      "dev": true,
      "license": "ISC"
    },
+   "node_modules/fraction.js": {
+     "version": "5.3.4",
+     "resolved": "https://registry.npmjs.org/fraction.js/-/fraction.js-5.3.4.tgz",
+     "integrity": "sha512-1X1NTtiJphryn/uLQz3whtY6jK3fTqoE3ohKs0tT+Ujr1W59oopxmoEh7Lu5p6vBaPbgoM0bzveAW4Qi5RyWDQ==",
+     "dev": true,
+     "license": "MIT",
+     "engines": {
+       "node": "*"
+     },
+     "funding": {
+       "type": "github",
+       "url": "https://github.com/sponsors/rawify"
+     }
+   },
    "node_modules/fsevents": {
      "version": "2.3.3",
      "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz",
@@ -2343,6 +2398,16 @@
      "dev": true,
      "license": "MIT"
    },
+   "node_modules/normalize-range": {
+     "version": "0.1.2",
+     "resolved": "https://registry.npmjs.org/normalize-range/-/normalize-range-0.1.2.tgz",
+     "integrity": "sha512-bdok/XvKII3nUpklnV6P2hxtMNrCboOjAcyBuQnWEhO665FwrSNRxU+AqpsyvO6LgGYPspN+lu5CLtw4jPRKNA==",
+     "dev": true,
+     "license": "MIT",
+     "engines": {
+       "node": ">=0.10.0"
+     }
+   },
    "node_modules/optionator": {
      "version": "0.9.4",
      "resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
@@ -2475,6 +2540,13 @@
        "node": "^10 || ^12 || >=14"
      }
    },
+   "node_modules/postcss-value-parser": {
+     "version": "4.2.0",
+     "resolved": "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-4.2.0.tgz",
+     "integrity": "sha512-1NNCs6uurfkVbeXG4S8JFT9t19m45ICnif8zWLd5oPSZ50QnwMfK+H3jv408d4jw/7Bttv5axS5IiHoLaVNHeQ==",
+     "dev": true,
+     "license": "MIT"
+   },
    "node_modules/prelude-ls": {
      "version": "1.2.1",
      "resolved": "https://registry.npmjs.org/prelude-ls/-/prelude-ls-1.2.1.tgz",
@@ -2653,6 +2725,13 @@
        "node": ">=8"
      }
    },
+   "node_modules/tailwindcss": {
+     "version": "4.1.17",
+     "resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.17.tgz",
+     "integrity": "sha512-j9Ee2YjuQqYT9bbRTfTZht9W/ytp5H+jJpZKiYdP/bpnXARAuELt9ofP0lPnmHjbga7SNQIxdTAXCmtKVYjN+Q==",
+     "dev": true,
+     "license": "MIT"
+   },
    "node_modules/tinyglobby": {
      "version": "0.2.15",
      "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
frontend/package.json CHANGED
@@ -18,10 +18,14 @@
    "@types/react": "^19.2.5",
    "@types/react-dom": "^19.2.3",
    "@vitejs/plugin-react": "^5.1.1",
+   "autoprefixer": "^10.4.22",
    "eslint": "^9.39.1",
    "eslint-plugin-react-hooks": "^7.0.1",
    "eslint-plugin-react-refresh": "^0.4.24",
    "globals": "^16.5.0",
+   "postcss": "^8.5.6",
+   "@tailwindcss/postcss": "^4.0.0",
+   "tailwindcss": "^4.1.17",
    "vite": "^7.2.4"
  }
- }
+ }
frontend/postcss.config.js ADDED
@@ -0,0 +1,7 @@
+ export default {
+   plugins: {
+     '@tailwindcss/postcss': {},
+     autoprefixer: {},
+   },
+ }
frontend/src/App.jsx CHANGED
@@ -1,4 +1,4 @@
- import React, { useState } from 'react';
+ import React, { useState, useEffect } from 'react';
  import AgentForm from './components/AgentForm';
  import CodePreview from './components/CodePreview';

@@ -7,6 +7,21 @@ function App() {
    const [generatedPath, setGeneratedPath] = useState(null);
    const [isLoading, setIsLoading] = useState(false);
    const [error, setError] = useState(null);
+   const [theme, setTheme] = useState('dark');
+
+   useEffect(() => {
+     // Load theme from localStorage or default to dark
+     const savedTheme = localStorage.getItem('theme') || 'dark';
+     setTheme(savedTheme);
+     document.documentElement.setAttribute('data-theme', savedTheme);
+   }, []);
+
+   const toggleTheme = () => {
+     const newTheme = theme === 'dark' ? 'light' : 'dark';
+     setTheme(newTheme);
+     document.documentElement.setAttribute('data-theme', newTheme);
+     localStorage.setItem('theme', newTheme);
+   };

    const handleGenerate = async (formData) => {
      setIsLoading(true);
@@ -55,8 +70,28 @@ function App() {
    return (
      <div className="container">
        <header className="header">
-         <h1>LLM Agent Builder</h1>
-         <p>Design, configure, and generate AI agents in seconds.</p>
+         <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', width: '100%', marginBottom: '1rem' }}>
+           <div>
+             <h1>LLM Agent Builder</h1>
+             <p>Design, configure, and generate AI agents in seconds.</p>
+           </div>
+           <button
+             onClick={toggleTheme}
+             className="btn-secondary"
+             style={{
+               padding: '0.5rem 1rem',
+               borderRadius: '0.5rem',
+               border: '1px solid var(--glass-border)',
+               background: 'var(--glass-bg)',
+               color: 'var(--text-primary)',
+               cursor: 'pointer',
+               fontSize: '1.5rem'
+             }}
+             aria-label="Toggle theme"
+           >
+             {theme === 'dark' ? '☀️' : '🌙'}
+           </button>
+         </div>
        </header>

        {error && (
frontend/src/components/CodePreview.jsx CHANGED
@@ -1,10 +1,22 @@
-import React from 'react';
+import React, { useState } from 'react';
 
 const CodePreview = ({ code, path }) => {
+  const [copied, setCopied] = useState(false);
+
+  const copyToClipboard = () => {
+    if (code) {
+      navigator.clipboard.writeText(code);
+      setCopied(true);
+      setTimeout(() => setCopied(false), 2000);
+    }
+  };
+
   if (!code) {
     return (
-      <div className="card" style={{ height: '100%', display: 'flex', alignItems: 'center', justifyContent: 'center', color: 'var(--text-secondary)' }}>
+      <div className="card" style={{ height: '100%', display: 'flex', flexDirection: 'column', alignItems: 'center', justifyContent: 'center', color: 'var(--text-secondary)' }}>
+        <div style={{ fontSize: '3rem', marginBottom: '1rem' }}>📝</div>
         <p>Generated code will appear here</p>
+        <p style={{ fontSize: '0.875rem', marginTop: '0.5rem', opacity: 0.7 }}>Fill out the form and click "Generate Agent"</p>
       </div>
     );
   }
@@ -12,12 +24,34 @@ const CodePreview = ({ code, path }) => {
   return (
     <div className="card">
       <div style={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', marginBottom: '1rem' }}>
-        <h2>Preview</h2>
-        {path && <span className="status-badge status-success">Saved to {path.split('/').pop()}</span>}
+        <h2>Code Preview</h2>
+        <div style={{ display: 'flex', gap: '0.5rem', alignItems: 'center' }}>
+          {path && <span className="status-badge status-success">Saved to {path.split('/').pop()}</span>}
+          <button
+            onClick={copyToClipboard}
+            style={{
+              padding: '0.5rem 1rem',
+              borderRadius: '0.5rem',
+              border: '1px solid var(--glass-border)',
+              background: copied ? 'rgba(34, 197, 94, 0.2)' : 'var(--glass-bg)',
+              color: copied ? '#4ade80' : 'var(--text-primary)',
+              cursor: 'pointer',
+              fontSize: '0.875rem',
+              transition: 'all 0.2s ease'
+            }}
+          >
+            {copied ? '✓ Copied!' : '📋 Copy'}
+          </button>
+        </div>
       </div>
-      <pre>
-        <code>{code}</code>
-      </pre>
+      <div style={{ position: 'relative' }}>
+        <pre style={{ maxHeight: '600px', overflow: 'auto' }}>
+          <code>{code}</code>
+        </pre>
+      </div>
+      <div style={{ marginTop: '1rem', padding: '0.75rem', background: 'rgba(59, 130, 246, 0.1)', borderRadius: '0.5rem', fontSize: '0.875rem', color: 'var(--text-secondary)' }}>
+        💡 <strong>Tip:</strong> Save this code to a .py file and run it with your API key set.
+      </div>
     </div>
   );
 };
frontend/src/index.css CHANGED
@@ -1,3 +1,7 @@
+@tailwind base;
+@tailwind components;
+@tailwind utilities;
+
 :root {
   --bg-primary: #0f172a;
   --bg-secondary: #1e293b;
@@ -11,6 +15,15 @@
   --font-sans: 'Inter', system-ui, -apple-system, sans-serif;
 }
 
+[data-theme="light"] {
+  --bg-primary: #ffffff;
+  --bg-secondary: #f1f5f9;
+  --text-primary: #0f172a;
+  --text-secondary: #64748b;
+  --glass-bg: rgba(255, 255, 255, 0.9);
+  --glass-border: rgba(0, 0, 0, 0.1);
+}
+
 * {
   box-sizing: border-box;
   margin: 0;
frontend/tailwind.config.js ADDED
@@ -0,0 +1,12 @@
+/** @type {import('tailwindcss').Config} */
+export default {
+  content: [
+    "./index.html",
+    "./src/**/*.{js,ts,jsx,tsx}",
+  ],
+  theme: {
+    extend: {},
+  },
+  plugins: [],
+}
+
llm_agent_builder/agent_builder.py CHANGED
@@ -1,5 +1,5 @@
 import os
-from typing import Optional
+from typing import Optional, List, Dict, Any
 from jinja2 import Environment, FileSystemLoader
 
 class AgentBuilder:
@@ -9,7 +9,17 @@ class AgentBuilder:
         self.env = Environment(loader=FileSystemLoader(template_path))
         self.template = self.env.get_template('agent_template.py.j2')
 
-    def build_agent(self, agent_name: str, prompt: str, example_task: str, model: str = "claude-3-5-sonnet-20241022", provider: str = "anthropic", stream: bool = False) -> str:
+    def build_agent(
+        self,
+        agent_name: str,
+        prompt: str,
+        example_task: str,
+        model: str = "claude-3-5-sonnet-20241022",
+        provider: str = "anthropic",
+        stream: bool = False,
+        tools: Optional[List[Dict[str, Any]]] = None,
+        enable_multi_step: bool = False
+    ) -> str:
         """
         Generates the Python code for a new agent.
 
@@ -19,6 +29,8 @@ class AgentBuilder:
        :param model: The model to use.
        :param provider: The provider (anthropic or huggingface).
        :param stream: Whether to stream the response.
+        :param tools: Optional list of tool definitions for tool calling support.
+        :param enable_multi_step: Enable multi-step workflow capabilities.
        :return: The generated Python code as a string.
        """
        template_name = 'agent_template_hf.py.j2' if provider == 'huggingface' else 'agent_template.py.j2'
@@ -29,5 +41,7 @@ class AgentBuilder:
            prompt=prompt,
            example_task=example_task,
            model=model,
-            stream=stream
+            stream=stream,
+            tools=tools or [],
+            enable_multi_step=enable_multi_step
        )
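The new `tools` parameter is rendered into the generated agent and ultimately forwarded to the Anthropic Messages API, so entries are expected to follow that API's tool schema. A minimal sketch of such a list — the `get_weather` tool and all of its fields are hypothetical, not part of this repo:

```python
# Hypothetical tool definition, shaped like an Anthropic Messages API tool
# entry; build_agent(tools=[...]) embeds this list into the generated agent.
weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

tools = [weather_tool]
```

The generated agent's `_execute_tool` stub would then be overridden in a subclass to actually serve `get_weather` calls.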
llm_agent_builder/cli.py CHANGED
@@ -1,83 +1,285 @@
 import os
 import argparse
 import sys
-from typing import Optional
+import json
+import subprocess
+from typing import Optional, List
+from pathlib import Path
 from llm_agent_builder.agent_builder import AgentBuilder
 from dotenv import load_dotenv
 
-def get_input(prompt: str, default: str) -> str:
-    value = input(f"{prompt} [{default}]: ").strip()
-    return value if value else default
-
-def main() -> None:
-    load_dotenv()
-
-    parser = argparse.ArgumentParser(description="Generate an LLM agent using Anthropic API.")
-    parser.add_argument("--name", default="MyAwesomeAgent", help="Name of the agent to be built")
-    parser.add_argument("--prompt", default="You are a helpful assistant that specializes in writing Python code.", help="System prompt for the agent")
-    parser.add_argument("--task", default="Write a Python function that calculates the factorial of a number.", help="Example task for the agent")
-    parser.add_argument("--output", default="generated_agents", help="Output directory for the generated agent")
-    parser.add_argument("--model", help="Anthropic model to use (overrides .env)")
-    parser.add_argument("--provider", default="anthropic", choices=["anthropic", "huggingface"], help="LLM Provider to use (anthropic or huggingface)")
-    parser.add_argument("--interactive", action="store_true", help="Run in interactive mode")
-
-    # Check if we should run in interactive mode (explicit flag or no args)
-    # Note: We want to preserve the ability to run with defaults using a flag if needed,
-    # but for now let's say if NO args are passed, we default to interactive?
-    # Or maybe just add an --interactive flag.
-    # The plan said "If no arguments are provided, use input()".
-    # But argparse sets defaults.
-    # Let's stick to the plan: if len(sys.argv) == 1, go interactive.
-
-    if len(sys.argv) == 1:
-        print("No arguments provided. Starting interactive mode...")
-        name = get_input("Agent Name", "MyAwesomeAgent")
-        prompt = get_input("System Prompt", "You are a helpful assistant that specializes in writing Python code.")
-        task = get_input("Example Task", "Write a Python function that calculates the factorial of a number.")
-        output = get_input("Output Directory", "generated_agents")
-        default_model = os.environ.get("ANTHROPIC_MODEL", "claude-3-5-sonnet-20241022")
-        model = get_input("Anthropic Model", default_model)
-        provider = get_input("Provider (anthropic/huggingface)", "anthropic")
-
-        args = argparse.Namespace(
-            name=name,
-            prompt=prompt,
-            task=task,
-            output=output,
-            model=model,
-            provider=provider
-        )
-    else:
-        args = parser.parse_args()
-
-    # Override ANTHROPIC_MODEL if provided via CLI or Interactive
-    if args.model:
-        os.environ["ANTHROPIC_MODEL"] = args.model
-
-    # Create an instance of the AgentBuilder
-    builder = AgentBuilder()
-
-    # Generate the agent code
-    # We need to handle the model argument being passed only if it exists, but build_agent has a default.
-    # However, we now have a provider argument too.
-    agent_code = builder.build_agent(
-        agent_name=args.name,
-        prompt=args.prompt,
-        example_task=args.task,
-        model=args.model if args.model else ("claude-3-5-sonnet-20241022" if args.provider == "anthropic" else "HuggingFaceH4/zephyr-7b-beta"),
-        provider=args.provider
-    )
-
-    # Define the output path for the generated agent
-    os.makedirs(args.output, exist_ok=True)
-    output_path = os.path.join(args.output, f"{args.name.lower()}.py")
-
-    # Write the generated code to a file
-    with open(output_path, "w") as f:
-        f.write(agent_code)
-
-    print(f"Agent '{args.name}' has been created and saved to '{output_path}'")
-    print("To use the agent, you need to set the ANTHROPIC_API_KEY environment variable.")
+def get_input(prompt: str, default: str, validator=None) -> str:
+    """Get user input with optional validation."""
+    while True:
+        value = input(f"{prompt} [{default}]: ").strip()
+        value = value if value else default
+        if validator:
+            try:
+                validator(value)
+                return value
+            except ValueError as e:
+                print(f"Error: {e}. Please try again.")
+                continue
+        return value
+
+def validate_agent_name(name: str) -> None:
+    """Validate agent name."""
+    if not name:
+        raise ValueError("Agent name cannot be empty")
+    if not name.replace("_", "").replace("-", "").isalnum():
+        raise ValueError("Agent name must be alphanumeric (with underscores or hyphens)")
+
+def list_agents(output_dir: str = "generated_agents") -> None:
+    """List all generated agents."""
+    output_path = Path(output_dir)
+    if not output_path.exists():
+        print(f"Output directory '{output_dir}' does not exist.")
+        return
+
+    agents = list(output_path.glob("*.py"))
+    if not agents:
+        print(f"No agents found in '{output_dir}'.")
+        return
+
+    print(f"\nFound {len(agents)} agent(s) in '{output_dir}':")
+    print("-" * 60)
+    for agent_file in sorted(agents):
+        print(f"  {agent_file.stem}")
+    print("-" * 60)
+
+def test_agent(agent_path: str, task: Optional[str] = None) -> None:
+    """Test a generated agent."""
+    agent_file = Path(agent_path)
+    if not agent_file.exists():
+        print(f"Error: Agent file '{agent_path}' not found.")
+        sys.exit(1)
+
+    api_key = os.environ.get("ANTHROPIC_API_KEY") or os.environ.get("HUGGINGFACEHUB_API_TOKEN")
+    if not api_key:
+        print("Error: API key not found. Please set ANTHROPIC_API_KEY or HUGGINGFACEHUB_API_TOKEN.")
+        sys.exit(1)
+
+    if not task:
+        task = input("Enter task to test: ").strip()
+        if not task:
+            print("Error: Task cannot be empty.")
+            sys.exit(1)
+
+    try:
+        cmd = [sys.executable, str(agent_file), "--task", task]
+        result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
+
+        if result.returncode == 0:
+            print("\n" + "=" * 60)
+            print("Agent Execution Result:")
+            print("=" * 60)
+            print(result.stdout)
+            if result.stderr:
+                print("\nWarnings/Errors:")
+                print(result.stderr)
+        else:
+            print(f"Error: Agent execution failed with code {result.returncode}")
+            print(result.stderr)
+            sys.exit(1)
+    except subprocess.TimeoutExpired:
+        print("Error: Agent execution timed out after 60 seconds.")
+        sys.exit(1)
+    except Exception as e:
+        print(f"Error running agent: {e}")
+        sys.exit(1)
+
+def batch_generate(config_file: str, output_dir: str = "generated_agents") -> None:
+    """Generate multiple agents from a JSON configuration file."""
+    config_path = Path(config_file)
+    if not config_path.exists():
+        print(f"Error: Configuration file '{config_file}' not found.")
+        sys.exit(1)
+
+    try:
+        with open(config_path, 'r') as f:
+            configs = json.load(f)
+
+        if not isinstance(configs, list):
+            print("Error: Configuration file must contain a JSON array of agent configurations.")
+            sys.exit(1)
+
+        builder = AgentBuilder()
+        output_path = Path(output_dir)
+        output_path.mkdir(exist_ok=True)
+
+        print(f"Generating {len(configs)} agent(s)...")
+        print("-" * 60)
+
+        for i, config in enumerate(configs, 1):
+            try:
+                agent_name = config.get("name", f"Agent{i}")
+                prompt = config.get("prompt", "")
+                task = config.get("task", "")
+                model = config.get("model", "claude-3-5-sonnet-20241022")
+                provider = config.get("provider", "anthropic")
+
+                if not prompt or not task:
+                    print(f"  [{i}] Skipping '{agent_name}': missing prompt or task")
+                    continue
+
+                agent_code = builder.build_agent(
+                    agent_name=agent_name,
+                    prompt=prompt,
+                    example_task=task,
+                    model=model,
+                    provider=provider
+                )
+
+                agent_file = output_path / f"{agent_name.lower()}.py"
+                with open(agent_file, "w") as f:
+                    f.write(agent_code)
+
+                print(f"  [{i}] ✓ Generated '{agent_name}' -> {agent_file}")
+            except Exception as e:
+                print(f"  [{i}] ✗ Error generating '{config.get('name', f'Agent{i}')}': {e}")
+
+        print("-" * 60)
+        print(f"Batch generation complete. Check '{output_dir}' for generated agents.")
+    except json.JSONDecodeError as e:
+        print(f"Error: Invalid JSON in configuration file: {e}")
+        sys.exit(1)
+    except Exception as e:
+        print(f"Error: {e}")
+        sys.exit(1)
+
+def main() -> None:
+    load_dotenv()
+
+    parser = argparse.ArgumentParser(
+        description="LLM Agent Builder - Generate, test, and manage AI agents",
+        formatter_class=argparse.RawDescriptionHelpFormatter,
+        epilog="""
+Examples:
+  # Generate an agent interactively
+  llm-agent-builder generate
+
+  # Generate with command-line arguments
+  llm-agent-builder generate --name CodeReviewer --prompt "You are a code reviewer" --task "Review this code"
+
+  # List all generated agents
+  llm-agent-builder list
+
+  # Test an agent
+  llm-agent-builder test generated_agents/myagent.py --task "Review this function"
+
+  # Batch generate from config file
+  llm-agent-builder batch agents.json
+"""
+    )
+
+    subparsers = parser.add_subparsers(dest="command", help="Available commands")
+
+    # Generate subcommand
+    gen_parser = subparsers.add_parser("generate", help="Generate a new agent")
+    gen_parser.add_argument("--name", default="MyAwesomeAgent", help="Name of the agent to be built")
+    gen_parser.add_argument("--prompt", default="You are a helpful assistant that specializes in writing Python code.", help="System prompt for the agent")
+    gen_parser.add_argument("--task", default="Write a Python function that calculates the factorial of a number.", help="Example task for the agent")
+    gen_parser.add_argument("--output", default="generated_agents", help="Output directory for the generated agent")
+    gen_parser.add_argument("--model", help="Model to use (overrides .env)")
+    gen_parser.add_argument("--provider", default="anthropic", choices=["anthropic", "huggingface"], help="LLM Provider to use")
+    gen_parser.add_argument("--interactive", action="store_true", help="Run in interactive mode")
+
+    # List subcommand
+    list_parser = subparsers.add_parser("list", help="List all generated agents")
+    list_parser.add_argument("--output", default="generated_agents", help="Output directory to search")
+
+    # Test subcommand
+    test_parser = subparsers.add_parser("test", help="Test a generated agent")
+    test_parser.add_argument("agent_path", help="Path to the agent Python file")
+    test_parser.add_argument("--task", help="Task to test the agent with")
+
+    # Batch subcommand
+    batch_parser = subparsers.add_parser("batch", help="Generate multiple agents from a JSON config file")
+    batch_parser.add_argument("config_file", help="Path to JSON configuration file")
+    batch_parser.add_argument("--output", default="generated_agents", help="Output directory for generated agents")
+
+    args = parser.parse_args()
+
+    # Handle no command (default to generate in interactive mode)
+    if not args.command:
+        args.command = "generate"
+        args.interactive = True
+
+    try:
+        if args.command == "generate":
+            # Interactive mode
+            if args.interactive or len(sys.argv) == 1:
+                print("Starting interactive agent generation...")
+                name = get_input("Agent Name", args.name, validator=validate_agent_name)
+                prompt = get_input("System Prompt", args.prompt)
+                task = get_input("Example Task", args.task)
+                output = get_input("Output Directory", args.output)
+                default_model = os.environ.get("ANTHROPIC_MODEL", "claude-3-5-sonnet-20241022")
+                model = get_input("Model", default_model)
+                provider = get_input("Provider (anthropic/huggingface)", args.provider)
+
+                # Validate provider
+                if provider not in ["anthropic", "huggingface"]:
+                    print(f"Error: Invalid provider '{provider}'. Must be 'anthropic' or 'huggingface'.")
+                    sys.exit(1)
+            else:
+                name = args.name
+                prompt = args.prompt
+                task = args.task
+                output = args.output
+                model = args.model
+                provider = args.provider
+
+            # Validate agent name
+            try:
+                validate_agent_name(name)
+            except ValueError as e:
+                print(f"Error: {e}")
+                sys.exit(1)
+
+            # Override ANTHROPIC_MODEL if provided
+            if model:
+                os.environ["ANTHROPIC_MODEL"] = model
+
+            # Create an instance of the AgentBuilder
+            builder = AgentBuilder()
+
+            # Generate the agent code
+            default_model = model or ("claude-3-5-sonnet-20241022" if provider == "anthropic" else "meta-llama/Meta-Llama-3-8B-Instruct")
+            agent_code = builder.build_agent(
+                agent_name=name,
+                prompt=prompt,
+                example_task=task,
+                model=default_model,
+                provider=provider
+            )
+
+            # Define the output path for the generated agent
+            os.makedirs(output, exist_ok=True)
+            output_path = os.path.join(output, f"{name.lower()}.py")
+
+            # Write the generated code to a file
+            with open(output_path, "w") as f:
+                f.write(agent_code)
+
+            print(f"\n✓ Agent '{name}' has been created and saved to '{output_path}'")
+            print("To use the agent, you need to set the ANTHROPIC_API_KEY environment variable.")
+
+        elif args.command == "list":
+            list_agents(args.output)
+
+        elif args.command == "test":
+            test_agent(args.agent_path, args.task)
+
+        elif args.command == "batch":
+            batch_generate(args.config_file, args.output)
+
+    except KeyboardInterrupt:
+        print("\n\nOperation cancelled by user.")
+        sys.exit(1)
+    except Exception as e:
+        print(f"Error: {e}", file=sys.stderr)
+        sys.exit(1)
 
 if __name__ == "__main__":
     main()
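The new `batch` subcommand reads a JSON array of agent configs, pulling `name`, `prompt`, and `task` (and optionally `model` and `provider`) from each entry via `config.get(...)`. A sketch of writing such a file — the agent names and prompts here are made up for illustration:

```python
import json

# Hypothetical agents.json for `llm-agent-builder batch agents.json`;
# keys mirror what batch_generate reads from each entry.
configs = [
    {
        "name": "CodeReviewer",
        "prompt": "You are a meticulous Python code reviewer.",
        "task": "Review this function for bugs and style issues.",
        "provider": "anthropic",
    },
    {
        "name": "DocWriter",
        "prompt": "You write clear API documentation.",
        "task": "Document this module's public functions.",
        "model": "claude-3-5-sonnet-20241022",
    },
]

with open("agents.json", "w") as f:
    json.dump(configs, f, indent=2)
```

Entries missing `prompt` or `task` are skipped by `batch_generate` with a warning rather than failing the whole run.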
llm_agent_builder/templates/agent_template.py.j2 CHANGED
@@ -1,21 +1,127 @@
 import anthropic
 import os
+from typing import Optional, List, Dict, Any
+{% if enable_multi_step or tools %}
+import json
+{% endif %}
 
 class {{ agent_name }}:
     def __init__(self, api_key):
         self.client = anthropic.Anthropic(api_key=api_key)
         self.prompt = "{{- prompt -}}"
+        {% if tools %}
+        self.tools = {{ tools | tojson }}
+        {% endif %}
 
+    {% if tools %}
+    def _execute_tool(self, tool_name: str, tool_input: Dict[str, Any]) -> Dict[str, Any]:
+        """Execute a tool call. Override this method to implement custom tool logic."""
+        # Default implementation - override in subclass for custom tools
+        return {"result": f"Tool {tool_name} executed with input: {tool_input}"}
+    {% endif %}
+
+    {% if enable_multi_step %}
+    def run_multi_step(self, task: str, max_steps: int = 5) -> str:
+        """Run a multi-step workflow where the agent can iterate on the task."""
+        messages = [{"role": "user", "content": task}]
+        final_result = None
+
+        for step in range(max_steps):
+            response = self.client.messages.create(
+                model=os.environ.get("ANTHROPIC_MODEL", "{{ model }}"),
+                max_tokens=2048,
+                system=self.prompt,
+                messages=messages{% if tools %},
+                tools=self.tools{% endif %}
+            )
+
+            # Handle tool use if present
+            {% if tools %}
+            if response.stop_reason == "tool_use":
+                tool_uses = [block for block in response.content if block.type == "tool_use"]
+                for tool_use in tool_uses:
+                    tool_result = self._execute_tool(tool_use.name, tool_use.input)
+                    messages.append({
+                        "role": "assistant",
+                        "content": response.content
+                    })
+                    messages.append({
+                        "role": "user",
+                        "content": [{
+                            "type": "tool_result",
+                            "tool_use_id": tool_use.id,
+                            "content": json.dumps(tool_result)
+                        }]
+                    })
+                continue
+            {% endif %}
+
+            # Extract text response
+            text_content = [block.text for block in response.content if block.type == "text"]
+            if text_content:
+                final_result = text_content[0]
+                messages.append({
+                    "role": "assistant",
+                    "content": response.content
+                })
+
+            # Check if task is complete (simple heuristic - can be enhanced)
+            if final_result and ("complete" in final_result.lower() or "finished" in final_result.lower()):
+                break
+
+            # Add continuation prompt for next step
+            if step < max_steps - 1:
+                messages.append({
+                    "role": "user",
+                    "content": "Continue or refine your response if needed."
+                })
+
+        return final_result or "Multi-step workflow completed."
+    {% endif %}
 
-    def run(self, task):
-        response = self.client.messages.create(
+    def run(self, task: str{% if enable_multi_step %}, use_multi_step: bool = False{% endif %}):
+        {% if enable_multi_step %}
+        if use_multi_step:
+            return self.run_multi_step(task)
+        {% endif %}
+
+        response = self.client.messages.create(
             model=os.environ.get("ANTHROPIC_MODEL", "{{ model }}"),
-            max_tokens=1024,
+            max_tokens=2048,
             system=self.prompt,
             messages=[
                 {"role": "user", "content": task}
-            ]
+            ]{% if tools %},
+            tools=self.tools{% endif %}
         )
+
+        {% if tools %}
+        # Handle tool use in single-step mode
+        if response.stop_reason == "tool_use":
+            tool_uses = [block for block in response.content if block.type == "tool_use"]
+            tool_results = []
+            for tool_use in tool_uses:
+                tool_result = self._execute_tool(tool_use.name, tool_use.input)
+                tool_results.append({
+                    "type": "tool_result",
+                    "tool_use_id": tool_use.id,
+                    "content": json.dumps(tool_result)
+                })
+
+            # Get final response after tool execution
+            follow_up = self.client.messages.create(
+                model=os.environ.get("ANTHROPIC_MODEL", "{{ model }}"),
+                max_tokens=2048,
+                system=self.prompt,
+                messages=[
+                    {"role": "user", "content": task},
+                    {"role": "assistant", "content": response.content},
+                    {"role": "user", "content": tool_results}
+                ]
+            )
+            return follow_up.content[0].text
+        {% endif %}
+
         return response.content[0].text
 
 if __name__ == '__main__':
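The follow-up request in the template above replays the conversation with a `tool_result` block keyed to the original `tool_use` id. A stdlib-only sketch of that message shape — the task, tool output, and `toolu_...` id are invented for illustration:

```python
import json

# Hypothetical payload mirroring the template's follow-up request after a
# tool call: user turn, assistant tool_use turn, then the tool_result turn.
tool_result = {"result": "72F and sunny"}  # made-up tool output
follow_up_messages = [
    {"role": "user", "content": "What's the weather in Austin?"},
    # In the real flow this is response.content (the tool_use blocks):
    {"role": "assistant", "content": "<tool_use blocks from first response>"},
    {
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": "toolu_example123",  # made-up id
            "content": json.dumps(tool_result),
        }],
    },
]
```

The `tool_use_id` must match the id from the assistant's `tool_use` block, which is why the template threads `tool_use.id` through `_execute_tool` results.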
pyproject.toml ADDED
@@ -0,0 +1,103 @@
+[build-system]
+requires = ["setuptools>=61.0", "wheel"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "llm-agent-builder"
+version = "1.0.0"
+description = "A tool to scaffold and generate Anthropic-based LLM agents"
+readme = "README.md"
+requires-python = ">=3.9"
+license = {text = "MIT"}
+authors = [
+    {name = "LLMAgentBuilder Team"}
+]
+keywords = ["llm", "agent", "anthropic", "claude", "ai", "automation"]
+classifiers = [
+    "Development Status :: 4 - Beta",
+    "Intended Audience :: Developers",
+    "License :: OSI Approved :: MIT License",
+    "Programming Language :: Python :: 3",
+    "Programming Language :: Python :: 3.9",
+    "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
+    "Programming Language :: Python :: 3.12",
+    "Topic :: Software Development :: Libraries :: Python Modules",
+    "Topic :: Scientific/Engineering :: Artificial Intelligence",
+]
+
+dependencies = [
+    "anthropic>=0.18.0",
+    "Jinja2>=3.1.0",
+    "python-dotenv>=1.0.0",
+    "fastapi>=0.104.0",
+    "uvicorn[standard]>=0.24.0",
+    "pydantic>=2.0.0",
+    "prometheus-fastapi-instrumentator>=6.0.0",
+    "huggingface_hub>=0.19.0",
+    "slowapi>=0.1.9",
+    "tenacity>=8.2.0",
+]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=7.4.0",
+    "pytest-cov>=4.1.0",
+    "mypy>=1.5.0",
+    "flake8>=6.1.0",
+    "black>=23.9.0",
+    "isort>=5.12.0",
+]
+
+[project.scripts]
+llm-agent-builder = "llm_agent_builder.cli:main"
+
+[project.urls]
+Homepage = "https://github.com/kwizzlesurp10-ctrl/LLMAgentbuilder"
+Documentation = "https://github.com/kwizzlesurp10-ctrl/LLMAgentbuilder#readme"
+Repository = "https://github.com/kwizzlesurp10-ctrl/LLMAgentbuilder"
+Issues = "https://github.com/kwizzlesurp10-ctrl/LLMAgentbuilder/issues"
+
+[tool.setuptools]
+packages = ["llm_agent_builder", "server"]
+
+[tool.setuptools.package-data]
+llm_agent_builder = ["templates/*.j2"]
+
+[tool.black]
+line-length = 120
+target-version = ['py39', 'py310', 'py311', 'py312']
+include = '\.pyi?$'
+
+[tool.isort]
+profile = "black"
+line_length = 120
+
+[tool.mypy]
+python_version = "3.9"
+warn_return_any = true
+warn_unused_configs = true
+disallow_untyped_defs = false
+ignore_missing_imports = true
+
+[tool.pytest.ini_options]
+testpaths = ["tests"]
+python_files = ["test_*.py"]
+python_classes = ["Test*"]
+python_functions = ["test_*"]
+addopts = "-v --tb=short"
+
+[tool.coverage.run]
+source = ["llm_agent_builder", "server"]
+omit = ["tests/*", "*/__pycache__/*"]
+
+[tool.coverage.report]
+exclude_lines = [
+    "pragma: no cover",
+    "def __repr__",
+    "raise AssertionError",
+    "raise NotImplementedError",
+    "if __name__ == .__main__.:",
+    "if TYPE_CHECKING:",
+]
+
requirements.txt CHANGED
@@ -5,4 +5,6 @@ fastapi
 uvicorn
 pydantic
 prometheus-fastapi-instrumentator
-huggingface_hub
+huggingface_hub
+slowapi
+tenacity
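The `tenacity` dependency added here backs the retry policy in server/main.py, which uses `wait_exponential(multiplier=1, min=2, max=10)`. Roughly, that schedule clamps `multiplier * 2**attempt` to the `[min, max]` range; a stdlib-only sketch (tenacity's exact attempt indexing may differ slightly):

```python
def backoff_delays(attempts, multiplier=1, min_wait=2, max_wait=10):
    """Approximate wait times produced by tenacity's wait_exponential."""
    delays = []
    for attempt in range(1, attempts + 1):
        wait = multiplier * (2 ** attempt)       # exponential growth
        delays.append(max(min_wait, min(wait, max_wait)))  # clamp to [min, max]
    return delays

print(backoff_delays(4))  # → [2, 4, 8, 10]
```

Combined with `stop_after_attempt(3)` as in server/main.py, only the first two waits would ever actually occur.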
server/main.py CHANGED
@@ -1,22 +1,33 @@
1
  import os
2
  import sys
3
- from fastapi import FastAPI, HTTPException
4
- from pydantic import BaseModel
 
 
5
  from fastapi.middleware.cors import CORSMiddleware
6
  from fastapi.staticfiles import StaticFiles
7
  from fastapi.responses import FileResponse
 
 
 
 
 
 
8
 
9
  # Add the parent directory to sys.path to import llm_agent_builder
10
  sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
11
 
12
  from llm_agent_builder.agent_builder import AgentBuilder
13
-
14
  from server.models import GenerateRequest, ProviderEnum
15
-
16
  from server.sandbox import run_in_sandbox
17
  from prometheus_fastapi_instrumentator import Instrumentator
18
 
19
- app = FastAPI()
 
 
 
 
 
20
 
21
  # Instrumentator
22
  Instrumentator().instrument(app).expose(app)
@@ -34,36 +45,65 @@ class ExecuteRequest(BaseModel):
34
  code: str
35
  task: str
36
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
37
  @app.post("/api/execute")
38
- async def execute_agent(request: ExecuteRequest):
 
 
39
  try:
40
- output = run_in_sandbox(request.code, request.task)
 
 
 
 
41
  return {"status": "success", "output": output}
 
 
 
 
42
  except Exception as e:
43
- raise HTTPException(status_code=500, detail=str(e))
44
 
45
  @app.post("/api/generate")
46
- async def generate_agent(request: GenerateRequest):
 
 
47
  try:
48
- builder = AgentBuilder()
49
- code = builder.build_agent(
50
- agent_name=request.name,
51
- prompt=request.prompt,
52
- example_task=request.task,
53
- model=request.model,
54
- provider=request.provider,
55
- stream=request.stream
56
- )
57
 
58
  # Stateless: Return code directly, do not save to disk
59
  return {
60
  "status": "success",
61
  "message": "Agent generated successfully",
62
  "code": code,
63
- "filename": f"{request.name.lower()}.py"
64
  }
 
 
65
  except Exception as e:
66
- raise HTTPException(status_code=500, detail=str(e))
67
 
68
  @app.get("/health")
69
  @app.get("/healthz")
 
import os
import sys
+ import time
+ from typing import Dict
+ import subprocess
+ from fastapi import FastAPI, HTTPException, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
+ from fastapi.middleware.trustedhost import TrustedHostMiddleware
+ from pydantic import BaseModel
+ from slowapi import Limiter, _rate_limit_exceeded_handler
+ from slowapi.util import get_remote_address
+ from slowapi.errors import RateLimitExceeded
+ from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type

# Add the parent directory to sys.path to import llm_agent_builder
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from llm_agent_builder.agent_builder import AgentBuilder
from server.models import GenerateRequest, ProviderEnum
from server.sandbox import run_in_sandbox
from prometheus_fastapi_instrumentator import Instrumentator

+ app = FastAPI(title="LLM Agent Builder API", version="1.0.0")
+
+ # Rate limiting
+ limiter = Limiter(key_func=get_remote_address)
+ app.state.limiter = limiter
+ app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# Instrumentator
Instrumentator().instrument(app).expose(app)

      code: str
      task: str

+ @retry(
+     stop=stop_after_attempt(3),
+     wait=wait_exponential(multiplier=1, min=2, max=10),
+     retry=retry_if_exception_type((ConnectionError, TimeoutError))
+ )
+ def _generate_agent_with_retry(request: GenerateRequest) -> str:
+     """Generate agent code with retry logic."""
+     builder = AgentBuilder()
+     return builder.build_agent(
+         agent_name=request.name,
+         prompt=request.prompt,
+         example_task=request.task,
+         model=request.model,
+         provider=request.provider,
+         stream=request.stream
+     )
+
@app.post("/api/execute")
+ @limiter.limit("10/minute")
+ async def execute_agent(request: Request, execute_request: ExecuteRequest):
+     """Execute agent code in a sandboxed environment."""
      try:
+         # Validate code length
+         if len(execute_request.code) > 100000:  # 100KB limit
+             raise HTTPException(status_code=400, detail="Code exceeds maximum size limit (100KB)")
+
+         output = run_in_sandbox(execute_request.code, execute_request.task)
          return {"status": "success", "output": output}
+     except ValueError as e:
+         raise HTTPException(status_code=400, detail=str(e))
+     except subprocess.TimeoutExpired:
+         raise HTTPException(status_code=408, detail="Execution timed out")
      except Exception as e:
+         raise HTTPException(status_code=500, detail=f"Execution error: {str(e)}")

@app.post("/api/generate")
+ @limiter.limit("20/minute")
+ async def generate_agent(request: Request, generate_request: GenerateRequest):
+     """Generate a new agent with retry logic and rate limiting."""
      try:
+         # Validate input lengths
+         if len(generate_request.prompt) > 10000:
+             raise HTTPException(status_code=400, detail="Prompt exceeds maximum length (10000 characters)")
+         if len(generate_request.task) > 5000:
+             raise HTTPException(status_code=400, detail="Task exceeds maximum length (5000 characters)")
+
+         code = _generate_agent_with_retry(generate_request)

          # Stateless: Return code directly, do not save to disk
          return {
              "status": "success",
              "message": "Agent generated successfully",
              "code": code,
+             "filename": f"{generate_request.name.lower()}.py"
          }
+     except ValueError as e:
+         raise HTTPException(status_code=400, detail=str(e))
      except Exception as e:
+         raise HTTPException(status_code=500, detail=f"Generation error: {str(e)}")

@app.get("/health")
@app.get("/healthz")
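The `_generate_agent_with_retry` helper above relies on tenacity's `stop_after_attempt(3)` and `wait_exponential` to ride out transient `ConnectionError`/`TimeoutError` failures. As a minimal stdlib sketch of that behavior (not the tenacity implementation itself; the decorator name and parameters here are illustrative stand-ins):

```python
import time

def retry(stop_attempts=3, base=1, min_wait=0.0, max_wait=10.0,
          exceptions=(ConnectionError, TimeoutError)):
    """Stand-in for tenacity's @retry: retry only on the listed exception
    types, sleeping base * 2**attempt seconds (clamped to [min_wait,
    max_wait]) between tries, and re-raising after the final attempt."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(stop_attempts):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == stop_attempts - 1:
                        raise  # attempts exhausted: surface the last error
                    wait = min(max(base * (2 ** attempt), min_wait), max_wait)
                    time.sleep(wait)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(stop_attempts=3, base=0, min_wait=0, max_wait=0)  # zero waits for the demo
def flaky():
    """Fails twice with a retryable error, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(flaky(), calls["n"])  # → ok 3
```

A non-retryable exception (e.g. `ValueError`) propagates immediately, which matches why the endpoint's length-validation `HTTPException` is raised before the retried call rather than inside it.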
tests/test_api.py CHANGED
@@ -9,12 +9,18 @@ def test_health_check():
9
  assert response.status_code == 200
10
  assert response.json() == {"status": "ok"}
11
 
 
 
 
 
 
12
  def test_generate_agent():
13
  payload = {
14
  "name": "TestApiAgent",
15
  "prompt": "You are a test agent.",
16
  "task": "Do nothing.",
17
- "model": "claude-3-5-sonnet-20241022"
 
18
  }
19
  response = client.post("/api/generate", json=payload)
20
  assert response.status_code == 200
@@ -24,3 +30,75 @@ def test_generate_agent():
24
  assert "filename" in data
25
  assert data["filename"] == "testapiagent.py"
26
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
  assert response.status_code == 200
  assert response.json() == {"status": "ok"}

+ def test_healthz_check():
+     response = client.get("/healthz")
+     assert response.status_code == 200
+     assert response.json() == {"status": "ok"}
+
def test_generate_agent():
    payload = {
        "name": "TestApiAgent",
        "prompt": "You are a test agent.",
        "task": "Do nothing.",
+         "model": "claude-3-5-sonnet-20241022",
+         "provider": "anthropic"
    }
    response = client.post("/api/generate", json=payload)
    assert response.status_code == 200

    assert "filename" in data
    assert data["filename"] == "testapiagent.py"

+ def test_generate_agent_huggingface():
+     payload = {
+         "name": "TestHfAgent",
+         "prompt": "You are a test agent.",
+         "task": "Do nothing.",
+         "model": "meta-llama/Meta-Llama-3-8B-Instruct",
+         "provider": "huggingface"
+     }
+     response = client.post("/api/generate", json=payload)
+     assert response.status_code == 200
+     data = response.json()
+     assert data["status"] == "success"
+     assert "TestHfAgent" in data["code"]
+
+ def test_generate_agent_invalid_model():
+     payload = {
+         "name": "TestAgent",
+         "prompt": "Test",
+         "task": "Test",
+         "model": "invalid-model",
+         "provider": "anthropic"
+     }
+     response = client.post("/api/generate", json=payload)
+     assert response.status_code == 422  # Validation error
+
+ def test_generate_agent_long_prompt():
+     payload = {
+         "name": "TestAgent",
+         "prompt": "A" * 10001,  # Exceeds 10000 char limit
+         "task": "Test",
+         "model": "claude-3-5-sonnet-20241022",
+         "provider": "anthropic"
+     }
+     response = client.post("/api/generate", json=payload)
+     assert response.status_code == 400
+     assert "exceeds maximum length" in response.json()["detail"].lower()
+
+ def test_execute_agent():
+     test_code = '''
+ class TestAgent:
+     def __init__(self, api_key):
+         self.api_key = api_key
+     def run(self, task):
+         return f"Processed: {task}"
+
+ if __name__ == '__main__':
+     import argparse
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--task", default="test")
+     args = parser.parse_args()
+     agent = TestAgent("test")
+     print(agent.run(args.task))
+ '''
+     payload = {
+         "code": test_code,
+         "task": "Hello World"
+     }
+     response = client.post("/api/execute", json=payload)
+     assert response.status_code == 200
+     data = response.json()
+     assert data["status"] == "success"
+     assert "Processed: Hello World" in data["output"]
+
+ def test_execute_agent_large_code():
+     payload = {
+         "code": "A" * 100001,  # Exceeds 100KB limit
+         "task": "Test"
+     }
+     response = client.post("/api/execute", json=payload)
+     assert response.status_code == 400
+     assert "exceeds maximum size" in response.json()["detail"].lower()
+
tests/test_cli.py CHANGED
@@ -2,20 +2,24 @@ import pytest
import subprocess
import sys
import os

def test_cli_help():
-     result = subprocess.run([sys.executable, "main.py", "--help"], capture_output=True, text=True)
    assert result.returncode == 0
-     assert "Generate an LLM agent using Anthropic API" in result.stdout

def test_cli_generate_agent(temp_output_dir):
    agent_name = "CLITestAgent"

    result = subprocess.run([
-         sys.executable, "main.py",
        "--name", agent_name,
        "--output", temp_output_dir,
-         "--model", "claude-3-test"
    ], capture_output=True, text=True)

    assert result.returncode == 0
@@ -27,4 +31,72 @@ def test_cli_generate_agent(temp_output_dir):
    with open(output_file, "r") as f:
        content = f.read()
    assert f"class {agent_name}:" in content
-     assert 'model=os.environ.get("ANTHROPIC_MODEL", "claude-3-test")' in content
import subprocess
import sys
import os
+ import json
+ from pathlib import Path

def test_cli_help():
+     result = subprocess.run([sys.executable, "-m", "llm_agent_builder.cli", "--help"], capture_output=True, text=True)
    assert result.returncode == 0
+     assert "LLM Agent Builder" in result.stdout

def test_cli_generate_agent(temp_output_dir):
    agent_name = "CLITestAgent"

    result = subprocess.run([
+         sys.executable, "-m", "llm_agent_builder.cli", "generate",
        "--name", agent_name,
        "--output", temp_output_dir,
+         "--model", "claude-3-5-sonnet-20241022",
+         "--prompt", "Test prompt",
+         "--task", "Test task"
    ], capture_output=True, text=True)

    assert result.returncode == 0

    with open(output_file, "r") as f:
        content = f.read()
    assert f"class {agent_name}:" in content
+     assert "claude-3-5-sonnet-20241022" in content
+
+ def test_cli_list_agents(temp_output_dir):
+     # First create an agent
+     agent_name = "ListTestAgent"
+     subprocess.run([
+         sys.executable, "-m", "llm_agent_builder.cli", "generate",
+         "--name", agent_name,
+         "--output", temp_output_dir,
+         "--prompt", "Test",
+         "--task", "Test"
+     ], capture_output=True, text=True)
+
+     # Then list agents
+     result = subprocess.run([
+         sys.executable, "-m", "llm_agent_builder.cli", "list",
+         "--output", temp_output_dir
+     ], capture_output=True, text=True)
+
+     assert result.returncode == 0
+     assert agent_name.lower() in result.stdout.lower()
+
+ def test_cli_batch_generate(temp_output_dir):
+     # Create a batch config file
+     config = [
+         {
+             "name": "BatchAgent1",
+             "prompt": "Test prompt 1",
+             "task": "Test task 1",
+             "model": "claude-3-5-sonnet-20241022",
+             "provider": "anthropic"
+         },
+         {
+             "name": "BatchAgent2",
+             "prompt": "Test prompt 2",
+             "task": "Test task 2",
+             "model": "claude-3-5-sonnet-20241022",
+             "provider": "anthropic"
+         }
+     ]
+
+     config_file = os.path.join(temp_output_dir, "batch_config.json")
+     with open(config_file, "w") as f:
+         json.dump(config, f)
+
+     result = subprocess.run([
+         sys.executable, "-m", "llm_agent_builder.cli", "batch",
+         config_file,
+         "--output", temp_output_dir
+     ], capture_output=True, text=True)
+
+     assert result.returncode == 0
+     assert "BatchAgent1" in result.stdout
+     assert "BatchAgent2" in result.stdout
+
+     # Verify files were created
+     assert os.path.exists(os.path.join(temp_output_dir, "batchagent1.py"))
+     assert os.path.exists(os.path.join(temp_output_dir, "batchagent2.py"))
+
+ def test_cli_invalid_agent_name():
+     result = subprocess.run([
+         sys.executable, "-m", "llm_agent_builder.cli", "generate",
+         "--name", "Invalid Name!",
+         "--prompt", "Test",
+         "--task", "Test"
+     ], capture_output=True, text=True)
+
+     assert result.returncode != 0
+     assert "Error" in result.stdout or "Error" in result.stderr
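The CLI tests above assume `llm_agent_builder.cli` exposes `generate`, `list`, and `batch` subcommands with the flags shown. A hypothetical argparse sketch of that subcommand layout (the real module's wiring is not shown in this diff and may differ):

```python
import argparse
import re

def build_parser():
    """Illustrative subcommand layout matching what the tests invoke."""
    parser = argparse.ArgumentParser(
        prog="llm_agent_builder.cli", description="LLM Agent Builder")
    sub = parser.add_subparsers(dest="command", required=True)

    gen = sub.add_parser("generate", help="generate a single agent")
    gen.add_argument("--name", required=True)
    gen.add_argument("--output", default=".")
    gen.add_argument("--model", default="claude-3-5-sonnet-20241022")
    gen.add_argument("--prompt", required=True)
    gen.add_argument("--task", required=True)

    lst = sub.add_parser("list", help="list generated agents")
    lst.add_argument("--output", default=".")

    batch = sub.add_parser("batch", help="generate agents from a JSON config")
    batch.add_argument("config")
    batch.add_argument("--output", default=".")
    return parser

def validate_name(name):
    # Mirrors test_cli_invalid_agent_name: only identifier-style names pass,
    # so "Invalid Name!" exits non-zero with an "Error" message.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise SystemExit(f"Error: invalid agent name {name!r}")

args = build_parser().parse_args(
    ["generate", "--name", "DemoAgent", "--prompt", "p", "--task", "t"])
print(args.command, args.name)  # → generate DemoAgent
```

Rejecting bad names in the CLI (rather than letting the generated filename come out malformed) is what lets the batch test rely on predictable `<name>.lower()`.py output paths.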