---
title: LLM Agent Builder
emoji: 🤖
colorFrom: blue
colorTo: indigo
sdk: docker
app_port: 7860
---

LLM Agent Builder

A powerful, production-ready tool for generating custom LLM agents via CLI, web UI, and Hugging Face Spaces

LLM Agent Builder is a comprehensive Python application that enables developers to quickly scaffold and generate AI agents using Anthropic's Claude models or Hugging Face models. Built with FastAPI, React 19, and modern Python tooling.

✨ Features

  • 🚀 Multi-Provider Support: Generate agents for Anthropic Claude or Hugging Face models
  • 🎨 Modern Web UI: Beautiful React 19 interface with dark/light theme toggle
  • 💻 Powerful CLI: Interactive mode, batch generation, agent testing, and listing
  • 🔧 Tool Integration: Built-in support for tool calling and multi-step workflows
  • 🛡️ Production Ready: Rate limiting, retry logic, input validation, and sandboxed execution
  • 📦 Easy Deployment: Docker-ready for Hugging Face Spaces
  • 🧪 Comprehensive Testing: Full test coverage with pytest and CI/CD

🚀 Quick Start

Prerequisites

  • Python 3.11 or higher
  • Node.js 18+ (for web UI)
  • pip

Installation

  1. Clone the repository:

    git clone https://github.com/kwizzlesurp10-ctrl/LLMAgentbuilder.git
    cd LLMAgentbuilder
    
  2. Create and activate a virtual environment:

    python3 -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
    
  3. Install the package:

    pip install -e .
    

    Or install dependencies directly:

    pip install -r requirements.txt
    
  4. Set up your API key:

    Create a .env file:

    # For Anthropic
    ANTHROPIC_API_KEY="your-anthropic-api-key-here"
    ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"
    
    # For Hugging Face (optional)
    HUGGINGFACEHUB_API_TOKEN="your-hf-token-here"
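
To confirm the keys are being picked up, here is a minimal sketch using only the standard library (the application itself may load `.env` via python-dotenv or similar; `load_env` and the temp filename are illustrative, not part of the package):

```python
import os

def load_env(path=".env"):
    """Parse simple KEY="value" lines from a .env file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks and comment lines
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Example: write a throwaway .env-style file and load it
with open(".env.example.tmp", "w") as fh:
    fh.write('# For Anthropic\nANTHROPIC_API_KEY="your-anthropic-api-key-here"\n')
load_env(".env.example.tmp")
print(os.environ["ANTHROPIC_API_KEY"])  # your-anthropic-api-key-here
```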
    

📖 Usage

Web Interface (Default)

The easiest way to use LLM Agent Builder is via the web interface.

  1. Launch the application:

    python main.py
    # or
    llm-agent-builder
    
  2. Access the UI:

    Open your browser to http://localhost:7860.

    The web interface allows you to:

    • Generate agents using a simple form
    • Preview and copy generated code
    • Test agents directly in the browser
    • Switch between dark and light themes

Command Line Interface

The CLI remains available for scripting, automation, and terminal-centric workflows.

Generate an Agent

Interactive Mode:

llm-agent-builder generate --interactive

Command-Line Mode:

llm-agent-builder generate \
  --name "CodeReviewer" \
  --prompt "You are an expert code reviewer specializing in Python." \
  --task "Review this function for bugs and suggest improvements." \
  --model "claude-3-5-sonnet-20241022" \
  --provider "anthropic"

List Generated Agents

llm-agent-builder list
# or specify custom output directory
llm-agent-builder list --output ./my_agents

Test an Agent

llm-agent-builder test generated_agents/codereviewer.py --task "Review this code: def add(a, b): return a + b"

Batch Generation

Create a JSON config file (agents.json):

[
  {
    "name": "DataAnalyst",
    "prompt": "You are a data analyst expert in Pandas and NumPy.",
    "task": "Analyze this CSV file and provide summary statistics.",
    "model": "claude-3-5-sonnet-20241022",
    "provider": "anthropic"
  },
  {
    "name": "CodeWriter",
    "prompt": "You are a Python programming assistant.",
    "task": "Write a function to calculate fibonacci numbers.",
    "model": "claude-3-5-sonnet-20241022",
    "provider": "anthropic"
  }
]

Then run:

llm-agent-builder batch agents.json
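
Before a long batch run, it can help to sanity-check the config file. A minimal sketch, assuming the field names shown in the example above (`validate_config` is an illustrative helper, not part of the package):

```python
import json

REQUIRED = {"name", "prompt", "task", "model", "provider"}

def validate_config(path):
    """Return the parsed agent list, raising if any entry is missing a field."""
    with open(path) as fh:
        agents = json.load(fh)
    for i, agent in enumerate(agents):
        missing = REQUIRED - agent.keys()
        if missing:
            raise ValueError(f"agents[{i}] missing fields: {sorted(missing)}")
    return agents

# Example with a throwaway config file
with open("agents.example.json", "w") as fh:
    json.dump([{"name": "DataAnalyst",
                "prompt": "You are a data analyst.",
                "task": "Summarize this CSV.",
                "model": "claude-3-5-sonnet-20241022",
                "provider": "anthropic"}], fh)
agents = validate_config("agents.example.json")
print(len(agents))  # 1
```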

Web Interface (Development Mode)

  1. Start the Backend Server:

    uvicorn server.main:app --reload
    

    The API will be available at http://localhost:8000.

  2. Start the Frontend:

    Open a new terminal:

    cd frontend
    npm install
    npm run dev
    

    Open your browser to http://localhost:5173.

Features in Web UI

  • ✨ Live Code Preview: See generated code in real-time
  • 🎨 Theme Toggle: Switch between dark and light themes
  • 📋 Copy to Clipboard: One-click code copying
  • 🧪 Test Agent: Execute agents directly in the browser (sandboxed)
  • 📥 Auto-Download: Generated agents automatically download

πŸ—οΈ Architecture

Project Structure

LLMAgentbuilder/
├── llm_agent_builder/      # Core package
│   ├── agent_builder.py    # AgentBuilder class with multi-step & tool support
│   ├── cli.py              # CLI with subcommands (generate, list, test, batch)
│   └── templates/          # Jinja2 templates for agent generation
│       ├── agent_template.py.j2
│       └── agent_template_hf.py.j2
├── server/                 # FastAPI backend
│   ├── main.py             # API endpoints with rate limiting & retries
│   ├── models.py           # Pydantic models for validation
│   └── sandbox.py          # Sandboxed code execution
├── frontend/               # React 19 frontend
│   ├── src/
│   │   ├── App.jsx         # Main app with theme toggle
│   │   └── components/
│   │       ├── AgentForm.jsx    # Agent configuration form
│   │       └── CodePreview.jsx  # Code preview with copy button
│   └── tailwind.config.js  # Tailwind CSS configuration
├── tests/                  # Comprehensive test suite
│   ├── test_agent_builder.py
│   ├── test_cli.py
│   └── test_api.py
├── .github/workflows/      # CI/CD workflows
│   └── ci.yml              # GitHub Actions for testing & linting
├── pyproject.toml          # Modern Python project configuration
├── requirements.txt        # Python dependencies
└── Dockerfile              # Docker configuration for deployment

🔧 Advanced Features

Multi-Step Workflows

Agents can be generated with multi-step workflow capabilities:

# In your generated agent
agent = MyAgent(api_key="your-key")
result = agent.run_multi_step("Complete this complex task", max_steps=5)

Tool Integration

Generate agents with tool calling support:

builder = AgentBuilder()
code = builder.build_agent(
    agent_name="ToolAgent",
    prompt="You are an agent with tools",
    example_task="Use tools to complete tasks",
    tools=[
        {
            "name": "search_web",
            "description": "Search the web",
            "input_schema": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"}
                }
            }
        }
    ],
    enable_multi_step=True
)

API Endpoints

The FastAPI backend provides:

  • POST /api/generate - Generate a new agent (rate limited: 20/min)
  • POST /api/execute - Execute agent code in sandbox (rate limited: 10/min)
  • GET /health - Health check endpoint
  • GET /healthz - Kubernetes health check
  • GET /metrics - Prometheus metrics
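
As a sketch of how a client might talk to the generation endpoint, the request can be built with the standard library. The payload field names below mirror the CLI flags and are an assumption, not the documented schema; the request is constructed but not sent:

```python
import json
import urllib.request

def build_generate_request(base_url, name, prompt, task,
                           model="claude-3-5-sonnet-20241022",
                           provider="anthropic"):
    """Build (but do not send) a POST /api/generate request."""
    payload = {"name": name, "prompt": prompt, "task": task,
               "model": model, "provider": provider}
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST")

req = build_generate_request("http://localhost:8000", "CodeReviewer",
                             "You are an expert code reviewer.",
                             "Review this function.")
print(req.full_url)  # http://localhost:8000/api/generate
# To actually send it: urllib.request.urlopen(req)
```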

🧪 Testing

Run the test suite:

# All tests
pytest

# With coverage
pytest --cov=llm_agent_builder --cov=server --cov-report=html

# Specific test file
pytest tests/test_cli.py -v

Type Checking

mypy llm_agent_builder server

Linting

# Install dev dependencies
pip install -e ".[dev]"

# Run linters
flake8 llm_agent_builder server tests
black --check llm_agent_builder server tests
isort --check-only llm_agent_builder server tests

🚒 Deployment

Hugging Face Spaces

  1. Create a new Space on Hugging Face

  2. Select Docker as the SDK

  3. Push the repository:

    git push https://huggingface.co/spaces/your-username/your-space
    

    The Dockerfile automatically builds the React frontend and serves it via FastAPI.

Docker

Build and run locally:

docker build -t llm-agent-builder .
docker run -p 8000:8000 -e ANTHROPIC_API_KEY=your-key llm-agent-builder

📊 Supported Models

Anthropic Claude

  • claude-3-5-sonnet-20241022 (Default)
  • claude-3-5-haiku-20241022
  • claude-3-opus-20240229
  • claude-3-haiku-20240307

Hugging Face

  • meta-llama/Meta-Llama-3-8B-Instruct
  • mistralai/Mistral-7B-Instruct-v0.3

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Add tests for new functionality
  5. Ensure all tests pass (pytest)
  6. Run linting (black, isort, flake8)
  7. Commit your changes (git commit -m 'Add amazing feature')
  8. Push to the branch (git push origin feature/amazing-feature)
  9. Open a Pull Request

Development Setup

# Install in development mode with dev dependencies
pip install -e ".[dev]"

# Install frontend dependencies
cd frontend && npm install

# Run pre-commit hooks (if configured)
pre-commit install

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

πŸ“š Additional Resources

πŸ› Troubleshooting

Common Issues

Issue: ANTHROPIC_API_KEY not found

  • Solution: Ensure your .env file is in the project root and contains ANTHROPIC_API_KEY=your-key

Issue: Frontend build fails

  • Solution: Ensure Node.js 18+ is installed and run npm install in the frontend/ directory

Issue: Rate limit errors

  • Solution: The API has rate limiting (20 requests/min for generation, 10/min for execution). Wait a moment and retry.
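
If you are calling the API from a script, retrying with exponential backoff handles rate limits gracefully. A generic client-side sketch (not the server's implementation; `with_backoff` and the `RuntimeError` stand-in for an HTTP 429 error are illustrative):

```python
import time

def with_backoff(call, retries=3, base_delay=1.0):
    """Retry `call` with exponential backoff between attempts."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # substitute the HTTP-429 error your client raises
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Demo: a fake call that fails twice, then succeeds
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok
```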

Issue: Agent execution times out

  • Solution: Check that your agent code is valid Python and doesn't have infinite loops. The sandbox has a 30-second timeout.
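
The sandbox's 30-second limit behaves like a subprocess timeout. A minimal sketch of the mechanism (illustrative only; this is not the actual `server/sandbox.py`, and `run_limited` is a hypothetical name):

```python
import subprocess
import sys

def run_limited(code, timeout=30):
    """Run a snippet in a subprocess, killing it after `timeout` seconds."""
    try:
        proc = subprocess.run([sys.executable, "-c", code],
                              capture_output=True, text=True, timeout=timeout)
        return proc.stdout
    except subprocess.TimeoutExpired:
        return "error: execution timed out"

print(run_limited("print(2 + 2)", timeout=5).strip())  # 4
print(run_limited("while True: pass", timeout=1))      # error: execution timed out
```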

Issue: Hugging Face Spaces build fails with "openvscode-server" download error

  • Cause: This is a known issue with Hugging Face Spaces' dev-mode feature. The injected vscode stage tries to download openvscode-server from GitHub, which can fail due to network issues.
  • Solutions:
    1. Disable dev-mode (recommended): In your Space settings, disable "Dev Mode" if you don't need the VS Code interface
    2. Retry the build: This is often a temporary network issue on HF Spaces' side
    3. Wait and retry: HF Spaces infrastructure issues are usually resolved within a few hours
  • Note: Our Dockerfile includes all necessary tools (wget, tar, git) for dev-mode compatibility, but we cannot control the injected stages that HF Spaces adds.

📈 Roadmap

  • Support for OpenAI models
  • Agent marketplace/sharing
  • Visual workflow builder
  • Agent versioning
  • Advanced tool library
  • Multi-agent orchestration

Made with ❤️ by the LLM Agent Builder Team