---
title: LLM Agent Builder
emoji: 🤖
colorFrom: blue
colorTo: indigo
sdk: docker
app_port: 7860
---
# LLM Agent Builder

> A powerful, production-ready tool for generating custom LLM agents via CLI, web UI, and Hugging Face Spaces.

LLM Agent Builder is a comprehensive Python application that enables developers to quickly scaffold and generate AI agents using Anthropic's Claude models or Hugging Face models. Built with FastAPI, React 19, and modern Python tooling.
## ✨ Features

- 🌐 **Multi-Provider Support**: Generate agents for Anthropic Claude or Hugging Face models
- 🎨 **Modern Web UI**: Beautiful React 19 interface with dark/light theme toggle
- 💻 **Powerful CLI**: Interactive mode, batch generation, agent testing, and listing
- 🔧 **Tool Integration**: Built-in support for tool calling and multi-step workflows
- 🛡️ **Production Ready**: Rate limiting, retry logic, input validation, and sandboxed execution
- 📦 **Easy Deployment**: Docker-ready for Hugging Face Spaces
- 🧪 **Comprehensive Testing**: Full test coverage with pytest and CI/CD
## 🚀 Quick Start

### Prerequisites

- Python 3.9 or higher
- Node.js 18+ (for web UI)
- pip

### Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/kwizzlesurp10-ctrl/LLMAgentbuilder.git
   cd LLMAgentbuilder
   ```

2. Create and activate a virtual environment:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install the package:

   ```bash
   pip install -e .
   ```

   Or install dependencies directly:

   ```bash
   pip install -r requirements.txt
   ```

4. Set up your API key by creating a `.env` file:

   ```bash
   # For Anthropic
   ANTHROPIC_API_KEY="your-anthropic-api-key-here"
   ANTHROPIC_MODEL="claude-3-5-sonnet-20241022"

   # For Hugging Face (optional)
   HUGGINGFACEHUB_API_TOKEN="your-hf-token-here"
   ```
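For reference, `.env` files are plain `KEY=value` lines. The application most likely loads them with a library such as `python-dotenv`; the sketch below is only an illustration of the format using the standard library, and `load_env` is a hypothetical helper, not part of this package:

```python
import os

def load_env(path=".env"):
    """Minimal .env loader sketch: KEY=value lines, '#' comments,
    optional surrounding quotes. Real projects typically use python-dotenv."""
    loaded = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip().strip('"').strip("'")
    os.environ.update(loaded)  # make the keys visible to the process
    return loaded
```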
## 📖 Usage

### Web Interface (Default)

The easiest way to use LLM Agent Builder is via the web interface.

1. Launch the application:

   ```bash
   python main.py
   # or
   llm-agent-builder
   ```

2. Open your browser to `http://localhost:7860`.

The web interface allows you to:
- Generate agents using a simple form
- Preview and copy generated code
- Test agents directly in the browser
- Switch between dark and light themes
### Command Line Interface

You can also use the CLI for scripting, or if you prefer the terminal.

#### Generate an Agent

Interactive mode:

```bash
llm-agent-builder generate --interactive
```

Command-line mode:

```bash
llm-agent-builder generate \
  --name "CodeReviewer" \
  --prompt "You are an expert code reviewer specializing in Python." \
  --task "Review this function for bugs and suggest improvements." \
  --model "claude-3-5-sonnet-20241022" \
  --provider "anthropic"
```
#### List Generated Agents

```bash
llm-agent-builder list

# or specify a custom output directory
llm-agent-builder list --output ./my_agents
```

#### Test an Agent

```bash
llm-agent-builder test generated_agents/codereviewer.py \
  --task "Review this code: def add(a, b): return a + b"
```
#### Batch Generation

Create a JSON config file (`agents.json`):

```json
[
  {
    "name": "DataAnalyst",
    "prompt": "You are a data analyst expert in Pandas and NumPy.",
    "task": "Analyze this CSV file and provide summary statistics.",
    "model": "claude-3-5-sonnet-20241022",
    "provider": "anthropic"
  },
  {
    "name": "CodeWriter",
    "prompt": "You are a Python programming assistant.",
    "task": "Write a function to calculate fibonacci numbers.",
    "model": "claude-3-5-sonnet-20241022",
    "provider": "anthropic"
  }
]
```

Then run:

```bash
llm-agent-builder batch agents.json
```
### Web Interface (Development Mode)

Start the backend server:

```bash
uvicorn server.main:app --reload
```

The API will be available at `http://localhost:8000`.

Start the frontend in a new terminal:

```bash
cd frontend
npm install
npm run dev
```

Then open your browser to `http://localhost:5173`.
### Features in the Web UI

- ✨ **Live Code Preview**: See generated code in real time
- 🎨 **Theme Toggle**: Switch between dark and light themes
- 📋 **Copy to Clipboard**: One-click code copying
- 🧪 **Test Agent**: Execute agents directly in the browser (sandboxed)
- 📥 **Auto-Download**: Generated agents download automatically
## 🏗️ Architecture

### Project Structure

```text
LLMAgentbuilder/
├── llm_agent_builder/        # Core package
│   ├── agent_builder.py      # AgentBuilder class with multi-step & tool support
│   ├── cli.py                # CLI with subcommands (generate, list, test, batch)
│   └── templates/            # Jinja2 templates for agent generation
│       ├── agent_template.py.j2
│       └── agent_template_hf.py.j2
├── server/                   # FastAPI backend
│   ├── main.py               # API endpoints with rate limiting & retries
│   ├── models.py             # Pydantic models for validation
│   └── sandbox.py            # Sandboxed code execution
├── frontend/                 # React 19 frontend
│   ├── src/
│   │   ├── App.jsx           # Main app with theme toggle
│   │   └── components/
│   │       ├── AgentForm.jsx   # Agent configuration form
│   │       └── CodePreview.jsx # Code preview with copy button
│   └── tailwind.config.js    # Tailwind CSS configuration
├── tests/                    # Comprehensive test suite
│   ├── test_agent_builder.py
│   ├── test_cli.py
│   └── test_api.py
├── .github/workflows/        # CI/CD workflows
│   └── ci.yml                # GitHub Actions for testing & linting
├── pyproject.toml            # Modern Python project configuration
├── requirements.txt          # Python dependencies
└── Dockerfile                # Docker configuration for deployment
```
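To illustrate the template-driven generation that `llm_agent_builder/templates/` implements with Jinja2, here is a dependency-free sketch using stdlib `string.Template` as a stand-in; the class layout and variable names are assumptions for illustration, not the package's actual template contract:

```python
from string import Template

# Stand-in for a .j2 template: substitute an agent name and system prompt
# into a Python class skeleton, producing source code as a string.
AGENT_TEMPLATE = Template(
    "class $agent_name:\n"
    "    SYSTEM_PROMPT = $prompt_literal\n"
    "\n"
    "    def run(self, task: str) -> str:\n"
    "        # a real generated agent would call the model here\n"
    '        return f"{self.SYSTEM_PROMPT} -> {task}"\n'
)

def render_agent(agent_name: str, prompt: str) -> str:
    # repr() embeds the prompt as a valid Python string literal
    return AGENT_TEMPLATE.substitute(
        agent_name=agent_name, prompt_literal=repr(prompt)
    )
```

The generated string is itself valid Python, which is what the CLI writes out to `generated_agents/`.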
## 🔧 Advanced Features

### Multi-Step Workflows

Agents can be generated with multi-step workflow capabilities:

```python
# In your generated agent
agent = MyAgent(api_key="your-key")
result = agent.run_multi_step("Complete this complex task", max_steps=5)
```
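What a method like `run_multi_step` might do under the hood can be sketched as a simple loop; `call_model` below is a stub standing in for a real Claude or Hugging Face call, and the stop condition is illustrative, not the generated agents' actual protocol:

```python
from typing import Callable, List

def run_multi_step(
    call_model: Callable[[List[str]], str],
    task: str,
    max_steps: int = 5,
) -> List[str]:
    """Hypothetical multi-step loop: feed the running transcript back to
    the model until it signals completion or max_steps is reached."""
    transcript = [task]
    for _ in range(max_steps):
        reply = call_model(transcript)
        transcript.append(reply)
        if "DONE" in reply:  # illustrative stop condition
            break
    return transcript
```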
### Tool Integration

Generate agents with tool calling support:

```python
builder = AgentBuilder()
code = builder.build_agent(
    agent_name="ToolAgent",
    prompt="You are an agent with tools",
    example_task="Use tools to complete tasks",
    tools=[
        {
            "name": "search_web",
            "description": "Search the web",
            "input_schema": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"}
                }
            }
        }
    ],
    enable_multi_step=True
)
```
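At runtime, a generated agent has to route each model-requested tool call to a Python function. Below is a minimal dispatch sketch using the `search_web` schema above with a stubbed implementation; the generated code's actual dispatch logic may differ:

```python
from typing import Any, Callable, Dict

# Map tool names to Python callables; search_web is a stub for illustration.
TOOL_REGISTRY: Dict[str, Callable[..., Any]] = {
    "search_web": lambda query: f"results for {query!r}",
}

def dispatch_tool(tool_use: Dict[str, Any]) -> Any:
    """Look up the requested tool by name and invoke it with the model's input."""
    name = tool_use["name"]
    if name not in TOOL_REGISTRY:
        raise KeyError(f"unknown tool: {name}")
    return TOOL_REGISTRY[name](**tool_use["input"])
```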
### API Endpoints

The FastAPI backend provides:

- `POST /api/generate` - Generate a new agent (rate limited: 20/min)
- `POST /api/execute` - Execute agent code in sandbox (rate limited: 10/min)
- `GET /health` - Health check endpoint
- `GET /healthz` - Kubernetes health check
- `GET /metrics` - Prometheus metrics
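As an example of calling the generation endpoint, the sketch below builds a JSON body from the same fields the CLI accepts. The field names are assumptions; `server/models.py` holds the authoritative Pydantic schema:

```python
import json

def build_generate_payload(name: str, prompt: str, task: str,
                           model: str = "claude-3-5-sonnet-20241022",
                           provider: str = "anthropic") -> str:
    # Assumed request body for POST /api/generate; verify against server/models.py.
    payload = {
        "name": name,
        "prompt": prompt,
        "task": task,
        "model": model,
        "provider": provider,
    }
    return json.dumps(payload)

# To send it (requires the `requests` package and a running server):
#   requests.post("http://localhost:8000/api/generate",
#                 data=build_generate_payload("Demo", "You are helpful.", "Say hi"),
#                 headers={"Content-Type": "application/json"})
```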
## 🧪 Testing

Run the test suite:

```bash
# All tests
pytest

# With coverage
pytest --cov=llm_agent_builder --cov=server --cov-report=html

# Specific test file
pytest tests/test_cli.py -v
```

### Type Checking

```bash
mypy llm_agent_builder server
```

### Linting

```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run linters
flake8 llm_agent_builder server tests
black --check llm_agent_builder server tests
isort --check-only llm_agent_builder server tests
```
## 🚢 Deployment

### Hugging Face Spaces

1. Create a new Space on Hugging Face
2. Select **Docker** as the SDK
3. Push the repository:

   ```bash
   git push https://huggingface.co/spaces/your-username/your-space
   ```

The `Dockerfile` automatically builds the React frontend and serves it via FastAPI.

### Docker

Build and run locally:

```bash
docker build -t llm-agent-builder .
docker run -p 8000:8000 -e ANTHROPIC_API_KEY=your-key llm-agent-builder
```
## 📚 Supported Models

### Anthropic Claude

- `claude-3-5-sonnet-20241022` (default)
- `claude-3-5-haiku-20241022`
- `claude-3-opus-20240229`
- `claude-3-haiku-20240307`

### Hugging Face

- `meta-llama/Meta-Llama-3-8B-Instruct`
- `mistralai/Mistral-7B-Instruct-v0.3`
## 🤝 Contributing

Contributions are welcome! Please follow these steps:

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes
4. Add tests for new functionality
5. Ensure all tests pass (`pytest`)
6. Run linting (`black`, `isort`, `flake8`)
7. Commit your changes (`git commit -m 'Add amazing feature'`)
8. Push to the branch (`git push origin feature/amazing-feature`)
9. Open a Pull Request
### Development Setup

```bash
# Install in development mode with dev dependencies
pip install -e ".[dev]"

# Install frontend dependencies
cd frontend && npm install

# Set up pre-commit hooks (if configured)
pre-commit install
```
## 📄 License

This project is licensed under the MIT License; see the LICENSE file for details.

## 🙏 Acknowledgments

- Built with Anthropic Claude
- Powered by FastAPI and React
- Deployed on Hugging Face Spaces

## 🔗 Additional Resources

- Anthropic API Documentation
- Hugging Face Hub Documentation
- FastAPI Documentation
- React Documentation
## 🐛 Troubleshooting

### Common Issues

**Issue**: `ANTHROPIC_API_KEY` not found

- **Solution**: Ensure your `.env` file is in the project root and contains `ANTHROPIC_API_KEY=your-key`

**Issue**: Frontend build fails

- **Solution**: Ensure Node.js 18+ is installed and run `npm install` in the `frontend/` directory

**Issue**: Rate limit errors

- **Solution**: The API has rate limiting (20 requests/min for generation, 10/min for execution). Wait a moment and retry.

**Issue**: Agent execution times out

- **Solution**: Check that your agent code is valid Python and doesn't have infinite loops. The sandbox has a 30-second timeout.

**Issue**: Hugging Face Spaces build fails with an "openvscode-server" download error

- **Cause**: This is a known issue with Hugging Face Spaces' dev-mode feature. The injected vscode stage tries to download openvscode-server from GitHub, which can fail due to network issues.
- **Solutions**:
  - **Disable dev mode (recommended)**: In your Space settings, disable "Dev Mode" if you don't need the VS Code interface
  - **Retry the build**: This is often a temporary network issue on HF Spaces' side
  - **Wait and retry**: HF Spaces infrastructure issues are usually resolved within a few hours
- **Note**: Our `Dockerfile` includes all necessary tools (`wget`, `tar`, `git`) for dev-mode compatibility, but we cannot control the injected stages that HF Spaces adds.
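The 30-second execution timeout mentioned above can be approximated with a subprocess guard. This is an illustrative sketch, not the actual `server/sandbox.py` implementation, which likely adds further isolation:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 30.0) -> str:
    """Run Python code in a child process and kill it after `timeout` seconds.
    A real sandbox would also restrict filesystem, network, and memory access."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,  # raises subprocess.TimeoutExpired when exceeded
    )
    return result.stdout
```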
## 🚧 Roadmap

- Support for OpenAI models
- Agent marketplace/sharing
- Visual workflow builder
- Agent versioning
- Advanced tool library
- Multi-agent orchestration

Made with ❤️ by the LLM Agent Builder Team