---
title: CUGA Agent
emoji: 🤖
colorFrom: purple
colorTo: blue
sdk: docker
sdk_version: '4.36'
app_file: app.py
pinned: false
app_port: 7860
description: Configurable Generalist Agent, leader in AppWorld Benchmark
short_description: Configurable Generalist Agent, leader in AppWorld Benchmark
---
# CUGA: The Configurable Generalist Agent

Start with a generalist. Customize for your domain. Deploy faster!
Building a domain-specific enterprise agent from scratch is complex and requires significant effort: agent and tool orchestration, planning logic, safety and alignment policies, evaluation of performance/cost tradeoffs, and ongoing improvement. CUGA is a state-of-the-art generalist agent designed with enterprise needs in mind, so you can focus on configuring your domain tools, policies, and workflows.
## Why CUGA?

### 🏆 Benchmark Performance

CUGA achieves state-of-the-art performance on leading benchmarks:

- 🥇 **#1 on AppWorld**, a benchmark of 750 real-world tasks across 457 APIs
- 🥇 **Top-tier on WebArena** (#1 from 02/25 to 09/25), a complex benchmark for autonomous web agents across application domains
### ✨ Key Features & Capabilities

- **High-performing generalist agent**: Benchmarked on complex web and API tasks. Combines best-of-breed agentic patterns (e.g., planner-executor, CodeAct) with structured planning and smart variable management to prevent hallucination and handle complexity
- **Configurable reasoning modes**: Balance performance against cost/latency with flexible modes ranging from fast heuristics to deep planning, optimizing for your specific task requirements
- **Flexible agent and tool integration**: Seamlessly integrate tools via OpenAPI specs, MCP servers, and LangChain, enabling rapid connection to REST APIs, custom protocols, and Python functions
- **Integrates with Langflow**: Low-code visual build experience for designing and deploying agent workflows without extensive coding
- **Open source and composable**: Built with modularity in mind; CUGA itself can be exposed as a tool to other agents, enabling nested reasoning and multi-agent collaboration. Evolving toward enterprise-grade reliability
- **Configurable policy and human-in-the-loop instructions (experimental)**: Configure policy-aware instructions and approval gates to improve alignment and ensure safe agent behavior in enterprise contexts
- **Save-and-reuse capabilities (experimental)**: Capture and reuse successful execution paths (plans, code, and trajectories) for faster, more consistent behavior across repeated tasks
Explore the Roadmap to see what's ahead, or join the 🤝 Call for the Community to get involved.
## 🎬 CUGA in Action

### Hybrid Task Execution

Watch CUGA seamlessly combine web and API operations in a single workflow:

**Example task:** get top account by revenue from digital sales, then add it to current page
https://github.com/user-attachments/assets/0cef8264-8d50-46d9-871a-ab3cefe1dde5
#### Would you like to test this? (Advanced Demo)

Experience CUGA's hybrid capabilities by combining API calls with web interactions:

**Setup steps:**

1. Switch to hybrid mode:

   ```toml
   # Edit ./src/cuga/settings.toml and change:
   mode = 'hybrid'  # under [advanced_features] section
   ```

2. Install browser API support:

   ```bash
   playwright install chromium
   ```

   - Installs the Playwright browser API and the Chromium browser
   - The `playwright` installer should already be included after installing with Quick Start

3. Start the demo:

   ```bash
   cuga start demo
   ```

4. Enable the browser extension:

   - Click the extension puzzle icon in your browser
   - Toggle the CUGA extension to activate it
   - This will open the CUGA side panel

5. Open the test application by navigating to the Sales app

6. Try the hybrid task:

   ```text
   get top account by revenue from digital sales then add it to current page
   ```
🎯 **What you'll see:** CUGA will fetch data from the Digital Sales API and then interact with the web page to add the account information directly to the current page, demonstrating seamless API-to-web workflow integration!
### Human-in-the-Loop Task Execution

Watch CUGA pause for human approval during critical decision points:

**Example task:** get best accounts
https://github.com/user-attachments/assets/d103c299-3280-495a-ba66-373e72554e78
#### Would you like to try this? (HITL Demo)

Experience CUGA's Human-in-the-Loop capabilities, where the agent pauses for human approval at key decision points:

**Setup steps:**

1. Enable HITL mode:

   ```toml
   # Edit ./src/cuga/settings.toml and ensure:
   api_planner_hitl = true  # under [advanced_features] section
   ```

2. Start the demo:

   ```bash
   cuga start demo
   ```

3. Try the HITL task:

   ```text
   get best accounts
   ```
🎯 **What you'll see:** CUGA will pause at critical decision points, showing you the planned actions and waiting for your approval before proceeding.
## 🚀 Quick Start

### 📋 Prerequisites

- Python 3.12+ (download here)
- uv package manager (installation guide)
### 🔧 Optional: Local Digital Sales API Setup (only if remote endpoint fails)

The demo comes pre-configured with the Digital Sales API (see the 📖 API Docs).

Only follow these steps if you encounter issues with the remote Digital Sales endpoint:

```bash
# Start the Digital Sales API locally on port 8000
uv run digital_sales_openapi

# Then update ./src/cuga/backend/tools_env/registry/config/mcp_servers.yaml to use localhost:
# Change the digital_sales URL from the remote endpoint to:
# http://localhost:8000
```
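For orientation, a registry entry in `mcp_servers.yaml` pairs a service name with its endpoint. The snippet below is a hypothetical shape only; consult the real file for the exact schema:

```yaml
# Hypothetical shape for illustration -- check mcp_servers.yaml for the actual schema
digital_sales:
  url: http://localhost:8000   # swapped in for the remote endpoint
```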
```bash
# In a terminal, clone the repository and navigate into it
git clone https://github.com/cuga-project/cuga-agent.git
cd cuga-agent

# 1. Create and activate a virtual environment
uv venv --python=3.12 && source .venv/bin/activate

# 2. Install dependencies
uv sync

# 3. Set up environment variables
# Create a .env file with your API keys
echo "OPENAI_API_KEY=your-openai-api-key-here" > .env

# 4. Start the demo
cuga start demo
# Chrome will open automatically at https://localhost:7860
# Then try sending your task to CUGA: 'get top account by revenue from digital sales'

# 5. View agent trajectories (optional)
cuga viz
# This launches a web-based dashboard for visualizing and analyzing
# agent execution trajectories, decision-making, and tool usage
```
## 🤖 LLM Configuration: Advanced Options

Refer to `.env.example` for detailed examples.
CUGA supports multiple LLM providers with flexible configuration options. You can configure models through TOML files or override specific settings using environment variables.
### Supported Platforms

- **OpenAI**: GPT models via the OpenAI API (also supports LiteLLM via base URL override)
- **IBM WatsonX**: IBM's enterprise LLM platform
- **Azure OpenAI**: Microsoft's Azure OpenAI service
- **RITS**: Internal IBM research platform
- **OpenRouter**: LLM API gateway provider
### Configuration Priority

1. Environment variables (highest priority)
2. TOML configuration (medium priority)
3. Default values (lowest priority)
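As a quick illustration of this precedence, here is a standalone sketch (not CUGA's actual loader; the values are hypothetical):

```python
import os

def resolve_setting(key, toml_config, defaults):
    """Resolve a setting: environment variable > TOML value > default."""
    env_value = os.environ.get(key)
    if env_value is not None:
        return env_value            # highest priority
    if key in toml_config:
        return toml_config[key]     # medium priority
    return defaults.get(key)        # lowest priority

# Hypothetical values for illustration only.
defaults = {"MODEL_NAME": "gpt-4o"}
toml_config = {"MODEL_NAME": "gpt-4o-mini"}

print(resolve_setting("MODEL_NAME", toml_config, defaults))  # gpt-4o-mini (TOML beats default)
os.environ["MODEL_NAME"] = "gpt-4.1"
print(resolve_setting("MODEL_NAME", toml_config, defaults))  # gpt-4.1 (env var beats both)
```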
### Option 1: OpenAI 🟢

**Setup instructions:**

1. Create an account at platform.openai.com
2. Generate an API key from your API keys page
3. Add to your `.env` file:

   ```bash
   # OpenAI Configuration
   OPENAI_API_KEY=sk-...your-key-here...
   AGENT_SETTING_CONFIG="settings.openai.toml"

   # Optional overrides
   MODEL_NAME=gpt-4o                          # Override model name
   OPENAI_BASE_URL=https://api.openai.com/v1  # Override base URL
   OPENAI_API_VERSION=2024-08-06              # Override API version
   ```

**Default values:**

- Model: `gpt-4o`
- API version: OpenAI's default API version
- Base URL: OpenAI's default endpoint
### Option 2: IBM WatsonX 🔵

**Setup instructions:**

1. Access IBM WatsonX
2. Create a project and get your credentials:
   - Project ID
   - API key
   - Region/URL
3. Add to your `.env` file:

   ```bash
   # WatsonX Configuration
   WATSONX_API_KEY=your-watsonx-api-key
   WATSONX_PROJECT_ID=your-project-id
   WATSONX_URL=https://us-south.ml.cloud.ibm.com  # or your region
   AGENT_SETTING_CONFIG="settings.watsonx.toml"

   # Optional override
   MODEL_NAME=meta-llama/llama-4-maverick-17b-128e-instruct-fp8  # Override model for all agents
   ```

**Default values:**

- Model: `meta-llama/llama-4-maverick-17b-128e-instruct-fp8`
### Option 3: Azure OpenAI

**Setup instructions:**

1. Add to your `.env` file:

   ```bash
   AGENT_SETTING_CONFIG="settings.azure.toml"  # Default config uses ETE
   AZURE_OPENAI_API_KEY="<your azure apikey>"
   AZURE_OPENAI_ENDPOINT="<your azure endpoint>"
   OPENAI_API_VERSION="2024-08-01-preview"
   ```
### Option 4: LiteLLM Support

CUGA supports LiteLLM through the OpenAI configuration by overriding the base URL. Add to your `.env` file:

```bash
# LiteLLM Configuration (using OpenAI settings)
OPENAI_API_KEY=your-api-key
AGENT_SETTING_CONFIG="settings.openai.toml"

# Override for LiteLLM
MODEL_NAME=Azure/gpt-4o                            # Override model name
OPENAI_BASE_URL=https://your-litellm-endpoint.com  # Override base URL
OPENAI_API_VERSION=2024-08-06                      # Override API version
```
### Option 5: OpenRouter Support

**Setup instructions:**

1. Create an account at openrouter.ai
2. Generate an API key from your account settings
3. Add to your `.env` file:

   ```bash
   # OpenRouter Configuration
   OPENROUTER_API_KEY=your-openrouter-api-key
   AGENT_SETTING_CONFIG="settings.openrouter.toml"
   OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"

   # Optional override
   MODEL_NAME=openai/gpt-4o  # Override model name
   ```
### Configuration Files

CUGA uses TOML configuration files located in `src/cuga/configurations/models/`:

- `settings.openai.toml`: OpenAI configuration (also supports LiteLLM via base URL override)
- `settings.watsonx.toml`: WatsonX configuration
- `settings.azure.toml`: Azure OpenAI configuration
- `settings.openrouter.toml`: OpenRouter configuration
Each file contains agent-specific model settings that can be overridden by environment variables.
💡 **Tip:** Want to use your own tools or add your MCP tools? Check out `src/cuga/backend/tools_env/registry/config/mcp_servers.yaml` for examples of how to configure custom tools and APIs, including those for digital sales.
## Configurations

### 🔒 Running with a Secure Code Sandbox

CUGA supports isolated code execution using Docker/Podman containers for enhanced security.

1. Install a container runtime: download and install Rancher Desktop or Docker.

2. Install sandbox dependencies:

   ```bash
   uv sync --group sandbox
   ```

3. Start with the remote sandbox enabled:

   ```bash
   cuga start demo --sandbox
   ```

   This automatically configures CUGA to use Docker/Podman for code execution instead of local execution.

4. Test your sandbox setup (optional):

   ```bash
   # Test local sandbox (default)
   cuga test-sandbox

   # Test remote sandbox with Docker/Podman
   cuga test-sandbox --remote
   ```

   You should see the output:

   ```text
   ('test succeeded\n', {})
   ```

**Note:** Without the `--sandbox` flag, CUGA uses local Python execution (default), which is faster but provides less isolation.
### ☁️ Running with the E2B Cloud Sandbox

CUGA supports E2B for cloud-based code execution in secure, ephemeral sandboxes. This provides better isolation than local execution while being faster than Docker/Podman containers.

**Prerequisites:**

1. Get an E2B API key.

2. Set up the E2B template:

   ```bash
   # Install the E2B CLI
   npm install -g @e2b/cli

   # Log in with your API key
   e2b auth login

   # Create a template (one-time setup)
   # This creates a 'cuga-langchain' template that CUGA uses
   e2b template build --name cuga-langchain
   ```

3. Install E2B dependencies:

   ```bash
   uv sync --group e2b
   ```

4. Configure the environment by adding to your `.env` file:

   ```bash
   E2B_API_KEY=your-e2b-api-key-here
   ```
#### Exposing the Registry to E2B (Required)

E2B runs in the cloud and needs to call your local API registry to execute tools. You need to expose your local registry publicly using a tunneling service such as ngrok.

**Option 1: Expose the registry directly (port 8001)**

Best if you have multiple ports available:

```bash
# In a separate terminal, start an ngrok tunnel to the registry
ngrok http 8001

# You'll get a public URL like: https://abc123.ngrok.io
# Copy this URL
```

Then edit `./src/cuga/settings.toml`:

```toml
[server_ports]
function_call_host = "https://abc123.ngrok.io"  # Your ngrok URL
```
**Option 2: Expose the CUGA port with a proxy (port 7860)**

Best if you're restricted to a single port; CUGA will proxy calls to the registry:

```bash
# In a separate terminal, start an ngrok tunnel to CUGA
ngrok http 7860

# You'll get a public URL like: https://xyz789.ngrok.io
# Copy this URL
```

Then edit `./src/cuga/settings.toml`:

```toml
[server_ports]
function_call_host = "https://xyz789.ngrok.io"  # Your ngrok URL
```

CUGA automatically proxies `/functions/call` requests to the registry when using the CUGA port.
#### Enable E2B in Settings

Edit `./src/cuga/settings.toml`:

```toml
[advanced_features]
e2b_sandbox = true
e2b_sandbox_mode = "per-session"  # Options: "per-session" | "single" | "per-call"
e2b_sandbox_ttl = 600             # Cache TTL in seconds (10 minutes)
```

**Sandbox modes:**

- `per-session` (default): One sandbox per conversation thread, cached for reuse
- `single`: A single shared sandbox across all threads (most cost-effective)
- `per-call`: A new sandbox for each execution (most isolated, highest cost)
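The per-session mode with a TTL can be pictured as a small cache keyed by thread ID. This is a simplified sketch of the idea, not E2B's or CUGA's actual code; `create_sandbox` is a stand-in:

```python
import time

def create_sandbox(thread_id):
    # Stand-in for launching a real E2B sandbox.
    return f"sandbox-for-{thread_id}"

class SandboxCache:
    """Per-session sandbox reuse with a time-to-live (cf. e2b_sandbox_ttl)."""

    def __init__(self, ttl_seconds=600):
        self.ttl = ttl_seconds
        self._cache = {}  # thread_id -> (sandbox, created_at)

    def get(self, thread_id):
        entry = self._cache.get(thread_id)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                      # reuse cached sandbox
        sandbox = create_sandbox(thread_id)      # expired or missing: make a new one
        self._cache[thread_id] = (sandbox, time.monotonic())
        return sandbox

cache = SandboxCache(ttl_seconds=600)
a = cache.get("thread-1")
b = cache.get("thread-1")   # reused within the TTL
print(a is b)               # True
```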
**Start CUGA with E2B:**

```bash
# Make sure ngrok is running in another terminal
cuga start demo
```

E2B will automatically execute code in cloud sandboxes. You'll see logs indicating "CODE SENT TO E2B SANDBOX" when E2B is active.
**Troubleshooting:**

- **Error: "function_call_host not configured"**: Make sure you've set `function_call_host` in settings.toml with your ngrok URL
- **Tool execution fails**: Verify ngrok is running and the URL in settings.toml matches your ngrok URL
- **Connection timeout**: Check that your firewall allows ngrok connections
**Benefits of E2B:**

- ✅ No Docker/Podman required
- ✅ Faster than container-based sandboxing
- ✅ Cloud-native with automatic scaling
- ✅ Better isolation than local execution
- ✅ Supports per-session caching for cost optimization
**Note:** E2B is a paid service with a free tier. Check e2b.dev/pricing for details.
### ⚙️ Reasoning Modes: Switch Between Fast/Balanced/Accurate Modes

Available modes under `./src/cuga`:

| Mode | File | Description |
|---|---|---|
| `fast` | `./configurations/modes/fast.toml` | Optimized for speed |
| `balanced` | `./configurations/modes/balanced.toml` | Balance between speed and precision (default) |
| `accurate` | `./configurations/modes/accurate.toml` | Optimized for precision |
| `custom` | `./configurations/modes/custom.toml` | User-defined settings |
#### Configuration

```text
configurations/
├── modes/fast.toml
├── modes/balanced.toml
├── modes/accurate.toml
└── modes/custom.toml
```

Edit `settings.toml`:

```toml
[features]
cuga_mode = "fast"  # or "balanced", "accurate", or "custom"
```

Documentation: `./docs/flags.html`
### 🎯 Task Mode Configuration: Switch Between API/Web/Hybrid Modes

#### Available Task Modes

| Mode | Description |
|---|---|
| `api` | API-only mode: executes API tasks (default) |
| `web` | Web-only mode: executes web tasks using the browser extension |
| `hybrid` | Hybrid mode: executes both API tasks and web tasks using the browser extension |
#### How Task Modes Work

**API mode (`mode = 'api'`)**

- Opens tasks in a regular web browser
- Best for API/tools-focused workflows and testing

**Web mode (`mode = 'web'`)**

- Interface runs inside a browser extension (available next to the browser)
- Optimized for web-specific tasks and interactions
- Direct access to web page content and controls

**Hybrid mode (`mode = 'hybrid'`)**

- Opens inside the browser extension, like web mode
- Can execute both API/tools tasks and web page tasks simultaneously
- Starts from a configurable URL defined in `demo_mode.start_url`
- The most versatile mode for complex workflows combining web and API operations
#### Configuration

Edit `./src/cuga/settings.toml`:

```toml
[demo_mode]
start_url = "https://opensource-demo.orangehrmlive.com/web/index.php/auth/login"  # Starting URL for hybrid mode

[advanced_features]
mode = 'api'  # 'api', 'web', or 'hybrid'
```
### 📝 Special Instructions Configuration

#### How It Works

Each `.md` file contains specialized instructions that are automatically integrated into CUGA's internal prompts when that component is active. Simply edit the markdown files to customize behavior for each node type.

Available instruction sets: `answer`, `api_planner`, `code_agent`, `plan_controller`, `reflection`, `shortlister`, `task_decomposition`
#### Configuration

```text
configurations/
└── instructions/
    ├── instructions.toml
    ├── default/
    │   ├── answer.md
    │   ├── api_planner.md
    │   ├── code_agent.md
    │   ├── plan_controller.md
    │   ├── reflection.md
    │   ├── shortlister.md
    │   └── task_decomposition.md
    └── [other instruction sets]/
```

Edit `configurations/instructions/instructions.toml`:

```toml
[instructions]
instruction_set = "default"  # or any instruction set above
```
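To make the mechanism concrete, here is a sketch of how per-component instruction files might be stitched into a prompt. This is illustrative only (CUGA's real loader lives elsewhere); the function name and directory are hypothetical, and a temporary directory stands in for `configurations/instructions/default/`:

```python
import tempfile
from pathlib import Path

def load_instructions(instructions_dir, component):
    """Read <component>.md from the active instruction set, if present."""
    path = Path(instructions_dir) / f"{component}.md"
    return path.read_text() if path.exists() else ""

# Demonstrate with a throwaway directory standing in for the 'default' set.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "api_planner.md").write_text("Prefer read-only APIs first.")
    extra = load_instructions(d, "api_planner")
    prompt = f"You are the API planner.\n\nAdditional instructions:\n{extra}"
    print(prompt)
```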
### 🔹 Optional: Run with Memory

1. Install memory dependencies:

   ```bash
   uv sync --group memory
   ```

2. Set `enable_memory = true` in `settings.toml`

3. Run:

   ```bash
   cuga start memory
   ```
Watch CUGA with Memory enabled
[LINK]
#### Would you like to test this? (Advanced Demo)

**Setup steps:**

1. Set the `enable_memory` flag to true
2. Run `cuga start memory`
3. Run `cuga start demo_crm --sample-memory-data`
4. Go to the CUGA webpage and type: `Identify the common cities between my cuga_workspace/cities.txt and cuga_workspace/company.txt`. Here you should see the errors related to CodeAgent. Wait a minute for tips to be generated; tips generation can be confirmed from the terminal where `cuga start memory` was run
5. Re-run the same utterance; it should finish in fewer steps
## 🔧 Advanced Usage

### 💾 Save & Reuse

**Setup**

- Change `./src/cuga/settings.toml`: `cuga_mode = "save_reuse_fast"`
- Run: `cuga start demo`

**Demo steps**

- First run: `get top account by revenue`
  - This is a new flow (first time)
  - Wait for the task to finish
  - Approve to save the workflow
  - Provide another example to help generalize the flow, e.g. `get top 2 accounts by revenue`
- The flow will now be saved:
  - This may take some time
  - The flow will be saved successfully
- Verify reuse: `get top 4 accounts by revenue`
  - Should run faster using the saved workflow
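The generalization step above (why `get top 2 accounts` helps `get top 4 accounts` reuse the same flow) can be pictured as caching plans under a normalized task key. This is only a sketch of the idea, not CUGA's implementation:

```python
import re

SAVED_FLOWS = {}

def normalize(task):
    """Generalize numbers so 'top 2 accounts' and 'top 4 accounts' share one flow."""
    return re.sub(r"\d+", "<N>", task.lower()).strip()

def run(task):
    key = normalize(task)
    if key in SAVED_FLOWS:
        return f"reused flow: {SAVED_FLOWS[key]}"   # fast path
    SAVED_FLOWS[key] = f"plan for '{key}'"          # save after first success
    return f"new flow: {SAVED_FLOWS[key]}"

print(run("get top 2 accounts by revenue"))   # new flow
print(run("get top 4 accounts by revenue"))   # reused flow
```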
### 🔧 Adding Tools: Comprehensive Examples

CUGA supports three types of tool integrations. Each approach has its own use cases and benefits:

#### 📋 Tool Types Overview

| Tool Type | Best For | Configuration | Runtime Loading |
|---|---|---|---|
| OpenAPI | REST APIs, existing services | `mcp_servers.yaml` | ❌ Build |
| MCP | Custom protocols, complex integrations | `mcp_servers.yaml` | ❌ Build |
| LangChain | Python functions, rapid prototyping | Direct import | ✅ Runtime |
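LangChain tools are plain Python functions, which is why they can be loaded at runtime via direct import. The sketch below illustrates that idea with a minimal stand-in for a `@tool`-style decorator (no LangChain import; the function and data are hypothetical):

```python
RUNTIME_TOOLS = {}

def tool(func):
    """Minimal stand-in for a LangChain-style @tool decorator: register a plain function."""
    RUNTIME_TOOLS[func.__name__] = func
    return func

@tool
def top_account_by_revenue(accounts):
    """Return the account dict with the highest revenue."""
    return max(accounts, key=lambda a: a["revenue"])

# The tool is available immediately, with no build step.
accounts = [{"name": "Acme", "revenue": 120}, {"name": "Globex", "revenue": 340}]
print(RUNTIME_TOOLS["top_account_by_revenue"](accounts)["name"])  # Globex
```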
#### 📚 Additional Resources

- Tool registry: `./src/cuga/backend/tools_env/registry/README.md`
- Comprehensive example with different tools + MCP: `./docs/examples/cuga_with_runtime_tools/README.md`
- CUGA as MCP: `./docs/examples/cuga_as_mcp/README.md`
### Test Scenarios (E2E)

The test suite covers various execution modes across different scenarios:

| Scenario | Fast Mode | Balanced Mode | Accurate Mode | Save & Reuse Mode |
|---|---|---|---|---|
| Find VP Sales High-Value Accounts | ✅ | ✅ | ✅ | - |
| Get top account by revenue | ✅ | ✅ | ✅ | ✅ |
| List my accounts | ✅ | ✅ | ✅ | - |
#### Additional Test Categories

**Unit tests**

- **Variables Manager**: Core functionality, metadata handling, singleton pattern, reset operations
- **Value Preview**: Intelligent truncation, nested structure preservation, length-aware formatting

**Integration tests**

- **API Response Handling**: Error cases, validation, timeout scenarios, parameter extraction
- **Registry Services**: OpenAPI integration, MCP server functionality, mixed service configurations
- **Tool Environment**: Service loading, parameter handling, function calling, isolation testing
### 🧪 Running Tests

Focused suites:

```bash
./src/scripts/run_tests.sh
```
## 📊 Evaluation

For information on how to evaluate, see the CUGA Evaluation Documentation.

## 📚 Resources

- 📘 Example applications
- 📧 Contact: CUGA Team
## 🤝 Call for the Community

CUGA is open source because we believe trustworthy enterprise agents must be built together.

Here's how you can help:

- **Share use cases**: Show us how you'd use CUGA in real workflows.
- **Request features**: Suggest capabilities that would make it more useful.
- **Report bugs**: Help improve stability by filing clear, reproducible reports.

All contributions are welcome through GitHub Issues, whether it's sharing use cases, requesting features, or reporting bugs!
## Roadmap

Among other directions, we're exploring the following:

- **Policy support**: procedural SOPs, domain knowledge, input/output guards, context- and tool-based constraints
- **Performance improvements**: dynamic reasoning strategies that adapt to task complexity
## Before Submitting a PR

Please follow the contribution guide in CONTRIBUTING.md.