---
title: OpenCode Env
emoji: πŸ–₯️
colorFrom: indigo
colorTo: pink
sdk: docker
app_port: 8000
pinned: false
---
# opencode-openenv
An [OpenEnv](https://github.com/meta-pytorch/OpenEnv)-compatible environment
for the [OpenCode](https://opencode.ai/) CLI coding agent. Each call runs
the agent inside an isolated E2B sandbox against an OpenAI-compatible LLM
endpoint of your choice, executes a user-supplied bash verifier, and
returns the scalar reward plus artifacts.
Layout mirrors [`jupyter-agent-openenv`](https://huggingface.co/spaces/AdithyaSK/jupyter-agent-openenv).
## The one tool
| Property | Value |
|---|---|
| Framework | OpenEnv `MCPEnvironment` |
| Execution backend | E2B sandbox |
| Server | FastAPI + Gradio UI at `/` |
| Client | `OpenCodeEnv(MCPToolClient)` |

| Tool | Description |
|---|---|
| `run_rollout` | Spawn an E2B sandbox, run `opencode run` against your LLM endpoint, run your verifier, return reward + trace + workdir files |
## Environment variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `E2B_API_KEY` | Yes | - | API key from [e2b.dev](https://e2b.dev/) |
| `ENABLE_WEB_INTERFACE` | No | `true` | Enable Gradio UI at `/` |
| `MAX_CONCURRENT_ENVS` | No | `4` | Max concurrent sandbox sessions |
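The table above can be read programmatically; a minimal preflight sketch (the `load_config` helper is illustrative, not part of the package, and only applies the documented defaults):

```python
import os

def load_config(env=os.environ):
    """Read the variables from the table above, applying the documented defaults."""
    if "E2B_API_KEY" not in env:
        # The only required variable; get a key at https://e2b.dev/
        raise RuntimeError("E2B_API_KEY is not set")
    return {
        "api_key": env["E2B_API_KEY"],
        "web_ui": env.get("ENABLE_WEB_INTERFACE", "true").lower() == "true",
        "max_envs": int(env.get("MAX_CONCURRENT_ENVS", "4")),
    }
```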
## Run locally
**Prerequisites:** Python 3.10+, [uv](https://docs.astral.sh/uv/)
```bash
cd trl-internal/environments/opencode/openenv
uv sync
E2B_API_KEY=e2b_... uv run uvicorn server.app:app --host 0.0.0.0 --port 8000
```
The server starts at `http://localhost:8000`; the Gradio UI is mounted at
the root path.
**Verify it works:**
```bash
curl http://localhost:8000/health
# {"status":"healthy"}
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":1,"params":{}}'
```
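When scripting against the server, the same health check can be polled until the server is ready; a small sketch using only the standard library (the retry count and delay are arbitrary choices, not package defaults):

```python
import json
import time
import urllib.request

def wait_for_server(url="http://localhost:8000/health", retries=10, delay=1.0):
    """Poll GET /health until it reports {"status": "healthy"} or give up."""
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if json.loads(resp.read()).get("status") == "healthy":
                    return True
        except OSError:
            # Server not up yet (connection refused / timeout); retry.
            pass
        time.sleep(delay)
    return False
```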
## Run with Docker
```bash
docker build -t opencode-openenv .
docker run -p 8000:8000 -e E2B_API_KEY=e2b_... opencode-openenv
```
## Python client usage
```python
from opencode_env_server import OpenCodeEnv

with OpenCodeEnv(base_url="http://localhost:8000") as env:
    env.reset()
    result = env.run_rollout(
        vllm_url="https://your-llm-host/v1",
        model="Qwen/Qwen3.5-4B",
        instruction="Write fizzbuzz.py in the current directory.",
        test_script=open("my_tests/fizzbuzz.sh").read(),
        task_id="fizzbuzz_001",
        mode="transparent_proxy",
        disable_thinking=True,
    )
    print(result.reward, len(result.proxy_turns))
```
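Since `test_script` is plain bash text, a verifier can also be defined inline instead of read from a file; a hypothetical sketch (the exit-status-to-reward convention and the specific checks are illustrative, not part of the package):

```python
# Held as a Python string so it can be passed directly as test_script=
# to run_rollout above. Assumed convention: exit status 0 means the
# task passed.
VERIFIER = """#!/usr/bin/env bash
set -euo pipefail
# Fail unless the agent actually created the requested file...
test -f fizzbuzz.py
# ...and line 15 of its output is "fizzbuzz".
python3 fizzbuzz.py | sed -n '15p' | grep -qi fizzbuzz
"""
```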
## REST API
Standard OpenEnv endpoints are available:
```
GET /health # {"status": "healthy"}
GET /metadata # env name, version, description
GET /schema # action + observation JSON schemas
POST /reset # start new episode
POST /step # execute an action
POST /mcp # JSON-RPC 2.0 for MCP tool calls
GET / # Gradio UI
```
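The `POST /mcp` endpoint also accepts `tools/call`; a hedged sketch of the JSON-RPC 2.0 body for `run_rollout`, assuming the argument names mirror the Python client above (confirm against `GET /schema` before relying on them):

```python
import json

# JSON-RPC 2.0 body for invoking run_rollout over POST /mcp.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "run_rollout",
        "arguments": {
            # Argument names assumed to match the Python client section.
            "vllm_url": "https://your-llm-host/v1",
            "model": "Qwen/Qwen3.5-4B",
            "instruction": "Write fizzbuzz.py in the current directory.",
            "test_script": "#!/usr/bin/env bash\ntest -f fizzbuzz.py\n",
            "task_id": "fizzbuzz_001",
        },
    },
}
body = json.dumps(call)
# POST `body` to http://localhost:8000/mcp with Content-Type: application/json.
```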
## Project structure
```
opencode-openenv/
β”œβ”€β”€ __init__.py # Package exports
β”œβ”€β”€ client.py # OpenCodeEnv(MCPToolClient)
β”œβ”€β”€ models.py # OpenCodeState, RolloutTurn, RolloutResult
β”œβ”€β”€ openenv.yaml # OpenEnv manifest
β”œβ”€β”€ pyproject.toml # Dependencies
β”œβ”€β”€ .env.example # Environment variable template
β”œβ”€β”€ Dockerfile # Multi-stage uv build on openenv-base
└── server/
β”œβ”€β”€ app.py # FastAPI + Gradio mount
β”œβ”€β”€ opencode_environment.py # MCPEnvironment implementation
β”œβ”€β”€ gradio_ui.py # Interactive UI
└── requirements.txt # Pip fallback deps
```