---
title: OpenCode Env
emoji: 🖥️
colorFrom: indigo
colorTo: pink
sdk: docker
app_port: 8000
pinned: false
---
# opencode-openenv
An [OpenEnv](https://github.com/meta-pytorch/OpenEnv)-compatible environment
for the [OpenCode](https://opencode.ai/) CLI coding agent. Each `run_rollout`
call runs the agent inside an isolated E2B sandbox against an OpenAI-compatible
LLM endpoint of your choice, executes a user-supplied bash verifier, and
returns a scalar reward plus artifacts.
Layout mirrors [`jupyter-agent-openenv`](https://huggingface.co/spaces/AdithyaSK/jupyter-agent-openenv).
## The one tool
| Property | Value |
|---|---|
| Framework | OpenEnv `MCPEnvironment` |
| Execution backend | E2B sandbox |
| Server | FastAPI + Gradio UI at `/` |
| Client | `OpenCodeEnv(MCPToolClient)` |

| Tool | Description |
|---|---|
| `run_rollout` | Spawn an E2B sandbox, run `opencode run` against your LLM endpoint, run your verifier, return reward + trace + workdir files |
## Environment variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `E2B_API_KEY` | Yes | - | API key from [e2b.dev](https://e2b.dev/) |
| `ENABLE_WEB_INTERFACE` | No | `true` | Enable Gradio UI at `/` |
| `MAX_CONCURRENT_ENVS` | No | `4` | Max concurrent sandbox sessions |
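For reference, the variables above can be parsed like this; the names and defaults match the table, but the parsing logic is an illustrative sketch, not the server's actual implementation:

```python
import os


def load_config(env=os.environ):
    """Read the environment variables from the table above (illustrative only)."""
    api_key = env.get("E2B_API_KEY")
    if not api_key:
        raise RuntimeError("E2B_API_KEY is required (get one at https://e2b.dev/)")
    return {
        "e2b_api_key": api_key,
        "enable_web_interface": env.get("ENABLE_WEB_INTERFACE", "true").lower() == "true",
        "max_concurrent_envs": int(env.get("MAX_CONCURRENT_ENVS", "4")),
    }
```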
## Run locally
**Prerequisites:** Python 3.10+, [uv](https://docs.astral.sh/uv/)
```bash
cd trl-internal/environments/opencode/openenv
uv sync
E2B_API_KEY=e2b_... uv run uvicorn server.app:app --host 0.0.0.0 --port 8000
```
The server starts at `http://localhost:8000`; the Gradio UI is mounted at
the root path.
**Verify it works:**
```bash
curl http://localhost:8000/health
# {"status":"healthy"}
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"tools/list","id":1,"params":{}}'
```
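The same JSON-RPC 2.0 envelope that the curl call sends to `/mcp` can be built programmatically; this small helper (not part of the package) just assembles the request body:

```python
import json


def mcp_request(method: str, params=None, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body for the /mcp endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "id": req_id,
        "params": params or {},
    })
```

POST the result of `mcp_request("tools/list")` to `/mcp` with a `Content-Type: application/json` header to reproduce the curl example above.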
## Run with Docker
```bash
docker build -t opencode-openenv .
docker run -p 8000:8000 -e E2B_API_KEY=e2b_... opencode-openenv
```
## Python client usage
```python
from opencode_env_server import OpenCodeEnv

with OpenCodeEnv(base_url="http://localhost:8000") as env:
    env.reset()
    result = env.run_rollout(
        vllm_url="https://your-llm-host/v1",
        model="Qwen/Qwen3.5-4B",
        instruction="Write fizzbuzz.py in the current directory.",
        test_script=open("my_tests/fizzbuzz.sh").read(),
        task_id="fizzbuzz_001",
        mode="transparent_proxy",
        disable_thinking=True,
    )
    print(result.reward, len(result.proxy_turns))
```
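`test_script` is the user-supplied bash verifier run inside the sandbox. A hypothetical verifier for the fizzbuzz task might look like the string below; the reward contract assumed here (exit status 0 means pass) is an illustration, not the documented API:

```python
# Hypothetical verifier script, passed as the `test_script` argument.
# Exit-status-based scoring is an assumption; check the tool docs for the real contract.
FIZZBUZZ_VERIFIER = r"""#!/bin/sh
# Compare the first three lines of fizzbuzz.py's output with the expected sequence.
set -e
out=$(python3 fizzbuzz.py | head -n 3)
expected=$(printf '1\n2\nFizz')
[ "$out" = "$expected" ]
"""
```

You can pass such a string directly as `test_script=FIZZBUZZ_VERIFIER` instead of reading it from a file on disk.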
## REST API
Standard OpenEnv endpoints are available:
```
GET /health # {"status": "healthy"}
GET /metadata # env name, version, description
GET /schema # action + observation JSON schemas
POST /reset # start new episode
POST /step # execute an action
POST /mcp # JSON-RPC 2.0 for MCP tool calls
GET / # Gradio UI
```
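A stdlib-only liveness probe against these endpoints might look like this; the `/health` path and response shape come from the listing above, everything else is generic:

```python
import json
from urllib.request import urlopen


def check_health(base_url: str) -> bool:
    """Return True if GET /health reports {"status": "healthy"}."""
    with urlopen(f"{base_url}/health") as resp:
        return json.load(resp).get("status") == "healthy"
```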
## Project structure
```
opencode-openenv/
├── __init__.py               # Package exports
├── client.py                 # OpenCodeEnv(MCPToolClient)
├── models.py                 # OpenCodeState, RolloutTurn, RolloutResult
├── openenv.yaml              # OpenEnv manifest
├── pyproject.toml            # Dependencies
├── .env.example              # Environment variable template
├── Dockerfile                # Multi-stage uv build on openenv-base
└── server/
    ├── app.py                    # FastAPI + Gradio mount
    ├── opencode_environment.py   # MCPEnvironment implementation
    ├── gradio_ui.py              # Interactive UI
    └── requirements.txt          # Pip fallback deps
```