---
title: Coenv Environment Server
emoji: ⏱️
colorFrom: red
colorTo: pink
sdk: docker
pinned: false
app_port: 8000
base_path: /web
tags:
- openenv
---
# Coenv Environment
A Kubernetes incident-response simulation environment for OpenEnv.
The environment exposes realistic cluster state (nodes, pods, deployments, services, events) and supports operational actions such as scaling, restarting rollouts, patching resources, setting HPA, and draining nodes.
## Quick Start
The simplest way to use the Coenv environment is through the `CoEnv` class:
```python
from coenv import CoenvAction, CoEnv
# Create environment from Docker image
coenvenv = CoEnv.from_docker_image("coenv-env:latest")

try:
    # Reset with a task
    result = coenvenv.reset(task="pod_recovery")
    print(f"Objective: {result.observation.objective}")
    print(f"Pods observed: {len(result.observation.pods)}")

    # Example remediation action
    result = coenvenv.step(
        CoenvAction(
            action_type="scale",
            deployment="frontend",
            replicas=3,
        )
    )
    print(f"Step: {result.observation.step}")
    print(f"Reward: {result.reward}")
    print(f"Done: {result.done}")
finally:
    # Always clean up
    coenvenv.close()
```
That's it! The `CoEnv.from_docker_image()` method handles:
- Starting the Docker container
- Waiting for the server to be ready
- Connecting to the environment
- Container cleanup when you call `close()`
## Building the Docker Image
Before using the environment, you need to build the Docker image:
```bash
# From project root
docker build -t coenv-env:latest -f server/Dockerfile .
```
## Deploying to Hugging Face Spaces
You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:
```bash
# From the environment directory (where openenv.yaml is located)
openenv push
# Or specify options
openenv push --namespace my-org --private
```
The `openenv push` command will:
1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
2. Prepare a custom build for Hugging Face Docker space (enables web interface)
3. Upload to Hugging Face (ensuring you're logged in)
### Prerequisites
- Authenticate with Hugging Face: The command will prompt for login if not already authenticated
### Options
- `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to current directory)
- `--repo-id`, `-r`: Repository ID in format 'username/repo-name' (defaults to 'username/env-name' from openenv.yaml)
- `--base-image`, `-b`: Base Docker image to use (overrides Dockerfile FROM)
- `--private`: Deploy the space as private (default: public)
### Examples
```bash
# Push to your personal namespace (defaults to username/env-name from openenv.yaml)
openenv push
# Push to a specific repository
openenv push --repo-id my-org/my-env
# Push with a custom base image
openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest
# Push as a private space
openenv push --private
# Combine options
openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
```
After deployment, your space will be available at:
`https://huggingface.co/spaces/<repo-id>`
The deployed space includes:
- **Web Interface** at `/web` - Interactive UI for exploring the environment
- **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
- **Health Check** at `/health` - Container health monitoring
- **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions
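As an illustration, readiness can be checked against the `/health` endpoint with nothing but the standard library. This is a hedged sketch: it assumes the endpoint answers HTTP 200 when the container is healthy, which you should confirm against the actual response shape.

```python
import time
import urllib.request


def health_url(base_url: str) -> str:
    """Build the health-check URL from a space or server base URL."""
    return base_url.rstrip("/") + "/health"


def wait_until_healthy(base_url: str, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll /health until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    url = health_url(base_url)
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # server not up yet; retry after a short pause
        time.sleep(interval)
    return False
```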
## Environment Details
### Action
**CoenvAction** supports the following `action_type` values:
- `scale`
- `delete_pod`
- `patch`
- `rollout_restart`
- `set_hpa`
- `drain_node`
- `describe`
Action-specific fields include `deployment`, `replicas`, `pod_name`, `resource_type`, `name`, `patch`, `min_replicas`, `max_replicas`, `cpu_target_percent`, and `node_name`.
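One plausible pairing of `action_type` to its fields can be sketched as a lookup table. Only the `scale` row is confirmed by the Quick Start example above; the other pairings are assumptions, so treat `models.py` as the authoritative schema.

```python
# Assumed field pairings per action_type (only "scale" is confirmed
# by the Quick Start example; the rest are guesses -- see models.py).
ACTION_FIELDS = {
    "scale": ("deployment", "replicas"),
    "delete_pod": ("pod_name",),
    "patch": ("resource_type", "name", "patch"),
    "rollout_restart": ("deployment",),
    "set_hpa": ("deployment", "min_replicas", "max_replicas", "cpu_target_percent"),
    "drain_node": ("node_name",),
    "describe": ("resource_type", "name"),
}


def validate_action(action_type: str, **fields) -> bool:
    """Check that only fields known for this action_type are supplied."""
    allowed = ACTION_FIELDS.get(action_type)
    if allowed is None:
        return False
    return all(k in allowed for k in fields)


print(validate_action("scale", deployment="frontend", replicas=3))  # True
```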
### Observation
**CoenvObservation** contains a typed cluster snapshot and episode metadata:
- `nodes`, `pods`, `deployments`, `services`, `configmaps`, `hpas`, `events`
- `step` (int)
- `objective` (str)
- `reward` (float)
- `done` (bool)
- `metadata` (dict)
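An agent will typically condense this snapshot before acting. As a minimal sketch, pods can be tallied by phase; plain dicts with a hypothetical `phase` key stand in here for the typed pod objects the real observation carries.

```python
from collections import Counter


def summarize_pods(pods) -> Counter:
    """Count pods by phase across any iterable of pod records."""
    return Counter(p["phase"] for p in pods)


# Stand-in snapshot; real observations expose typed pod objects.
snapshot = [
    {"name": "frontend-abc", "phase": "Running"},
    {"name": "frontend-def", "phase": "Pending"},
    {"name": "backend-xyz", "phase": "Running"},
]
print(summarize_pods(snapshot))  # Counter({'Running': 2, 'Pending': 1})
```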
### Reward
Reward is task-dependent and based on service health progression:
- `pod_recovery`: fraction of frontend pods in Running state
- `autoscaling`: backend availability progress
- `incident`: proportion of key services restored to healthy
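The exact formulas live in `server/coenv_environment.py`; as an illustration, the `pod_recovery` reward ("fraction of frontend pods in Running state") can be sketched in pure Python:

```python
def pod_recovery_reward(pods) -> float:
    """Fraction of frontend pods in the Running phase (0.0 if none exist)."""
    frontend = [p for p in pods if p["name"].startswith("frontend")]
    if not frontend:
        return 0.0
    running = sum(1 for p in frontend if p["phase"] == "Running")
    return running / len(frontend)


# Stand-in pod records; the real environment supplies typed objects.
pods = [
    {"name": "frontend-abc", "phase": "Running"},
    {"name": "frontend-def", "phase": "CrashLoopBackOff"},
    {"name": "backend-xyz", "phase": "Running"},
]
print(pod_recovery_reward(pods))  # 0.5
```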
## Advanced Usage
### Connecting to an Existing Server
If you already have a Coenv environment server running, you can connect directly:
```python
from coenv import CoenvAction, CoEnv
# Connect to existing server
coenvenv = CoEnv(base_url="<ENV_HTTP_URL_HERE>")
# Use as normal
result = coenvenv.reset(task="incident")
result = coenvenv.step(
    CoenvAction(action_type="describe", resource_type="deployment", name="api-gateway")
)
```
Note: When connecting to an existing server, `coenvenv.close()` will NOT stop the server.
### Using the Context Manager
The client supports context manager usage for automatic connection management:
```python
from coenv import CoenvAction, CoEnv
# Connect with context manager (auto-connects and closes)
with CoEnv(base_url="http://localhost:8000") as env:
    result = env.reset(task="autoscaling")
    print(f"Reset objective: {result.observation.objective}")

    # Multiple steps with low latency
    for replicas in [3, 4, 5]:
        result = env.step(
            CoenvAction(action_type="scale", deployment="backend", replicas=replicas)
        )
        print(f"Replicas set to {replicas}, reward={result.reward}")
```
The client uses WebSocket connections for:
- **Lower latency**: No HTTP connection overhead per request
- **Persistent session**: Server maintains your environment state
- **Efficient for episodes**: Better for many sequential steps
### Concurrent WebSocket Sessions
The server supports multiple concurrent WebSocket connections. To enable this,
modify `server/app.py` to use factory mode:
```python
# In server/app.py - use factory mode for concurrent sessions
app = create_app(
    CoenvEnvironment,  # Pass class, not instance
    CoenvAction,
    CoenvObservation,
    max_concurrent_envs=4,  # Allow 4 concurrent sessions
)
```
Then multiple clients can connect simultaneously:
```python
from coenv import CoenvAction, CoEnv
from concurrent.futures import ThreadPoolExecutor
def run_episode(client_id: int):
    with CoEnv(base_url="http://localhost:8000") as env:
        result = env.reset(task="pod_recovery")
        for i in range(10):
            result = env.step(
                CoenvAction(action_type="describe", resource_type="deployment", name="frontend")
            )
        return client_id, result.observation.step

# Run 4 episodes concurrently
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(run_episode, range(4)))
```
## Development & Testing
### Direct Environment Testing
Test the environment logic directly without starting the HTTP server:
```bash
# From the environment root
python3 server/coenv_environment.py
```
This verifies that:
- Environment resets correctly
- Step executes actions properly
- State tracking works
- Rewards are calculated correctly
### Running Locally
Run the server locally for development:
```bash
uvicorn server.app:app --reload
```
## Project Structure
```
coenv/
├── .dockerignore # Docker build exclusions
├── __init__.py # Module exports
├── README.md # This file
├── openenv.yaml # OpenEnv manifest
├── pyproject.toml # Project metadata and dependencies
├── uv.lock # Locked dependencies (generated)
├── client.py # CoEnv client
├── models.py # Action and Observation models
└── server/
├── __init__.py # Server module exports
├── coenv_environment.py # Core environment logic
├── app.py # FastAPI application (HTTP + WebSocket endpoints)
└── Dockerfile # Container image definition
```