Ram Narayanan committed on
Commit c0578b7 · 1 Parent(s): dfe76e9

Adding base setup of the OpenEnv

README.md DELETED
@@ -1,11 +0,0 @@
- ---
- title: Voice Agent
- emoji: 🌍
- colorFrom: gray
- colorTo: gray
- sdk: docker
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
.gitattributes → customer_env/.gitattributes RENAMED
File without changes
customer_env/README.md ADDED
@@ -0,0 +1,255 @@
+ ---
+ title: Customer Env Environment Server
+ emoji: 🕰️
+ colorFrom: red
+ colorTo: blue
+ sdk: docker
+ pinned: false
+ app_port: 8000
+ base_path: /web
+ tags:
+ - openenv
+ ---
+
+ # Customer Env Environment
+
+ A banking customer-support environment (a POMDP): the agent converses with an LLM-simulated customer whose intent is hidden, can call tools, and is scored by an LLM judge when the episode ends.
+
+ ## Quick Start
+
+ The simplest way to use the Customer Env environment is through the `CustomerEnv` class:
+
+ ```python
+ from customer_env import CustomerAction, CustomerEnv
+
+ try:
+     # Create environment from Docker image
+     env = CustomerEnv.from_docker_image("customer_env-env:latest")
+
+     # Reset: the customer opens the call
+     result = env.reset()
+     print(f"Customer: {result.observation.customer_reply}")
+
+     # Speak to the customer
+     result = env.step(CustomerAction(action_type="speak", content="How can I help you today?"))
+     print(f"Customer: {result.observation.customer_reply}")
+     print(f"  → Reward: {result.reward}")
+
+     # Call a tool
+     result = env.step(CustomerAction(action_type="tool_call", content="lookup_account"))
+     print(f"Tool response: {result.observation.tool_response}")
+
+ finally:
+     # Always clean up
+     env.close()
+ ```
+
+ That's it! The `CustomerEnv.from_docker_image()` method handles:
+ - Starting the Docker container
+ - Waiting for the server to be ready
+ - Connecting to the environment
+ - Container cleanup when you call `close()`
+
+ ## Building the Docker Image
+
+ Before using the environment, you need to build the Docker image:
+
+ ```bash
+ # From project root
+ docker build -t customer_env-env:latest -f server/Dockerfile .
+ ```
+
+ ## Deploying to Hugging Face Spaces
+
+ You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:
+
+ ```bash
+ # From the environment directory (where openenv.yaml is located)
+ openenv push
+
+ # Or specify options
+ openenv push --namespace my-org --private
+ ```
+
+ The `openenv push` command will:
+ 1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
+ 2. Prepare a custom build for a Hugging Face Docker Space (enables the web interface)
+ 3. Upload to Hugging Face (ensuring you're logged in)
+
+ ### Prerequisites
+
+ - Authenticate with Hugging Face: the command will prompt for login if you are not already authenticated
+
+ ### Options
+
+ - `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to the current directory)
+ - `--repo-id`, `-r`: Repository ID in the format 'username/repo-name' (defaults to 'username/env-name' from openenv.yaml)
+ - `--base-image`, `-b`: Base Docker image to use (overrides the Dockerfile FROM)
+ - `--private`: Deploy the Space as private (default: public)
+
+ ### Examples
+
+ ```bash
+ # Push to your personal namespace (defaults to username/env-name from openenv.yaml)
+ openenv push
+
+ # Push to a specific repository
+ openenv push --repo-id my-org/my-env
+
+ # Push with a custom base image
+ openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest
+
+ # Push as a private space
+ openenv push --private
+
+ # Combine options
+ openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
+ ```
+
+ After deployment, your space will be available at:
+ `https://huggingface.co/spaces/<repo-id>`
+
+ The deployed space includes:
+ - **Web Interface** at `/web` - Interactive UI for exploring the environment
+ - **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
+ - **Health Check** at `/health` - Container health monitoring
+ - **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions
+
+ ## Environment Details
+
+ ### Action
+ **CustomerAction**: What the agent does on each turn
+ - `action_type` (str) - One of `'speak'`, `'tool_call'`, or `'end_call'`
+ - `content` (str) - The spoken text, or the name of the tool to call
+ - `tool_args` (dict) - Arguments for the tool call (defaults to empty)
+
+ ### Observation
+ **CustomerObservation**: The customer's response and episode metadata
+ - `customer_reply` (str | None) - What the customer said
+ - `tool_response` (str | None) - Result of the tool call, if any
+ - `conversation_history` (str) - Full transcript of the episode
+ - `reward` (float) - Reward received for this step
+ - `done` (bool) - Whether the episode has ended
+ - `metadata` (dict) - Additional info such as the step count (plus the hidden intent and judge reasoning once the episode ends)
+
+ ### Reward
+ Per-step shaping plus a final judged score:
+ - Valid tool call (`lookup_account`) → +0.5; unknown tool → -0.5
+ - Each spoken turn → -0.1 (to encourage efficiency)
+ - When the episode ends (`end_call` or 15 steps), an LLM judge scores the transcript from -5.0 to +10.0 and the score is added to the final step reward
+
+ ## Advanced Usage
+
+ ### Connecting to an Existing Server
+
+ If you already have a Customer Env environment server running, you can connect directly:
+
+ ```python
+ from customer_env import CustomerAction, CustomerEnv
+
+ # Connect to existing server
+ env = CustomerEnv(base_url="<ENV_HTTP_URL_HERE>")
+
+ # Use as normal
+ result = env.reset()
+ result = env.step(CustomerAction(action_type="speak", content="Hello!"))
+ ```
+
+ Note: When connecting to an existing server, `env.close()` will NOT stop the server.
+
+ ### Using the Context Manager
+
+ The client supports context manager usage for automatic connection management:
+
+ ```python
+ from customer_env import CustomerAction, CustomerEnv
+
+ # Connect with context manager (auto-connects and closes)
+ with CustomerEnv(base_url="http://localhost:8000") as env:
+     result = env.reset()
+     print(f"Customer: {result.observation.customer_reply}")
+     # Multiple steps with low latency
+     for msg in ["Hello", "Could I get your account name?", "Let me look that up."]:
+         result = env.step(CustomerAction(action_type="speak", content=msg))
+         print(f"Customer: {result.observation.customer_reply}")
+ ```
+
+ The client uses WebSocket connections for:
+ - **Lower latency**: No HTTP connection overhead per request
+ - **Persistent session**: Server maintains your environment state
+ - **Efficient for episodes**: Better for many sequential steps
+
+ ### Concurrent WebSocket Sessions
+
+ The server supports multiple concurrent WebSocket connections. To enable this,
+ modify `server/app.py` to use factory mode:
+
+ ```python
+ # In server/app.py - use factory mode for concurrent sessions
+ app = create_app(
+     CustomerEnvironment,  # Pass class, not instance
+     CustomerAction,
+     CustomerObservation,
+     max_concurrent_envs=4,  # Allow 4 concurrent sessions
+ )
+ ```
+
+ Then multiple clients can connect simultaneously:
+
+ ```python
+ from concurrent.futures import ThreadPoolExecutor
+
+ from customer_env import CustomerAction, CustomerEnv
+
+ def run_episode(client_id: int):
+     with CustomerEnv(base_url="http://localhost:8000") as env:
+         result = env.reset()
+         for i in range(10):
+             result = env.step(CustomerAction(action_type="speak", content=f"Client {client_id}, step {i}"))
+         return client_id, result.done
+
+ # Run 4 episodes concurrently
+ with ThreadPoolExecutor(max_workers=4) as executor:
+     results = list(executor.map(run_episode, range(4)))
+ ```
+
+ ## Development & Testing
+
+ ### Direct Environment Testing
+
+ Test the environment logic directly without starting the HTTP server:
+
+ ```bash
+ # From the customer_env directory
+ python3 server/customer_env.py
+ ```
+
+ This verifies that:
+ - The environment resets correctly
+ - Step executes actions properly
+ - State tracking works
+ - Rewards are calculated correctly
+
+ ### Running Locally
+
+ Run the server locally for development:
+
+ ```bash
+ uvicorn server.app:app --reload
+ ```
+
+ ## Project Structure
+
+ ```
+ customer_env/
+ ├── .dockerignore                # Docker build exclusions
+ ├── __init__.py                  # Module exports
+ ├── README.md                    # This file
+ ├── openenv.yaml                 # OpenEnv manifest
+ ├── pyproject.toml               # Project metadata and dependencies
+ ├── uv.lock                      # Locked dependencies (generated)
+ ├── client.py                    # CustomerEnv client
+ ├── models.py                    # Action and Observation models
+ └── server/
+     ├── __init__.py              # Server module exports
+     ├── basic_scenarios.csv      # Seed scenarios (intent, persona, opening line)
+     ├── customer_env.py          # Core environment logic
+     ├── app.py                   # FastAPI application (HTTP + WebSocket endpoints)
+     └── Dockerfile               # Container image definition
+ ```
customer_env/__init__.py ADDED
@@ -0,0 +1,16 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Customer Env Environment."""
+
+ from .client import CustomerEnv
+ from .models import CustomerAction, CustomerObservation
+
+ __all__ = [
+     "CustomerAction",
+     "CustomerObservation",
+     "CustomerEnv",
+ ]
customer_env/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (431 Bytes)
customer_env/__pycache__/client.cpython-310.pyc ADDED
Binary file (2.06 kB)
customer_env/__pycache__/models.cpython-310.pyc ADDED
Binary file (1.36 kB)
 
customer_env/client.py ADDED
@@ -0,0 +1,63 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+
+ """Customer Env Environment Client."""
+
+ from typing import Any, Dict
+
+ from openenv.core import EnvClient
+ from openenv.core.client_types import StepResult
+ from openenv.core.env_server.types import State
+
+ try:
+     # Package import (e.g. `from customer_env import CustomerEnv`)
+     from .models import CustomerAction, CustomerObservation
+ except ImportError:
+     # Flat layout (e.g. inside the Docker image, where /app/env is on PYTHONPATH)
+     from models import CustomerAction, CustomerObservation
+
+
+ class CustomerEnv(EnvClient[CustomerAction, CustomerObservation, State]):
+     """
+     Client for the Customer Env Environment (Banking POMDP).
+
+     This client maintains a persistent WebSocket connection to the environment server,
+     enabling efficient multi-step interactions with lower latency.
+     """
+
+     def _step_payload(self, action: CustomerAction) -> Dict[str, Any]:
+         """Convert a CustomerAction to the JSON payload for a step message."""
+         return {
+             "action_type": action.action_type,
+             "content": action.content,
+             "tool_args": action.tool_args,
+         }
+
+     def _parse_result(self, payload: Dict[str, Any]) -> StepResult[CustomerObservation]:
+         """Parse a server response into a StepResult[CustomerObservation]."""
+         obs_data = payload.get("observation", {})
+
+         observation = CustomerObservation(
+             customer_reply=obs_data.get("customer_reply"),
+             tool_response=obs_data.get("tool_response"),
+             conversation_history=obs_data.get("conversation_history", ""),
+             done=payload.get("done", False),
+             reward=payload.get("reward", 0.0),
+             metadata=obs_data.get("metadata", {}),
+         )
+
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward", 0.0),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict[str, Any]) -> State:
+         """Parse a server response into a State object."""
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
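The response parsing above can be exercised without a running server. The sketch below mirrors `_parse_result` using plain dicts; `parse_result` is a hypothetical helper for illustration, not part of the client:

```python
# Standalone sketch of the client's _parse_result logic, using plain dicts
# instead of openenv's StepResult/CustomerObservation types.
def parse_result(payload: dict) -> dict:
    obs_data = payload.get("observation", {})
    return {
        "customer_reply": obs_data.get("customer_reply"),
        "tool_response": obs_data.get("tool_response"),
        "conversation_history": obs_data.get("conversation_history", ""),
        "reward": payload.get("reward", 0.0),
        "done": payload.get("done", False),
    }

sample = {
    "observation": {"customer_reply": "What is this charge?", "conversation_history": "Customer: ..."},
    "reward": -0.1,
    "done": False,
}
result = parse_result(sample)
print(result["customer_reply"])  # → What is this charge?
print(result["reward"])          # → -0.1
```

Note the defaults: missing keys fall back to `None`, `""`, `0.0`, or `False` rather than raising, which keeps the client tolerant of partial payloads.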
customer_env/models.py ADDED
@@ -0,0 +1,16 @@
+ from typing import Any, Dict, Optional
+
+ from pydantic import BaseModel, Field
+
+
+ class CustomerAction(BaseModel):
+     action_type: str = Field(..., description="Must be 'speak', 'tool_call', or 'end_call'")
+     content: str = Field(..., description="The spoken text or the name of the tool")
+     tool_args: Dict[str, Any] = Field(default_factory=dict, description="Arguments for the tool")
+
+
+ class CustomerObservation(BaseModel):
+     customer_reply: Optional[str] = Field(None, description="What the customer said")
+     tool_response: Optional[str] = Field(None, description="Result of the tool call")
+     conversation_history: str = Field(..., description="Full transcript of the episode")
+     done: bool = Field(False, description="Whether the episode has ended")
+     reward: float = Field(0.0, description="Reward received for this step")
+     metadata: Dict[str, Any] = Field(default_factory=dict)
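For a quick sanity check, the action model can be exercised standalone (the model is redefined inline here so the sketch is self-contained; pydantic is assumed to be installed):

```python
from typing import Any, Dict

from pydantic import BaseModel, Field, ValidationError


class CustomerAction(BaseModel):
    action_type: str = Field(..., description="Must be 'speak', 'tool_call', or 'end_call'")
    content: str = Field(..., description="The spoken text or the name of the tool")
    tool_args: Dict[str, Any] = Field(default_factory=dict)


# A well-formed action: tool_args defaults to an empty dict
action = CustomerAction(action_type="speak", content="How can I help?")
print(action.tool_args)  # → {}

# Omitting a required field raises a ValidationError
try:
    CustomerAction(action_type="speak")
except ValidationError:
    print("content is required")
```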
customer_env/openenv.yaml ADDED
@@ -0,0 +1,7 @@
+ spec_version: 1
+ name: customer_env
+ type: space
+ runtime: fastapi
+ app: server.app:app
+ port: 8000
+
customer_env/pyproject.toml ADDED
@@ -0,0 +1,45 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ [build-system]
+ requires = ["setuptools>=45", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "openenv-customer_env"
+ version = "0.1.0"
+ description = "Customer Env environment for OpenEnv"
+ requires-python = ">=3.10"
+ dependencies = [
+     # Core OpenEnv runtime (provides the FastAPI server + HTTP client types)
+     # To install from GitHub instead:
+     # "openenv-core[core] @ git+https://github.com/meta-pytorch/OpenEnv.git",
+     "openenv-core[core]>=0.2.1",
+     # Environment-specific dependencies
+     # Used by server/customer_env.py for the customer simulator and judge
+     "openai>=1.0.0",
+     # Add all dependencies needed for your environment here
+     # Examples:
+     # "numpy>=1.19.0",
+     # "torch>=2.0.0",
+     # "gymnasium>=0.29.0",
+     # "openspiel>=1.0.0",
+     # "smolagents>=1.22.0,<2",
+ ]
+
+ [project.optional-dependencies]
+ dev = [
+     "pytest>=8.0.0",
+     "pytest-cov>=4.0.0",
+ ]
+
+ [project.scripts]
+ # Server entry point - enables running via: uv run --project . server
+ # or: python -m customer_env.server.app
+ server = "customer_env.server.app:main"
+
+ [tool.setuptools]
+ include-package-data = true
+ packages = ["customer_env", "customer_env.server"]
+ package-dir = { "customer_env" = ".", "customer_env.server" = "server" }
customer_env/server/Dockerfile ADDED
@@ -0,0 +1,80 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ # Multi-stage build using openenv-base
+ # This Dockerfile is flexible and works for both:
+ # - In-repo environments (with local OpenEnv sources)
+ # - Standalone environments (with openenv from PyPI/Git)
+ # The build script (openenv build) handles context detection and sets appropriate build args.
+
+ ARG BASE_IMAGE=ghcr.io/meta-pytorch/openenv-base:latest
+ FROM ${BASE_IMAGE} AS builder
+
+ WORKDIR /app
+
+ # Ensure git is available (required for installing dependencies from VCS)
+ RUN apt-get update && \
+     apt-get install -y --no-install-recommends git && \
+     rm -rf /var/lib/apt/lists/*
+
+ # Build argument to control whether we're building standalone or in-repo
+ ARG BUILD_MODE=in-repo
+ ARG ENV_NAME=customer_env
+
+ # Copy environment code (always at root of build context)
+ COPY . /app/env
+
+ # For in-repo builds, openenv is already vendored in the build context
+ # For standalone builds, openenv will be installed via pyproject.toml
+ WORKDIR /app/env
+
+ # Ensure uv is available (for local builds where the base image lacks it)
+ RUN if ! command -v uv >/dev/null 2>&1; then \
+         curl -LsSf https://astral.sh/uv/install.sh | sh && \
+         mv /root/.local/bin/uv /usr/local/bin/uv && \
+         mv /root/.local/bin/uvx /usr/local/bin/uvx; \
+     fi
+
+ # Install dependencies using uv sync
+ # If uv.lock exists, use it; otherwise resolve on the fly
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-install-project --no-editable; \
+     else \
+         uv sync --no-install-project --no-editable; \
+     fi
+
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-editable; \
+     else \
+         uv sync --no-editable; \
+     fi
+
+ # Final runtime stage
+ FROM ${BASE_IMAGE}
+
+ WORKDIR /app
+
+ # Copy the virtual environment from builder
+ COPY --from=builder /app/env/.venv /app/.venv
+
+ # Copy the environment code
+ COPY --from=builder /app/env /app/env
+
+ # Set PATH to use the virtual environment
+ ENV PATH="/app/.venv/bin:$PATH"
+
+ # Set PYTHONPATH so imports work correctly
+ ENV PYTHONPATH="/app/env:$PYTHONPATH"
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+     CMD curl -f http://localhost:8000/health || exit 1
+
+ # Run the FastAPI server
+ # The module path is constructed to work with the /app/env structure
+ CMD ["sh", "-c", "cd /app/env && uvicorn server.app:app --host 0.0.0.0 --port 8000"]
customer_env/server/__init__.py ADDED
@@ -0,0 +1,11 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Customer Env environment server components."""
+
+ from .customer_env import CustomerEnvironment
+
+ __all__ = ["CustomerEnvironment"]
customer_env/server/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (288 Bytes)
customer_env/server/__pycache__/app.cpython-310.pyc ADDED
Binary file (2.25 kB)
customer_env/server/__pycache__/customer_env.cpython-310.pyc ADDED
Binary file (5.71 kB)
customer_env/server/__pycache__/customer_env_environment.cpython-310.pyc ADDED
Binary file (3.27 kB)
 
customer_env/server/app.py ADDED
@@ -0,0 +1,82 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ FastAPI application for the Customer Env Environment.
+
+ This module creates an HTTP server that exposes the CustomerEnvironment
+ over HTTP and WebSocket endpoints, compatible with EnvClient.
+
+ Endpoints:
+     - POST /reset: Reset the environment
+     - POST /step: Execute an action
+     - GET /state: Get current environment state
+     - GET /schema: Get action/observation schemas
+     - WS /ws: WebSocket endpoint for persistent sessions
+
+ Usage:
+     # Development (with auto-reload):
+     uvicorn server.app:app --reload --host 0.0.0.0 --port 8000
+
+     # Production:
+     uvicorn server.app:app --host 0.0.0.0 --port 8000 --workers 4
+
+     # Or run directly:
+     python -m server.app
+ """
+
+ try:
+     from openenv.core.env_server.http_server import create_app
+ except Exception as e:  # pragma: no cover
+     raise ImportError(
+         "openenv is required for the web interface. Install dependencies with 'uv sync'."
+     ) from e
+
+ # Import from local models.py (PYTHONPATH includes /app/env in Docker)
+ from models import CustomerAction, CustomerObservation
+
+ from .customer_env import CustomerEnvironment
+
+
+ # Create the app with web interface and README integration
+ app = create_app(
+     CustomerEnvironment,
+     CustomerAction,
+     CustomerObservation,
+     env_name="customer_env",
+     max_concurrent_envs=1,  # increase this number to allow more concurrent WebSocket sessions
+ )
+
+
+ def main(host: str = "0.0.0.0", port: int = 8000):
+     """
+     Entry point for direct execution via uv run or python -m.
+
+     This function enables running the server without Docker:
+         uv run --project . server
+         uv run --project . server --port 8001
+         python -m customer_env.server.app
+
+     Args:
+         host: Host address to bind to (default: "0.0.0.0")
+         port: Port number to listen on (default: 8000)
+
+     For production deployments, consider using uvicorn directly with
+     multiple workers:
+         uvicorn customer_env.server.app:app --workers 4
+     """
+     import uvicorn
+
+     uvicorn.run(app, host=host, port=port)
+
+
+ if __name__ == "__main__":
+     import argparse
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--host", default="0.0.0.0")
+     parser.add_argument("--port", type=int, default=8000)
+     args = parser.parse_args()
+     main(host=args.host, port=args.port)
customer_env/server/basic_scenarios.csv ADDED
@@ -0,0 +1,6 @@
+ intent,persona,starting_utterance
+ dispute_charge: $50 at CoffeeCloud,Frustrated and rushed,What is this CoffeeCloud charge on my account?
+ travel_notice: going to Japan,Polite but confused,"Hi, I'm going overseas next week and need to know if my card will work."
+ card_replacement: lost at gym,Panicked,I lost my wallet at the gym! Please help!
+ increase_limit: needs $5000 for wedding,Direct and formal,I would like to request a credit limit increase.
+ reset_password: locked out of app,Elderly and confused,The app on my phone says I'm locked out.
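Because the server reads this file with `csv.DictReader`, any field that itself contains a comma (like the travel_notice opening line) must be double-quoted; otherwise the row silently splits into an extra column and the utterance is truncated. A quick illustration of the parsing the server relies on:

```python
import csv
import io

# One header row plus one data row whose last field contains a comma,
# protected by double quotes as the CSV format requires.
data = 'intent,persona,starting_utterance\n' \
       'travel_notice,Polite but confused,"Hi, I\'m going overseas next week."\n'

rows = list(csv.DictReader(io.StringIO(data)))
print(rows[0]["starting_utterance"])  # → Hi, I'm going overseas next week.
```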
customer_env/server/customer_env.py ADDED
@@ -0,0 +1,190 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+
+ import csv
+ import json
+ import random
+ from pathlib import Path
+ from uuid import uuid4
+
+ # Imported from the flat layout (PYTHONPATH includes /app/env in Docker)
+ from models import CustomerAction, CustomerObservation
+ from openenv.core.env_server.interfaces import Environment
+ from openenv.core.env_server.types import State
+ from openai import OpenAI
+
+
+ local_llm = OpenAI(base_url="http://localhost:11434/v1", api_key="local-dev")
+ MODEL_NAME = "llama3"
+
+ # Resolve the scenarios file relative to this module so it loads regardless
+ # of the working directory (e.g. /app/env inside the Docker image)
+ SCENARIOS_PATH = Path(__file__).parent / "basic_scenarios.csv"
+
+
+ class CustomerEnvironment(Environment):
+     SUPPORTS_CONCURRENT_SESSIONS: bool = False
+
+     def __init__(self):
+         """Initialize the Customer POMDP environment."""
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._reset_count = 0
+         self.hidden_intent = ""
+         self.persona = ""
+         self.scenarios = []
+         # Fallback just in case the file is missing
+         default_scenario = {"intent": "unknown", "persona": "neutral", "starting_utterance": "I need help."}
+         self.conversation_history = ""
+
+         try:
+             with open(SCENARIOS_PATH, mode="r", encoding="utf-8") as f:
+                 reader = csv.DictReader(f)
+                 for row in reader:
+                     self.scenarios.append(row)
+         except Exception as e:
+             print(f"Warning: Could not load {SCENARIOS_PATH.name}. {e}")
+             self.scenarios.append(default_scenario)
+
+     def reset(self) -> CustomerObservation:
+         """Reset the environment, pick a new hidden intent and persona."""
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._reset_count += 1
+
+         scenario = random.choice(self.scenarios)
+         self.hidden_intent = scenario["intent"]
+         self.persona = scenario["persona"]
+         start_msg = scenario["starting_utterance"]
+
+         self.conversation_history = f"System: Call connected.\nCustomer: {start_msg}"
+
+         return CustomerObservation(
+             customer_reply=start_msg,
+             tool_response=None,
+             conversation_history=self.conversation_history,
+             done=False,
+             reward=0.0,
+             metadata={"step": self._state.step_count}
+         )
+
+     def step(self, action: CustomerAction) -> CustomerObservation:
+         self._state.step_count += 1
+         step_reward = 0.0
+         done = False
+         tool_response = None
+         customer_reply = None
+
+         if action.action_type == "tool_call":
+             tool_name = action.content
+             # Mocking the database lookup for now
+             if tool_name == "lookup_account":
+                 tool_response = "{'status': 'verified', 'balance': '$500'}"
+                 step_reward += 0.5
+             else:
+                 tool_response = f"Error: Tool '{tool_name}' not found."
+                 step_reward -= 0.5
+
+             self.conversation_history += f"\nAgent [Action]: Used {tool_name}"
+             self.conversation_history += f"\nSystem: {tool_response}"
+
+         elif action.action_type == "speak":
+             self.conversation_history += f"\nAgent: {action.content}"
+             # Call out to the LLM-simulated customer
+             customer_reply = self._get_customer_reply(action.content)
+             self.conversation_history += f"\nCustomer: {customer_reply}"
+             step_reward -= 0.1  # Small penalty per turn to encourage efficiency
+
+         elif action.action_type == "end_call":
+             done = True
+
+         if self._state.step_count >= 15:
+             done = True
+
+         # The judge LLM evaluation
+         if done:
+             final_score, reasoning = self._evaluate_with_judge()
+             step_reward += final_score
+
+             metadata = {
+                 "step": self._state.step_count,
+                 "hidden_intent": self.hidden_intent,
+                 "judge_reasoning": reasoning
+             }
+         else:
+             metadata = {"step": self._state.step_count}
+
+         return CustomerObservation(
+             customer_reply=customer_reply,
+             tool_response=tool_response,
+             conversation_history=self.conversation_history,
+             done=done,
+             reward=step_reward,
+             metadata=metadata
+         )
+
+     def _evaluate_with_judge(self) -> tuple[float, str]:
+         """
+         Uses the local LLM as a judge to score the final transcript.
+         Returns a tuple of (score, reasoning).
+         """
+         judge_prompt = f"""You are an expert QA Judge for a banking call center.
+ Review the transcript and score the Agent's performance from -5.0 to +10.0.
+
+ TRUE CUSTOMER INTENT: {self.hidden_intent}
+
+ SCORING RUBRIC:
+ - +10.0: Perfect. Intent captured, correct tools used, issue resolved efficiently.
+ - +5.0: Okay. Found the intent but took too many turns or was awkward.
+ - 0.0: Neutral. Didn't solve the issue but didn't hallucinate.
+ - -5.0: Failure. Missed the intent, hallucinated tools, or was rude.
+
+ TRANSCRIPT:
+ {self.conversation_history}
+
+ Respond ONLY with a valid JSON object in this exact format:
+ {{"score": 8.5, "reasoning": "A brief explanation of why."}}
+ """
+
+         try:
+             response = local_llm.chat.completions.create(
+                 model=MODEL_NAME,
+                 messages=[{"role": "user", "content": judge_prompt}],
+                 response_format={"type": "json_object"},
+                 temperature=0.0
+             )
+
+             result = json.loads(response.choices[0].message.content)
+             score = float(result.get("score", 0.0))
+             reasoning = result.get("reasoning", "No reasoning provided.")
+
+             # Clamp the score just in case the LLM goes rogue
+             score = max(-5.0, min(10.0, score))
+             return score, reasoning
+
+         except Exception as e:
+             # Fallback if the local LLM fails to generate valid JSON
+             print(f"Judge Error: {e}")
+             return -2.0, "Judge LLM failed to parse transcript."
+
+     def _get_customer_reply(self, agent_text: str) -> str:
+         """Uses the local LLM to simulate the customer."""
+         system_prompt = f"""You are a banking customer calling support.
+ Your secret intent is: {self.hidden_intent}.
+ Your mood is: {self.persona}.
+ RULES:
+ 1. Keep it under 2 sentences.
+ 2. Do NOT reveal your full intent immediately. Wait for the agent to probe.
+ 3. Respond naturally to what the agent just said.
+
+ Conversation history:
+ {self.conversation_history}"""
+
+         response = local_llm.chat.completions.create(
+             model=MODEL_NAME,
+             messages=[
+                 {"role": "system", "content": system_prompt},
+                 {"role": "user", "content": agent_text}
+             ],
+             temperature=0.7,
+             max_tokens=60
+         )
+         return response.choices[0].message.content.strip()
+
+     @property
+     def state(self) -> State:
+         return self._state
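The judge's output handling (parse the JSON, fall back to a penalty on failure, clamp the score to the rubric's range) can be sketched in isolation; `parse_judge_output` is a hypothetical helper mirroring that logic, not part of the environment:

```python
import json

def parse_judge_output(raw: str) -> tuple[float, str]:
    # Mirror of the environment's judge handling: parse JSON, fall back to a
    # fixed penalty on failure, and clamp the score to [-5.0, 10.0].
    try:
        result = json.loads(raw)
        score = float(result.get("score", 0.0))
        reasoning = result.get("reasoning", "No reasoning provided.")
    except (json.JSONDecodeError, TypeError, ValueError, AttributeError):
        return -2.0, "Judge LLM failed to parse transcript."
    return max(-5.0, min(10.0, score)), reasoning

print(parse_judge_output('{"score": 42, "reasoning": "great"}'))  # → (10.0, 'great')
print(parse_judge_output('not json'))  # → (-2.0, 'Judge LLM failed to parse transcript.')
```

The clamp matters: without it, a judge that emits a score outside the rubric would distort the reward scale for the whole episode.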
customer_env/server/requirements.txt ADDED
@@ -0,0 +1,6 @@
+ openenv[core]>=0.2.0
+ fastapi>=0.115.0
+ uvicorn>=0.24.0
+ openai>=1.0.0
customer_env/test_script.py ADDED
@@ -0,0 +1,58 @@
+ from client import CustomerEnv
+ from models import CustomerAction
+
+
+ def run_test():
+     print("🔌 Connecting to Local OpenEnv Server at http://127.0.0.1:8000...\n")
+
+     # Initialize the client, then call .sync() to use it in a standard 'with' block
+     client = CustomerEnv(base_url="http://127.0.0.1:8000")
+
+     with client.sync() as env:
+
+         # --- 1. RESET ---
+         print("--- NEW EPISODE ---")
+         result = env.reset()
+         print(f"Customer: {result.observation.customer_reply}")
+         print(f"Initial Reward: {result.reward}")
+
+         # --- 2. STEP 1: PROBING ---
+         print("\n--- AGENT ACTION 1: SPEAK ---")
+         action1 = CustomerAction(
+             action_type="speak",
+             content="I can help with that. Could I please get your account name?"
+         )
+         print(f"Agent: {action1.content}")
+         result = env.step(action1)
+         print(f"Customer Reply: {result.observation.customer_reply}")
+         print(f"Reward (should be slightly negative per turn): {result.reward}")
+
+         # --- 3. STEP 2: TOOL USAGE ---
+         print("\n--- AGENT ACTION 2: TOOL CALL ---")
+         action2 = CustomerAction(
+             action_type="tool_call",
+             content="lookup_account",
+             tool_args={"name": "John Doe"}
+         )
+         print(f"Agent [Action]: Using tool '{action2.content}'")
+         result = env.step(action2)
+         print(f"System Response: {result.observation.tool_response}")
+         print(f"Reward (should increase for tool usage): {result.reward}")
+
+         # --- 4. STEP 3: END CALL (triggers the judge) ---
+         print("\n--- AGENT ACTION 3: END EPISODE ---")
+         action3 = CustomerAction(
+             action_type="end_call",
+             content="Thank you, your issue is resolved."
+         )
+         result = env.step(action3)
+         print(f"Episode Done: {result.done}")
+         print(f"Final Step Reward: {result.reward}")
+         print(f"Hidden Intent was: {result.observation.metadata.get('hidden_intent')}")
+
+         # Print the full transcript tracked by the environment
+         print("\n--- FULL CONVERSATION TRANSCRIPT ---")
+         print(result.observation.conversation_history)
+
+
+ if __name__ == "__main__":
+     run_test()
The diff for this file is too large to render. See raw diff