gbenaa committed
Commit cd7277c · 0 parents

persona_env OpenEnv Docker Space
Dockerfile ADDED
@@ -0,0 +1,80 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ # Multi-stage build using openenv-base
+ # This Dockerfile is flexible and works for both:
+ # - In-repo environments (with local OpenEnv sources)
+ # - Standalone environments (with openenv from PyPI/Git)
+ # The build script (openenv build) handles context detection and sets appropriate build args.
+
+ ARG BASE_IMAGE=ghcr.io/meta-pytorch/openenv-base:latest
+ FROM ${BASE_IMAGE} AS builder
+
+ WORKDIR /app
+
+ # Ensure git is available (required for installing dependencies from VCS)
+ RUN apt-get update && \
+     apt-get install -y --no-install-recommends git && \
+     rm -rf /var/lib/apt/lists/*
+
+ # Build argument to control whether we're building standalone or in-repo
+ ARG BUILD_MODE=in-repo
+ ARG ENV_NAME=persona_env
+
+ # Copy environment code (always at root of build context)
+ COPY . /app/env
+
+ # For in-repo builds, openenv is already vendored in the build context
+ # For standalone builds, openenv will be installed via pyproject.toml
+ WORKDIR /app/env
+
+ # Ensure uv is available (for local builds where base image lacks it)
+ RUN if ! command -v uv >/dev/null 2>&1; then \
+         curl -LsSf https://astral.sh/uv/install.sh | sh && \
+         mv /root/.local/bin/uv /usr/local/bin/uv && \
+         mv /root/.local/bin/uvx /usr/local/bin/uvx; \
+     fi
+
+ # Install dependencies using uv sync --no-dev
+ # If uv.lock exists, use it; otherwise resolve on the fly
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --no-dev --frozen --no-install-project --no-editable; \
+     else \
+         uv sync --no-dev --no-install-project --no-editable; \
+     fi
+
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --no-dev --frozen --no-editable; \
+     else \
+         uv sync --no-dev --no-editable; \
+     fi
+
+ # Final runtime stage
+ FROM ${BASE_IMAGE}
+
+ WORKDIR /app
+
+ # Copy the virtual environment from builder
+ COPY --from=builder /app/env/.venv /app/.venv
+
+ # Copy the environment code
+ COPY --from=builder /app/env /app/env
+
+ # Set PATH to use the virtual environment
+ ENV PATH="/app/.venv/bin:$PATH"
+
+ # Set PYTHONPATH so imports work correctly
+ ENV PYTHONPATH="/app/env:$PYTHONPATH"
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+     CMD curl -f http://localhost:${PORT:-8000}/health || exit 1
+
+ # Run the FastAPI server
+ # The module path is constructed to work with the /app/env structure
+ CMD ["sh", "-c", "cd /app/env && uvicorn server.app:app --host 0.0.0.0 --port ${PORT:-8000}"]
README.md ADDED
@@ -0,0 +1,255 @@
+ ---
+ title: Persona Env Environment Server
+ emoji: 🎾
+ colorFrom: red
+ colorTo: purple
+ sdk: docker
+ pinned: false
+ app_port: 8000
+ base_path: /web
+ tags:
+   - openenv
+ ---
+
+ # Persona Env Environment
+
+ A lightweight simulated-persona environment. The agent can show the persona content, ask it questions, or advance time; the persona responds with a textual reaction, a mood value, and evolving interest scores.
+
+ ## Quick Start
+
+ The simplest way to use the Persona Env environment is through the `PersonaEnv` class:
+
+ ```python
+ from persona_env import PersonaAction, PersonaEnv
+
+ try:
+     # Create environment from Docker image
+     env = PersonaEnv.from_docker_image("persona_env-env:latest")
+
+     # Reset
+     result = env.reset()
+     print(f"Reset: {result.observation.reaction_text}")
+
+     # Show the persona some content
+     result = env.step(
+         PersonaAction(kind="show_content", topic="animal_welfare", source="charity", valence="positive")
+     )
+     print(f"Reaction: {result.observation.reaction_text}")
+     print(f"Mood: {result.observation.mood}")
+
+     # Ask a question, then advance time
+     result = env.step(PersonaAction(kind="ask_question", question="How do you feel about rescue shelters?"))
+     result = env.step(PersonaAction(kind="advance_time", hours=24))
+     print(f"Interests: {result.observation.interests}")
+
+ finally:
+     # Always clean up
+     env.close()
+ ```
+
+ That's it! The `PersonaEnv.from_docker_image()` method handles:
+ - Starting the Docker container
+ - Waiting for the server to be ready
+ - Connecting to the environment
+ - Container cleanup when you call `close()`
+
+ ## Building the Docker Image
+
+ Before using the environment, you need to build the Docker image:
+
+ ```bash
+ # From project root
+ docker build -t persona_env-env:latest -f server/Dockerfile .
+ ```
+
+ ## Deploying to Hugging Face Spaces
+
+ You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:
+
+ ```bash
+ # From the environment directory (where openenv.yaml is located)
+ openenv push
+
+ # Or specify options
+ openenv push --namespace my-org --private
+ ```
+
+ The `openenv push` command will:
+ 1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
+ 2. Prepare a custom build for a Hugging Face Docker Space (enables the web interface)
+ 3. Upload to Hugging Face (ensuring you're logged in)
+
+ ### Prerequisites
+
+ - Authenticate with Hugging Face: the command will prompt for login if not already authenticated
+
+ ### Options
+
+ - `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to current directory)
+ - `--repo-id`, `-r`: Repository ID in format 'username/repo-name' (defaults to 'username/env-name' from openenv.yaml)
+ - `--base-image`, `-b`: Base Docker image to use (overrides Dockerfile FROM)
+ - `--private`: Deploy the space as private (default: public)
+
+ ### Examples
+
+ ```bash
+ # Push to your personal namespace (defaults to username/env-name from openenv.yaml)
+ openenv push
+
+ # Push to a specific repository
+ openenv push --repo-id my-org/my-env
+
+ # Push with a custom base image
+ openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest
+
+ # Push as a private space
+ openenv push --private
+
+ # Combine options
+ openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
+ ```
+
+ After deployment, your space will be available at:
+ `https://huggingface.co/spaces/<repo-id>`
+
+ The deployed space includes:
+ - **Web Interface** at `/web` - Interactive UI for exploring the environment
+ - **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
+ - **Health Check** at `/health` - Container health monitoring
+ - **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions
+
+ ## Environment Details
+
+ ### Action
+ **PersonaAction**: A tagged union selected by `kind`
+ - `kind` (one of `"show_content"`, `"ask_question"`, `"advance_time"`) - Which action to apply
+ - `topic`, `source`, `valence` (optional) - Used by `show_content`; `valence` is `"positive"`, `"neutral"`, or `"negative"`
+ - `question` (optional str) - Used by `ask_question`
+ - `hours` (optional int) - Used by `advance_time`
+
+ ### Observation
+ **PersonaObservation**: The persona's response and internal state
+ - `reaction_text` (str) - The persona's textual reaction
+ - `mood` (float) - Current mood in [-1, 1]
+ - `interests` (dict[str, float]) - Interest scores in [0, 1], keyed by topic
+
+ ### Reward
+ The environment currently returns a fixed reward of `0.0` on every step; reward shaping is left to the user.
+
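For intuition, the mood and interest deltas applied by `show_content` (see `server/persona_env_environment.py`) can be sketched as a pure function. The `content_deltas` name is illustrative, not part of the shipped API:

```python
def content_deltas(valence: str, source: str) -> tuple[float, float]:
    """Mirror the persona's content arithmetic: returns (mood_delta, interest_delta)."""
    base = 0.02
    if valence == "positive":
        mood_delta, interest_delta = 0.05, base
    elif valence == "negative":
        mood_delta, interest_delta = -0.05, base / 2
    else:
        mood_delta, interest_delta = 0.0, base / 4

    # The content source adjusts mood only, not interest
    if source in {"tabloid", "ragebait"}:
        mood_delta -= 0.03
    elif source in {"charity", "trusted"}:
        mood_delta += 0.02
    return mood_delta, interest_delta

print(content_deltas("positive", "charity"))
print(content_deltas("negative", "tabloid"))
```

Note that even negative content raises interest slightly, just by less than positive content does.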
+ ## Advanced Usage
+
+ ### Connecting to an Existing Server
+
+ If you already have a Persona Env environment server running, you can connect directly:
+
+ ```python
+ from persona_env import PersonaAction, PersonaEnv
+
+ # Connect to existing server
+ env = PersonaEnv(base_url="<ENV_HTTP_URL_HERE>")
+
+ # Use as normal
+ result = env.reset()
+ result = env.step(PersonaAction(kind="ask_question", question="Hello!"))
+ ```
+
+ Note: When connecting to an existing server, `env.close()` will NOT stop the server.
+
+ ### Using the Context Manager
+
+ The client supports context manager usage for automatic connection management:
+
+ ```python
+ from persona_env import PersonaAction, PersonaEnv
+
+ # Connect with context manager (auto-connects and closes)
+ with PersonaEnv(base_url="http://localhost:8000") as env:
+     result = env.reset()
+     print(f"Reset: {result.observation.reaction_text}")
+     # Multiple steps with low latency
+     for q in ["Hello", "What do you think of rescue dogs?", "Any decor tips?"]:
+         result = env.step(PersonaAction(kind="ask_question", question=q))
+         print(f"Reaction: {result.observation.reaction_text}")
+ ```
+
+ The client uses WebSocket connections for:
+ - **Lower latency**: No HTTP connection overhead per request
+ - **Persistent session**: Server maintains your environment state
+ - **Efficient for episodes**: Better for many sequential steps
+
+ ### Concurrent WebSocket Sessions
+
+ The server supports multiple concurrent WebSocket connections. To enable this,
+ modify `server/app.py` to use factory mode:
+
+ ```python
+ # In server/app.py - use factory mode for concurrent sessions
+ app = create_app(
+     PersonaEnvironment,  # Pass class, not instance
+     PersonaAction,
+     PersonaObservation,
+     max_concurrent_envs=4,  # Allow 4 concurrent sessions
+ )
+ ```
+
+ Then multiple clients can connect simultaneously:
+
+ ```python
+ from concurrent.futures import ThreadPoolExecutor
+
+ from persona_env import PersonaAction, PersonaEnv
+
+ def run_episode(client_id: int):
+     with PersonaEnv(base_url="http://localhost:8000") as env:
+         result = env.reset()
+         for i in range(10):
+             result = env.step(PersonaAction(kind="ask_question", question=f"Client {client_id}, step {i}"))
+         return client_id, result.observation.mood
+
+ # Run 4 episodes concurrently
+ with ThreadPoolExecutor(max_workers=4) as executor:
+     results = list(executor.map(run_episode, range(4)))
+ ```
+
+ ## Development & Testing
+
+ ### Direct Environment Testing
+
+ Test the environment logic directly without starting the HTTP server:
+
+ ```bash
+ # From the environment root
+ python3 server/persona_env_environment.py
+ ```
+
+ This verifies that:
+ - Environment resets correctly
+ - Step executes actions properly
+ - State tracking works
+ - Rewards are calculated correctly
+
+ ### Running Locally
+
+ Run the server locally for development:
+
+ ```bash
+ uvicorn server.app:app --reload
+ ```
+
+ ## Project Structure
+
+ ```
+ persona_env/
+ ├── .dockerignore                   # Docker build exclusions
+ ├── __init__.py                     # Module exports
+ ├── README.md                       # This file
+ ├── openenv.yaml                    # OpenEnv manifest
+ ├── pyproject.toml                  # Project metadata and dependencies
+ ├── uv.lock                         # Locked dependencies (generated)
+ ├── client.py                       # PersonaEnv client
+ ├── models.py                       # Action and Observation models
+ └── server/
+     ├── __init__.py                 # Server module exports
+     ├── persona_env_environment.py  # Core environment logic
+     ├── app.py                      # FastAPI application (HTTP + WebSocket endpoints)
+     └── Dockerfile                  # Container image definition
+ ```
__init__.py ADDED
@@ -0,0 +1,16 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Persona Env Environment."""
+
+ from .client import PersonaEnv
+ from .models import PersonaAction, PersonaObservation
+
+ __all__ = [
+     "PersonaAction",
+     "PersonaObservation",
+     "PersonaEnv",
+ ]
__pycache__/__init__.cpython-310.pyc ADDED
Binary file (449 Bytes).
__pycache__/client.cpython-310.pyc ADDED
Binary file (3.45 kB).
__pycache__/models.cpython-310.pyc ADDED
Binary file (1.28 kB).
client.py ADDED
@@ -0,0 +1,99 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Persona Env Environment Client."""
+
+ from typing import Dict
+
+ from openenv.core import EnvClient
+ from openenv.core.client_types import StepResult
+ from openenv.core.env_server.types import State
+
+ from .models import PersonaAction, PersonaObservation
+
+
+ class PersonaEnv(EnvClient[PersonaAction, PersonaObservation]):
+     """
+     Client for the Persona Env Environment.
+
+     This client maintains a persistent WebSocket connection to the environment server,
+     enabling efficient multi-step interactions with lower latency.
+     Each client instance has its own dedicated environment session on the server.
+
+     Example:
+         >>> # Connect to a running server
+         >>> with PersonaEnv(base_url="http://localhost:8000") as client:
+         ...     result = client.reset()
+         ...     print(result.observation.reaction_text)
+         ...
+         ...     result = client.step(PersonaAction(kind="ask_question", question="Hello!"))
+         ...     print(result.observation.reaction_text)
+
+     Example with Docker:
+         >>> # Automatically start container and connect
+         >>> client = PersonaEnv.from_docker_image("persona_env-env:latest")
+         >>> try:
+         ...     result = client.reset()
+         ...     result = client.step(PersonaAction(kind="advance_time", hours=1))
+         ... finally:
+         ...     client.close()
+     """
+
+     def _step_payload(self, action: PersonaAction) -> Dict:
+         """
+         Convert PersonaAction to JSON payload for step message.
+
+         Args:
+             action: PersonaAction instance
+
+         Returns:
+             Dictionary representation suitable for JSON encoding
+         """
+         # Serialise the tagged-union action, dropping unset optional fields
+         payload: Dict = {"kind": action.kind}
+         for name in ("topic", "source", "valence", "question", "hours"):
+             value = getattr(action, name)
+             if value is not None:
+                 payload[name] = value
+         return payload
+
+     def _parse_result(self, payload: Dict) -> StepResult[PersonaObservation]:
+         """
+         Parse server response into StepResult[PersonaObservation].
+
+         Args:
+             payload: JSON response data from server
+
+         Returns:
+             StepResult with PersonaObservation
+         """
+         obs_data = payload.get("observation", {})
+         observation = PersonaObservation(
+             reaction_text=obs_data.get("reaction_text", ""),
+             mood=obs_data.get("mood", 0.0),
+             interests=obs_data.get("interests", {}),
+             done=payload.get("done", False),
+             reward=payload.get("reward"),
+             metadata=obs_data.get("metadata", {}),
+         )
+
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward"),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict) -> State:
+         """
+         Parse server response into State object.
+
+         Args:
+             payload: JSON response from state request
+
+         Returns:
+             State object with episode_id and step_count
+         """
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
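The defensive `.get(...)`-with-defaults pattern used by the client's result parsing can be exercised standalone with plain dicts. This is a sketch, assuming the observation fields defined in `models.py` (`reaction_text`, `mood`, `interests`); the `parse_result` helper name is illustrative, not part of the shipped API:

```python
def parse_result(payload: dict) -> dict:
    """Extract observation fields with safe defaults, mirroring the client's parsing."""
    obs = payload.get("observation", {})
    return {
        "reaction_text": obs.get("reaction_text", ""),
        "mood": obs.get("mood", 0.0),
        "interests": obs.get("interests", {}),
        "reward": payload.get("reward"),
        "done": payload.get("done", False),
    }

# Missing keys fall back to defaults rather than raising
print(parse_result({}))
print(parse_result({"observation": {"mood": 0.5}, "done": True}))
```

The point of the defaults is that a partial or malformed server response degrades to a neutral observation instead of crashing the client.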
models.py ADDED
@@ -0,0 +1,29 @@
+ from __future__ import annotations
+
+ from typing import Dict, Literal, Optional
+
+ from pydantic import Field
+
+ from openenv.core.env_server.types import Action, Observation
+
+
+ class PersonaAction(Action):
+     # = one of three actions
+     kind: Literal["show_content", "ask_question", "advance_time"] = Field(
+         ..., description="Which action to apply"
+     )
+
+     # = show_content
+     topic: Optional[str] = None
+     source: Optional[str] = None
+     valence: Optional[Literal["positive", "neutral", "negative"]] = None
+
+     # = ask_question
+     question: Optional[str] = None
+
+     # = advance_time
+     hours: Optional[int] = None
+
+
+ class PersonaObservation(Observation):
+     reaction_text: str
+     mood: float
+     interests: Dict[str, float]
openenv.yaml ADDED
@@ -0,0 +1,7 @@
+ spec_version: 1
+ name: persona_env
+ type: space
+ runtime: fastapi
+ app: server.app:app
+ port: 8000
+
pyproject.toml ADDED
@@ -0,0 +1,45 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ [build-system]
+ requires = ["setuptools>=45", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "openenv-persona_env"
+ version = "0.1.0"
+ description = "Persona Env environment for OpenEnv"
+ requires-python = ">=3.10"
+ dependencies = [
+     # Core OpenEnv runtime (provides FastAPI server + HTTP client types)
+     # To install from GitHub instead:
+     # "openenv-core[core] @ git+https://github.com/meta-pytorch/OpenEnv.git",
+     "openenv-core[core]>=0.2.0",
+     # Environment-specific dependencies
+     # Add all dependencies needed for your environment here
+     # Examples:
+     # "numpy>=1.19.0",
+     # "torch>=2.0.0",
+     # "gymnasium>=0.29.0",
+     # "openspiel>=1.0.0",
+     # "smolagents>=1.22.0,<2",
+ ]
+
+ [project.optional-dependencies]
+ dev = [
+     "pytest>=8.0.0",
+     "pytest-cov>=4.0.0",
+ ]
+
+ [project.scripts]
+ # Server entry point - enables running via: uv run --project . server
+ # or: python -m persona_env.server.app
+ server = "persona_env.server.app:main"
+
+ [tool.setuptools]
+ include-package-data = true
+ packages = ["persona_env", "persona_env.server"]
+ package-dir = { "persona_env" = ".", "persona_env.server" = "server" }
server/Dockerfile ADDED
@@ -0,0 +1,80 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ # Multi-stage build using openenv-base
+ # This Dockerfile is flexible and works for both:
+ # - In-repo environments (with local OpenEnv sources)
+ # - Standalone environments (with openenv from PyPI/Git)
+ # The build script (openenv build) handles context detection and sets appropriate build args.
+
+ ARG BASE_IMAGE=ghcr.io/meta-pytorch/openenv-base:latest
+ FROM ${BASE_IMAGE} AS builder
+
+ WORKDIR /app
+
+ # Ensure git is available (required for installing dependencies from VCS)
+ RUN apt-get update && \
+     apt-get install -y --no-install-recommends git && \
+     rm -rf /var/lib/apt/lists/*
+
+ # Build argument to control whether we're building standalone or in-repo
+ ARG BUILD_MODE=in-repo
+ ARG ENV_NAME=persona_env
+
+ # Copy environment code (always at root of build context)
+ COPY . /app/env
+
+ # For in-repo builds, openenv is already vendored in the build context
+ # For standalone builds, openenv will be installed via pyproject.toml
+ WORKDIR /app/env
+
+ # Ensure uv is available (for local builds where base image lacks it)
+ RUN if ! command -v uv >/dev/null 2>&1; then \
+         curl -LsSf https://astral.sh/uv/install.sh | sh && \
+         mv /root/.local/bin/uv /usr/local/bin/uv && \
+         mv /root/.local/bin/uvx /usr/local/bin/uvx; \
+     fi
+
+ # Install dependencies using uv sync --no-dev
+ # If uv.lock exists, use it; otherwise resolve on the fly
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --no-dev --frozen --no-install-project --no-editable; \
+     else \
+         uv sync --no-dev --no-install-project --no-editable; \
+     fi
+
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --no-dev --frozen --no-editable; \
+     else \
+         uv sync --no-dev --no-editable; \
+     fi
+
+ # Final runtime stage
+ FROM ${BASE_IMAGE}
+
+ WORKDIR /app
+
+ # Copy the virtual environment from builder
+ COPY --from=builder /app/env/.venv /app/.venv
+
+ # Copy the environment code
+ COPY --from=builder /app/env /app/env
+
+ # Set PATH to use the virtual environment
+ ENV PATH="/app/.venv/bin:$PATH"
+
+ # Set PYTHONPATH so imports work correctly
+ ENV PYTHONPATH="/app/env:$PYTHONPATH"
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+     CMD curl -f http://localhost:${PORT:-8000}/health || exit 1
+
+ # Run the FastAPI server
+ # The module path is constructed to work with the /app/env structure
+ CMD ["sh", "-c", "cd /app/env && uvicorn server.app:app --host 0.0.0.0 --port ${PORT:-8000}"]
server/__init__.py ADDED
@@ -0,0 +1,11 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Persona Env environment server components."""
+
+ from .persona_env_environment import PersonaEnvironment
+
+ __all__ = ["PersonaEnvironment"]
server/__pycache__/__init__.cpython-310.pyc ADDED
Binary file (391 Bytes).
server/__pycache__/app.cpython-310.pyc ADDED
Binary file (2.4 kB).
server/__pycache__/persona_env_environment.cpython-310.pyc ADDED
Binary file (3.29 kB).
server/app.py ADDED
@@ -0,0 +1,37 @@
+ from openenv.core.env_server import create_app
+
+ from models import PersonaAction, PersonaObservation
+ from .persona_env_environment import PersonaEnvironment
+
+
+ # = create one env instance we control (for debug visibility)
+ _env = PersonaEnvironment()
+
+ # = create the OpenEnv app (keeps /reset, /step, /schema, etc. working)
+ app = create_app(
+     PersonaEnvironment,
+     PersonaAction,
+     PersonaObservation,
+     env_name="persona_env",
+ )
+
+
+ # = add a debug endpoint that reflects the live env state we control
+ @app.get("/debug_state")
+ def debug_state():
+     return _env.state
+
+
+ # Optional: also step/reset the debug env using the same action schema
+ @app.post("/debug_reset")
+ def debug_reset():
+     obs = _env.reset()
+     return {"observation": obs, "reward": getattr(obs, "reward", 0.0), "done": getattr(obs, "done", False)}
+
+
+ @app.post("/debug_step")
+ def debug_step(payload: dict):
+     # payload format matches your /step envelope: {"action": {...}}
+     action_dict = payload.get("action", {})
+     action = PersonaAction(**action_dict)
+     obs = _env.step(action)
+     return {"observation": obs, "reward": getattr(obs, "reward", 0.0), "done": getattr(obs, "done", False)}
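The `{"action": {...}}` envelope that `debug_step` expects can be unpacked and sanity-checked without FastAPI. This sketch substitutes a plain dict check for the pydantic model; `extract_action` and `VALID_KINDS` are illustrative names, not part of the shipped API:

```python
VALID_KINDS = {"show_content", "ask_question", "advance_time"}

def extract_action(payload: dict) -> dict:
    """Pull the action dict out of a /debug_step-style envelope and check its kind."""
    action = payload.get("action", {})
    if action.get("kind") not in VALID_KINDS:
        raise ValueError(f"unknown action kind: {action.get('kind')!r}")
    return action

print(extract_action({"action": {"kind": "advance_time", "hours": 12}}))
```

In the real endpoint this validation is done by `PersonaAction(**action_dict)`, which rejects an unknown `kind` via the `Literal` constraint in `models.py`.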
server/k_persona_env_environment.py ADDED
@@ -0,0 +1,116 @@
+ from __future__ import annotations
+
+ from dataclasses import dataclass, field
+ from typing import Dict
+ from uuid import uuid4
+
+ from openenv.core.env_server.interfaces import Environment
+ from openenv.core.env_server.types import State
+
+ from models import PersonaAction, PersonaObservation
+
+
+ @dataclass
+ class PersonaInternal:
+     # = stable attributes (placeholders)
+     star_sign: str = "Taurus"
+     character_type: str = "Sanguine-melancholic"
+     background: str = "Arts-adjacent, lower-middle class"
+
+     # = evolving state
+     mood: float = 0.1  # -> [-1, 1]
+     interests: Dict[str, float] = field(default_factory=lambda: {
+         "animal_welfare": 0.7,
+         "interior_design": 0.5,
+         "politics": 0.3,
+     })
+
+     def clamp(self) -> None:
+         self.mood = max(-1.0, min(1.0, self.mood))
+         for k, v in list(self.interests.items()):
+             self.interests[k] = max(0.0, min(1.0, v))
+
+
+ class PersonaEnvironment(Environment):
+     def __init__(self):
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._p = PersonaInternal()
+
+     def reset(self) -> PersonaObservation:
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._p = PersonaInternal()
+         return PersonaObservation(
+             reaction_text="Persona initialised.",
+             mood=self._p.mood,
+             interests=dict(self._p.interests),
+             done=False,
+             reward=0.0,
+         )
+
+     def step(self, action: PersonaAction) -> PersonaObservation:
+         self._state.step_count += 1
+
+         if action.kind == "show_content":
+             reaction = self._apply_content(
+                 topic=action.topic or "unknown",
+                 source=action.source or "unknown",
+                 valence=action.valence or "neutral",
+             )
+         elif action.kind == "ask_question":
+             reaction = self._answer_question(action.question or "")
+         elif action.kind == "advance_time":
+             reaction = self._advance_time(action.hours or 0)
+         else:
+             reaction = "Action rejected."
+
+         self._p.clamp()
+
+         return PersonaObservation(
+             reaction_text=reaction,
+             mood=self._p.mood,
+             interests=dict(self._p.interests),
+             done=False,
+             reward=0.0,
+         )
+
+     @property
+     def state(self) -> State:
+         return self._state
+
+     # = internal logic (simple, deterministic)
+
+     def _apply_content(self, topic: str, source: str, valence: str) -> str:
+         base = 0.02
+         if valence == "positive":
+             mood_delta = +0.05
+             interest_delta = +base
+         elif valence == "negative":
+             mood_delta = -0.05
+             interest_delta = +base / 2
+         else:
+             mood_delta = 0.0
+             interest_delta = +base / 4
+
+         if source in {"tabloid", "ragebait"}:
+             mood_delta -= 0.03
+         elif source in {"charity", "trusted"}:
+             mood_delta += 0.02
+
+         self._p.mood += mood_delta
+         self._p.interests[topic] = self._p.interests.get(topic, 0.2) + interest_delta
+
+         return f"Consumed {valence} content on {topic} from {source}. Mood {mood_delta:+.2f}."
+
+     def _answer_question(self, question: str) -> str:
+         if not question.strip():
+             return "No reaction."
+         top_interest = max(self._p.interests.items(), key=lambda kv: kv[1])[0]
+         return f"Answers via {top_interest}: '{question.strip()}'"
+
+     def _advance_time(self, hours: int) -> str:
+         hours = max(0, hours)
+         decay = min(0.2, hours / 240.0)
+         self._p.mood *= (1.0 - decay)
+         for k in list(self._p.interests.keys()):
+             self._p.interests[k] *= (1.0 - decay / 5.0)
+         return f"Advanced time by {hours}h."
server/persona_env_environment.py ADDED
@@ -0,0 +1,174 @@
+ from __future__ import annotations
+
+ from dataclasses import dataclass, field
+ from typing import Dict, Optional, Tuple
+ from uuid import uuid4
+
+ from openenv.core.env_server.interfaces import Environment
+ from openenv.core.env_server.types import State
+
+ from models import PersonaAction, PersonaObservation
+
+
+ @dataclass
+ class PersonaInternal:
+     # = stable attributes (placeholders)
+     star_sign: str = "Taurus"
+     character_type: str = "Sanguine-melancholic"
+     background: str = "Arts-adjacent, lower-middle class"
+
+     # = evolving state
+     mood: float = 0.1  # -> [-1, 1]
+     interests: Dict[str, float] = field(default_factory=lambda: {
+         "animal_welfare": 0.7,
+         "interior_design": 0.5,
+         "politics": 0.3,
+     })
+
+     # = tiny memory (for continuity)
+     last_question: Optional[str] = None
+     last_topic: Optional[str] = None
+
+     def clamp(self) -> None:
+         self.mood = max(-1.0, min(1.0, self.mood))
+         for k, v in list(self.interests.items()):
+             self.interests[k] = max(0.0, min(1.0, v))
+
+     def bump_interest(self, topic: str, delta: float) -> None:
+         self.interests[topic] = self.interests.get(topic, 0.2) + delta
+
+
+ class PersonaEnvironment(Environment):
+     def __init__(self):
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._p = PersonaInternal()
+
+     def reset(self) -> PersonaObservation:
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._p = PersonaInternal()
+         return PersonaObservation(
+             reaction_text="Persona initialised.",
+             mood=self._p.mood,
+             interests=dict(self._p.interests),
+             done=False,
+             reward=0.0,
+         )
+
+     def step(self, action: PersonaAction) -> PersonaObservation:
+         self._state.step_count += 1
+
+         if action.kind == "show_content":
+             reaction = self._apply_content(
+                 topic=action.topic or "unknown",
+                 source=action.source or "unknown",
+                 valence=action.valence or "neutral",
+             )
+         elif action.kind == "ask_question":
+             reaction = self._answer_question(action.question or "")
+         elif action.kind == "advance_time":
+             reaction = self._advance_time(action.hours or 0)
+         else:
+             reaction = "Action rejected."
+
+         self._p.clamp()
+
+         return PersonaObservation(
+             reaction_text=reaction,
+             mood=self._p.mood,
+             interests=dict(self._p.interests),
+             done=False,
+             reward=0.0,
+         )
+
+     @property
+     def state(self) -> State:
+         return self._state
+
+     # = internal logic (simple, deterministic)
+
+     def _apply_content(self, topic: str, source: str, valence: str) -> str:
+         base = 0.02
+         if valence == "positive":
+             mood_delta = +0.05
+             interest_delta = +base
+         elif valence == "negative":
+             mood_delta = -0.05
+             interest_delta = +base / 2
+         else:
+             mood_delta = 0.0
+             interest_delta = +base / 4
+
+         if source in {"tabloid", "ragebait"}:
+             mood_delta -= 0.03
+         elif source in {"charity", "trusted"}:
+             mood_delta += 0.02
+
+         self._p.mood += mood_delta
+         self._p.bump_interest(topic, interest_delta)
+         self._p.last_topic = topic
+
+         return f"Consumed {valence} content on {topic} from {source}. Mood {mood_delta:+.2f}."
+
+     def _answer_question(self, question: str) -> str:
+         q = question.strip()
+         if not q:
+             return "No reaction."
+
+         self._p.last_question = q
+
+         # = keyword -> topic mapping
+         # -> This gives you a controllable, inspectable way to make questions influence state.
+         topic, mood_delta = self._infer_topic_and_mood_from_question(q)
+
+         if topic is not None:
+             # -> Questions increase attention to a topic a little.
+             self._p.bump_interest(topic, 0.015)
+             self._p.last_topic = topic
+
+         self._p.mood += mood_delta
+
+         top_interest = max(self._p.interests.items(), key=lambda kv: kv[1])[0]
+
+         # -> Mild continuity: reference last topic if available
+         if self._p.last_topic:
+             continuity = f" (recently thinking about {self._p.last_topic})"
+         else:
+             continuity = ""
+
+         return f"Answers via {top_interest}{continuity}: '{q}'"
+
+     def _infer_topic_and_mood_from_question(self, q: str) -> Tuple[Optional[str], float]:
+         ql = q.lower()
+
+         # -> Default: neutral mood change from being asked something
+         mood_delta = 0.0
+
+         # -> A few simple triggers
+         if any(w in ql for w in ["ethical", "cruelty", "welfare", "rescue", "animal"]):
+             return "animal_welfare", +0.01
+
+         if any(w in ql for w in ["decor", "interior", "furniture", "colour", "paint", "design"]):
+             return "interior_design", +0.01
+
+         if any(w in ql for w in ["election", "immigration", "tax", "government", "policy", "minister", "party"]):
+             # -> Politics questions tend to stress this persona slightly
+             return "politics", -0.01
+
+         return None, mood_delta
+
+     def _advance_time(self, hours: int) -> str:
+         hours = max(0, hours)
+         decay = min(0.2, hours / 240.0)
+
+         # -> mood drifts towards 0 with time
+         self._p.mood *= (1.0 - decay)
+
+         # -> interests slowly decay with time
+         for k in list(self._p.interests.keys()):
+             self._p.interests[k] *= (1.0 - decay / 5.0)
+
+         # -> very light memory fade
+         if hours >= 24:
+             self._p.last_question = None
+
+         return f"Advanced time by {hours}h."
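The time-decay arithmetic in `_advance_time` above is easy to check in isolation. This standalone sketch reproduces it with plain values (the `advance_time` helper name is illustrative):

```python
def advance_time(mood: float, interests: dict, hours: int) -> tuple[float, dict]:
    """Apply the same decay as _advance_time: mood drifts toward 0, interests fade 5x slower."""
    hours = max(0, hours)
    decay = min(0.2, hours / 240.0)  # capped at 0.2, reached at 48 hours or more
    new_mood = mood * (1.0 - decay)
    new_interests = {k: v * (1.0 - decay / 5.0) for k, v in interests.items()}
    return new_mood, new_interests

m, ints = advance_time(0.5, {"politics": 0.3}, 24)
print(round(m, 3), round(ints["politics"], 3))
```

Because the decay is capped, a single `advance_time` call can never shrink mood by more than 20%, no matter how many hours pass.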
server/requirements.txt ADDED
@@ -0,0 +1,6 @@
+ openenv-core[core]>=0.2.0
+ fastapi>=0.115.0
+ uvicorn>=0.24.0