RudrakshNanavaty committed
Commit 0c99808 · 0 Parent(s)

Initial Commit + reset() endpoint
.gitignore ADDED
@@ -0,0 +1,28 @@
+ # Python / Environment
+ .venv/
+ __pycache__/
+ *.pyc
+
+ # Environment Variables / Keys
+ .env
+ .env.*
+ !.env.example
+
+
+ # Logging / Tracking Results
+ tracking/results.csv
+
+ # IDEs
+ .vscode/
+ .idea/
+ openenv_earnings_analyst.egg-info/
+
+ # Agents
+ .agents/
+ .cursor/
+
+ .DS_Store
+
+ # Hugging Face Spaces doesn't allow PDFs anywhere in git history
+ *.pdf
+ *.parquet
.python-version ADDED
@@ -0,0 +1 @@
+ 3.12
README.md ADDED
@@ -0,0 +1,255 @@
+ ---
+ title: Earnings Analyst Environment Server
+ emoji: 🏑
+ colorFrom: pink
+ colorTo: pink
+ sdk: docker
+ pinned: false
+ app_port: 8000
+ base_path: /web
+ tags:
+ - openenv
+ ---
+
+ # Earnings Analyst Environment
+
+ An OpenEnv environment that serves earnings-call materials (transcripts and press releases) together with market features, and asks the agent to classify the overall market sentiment.
+
+ ## Quick Start
+
+ The simplest way to use the Earnings Analyst environment is through the `EarningsAnalystEnv` class:
+
+ ```python
+ from earnings_analyst import EarningsAnalystAction, EarningsAnalystEnv
+
+ try:
+     # Create environment from Docker image
+     earnings_analystenv = EarningsAnalystEnv.from_docker_image("earnings_analyst-env:latest")
+
+     # Reset samples a fresh earnings-call episode
+     result = earnings_analystenv.reset()
+     print(f"Task: {result.observation.task_instruction}")
+     print(f"Text fields: {sorted(result.observation.text_context)}")
+
+     # Submit a sentiment prediction
+     result = earnings_analystenv.step(EarningsAnalystAction(sentiment="neutral"))
+     print(f" → Done: {result.done}")
+     print(f" → Reward: {result.reward}")
+     print(f" → Metadata: {result.observation.metadata}")
+
+ finally:
+     # Always clean up
+     earnings_analystenv.close()
+ ```
+
+ That's it! The `EarningsAnalystEnv.from_docker_image()` method handles:
+ - Starting the Docker container
+ - Waiting for the server to be ready
+ - Connecting to the environment
+ - Container cleanup when you call `close()`
+
+ ## Building the Docker Image
+
+ Before using the environment, you need to build the Docker image:
+
+ ```bash
+ # From project root
+ docker build -t earnings_analyst-env:latest -f server/Dockerfile .
+ ```
+
+ ## Deploying to Hugging Face Spaces
+
+ You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:
+
+ ```bash
+ # From the environment directory (where openenv.yaml is located)
+ openenv push
+
+ # Or specify options
+ openenv push --namespace my-org --private
+ ```
+
+ The `openenv push` command will:
+ 1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
+ 2. Prepare a custom build for a Hugging Face Docker Space (enables the web interface)
+ 3. Upload to Hugging Face (ensuring you're logged in)
+
+ ### Prerequisites
+
+ - Authenticate with Hugging Face: the command will prompt for login if not already authenticated
+
+ ### Options
+
+ - `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to current directory)
+ - `--repo-id`, `-r`: Repository ID in the format `username/repo-name` (defaults to `username/env-name` from openenv.yaml)
+ - `--base-image`, `-b`: Base Docker image to use (overrides the Dockerfile `FROM`)
+ - `--private`: Deploy the space as private (default: public)
+
+ ### Examples
+
+ ```bash
+ # Push to your personal namespace (defaults to username/env-name from openenv.yaml)
+ openenv push
+
+ # Push to a specific repository
+ openenv push --repo-id my-org/my-env
+
+ # Push with a custom base image
+ openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest
+
+ # Push as a private space
+ openenv push --private
+
+ # Combine options
+ openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
+ ```
+
+ After deployment, your space will be available at:
+ `https://huggingface.co/spaces/<repo-id>`
+
+ The deployed space includes:
+ - **Web Interface** at `/web` - Interactive UI for exploring the environment
+ - **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
+ - **Health Check** at `/health` - Container health monitoring
+ - **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions
+
+ ## Environment Details
+
+ ### Action
+ **EarningsAnalystAction**: Contains a single field
+ - `sentiment` (str) - Predicted sentiment: one of `very bearish`, `bearish`, `neutral`, `bullish`, `very bullish`
+
+ ### Observation
+ **EarningsAnalystObservation**: Contains the episode context and task metadata
+ - `text_context` (dict[str, str]) - Non-null text fields for the active task (column name → text)
+ - `numerical_context` (dict[str, float]) - Market features for the active task (column name → value)
+ - `task_instruction` (str) - Natural-language instruction and JSON schema for the agent
+ - `done` (bool) - True once a prediction has been submitted
+ - `metadata` (dict) - Additional info such as the predicted sentiment
+
+ ### Reward
+ Reward grading is not implemented yet: `step()` currently returns a reward of 0.0 and records the predicted sentiment in `metadata`. Scoring predictions against the dataset's `sentiment_label` column is a planned follow-up.
138
+
139
+ ## Advanced Usage
140
+
141
+ ### Connecting to an Existing Server
142
+
143
+ If you already have a Earnings Analyst environment server running, you can connect directly:
144
+
145
+ ```python
146
+ from earnings_analyst import EarningsAnalystEnv
147
+
148
+ # Connect to existing server
149
+ earnings_analystenv = EarningsAnalystEnv(base_url="<ENV_HTTP_URL_HERE>")
150
+
151
+ # Use as normal
152
+ result = earnings_analystenv.reset()
153
+ result = earnings_analystenv.step(EarningsAnalystAction(message="Hello!"))
154
+ ```
155
+
156
+ Note: When connecting to an existing server, `earnings_analystenv.close()` will NOT stop the server.
157
+
158
+ ### Using the Context Manager
159
+
160
+ The client supports context manager usage for automatic connection management:
161
+
162
+ ```python
163
+ from earnings_analyst import EarningsAnalystAction, EarningsAnalystEnv
164
+
165
+ # Connect with context manager (auto-connects and closes)
166
+ with EarningsAnalystEnv(base_url="http://localhost:8000") as env:
167
+ result = env.reset()
168
+ print(f"Reset: {result.observation.echoed_message}")
169
+ # Multiple steps with low latency
170
+ for msg in ["Hello", "World", "!"]:
171
+ result = env.step(EarningsAnalystAction(message=msg))
172
+ print(f"Echoed: {result.observation.echoed_message}")
173
+ ```
174
+
175
+ The client uses WebSocket connections for:
176
+ - **Lower latency**: No HTTP connection overhead per request
177
+ - **Persistent session**: Server maintains your environment state
178
+ - **Efficient for episodes**: Better for many sequential steps
179
+
180
+ ### Concurrent WebSocket Sessions
181
+
182
+ The server supports multiple concurrent WebSocket connections. To enable this,
183
+ modify `server/app.py` to use factory mode:
184
+
185
+ ```python
186
+ # In server/app.py - use factory mode for concurrent sessions
187
+ app = create_app(
188
+ EarningsAnalystEnvironment, # Pass class, not instance
189
+ EarningsAnalystAction,
190
+ EarningsAnalystObservation,
191
+ max_concurrent_envs=4, # Allow 4 concurrent sessions
192
+ )
193
+ ```
194
+
195
+ Then multiple clients can connect simultaneously:
196
+
197
+ ```python
198
+ from earnings_analyst import EarningsAnalystAction, EarningsAnalystEnv
199
+ from concurrent.futures import ThreadPoolExecutor
200
+
201
+ def run_episode(client_id: int):
202
+ with EarningsAnalystEnv(base_url="http://localhost:8000") as env:
203
+ result = env.reset()
204
+ for i in range(10):
205
+ result = env.step(EarningsAnalystAction(message=f"Client {client_id}, step {i}"))
206
+ return client_id, result.observation.message_length
207
+
208
+ # Run 4 episodes concurrently
209
+ with ThreadPoolExecutor(max_workers=4) as executor:
210
+ results = list(executor.map(run_episode, range(4)))
211
+ ```
212
+
213
+ ## Development & Testing
214
+
215
+ ### Direct Environment Testing
216
+
217
+ Test the environment logic directly without starting the HTTP server:
218
+
219
+ ```bash
220
+ # From the server directory
221
+ python3 server/earnings_analyst_environment.py
222
+ ```
223
+
224
+ This verifies that:
225
+ - Environment resets correctly
226
+ - Step executes actions properly
227
+ - State tracking works
228
+ - Rewards are calculated correctly
229
+
230
+ ### Running Locally
231
+
232
+ Run the server locally for development:
233
+
234
+ ```bash
235
+ uvicorn server.app:app --reload
236
+ ```
237
+
238
+ ## Project Structure
239
+
240
+ ```
241
+ earnings_analyst/
242
+ ├── .dockerignore # Docker build exclusions
243
+ ├── __init__.py # Module exports
244
+ ├── README.md # This file
245
+ ├── openenv.yaml # OpenEnv manifest
246
+ ├── pyproject.toml # Project metadata and dependencies
247
+ ├── uv.lock # Locked dependencies (generated)
248
+ ├── client.py # EarningsAnalystEnv client
249
+ ├── models.py # Action and Observation models
250
+ └── server/
251
+ ├── __init__.py # Server module exports
252
+ ├── earnings_analyst_environment.py # Core environment logic
253
+ ├── app.py # FastAPI application (HTTP + WebSocket endpoints)
254
+ └── Dockerfile # Container image definition
255
+ ```
__init__.py ADDED
@@ -0,0 +1,10 @@
+ """Earnings Analyst Environment."""
+
+ from .client import EarningsAnalystEnv
+ from .models import EarningsAnalystAction, EarningsAnalystObservation
+
+ __all__ = [
+     "EarningsAnalystAction",
+     "EarningsAnalystObservation",
+     "EarningsAnalystEnv",
+ ]
client.py ADDED
@@ -0,0 +1,94 @@
+ """Earnings Analyst Environment Client."""
+
+ from typing import Dict
+
+ from openenv.core import EnvClient
+ from openenv.core.client_types import StepResult
+ from openenv.core.env_server.types import State
+
+ from .models import EarningsAnalystAction, EarningsAnalystObservation
+
+
+ class EarningsAnalystEnv(
+     EnvClient[EarningsAnalystAction, EarningsAnalystObservation, State]
+ ):
+     """
+     Client for the Earnings Analyst Environment.
+
+     This client maintains a persistent WebSocket connection to the environment server,
+     enabling efficient multi-step interactions with lower latency.
+     Each client instance has its own dedicated environment session on the server.
+
+     Example:
+         >>> # Connect to a running server
+         >>> with EarningsAnalystEnv(base_url="http://localhost:8000") as client:
+         ...     result = client.reset()
+         ...     print(result.observation.task_instruction)
+         ...
+         ...     result = client.step(EarningsAnalystAction(sentiment="neutral"))
+         ...     print(result.observation.metadata)
+
+     Example with Docker:
+         >>> # Automatically start container and connect
+         >>> client = EarningsAnalystEnv.from_docker_image("earnings_analyst-env:latest")
+         >>> try:
+         ...     result = client.reset()
+         ...     result = client.step(EarningsAnalystAction(sentiment="neutral"))
+         ... finally:
+         ...     client.close()
+     """
+
+     def _step_payload(self, action: EarningsAnalystAction) -> Dict:
+         """
+         Convert EarningsAnalystAction to JSON payload for step message.
+
+         Args:
+             action: EarningsAnalystAction instance
+
+         Returns:
+             Dictionary representation suitable for JSON encoding
+         """
+         return {
+             "sentiment": action.sentiment,
+         }
+
+     def _parse_result(self, payload: Dict) -> StepResult[EarningsAnalystObservation]:
+         """
+         Parse server response into StepResult[EarningsAnalystObservation].
+
+         Args:
+             payload: JSON response data from server
+
+         Returns:
+             StepResult with EarningsAnalystObservation
+         """
+         obs_data = payload.get("observation", {})
+         observation = EarningsAnalystObservation(
+             text_context=obs_data.get("text_context") or {},
+             numerical_context=obs_data.get("numerical_context") or {},
+             task_instruction=obs_data.get("task_instruction", ""),
+             done=payload.get("done", False),
+             reward=payload.get("reward"),
+             metadata=obs_data.get("metadata", {}),
+         )
+
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward"),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict) -> State:
+         """
+         Parse server response into State object.
+
+         Args:
+             payload: JSON response from state request
+
+         Returns:
+             State object with episode_id and step_count
+         """
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
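The defaulting rules in `_parse_result` are easy to check in isolation. The sketch below restates them over plain dicts (the `parse_result` function here is illustrative only, not part of the client, which builds pydantic models instead):

```python
# Standalone sketch of the defaulting rules _parse_result applies to a raw
# server payload (plain dicts here instead of pydantic models).
def parse_result(payload: dict) -> dict:
    obs = payload.get("observation", {})
    return {
        # `or {}` also replaces explicit nulls, not just missing keys
        "text_context": obs.get("text_context") or {},
        "numerical_context": obs.get("numerical_context") or {},
        "task_instruction": obs.get("task_instruction", ""),
        "reward": payload.get("reward"),     # missing reward stays None
        "done": payload.get("done", False),  # missing done defaults to False
    }

# A payload with a null context and no reward/done fields
result = parse_result({"observation": {"text_context": None}})
print(result["text_context"])  # {}
print(result["reward"])        # None
print(result["done"])          # False
```

Note that `reward` and `done` live at the top level of the payload, while the context fields are nested under `observation`.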
environment_config.py ADDED
@@ -0,0 +1,42 @@
+ """
+ Central configuration for all earnings analyst tasks.
+
+ Plain data only — no imports from env or dataset loader. Add new tasks here.
+ """
+
+ DATASET_ID = "RudrakshNanavaty/earnings-call-data"
+ DATASET_FILE = "episodes.parquet"
+
+ TASKS = {
+     "sentiment_v1": {
+         "text_cols": [
+             "earnings_transcript",
+             "press_release_8k_body",
+             "press_release_ex991",
+             "press_release_ex992",
+         ],
+         "numerical_cols": [
+             "price_momentum_30d",
+             "price_momentum_90d",
+             "pct_from_52w_high_pt",
+             "avg_volume_20d",
+             "d_minus_1_close",
+         ],
+         "label_col": "sentiment_label",
+         "label_values": [
+             "very bearish",
+             "bearish",
+             "neutral",
+             "bullish",
+             "very bullish",
+         ],
+         "task_instruction": (
+             "Analyse the provided earnings call materials and classify the overall market sentiment.\n\n"
+             "Return a JSON object matching this exact schema:\n"
+             '{"sentiment": "<one of: very bearish | bearish | neutral | bullish | very bullish>"}\n\n'
+             "Do not include any other keys or explanation."
+         ),
+     },
+ }
+
+ DEFAULT_TASK = "sentiment_v1"
main.py ADDED
@@ -0,0 +1,6 @@
+ def main():
+     print("Hello from earnings-analyst!")
+
+
+ if __name__ == "__main__":
+     main()
models.py ADDED
@@ -0,0 +1,34 @@
+ """
+ Data models for the Earnings Analyst Environment.
+ """
+
+ from openenv.core.env_server.types import Action, Observation
+ from pydantic import Field
+
+
+ class EarningsAnalystAction(Action):
+     """Action for sentiment classification (and future tasks)."""
+
+     sentiment: str = Field(
+         ...,
+         description=(
+             "Predicted sentiment: one of very bearish, bearish, neutral, bullish, very bullish"
+         ),
+     )
+
+
+ class EarningsAnalystObservation(Observation):
+     """Observation bundle: text context, numerical context, and task instruction."""
+
+     text_context: dict[str, str] = Field(
+         default_factory=dict,
+         description="Non-null text fields for the active task (column name -> text)",
+     )
+     numerical_context: dict[str, float] = Field(
+         default_factory=dict,
+         description="Market / numerical features for the active task (column name -> value)",
+     )
+     task_instruction: str = Field(
+         default="",
+         description="Natural language instruction and JSON schema for the agent",
+     )
openenv.yaml ADDED
@@ -0,0 +1,7 @@
+ spec_version: 1
+ name: earnings_analyst
+ type: space
+ runtime: fastapi
+ app: server.app:app
+ port: 8000
pyproject.toml ADDED
@@ -0,0 +1,38 @@
+ [build-system]
+ requires = ["setuptools>=45", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "openenv-earnings_analyst"
+ version = "0.1.0"
+ description = "Earnings Analyst environment for OpenEnv"
+ requires-python = ">=3.12"
+ dependencies = [
+     "datasets>=4.8.4",
+     # Core OpenEnv runtime (provides FastAPI server + HTTP client types)
+     # install from GitHub:
+     # "openenv-core[core] @ git+https://github.com/meta-pytorch/OpenEnv.git",
+     "huggingface-hub>=1.10.1",
+     "openenv-core[core]>=0.2.2",
+     # Environment-specific dependencies
+     # Add all dependencies needed for your environment here
+     # Examples:
+     # "numpy>=1.19.0",
+     # "torch>=2.0.0",
+     # "gymnasium>=0.29.0",
+     # "openspiel>=1.0.0",
+     # "smolagents>=1.22.0,<2",
+ ]
+
+ [project.optional-dependencies]
+ dev = ["pytest>=8.0.0", "pytest-cov>=4.0.0"]
+
+ [project.scripts]
+ # Server entry point - enables running via: uv run --project . server
+ # or: python -m earnings_analyst.server.app
+ server = "earnings_analyst.server.app:main"
+
+ [tool.setuptools]
+ include-package-data = true
+ packages = ["earnings_analyst", "earnings_analyst.server"]
+ package-dir = { "earnings_analyst" = ".", "earnings_analyst.server" = "server" }
server/Dockerfile ADDED
@@ -0,0 +1,74 @@
+ # Multi-stage build using openenv-base
+ # This Dockerfile is flexible and works for both:
+ #   - In-repo environments (with local OpenEnv sources)
+ #   - Standalone environments (with openenv from PyPI/Git)
+ # The build script (openenv build) handles context detection and sets appropriate build args.
+
+ ARG BASE_IMAGE=ghcr.io/meta-pytorch/openenv-base:latest
+ FROM ${BASE_IMAGE} AS builder
+
+ WORKDIR /app
+
+ # Ensure git is available (required for installing dependencies from VCS)
+ RUN apt-get update && \
+     apt-get install -y --no-install-recommends git && \
+     rm -rf /var/lib/apt/lists/*
+
+ # Build argument to control whether we're building standalone or in-repo
+ ARG BUILD_MODE=in-repo
+ ARG ENV_NAME=earnings_analyst
+
+ # Copy environment code (always at root of build context)
+ COPY . /app/env
+
+ # For in-repo builds, openenv is already vendored in the build context
+ # For standalone builds, openenv will be installed via pyproject.toml
+ WORKDIR /app/env
+
+ # Ensure uv is available (for local builds where base image lacks it)
+ RUN if ! command -v uv >/dev/null 2>&1; then \
+         curl -LsSf https://astral.sh/uv/install.sh | sh && \
+         mv /root/.local/bin/uv /usr/local/bin/uv && \
+         mv /root/.local/bin/uvx /usr/local/bin/uvx; \
+     fi
+
+ # Install dependencies using uv sync
+ # If uv.lock exists, use it; otherwise resolve on the fly
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-install-project --no-editable; \
+     else \
+         uv sync --no-install-project --no-editable; \
+     fi
+
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-editable; \
+     else \
+         uv sync --no-editable; \
+     fi
+
+ # Final runtime stage
+ FROM ${BASE_IMAGE}
+
+ WORKDIR /app
+
+ # Copy the virtual environment from builder
+ COPY --from=builder /app/env/.venv /app/.venv
+
+ # Copy the environment code
+ COPY --from=builder /app/env /app/env
+
+ # Set PATH to use the virtual environment
+ ENV PATH="/app/.venv/bin:$PATH"
+
+ # Set PYTHONPATH so imports work correctly
+ ENV PYTHONPATH="/app/env:$PYTHONPATH"
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+     CMD curl -f http://localhost:8000/health || exit 1
+
+ # Run the FastAPI server
+ # The module path is constructed to work with the /app/env structure
+ CMD ["sh", "-c", "cd /app/env && uvicorn server.app:app --host 0.0.0.0 --port 8000"]
server/__init__.py ADDED
@@ -0,0 +1,5 @@
+ """Earnings Analyst environment server components."""
+
+ from .earnings_analyst_environment import EarningsAnalystEnvironment
+
+ __all__ = ["EarningsAnalystEnvironment"]
server/app.py ADDED
@@ -0,0 +1,78 @@
+ """
+ FastAPI application for the Earnings Analyst Environment.
+
+ This module creates an HTTP server that exposes the EarningsAnalystEnvironment
+ over HTTP and WebSocket endpoints, compatible with EnvClient.
+
+ Endpoints:
+     - POST /reset: Reset the environment
+     - POST /step: Execute an action
+     - GET /state: Get current environment state
+     - GET /schema: Get action/observation schemas
+     - WS /ws: WebSocket endpoint for persistent sessions
+
+ Usage:
+     # Development (with auto-reload):
+     uvicorn server.app:app --reload --host 0.0.0.0 --port 8000
+
+     # Production:
+     uvicorn server.app:app --host 0.0.0.0 --port 8000 --workers 4
+
+     # Or run directly:
+     python -m server.app
+ """
+
+ try:
+     from openenv.core.env_server.http_server import create_app
+ except Exception as e:  # pragma: no cover
+     raise ImportError(
+         "openenv is required for the web interface. Install dependencies with 'uv sync'."
+     ) from e
+
+ try:
+     from ..models import EarningsAnalystAction, EarningsAnalystObservation
+     from .earnings_analyst_environment import EarningsAnalystEnvironment
+ except ModuleNotFoundError:
+     from models import EarningsAnalystAction, EarningsAnalystObservation
+     from server.earnings_analyst_environment import EarningsAnalystEnvironment
+
+
+ # Create the app with web interface and README integration
+ app = create_app(
+     EarningsAnalystEnvironment,
+     EarningsAnalystAction,
+     EarningsAnalystObservation,
+     env_name="earnings_analyst",
+     max_concurrent_envs=1,  # increase this number to allow more concurrent WebSocket sessions
+ )
+
+
+ def main(host: str = "0.0.0.0", port: int = 8000):
+     """
+     Entry point for direct execution via uv run or python -m.
+
+     This function enables running the server without Docker:
+         uv run --project . server
+         uv run --project . server --port 8001
+         python -m earnings_analyst.server.app
+
+     Args:
+         host: Host address to bind to (default: "0.0.0.0")
+         port: Port number to listen on (default: 8000)
+
+     For production deployments, consider using uvicorn directly with
+     multiple workers:
+         uvicorn earnings_analyst.server.app:app --workers 4
+     """
+     import uvicorn
+
+     uvicorn.run(app, host=host, port=port)
+
+
+ if __name__ == "__main__":
+     import argparse
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--port", type=int, default=8000)
+     args = parser.parse_args()
+     main(port=args.port)
server/dataset_loader.py ADDED
@@ -0,0 +1,20 @@
+ """
+ Load the Hugging Face dataset once as a module-level singleton.
+
+ No task-specific column lists — see environment_config.TASKS for that.
+ """
+
+ from datasets import load_dataset
+
+ try:
+     from ..environment_config import DATASET_FILE, DATASET_ID
+ except ImportError:
+     from environment_config import DATASET_FILE, DATASET_ID
+
+ # Loaded once on first import; all resets share this object.
+ # Pin the Hub parquet so we never pick up features.parquet / raw_*.parquet from the same repo.
+ dataset = load_dataset(
+     DATASET_ID,
+     data_files={"train": DATASET_FILE},
+     split="train",
+ )
server/earnings_analyst_environment.py ADDED
@@ -0,0 +1,117 @@
+ """
+ Earnings Analyst Environment Implementation.
+
+ Samples rows from the Hugging Face earnings-call dataset and exposes task-specific
+ observations from environment_config.TASKS.
+ """
+
+ from __future__ import annotations
+
+ import math
+ import random
+ from typing import Any
+ from uuid import uuid4
+
+ from openenv.core.env_server.interfaces import Environment
+ from openenv.core.env_server.types import State
+
+ try:
+     from ..environment_config import DEFAULT_TASK, TASKS
+     from ..models import EarningsAnalystAction, EarningsAnalystObservation
+ except ImportError:
+     from environment_config import DEFAULT_TASK, TASKS
+     from models import EarningsAnalystAction, EarningsAnalystObservation
+
+ try:
+     from .dataset_loader import dataset
+ except ImportError:
+     from server.dataset_loader import dataset
+
+
+ def _non_empty_text(value: Any) -> bool:
+     if value is None:
+         return False
+     s = str(value).strip()
+     return bool(s)
+
+
+ def _finite_float(value: Any) -> float | None:
+     if value is None:
+         return None
+     try:
+         x = float(value)
+     except (TypeError, ValueError):
+         return None
+     if math.isnan(x) or math.isinf(x):
+         return None
+     return x
+
+
+ class EarningsAnalystEnvironment(Environment):
+     """
+     RL environment over earnings-call rows: reset samples a row and returns
+     text_context, numerical_context, and task_instruction per the active task.
+     """
+
+     SUPPORTS_CONCURRENT_SESSIONS: bool = True
+
+     def __init__(self, task_id: str = DEFAULT_TASK) -> None:
+         if task_id not in TASKS:
+             raise KeyError(
+                 f"Unknown task_id={task_id!r}. Valid: {sorted(TASKS.keys())}"
+             )
+         self._cfg = TASKS[task_id]
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._current_row: dict[str, Any] | None = None
+
+     def reset(self) -> EarningsAnalystObservation:
+         """Sample one dataset row and return the agent-visible observation bundle."""
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         idx = random.randrange(len(dataset))
+         row = dataset[idx]
+         # Normalize to a plain dict for grading and column access
+         self._current_row = dict(row)
+
+         text_context = {
+             col: str(self._current_row[col]).strip()
+             for col in self._cfg["text_cols"]
+             if _non_empty_text(self._current_row.get(col))
+         }
+         numerical_context: dict[str, float] = {}
+         for col in self._cfg["numerical_cols"]:
+             v = _finite_float(self._current_row.get(col))
+             if v is not None:
+                 numerical_context[col] = v
+
+         return EarningsAnalystObservation(
+             text_context=text_context,
+             numerical_context=numerical_context,
+             task_instruction=self._cfg["task_instruction"],
+             done=False,
+             reward=0.0,
+         )
+
+     def step(self, action: EarningsAnalystAction) -> EarningsAnalystObservation:  # type: ignore[override]
+         """
+         Execute one step (stub). Scoring against ``sentiment_label`` is a follow-up.
+
+         Args:
+             action: Agent action with predicted ``sentiment``.
+
+         Returns:
+             Terminal observation placeholder; reward grading not implemented yet.
+         """
+         self._state.step_count += 1
+         return EarningsAnalystObservation(
+             text_context={},
+             numerical_context={},
+             task_instruction=self._cfg["task_instruction"],
+             done=True,
+             reward=0.0,
+             metadata={"predicted_sentiment": action.sentiment},
+         )
+
+     @property
+     def state(self) -> State:
+         """Current environment state."""
+         return self._state
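The `step()` docstring defers grading to a follow-up. One possible exact-match scheme, sketched with a hypothetical `grade` helper that is not part of this commit, would compare the prediction with the row's `sentiment_label`:

```python
# Hypothetical grading sketch for the follow-up described in step():
# exact match against the row's sentiment_label earns 1.0, else 0.0.
# Labels are normalized by stripping whitespace and lowercasing.
def grade(predicted: str, sentiment_label: str) -> float:
    return 1.0 if predicted.strip().lower() == sentiment_label.strip().lower() else 0.0

print(grade("Bullish", "bullish"))       # 1.0
print(grade("neutral", "very bullish"))  # 0.0
```

A graded variant could instead score by ordinal distance between the five `label_values`, so that "bullish" vs "very bullish" earns partial credit; exact match is just the simplest choice.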
server/requirements.txt ADDED
@@ -0,0 +1,8 @@
+ openenv[core]>=0.2.0
+ fastapi>=0.115.0
+ uvicorn>=0.24.0
+ datasets>=3.0.0
+ huggingface-hub>=0.24.0
uv.lock ADDED
The diff for this file is too large to render.