# Trenches Backend

This directory contains the Python backend for the Trenches simulator.

It now exposes two layers:

- the existing session-oriented FastAPI API used by the React dashboard
- a native OpenEnv-compatible environment mounted under `/openenv` when `openenv-core` is installed

The backend does not serve frontend assets and is intended to stay frontend-stack agnostic. Any web client
(Next.js, Vite, Bun, mobile, or a thin dashboard proxy) should be able to consume the same HTTP contract.

CORS is configurable so frontend migrations do not require backend code changes:

- `TRENCHES_CORS_ALLOW_ORIGINS=https://app.example.com,https://ops.example.com`
- `TRENCHES_CORS_ALLOW_ORIGIN_REGEX=https://.*\\.example\\.com`
- `TRENCHES_CORS_ALLOW_CREDENTIALS=true|false`

If no CORS env vars are set, the backend allows local development origins on `localhost` / `127.0.0.1` for any port.
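
As an illustration, the resolution order described above could be sketched as follows. The env var names match this README, but the parsing logic and the exact local-development regex are assumptions, not the backend's actual code:

```python
import re

# Assumed fallback pattern for local development origins (any port).
LOCAL_DEV_ORIGIN_REGEX = r"https?://(localhost|127\.0\.0\.1)(:\d+)?"

def resolve_cors_config(env: dict) -> dict:
    """Hypothetical sketch: resolve CORS settings from TRENCHES_CORS_* vars."""
    origins = [
        o.strip()
        for o in env.get("TRENCHES_CORS_ALLOW_ORIGINS", "").split(",")
        if o.strip()
    ]
    origin_regex = env.get("TRENCHES_CORS_ALLOW_ORIGIN_REGEX")
    if not origins and not origin_regex:
        # No CORS env vars set: allow local development origins only.
        origin_regex = LOCAL_DEV_ORIGIN_REGEX
    return {
        "allow_origins": origins,
        "allow_origin_regex": origin_regex,
        "allow_credentials":
            env.get("TRENCHES_CORS_ALLOW_CREDENTIALS", "false").lower() == "true",
    }

cfg = resolve_cors_config(
    {"TRENCHES_CORS_ALLOW_ORIGINS": "https://app.example.com,https://ops.example.com"}
)
```

The resulting dict maps directly onto the keyword arguments a CORS middleware (such as FastAPI's `CORSMiddleware`) typically accepts.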

Entity-model provider bindings are also configurable per agent. The backend does not fake provider readiness:
if a provider/model is not configured, the runtime reports `heuristic_fallback` explicitly in session state and
in the `/capabilities` response.

Supported env patterns:

- `TRENCHES_MODEL_PROVIDER=openai|anthropic|openrouter|ollama|vllm|custom`
- `TRENCHES_MODEL_NAME=<provider model id>`
- `TRENCHES_MODEL_BASE_URL=<custom base url>`
- `TRENCHES_MODEL_API_KEY_ENV=<name of env var holding the secret>`
- `TRENCHES_MODEL_SUPPORTS_TOOL_CALLS=true|false`
- `TRENCHES_MODEL_SUPPORTS_STRUCTURED_OUTPUT=true|false`

Per-entity overrides use the uppercase agent suffix, for example:

- `TRENCHES_MODEL_PROVIDER_US=openai`
- `TRENCHES_MODEL_NAME_US=gpt-4.1`
- `TRENCHES_MODEL_API_KEY_ENV_US=OPENAI_API_KEY`
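
A minimal sketch of this lookup order, assuming (not confirmed from the source) that a per-entity suffixed variable takes precedence over the global default:

```python
def resolve_model_setting(env: dict, key: str, agent: str):
    """Hypothetical sketch: per-entity override wins over the global default."""
    # e.g. key="TRENCHES_MODEL_PROVIDER", agent="us" -> TRENCHES_MODEL_PROVIDER_US
    return env.get(f"{key}_{agent.upper()}") or env.get(key)

env = {
    "TRENCHES_MODEL_PROVIDER": "ollama",
    "TRENCHES_MODEL_PROVIDER_US": "openai",
    "TRENCHES_MODEL_NAME_US": "gpt-4.1",
}

provider = resolve_model_setting(env, "TRENCHES_MODEL_PROVIDER", "us")
name = resolve_model_setting(env, "TRENCHES_MODEL_NAME", "us")
```

In a real deployment the `env` dict would be `os.environ`; a plain dict is used here so the sketch stays self-contained.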

Relevant OpenEnv pieces in this package:

- `trenches_env.openenv_adapter.TrenchesOpenEnvEnvironment`
- `trenches_env.openenv_adapter.TrenchesOpenEnvAction`
- `trenches_env.openenv_adapter.TrenchesOpenEnvObservation`
- `trenches_env.openenv_adapter.TrenchesOpenEnvState`
- `trenches_env.openenv_client.TrenchesEnvClient`

Historical replay training pieces:

- `trenches_env.models.Prediction`
- `trenches_env.models.HistoricalEvent`
- `trenches_env.models.HistoricalReplayState`
- `trenches_env.training_cli`

The backend now supports replay-aware forecast training:

- `reset(..., replay_id=...)` starts from a visible historical context event
- `step(...)` accepts separate `action` and `prediction`
- the next ground-truth event is revealed on the same OpenEnv step
- reward blends the entity action reward with forecast scoring terms
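
To make the blending idea concrete, here is an illustrative sketch. The weight and the Brier-style forecast term are assumptions for exposition only, not the backend's actual scoring code:

```python
def blended_reward(action_reward: float, predicted_prob: float,
                   outcome: int, forecast_weight: float = 0.5) -> float:
    """Hypothetical sketch: mix the entity action reward with a forecast term.

    The forecast term here is a negated Brier score in [-1, 0]; values closer
    to 0 mean the prediction was closer to the revealed ground-truth outcome.
    """
    forecast_score = -((predicted_prob - outcome) ** 2)
    return (1 - forecast_weight) * action_reward + forecast_weight * forecast_score

# A confident, correct forecast loses little reward to the forecast term.
r = blended_reward(action_reward=0.4, predicted_prob=0.8, outcome=1)
```

The actual reward shape lives in the backend's step logic; this only shows why a single scalar can still carry both the action and forecast signals across the OpenEnv boundary.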

Bundled bootstrap replay (⚠️ **all replays are synthetic seed data** — replace with curated truth sets for production):

- `us_synthetic_seed_2025_2026`

CLI training entrypoint:

```bash
trenches-train \
  --training-agent us \
  --replay-id us_synthetic_seed_2025_2026 \
  --generation-backend transformers
```

The CLI supports two rollout backends:

- `transformers` for portable local smoke runs
- `vllm` for the documented colocated OpenEnv + TRL path on a GPU box

Planned responsibilities:

- Hold in-memory crisis sessions.
- Expose `create`, `reset`, `step`, and `state` HTTP endpoints.
- Model the fog-of-war world state and per-agent observations.
- Provide a native OpenEnv boundary with scalar rewards for one active training agent while retaining full per-agent state internally.
- Provide extension points for World Monitor ingestion and RL training hooks.