---
title: OpenEnv Multimodal Moderation Environment
colorFrom: blue
colorTo: green
sdk: docker
pinned: false
license: apache-2.0
app_port: 8000
base_path: /web
---
# OpenEnv Multimodal Moderation Environment

Production-ready OpenEnv environment for multimodal content moderation with a fixed episode flow:

`analyze` → `retrieve_policy` → `decide` → `review` → `finalize`

The environment follows the official Meta OpenEnv API:

- `reset(...) -> Observation`
- `step(action) -> Observation`
- `state -> State`

It subclasses `openenv.core.env_server.interfaces.Environment` and serves HTTP endpoints via `create_fastapi_app(...)`.
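The five-phase flow can be driven over the environment's HTTP endpoints. Below is a minimal client sketch using only the standard library; the `{"action": {"kind": ...}}` payload shape is an assumption for illustration, not the real schema (check `GET /schema` on a running server for the authoritative contract):

```python
"""Minimal HTTP client sketch for the moderation environment."""
import json
import urllib.request

# The fixed episode flow documented for this environment.
EPISODE_FLOW = ["analyze", "retrieve_policy", "decide", "review", "finalize"]


def _post(base_url: str, path: str, payload: dict) -> dict:
    """POST a JSON payload and decode the JSON response."""
    req = urllib.request.Request(
        f"{base_url}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def run_episode(base_url: str = "http://127.0.0.1:8000") -> list[dict]:
    """Reset, then step through the five fixed phases in order.

    The action payload keys are illustrative; the "decide" step would
    additionally carry a moderation action such as "remove".
    """
    observations = [_post(base_url, "/reset", {})]
    for phase in EPISODE_FLOW:
        observations.append(_post(base_url, "/step", {"action": {"kind": phase}}))
    return observations
```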
## Project Layout

```text
server/
    __init__.py
    app.py
    env.py
    logic.py
    models.py
    server_routes.py
    requirements.txt
    Dockerfile
    rag/
        __init__.py
        policies.py
        retriever.py
client.py
models.py
inference.py
openenv.yaml
pyproject.toml
requirements.txt
README.md
```
## Environment Design

Each episode uses one moderation case containing:

- text content
- image metadata
- expected moderation action
- reviewer recommendation

Persistent `state_data` stores:

- selected case
- retrieved policy chunks
- action history
- reviewer note
- reward breakdown
- final action
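As an illustration, the persistent blob might look like the following. The field names here are hypothetical; the authoritative type definitions live in `server/models.py`:

```python
# Hypothetical shape of the persistent state_data blob, mid-episode.
# Field names are illustrative and may differ from server/models.py.
state_data = {
    "case": {"id": "case_001", "text": "...", "image_meta": {}},
    "retrieved_policy_chunks": [],        # filled by retrieve_policy
    "action_history": ["analyze"],        # phases taken so far
    "reviewer_note": None,                # set during review
    "reward_breakdown": {"analysis": 0.2},
    "final_action": None,                 # set at finalize
}
```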
## Actions

`allow`, `flag`, `remove`, `escalate`
## Rule Overrides

- content containing `kill` or `murder` forces a `remove` expectation
- content containing `nude` forces a `flag` expectation
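The overrides reduce to a simple keyword check. A sketch with a hypothetical `expected_action` helper; the real implementation lives in the server code and may tokenize differently:

```python
def expected_action(text: str, default: str) -> str:
    """Apply the documented keyword overrides to a case's text.

    Sketch only: substring matching on lowercased text, checked in the
    order the overrides are documented (kill/murder before nude).
    """
    lowered = text.lower()
    if "kill" in lowered or "murder" in lowered:
        return "remove"
    if "nude" in lowered:
        return "flag"
    return default
```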
## Dense Rewards

- analysis step: `+0.2`
- retrieval grounding step: `+0.2`
- correct decision: `+1.0`
- reviewer agreement: `+0.2`
- unsafe allow on risky content: `-0.6`
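The components above simply sum over the episode. A sketch with a hypothetical `episode_reward` helper, under the assumption that each component fires at most once per episode:

```python
def episode_reward(
    analyzed: bool,
    grounded: bool,
    correct: bool,
    reviewer_agrees: bool,
    unsafe_allow: bool,
) -> float:
    """Sum the documented dense-reward components for one episode."""
    reward = 0.0
    if analyzed:
        reward += 0.2   # analysis step
    if grounded:
        reward += 0.2   # retrieval grounding step
    if correct:
        reward += 1.0   # correct decision
    if reviewer_agrees:
        reward += 0.2   # reviewer agreement
    if unsafe_allow:
        reward -= 0.6   # unsafe allow on risky content
    return reward
```

A fully correct episode therefore tops out at 1.6, while an ungrounded unsafe allow can go negative.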
## Policy Retrieval

The retriever loads curated moderation policies from `server/rag/policies.json` and ranks them with a lightweight keyword-overlap scorer. This keeps startup deterministic, avoids runtime model downloads, and eliminates noisy encoder logs during evaluation.
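A keyword-overlap scorer of this kind can be sketched as follows; the function name and the `{"id", "text"}` chunk shape are assumptions, and the real retriever in `server/rag/retriever.py` may tokenize or weight terms differently:

```python
def rank_policies(query: str, policies: list[dict], top_k: int = 3) -> list[dict]:
    """Rank policy chunks by raw keyword overlap with the query.

    Scoring is the size of the intersection between the query's
    whitespace tokens and each chunk's tokens, lowercased.
    """
    query_terms = set(query.lower().split())

    def overlap(policy: dict) -> int:
        return len(query_terms & set(policy["text"].lower().split()))

    return sorted(policies, key=overlap, reverse=True)[:top_k]
```

Because the scorer is pure set arithmetic over static JSON, results are reproducible run to run, which is the determinism the section above describes.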
## Install

This repo is set up for uv and the official OpenEnv package source from the Meta repo.

```powershell
$env:UV_CACHE_DIR = "$PWD\.uv-cache"
uv sync
```

If you hit a stuck Windows cache rename error, remove the local cache and retry:

```powershell
Remove-Item -Recurse -Force .\.uv-cache -ErrorAction SilentlyContinue
$env:UV_CACHE_DIR = "$PWD\.uv-cache"
uv sync
```
## Run Locally

Preferred:

```bash
uv run server
```

Direct Python entrypoint:

```bash
python server/app.py
```

Open the API docs at `http://127.0.0.1:8000/docs`.
## HTTP Endpoints

Core OpenEnv endpoints:

- `POST /reset`
- `POST /step`
- `GET /state`
- `GET /schema`
- `GET /docs`

Helper endpoints:

- `GET /cases`: lists built-in case ids
- `GET /state_full`: returns full typed state
- `GET /episode_summary`: returns final reward and reviewer output
## Inference Script

`inference.py` uses the OpenAI Python client against an OpenAI-compatible API.

Environment variables:

- `API_BASE_URL`
- `MODEL_NAME`
- `HF_TOKEN`

Optional:

- `ENV_BASE_URL` (default: `http://127.0.0.1:8000`)

Run:

```bash
python inference.py
```

Notes:

- the script runs one task per invocation and emits strict `[START]`, `[STEP]`, and `[END]` lines
- task selection and grading are aligned with the built-in moderation cases
- the default `MODEL_NAME` in code is a fallback, used only when your environment configuration does not provide one
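Because the output markers are strict, downstream tooling can group the transcript by tag. A sketch with a hypothetical `split_transcript` helper; only the leading `[START]`/`[STEP]`/`[END]` tags come from the docs above, and everything after a tag is assumed to be free-form text:

```python
def split_transcript(lines: list[str]) -> dict[str, list[str]]:
    """Bucket transcript lines by their leading [START]/[STEP]/[END] tag.

    Lines without a recognized tag are ignored.
    """
    buckets: dict[str, list[str]] = {"START": [], "STEP": [], "END": []}
    for line in lines:
        for tag in buckets:
            if line.startswith(f"[{tag}]"):
                buckets[tag].append(line)
    return buckets
```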
## Docker

Docker uses uv for package installation.

Build:

```bash
docker build -f server/Dockerfile -t openenv-moderation .
```

Run:

```bash
docker run --rm -p 8000:8000 openenv-moderation
```

To run the deployed version:

```bash
docker run -it -p 7860:7860 --platform=linux/amd64 \
  -e API_BASE_URL="YOUR_VALUE_HERE" \
  -e MODEL_NAME="YOUR_VALUE_HERE" \
  -e HF_TOKEN="YOUR_VALUE_HERE" \
  -e ENV_BASE_URL="YOUR_VALUE_HERE" \
  registry.hf.space/sujanmidatani-openenv-multimodal-moderation:latest
```
## Resource Profile

- designed for `<= 2 vCPU`
- designed for `<= 8 GB RAM`
- lightweight JSON-backed retrieval only
- deterministic case selection when `seed` is provided