---
title: Vc Gemini V0 Environment Server
emoji: 🎵
colorFrom: indigo
colorTo: blue
sdk: docker
pinned: false
app_port: 8000
base_path: /web
tags:
  - openenv
---
# Vc Gemini V0 Environment
A simple test environment that echoes back messages. Perfect for testing the env APIs as well as demonstrating environment usage patterns.
## Quick Start

The simplest way to use the Vc Gemini V0 environment is through the `VcGeminiV0Env` class:
```python
from vc_gemini_v0 import VcGeminiV0Action, VcGeminiV0Env

try:
    # Create environment from Docker image
    vc_gemini_v0env = VcGeminiV0Env.from_docker_image("vc_gemini_v0-env:latest")

    # Reset
    result = vc_gemini_v0env.reset()
    print(f"Reset: {result.observation.echoed_message}")

    # Send multiple messages
    messages = ["Hello, World!", "Testing echo", "Final message"]
    for msg in messages:
        result = vc_gemini_v0env.step(VcGeminiV0Action(message=msg))
        print(f"Sent: '{msg}'")
        print(f"  → Echoed: '{result.observation.echoed_message}'")
        print(f"  → Length: {result.observation.message_length}")
        print(f"  → Reward: {result.reward}")
finally:
    # Always clean up
    vc_gemini_v0env.close()
```
That's it! The `VcGeminiV0Env.from_docker_image()` method handles:

- Starting the Docker container
- Waiting for the server to be ready
- Connecting to the environment
- Container cleanup when you call `close()`
## Building the Docker Image
Before using the environment, you need to build the Docker image:
```bash
# From project root
docker build -t vc_gemini_v0-env:latest -f server/Dockerfile .
```
## Deploying to Hugging Face Spaces

You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:
```bash
# From the environment directory (where openenv.yaml is located)
openenv push

# Or specify options
openenv push --namespace my-org --private
```
The `openenv push` command will:

- Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
- Prepare a custom build for a Hugging Face Docker Space (enables the web interface)
- Upload to Hugging Face (ensuring you're logged in)
### Prerequisites

- **Authenticate with Hugging Face**: The command will prompt for login if not already authenticated
### Options

- `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to the current directory)
- `--repo-id`, `-r`: Repository ID in the format `username/repo-name` (defaults to `username/env-name` from `openenv.yaml`)
- `--base-image`, `-b`: Base Docker image to use (overrides the Dockerfile `FROM`)
- `--private`: Deploy the space as private (default: public)
### Examples
```bash
# Push to your personal namespace (defaults to username/env-name from openenv.yaml)
openenv push

# Push to a specific repository
openenv push --repo-id my-org/my-env

# Push with a custom base image
openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest

# Push as a private space
openenv push --private

# Combine options
openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
```
After deployment, your space will be available at: `https://huggingface.co/spaces/<repo-id>`
The deployed space includes:

- **Web Interface** at `/web` - Interactive UI for exploring the environment
- **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
- **Health Check** at `/health` - Container health monitoring
- **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions
## Environment Details

### Action

`VcGeminiV0Action`: Contains a single field:

- `message` (str) - The message to echo back

### Observation

`VcGeminiV0Observation`: Contains the echo response and metadata:

- `echoed_message` (str) - The message echoed back
- `message_length` (int) - Length of the message
- `reward` (float) - Reward based on message length (length × 0.1)
- `done` (bool) - Always `False` for the echo environment
- `metadata` (dict) - Additional info like step count
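For illustration only, the action and observation shapes described above can be sketched as plain dataclasses. `EchoAction` and `EchoObservation` are hypothetical stand-ins, not the actual classes in `models.py`:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the models in models.py; the field names follow
# the documentation above, but the real classes may differ in detail.
@dataclass
class EchoAction:
    message: str

@dataclass
class EchoObservation:
    echoed_message: str
    message_length: int
    reward: float
    done: bool = False  # always False for the echo environment
    metadata: dict = field(default_factory=dict)

action = EchoAction(message="Hello, World!")
obs = EchoObservation(
    echoed_message=action.message,
    message_length=len(action.message),
    reward=len(action.message) * 0.1,
)
```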
### Reward

The reward is calculated as: `message_length × 0.1`

- "Hi" → reward: 0.2
- "Hello, World!" → reward: 1.3
- Empty message → reward: 0.0
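The rule above is simple enough to express in one line; a minimal sketch (the helper name `echo_reward` is hypothetical, not part of the package):

```python
def echo_reward(message: str) -> float:
    # Reward is proportional to message length: length × 0.1
    return len(message) * 0.1
```

Note that, as with any float arithmetic, values like `13 * 0.1` may carry tiny rounding error, so compare rewards with a tolerance rather than `==`.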
## Advanced Usage

### Connecting to an Existing Server
If you already have a Vc Gemini V0 environment server running, you can connect directly:
```python
from vc_gemini_v0 import VcGeminiV0Action, VcGeminiV0Env

# Connect to existing server
vc_gemini_v0env = VcGeminiV0Env(base_url="<ENV_HTTP_URL_HERE>")

# Use as normal
result = vc_gemini_v0env.reset()
result = vc_gemini_v0env.step(VcGeminiV0Action(message="Hello!"))
```
**Note:** When connecting to an existing server, `vc_gemini_v0env.close()` will NOT stop the server.
### Using the Context Manager

The client supports context manager usage for automatic connection management:
```python
from vc_gemini_v0 import VcGeminiV0Action, VcGeminiV0Env

# Connect with context manager (auto-connects and closes)
with VcGeminiV0Env(base_url="http://localhost:8000") as env:
    result = env.reset()
    print(f"Reset: {result.observation.echoed_message}")

    # Multiple steps with low latency
    for msg in ["Hello", "World", "!"]:
        result = env.step(VcGeminiV0Action(message=msg))
        print(f"Echoed: {result.observation.echoed_message}")
```
The client uses WebSocket connections for:

- **Lower latency**: No HTTP connection overhead per request
- **Persistent session**: The server maintains your environment state
- **Efficient episodes**: Better for many sequential steps
### Concurrent WebSocket Sessions

The server supports multiple concurrent WebSocket connections. To enable this, modify `server/app.py` to use factory mode:
```python
# In server/app.py - use factory mode for concurrent sessions
app = create_app(
    VcGeminiV0Environment,  # Pass the class, not an instance
    VcGeminiV0Action,
    VcGeminiV0Observation,
    max_concurrent_envs=4,  # Allow 4 concurrent sessions
)
```
Then multiple clients can connect simultaneously:
```python
from concurrent.futures import ThreadPoolExecutor

from vc_gemini_v0 import VcGeminiV0Action, VcGeminiV0Env

def run_episode(client_id: int):
    with VcGeminiV0Env(base_url="http://localhost:8000") as env:
        result = env.reset()
        for i in range(10):
            result = env.step(VcGeminiV0Action(message=f"Client {client_id}, step {i}"))
        return client_id, result.observation.message_length

# Run 4 episodes concurrently
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(run_episode, range(4)))
```
## Development & Testing

### Direct Environment Testing
Test the environment logic directly without starting the HTTP server:
```bash
# From the environment root
python3 server/vc_gemini_v0_environment.py
```
This verifies that:
- Environment resets correctly
- Step executes actions properly
- State tracking works
- Rewards are calculated correctly
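As a rough illustration of the behavior that script exercises, the reset/step contract might look like the following. This is a self-contained sketch, not the actual contents of `vc_gemini_v0_environment.py`; the class name `EchoEnvironment` and the dict-based observations are assumptions for readability:

```python
class EchoEnvironment:
    """Minimal sketch of the echo environment's reset/step contract."""

    def __init__(self) -> None:
        self.step_count = 0

    def reset(self) -> dict:
        # Start a fresh episode; the step counter goes back to zero.
        self.step_count = 0
        return {"echoed_message": "", "reward": 0.0, "done": False}

    def step(self, message: str) -> dict:
        # Echo the message back, rewarding proportionally to its length.
        self.step_count += 1
        return {
            "echoed_message": message,
            "message_length": len(message),
            "reward": len(message) * 0.1,
            "done": False,  # the echo environment never terminates
            "metadata": {"step_count": self.step_count},
        }

env = EchoEnvironment()
env.reset()
obs = env.step("Hello, World!")
```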
### Running Locally

Run the server locally for development:

```bash
uvicorn server.app:app --reload
```
## Project Structure

```
vc_gemini_v0/
├── .dockerignore                   # Docker build exclusions
├── __init__.py                     # Module exports
├── README.md                       # This file
├── openenv.yaml                    # OpenEnv manifest
├── pyproject.toml                  # Project metadata and dependencies
├── uv.lock                         # Locked dependencies (generated)
├── client.py                       # VcGeminiV0Env client
├── models.py                       # Action and Observation models
└── server/
    ├── __init__.py                 # Server module exports
    ├── vc_gemini_v0_environment.py # Core environment logic
    ├── app.py                      # FastAPI application (HTTP + WebSocket endpoints)
    └── Dockerfile                  # Container image definition
```