---
library_name: stable-baselines3
tags:
- reinforcement-learning
- stable-baselines3
- deep-reinforcement-learning
- fluidgym
- active-flow-control
- fluid-dynamics
- simulation
- RBC2D-hard-v0
model-index:
- name: PPO-RBC2D-hard-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FluidGym-RBC2D-hard-v0
      type: fluidgym
    metrics:
    - type: mean_reward
      value: -0.46
      name: mean_reward
---
# PPO on RBC2D-hard-v0 (FluidGym)

This repository is part of the FluidGym benchmark results. It contains trained Stable-Baselines3 PPO agents for the RBC2D-hard-v0 environment.
## Evaluation Results

### Global Performance (Aggregated across 5 seeds)

**Mean Reward:** -0.46 ± 0.12
### Per-Seed Statistics
| Run | Mean Reward | Std Dev |
|---|---|---|
| Seed 0 | -0.44 | 2.50 |
| Seed 1 | -0.24 | 2.47 |
| Seed 2 | -0.57 | 2.78 |
| Seed 3 | -0.58 | 2.58 |
| Seed 4 | -0.48 | 2.58 |
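The aggregated figure above can be reproduced from the per-seed means. A quick sanity check in plain Python (values copied from the table; this assumes the ±0.12 is the population standard deviation of the five per-seed means, not an average of the per-seed std devs):

```python
# Per-seed mean rewards, copied from the table above.
seed_means = [-0.44, -0.24, -0.57, -0.58, -0.48]

n = len(seed_means)
mean = sum(seed_means) / n
# Population standard deviation (ddof=0) of the per-seed means.
std = (sum((x - mean) ** 2 for x in seed_means) / n) ** 0.5

print(f"{mean:.2f} ± {std:.2f}")  # -0.46 ± 0.12
```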
## About FluidGym
FluidGym is a benchmark for reinforcement learning in active flow control.
## Usage

Each seed is contained in its own subdirectory. You can load a model using:

```python
from stable_baselines3 import PPO

model = PPO.load("0/ckpt_latest.zip")
```
**Important:** The models were trained with `fluidgym==0.0.2`. To use them with newer versions of FluidGym, wrap the environment in a `FlattenObservation` wrapper as shown below:
```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import PPO

# Flatten observations so they match the layout the agents were trained on
env = fluidgym.make("RBC2D-hard-v0")
env = FlattenObservation(env)

model = PPO.load("path_to_model/ckpt_latest.zip")

obs, info = env.reset(seed=42)
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```
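To estimate a mean reward like the one reported above, the single step shown in the snippet can be extended into full-episode rollouts. A minimal sketch, assuming only the Gymnasium-style five-tuple `step` API used above (the `evaluate` helper name and episode count are illustrative, not part of FluidGym):

```python
def evaluate(model, env, n_episodes=10, seed=42):
    """Average undiscounted episode return over n_episodes rollouts."""
    returns = []
    for ep in range(n_episodes):
        obs, info = env.reset(seed=seed + ep)
        done, ep_return = False, 0.0
        while not done:
            # Deterministic actions give reproducible evaluation rollouts
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, info = env.step(action)
            ep_return += reward
            done = terminated or truncated
        returns.append(ep_return)
    return sum(returns) / len(returns)
```

With the loading code above, `evaluate(model, env)` would then return a per-model estimate comparable to the per-seed means in the table.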