# FluidGym Benchmark Models

Part of the collection *Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control* (65 items).
How to use `safe-autonomous-systems/ma-ppo-RBC3D-medium-v0` with stable-baselines3:

```python
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="safe-autonomous-systems/ma-ppo-RBC3D-medium-v0",
    filename="{MODEL FILENAME}.zip",
)
```

This repository is part of the FluidGym benchmark results. It contains trained Stable-Baselines3 agents for the RBC3D-medium-v0 environment.
Mean Reward: 0.34 ± 0.01
| Run | Mean Reward | Std Dev |
|---|---|---|
| Seed 0 | 0.35 | 0.05 |
| Seed 1 | 0.33 | 0.06 |
| Seed 2 | 0.34 | 0.03 |
| Seed 3 | 0.33 | 0.04 |
| Seed 4 | 0.33 | 0.04 |
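The headline 0.34 ± 0.01 appears to be the mean and standard deviation taken over the five per-seed means in the table above (not over individual episodes). A quick sanity check under that assumption:

```python
from statistics import mean, pstdev

# Per-seed mean rewards from the table above
seed_means = [0.35, 0.33, 0.34, 0.33, 0.33]

aggregate = mean(seed_means)   # mean over seeds
spread = pstdev(seed_means)    # population std dev over seeds

print(f"{aggregate:.2f} ± {spread:.2f}")  # → 0.34 ± 0.01
```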
FluidGym is a benchmark for reinforcement learning in active flow control.
Each seed is contained in its own subdirectory. You can load a model using:
```python
from stable_baselines3 import PPO

# Load the checkpoint trained with seed 0
model = PPO.load("0/ckpt_latest.zip")
```
Important: The models were trained with `fluidgym==0.0.2`. To use them with newer versions of FluidGym, wrap the environment in a `FlattenObservation` wrapper as shown below:

```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import PPO

# Flatten observations so they match the observation space
# the agents were trained on under fluidgym==0.0.2.
env = fluidgym.make("RBC3D-medium-v0")
env = FlattenObservation(env)

model = PPO.load("path_to_model/ckpt_latest.zip")

obs, info = env.reset(seed=42)
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```
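The snippet above performs a single environment step. For a full evaluation episode, it can be extended into a small rollout loop; `run_episode` is a hypothetical helper (not part of FluidGym or Stable-Baselines3) that only assumes the Gymnasium-style `reset`/`step` API shown above:

```python
def run_episode(env, model, seed=42, max_steps=1000):
    """Roll out one deterministic episode and return the total reward."""
    obs, info = env.reset(seed=seed)
    total_reward = 0.0
    for _ in range(max_steps):
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    return total_reward
```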