---
library_name: stable-baselines3
tags:
  - reinforcement-learning
  - stable-baselines3
  - deep-reinforcement-learning
  - fluidgym
  - active-flow-control
  - fluid-dynamics
  - simulation
  - RBC2D-hard-v0
model-index:
  - name: SAC-RBC2D-hard-v0
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: FluidGym-RBC2D-hard-v0
          type: fluidgym
        metrics:
          - type: mean_reward
            value: 0.51
            name: mean_reward
---

# SAC on RBC2D-hard-v0 (FluidGym)

This repository is part of the FluidGym benchmark results. It contains SAC agents trained with Stable Baselines3 on the RBC2D-hard-v0 environment, one per seed.

## Evaluation Results

### Global Performance (aggregated across 5 seeds)

**Mean Reward:** 0.51 ± 0.08

### Per-Seed Statistics

| Run    | Mean Reward | Std Dev |
|--------|-------------|---------|
| Seed 0 | 0.53        | 1.09    |
| Seed 1 | 0.40        | 1.48    |
| Seed 2 | 0.62        | 0.91    |
| Seed 3 | 0.56        | 1.65    |
| Seed 4 | 0.44        | 1.96    |
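The aggregated figure above can be reproduced from the per-seed means. A minimal sketch, assuming the aggregation is mean ± population standard deviation across the five seed means (an assumption, but it matches the reported 0.51 ± 0.08):

```python
# Per-seed mean rewards from the table above.
seed_means = [0.53, 0.40, 0.62, 0.56, 0.44]

# Mean across seeds.
mean = sum(seed_means) / len(seed_means)

# Population standard deviation across the seed means.
std = (sum((m - mean) ** 2 for m in seed_means) / len(seed_means)) ** 0.5

print(f"{mean:.2f} +/- {std:.2f}")  # 0.51 +/- 0.08
```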

## About FluidGym

FluidGym is a benchmark for reinforcement learning in active flow control.

## Usage

Each seed's agent is stored in its own subdirectory (e.g. `0/` for seed 0). You can load a model using:

```python
from stable_baselines3 import SAC

model = SAC.load("0/ckpt_latest.zip")
```

**Important:** The models were trained using `fluidgym==0.0.2`. To use them with
newer versions of FluidGym, you need to wrap the environment with a
`FlattenObservation` wrapper as shown below:
```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import SAC

env = fluidgym.make("RBC2D-hard-v0")
env = FlattenObservation(env)
model = SAC.load("path_to_model/ckpt_latest.zip")

obs, info = env.reset(seed=42)

action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```

## References