---
library_name: stable-baselines3
tags:
  - reinforcement-learning
  - stable-baselines3
  - deep-reinforcement-learning
  - fluidgym
  - active-flow-control
  - fluid-dynamics
  - simulation
  - RBC2D-medium-v0
model-index:
  - name: SAC-RBC2D-medium-v0
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: FluidGym-RBC2D-medium-v0
          type: fluidgym
        metrics:
          - type: mean_reward
            value: 0.05
            name: mean_reward
---

# SAC on RBC2D-medium-v0 (FluidGym)

This repository is part of the FluidGym benchmark results. It contains Stable-Baselines3 SAC agents trained on the RBC2D-medium-v0 environment.

## Evaluation Results

### Global Performance (Aggregated across 5 seeds)

**Mean Reward:** 0.05 ± 0.48

### Per-Seed Statistics

| Run    | Mean Reward | Std Dev |
|--------|------------:|--------:|
| Seed 0 |        0.74 |    1.52 |
| Seed 1 |        0.17 |    1.62 |
| Seed 2 |       -0.54 |    1.98 |
| Seed 3 |        0.33 |    1.38 |
| Seed 4 |       -0.46 |    1.85 |
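The aggregated figure above is consistent with the per-seed means: a sketch of how it can be reproduced, assuming the ± term is the population standard deviation across the five seed means (the table entries are themselves rounded to two decimals, so the last digit may differ slightly):

```python
import numpy as np

# Per-seed mean rewards from the table above.
seed_means = np.array([0.74, 0.17, -0.54, 0.33, -0.46])

mean_reward = seed_means.mean()       # close to the reported 0.05
std_across_seeds = seed_means.std()   # close to the reported 0.48

print(f"{mean_reward:.2f} +/- {std_across_seeds:.2f}")
```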

## About FluidGym

FluidGym is a benchmark for reinforcement learning in active flow control.

## Usage

Each seed is contained in its own subdirectory. You can load a model using:

```python
from stable_baselines3 import SAC

# Each subdirectory is named after its seed, e.g. "0" for seed 0.
model = SAC.load("0/ckpt_latest.zip")
```

**Important:** The models were trained with fluidgym==0.0.2. To use them with newer versions of FluidGym, wrap the environment with a FlattenObservation wrapper as shown below:

```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import SAC

# Wrap the environment so observations match the flat observation
# space the agents were trained on (fluidgym==0.0.2).
env = fluidgym.make("RBC2D-medium-v0")
env = FlattenObservation(env)
model = SAC.load("path_to_model/ckpt_latest.zip")

obs, info = env.reset(seed=42)

# Run a single deterministic control step.
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```
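The single step above extends naturally to a full evaluation rollout. A minimal sketch; the `evaluate` helper is illustrative, not part of FluidGym or Stable-Baselines3, and assumes the Gymnasium-style `reset`/`step` API used above:

```python
def evaluate(model, env, n_episodes=5, seed=42):
    """Roll out `model` in `env` and return the mean episodic return."""
    returns = []
    for episode in range(n_episodes):
        # A different seed per episode avoids evaluating identical rollouts.
        obs, info = env.reset(seed=seed + episode)
        done, episode_return = False, 0.0
        while not done:
            action, _ = model.predict(obs, deterministic=True)
            obs, reward, terminated, truncated, info = env.step(action)
            episode_return += reward
            done = terminated or truncated
        returns.append(episode_return)
    return sum(returns) / len(returns)
```

Calling `evaluate(model, env)` on each seed's checkpoint yields per-seed mean rewards comparable to the table above.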

## References