---
library_name: stable-baselines3
tags:
- reinforcement-learning
- stable-baselines3
- deep-reinforcement-learning
- fluidgym
- active-flow-control
- fluid-dynamics
- simulation
- RBC2D-easy-v0
model-index:
- name: PPO-RBC2D-easy-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FluidGym-RBC2D-easy-v0
      type: fluidgym
    metrics:
    - type: mean_reward
      value: 0.87
      name: mean_reward


---

# PPO on RBC2D-easy-v0 (FluidGym)

This repository is part of the **FluidGym** benchmark results. It contains trained Stable Baselines3 agents for the **RBC2D-easy-v0** environment, one checkpoint per training seed.

## Evaluation Results

### Global Performance (Aggregated across 5 seeds)
**Mean Reward:** 0.87 ± 0.03

### Per-Seed Statistics
| Seed | Mean Reward | Std Dev |
| --- | --- | --- |
| 0 | 0.87 | 0.12 |
| 1 | 0.83 | 0.13 |
| 2 | 0.90 | 0.13 |
| 3 | 0.91 | 0.11 |
| 4 | 0.86 | 0.12 |
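
The aggregated score above is consistent with taking the mean and standard deviation of the five per-seed means. A minimal sanity check (this assumes the aggregate was formed that way; it is not the benchmark's evaluation script):

```python
import numpy as np

# Per-seed mean rewards from the table above (seeds 0-4)
per_seed_means = [0.87, 0.83, 0.90, 0.91, 0.86]

print(f"mean = {np.mean(per_seed_means):.2f}")  # 0.87
print(f"std  = {np.std(per_seed_means):.2f}")   # 0.03
```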

## About FluidGym
FluidGym is a benchmark for reinforcement learning in active flow control.

## Usage
Each seed is contained in its own subdirectory. You can load a model using:
```python
from stable_baselines3 import PPO

# Load the checkpoint for seed 0 from its subdirectory
model = PPO.load("0/ckpt_latest.zip")
```

**Important:** The models were trained with `fluidgym==0.0.2`. To use them with newer
versions of FluidGym, wrap the environment in a `FlattenObservation` wrapper as shown below:
```python
import fluidgym
from fluidgym.wrappers import FlattenObservation
from stable_baselines3 import PPO

# Create the environment and flatten its observations to match
# the observation space the agents were trained with
env = fluidgym.make("RBC2D-easy-v0")
env = FlattenObservation(env)
model = PPO.load("path_to_model/ckpt_latest.zip")

obs, info = env.reset(seed=42)

# One control step with the trained policy
action, _ = model.predict(obs, deterministic=True)
obs, reward, terminated, truncated, info = env.step(action)
```
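
To evaluate an agent over a full episode rather than a single step, the example above extends to a standard Gymnasium-style rollout loop. This is a minimal sketch that only assumes the `reset`/`step` API shown above:

```python
# Run one full episode with the loaded policy and accumulate the reward
obs, info = env.reset(seed=42)
episode_reward = 0.0
done = False

while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    episode_reward += reward
    done = terminated or truncated

print(f"Episode reward: {episode_reward:.2f}")
```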

## References

* [Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control](http://arxiv.org/abs/2601.15015)
* [FluidGym GitHub Repository](https://github.com/safe-autonomous-systems/fluidgym)