---
license: mit
task_categories:
- reinforcement-learning
---
# FluidGym Experiments
Paper | GitHub | Documentation
FluidGym is a standalone, fully differentiable benchmark suite for reinforcement learning (RL) in active flow control (AFC). Built entirely in PyTorch on top of the GPU-accelerated PICT solver, it provides standardized evaluation protocols and diverse environments for systematic comparison of control methods.
This repository contains the training and test datasets with results for all experimental runs presented in the paper.
## Sample Usage
FluidGym provides a gymnasium-like interface that can be used as follows:
```python
import fluidgym

env = fluidgym.make("JetCylinder2D-easy-v0")

obs, info = env.reset(seed=42)
for _ in range(50):
    action = env.sample_action()
    obs, reward, term, trunc, info = env.step(action)
    env.render()
    if term or trunc:
        break
```
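Because the interface follows the standard gymnasium reset/step protocol, rollouts can be driven by generic helper code. Below is a minimal sketch of such a rollout helper; the `DummyEnv` class is a hypothetical stand-in used only so the example is self-contained, not part of FluidGym's API.

```python
def rollout(env, max_steps=50, seed=42):
    """Run one episode with random actions and return the summed reward.

    Works with any gymnasium-style environment exposing reset(), step(),
    and sample_action(), as FluidGym environments do.
    """
    obs, info = env.reset(seed=seed)
    total_reward = 0.0
    for _ in range(max_steps):
        action = env.sample_action()
        obs, reward, term, trunc, info = env.step(action)
        total_reward += reward
        if term or trunc:
            break
    return total_reward


# Hypothetical stand-in environment for illustration only; a real FluidGym
# environment (e.g. "JetCylinder2D-easy-v0") follows the same protocol.
class DummyEnv:
    def reset(self, seed=None):
        self._t = 0
        return 0.0, {}

    def sample_action(self):
        return 0.0

    def step(self, action):
        self._t += 1
        # reward of 1.0 per step; episode terminates after 10 steps
        return 0.0, 1.0, self._t >= 10, False, {}


print(rollout(DummyEnv()))  # → 10.0
```

The same `rollout` function can then be pointed at a FluidGym environment created via `fluidgym.make(...)` without modification.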
## Citation
If you use FluidGym in your work, please cite:
```bibtex
@misc{becktepe-fluidgym26,
  title={Plug-and-Play Benchmarking of Reinforcement Learning Algorithms for Large-Scale Flow Control},
  author={Jannis Becktepe and Aleksandra Franz and Nils Thuerey and Sebastian Peitz},
  year={2026},
  eprint={2601.15015},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2601.15015},
  note={GitHub: https://github.com/safe-autonomous-systems/fluidgym},
}
```