# WAKESET: A Large-Scale, High-Reynolds Number Flow Dataset
**WAKESET** is a comprehensive Computational Fluid Dynamics (CFD) dataset designed for Machine Learning applications in fluid mechanics. It captures the complex hydrodynamic interactions of an Extra Large Uncrewed Underwater Vehicle (XLUUV).
The dataset comprises **1,091 high-fidelity RANS simulations** (augmented to 4,364 instances via the provided `WAKESET_pytorch.py`), covering Reynolds numbers up to $1.09 \times 10^8$ and turning angles up to 60 degrees.
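The 4x expansion (1,091 raw cases to 4,364 instances) comes from pairing each simulation with augmented variants at load time rather than storing extra files. A minimal sketch of the index arithmetic such a loader might use; the function name and factor layout here are illustrative, not the toolkit's actual API:

```python
N_SIMULATIONS = 1091  # raw RANS cases on disk
AUG_FACTOR = 4        # identity plus flip/rotation variants (illustrative)

def map_index(idx):
    """Map an augmented dataset index to (simulation index, variant index)."""
    return idx // AUG_FACTOR, idx % AUG_FACTOR

total = N_SIMULATIONS * AUG_FACTOR
print(total)           # 4364
print(map_index(0))    # (0, 0)    -> first case, unaugmented
print(map_index(4363)) # (1090, 3) -> last case, last variant
```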
# Directory Structure
```text
WAKESET/
|-- Volumes/ # 3D interpolated grids (128x128x128)
| |-- Forward_0100_ms_Angle_00_CUBE_128/
| |-- Forward_0100_ms_Angle_05_CUBE_128/
| |-- ...
|-- Planes/ # 2D slices (Vertical and Horizontal)
| |-- Vertical/
| | |-- Forward_0100_ms_Angle_00_VERTPLN_ALL/
| | |-- Forward_0100_ms_Angle_05_VERTPLN_ALL/
| | |-- ...
| |-- Horizontal/
| | |-- Forward_0100_ms_Angle_00_HORZPLN_ALL/
| | |-- Forward_0100_ms_Angle_05_HORZPLN_ALL/
| | |-- ...
|-- Examples/ # Python Toolkit
| |-- Python/
| |-- requirements.txt
| |-- WAKESET_pytorch.py # ML Dataloader with on-the-fly augmentation
| |-- load_planes.py # Utilities for 2D data
| |-- load_volumes.py # Utilities for 3D data
| |-- load_visualizations.py # Plotting tools
```
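Case folders encode the operating point in their names: `Forward_0100_ms_Angle_00_CUBE_128` appears to correspond to a forward speed of 1.00 m/s at a 0-degree turning angle (matching `velocity=1.0, angle=0` in the loading example below). A hedged sketch of a parser for this naming convention, assuming the digits after `Forward_` encode speed times 100:

```python
import re

# Assumed convention: Forward_<speed*100>_ms_Angle_<degrees>_<product type>
_CASE_NAME = re.compile(r"Forward_(\d+)_ms_Angle_(\d+)_")

def parse_case_name(name):
    """Extract (speed in m/s, turning angle in degrees) from a folder name."""
    m = _CASE_NAME.search(name)
    if m is None:
        raise ValueError(f"Unrecognized case name: {name}")
    return int(m.group(1)) / 100.0, int(m.group(2))

print(parse_case_name("Forward_0100_ms_Angle_00_CUBE_128"))     # (1.0, 0)
print(parse_case_name("Forward_0100_ms_Angle_05_VERTPLN_ALL"))  # (1.0, 5)
```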
# Quick Start
## 1. Installation
The dataset includes a Python toolkit to streamline loading, parsing, and augmentation.
```bash
cd Examples/Python
pip install -r requirements.txt
```
## 2. Loading 3D Volumes (CFD Data)
The `load_volumes.py` script handles the parsing of sparse CFD exports and reshapes them into structured grids.
```python
import sys
sys.path.append("Examples/Python")
from load_volumes import load_volume
# Load a specific volume
vol_data = load_volume(
    velocity=1.0,
    angle=0,
    variable="velocity_magnitude",
    data_dir="../../Volumes"
)
print(f"Loaded Volume Shape: {vol_data.values.shape}") # (128, 128, 128)
```
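Raw field values span very different ranges across operating points, so volumes are typically normalized before being fed to a network. A minimal min-max sketch on a NumPy array of the same shape as `vol_data.values` (synthetic stand-in data here, not the toolkit's output):

```python
import numpy as np

def minmax_normalize(field, eps=1e-8):
    """Scale a field to [0, 1]; eps guards against constant fields."""
    lo, hi = field.min(), field.max()
    return (field - lo) / (hi - lo + eps)

rng = np.random.default_rng(0)
field = rng.uniform(0.0, 2.5, size=(128, 128, 128))  # stand-in volume
norm = minmax_normalize(field)
print(norm.shape)  # (128, 128, 128)
```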
## 3. Machine Learning (PyTorch)
Use the `WAKESET_pytorch.py` wrapper to plug the dataset directly into an ML pipeline. This loader handles on-the-fly augmentation (rotation and flipping) to expand the effective dataset size to 4,364 instances without using extra disk space.
```python
from torch.utils.data import DataLoader
from WAKESET_pytorch import WakesetVolumeDataset
# Initialize Dataset
dataset = WakesetVolumeDataset(
    root_dir="../../",
    subset='train',
    augment=True  # Enables physics-informed rotation/flipping
)
loader = DataLoader(dataset, batch_size=4, shuffle=True)
# Training Loop
for flow_field, kinematics in loader:
    # flow_field: [Batch, 1, 128, 128, 128]
    # kinematics: [Batch, 2] (Speed, Angle)
    print(flow_field.shape, kinematics.shape)
    break
```
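The `augment=True` flag expands each volume into four variants on the fly. A rough NumPy illustration of the kind of symmetry operations involved; the exact transforms and label bookkeeping in `WAKESET_pytorch.py` are not shown here, and the assumption that a lateral mirror maps a turning angle of theta to -theta is ours:

```python
import numpy as np

def augment_variants(field, angle):
    """Return (field, angle) pairs: identity, lateral flip,
    180-degree in-plane rotation, and their combination (illustrative)."""
    flipped = np.flip(field, axis=1)             # mirror across the lateral axis
    rotated = np.rot90(field, k=2, axes=(0, 1))  # 180-degree in-plane rotation
    both = np.rot90(flipped, k=2, axes=(0, 1))
    return [(field, angle), (flipped, -angle), (rotated, angle), (both, -angle)]

field = np.zeros((128, 128, 128), dtype=np.float32)  # stand-in volume
variants = augment_variants(field, angle=30)
print(len(variants))  # 4
```

All four variants keep the (128, 128, 128) grid shape, so they can share one storage copy and one network architecture.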
## 4. Visualization
Visualize the data using `load_visualizations.py`.
```python
from load_visualizations import visualize_volume_slices
# Visualize the center slices of the previously loaded volume
visualize_volume_slices(vol_data, variable="velocity_magnitude")
```
# Citation
If you use WAKESET in your research, please cite:
Cooper-Baldock, Z., Santos, P. E., Brinkworth, R. S. A., & Sammut, K. (2026). WAKESET: A Large-Scale, High-Reynolds Number Flow Dataset for Machine Learning of Turbulent Wake Dynamics.