
Simulations of Kolmogorov flow at different Reynolds numbers

This directory contains HDF5 trajectory datasets for 2D Kolmogorov flow simulations across multiple Reynolds numbers.

Each simulation applies monochromatic forcing to the Kolmogorov flow at a fixed Reynolds number, saving 6 snapshots per turnover time for 10,000 turnover times. Because both the integration timestep and the snapshot interval are scaled with the Reynolds number, the total number of timepoints varies across simulations, but is typically around 60,000 snapshots. Burn-in time has been removed from each set of snapshots. The geometry and forcing are identical to those used in Chandler and Kerswell (2013).

Each .h5 file contains a trajectory dataset that stores vorticity snapshots in a 3D array of shape (T, 512, 512), where T is the number of time steps. The file also includes fields for the Reynolds number, domain extent, injection mode, and time between snapshots. In two dimensions, the streamfunction can be uniquely recovered from the vorticity with a Poisson solve, and so the raw vorticity fully characterizes the flow.
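As an illustration of the Poisson solve mentioned above (this helper is not part of the dataset's tooling), the streamfunction can be recovered spectrally on the periodic domain. The sketch below assumes a square domain of extent 2π and a zero-mean streamfunction:

```python
import numpy as np

def streamfunction_from_vorticity(omega, domain_extent=2 * np.pi):
    """Recover the streamfunction psi from a vorticity snapshot omega of shape
    (n, n) on a periodic square domain, by solving lap(psi) = -omega spectrally."""
    n = omega.shape[-1]
    ## Angular wavenumbers for an n-point grid of physical size domain_extent
    k = 2 * np.pi * np.fft.fftfreq(n, d=domain_extent / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0  # placeholder; the zero mode is set explicitly below
    omega_hat = np.fft.fft2(omega)
    ## In Fourier space, -|k|^2 psi_hat = -omega_hat  =>  psi_hat = omega_hat / |k|^2
    psi_hat = omega_hat / k2
    psi_hat[0, 0] = 0.0  # fix the arbitrary additive constant (zero-mean psi)
    return np.real(np.fft.ifft2(psi_hat))
```

From the streamfunction, the velocity field follows as u = (∂ψ/∂y, -∂ψ/∂x).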

Data validation

To validate the data, run

python check_data.py
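The actual contents of check_data.py are not reproduced here, but a minimal validation script of this kind might verify the dataset shape, required attributes, and finiteness of a few snapshots without loading the full trajectory. A hypothetical sketch:

```python
import glob

import h5py
import numpy as np

def validate_file(path):
    """Hypothetical sketch of basic integrity checks; the real check_data.py
    may perform different or additional checks."""
    with h5py.File(path, "r") as f:
        traj = f["trajectory"]
        assert traj.ndim == 3 and traj.shape[1:] == (512, 512), traj.shape
        for key in ("reynolds_number", "time_between_snapshots"):
            assert key in f.attrs, f"missing attribute {key}"
        ## Spot-check a few snapshots for non-finite values, without
        ## reading the whole trajectory into memory
        for idx in (0, traj.shape[0] // 2, traj.shape[0] - 1):
            assert np.isfinite(traj[idx]).all(), f"non-finite values at step {idx}"

if __name__ == "__main__":
    for path in sorted(glob.glob("*.h5")):
        validate_file(path)
        print(f"{path}: OK")
```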

Usage example

To stream each simulation, we can use the h5py library to read the trajectory in chunks. h5py datasets are read lazily: data is only loaded into memory when it is sliced or cast to an array, so we can stream the trajectory one chunk at a time.

import h5py
import numpy as np

all_obs = []
batch_size = 64  # number of time steps to read into memory at a time

## Load the data from the Hugging Face repository
from huggingface_hub import HfFileSystem
repo_id = "williamgilpin/kolmo"
path_in_repo = "re40.h5"  # adjust if stored in a subdirectory

fs = HfFileSystem()
with fs.open(f"datasets/{repo_id}/{path_in_repo}", "rb") as remote_f:
    with h5py.File(remote_f, "r") as f:

        ## Load the simulation metadata.
        reynolds_number = float(f.attrs["reynolds_number"])
        nu = 1.0 / reynolds_number # viscosity
        domain_extent = float(f.attrs.get("domain_extent", 2 * np.pi))
        injection_mode = int(f.attrs.get("injection_mode", f.attrs["forcing_frequency"]))
        time_between_snapshots = float(f.attrs["time_between_snapshots"])

        ## Iterate over the trajectory in chunks. Avoid casting the whole
        ## trajectory to a numpy array, because it's too large to fit into memory.
        traj = f["trajectory"]  # shape (T, 512, 512)
        n_steps = traj.shape[0]
        for start in range(0, n_steps, batch_size):
            stop = min(start + batch_size, n_steps)
            print(f"Reading steps {start}-{stop} / {n_steps}", flush=True)
            batch = traj[start:stop]  # only this slice is read from the file
            ## Operate on the batch here, e.g. all_obs.append(batch)
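As one example of a per-batch computation, the mean kinetic energy of each snapshot can be evaluated spectrally from the vorticity alone. The helper below is a sketch, not part of the dataset's tooling, and assumes a square periodic domain:

```python
import numpy as np

def mean_kinetic_energy(omega, domain_extent=2 * np.pi):
    """Mean kinetic energy per unit area of a 2D periodic incompressible flow,
    computed from a single vorticity snapshot omega of shape (n, n)."""
    n = omega.shape[-1]
    k = 2 * np.pi * np.fft.fftfreq(n, d=domain_extent / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0  # placeholder; the zero mode is excluded below
    omega_hat = np.fft.fft2(omega)
    ## For 2D incompressible flow, |u_hat|^2 = |omega_hat|^2 / |k|^2
    energy_density = 0.5 * np.abs(omega_hat) ** 2 / k2
    energy_density[0, 0] = 0.0  # zero-mean vorticity carries no k=0 energy
    return energy_density.sum() / n**4  # Parseval normalization for the grid mean
```

Applied inside the streaming loop above, this reduces each (512, 512) snapshot to a single scalar, so the full trajectory can be summarized without ever holding it in memory.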