westlake-ai4s committed
Commit 5c2bc33 · verified · 1 Parent(s): b74bed3

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +83 -86

README.md CHANGED
@@ -19,10 +19,6 @@ task_categories:
 
  # RealPDEBench
 
- > **⚠️ Data Update Notice (2026-01-13)**
- >
- > We are updating the dataset format to support dynamic `N_autoregressive` values. The new V2 format will be available before **January 15, 2026**. Please wait for the update before downloading.
-
  [![HF Dataset](https://img.shields.io/badge/HF%20Dataset-RealPDEBench-FFD21E?logo=huggingface)](https://huggingface.co/datasets/AI4Science-WestlakeU/RealPDEBench)
  [![arXiv](https://img.shields.io/badge/arXiv-2601.01829-b31b1b?logo=arxiv)](https://arxiv.org/abs/2601.01829)
  [![Website & Docs](https://img.shields.io/badge/Website%20%26%20Docs-realpdebench.github.io-1f6feb?logo=google-chrome)](https://realpdebench.github.io/)
@@ -82,34 +78,14 @@ This Hub repository (`AI4Science-WestlakeU/RealPDEBench`) is the **release repo**
 
  ## Data format on the Hub
 
- Each split is stored as a Hugging Face `datasets.Dataset` saved with `Dataset.save_to_disk()`. Concretely, each split is a directory containing:
- - `data-*.arrow` (sharded Arrow files, float32 payloads stored as bytes)
- - `dataset_info.json`
- - `state.json`
-
- ### test_mode metadata (JSON)
-
- RealPDEBench supports `test_mode` evaluation splits (`in_dist`, `out_dist`, `seen`, `unseen`). The group definitions are shipped as JSON dicts per scenario:
-
- - `in_dist_test_params_{type}.json`
- - `out_dist_test_params_{type}.json`
- - `remain_params_{type}.json`
-
- where `{type}` is `real` or `numerical`.
-
- ### Temporal windowing (what an “example” means)
-
- RealPDEBench is stored as **sliding windows** cut from longer trajectories. Each row corresponds to `(sim_id, time_id)`:
-
- - `sim_id`: which trajectory (HDF5 file)
- - `time_id`: start index of the window
-
- Typical window lengths \(T\):
- - **40 frames** for `cylinder`, `fsi`, `foil`, `combustion` (often used as 20‑step input + 20‑step output)
- - **20 frames** for `controlled_cylinder` (often 10 + 10)
- - **20 frames** for `combustion/surrogate_train` (surrogate model training data)
-
- **Intended layout for the full release** (mirrors the on-disk structure used by RealPDEBench loaders):
 
  ```
  {repo_root}/
@@ -121,31 +97,28 @@ Typical window lengths \(T\):
          out_dist_test_params_numerical.json
          remain_params_numerical.json
          hf_dataset/
-             real_train/ real_val/ real_test/
-             numerical_train/ numerical_val/ numerical_test/
      fsi/
-         in_dist_test_params_real.json
-         out_dist_test_params_real.json
-         remain_params_real.json
-         in_dist_test_params_numerical.json
-         out_dist_test_params_numerical.json
-         remain_params_numerical.json
-         hf_dataset/
-             ...
      combustion/
-         in_dist_test_params_real.json
-         out_dist_test_params_real.json
-         remain_params_real.json
-         in_dist_test_params_numerical.json
-         out_dist_test_params_numerical.json
-         remain_params_numerical.json
-         hf_dataset/
-             real_train/ real_val/ real_test/
-             numerical_train/              # (val/test intentionally empty)
-             surrogate_train/              # combustion-only (surrogate model training)
-             surrogate_train_sim_ids.txt
-             surrogate_train_meta.json
-     ...
  ```
 
  ### How to download only what you need
@@ -166,58 +139,82 @@ local_dir = snapshot_download(
      endpoint="https://hf-mirror.com",
  )
 
- ds = load_from_disk(os.path.join(local_dir, "fsi", "hf_dataset", "numerical_val"))
- row = ds[0]
- print(row.keys())
  ```
 
  ## Schema (columns)
 
  ### Fluid datasets (`cylinder`, `controlled_cylinder`, `fsi`, `foil`)
 
- - **Keys**:
    - `sim_id` (string): trajectory file name (e.g., `10031.h5`)
-   - `time_id` (int): start frame index of the window
-   - `u`, `v` (bytes): float32 arrays of shape `(T, H, W)`
-   - `p` (bytes): float32 array `(T, H, W)` *(numerical splits only)*
-   - `shape_t`, `shape_h`, `shape_w` (int): shapes for decoding
 
  ### Combustion dataset (`combustion`)
 
- - **Keys**:
    - `sim_id` (string): e.g., `40NH3_1.1.h5`
-   - `time_id` (int): start frame index of the window
-   - `observed` (bytes): float32 array `(T, H, W)` (real: measured intensity; numerical: surrogate intensity)
-   - `numerical` (bytes): float32 array `(T, H, W, 15)` *(numerical splits only)*
    - `numerical_channels` (int): number of numerical channels (15)
-   - `shape_t`, `shape_h`, `shape_w` (int): shapes for decoding
-
- ### Combustion surrogate-train (`combustion/surrogate_train`)
 
- Used to train a surrogate model mapping simulated modalities → real modality (combustion only).
 
- - **Keys**:
-   - `real` (bytes): float32 array `(T, H, W)` (target intensity)
-   - `numerical` (bytes): float32 array `(T, H, W, C)` (input fields)
-   - plus shapes (`*_shape_*`) and `numerical_channels`
 
- ## Current converted data size (local conversion; full release target)
 
- These numbers refer to our current HF Arrow conversion outputs (not all uploaded to this test repo yet):
 
- - **Total**: ~**954GB** across all scenarios
- - **Largest shard file**: ~**0.47GB** (well below the Hub's recommended **<50GB per file**)
- - **Total file count**: ~**2.1k files** (well below the Hub's recommended **<100k files per repo**)
 
- Per-scenario totals (HF Arrow):
 
- | Scenario | Total size |
- |---|---:|
- | combustion | 622GB |
- | cylinder | 116GB |
- | fsi | 34GB |
- | controlled_cylinder | 61GB |
- | foil | 124GB |
 
  ## Recommended benchmark protocols
 
 
  ## Data format on the Hub
 
+ RealPDEBench stores **complete trajectories** in HuggingFace Arrow format, with separate JSON index files for train/val/test splits. This enables dynamic `N_autoregressive` support at runtime.
 
+ Each scenario contains:
+ - **Trajectory data**: `hf_dataset/{real,numerical}/` — Arrow files with complete time series
+ - **Index files**: `hf_dataset/{split}_index_{type}.json` — maps sample indices to `(sim_id, time_id)`
+ - **test_mode metadata**: `{in_dist,out_dist,remain}_params_{type}.json`
 
+ **Repository layout**:
 
  ```
  {repo_root}/
          out_dist_test_params_numerical.json
          remain_params_numerical.json
          hf_dataset/
+             real/                          # Arrow: complete trajectories (92 files)
+                 data-*.arrow
+                 dataset_info.json
+                 state.json
+             numerical/                     # Arrow: complete trajectories
+                 data-*.arrow
+                 dataset_info.json
+                 state.json
+             train_index_real.json          # Index: [{"sim_id": "xxx.h5", "time_id": 0}, ...]
+             val_index_real.json
+             test_index_real.json
+             train_index_numerical.json
+             val_index_numerical.json
+             test_index_numerical.json
      fsi/
+         ... (same structure)
+     controlled_cylinder/
+         ... (same structure)
+     foil/
+         ... (same structure)
      combustion/
+         ... (same structure)
  ```
 
  ### How to download only what you need
 
      endpoint="https://hf-mirror.com",
  )
 
+ # Load trajectory data
+ trajectories = load_from_disk(os.path.join(local_dir, "fsi", "hf_dataset", "real"))
+ print(f"Loaded {len(trajectories)} trajectories")
+ print(trajectories[0].keys())  # sim_id, u, v, shape_t, shape_h, shape_w
+ ```
+
+ ### Using the RealPDEBench loaders (recommended)
+
+ For automatic train/val/test splitting and dynamic `N_autoregressive` support, use the provided dataset loaders:
+
+ ```python
+ from realpdebench.data.fluid_hf_dataset import FSIHFDataset
+
+ dataset = FSIHFDataset(
+     dataset_name="fsi",
+     dataset_root="/path/to/data",
+     dataset_type="real",
+     mode="test",
+     N_autoregressive=10,  # Dynamic! Works with any value
+ )
+
+ input_tensor, output_tensor = dataset[0]
+ print(f"Input shape: {input_tensor.shape}")    # (20, H, W, 2)
+ print(f"Output shape: {output_tensor.shape}")  # (200, H, W, 2) = 20 × 10
  ```
168
  ## Schema (columns)
169
 
170
  ### Fluid datasets (`cylinder`, `controlled_cylinder`, `fsi`, `foil`)
171
 
172
+ - **Keys** (each row = one complete trajectory):
173
  - `sim_id` (string): trajectory file name (e.g., `10031.h5`)
174
+ - `u`, `v` (bytes): float32 arrays of shape `(T_full, H, W)` — **complete time series**
175
+ - `p` (bytes): float32 array `(T_full, H, W)` *(numerical splits only)*
176
+ - `shape_t` (int): **complete trajectory length** (e.g., 3990, 2173)
177
+ - `shape_h`, `shape_w` (int): spatial dimensions
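Because the array columns are raw float32 bytes, each field can be rebuilt with NumPy from the three shape columns. A self-contained sketch; the row below is synthetic stand-in data (a real row has the same keys but `shape_t` in the thousands):

```python
import numpy as np

# Synthetic stand-in for one fluid-dataset row; real rows carry the same
# keys with shape_t in the thousands (e.g., 3990).
rng = np.random.default_rng(0)
t, h, w = 8, 4, 6
row = {
    "sim_id": "10031.h5",
    "u": rng.standard_normal((t, h, w)).astype(np.float32).tobytes(),
    "v": rng.standard_normal((t, h, w)).astype(np.float32).tobytes(),
    "shape_t": t,
    "shape_h": h,
    "shape_w": w,
}

def decode_field(row: dict, key: str) -> np.ndarray:
    """Rebuild a (T_full, H, W) float32 array from a bytes payload."""
    flat = np.frombuffer(row[key], dtype=np.float32)
    return flat.reshape(row["shape_t"], row["shape_h"], row["shape_w"])

u = decode_field(row, "u")
print(u.shape, u.dtype)  # → (8, 4, 6) float32
```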
 
  ### Combustion dataset (`combustion`)
 
+ - **Keys** (each row = one complete trajectory):
    - `sim_id` (string): e.g., `40NH3_1.1.h5`
+   - `observed` (bytes): float32 array `(T_full, H, W)` — **complete time series**
+   - `numerical` (bytes): float32 array `(T_full, H, W, 15)` *(numerical splits only)*
    - `numerical_channels` (int): number of numerical channels (15)
+   - `shape_t` (int): **complete trajectory length** (e.g., 2001)
+   - `shape_h`, `shape_w` (int): spatial dimensions
 
+ ### Index files (JSON)
 
+ Each split has an index file mapping sample indices to trajectory positions:
 
+ ```json
+ [
+   {"sim_id": "10031.h5", "time_id": 0},
+   {"sim_id": "10031.h5", "time_id": 20},
+   {"sim_id": "10031.h5", "time_id": 40},
+   ...
+ ]
+ ```
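One plausible way to combine the two pieces, sketched with synthetic rows: build a `sim_id` → row lookup over the Arrow split once, then slice each indexed window out of the decoded trajectory. The 40-frame window follows the fluid convention (20 input + 20 output frames); the helper names are illustrative assumptions, not the shipped loader API:

```python
import numpy as np

WINDOW = 40  # fluid convention: 20 input + 20 output frames

def make_row(sim_id: str, t: int = 100, h: int = 4, w: int = 6) -> dict:
    # Synthetic trajectory row standing in for one Arrow record.
    data = np.arange(t * h * w, dtype=np.float32).reshape(t, h, w)
    return {"sim_id": sim_id, "u": data.tobytes(),
            "shape_t": t, "shape_h": h, "shape_w": w}

def decode(row: dict, key: str) -> np.ndarray:
    flat = np.frombuffer(row[key], dtype=np.float32)
    return flat.reshape(row["shape_t"], row["shape_h"], row["shape_w"])

# One pass over the split builds the lookup; windows are then cheap slices.
rows = [make_row("10031.h5"), make_row("10032.h5")]
by_sim_id = {r["sim_id"]: r for r in rows}

index = [{"sim_id": "10031.h5", "time_id": 0},
         {"sim_id": "10031.h5", "time_id": 20},
         {"sim_id": "10032.h5", "time_id": 40}]

def fetch_window(entry: dict) -> np.ndarray:
    traj = decode(by_sim_id[entry["sim_id"]], "u")
    t0 = entry["time_id"]
    return traj[t0:t0 + WINDOW]

print(fetch_window(index[1]).shape)  # → (40, 4, 6)
```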
 
+ ## Data size
 
+ - **Total**: ~**210GB** across all scenarios
+ - **Largest shard file**: ~**0.5GB** (well below the Hub's recommended **<50GB per file**)
+ - **Total file count**: ~**550 files** (well below the Hub's recommended **<100k files per repo**)
 
+ Per-scenario totals:
 
+ | Scenario | real | numerical | Total |
+ |---|---:|---:|---:|
+ | cylinder | 23GB | 34GB | 57GB |
+ | controlled_cylinder | 24GB | 36GB | 59GB |
+ | fsi | 6GB | 11GB | 17GB |
+ | foil | 24GB | 37GB | 61GB |
+ | combustion | 1GB | 15GB | 16GB |
+ | **Total** | **78GB** | **133GB** | **~210GB** |
 
  ## Recommended benchmark protocols