---
license: cc-by-sa-4.0
task_categories:
- graph-ml
tags:
- cfd
- aerodynamics
- point-cloud
- surrogate-modeling
- automotive
- drivaer
- drivaerml
size_categories:
- n<1K
---

# DrivAerML Point Clouds

A preprocessed, point-cloud version of the [**DrivAerML**](https://huggingface.co/datasets/neashton/drivaerml) high-fidelity CFD dataset, ready for training point-based deep learning surrogates (PointNet, PCT, DGCNN, Graph Neural Operators, etc.) for automotive external aerodynamics.

The original DrivAerML release contains 500 scale-resolving CFD simulations of parametrically morphed DrivAer notchback geometries and ships as ~31 TB of raw STL / VTP / VTU / OpenFOAM data. This release distills the **surface boundary** of each run down to a single compressed `.npz` file (~10 MB) containing the STL point coordinates and the CFD surface fields interpolated onto those points — so the full usable set fits in **~5 GB** instead of multiple terabytes.

## What's in here

For each run:

- `point_cloud_{i}.npz` — STL surface points with interpolated CFD fields
- `force_mom_{i}.csv` — time-averaged force and moment coefficients (Cd, Cl, Clf, Clr, Cs)

Plus at the dataset root:

- `splits.json` — reproducible train/val/test assignment (seed 42, 80/10/10)
- `processing_log.json` — per-run processing status and nearest-neighbor distance diagnostics

### Directory layout

```
Drivaerml_point_clouds/
├── train/               # 387 runs
│   ├── run_1/
│   │   ├── point_cloud_1.npz
│   │   └── force_mom_1.csv
│   └── ...
├── val/                 # 46 runs
│   └── ...
├── test/                # 52 runs
│   └── ...
├── splits.json
└── processing_log.json
```

Total: **485 runs** of the 500 original designs. The following run IDs are absent from the source dataset: 167, 211, 218, 221, 248, 282, 291, 295, 316, 325, 329, 364, 370, 376, 403, 473.

> **Note on the Hugging Face dataset viewer:** the viewer only previews the tabular `force_mom_*.csv` files (the Cd/Cl/Clf/Clr/Cs coefficients). The actual point-cloud payload lives in the `.npz` files, which the viewer does not render — clone the repo or stream it with `huggingface_hub` to access the geometry and fields.

## `.npz` contents

Each `point_cloud_{i}.npz` has three arrays:

| Key           | Shape    | Dtype     | Description                                       |
| ------------- | -------- | --------- | ------------------------------------------------- |
| `points`      | `(N, 3)` | `float32` | XYZ coordinates of STL surface points             |
| `fields`      | `(N, 5)` | `float32` | CFD surface fields interpolated to each STL point |
| `field_names` | `(5,)`   | `str`     | Names of the columns in `fields`                  |

`N` varies per run (roughly 300k points, matching the STL resolution).

The five field columns are:

1. `CpMeanTrim` — time-averaged pressure coefficient
2. `wallShearStressMeanTrim_mag` — magnitude of time-averaged wall shear stress
3. `wallShearStressMeanTrim_x` — x-component
4. `wallShearStressMeanTrim_y` — y-component
5. `wallShearStressMeanTrim_z` — z-component
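The three arrays round-trip through NumPy's compressed `.npz` container. A small synthetic stand-in (key names and shapes from the table above, random values) shows the layout and a name-based column lookup so downstream code never hard-codes indices:

```python
import io

import numpy as np

# Synthetic stand-in for one run: shapes and key names from this card,
# values are random.
rng = np.random.default_rng(0)
points = rng.normal(size=(100, 3)).astype(np.float32)
fields = rng.normal(size=(100, 5)).astype(np.float32)
field_names = np.array([
    "CpMeanTrim",
    "wallShearStressMeanTrim_mag",
    "wallShearStressMeanTrim_x",
    "wallShearStressMeanTrim_y",
    "wallShearStressMeanTrim_z",
])

# Round-trip through the same compressed .npz container the dataset uses.
buf = io.BytesIO()
np.savez_compressed(buf, points=points, fields=fields, field_names=field_names)
buf.seek(0)
run = np.load(buf, allow_pickle=True)

# Look columns up by name instead of position.
cols = {name: run["fields"][:, j] for j, name in enumerate(run["field_names"])}
cp = cols["CpMeanTrim"]                                                   # (100,)
tau = np.stack([cols[f"wallShearStressMeanTrim_{a}"] for a in "xyz"], 1)  # (100, 3)
```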

## How the preprocessing works

The original DrivAerML surface data is stored in **`boundary_{i}.vtp`**: a high-resolution (~8M point) mesh with CFD fields attached as **cell data**. The STL geometry **`drivaer_{i}.stl`** is a lower-resolution (~300k point) representation of the same surface but without any fields.

To produce a clean point cloud where every geometry point carries its own CFD values, the pipeline performs the following steps for each run:

1. Load the `.vtp` boundary mesh and the `.stl` geometry.
2. Convert the VTP's cell-centered fields to point-centered via PyVista's `cell_data_to_point_data()` (averaging adjacent cells to the shared vertex).
3. Build a `scipy.spatial.cKDTree` over the VTP point coordinates.
4. For each STL point, take the **K=1 nearest neighbor** in the VTP point cloud and copy its field values.
5. Save `points`, `fields`, and `field_names` as a compressed `.npz`.
6. Delete the raw VTP and STL to keep disk usage bounded while processing.
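Steps 3 and 4 can be sketched with `scipy`. The arrays below are small synthetic stand-ins for the ~8M-point VTP cloud and ~300k-point STL surface; here the STL points are an exact subset of the VTP points, so every nearest-neighbor distance comes out zero:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Stand-ins: a dense "VTP" cloud with per-point fields, and a sparser
# "STL" point set drawn from the same surface (an exact subset here).
vtp_points = rng.uniform(size=(8000, 3)).astype(np.float32)
vtp_fields = rng.normal(size=(8000, 5)).astype(np.float32)
stl_points = vtp_points[rng.choice(8000, size=300, replace=False)]

# Steps 3-4: KD-tree over the VTP points, then copy fields from the
# single nearest VTP neighbor of each STL point.
tree = cKDTree(vtp_points)
dist, idx = tree.query(stl_points, k=1)
stl_fields = vtp_fields[idx]                    # (300, 5)

# Diagnostics of the kind logged in processing_log.json.
nn_dist_mean, nn_dist_max = float(dist.mean()), float(dist.max())
```

On the real meshes the distances are small but nonzero, which is exactly what the `nn_dist_*` diagnostics in `processing_log.json` quantify.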

The K=1 nearest-neighbor mapping was chosen deliberately for **benchmark comparability** with existing DrivAerML / DrivAerNet++ leaderboard models (TripNet, FIGConvNet, RegDGCNN), all of which operate on the STL vertices directly. The nearest-neighbor distances are logged in `processing_log.json` for every run so this mapping can be audited.

The 80/10/10 train/val/test split is computed with `numpy.random.default_rng(seed=42)` over a sorted list of successfully-processed run IDs, making it fully reproducible from the `splits.json` manifest.
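The split computation can be sketched on a toy ID list. The shipped `splits.json` is the authoritative assignment; the exact shuffle-and-partition order used to generate it is an assumption in this sketch:

```python
import numpy as np

# Toy stand-in for the sorted list of successfully-processed run IDs.
run_ids = sorted(range(1, 101))

# Seeded generator, as described above; the permute-then-slice scheme
# below is an assumed (but typical) way to realize an 80/10/10 split.
rng = np.random.default_rng(seed=42)
perm = rng.permutation(len(run_ids))

n_train = int(0.8 * len(run_ids))
n_val = int(0.1 * len(run_ids))
train = [run_ids[i] for i in perm[:n_train]]
val = [run_ids[i] for i in perm[n_train:n_train + n_val]]
test = [run_ids[i] for i in perm[n_train + n_val:]]
```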

## Loading

### With `datasets` (CSV force/moment preview only)

The Hugging Face dataset loader will pick up the `force_mom_*.csv` files automatically:

```python
from datasets import load_dataset

ds = load_dataset("Jrhoss/Drivaerml_point_clouds")
# ds["train"][0] -> {"Cd": 0.30, "Cl": 0.07, "Clf": -0.04, "Clr": 0.10, "Cs": 0.05}
```

### Point clouds (recommended)

Clone the repo or snapshot-download it, then load `.npz` files directly:

```python
from pathlib import Path

import numpy as np
from huggingface_hub import snapshot_download

root = Path(snapshot_download("Jrhoss/Drivaerml_point_clouds", repo_type="dataset"))

run = np.load(root / "train" / "run_1" / "point_cloud_1.npz", allow_pickle=True)
points = run["points"]       # (N, 3)
fields = run["fields"]       # (N, 5)
names = run["field_names"]   # ['CpMeanTrim', 'wallShearStressMeanTrim_mag', ...]
```

### Minimal PyTorch `Dataset`

```python
from pathlib import Path

import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset

class DrivAerMLPointClouds(Dataset):
    def __init__(self, root, split="train", n_points=16384):
        self.run_dirs = sorted((Path(root) / split).glob("run_*"),
                               key=lambda p: int(p.name.split("_")[1]))
        self.n_points = n_points

    def __len__(self):
        return len(self.run_dirs)

    def __getitem__(self, idx):
        run_dir = self.run_dirs[idx]
        run_id = int(run_dir.name.split("_")[1])

        npz = np.load(run_dir / f"point_cloud_{run_id}.npz", allow_pickle=True)
        points, fields = npz["points"], npz["fields"]

        # Random subsample for batching
        sel = np.random.choice(len(points), self.n_points, replace=False)
        points, fields = points[sel], fields[sel]

        # Integrated force/moment coefficients
        fm = pd.read_csv(run_dir / f"force_mom_{run_id}.csv").iloc[0]
        coeffs = torch.tensor([fm["Cd"], fm["Cl"], fm["Clf"], fm["Clr"], fm["Cs"]],
                              dtype=torch.float32)

        return {
            "points": torch.from_numpy(points),   # (n_points, 3)
            "fields": torch.from_numpy(fields),   # (n_points, 5)
            "coeffs": coeffs,                     # (5,)
            "run_id": run_id,
        }
```

## Suggested tasks

- **Per-point surface field regression** — predict `CpMeanTrim` and wall shear stress vectors from geometry alone. Comparable to the TripNet / FIGConvNet / RegDGCNN benchmarks (which report ~20% relative L2 error).
- **Integrated coefficient regression** — predict `Cd`, `Cl`, etc. from the point cloud (global pooling over the surface).
- **Coupled prediction** — joint learning of per-point fields and integrated coefficients, using the integrated values as a physics-informed auxiliary loss.
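The "global pooling" recipe for coefficient regression can be sketched as a shared per-point MLP, max pooling over the surface, and a linear head that outputs the five coefficients. Layer sizes and depth below are illustrative, not any benchmark architecture:

```python
import torch
import torch.nn as nn

class GlobalPoolRegressor(nn.Module):
    """Illustrative PointNet-style readout: per-point MLP + max pool + head."""

    def __init__(self, in_dim=3, hidden=128, n_coeffs=5):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_coeffs)

    def forward(self, points):              # (B, n_points, 3)
        h = self.point_mlp(points)          # (B, n_points, hidden)
        g = h.max(dim=1).values             # (B, hidden) global pooling
        return self.head(g)                 # (B, n_coeffs)

model = GlobalPoolRegressor()
coeffs = model(torch.randn(4, 16384, 3))    # one coefficient vector per shape
```

Max pooling makes the prediction invariant to point ordering, which is why it is the standard readout for PointNet-style models.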

## Known limitations

- **K=1 interpolation is not exact.** It preserves the CFD field values faithfully where VTP and STL points are co-located, but introduces small errors at points where the STL has higher local resolution than the VTP. `processing_log.json` reports `nn_dist_mean`, `nn_dist_max`, and `nn_dist_p99` per run so you can filter out any pathological cases.
- **Fields are time-averaged only.** Transient information (e.g. unsteady vortex shedding in the wake) is not preserved; the source dataset contains scale-resolving data but only the trim-averaged fields are interpolated here.
- **Surface only.** The volumetric flow field (the 50 GB-per-run `volume_{i}.vtu`) is not included — go to the source dataset for volumetric surrogate modeling.
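One way to act on the first limitation is to screen runs against the logged diagnostics. The log schema below (run-ID keys with `status` and `nn_dist_*` entries) is an assumption based on the field names mentioned above; check the shipped `processing_log.json` for the exact structure:

```python
import json

# Inline stand-in for a parsed processing_log.json; the schema here is
# assumed, not taken from the shipped file.
log_text = """
{
  "1": {"status": "ok", "nn_dist_mean": 0.0004, "nn_dist_p99": 0.003, "nn_dist_max": 0.012},
  "2": {"status": "ok", "nn_dist_mean": 0.0005, "nn_dist_p99": 0.004, "nn_dist_max": 0.085}
}
"""
log = json.loads(log_text)

# Keep only runs whose worst-case interpolation distance is acceptable.
MAX_NN_DIST = 0.05  # illustrative threshold
good_runs = [rid for rid, rec in log.items()
             if rec["status"] == "ok" and rec["nn_dist_max"] <= MAX_NN_DIST]
```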

## Reproducing this dataset

The full preprocessing script is shipped alongside this repo (`process_drivaerml.py`). To regenerate from scratch:

```bash
pip install pyvista numpy scipy tqdm huggingface_hub
python process_drivaerml.py --output_dir ./drivaerml_processed --start 1 --end 500
```

The script streams one run at a time — downloading the VTP + STL + force/moment CSV from `neashton/drivaerml`, producing the `.npz`, then deleting the raw files — so it runs comfortably in a few tens of GB of free disk regardless of total dataset size.

## License

**CC-BY-SA 4.0**, inherited from the source dataset. If you use this data you must give appropriate credit, indicate any changes, and distribute any derivative works under the same license.

## Citation

Please cite the original DrivAerML paper:

```bibtex
@article{ashton2024drivaer,
  title   = {DrivAerML: High-Fidelity Computational Fluid Dynamics Dataset for Road-Car External Aerodynamics},
  author  = {Ashton, N. and Mockett, C. and Fuchs, M. and Fliessbach, L. and Hetmann, H.
             and Knacke, T. and Schonwald, N. and Skaperdas, V. and Fotiadis, G.
             and Walle, A. and Hupertz, B. and Maddix, D.},
  journal = {arXiv preprint arXiv:2408.11969},
  year    = {2024},
  url     = {https://arxiv.org/abs/2408.11969}
}
```

## Acknowledgments

All credit for the underlying CFD data goes to the DrivAerML team (Neil Ashton et al., AWS / UpstreamCFD / BETA-CAE / Siemens Energy / Ford). This repository only redistributes a preprocessed surface-point-cloud view of that work. For the full multi-terabyte dataset including volumetric fields, OpenFOAM meshes, slice images, and residual plots, see [`neashton/drivaerml`](https://huggingface.co/datasets/neashton/drivaerml).