# AV2 2026 Multi-Dataset Scene Flow Challenge — Test Set

## Dataset Description
This is the anonymized test set for the AV2 2026 Scene Flow Challenge — a multi-dataset, single-model-checkpoint scene flow benchmark for autonomous driving.
The dataset aggregates evaluation frames from five diverse LiDAR datasets, covering urban, suburban, and highway driving with different sensor configurations, ego-vehicle platforms, and geographic regions. All scenes are assigned opaque random identifiers before release so that participants cannot determine which source dataset a scene originates from. This design directly reflects the real-world requirement that a deployed perception system must handle any LiDAR sensor it encounters.
- Curated by: Challenge organizers (RPL, KTH Royal Institute of Technology; Carnegie Mellon University; University of Pennsylvania)
- License: CC BY-NC-SA 4.0 — non-commercial use only; individual source datasets retain their original licenses
- Challenge page: EvalAI — AV2 2026 Scene Flow
- Related paper: UniFlow: Zero-Shot LiDAR Scene Flow for Autonomous Driving
## Dataset Statistics
| Split | # Scenes | # Eval Frames | Distance Ranges |
|---|---|---|---|
| Test | 458 | 9,613 | 0–35 m · 35–70 m |
Evaluation frames are sampled at dataset-specific intervals (every 5–10 frames) following the protocol of prior AV2 Scene Flow Challenges. Frames are filtered to retain only those with ≥ 10,000 non-ground points and valid flow annotations.
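The non-ground point-count filter described above can be sketched as a small predicate on the per-frame `ground_mask` array (the function name here is illustrative, not part of the challenge tooling; flow-annotation validity is checked separately):

```python
import numpy as np

def is_eval_candidate(ground_mask: np.ndarray, min_points: int = 10_000) -> bool:
    # ground_mask is True for ground points; a frame qualifies for evaluation
    # only if it has at least min_points non-ground points.
    return int((~ground_mask).sum()) >= min_points

# Toy check: 12,000 points with 3,000 marked ground -> 9,000 non-ground, filtered out.
mask = np.zeros(12_000, dtype=bool)
mask[:3_000] = True
print(is_eval_candidate(mask))  # False
```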
The test scenes are drawn from five source datasets (Argoverse 2, nuScenes, Waymo, TruckScenes, and Aeva). To download the full test set (around 140 GB in total):

```shell
huggingface-cli download kin-zhang/multidata-sf-challenge --repo-type dataset --local-dir ./challenge_data
```
## Dataset Structure

Each scene is stored as a single HDF5 (`.h5`) file. Groups inside the file are keyed by integer timestamp strings. A scene may contain more timestamps than the eval index: all frames are included so that methods can use multi-frame temporal context at inference time.
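Since group keys are integer timestamps stored as strings, sorting them numerically recovers the temporal order. A small sketch of selecting a context window for a target frame (helper names are my own, not part of the dataset tooling):

```python
def sorted_timestamps(group_keys) -> list[int]:
    # HDF5 group names are integer timestamps stored as strings;
    # sort numerically, not lexicographically.
    return sorted(int(k) for k in group_keys)

def context_window(timestamps: list[int], t: int, n_prev: int = 2) -> list[int]:
    # Return the target timestamp plus up to n_prev preceding frames.
    i = timestamps.index(t)
    return timestamps[max(0, i - n_prev) : i + 1]

keys = ["100", "300", "200", "400"]          # as returned by f.keys()
ts = sorted_timestamps(keys)                  # [100, 200, 300, 400]
print(context_window(ts, 300))                # [100, 200, 300]
```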
### Fields per timestamp group

| Field | Shape | dtype | Description |
|---|---|---|---|
| `lidar` | (N, 4) | float32 | Point cloud: x, y, z in metres, plus intensity. Use only xyz for flow. |
| `ground_mask` | (N,) | bool | True for ground points, which are excluded from evaluation. |
| `pose` | (4, 4) | float32 | Ego-vehicle pose in the world frame (SE(3) matrix). |
### Index file

`index_eval.pkl` is a Python pickle containing a list of `(scene_id, timestamp)` tuples identifying exactly which frames require predictions.

```python
import pickle

with open("index_eval.pkl", "rb") as f:
    index = pickle.load(f)  # [(scene_id_str, timestamp_int), ...]
```
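For inference it can help to group the index by scene so each `.h5` file is opened only once. A minimal sketch (the helper name is my own):

```python
from collections import defaultdict

def group_by_scene(index):
    # Group (scene_id, timestamp) eval entries by scene so each
    # {scene_id}.h5 file is opened a single time during inference.
    by_scene = defaultdict(list)
    for scene_id, timestamp in index:
        by_scene[scene_id].append(timestamp)
    return dict(by_scene)

toy = [("abc", 10), ("abc", 20), ("def", 5)]
print(group_by_scene(toy))  # {'abc': [10, 20], 'def': [5]}
```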
### Loading a scene

```python
import h5py
import pickle
import torch

with open("index_eval.pkl", "rb") as f:
    index = pickle.load(f)

scene_id, timestamp = index[0]
with h5py.File(f"{scene_id}.h5", "r") as f:
    pc = torch.tensor(f[str(timestamp)]["lidar"][:][:, :3])  # xyz
    gm = torch.tensor(f[str(timestamp)]["ground_mask"][:])   # ground mask
    pose = torch.tensor(f[str(timestamp)]["pose"][:])        # ego pose
```
## Submission — Quick Start with OpenSceneFlow
The easiest way to train a model and generate a valid submission is via the OpenSceneFlow framework, which natively supports this challenge's data format and submission protocol.
### Step 1 — Install OpenSceneFlow

```shell
git clone https://github.com/KTH-RPL/OpenSceneFlow.git
cd OpenSceneFlow
```
### Step 2 — Train (or download a pre-trained checkpoint)

Train on any combination of supported datasets following the OpenSceneFlow README. Pre-trained UniFlow checkpoints are available on the project page.

Rule: Do not use any validation split from Argoverse 2, nuScenes, Waymo, TruckScenes, or Aeva for training. A single model checkpoint/method must cover all five datasets.
### Step 3 — Run inference on the challenge test set

Point `dataset_path` at the directory containing `index_eval.pkl` and all `{scene_id}.h5` files. Set `leaderboard_version=3` and `save_res=True`:

```shell
python eval.py \
    checkpoint=/path/to/your/model.ckpt \
    data_mode=test \
    dataset_path=/path/to/challenge_public \
    leaderboard_version=3 \
    save_res=True
```
Example output:

```
Model: DeltaFlow, Checkpoint from: /path/to/your/model.ckpt
Test results saved in: /path/to/challenge_public/../results/deltaflow-xxx-test-v3
Please run submit command and upload to online leaderboard for results.
evalai challenge <CHALLENGE_ID> phase <PHASE_ID> submit \
    --file /path/to/results/deltaflow-xxx-test-v3.zip --large --private
```
The script automatically packages predictions into the correct zip layout:

```
deltaflow-xxx-test-v3.zip
└── {scene_id}/
    ├── {timestamp_0}.feather
    ├── {timestamp_1}.feather
    └── ...
```
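Before submitting, it can be worth checking that every entry in `index_eval.pkl` has a matching prediction file. A hypothetical checker (the function name is my own, not part of the challenge tooling):

```python
import tempfile
from pathlib import Path

def missing_predictions(pred_root, index):
    # Return every (scene_id, timestamp) entry that lacks a matching
    # {scene_id}/{timestamp}.feather file under pred_root.
    # A submission is complete only if this list is empty.
    root = Path(pred_root)
    return [(s, t) for s, t in index if not (root / s / f"{t}.feather").exists()]

# Toy demo: one scene with only one of two required predictions present.
demo = Path(tempfile.mkdtemp())
(demo / "scene-a").mkdir()
(demo / "scene-a" / "100.feather").touch()
gaps = missing_predictions(demo, [("scene-a", 100), ("scene-a", 200)])
print(gaps)  # [('scene-a', 200)]
```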
### Step 4 — Submit via EvalAI

Copy the `evalai` command printed by the script and run it, or upload the zip manually via the Submit tab on the challenge page:

```shell
# CLI submission (requires: pip install evalai)
evalai challenge <CHALLENGE_ID> phase <PHASE_ID> submit \
    --file /path/to/results/xxx-test-v3.zip --large --private
```
## Submission Format (manual)

If you are not using OpenSceneFlow, produce per-point flow in ego-motion-subtracted (relative) format for every `(scene_id, timestamp)` in `index_eval.pkl` and save it as Apache Feather files:

```python
import pandas as pd
from pathlib import Path

# pred_flow: (N, 3) float32 numpy array — relative flow (ego motion already removed)
# N = total number of points in lidar (including ground points; masked server-side)
out_dir = Path("submission") / scene_id
out_dir.mkdir(parents=True, exist_ok=True)
pd.DataFrame({
    "flow_tx_m": pred_flow[:, 0],
    "flow_ty_m": pred_flow[:, 1],
    "flow_tz_m": pred_flow[:, 2],
}).to_feather(out_dir / f"{timestamp}.feather")  # filename = exact timestamp int
```
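Since predictions must be ego-motion-subtracted, the rigid flow induced by the ego vehicle has to be removed before saving. The helper below is a sketch of one common convention, not the organizers' exact code (function name and argument layout are my own); it assumes `pose` is the per-frame ego-to-world SE(3) matrix described above:

```python
import numpy as np

def subtract_ego_motion(points_t, total_flow, pose_t, pose_t1):
    # points_t:   (N, 3) xyz in the ego frame at time t
    # total_flow: (N, 3) per-point flow at t, including apparent motion
    #             caused by the ego vehicle itself
    # pose_t, pose_t1: (4, 4) ego-to-world SE(3) poses at t and t+1.
    # For a static point, the rigid transform T = pose_t1^-1 @ pose_t
    # predicts its apparent displacement; subtracting it leaves only
    # the motion of the world, which is what the challenge scores.
    T = np.linalg.inv(pose_t1) @ pose_t
    moved = points_t @ T[:3, :3].T + T[:3, 3]  # where static points would land
    ego_flow = moved - points_t
    return total_flow - ego_flow

# With identity ego motion, relative flow equals total flow.
pts = np.array([[1.0, 2.0, 3.0]])
flow = np.array([[0.5, 0.0, 0.0]])
rel = subtract_ego_motion(pts, flow, np.eye(4), np.eye(4))
```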
Then zip and upload:

```shell
cd submission && zip -r ../submission.zip .
```
## Evaluation Metric

The primary metric is Dynamic Bucket-Normalized EPE (lower is better), reported at two Chebyshev XY distance ranges:

| Range | Description |
|---|---|
| 0–35 m | Near range; matches the prior challenge protocol |
| 35–70 m | Far range; tests long-range generalization |

Entries are ranked by mean Dynamic, the grand mean of Dynamic EPE across both distance ranges and all five source datasets. Per-dataset and per-range breakdowns are shown in the full leaderboard.
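To illustrate the range split, here is a simplified sketch that computes mean EPE of dynamic points per Chebyshev XY distance band. It deliberately omits the speed-bucket normalization of the official Bucket-Normalized EPE, so it is not the leaderboard metric, only the range-masking idea:

```python
import numpy as np

def epe_by_range(points, pred, gt, dyn_mask, ranges=((0, 35), (35, 70))):
    # points: (N, 3) xyz; pred, gt: (N, 3) relative flow; dyn_mask: (N,) bool.
    epe = np.linalg.norm(pred - gt, axis=1)          # per-point endpoint error
    cheb = np.max(np.abs(points[:, :2]), axis=1)     # Chebyshev XY distance
    out = {}
    for lo, hi in ranges:
        m = dyn_mask & (cheb >= lo) & (cheb < hi)
        out[(lo, hi)] = float(epe[m].mean()) if m.any() else float("nan")
    return out

# Toy example: one near-range point with 1 m error, one far-range with 2 m.
points = np.array([[10.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
pred = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
gt = np.zeros((2, 3))
dyn = np.array([True, True])
res = epe_by_range(points, pred, gt, dyn)
print(res)  # {(0, 35): 1.0, (35, 70): 2.0}
```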
## Uses

### Intended Use
This dataset is intended exclusively for the AV2 2026 Scene Flow Challenge. It benchmarks the ability of a single model to estimate LiDAR scene flow across diverse sensors and driving scenarios without knowing the source dataset.
Participants should use the test set only for generating challenge submissions. No validation set from any of the five source datasets may be used for training.
### Out-of-Scope Use
- Training or fine-tuning any model
- Any commercial application
- Any use that violates the license terms of the individual source datasets
## Dataset Creation

### Curation Rationale
Prior scene flow benchmarks evaluate models on a single dataset and sensor. This dataset was created to measure zero-shot cross-domain generalization — a property increasingly important as autonomous systems are deployed across diverse hardware platforms and geographic regions.
The scene anonymization design ensures that leaderboard rankings reflect genuine multi-sensor generalization rather than dataset-specific tuning.
### Source Data
Frames are selected from five publicly available autonomous driving datasets. For each source dataset, scenes are sampled from the prescribed validation split. Frames are filtered to retain only those with sufficient non-ground point density and valid flow ground-truth annotations. Ground-truth flow is derived from 3D bounding-box tracks using rigid-body point assignment following the procedure in Khatri et al. (2024).
Scene IDs are replaced with random UUID hex strings before public release. A private server-side mapping links each anonymous ID back to the source dataset and real scene identifier for per-dataset scoring.
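The anonymization step described above can be sketched as follows; this is illustrative only (the helper name is my own, and the organizers' actual pipeline is not public):

```python
import uuid

def anonymize(scene_ids):
    # Map each real scene ID to an opaque random UUID hex string, and keep
    # a private reverse mapping for server-side per-dataset scoring.
    public = {sid: uuid.uuid4().hex for sid in scene_ids}
    private = {anon: sid for sid, anon in public.items()}
    return public, private

pub, priv = anonymize(["av2/scene-001", "nuscenes/scene-042"])
```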
### Personal and Sensitive Information
All point clouds are collected from ego-vehicles in public road environments. Individual source datasets apply face and license-plate blurring where applicable. No personally identifiable information is present in the LiDAR point clouds.
## Bias, Risks, and Limitations
- Geographic coverage is limited to North America, Europe, and East Asia (urban and highway).
- Sensor diversity covers spinning mechanical LiDARs (32- and 64-beam) and one FMCW sensor; other modalities are not represented.
- Ground-truth flow is derived from 3D bounding-box annotations, which may miss unlabeled or partially visible objects.
- Evaluation focuses on non-ground points; performance on ground-level motion (e.g., debris) is not measured.
- The anonymization prevents dataset-specific debugging; participants must rely on their own held-out data for ablations.
## Citation

If you use this dataset or report results from this challenge, please cite the following works as well as OpenSceneFlow:

```bibtex
@article{li2025uniflow,
  author  = {Li, Siyi and Zhang, Qingwen and Khatri, Ishan and Vedder, Kyle and
             Eaton, Eric and Ramanan, Deva and Peri, Neehar},
  title   = {UniFlow: Zero-Shot {LiDAR} Scene Flow for Autonomous Driving},
  journal = {arXiv preprint arXiv:2511.18254},
  year    = {2025}
}

@inproceedings{khatri2024sceneflow,
  author    = {Khatri, Ishan and Vedder, Kyle and Peri, Neehar and Ramanan, Deva and Hays, James},
  title     = {I Can't Believe It's Not Scene Flow!},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2024}
}
```
## Dataset Card Contact
Please open an issue on the associated GitHub repository or post in the EvalAI challenge forum.