ScenePilot-Bench: A Large-Scale First-Person Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving
Figure 1: Overview of the ScenePilot-Bench dataset and evaluation metrics.
📦 Contents Overview
The dataset files in this repository can be grouped into the following categories.
1. Model Weight Files
- ScenePilot_2.5_3b_200k_merged.zip
- ScenePilot_2_2b_200k_merged.zip
These two compressed files contain pretrained model weights obtained by training on a 200k-scale VQA training set constructed in this work.
- The former corresponds to Qwen2.5-VL-3B
- The latter corresponds to Qwen2-VL-2B
Both models are trained using the same dataset and unified training pipeline, and are used in the main experiments and comparison studies.
2. Spatial Perception and Annotation Data
VGGT.zip
Contains annotation data related to spatial perception tasks, including:
- Ego-vehicle trajectory information
- Depth-related information
These annotations are used to support experiments involving trajectory prediction and spatial understanding.
YOLO.zip
Provides 2D object detection results for major traffic participants.
All detections are generated by a unified detection model and are used as perception inputs for downstream VQA and risk assessment tasks.
scene_description.zip
Contains scene description results generated from the original data, including:
- Weather conditions
- Road types
- Other environmental and semantic attributes
These descriptions are used for scene understanding and for constructing balanced dataset splits.
3. Dataset Split Definition
- split_train_test_val.zip
This file contains the original video-level dataset split, including:
- Training set
- Validation set
- Test set
All VQA datasets of different scales are constructed strictly based on this video-level split to avoid scene-level information leakage.
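The video-level disjointness this split guarantees can be checked programmatically. The sketch below is illustrative only: the split names and the toy video IDs are assumptions, not the actual contents of split_train_test_val.zip.

```python
# Hypothetical sketch: the split names and video IDs below are stand-ins
# for the actual contents of split_train_test_val.zip.
def check_video_level_splits(splits: dict[str, list[str]]) -> bool:
    """Return True if no video ID appears in more than one split."""
    seen: dict[str, str] = {}
    for split_name, video_ids in splits.items():
        for vid in video_ids:
            if vid in seen:
                raise ValueError(
                    f"Video {vid!r} appears in both {seen[vid]!r} and {split_name!r}"
                )
            seen[vid] = split_name
    return True

# Toy example standing in for the real split definition
splits = {
    "train": ["video_0001", "video_0002"],
    "val": ["video_0003"],
    "test": ["video_0004"],
}
print(check_video_level_splits(splits))  # True: splits are disjoint
```

Because VQA samples of every scale are derived from these video-level lists, a check like this is enough to rule out scene-level leakage between train, validation, and test.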
4. VQA Datasets
4.1 All-VQA
- All-VQA.zip
This archive contains all VQA data in JSON format.
Files are organized according to training, validation, and test splits.
Examples include:
- Deleted_2D_train_vqa_add_new.json
- Deleted_2D_train_vqa_new.json
These files together form the complete training VQA dataset.
Other files correspond to validation and test data.
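A minimal loading sketch for these JSON files follows. The record fields shown ("image", "question", "answer") are illustrative assumptions, not the dataset's confirmed schema; inspect one file to see the actual keys.

```python
# Sketch for loading a VQA split stored as a JSON array of records.
# The field names used in the demo record are assumptions.
import json
import os
import tempfile

def load_vqa(path: str) -> list[dict]:
    """Load a VQA split file and verify it is a JSON array."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    if not isinstance(data, list):
        raise ValueError(f"Expected a JSON array in {path}, got {type(data).__name__}")
    return data

# Tiny self-contained demo with a toy record
record = {
    "image": "frames/video_0001/000123.jpg",
    "question": "What is the weather condition?",
    "answer": "rainy",
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([record], f)
samples = load_vqa(f.name)
os.unlink(f.name)
print(len(samples), samples[0]["answer"])  # 1 rainy
```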
4.2 Test-VQA
- Test-VQA.zip
This archive contains the 100k-scale VQA test datasets used in the experiments.
Deleted_2D_test_selected_vqa_100k_final.json
Used as the main test set in the primary experiments.
Additional test sets are provided for generalization studies:
- Files ending with europe, japan-and-korea, us, and other correspond to geographic generalization experiments.
- Files ending with left correspond to left-hand traffic country experiments.
Each test set contains 100k VQA samples.
4.3 Train-VQA
- Train-VQA.zip
This archive contains training datasets of different scales:
- 200k VQA
- 2000k VQA
Additional subsets include:
- Files ending with china, used for geographic generalization experiments.
- Files ending with right, used for right-hand traffic country experiments.
5. Video Index and Download Information
- video_name_all.xlsx
This file lists all videos used in the dataset along with their corresponding download links.
It is provided to support dataset reproduction and access to the original video resources.
🔧 Data Processing Utility
- clip.py
This repository provides a utility script for extracting image frames from raw videos.
The script performs the following operations:
- Trims a fixed duration from the beginning and end of each video
- Samples frames at a fixed rate
- Organizes extracted frames into structured folders
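The trimming and sampling steps above can be reduced to computing which frame indices to keep. The sketch below is a guess at that logic, not clip.py itself; the function name and the trim/sampling values are hypothetical example parameters.

```python
# Sketch of the frame-sampling logic described above; the trim duration
# and sampling rate are example values, not clip.py's actual defaults.
def sample_frame_indices(total_frames: int, fps: float,
                         trim_seconds: float, sample_fps: float) -> list[int]:
    """Indices of frames kept after trimming trim_seconds from both ends
    of the video and resampling at sample_fps frames per second."""
    start = int(round(trim_seconds * fps))            # skip the head
    end = total_frames - int(round(trim_seconds * fps))  # skip the tail
    if end <= start:
        return []  # video shorter than the two trimmed segments
    step = max(1, int(round(fps / sample_fps)))       # keep every step-th frame
    return list(range(start, end, step))

# 10 s clip at 30 fps, trim 1 s from each end, keep 2 frames per second
indices = sample_frame_indices(total_frames=300, fps=30.0,
                               trim_seconds=1.0, sample_fps=2.0)
print(indices[:3], len(indices))  # [30, 45, 60] 16
```

The selected indices would then be extracted with a decoder such as OpenCV or ffmpeg and written into per-video folders.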
📚 Citation
@misc{wang2026scenepilotbenchlargescaledatasetbenchmark,
title={ScenePilot-Bench: A Large-Scale Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving},
author={Yujin Wang and Yutong Zheng and Wenxian Fan and Tianyi Wang and Hongqing Chu and Daxin Tian and Bingzhao Gao and Jianqiang Wang and Hong Chen},
year={2026},
eprint={2601.19582},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.19582},
}
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.