larswangtj committed on
Commit 43910e7 · verified · 1 Parent(s): 2ccd2b9

Upload 3 files

Files changed (3):
  1. README.md +150 -1
  2. assets/The overall structure.png +3 -0
  3. clip.py +132 -0

README.md CHANGED
@@ -1,3 +1,152 @@
  ---
- license: apache-2.0
  ---
# **ScenePilot-Bench: A Large-Scale First-Person Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving**

<div align="center">
<img src="assets/The overall structure.png" width="800px">
<p>Figure 1: Overview of the ScenePilot-Bench dataset and evaluation metrics.</p>
</div>

[![Project Page](https://img.shields.io/badge/Project-Website-blue?style=flat-square)](https://github.com/yjwangtj/ScenePilot-Bench)
[![Dataset](https://img.shields.io/badge/Dataset-Download-green?style=flat-square)](https://huggingface.co/datasets/larswangtj/ScenePilot-4K/tree/main)
[![Paper](https://img.shields.io/badge/Paper-Arxiv-red?style=flat-square)](#)

## 📦 Contents Overview

The dataset files in this repository fall into the following categories.

---

## 1. Model Weight Files

- **ScenePilot_2.5_3b_200k_merged.zip**
- **ScenePilot_2_2b_200k_merged.zip**

These two archives contain pretrained model weights obtained by training on the **200k-scale VQA training set** constructed in this work.

- The former corresponds to **Qwen2.5-VL-3B**
- The latter corresponds to **Qwen2-VL-2B**

Both models are trained on the same dataset with a unified training pipeline and are used in the main experiments and comparison studies.

---
## 2. Spatial Perception and Annotation Data

- **VGGT.zip**
  Contains annotation data for spatial perception tasks, including:
  - Ego-vehicle trajectory information
  - Depth-related information

  These annotations support experiments on trajectory prediction and spatial understanding.

- **YOLO.zip**
  Provides 2D object detection results for major traffic participants.
  All detections are generated by a single, unified detection model and serve as perception inputs for downstream VQA and risk assessment tasks.

- **scene_description.zip**
  Contains scene descriptions generated from the original data, including:
  - Weather conditions
  - Road types
  - Other environmental and semantic attributes

  These descriptions are used for scene understanding and for constructing balanced dataset splits.

---

## 3. Dataset Split Definition

- **split_train_test_val.zip**

This file contains the **original video-level dataset split**, including:
- Training set
- Validation set
- Test set

All VQA datasets of different scales are constructed **strictly on top of this video-level split** to avoid scene-level information leakage between splits.
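A video-level split can be checked for leakage with a few lines of Python. The sketch below is an illustrative helper, not part of the released toolkit, and the video IDs are hypothetical; it only assumes each split can be read as a list of video IDs:

```python
def check_no_leakage(train_ids, val_ids, test_ids):
    """Return the set of video IDs that appear in more than one split."""
    splits = {"train": set(train_ids), "val": set(val_ids), "test": set(test_ids)}
    leaked = set()
    names = list(splits)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            leaked |= splits[a] & splits[b]
    return leaked

# Toy example with made-up video IDs:
train = ["01_00001", "01_00002"]
val   = ["01_00003"]
test  = ["01_00004", "01_00002"]   # "01_00002" leaks from the training set
print(check_no_leakage(train, val, test))  # → {'01_00002'}
```

An empty result means no video contributes samples to more than one split.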

---

## 4. VQA Datasets

### 4.1 All-VQA

- **All-VQA.zip**

This archive contains all VQA data in JSON format, organized by training, validation, and test splits.

For example:
- `Deleted_2D_train_vqa_add_new.json`
- `Deleted_2D_train_vqa_new.json`

Together, these two files form the complete training VQA dataset; the remaining files correspond to the validation and test data.
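If the shards are JSON arrays of samples (an assumption about the release format; the per-sample schema is not documented here), the training files above can be concatenated with a short helper. The demo shards and their contents are stand-ins:

```python
import json
import os
import tempfile

def load_vqa_shards(paths):
    """Concatenate several JSON-array VQA shards into one list of samples."""
    samples = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            samples.extend(json.load(f))
    return samples

# Demo with two tiny stand-in shards written to a temp directory:
tmp = tempfile.mkdtemp()
for name, data in [("a.json", [{"q": "color?"}]),
                   ("b.json", [{"q": "speed?"}, {"q": "lane?"}])]:
    with open(os.path.join(tmp, name), "w", encoding="utf-8") as f:
        json.dump(data, f)

merged = load_vqa_shards([os.path.join(tmp, "a.json"), os.path.join(tmp, "b.json")])
print(len(merged))  # → 3
```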

---

### 4.2 Test-VQA

- **Test-VQA.zip**

This archive contains the **100k-scale VQA test sets** used in the experiments.

- `Deleted_2D_test_selected_vqa_100k_final.json`
  Used as the main test set in the primary experiments.

Additional test sets are provided for generalization studies:
- Files ending with `europe`, `japan-and-korea`, `us`, and `other` correspond to the geographic generalization experiments.
- Files ending with `left` correspond to the left-hand-traffic country experiments.

Each test set contains **100k VQA samples**.
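These suffix conventions (together with the `china` and `right` suffixes used for the training subsets described below) can be resolved with a small lookup. This helper is purely illustrative and not part of the release; only the suffix names come from this README:

```python
# Map a file-name suffix to the experiment it belongs to
# (suffixes from this README; the helper itself is hypothetical).
EXPERIMENT_BY_SUFFIX = {
    "europe": "geographic generalization (test)",
    "japan-and-korea": "geographic generalization (test)",
    "us": "geographic generalization (test)",
    "other": "geographic generalization (test)",
    "left": "left-hand traffic (test)",
    "china": "geographic generalization (train)",
    "right": "right-hand traffic (train)",
}

def experiment_for(filename: str) -> str:
    """Classify a VQA file by its name suffix; 'main' if no suffix matches."""
    stem = filename.rsplit(".", 1)[0]
    for suffix, experiment in EXPERIMENT_BY_SUFFIX.items():
        if stem.endswith(suffix):
            return experiment
    return "main"

print(experiment_for("Deleted_2D_test_vqa_europe.json"))  # → geographic generalization (test)
```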

---

### 4.3 Train-VQA

- **Train-VQA.zip**

This archive contains training datasets at two scales:
- **200k VQA**
- **2000k VQA**

Additional subsets include:
- Files ending with `china`, used for the geographic generalization experiments.
- Files ending with `right`, used for the right-hand-traffic country experiments.

---

## 5. Video Index and Download Information

- **video_name_all.xlsx**

This file lists all videos used in the dataset together with their download links, supporting dataset reproduction and access to the original video resources.

---

## 🔧 Data Processing Utility

- **clip.py**

This repository provides a utility script for extracting image frames from raw videos. The script:
- Trims a fixed duration from the beginning and end of each video
- Samples frames at a fixed rate
- Organizes extracted frames into structured folders
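With the script's default settings (2 fps sampling, 180 s trimmed from each end), the frame yield per video is easy to estimate. The sketch below mirrors the trimming and sampling arithmetic in `clip.py`; the 10-minute, 30 fps video is hypothetical:

```python
SAMPLING_FPS = 2      # frames kept per second (clip.py default)
TRIM_HEAD_SEC = 180   # seconds dropped from the start
TRIM_TAIL_SEC = 180   # seconds dropped from the end

def expected_frames(duration_sec: float, video_fps: float) -> int:
    """Approximate number of frames clip.py saves from one video."""
    usable = duration_sec - TRIM_HEAD_SEC - TRIM_TAIL_SEC
    if usable <= 0:
        return 0  # clip.py skips videos that are too short after trimming
    # clip.py keeps every `interval`-th frame of the usable span
    interval = max(1, int(video_fps / SAMPLING_FPS))
    return int(usable * video_fps) // interval

print(expected_frames(600, 30))  # 10-min video → 240 usable s → 480 frames
```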

## Citation

```bibtex
@article{scenepilot,
  title={ScenePilot-Bench: A Large-Scale First-Person Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving},
  author={Yujin Wang and Yutong Zheng and Wenxian Fan and Jinlong Hong and Wei Tian and Haiyang Yu and Bingzhao Gao and Jianqiang Wang and Hong Chen},
  journal={arXiv preprint},
  year={2025}
}
```

## License

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

This project is licensed under the Apache License 2.0; see the [LICENSE](LICENSE) file for details.
assets/The overall structure.png ADDED

Git LFS Details
  • SHA256: 625c18259e93615245ec221753af0ac712b5da45c4b847443b84cd408a6e48d6
  • Pointer size: 131 Bytes
  • Size of remote file: 508 kB

clip.py ADDED
@@ -0,0 +1,132 @@
```python
import os
import cv2
from tqdm import tqdm

# ================= User Configuration =================
INPUT_FOLDER = "bus"        # Name of the input video folder
PREFIX = "08"               # Output prefix (must match category index)

# Category index reference:
# 01 street_ca
# 02 street_au
# 03 street_cn
# 04 street_eu
# 05 street_kr
# 06 street_us
# 07 highway
# 08 bus

ROOT_DIR = r"your_root_path"    # Root directory of raw videos
CLIPS_DIR = r"your_clips_path"  # Output directory for extracted frames

SAMPLING_FPS = 2        # Frames sampled per second
TRIM_HEAD_SEC = 180     # Trim first N seconds
TRIM_TAIL_SEC = 180     # Trim last N seconds
# ======================================================


def sanitize_filename(name: str) -> str:
    """Remove illegal characters from file names."""
    return "".join(c for c in name if c.isalnum() or c in (" ", "-", "_")).strip()


def process_videos(input_dir, clips_dir, prefix, folder_name):
    output_main_dir = os.path.join(clips_dir, f"{prefix}_{folder_name}")
    os.makedirs(output_main_dir, exist_ok=True)

    video_files = sorted([
        f for f in os.listdir(input_dir)
        if f.lower().endswith(('.mp4', '.mkv', '.avi', '.mov'))
    ])

    print(f"\nFound {len(video_files)} video files")

    video_index = 0

    # Optional: specify a video file name to start processing from
    start_from = None

    for filename in video_files:
        if start_from is not None:
            if filename != start_from:
                print(f"Skip: {filename}")
                video_index += 1
                continue
            else:
                print(f"Start from specified video: {filename}")
                start_from = None

        file_path = os.path.join(input_dir, filename)
        print(f"\nProcessing video: {file_path}")

        cap = cv2.VideoCapture(file_path)
        if not cap.isOpened():
            print("Failed to open video file")
            continue

        fps = cap.get(cv2.CAP_PROP_FPS)
        total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
        if fps <= 0:
            # Guard against corrupt metadata before dividing by fps
            print("Skip: could not read a valid FPS")
            cap.release()
            continue
        duration = total_frames / fps

        print(f"Duration: {duration:.2f} seconds")

        if duration < TRIM_HEAD_SEC + TRIM_TAIL_SEC:
            print("Skip: video too short after trimming")
            cap.release()
            continue

        start_frame = int(TRIM_HEAD_SEC * fps)
        end_frame = int(total_frames - TRIM_TAIL_SEC * fps)
        extract_interval = max(1, int(fps / SAMPLING_FPS))

        video_name = os.path.splitext(filename)[0]
        safe_video_name = sanitize_filename(video_name)
        output_subdir = os.path.join(output_main_dir, f"{prefix}_{video_index:05d}")
        os.makedirs(output_subdir, exist_ok=True)

        cap.set(cv2.CAP_PROP_POS_FRAMES, start_frame)
        frame_count = 0
        saved_count = 0

        for _ in tqdm(
            range(end_frame - start_frame),
            desc=f"Extracting {safe_video_name}",
            ncols=100
        ):
            ret, frame = cap.read()
            if not ret:
                break

            if frame_count % extract_interval == 0:
                output_file = os.path.join(
                    output_subdir,
                    f"{prefix}_{video_index:05d}_{saved_count:06d}.jpg"
                )
                cv2.imwrite(
                    output_file,
                    frame,
                    [int(cv2.IMWRITE_JPEG_QUALITY), 90]
                )
                saved_count += 1

            frame_count += 1

        cap.release()
        print(f"Saved {saved_count} frames to {output_subdir}")
        video_index += 1


def main():
    input_dir = os.path.join(ROOT_DIR, INPUT_FOLDER)

    if not os.path.isdir(input_dir):
        print(f"Invalid input directory: {input_dir}")
        return

    print(f"Start processing folder: {input_dir}")
    process_videos(input_dir, CLIPS_DIR, PREFIX, INPUT_FOLDER)
    print("\nAll videos processed.")


if __name__ == "__main__":
    main()
```
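Output frames follow the `{prefix}_{video_index:05d}_{saved_count:06d}.jpg` pattern used above, so files sort lexicographically in capture order. A quick illustration with hypothetical index values:

```python
prefix, video_index, saved_count = "08", 3, 12  # hypothetical values
name = f"{prefix}_{video_index:05d}_{saved_count:06d}.jpg"
print(name)  # → 08_00003_000012.jpg
```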