---
viewer: false
annotations_creators:
- expert-annotated
language:
- en
license: cc-by-nc-nd-4.0
multilinguality: monolingual
pretty_name: V2X-Radar
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- autonomous-driving
- cooperative-perception
- multimodal
- 4d-radar
- v2x
- lidar
- camera
task_categories:
- robotics
- object-detection
---


<p align="center">
  <h1 align="center">V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception</h1>
  <p align="center">
    <a href="https://scholar.google.com.hk/citations?user=EUnI2nMAAAAJ&hl=zh-CN&oi=sra"><strong>Lei Yang</strong></a>

    · 

    <a href="https://scholar.google.com.hk/citations?user=0Q7pN4cAAAAJ&hl=zh-CN"><strong>Xinyu Zhang</strong></a>

    ·

    <a href="https://www.tsinghua.edu.cn/"><strong>Jun Li</strong></a>

    ·

    <a href="https://www.tsinghua.edu.cn/"><strong>Chen Wang</strong></a>

    ·

    <a href="https://scholar.google.com.hk/citations?user=S3cQz1AAAAAJ&hl=zh-CN&oi=ao"><strong>Jiaqi Ma</strong></a>

    ·

    <a href="https://scholar.google.com.hk/citations?user=joReSgYAAAAJ&hl=zh-CN&oi=sra"><strong>Zhiying Song</strong></a>

    ·

    <a href="https://scholar.google.com.hk/citations?user=tTnWi_EAAAAJ&hl=zh-CN"><strong>Tong Zhao</strong></a>

    ·

    <a href="https://scholar.google.com.hk/citations?user=tIjCAKEAAAAJ&hl=zh-CN"><strong>Ziying Song</strong></a>

    ·

    <a href="https://scholar.google.com.hk/citations?user=pmzKjcUAAAAJ&hl=zh-CN"><strong>Li Wang</strong></a>

    ·

    <a href="https://www.tsinghua.edu.cn/"><strong> Mo Zhou</strong></a>

    ·

    <a href="https://www.tsinghua.edu.cn/"><strong> Yang Shen</strong></a>

    ·

    <a href="https://scholar.google.com.hk/citations?hl=zh-CN&user=ElfT3eoAAAAJ"><strong> Kai Wu</strong></a>

    ·

    <a href="https://scholar.google.com.hk/citations?user=UKVs2CEAAAAJ&hl=zh-CN"><strong> Chen Lv</strong></a>

</p>


<div align="center">
  <img src="./assets/teaser-v2.jpg" alt="Logo" width="100%">
</div>

<p align="center">
  <br>
    <a href="https://neurips.cc/virtual/2025/poster/121426"><img alt="website" src="https://img.shields.io/badge/Website-Explore%20Now-blueviolet?style=flat&logo=google-chrome"></a>

    <a href="https://arxiv.org/pdf/2411.10962"><img alt="paper" src="https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg"></a>

    <a href="https://github.com/yanglei18/V2X-Radar">

    <img alt="github" src="https://img.shields.io/badge/GitHub-Code-black?style=flat&logo=github"></a>

    <a href='https://youtu.be/nzmj_-9M_lg'><img src='https://img.shields.io/badge/Video-Presentation-F9D371' alt='Docker'></a>


  </a>
    <br></br>

    </a>

  </p>

</p>


This is the official implementation of **"V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception"** (<span style="color:red">**NeurIPS 2025 Spotlight**</span>).

Supported by the [THU OpenMPD Lab](http://openmpd.com/column/V2X-Radar).

## 📘 Dataset Summary
**V2X-Radar** is a large-scale cooperative perception dataset collected from complex urban intersections in mainland China. It is the **first public dataset** that integrates **4D imaging radar**, **LiDAR**, and **multi-view cameras** across **vehicle-to-everything (V2X)** configurations. The dataset aims to advance **multi-sensor fusion**, **cooperative 3D detection**, and **adverse-weather perception** research in autonomous driving.

## 🧩 Supported Tasks
- **3D Object Detection** (Radar/LiDAR/Camera/V2X Fusion)  
- **Cooperative Perception** (V2V / V2I / V2X)  
- **Temporal Misalignment & Communication Delay Benchmarking**  
- **Domain Adaptation and Sensor-Robust Learning**

## 🗣️ Languages
All metadata and annotations are provided in **English**.  
File paths and geographic identifiers are anonymized to comply with Chinese data export regulations.

## 📊 Dataset Structure
```
V2X-Radar
├── V2X-Radar-I                  # KITTI format
│   ├── training
│   │   ├── velodyne
│   │   ├── radar
│   │   ├── calib
│   │   ├── image_1
│   │   ├── image_2
│   │   ├── image_3
│   │   └── label_2
│   └── ImageSets
│       ├── train.txt
│       ├── trainval.txt
│       ├── val.txt
│       └── test.txt
├── V2X-Radar-V                  # KITTI format
│   ├── training
│   │   ├── velodyne
│   │   ├── radar
│   │   ├── calib
│   │   ├── image_2
│   │   └── label_2
│   └── ImageSets
│       ├── train.txt
│       ├── trainval.txt
│       ├── val.txt
│       └── test.txt
└── V2X-Radar-C                  # OpenV2V format
    ├── train
    │   └── 2024-05-15-16-28-09
    │       ├── -1               # roadside agent
    │       │   ├── 00000.pcd - 00250.pcd                  # LiDAR point clouds, timestamps 0 to 250
    │       │   ├── 00000_radar.pcd - 00250_radar.pcd      # 4D radar point clouds, timestamps 0 to 250
    │       │   ├── 00000.yaml - 00250.yaml                # metadata for each timestamp
    │       │   ├── 00000_camera0.jpg - 00250_camera0.jpg  # left camera images
    │       │   ├── 00000_camera1.jpg - 00250_camera1.jpg  # front camera images
    │       │   └── 00000_camera2.jpg - 00250_camera2.jpg  # right camera images
    │       └── 142              # vehicle-side agent
    ├── validate
    └── test
```
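
For the KITTI-format splits, frames are indexed by the IDs listed in `ImageSets/*.txt`. Below is a minimal loading sketch; it assumes point clouds are stored as flat float32 `.bin` files as in standard KITTI (4 channels for LiDAR, 5 for 4D radar per the Data Fields table below), which should be verified against the official devkit.

```python
import numpy as np

# Minimal loader sketch for the KITTI-format splits (V2X-Radar-I / V2X-Radar-V).
# Assumption (not confirmed by this card): point clouds are flat float32 .bin
# files as in standard KITTI, with 4 channels for LiDAR and 5 for 4D radar.

def load_points(path: str, channels: int) -> np.ndarray:
    """Read a flat float32 point-cloud file into an (N, channels) array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, channels)

def load_kitti_labels(path: str) -> list[dict]:
    """Parse a KITTI label_2 file: one object per line, 15 whitespace-separated fields."""
    objects = []
    with open(path) as f:
        for line in f:
            v = line.split()
            objects.append({
                "type": v[0],
                "truncated": float(v[1]),
                "occluded": int(v[2]),
                "alpha": float(v[3]),
                "bbox_2d": [float(x) for x in v[4:8]],      # left, top, right, bottom
                "dimensions": [float(x) for x in v[8:11]],  # height, width, length
                "location": [float(x) for x in v[11:14]],   # x, y, z in camera frame
                "rotation_y": float(v[14]),
            })
    return objects

frame = "000000"
lidar = load_points(f"V2X-Radar-I/training/velodyne/{frame}.bin", channels=4)
radar = load_points(f"V2X-Radar-I/training/radar/{frame}.bin", channels=5)
labels = load_kitti_labels(f"V2X-Radar-I/training/label_2/{frame}.txt")
```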

## ⚙️ Data Fields

| Field | Type | Description |
|:------|:----:|:------------|
| `radar_points` | array(float) | 4D Radar point clouds (x, y, z, doppler, intensity) |
| `lidar_points` | array(float) | LiDAR point clouds |
| `images` | list(image) | Multi-view RGB frames |
| `calibration` | dict | Intrinsics + extrinsics |
| `timestamp` | float | Absolute timestamp (ms) |
| `annotations` | dict | 3D bounding boxes, categories and track IDs |
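
In the OpenV2V-format split (V2X-Radar-C), these fields map onto the per-timestamp files shown in the tree above. A minimal reading sketch, assuming PyYAML and Open3D can parse the files as-is; the metadata keys follow the OpenV2V convention and are an assumption here, not confirmed by this card:

```python
import yaml
import open3d as o3d

# Sketch: read one roadside frame from the OpenV2V-format split (V2X-Radar-C).
# Note: Open3D keeps only x, y, z, so recovering the radar's doppler and
# intensity channels (see the table above) would need a custom .pcd reader.

scene = "V2X-Radar-C/train/2024-05-15-16-28-09"
agent, ts = "-1", "00000"  # "-1" = roadside agent, "142" = vehicle side

with open(f"{scene}/{agent}/{ts}.yaml") as f:
    meta = yaml.safe_load(f)  # poses, calibration, annotations for this timestamp

lidar = o3d.io.read_point_cloud(f"{scene}/{agent}/{ts}.pcd")
radar = o3d.io.read_point_cloud(f"{scene}/{agent}/{ts}_radar.pcd")
print(len(lidar.points), len(radar.points), sorted(meta.keys()))
```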


## 🧭 Data Collection and Geographic Coverage
Data were recorded in **Chinese metropolitan cities** using research-licensed vehicles and roadside units.  
All raw sensor data underwent **manual anonymization** and **privacy filtering** (no personal identities, license plates, or facial information remain).


## ⚖️ Licensing Information
This dataset is released under the **CC BY-NC-ND 4.0** license.

- **Attribution** — Users must credit “V2X-Radar Dataset, 2025”.  
- **Non-Commercial** — Use for research and education only.  
- **No Derivatives** — Do not redistribute modified versions.

Full license text: [https://creativecommons.org/licenses/by-nc-nd/4.0/](https://creativecommons.org/licenses/by-nc-nd/4.0/)


## 🪪 Citation
```bibtex
@article{yang2024v2x,
  title={V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception},
  author={Yang, Lei and Zhang, Xinyu and Li, Jun and Wang, Chen and Ma, Jiaqi and Song, Zhiying and Zhao, Tong and Song, Ziying and Wang, Li and Zhou, Mo and Shen, Yang and Lv, Chen},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2025}
}
```