Update README.md
license: cc-by-nc-4.0
[[arXiv]](http://arxiv.org/abs/2507.22412) [ICCV 2025]
We introduce UAVScenes, a large-scale dataset designed to benchmark various tasks across both 2D and 3D modalities. Our benchmark dataset is built upon the well-calibrated multi-modal UAV dataset MARS-LVIG, originally developed only for simultaneous localization and mapping (SLAM). We enhance this dataset by providing manually labeled semantic annotations for both images and LiDAR point clouds, along with accurate 6-degree-of-freedom (6-DoF) poses. These additions enable a wide range of UAV perception tasks, including detection, segmentation, depth estimation, 6-DoF localization, place recognition, and novel view synthesis (NVS). To the best of our knowledge, this is the first UAV benchmark dataset to offer both image and LiDAR point cloud semantic annotations (120k labeled pairs), with the potential to advance multi-modal UAV perception research.
## Download

Camera-3D map calibrations are in `sampleinfos_interpolated.json`. <br>
- More sensor and scene information can be found at [MARS-LVIG](https://mars.hku.hk/dataset.html).
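The exact schema of `sampleinfos_interpolated.json` is not documented in this README, so a reasonable first step is to peek at its top-level structure before relying on any particular field. A minimal sketch (only the filename comes from this README; the helper name and everything else are generic):

```python
import json

def inspect_json(path):
    """Peek at a JSON file's top-level structure without assuming a schema."""
    with open(path) as f:
        data = json.load(f)
    if isinstance(data, dict):
        return sorted(data)[:5]   # first few top-level keys
    if isinstance(data, list):
        return data[:1]           # first record
    return data

# Usage: inspect_json("sampleinfos_interpolated.json")
```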
<div style="text-align:center;">
<img src="./pics/dji_m300.png" alt="pic" style="width:50%; height:auto;">
</div>
- UAVScenes consists of 4 large scenes (AMtown, AMvalley, HKairport, and HKisland), and each scene contains multiple runs (e.g., 01, 02, and 03).
<div style="text-align:center;">
<img src="./pics/summary.png" alt="pic" style="width:100%; height:auto;">
</div>
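The scene and run breakdown above can be enumerated programmatically, e.g. when building a data loader over all sequences. A sketch assuming sequence names concatenate scene and run (e.g. `AMtown01`) and that every scene has the same runs — verify both against the actual download layout:

```python
# Scenes are from this README; the run list and the naming
# convention (scene + run, e.g. "AMtown01") are assumptions.
SCENES = ["AMtown", "AMvalley", "HKairport", "HKisland"]
RUNS = ["01", "02", "03"]

def sequence_names(scenes=SCENES, runs=RUNS):
    """Build all scene-run sequence names, e.g. 'AMtown01'."""
    return [f"{scene}{run}" for scene in scenes for run in runs]

print(sequence_names())   # 12 sequences across the 4 scenes
```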
## Baseline Code
Under preparation. Please stay tuned.