---
language:
- en
tags:
- uav
pretty_name: mm-uavbench
---

# MM-UAVBench

A comprehensive multimodal benchmark designed to evaluate the perception, cognition, and planning abilities of Multimodal Large Language Models (MLLMs) in low-altitude UAV scenarios.

## 📚 Dataset Overview

MM-UAVBench focuses on assessing MLLMs' performance in UAV-specific low-altitude scenarios, with three core characteristics:

### Key Features

1. **Comprehensive Task Design**

   19 tasks across 3 capability dimensions (perception/cognition/planning), incorporating UAV-specific considerations: multi-level cognition (object/scene/event) and planning for both aerial and ground agents.

2. **Diverse Real-World Scenarios**
   * 1,549 real-world UAV video clips
   * 2,873 high-resolution UAV images (avg. resolution: 1622 x 1033)
   * Collected from diverse real-world low-altitude scenarios (urban/suburban/rural)

3. **High-Quality Annotations**
   * 5,702 multiple-choice QA pairs in total (see the illustrative sketch below)
   * 16 tasks with manual human annotations
   * 3 additional tasks via rule-based transformation of manual labels
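
For orientation only, here is a rough sketch of what handling one multiple-choice QA entry can look like. The field names, file name, and option format below are hypothetical illustrations; the dataset's actual schema is defined by the annotation files under `tasks/`.

```python
# Hypothetical sketch of a multiple-choice QA entry and its scoring; the real
# field names and layout are defined by the files under tasks/, not by this code.
sample = {
    "media": "images/annotated/example_0001.jpg",  # hypothetical file name
    "question": "What is the vehicle near the intersection doing?",
    "options": {"A": "parking", "B": "turning left", "C": "driving straight", "D": "reversing"},
    "answer": "B",
}

def is_correct(prediction: str, entry: dict) -> bool:
    """Compare a model's letter choice against the gold answer."""
    return prediction.strip().upper() == entry["answer"].upper()

print(is_correct(" b ", sample))  # True
```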

## 🎯 Dataset Structure

```plaintext
MM-UAVBench/
├── images/
│   ├── annotated/   # Annotated images (used for official benchmark evaluation)
│   └── raw/         # Unannotated raw UAV images (open-sourced for custom annotation)
├── tasks/           # QA annotations
├── tools/
│   ├── render_annotated.py   # Script to render labels on raw images
│   └── util.py               # Visualization tools
└── README.md        # Dataset usage guide
```

### Important Notes on Image Files
* **Evaluation Usage**: The benchmark evaluation is conducted using the annotated images in `images/annotated/`.
* **Raw Images for Custom Annotation**: We also open-source the unannotated raw UAV images in `images/raw/`. You can refer to the `tools/render_annotated.py` script to render custom labels on these raw images.
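
Once the dataset is downloaded (see Step 3 below), the layout above can be traversed directly. A minimal sketch, assuming the data sits under `~/MM-UAVBench/data` as used later in this guide; matching raw and annotated images by identical file name is an illustrative assumption, not a documented guarantee:

```python
# Minimal sketch: walk the layout above and pair raw images with annotated
# counterparts. Name-based pairing is an assumption made for illustration.
from pathlib import Path

root = Path("~/MM-UAVBench/data").expanduser()  # adjust to your local path

annotated = {p.name: p for p in (root / "images" / "annotated").iterdir()}
for raw in sorted((root / "images" / "raw").iterdir()):
    status = "annotated" if raw.name in annotated else "no annotated counterpart"
    print(f"{raw.name}: {status}")
```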

## 🚀 Quick Start

### Evaluate MLLMs on MM-UAVBench

MM-UAVBench is fully compatible with [VLMEvalKit](https://github.com/open-compass/VLMEvalKit):

#### Step 1: Install Dependencies

```bash
git clone https://github.com/MM-UAVBench/MM-UAVBench.git
cd MM-UAVBench
git clone https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
```

#### Step 2: Configure Evaluation Dataset

Copy the dataset file to the VLMEvalKit directory:

```bash
cp ~/MM-UAVBench/mmuavbench.py ~/MM-UAVBench/VLMEvalKit/vlmeval/dataset/
```

Edit `~/MM-UAVBench/VLMEvalKit/vlmeval/dataset/__init__.py` and add the following content:

```python
from .mmuavbench import MMUAVBench_Image, MMUAVBench_Video

IMAGE_DATASET = [
    # Existing datasets
    MMUAVBench_Image,
]

VIDEO_DATASET = [
    # Existing datasets
    MMUAVBench_Video,
]
```
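
It is worth confirming the registration before running anything; a quick check from inside the `VLMEvalKit` directory, assuming the copy and edit above succeeded:

```python
# Run from inside VLMEvalKit/ after completing Step 2; an ImportError here
# means the edit to vlmeval/dataset/__init__.py did not take effect.
from vlmeval.dataset import MMUAVBench_Image, MMUAVBench_Video

print(MMUAVBench_Image.__name__, MMUAVBench_Video.__name__)
```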

#### Step 3: Download Dataset

Download the dataset from [Hugging Face](https://huggingface.co/datasets/daisq/MM-UAVBench) and put it in `~/MM-UAVBench/data`.
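
If you prefer a scripted download, the `huggingface_hub` client can fetch the whole repository; a minimal sketch targeting the same directory used throughout this guide:

```python
# Scripted alternative: fetch the full dataset repo into ~/MM-UAVBench/data.
# Requires the huggingface_hub package (pip install huggingface_hub).
from pathlib import Path
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="daisq/MM-UAVBench",
    repo_type="dataset",
    local_dir=Path("~/MM-UAVBench/data").expanduser(),
)
```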

Set the dataset path in `~/MM-UAVBench/VLMEvalKit/.env`:

```
LMUData="~/MM-UAVBench/data"
```
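
Note that `.env` parsers do not necessarily expand `~`, so if the toolkit fails to locate the data, substitute the absolute path, which you can print with:

```python
# Print the absolute data path to paste into .env in case "~" is not expanded.
import os
print(os.path.expanduser("~/MM-UAVBench/data"))
```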

#### Step 4: Run Evaluation

Set the model checkpoint path in `~/MM-UAVBench/VLMEvalKit/vlmeval/config.py` to your target model's path.

Run the evaluation command:

```bash
python run.py \
    --data MMUAVBench_Image MMUAVBench_Video \
    --model Qwen3-VL-8B-Instruct \
    --mode all \
    --work-dir ~/MM-UAVBench/eval_results \
    --verbose
```

### Render Custom Annotations on Raw Images

To generate annotated images from the raw files with our script:

```bash
# 1. Set your MM-UAVBench root directory in render_annotated.py
# 2. Run the annotation rendering script
python tools/render_annotated.py
```
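
The actual label format and drawing logic live in `tools/render_annotated.py`; purely as an illustration of the idea, here is a Pillow sketch that draws boxes on a raw image. The file name and the `(x1, y1, x2, y2)` box format are assumptions, not the dataset's real annotation schema:

```python
# Illustration only: draw hypothetical (x1, y1, x2, y2) boxes on a raw image.
# See tools/render_annotated.py for the dataset's actual rendering logic.
from PIL import Image, ImageDraw

boxes = [(120, 80, 340, 260)]  # hypothetical labels
img = Image.open("images/raw/example_0001.jpg").convert("RGB")  # hypothetical name
draw = ImageDraw.Draw(img)
for box in boxes:
    draw.rectangle(box, outline="red", width=3)
img.save("rendered_example_0001.jpg")
```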

## 📖 Citation

If you find MM-UAVBench useful in your research or applications, please consider giving it a **star⭐** and citing:

```
@article{dai2025mm,
  title={MM-UAVBench: How Well Do Multimodal Large Language Models See, Think, and Plan in Low-Altitude UAV Scenarios?},
  author={Dai, Shiqi and Ma, Zizhi and Luo, Zhicong and Yang, Xuesong and Huang, Yibin and Zhang, Wanyue and Chen, Chi and Guo, Zonghao and Xu, Wang and Sun, Yufei and others},
  journal={arXiv preprint arXiv:2512.23219},
  year={2025},
  url={https://arxiv.org/abs/2512.23219}
}
```