---
license: apache-2.0
language:
- en
---

#### 🎉 To the best of our knowledge, UAVBench is the first vision-language benchmark to assess the low-altitude UAV image-level and region-level understanding and reasoning capabilities of MLLMs.
### 📢 News

This is an ongoing project, and we will continue to improve it.

- 📦 Complete evaluation code coming soon! 🚀
- 📦 Detailed inference tutorial for low-altitude UAV MLLMs coming soon! 🚀
- [05/13/2025] 🚀 We released our UAVBench benchmark and the UAVIT-1M instruction-tuning data on Hugging Face: [UAVIT-1M](https://huggingface.co/datasets/ZhanYang-nwpu/UAVIT-1M).
- [05/13/2025] 🚀 We released 3 low-altitude UAV multi-modal large language model baselines: 🤖[LLaVA1.5-UAV](https://huggingface.co/ZhanYang-nwpu/LLaVA1.5-UAV), 🤖[MiniGPTv2-UAV](https://huggingface.co/ZhanYang-nwpu/MiniGPTv2-UAV), 🤖[GeoChat-UAV](https://huggingface.co/ZhanYang-nwpu/GeoChat-UAV).

### 💎 UAVBench Benchmark

To evaluate the abilities of existing MLLMs on low-altitude UAV vision-language tasks, we introduce the UAVBench benchmark. UAVBench comprises about **966k high-quality data samples** and **43 test units** across **10 image-level and region-level tasks**, covering **261k multi-spatial-resolution and multi-scene images**.

### 🎆 UAVIT-1M Instruction Tuning Dataset

UAVIT-1M consists of approximately **1.24 million diverse instructions** spanning **11 distinct tasks**, covering **789k multi-scene low-altitude UAV images** and about **2,000 spatial resolutions**. Both UAVBench and UAVIT-1M contain purely real-world visual images under rich weather conditions, and both underwent manual sampling verification to ensure high quality.

For more information, please refer to our 🏠[Homepage](https://).

### 📨 Contact

If you have any questions about this project, please feel free to contact zhanyangnwpu@gmail.com.