---
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- image-text-to-text
tags:
- math
- reasoning
- uav
- aerial-imagery
- multimodal
- vlm
---

# Multimodal Mathematical Reasoning Embedded in Aerial Vehicle Imagery: Benchmarking, Analysis, and Exploration

<p align="center">
    <img src="https://github.com/VisionXLab/avi-math/blob/main/images/avi-math.png?raw=true" width=100%>
</p>

## Abstract

Mathematical reasoning is critical for tasks such as precise distance and area computations, trajectory estimations, and spatial analysis in unmanned aerial vehicle (UAV) based remote sensing, yet current vision-language models (VLMs) have not been adequately tested in this domain. To address this gap, we introduce AVI-Math, the first benchmark to rigorously evaluate multimodal mathematical reasoning in aerial vehicle imagery, moving beyond simple counting tasks to include domain-specific knowledge in areas such as geometry, logic, and algebra. The dataset comprises 3,773 high-quality vehicle-related questions captured from UAV views, covering 6 mathematical subjects and 20 topics. The data, collected at varying altitudes and from multiple UAV angles, reflects real-world UAV scenarios, ensuring the diversity and complexity of the constructed mathematical problems. In this paper, we benchmark 14 prominent VLMs through a comprehensive evaluation and demonstrate that, despite their success on previous multimodal benchmarks, these models struggle with the reasoning tasks in AVI-Math. Our detailed analysis highlights significant limitations in the mathematical reasoning capabilities of current VLMs and suggests avenues for future research. Furthermore, we explore the use of Chain-of-Thought prompting and fine-tuning techniques, which show promise in addressing the reasoning challenges in AVI-Math. Our findings not only expose the limitations of VLMs in mathematical reasoning but also offer valuable insights for advancing UAV-based trustworthy VLMs in real-world applications.

<p align="center">
  <img src="https://github.com/VisionXLab/avi-math/blob/main/images/cat.png?raw=true" width=50%>
  <div style="display: inline-block; color: #999; padding: 2px;">
      ARI: arithmetic, CNT: counting, ALG: algebra, STA: statistics, LOG: logic, GEO: geometry.
  </div>
</p>

---

## Latest Updates

- **[2025.09.15]** We released the benchmark and evaluation code.
- **[2025.09.08]** Accepted by ISPRS JPRS.

---

## Contributions

-   **Benchmark:** We introduce AVI-Math, the first multimodal benchmark for mathematical reasoning in UAV imagery, covering six subjects and real-world UAV scenarios.

-   **Analysis:** We provide a comprehensive analysis, uncovering the limitations of current VLMs in mathematical reasoning and offering insights for future improvements.

-   **Exploration:** We explore the potential of Chain-of-Thought prompting and fine-tuning techniques to enhance VLM performance, providing a 215k-sample instruction set for VLMs to learn domain-specific knowledge in UAV scenarios.

---

## Benchmark

Examples of the six mathematical reasoning subjects in AVI-Math.

<p align="center">
  <img src="https://github.com/VisionXLab/avi-math/blob/main/images/bench1.png?raw=true" width=100%>
</p>
<p align="center">
  <img src="https://github.com/VisionXLab/avi-math/blob/main/images/bench2.png?raw=true" width=100%>
</p>

Please download the [dataset](https://huggingface.co/datasets/erenzhou/AVI-Math) first, then refer to the evaluation code to run inference and compute scores.

---

## Analysis

Accuracy scores on AVI-Math. AVG: average accuracy over the six subjects. FRE: free-form question, CHO: multiple-choice question, T/F: true-or-false question. The highest scores among models in each part and overall are highlighted in blue and red, respectively. All results in the table use the original model weights without fine-tuning.
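As an illustrative sketch (not the official evaluation script), the per-subject accuracies and the AVG column described above can be computed like this, where AVG is the mean of the six subject accuracies and the answer-matching rule shown here is a simplifying assumption:

```python
from collections import defaultdict

# The six subjects, as abbreviated in the figure caption above.
SUBJECTS = ["ARI", "CNT", "ALG", "STA", "LOG", "GEO"]

def score(records):
    """Compute per-subject accuracy and the AVG over the six subjects.

    records: iterable of (subject, predicted_answer, gold_answer) tuples.
    Exact string match (case-insensitive) is an assumption; the official
    evaluation code may normalize answers differently.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for subject, pred, gold in records:
        total[subject] += 1
        if pred.strip().lower() == gold.strip().lower():
            correct[subject] += 1
    per_subject = {
        s: (correct[s] / total[s] if total[s] else 0.0) for s in SUBJECTS
    }
    # AVG is the unweighted mean over subjects, per the table caption.
    avg = sum(per_subject.values()) / len(SUBJECTS)
    return per_subject, avg
```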

<p align="center">
  <img src="https://github.com/VisionXLab/avi-math/blob/main/images/analysis.png?raw=true" width=100%>
</p>

---

## Exploration

Chain-of-Thought and fine-tuning results on various VLMs.

<p align="center">
  <img src="https://github.com/VisionXLab/avi-math/blob/main/images/explore.png?raw=true" width=100%>
</p>
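A minimal sketch of zero-shot Chain-of-Thought prompting as explored above (the wording is an assumption, not the paper's exact prompt): the question is wrapped with an instruction that elicits step-by-step reasoning before the final answer.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question with a zero-shot Chain-of-Thought instruction.

    The exact phrasing is illustrative; the prompt used in the paper's
    experiments may differ.
    """
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer "
        "on a new line as 'Answer: <answer>'."
    )
```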

---

## Usage

The dataset can be downloaded from Hugging Face. For inference and evaluation code, please refer to the [official GitHub repository](https://github.com/VisionXLab/avi-math).
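A minimal sketch for loading the benchmark with the Hugging Face `datasets` library (the split name is an assumption; consult the dataset card for the splits actually provided):

```python
def load_avi_math(split: str = "test"):
    """Download AVI-Math from the Hugging Face Hub.

    Requires `pip install datasets`. The default split name is an
    assumption -- check the dataset card for the actual split names.
    """
    # Imported lazily so defining this helper does not require the
    # `datasets` package to be installed.
    from datasets import load_dataset
    return load_dataset("erenzhou/AVI-Math", split=split)
```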

## Dataset Sources

-   **Paper:** [Multimodal Mathematical Reasoning Embedded in Aerial Vehicle Imagery: Benchmarking, Analysis, and Exploration](https://huggingface.co/papers/2509.10059)
-   **Repository:** https://github.com/VisionXLab/avi-math
-   **Project Page:** https://zytx121.github.io/

**BibTeX:**

```bibtex
@ARTICLE{zhou2025avimath,
  author={Zhou, Yue and Feng, Litong and Lan, Mengcheng and Yang, Xue and Li, Qingyun and Ke, Yiping and Jiang, Xue and Zhang, Wayne},
  journal={ISPRS Journal of Photogrammetry and Remote Sensing}, 
  title={Multimodal Mathematical Reasoning Embedded in Aerial Vehicle Imagery: Benchmarking, Analysis, and Exploration}, 
  year={2025},
  volume={},
  number={},
  pages={},
  doi={}
}
```

## Contact

yzhou@geoai.ecnu.edu.cn