Enhance dataset card: Add task categories, tags, update abstract, and improve links

#2 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +14 -7
README.md CHANGED
@@ -1,5 +1,13 @@
 ---
 license: cc-by-nc-sa-4.0
+task_categories:
+- image-text-to-text
+tags:
+- vqa
+- vision-language-model
+- top-down-images
+- aerial-images
+- benchmark
 configs:
 - config_name: default
   data_files:
@@ -67,22 +75,21 @@ dataset_info:
   download_size: 629990525
   dataset_size: 469809749.5
 ---
+
 # TDBench: Benchmarking Vision-Language Models in Understanding Top-Down / Bird's Eye View Images
 
 [Kaiyuan Hou](https://hou-kaiyuan.github.io/)+, [Minghui Zhao](https://scottz.net/)+, [Lilin Xu](https://initxu.github.io/), [Yuang Fan](https://www.linkedin.com/in/yuang-fan/), [Xiaofan Jiang](http://fredjiang.com/) (+: Equally contributing first authors)
 
 #### **Intelligent and Connected Systems Lab (ICSL), Columbia University**
 
-[![paper](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/pdf/2504.03748)
-[![HuggingFace](https://img.shields.io/badge/HuggingFace-Dataset-orange)](https://huggingface.co/datasets/Columbia-ICSL/TDBench)
-
+[Paper](https://huggingface.co/papers/2504.03748) | [Code / Project Page](https://github.com/Columbia-ICSL/TDBench)
 
 <p align="center">
 <img src="images/TDBench.jpg" width="600"></a>
 </p>
 <p align="center"> 8 Representative VLMs on 10 dimensions in TDBench </p>
 
-**Abstract:** The rapid emergence of Vision-Language Models (VLMs) has significantly advanced multimodal understanding, enabling applications in scene comprehension and visual reasoning. While these models have been primarily evaluated and developed for front-view image understanding, their capabilities in interpreting top-down images have received limited attention, partly due to the scarcity of diverse top-down datasets and the challenges in collecting such data. In contrast, top-down vision provides explicit spatial overviews and improved contextual understanding of scenes, making it particularly valuable for tasks like autonomous navigation, aerial imaging, and spatial planning. In this work, we address this gap by introducing TDBench, a comprehensive benchmark for VLMs in top-down image understanding. TDBench is constructed from public top-down view datasets and high-quality simulated images, including diverse real-world and synthetic scenarios. TDBench consists of visual question-answer pairs across ten evaluation dimensions of image understanding. Moreover, we conduct four case studies that commonly happen in real-world scenarios but are less explored. By revealing the strengths and limitations of existing VLM through evaluation results, we hope TDBench to provide insights for motivating future research.
+**Abstract:** Top-down images play an important role in safety-critical settings such as autonomous navigation and aerial surveillance, where they provide holistic spatial information that front-view images cannot capture. Despite this, Vision Language Models (VLMs) are mostly trained and evaluated on front-view benchmarks, leaving their performance in the top-down setting poorly understood. Existing evaluations also overlook a unique property of top-down images: their physical meaning is preserved under rotation. In addition, conventional accuracy metrics can be misleading, since they are often inflated by hallucinations or "lucky guesses", which obscures a model's true reliability and its grounding in visual evidence. To address these issues, we introduce TDBench, a benchmark for top-down image understanding that includes 2000 curated questions for each rotation. We further propose RotationalEval (RE), which measures whether models provide consistent answers across four rotated views of the same scene, and we develop a reliability framework that separates genuine knowledge from chance. Finally, we conduct four case studies targeting underexplored real-world challenges. By combining rigorous evaluation with reliability metrics, TDBench not only benchmarks VLMs in top-down perception but also provides a new perspective on trustworthiness, guiding the development of more robust and grounded AI systems. Project homepage: this https URL
 
 
 ## 📢 Latest Updates
@@ -130,7 +137,7 @@ Top-down images are usually captured from a relatively high altitude, which may
 4. **Z-Axis Perception and Depth Understanding**
    - Assessing the depth reasoning from top-down images
 
-## 🤖 How to run TDBench
+## 🤖 Sample Usage: How to run TDBench
 
 TDBench is fully compatible with [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
 
@@ -169,7 +176,7 @@ python run.py --data tdbench_rot0 \
 ```
 To apply RotationalEval, simply run all rotations
 ```python
-python run.py --data tdbench_rot0 tdbench_rot90 tdbench_rot270 tdbench_rot270 \
+python run.py --data tdbench_rot0 tdbench_rot90 tdbench_rot180 tdbench_rot270 \
 --model <model_name> \
 --verbose \
 --work-dir <results_directory>
@@ -227,4 +234,4 @@ If you have any questions, please create an issue on this repository or contact
 mz2866@columbia.edu.
 
 ---
-[<img src="images/ICSL_Logo.png" width="500"/>](http://icsl.ee.columbia.edu/)
+[<img src="images/ICSL_Logo.png" width="500"/>](http://icsl.ee.columbia.edu/)
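The updated abstract describes RotationalEval (RE) as counting a model's answer only when it is consistent across the 0°/90°/180°/270° views of the same scene. The aggregation step can be sketched roughly as below; the record layout (`question_id`, `rotation`, `is_correct` tuples) is a simplified assumption for illustration, not VLMEvalKit's actual result schema:

```python
from collections import defaultdict

def rotational_eval(records):
    """RotationalEval sketch: a question scores only if the model
    is correct under all four rotations of the same image."""
    by_question = defaultdict(dict)
    for qid, rotation, is_correct in records:
        by_question[qid][rotation] = is_correct

    rotations = {0, 90, 180, 270}
    n_scored = n_consistent = 0
    for answers in by_question.values():
        if set(answers) != rotations:   # skip questions missing a rotation run
            continue
        n_scored += 1
        if all(answers.values()):       # correct under every rotation
            n_consistent += 1
    return n_consistent / n_scored if n_scored else 0.0

# Toy example: q1 is correct at every rotation, q2 fails at 180 degrees.
results = [("q1", r, True) for r in (0, 90, 180, 270)] + \
          [("q2", r, r != 180) for r in (0, 90, 180, 270)]
print(rotational_eval(results))  # 0.5
```

In practice the per-rotation correctness would come from the four `tdbench_rot*` result files produced by the `run.py` command shown in the diff; this sketch only illustrates the "consistent across four views" scoring rule.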