Enhance dataset card: Add task categories, tags, update abstract, and improve links
This PR enhances the TDBench dataset card by:
- Adding `task_categories: ['image-text-to-text']` and relevant `tags` to the metadata for improved discoverability on the Hugging Face Hub.
- Updating the abstract to match the comprehensive abstract provided in the paper, ensuring a more accurate description of the dataset and its contributions.
- Making links to the paper (Hugging Face Papers), code repository, and project page more prominent and consolidated at the top of the README.
- Explicitly labeling the 'How to run TDBench' section as 'Sample Usage' for clarity.
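For reference, the card's YAML front matter after this PR takes roughly the following shape (a sketch assembled from the diff below; field values are from the PR, indentation is illustrative):

```yaml
# Dataset card front matter after this PR (illustrative; values from the diff)
license: cc-by-nc-sa-4.0
task_categories:
- image-text-to-text
tags:
- vqa
- vision-language-model
- top-down-images
- aerial-images
- benchmark
configs:
- config_name: default
```

The `task_categories` and `tags` keys are what the Hub indexes for filtering, which is why adding them improves discoverability.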
README.md CHANGED

````diff
@@ -1,5 +1,13 @@
 ---
 license: cc-by-nc-sa-4.0
+task_categories:
+- image-text-to-text
+tags:
+- vqa
+- vision-language-model
+- top-down-images
+- aerial-images
+- benchmark
 configs:
 - config_name: default
   data_files:
@@ -67,22 +75,21 @@ dataset_info:
 download_size: 629990525
 dataset_size: 469809749.5
 ---
+
 # TDBench: Benchmarking Vision-Language Models in Understanding Top-Down / Bird's Eye View Images
 
 [Kaiyuan Hou](https://hou-kaiyuan.github.io/)+, [Minghui Zhao](https://scottz.net/)+, [Lilin Xu](https://initxu.github.io/), [Yuang Fan](https://www.linkedin.com/in/yuang-fan/), [Xiaofan Jiang](http://fredjiang.com/) (+: Equally contributing first authors)
 
 #### **Intelligent and Connected Systems Lab (ICSL), Columbia University**
 
-[
-[](https://huggingface.co/datasets/Columbia-ICSL/TDBench)
-
+[Paper](https://huggingface.co/papers/2504.03748) | [Code / Project Page](https://github.com/Columbia-ICSL/TDBench)
 
 <p align="center">
 <img src="images/TDBench.jpg" width="600"></a>
 </p>
 <p align="center"> 8 Representative VLMs on 10 dimensions in TDBench </p>
 
-**Abstract:**
+**Abstract:** Top-down images play an important role in safety-critical settings such as autonomous navigation and aerial surveillance, where they provide holistic spatial information that front-view images cannot capture. Despite this, Vision Language Models (VLMs) are mostly trained and evaluated on front-view benchmarks, leaving their performance in the top-down setting poorly understood. Existing evaluations also overlook a unique property of top-down images: their physical meaning is preserved under rotation. In addition, conventional accuracy metrics can be misleading, since they are often inflated by hallucinations or "lucky guesses", which obscures a model's true reliability and its grounding in visual evidence. To address these issues, we introduce TDBench, a benchmark for top-down image understanding that includes 2000 curated questions for each rotation. We further propose RotationalEval (RE), which measures whether models provide consistent answers across four rotated views of the same scene, and we develop a reliability framework that separates genuine knowledge from chance. Finally, we conduct four case studies targeting underexplored real-world challenges. By combining rigorous evaluation with reliability metrics, TDBench not only benchmarks VLMs in top-down perception but also provides a new perspective on trustworthiness, guiding the development of more robust and grounded AI systems. Project homepage: this https URL
 
 
 ## 📢 Latest Updates
@@ -130,7 +137,7 @@ Top-down images are usually captured from a relatively high altitude, which may
 4. **Z-Axis Perception and Depth Understanding**
 - Assessing the depth reasoning from top-down images
 
-## 🤖 How to run TDBench
+## 🤖 Sample Usage: How to run TDBench
 
 TDBench is fully compatible with [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
 
@@ -169,7 +176,7 @@ python run.py --data tdbench_rot0 \
 ```
 To apply RotationalEval, simply run all rotations
 ```python
-python run.py --data tdbench_rot0 tdbench_rot90
+python run.py --data tdbench_rot0 tdbench_rot90 tdbench_rot180 tdbench_rot270 \
 --model <model_name> \
 --verbose \
 --work-dir <results_directory>
@@ -227,4 +234,4 @@ If you have any questions, please create an issue on this repository or contact
 mz2866@columbia.edu.
 
 ---
-[<img src="images/ICSL_Logo.png" width="500"/>](http://icsl.ee.columbia.edu/)
+[<img src="images/ICSL_Logo.png" width="500"/>](http://icsl.ee.columbia.edu/)
````