Update README.md
README.md (changed):

````diff
@@ -5,18 +5,28 @@ size_categories:
 - 1K<n<10K
 viewer: false
 license: cc-by-nc-sa-4.0
+paper: https://arxiv.org/abs/2603.22368
+repository: https://github.com/Harsh-Lalai/Evaluating-Vision-Language-Models-on-Misleading-Data-Visualizations
+point_of_contact: lalaiharsh26@gmail.com
 ---
+
+## Dataset Description
+
+- **Repository:** https://github.com/Harsh-Lalai/Evaluating-Vision-Language-Models-on-Misleading-Data-Visualizations
+- **Paper:** https://arxiv.org/abs/2603.22368
+- **Point of Contact:** lalaiharsh26@gmail.com
+
 # Evaluating Vision-Language Models on Misleading Data Visualizations (Dataset)
 
 ## Overview
 
 This dataset accompanies the paper:
 
-“When Visuals Aren’t the Problem: Evaluating Vision-Language Models on Misleading Data Visualizations.”
+“[When Visuals Aren’t the Problem: Evaluating Vision-Language Models on Misleading Data Visualizations.](https://arxiv.org/abs/2603.22368)”
 
-
+MisVisBench is designed to evaluate whether Vision-Language Models (VLMs) can detect misleading information in **data visualization-caption pairs**, and whether they can correctly attribute the source of misleadingness to appropriate error types: caption-level reasoning errors and visualization design errors.
 
-Unlike prior benchmarks that primarily focus on chart understanding or visual distortions,
+Unlike prior benchmarks that primarily focus on chart understanding or visual distortions, MisVisBench enables **fine-grained analysis of misleadingness arising from both textual reasoning and visualization design choices**.
 
 ---
 
@@ -324,9 +334,10 @@ For any issues related to the dataset, feel free to reach out to lalaiharsh26@gmail.com
 # Citation
 
 ```
-@article{
-title={When Visuals Aren
+@article{lalai2026visuals,
+title={When Visuals Aren't the Problem: Evaluating Vision-Language Models on Misleading Data Visualizations},
 author={Lalai, Harsh Nishant and Shah, Raj Sanjay and Pfister, Hanspeter and Varma, Sashank and Guo, Grace},
+journal={arXiv preprint arXiv:2603.22368},
 year={2026}
 }
 ```
````