---
language:
- en
license: cc-by-sa-4.0
pretty_name: Misviz
size_categories:
- 1K<n<10K
task_categories:
- image-classification
task_ids:
- multi-label-classification
tags:
- data-visualization
- misinformation
- multimodal
- chart-understanding
annotations_creators:
- expert-annotated
---
# Dataset Card for Misviz
## Dataset Description
- **Homepage:** https://ukplab.github.io/arxiv2025-misviz/
- **Repository:** https://github.com/UKPLab/arxiv2025-misviz
- **Paper:** https://arxiv.org/abs/2508.21675
- **Point of Contact:** jonathan.tonglet@tu-darmstadt.de
## Dataset Summary
Misviz is a dataset of 2,604 real-world data visualizations collected from the web and manually annotated for misleading design practices.
The dataset is introduced in the arXiv preprint *Is this chart lying to me? Automating the detection of misleading visualizations*.
Each visualization may contain up to three misleading design violations from a taxonomy of 12 misleaders.
A misleader refers to a specific misleading visualization practice, such as a truncated axis, distorted proportions, or misleading annotations.
The following misleaders are included: misrepresentation, 3D, truncated axis, inappropriate use of pie chart, inconsistent binning size, discretized continuous variable, inconsistent tick intervals, dual axis, inappropriate use of line chart, inappropriate item order, inverted axis, and inappropriate axis range.
The dataset contains:
- The visualization image
- One or more chart type labels
- One or more misleader labels
- Optional bounding boxes localizing misleading regions
## Use Cases
- Multi-label classification: predicting which misleaders affect a visualization
- Misleader localization: predicting bounding boxes around misleading regions
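For the multi-label classification use case, the misleader labels of each visualization can be encoded as a multi-hot vector over the 12-misleader taxonomy. The sketch below is a minimal illustration using only the misleader names listed in the summary above; the example inputs are hypothetical, not actual dataset records.

```python
# Sketch: encoding misleader label lists as multi-hot vectors for
# multi-label classification. The 12 misleader names come from the
# dataset summary; the example inputs below are hypothetical.
MISLEADERS = [
    "misrepresentation", "3D", "truncated axis",
    "inappropriate use of pie chart", "inconsistent binning size",
    "discretized continuous variable", "inconsistent tick intervals",
    "dual axis", "inappropriate use of line chart",
    "inappropriate item order", "inverted axis",
    "inappropriate axis range",
]
INDEX = {name: i for i, name in enumerate(MISLEADERS)}

def to_multi_hot(misleaders):
    """Encode a list of misleader labels as a 12-dimensional 0/1 vector."""
    vec = [0] * len(MISLEADERS)
    for name in misleaders:
        vec[INDEX[name]] = 1
    return vec

# A visualization annotated with a truncated axis and a dual axis:
print(to_multi_hot(["truncated axis", "dual axis"]))
```

Since each visualization carries at most three misleaders, each vector has at most three non-zero entries.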
## Dataset Structure
### Data Fields
Each entry contains the following fields:
- `image`: the visualization image
- `chart_type`: a list of one or more chart types present in the visualization
- `misleader`: a list of misleading design violations affecting the visualization; each visualization may contain up to three misleaders from a taxonomy of 12
- `bbox`: optional bounding boxes identifying the regions where a misleader is present
### Data Splits
The dataset contains predefined split labels:
- train (= dev)
- validation
- test
Note that the train split is not a true training set, but a small dev set intended for retrieving few-shot demonstrations.
The preprint refers to it as the dev set; however, Hugging Face naming conventions require calling it the train split here.
To load the dataset, you will need to request access and then run the following script.
```python
from datasets import load_dataset

ds = load_dataset("UKPLab/misviz", token="your_huggingface_token")
```
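Once loaded, each entry exposes the fields described under Data Fields above. The snippet below sketches that per-entry structure with a plain dictionary so it runs without downloading the dataset; the field values are hypothetical illustrations, not actual records.

```python
# Sketch of the per-entry structure (hypothetical values; real entries
# come from load_dataset as shown above).
example = {
    "image": "path/to/visualization.png",  # the chart image
    "chart_type": ["bar chart"],           # one or more chart types
    "misleader": ["truncated axis"],       # up to three of the 12 misleaders
    "bbox": None,                          # optional bounding boxes
}

def has_misleader(entry, name):
    """Check whether an entry is annotated with a given misleader."""
    return name in entry["misleader"]

print(has_misleader(example, "truncated axis"))
```

A predicate like this can be used, for instance, to filter a split down to entries exhibiting one particular misleader.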
## Dataset Creation
### Curation Rationale
Misleading visualizations can distort public understanding of data and contribute to misinformation.
While prior work has explored automating the detection of misleading charts, the associated datasets were either small or not publicly accessible.
Misviz was created to provide the first large real-world benchmark for the automated detection of misleading visualizations.
### Data Collection
The visualizations were collected from four sources:
- The corpus created by [Lo et al. (2022)](https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14559) to construct their taxonomy of misleading visualizations
- The corpus created by [Lan and Liu (2024)](https://ieeexplore.ieee.org/document/10670488/) to construct their taxonomy of misleading visualizations
- The subreddit r/dataisugly, an online community for sharing and discussing examples of misleading visualizations
- The subreddit r/dataisbeautiful, an online community for sharing and discussing examples of non-misleading visualizations
### Data Annotation
- The first two corpora were already annotated with misleaders in prior work.
- Images from the subreddits were annotated by crowdworkers on Prolific.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset supports research on detecting misleading visual content, which can help improve chart literacy, counter visual misinformation, and strengthen trust in data visualizations.
### Known Limitations
- The Hugging Face dataset format differs slightly from the format used in the main repository. To reproduce the experiments from the paper, we recommend using the `misviz.json` file available in the GitHub repository.
- The dataset contains 2,604 visualizations, which is a moderate scale.
- The dataset does not cover all types of misleaders proposed in existing taxonomies.
### Licensing Information
The dataset annotations are released under a CC-BY-SA 4.0 license. The dataset creators do not hold copyright for the visualization images.
The dataset should be used only for academic research.
### Citation Information
If you find this dataset useful, please cite our paper as follows:
```bibtex
@article{tonglet2025misviz,
  title={Is this chart lying to me? Automating the detection of misleading visualizations},
  author={Tonglet, Jonathan and Zimny, Jan and Tuytelaars, Tinne and Gurevych, Iryna},
  journal={arXiv preprint arXiv:2508.21675},
  year={2025},
  url={https://arxiv.org/abs/2508.21675},
  doi={10.48550/arXiv.2508.21675}
}
```
### Dataset Card Authors
Jonathan Tonglet
### Dataset Card Contact
jonathan.tonglet@tu-darmstadt.de