---
license: apache-2.0
---
# What Lies Beneath: A Call for Distribution-based Visual Question & Answer Datasets
Publication: TBD

This is a histogram-based dataset for visual question answering (VQA) with humans and large multimodal models (LMMs).

The dataset contains synthetically generated single-panel histogram images; bounding box data for titles, axis and tick labels, and data marks; and VQA question-answer pairs. The subset of data presented in the paper (the `example_hists/` folder) includes annotations from both humans (two annotators) and an LMM (ChatGPT-5-nano).
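
A minimal sketch of pairing one image with its annotation is shown below. The file naming (a JSON annotation sharing the image's base name) and the image extension are assumptions made for illustration; the actual layout and JSON schema should be checked against the files in the repository.

```python
# Sketch: load one histogram image and its JSON annotation.
# Assumptions: images are PNGs and each has a same-named .json file;
# adjust paths/extensions to match the actual files.
import json
from pathlib import Path

from PIL import Image  # pip install pillow

data_dir = Path("example_hists")           # local copy of the folder
img_path = next(data_dir.glob("*.png"))    # first image in the folder
ann_path = img_path.with_suffix(".json")   # assumed annotation file name

image = Image.open(img_path)
with open(ann_path) as f:
    annotation = json.load(f)

print(image.size)
# Inspect the top-level fields actually provided (e.g., bounding boxes
# for titles/labels/marks and the question-answer pairs).
print(list(annotation.keys()))
```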
An overview of the [directory structure](https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/tree/main) is as follows (a short download sketch appears after the list):
* `example_hists/` -- images and JSON annotations for a small (80 images), visually uniform set of histograms, with several questions annotated by both human annotators and an LMM
* `example_hists_larger/` -- a larger (500 images) set of uniform histogram images, with bounding boxes, questions, and answers
* `example_hists_complex/` -- the most visually complex set (100 images): histograms with a variety of distributions, shapes, colors, etc., plus bounding boxes, questions, and answers
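
To fetch a local copy of these folders, one option is `snapshot_download` from the `huggingface_hub` library; the repo id below is taken from the directory-structure link above, and the folder filter is optional.

```python
# Sketch: download the dataset files from the Hugging Face Hub.
# allow_patterns restricts the download to one folder; drop it to fetch everything.
from huggingface_hub import snapshot_download  # pip install huggingface_hub

local_dir = snapshot_download(
    repo_id="ReadingTimeMachine/visual_qa_histograms",
    repo_type="dataset",
    allow_patterns=["example_hists/*"],
)
print(local_dir)  # path of the local snapshot containing example_hists/
```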