---
license: apache-2.0
---
What Lies Beneath: A Call for Distribution-based Visual Question & Answer Datasets
Publication: TBD
GitHub Repo: TBD
This is a histogram-based dataset for visual question answering (VQA) with humans and large language/multimodal models (LMMs).
The data contains synthetically generated single-panel histogram images, the data used to create each histogram, bounding box data for titles, axis and tick labels, and data marks, and VQA question-answer pairs. The subset of data presented in the paper (the example_hists/ folder) includes both human (two annotators) and LMM (GPT-5-nano and GPT-5-mini) annotations.
See the GitHub link above for the code used to create and parse the files described below.
Directory Structure
An overview of the directory structure is as follows:
example_hists/ -- contains images and JSON for a small (80 images), visually uniform set of histogram data, with several questions annotated by both humans and LMMs
example_hists_larger/ -- a larger (500 images) dataset of uniform histogram images
example_hists_complex/ -- a more complex dataset (100 images) of histograms with a variety of distributions, shapes, colors, etc.
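As a quick orientation, here is a minimal sketch that walks the three subsets and counts their images. It assumes the larger and complex subsets share the imgs/ subfolder layout described for the paper dataset below, and that images are PNGs; neither is confirmed here, so adjust the paths and extension to match what you download.

```python
from pathlib import Path

# Count the images in each subset. The imgs/ subfolder and the .png
# extension are assumptions based on the paper-dataset layout; check
# the actual folder contents before relying on this.
for subset in ("example_hists", "example_hists_larger", "example_hists_complex"):
    imgs = list(Path(subset, "imgs").glob("*.png"))
    print(f"{subset}: {len(imgs)} images")
```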
Paper-dataset (example_hists/) directory structure:
LLM_outputs/ -- contains outputs from various trials using ChatGPT-5
imgs/ -- stores all images (also in the imgs.zip file)
jsons/ -- stores JSON for bounding boxes, data used to create the images, and VQA data
human_and_llm_annotated_data.csv -- contains two human annotations and two LMM annotations (gpt-5-nano, gpt-5-mini) for a subset of questions
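The sketch below shows one way to load the annotation CSV and pair an image with its JSON record. The CSV column names, the JSON keys, and the image-to-JSON naming convention are assumptions for illustration; the parsing code in the GitHub repo is the authoritative reference.

```python
import json
from pathlib import Path

import pandas as pd

root = Path("example_hists")

# Load the human and LMM annotations; print the columns to discover
# the actual schema rather than assuming it.
annotations = pd.read_csv(root / "human_and_llm_annotated_data.csv")
print(annotations.columns.tolist())

# Pair the first image with its JSON record (bounding boxes, source
# data, VQA pairs). Matching on the image stem is an assumption.
img_path = sorted((root / "imgs").glob("*.png"))[0]
json_path = root / "jsons" / f"{img_path.stem}.json"
if json_path.exists():
    with json_path.open() as f:
        record = json.load(f)
    print(img_path.name, list(record.keys()))
```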
Human and LMM Annotations
Citation information
If you use this work, please cite:
TBD