---
license: apache-2.0
---

What Lies Beneath: A Call for Distribution-based Visual Question & Answer Datasets

Publication: TBD
GitHub Repo: TBD

This is a histogram-based dataset for visual question answering (VQA) with both humans and large multimodal models (LMMs).

The data contains synthetically generated single-panel histogram images; the underlying data used to create each histogram; bounding-box data for titles, axis and tick labels, and data marks; and VQA question-answer pairs. The subset of data presented in the paper (the example_hists/ folder) includes both human (two annotators) and LMM (gpt-5-nano and gpt-5-mini) annotations.

See the GitHub repository (link TBD) for the code used to create and parse the files described below.

Directory Structure

An overview of the directory structure is as follows (a download example appears after the list):

  • example_hists/ -- contains images and JSON files for a small (80 images), visually uniform set of histogram data, with several questions annotated by both humans and LMMs
  • example_hists_larger/ -- a larger set (500 images) of visually uniform histogram images
  • example_hists_complex/ -- the most complex set (100 images): histograms with a variety of distributions, shapes, colors, etc.
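
Since this card is hosted on the Hugging Face Hub, one way to fetch all three subsets locally is huggingface_hub's snapshot_download. The sketch below assumes that workflow; the repo id is a placeholder, so substitute this dataset's actual id from the page URL.

```python
from huggingface_hub import snapshot_download

# Sketch: download the full dataset (all three subset folders) locally.
# "user/dataset-name" is a placeholder -- substitute this dataset's id.
local_dir = snapshot_download(repo_id="user/dataset-name", repo_type="dataset")
print("Dataset downloaded to:", local_dir)
```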

Directory structure of the paper dataset (example_hists/):

  • LLM_outputs/ -- contains outputs from various trials using GPT-5 models
  • imgs/ -- stores all images (also in imgs.zip file)
  • jsons/ -- stores JSON for bounding boxes, the data used to create each image, and VQA data (see the loading sketch after this list)
  • human_and_llm_annotated_data.csv -- contains two human annotations and two LMM annotations (gpt-5-nano, gpt-5-mini) for a subset of questions
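
As a minimal sketch of how these pieces fit together, the snippet below pairs each histogram image with its JSON record. The one-JSON-per-image naming scheme, the .png extension, and the key names used are assumptions for illustration; see the GitHub repo for the actual creation and parsing code.

```python
import json
from pathlib import Path

# Sketch: pair each histogram image with its JSON record.
# NOTE: the one-JSON-per-image naming and the key names below are
# assumptions for illustration; see the GitHub repo for the real code.
root = Path("example_hists")

for json_path in sorted((root / "jsons").glob("*.json")):
    img_path = root / "imgs" / (json_path.stem + ".png")  # assumed extension
    with json_path.open() as f:
        record = json.load(f)

    # Hypothetical keys: bounding boxes and VQA question-answer pairs.
    bboxes = record.get("bounding_boxes", [])
    vqa_pairs = record.get("vqa", [])
    print(img_path.name, len(bboxes), "boxes,", len(vqa_pairs), "QA pairs")
```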

Human and LMM Annotations
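
The human_and_llm_annotated_data.csv file collects, for a subset of questions, two human annotations alongside two LMM annotations (gpt-5-nano, gpt-5-mini). A quick way to inspect agreement is to load the CSV with pandas, as sketched below; the answer column names are hypothetical placeholders, so check the actual header first.

```python
import pandas as pd

# Sketch: inspect human vs. LMM answers side by side.
# The answer column names below are hypothetical; print the real
# header first and adjust accordingly.
df = pd.read_csv("example_hists/human_and_llm_annotated_data.csv")
print(df.columns.tolist())  # discover the actual column names

answer_cols = ["human_1", "human_2", "gpt-5-nano", "gpt-5-mini"]  # assumed
if all(c in df.columns for c in answer_cols):
    # Fraction of questions where the two human annotators agree exactly.
    human_agreement = (df["human_1"] == df["human_2"]).mean()
    print(f"Human-human exact agreement: {human_agreement:.2%}")
```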

Citation information

If you use this work, please cite:

TBD