---
license: apache-2.0
---

# What Lies Beneath: A Call for Distribution-based Visual Question & Answer Datasets

### Publication: TBD (linked on publication)
### GitHub Repo: TBD (linked on publication)

This is a histogram-based dataset for visual question and answer (VQA) with humans and large language/multimodal models (LMMs).

The data contains synthetically generated single-panel histogram images, the data used to create the histograms, bounding-box data for titles, axis and tick labels, and data marks, and VQA question-answer pairs. The subset of data presented in the paper (`example_hists/` folder) includes both human (two annotators) and LMM (gpt-5-nano, gpt-5-mini) annotations.

See the GitHub link for the code used to create and parse the following files.

## Directory Structure

An overview of the [directory structure](https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/tree/main) is as follows:
- `example_hists/` -- contains images and JSON for a small (80 images), visually uniform set of histogram data, with several questions annotated by both humans and LMMs
- `example_hists_larger/` -- a larger (500 images) dataset of uniform histogram images
- `example_hists_complex/` -- the largest (1000 images) dataset of histograms with a variety of distributions, shapes, colors, etc.

The paper dataset (`example_hists/`) has the following [directory structure](https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/tree/main/example_hists); a minimal loading sketch follows the list:
- `LLM_outputs/` -- contains outputs from various trials using ChatGPT-5
- `imgs/` -- stores all images (also in the `imgs.zip` file)
- `jsons/` -- stores JSON for bounding boxes, the data used to create the images, and VQA data
- `human_and_llm_annotated_data.csv` -- contains two human annotations and two LMM annotations (gpt-5-nano, gpt-5-mini) for a subset of questions
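
To make the layout concrete, here is a minimal Python sketch for iterating over the paper dataset once it has been downloaded locally. The PNG extension and the shared image/JSON file stems are assumptions here, not guarantees; check the repo for the exact conventions.

```python
import json
from pathlib import Path

import pandas as pd
from PIL import Image

# Assumes the dataset was downloaded locally, e.g. via
# huggingface_hub.snapshot_download("ReadingTimeMachine/visual_qa_histograms",
# repo_type="dataset").
root = Path("visual_qa_histograms/example_hists")

# The human and LMM annotation table described above.
annotations = pd.read_csv(root / "human_and_llm_annotated_data.csv")
print(annotations.columns.tolist())

# Pair each image with its JSON record (bounding boxes, generating data,
# VQA pairs). The *.png extension and shared file stems are assumptions.
for img_path in sorted((root / "imgs").glob("*.png")):
    json_path = root / "jsons" / (img_path.stem + ".json")
    if not json_path.exists():
        continue
    image = Image.open(img_path)
    with open(json_path) as f:
        record = json.load(f)
    print(img_path.name, image.size, list(record.keys()))
```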

## Human and LMM Annotations

The questions that have annotations in `human_and_llm_annotated_data.csv` are:
1. "What is the median value of the data in this figure panel?"
2. "How many gaussians were used to generate the data for the plot in the figure panel?"

For the second question, the constraint "Please choose an integer number from 1 to 5." was added to the `format` part of the LMM prompt to mimic the background knowledge of the human annotators.

The code for the LMM annotations can be found in our GitHub repo linked at the top of this page.

Human annotations were performed with the [Zooniverse](https://www.zooniverse.org/) citizen science platform. For the number of gaussians, the humans were prompted to enter a number:

<img src="https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/resolve/main/docs/ngaussians.png" alt="Zooniverse interface showing a histogram image with the prompt 'How many gaussians were used to make the underlying distribution?'">

For the median, the humans were first prompted to input the median as a number:
<img src="https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/resolve/main/docs/median_number.png" alt="Zooniverse interface showing a histogram image with the prompt 'What is the median of the underlying distribution (as a number)?'">

The humans were then prompted to draw the median with a line tool:
<img src="https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/resolve/main/docs/median_draw.png" alt="Zooniverse interface showing a histogram image with the prompt 'What is the median of the underlying distribution (draw a line)?'">

The human-drawn annotations were found to be more accurate than the typed ones.
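
How a drawn line becomes a number is not spelled out here, but the tick-label bounding boxes in the JSON make a pixel-to-data conversion straightforward in principle. The sketch below is purely illustrative: the `pixel_to_data` helper and its `px`/`value` fields are hypothetical placeholders, not the dataset's actual schema.

```python
# Hypothetical sketch: convert a human-drawn median line (pixel x-position)
# into a data value by interpolating between two x-axis tick labels, assuming
# each tick's pixel center ('px') and numeric value ('value') are recoverable
# from the bounding-box JSON. Field names here are illustrative only.
def pixel_to_data(x_pixel: float, tick_a: dict, tick_b: dict) -> float:
    """Linearly interpolate between two ticks given as {'px': ..., 'value': ...}."""
    frac = (x_pixel - tick_a["px"]) / (tick_b["px"] - tick_a["px"])
    return tick_a["value"] + frac * (tick_b["value"] - tick_a["value"])

# e.g., ticks at pixel 100 -> 0.0 and pixel 300 -> 10.0; a line drawn at x=190
print(pixel_to_data(190.0, {"px": 100, "value": 0.0}, {"px": 300, "value": 10.0}))
# -> 4.5
```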

## Question formatting

Questions are stored both whole and broken into parts, following the structure from the [Google Prompting Guide](https://www.kaggle.com/whitepaper-prompt-engineering):

* persona: the role the LMM is taking (e.g., "You are a helpful assistant that analyzes images.")
* context: the context in which the question is answered (often blank, but an example could be "Please choose from the following list of options...")
* question: the simplest form of the question (e.g., "How many lines are on this plot?")
* format: the output format of the answer (e.g., "Please format your answer as a json snippet as in `{'nbars': ''}`, where the number of bars is an integer stored in the empty string in the snippet.")

These definitions are not rigorous; users should feel welcome to mix and match at will. A short sketch of assembling a full prompt from these parts follows.
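
As an illustration, here is a minimal sketch of concatenating the four parts into a single prompt string, using example text from this page; the `assemble_prompt` helper and its key names are illustrative, not part of the dataset or its code.

```python
# Minimal sketch of assembling a full prompt from the four parts above.
# The key names (persona/context/question/format) mirror this README's list;
# the exact keys in the dataset JSONs may differ -- check the files.
def assemble_prompt(parts: dict) -> str:
    order = ["persona", "context", "question", "format"]
    # Join the non-empty parts in order, skipping blank fields like context.
    return " ".join(parts[key] for key in order if parts.get(key))

prompt = assemble_prompt({
    "persona": "You are a helpful assistant that analyzes images.",
    "context": "",  # often blank
    "question": "How many gaussians were used to generate the data for the plot in the figure panel?",
    "format": "Please choose an integer number from 1 to 5.",
})
print(prompt)
```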

## Citation information

If you use this work, please cite:
```
TBD
```