---
license: cc-by-4.0
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: question_type_id
    dtype: string
  - name: question_type_name
    dtype: string
  - name: figure_id
    list: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: instructions
    dtype: string
  - name: url
    dtype: string
  - name: extra_input_figure_ids
    list: string
  - name: extra_input_figure_bboxes
    sequence:
      sequence: int64
  - name: data_fact
    dtype: string
  - name: difficulty
    dtype: string
  - name: chart_type
    dtype: string
  splits:
  - name: text
    num_bytes: 25387749
    num_examples: 50920
  - name: visual_metaphor
    num_bytes: 407203
    num_examples: 462
  - name: visual_basic
    num_bytes: 7166829
    num_examples: 7475
  download_size: 10423864
  dataset_size: 32961781
configs:
- config_name: default
  data_files:
  - split: text
    path: data/text-*
  - split: visual_metaphor
    path: data/visual_metaphor-*
  - split: visual_basic
    path: data/visual_basic-*
---



# InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts

🤗[Dataset](https://huggingface.co/datasets/Jietson/InfoChartQA)

# Dataset 
You can find our dataset on Hugging Face: 🤗[InfoChartQA Dataset](https://huggingface.co/datasets/Jietson/InfoChartQA)

# Usage

Each question entry is arranged as follows. **Note that visual questions may come with extra input figures, which are cropped from the original figure. Their bounding boxes are given in "extra_input_figure_bboxes".**
```
{
    "question_id": id of the question,
    "question_type_id": question type id; used for evaluation only (e.g. 72 means "extreme" questions),
    "question_type_name": question type name (e.g. "extreme"),
    "figure_id": id of the figure,
    "question": question text,
    "answer": ground-truth answer,
    "instructions": instructions,
    "url": url of the input image,
    "extra_input_figure_ids": ids of the extra input figures,
    "extra_input_figure_bboxes": bboxes of the extra input figures, in [x, y, w, h] format, not normalized,
    "data_fact": data fact of the question (text-based questions only),
    "difficulty": difficulty level,
    "chart_type": chart type,
}
```

Each question is built by:

```
input_image: item["url"]  (download it first for models that don't support URL input)
extra_input_images: input_image cropped using item["extra_input_figure_bboxes"]
input_text: item["question"] + item["instructions"] (if any)
```

where ``item`` is an entry of the dataset.
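The cropping step above can be sketched with Pillow (an assumption; any imaging library works). `crop_bbox` is a hypothetical helper that converts the dataset's un-normalized `[x, y, w, h]` boxes into the `(left, upper, right, lower)` tuple Pillow's `Image.crop` expects:

```python
from PIL import Image

def crop_bbox(image: Image.Image, bbox) -> Image.Image:
    """Crop using an [x, y, w, h] box in pixel coordinates (not normalized)."""
    x, y, w, h = bbox
    return image.crop((x, y, x + w, y + h))

# Example with a blank 100x100 image:
img = Image.new("RGB", (100, 100))
patch = crop_bbox(img, [10, 20, 30, 40])
print(patch.size)  # → (30, 40)
```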


# Evaluate

Store your model's responses for evaluation as follows:

```python
# Example: run your model over the dataset and save its responses
import json

from tqdm import tqdm

def build_question(query):
    question = query['question']
    if "instructions" in query:
        question += query["instructions"]
    return question


# Run your model and save its answers; `ds` is the loaded dataset
# and `model` is your own model wrapper.

Responses = {}

for query in tqdm(ds):
    query_idx = query["question_id"]
    input_text = build_question(query)
    input_figure = query["url"]  # Pass a list of URLs for models that support URL input

    """
        Note that for models that do not support URL input, you may need to download the images first.
        For example, for a model like Qwen2.5-VL, download the image and pass its local path:
            input_figure = YOUR_LOCAL_IMAGE_PATH of query['figure_id']
        Moreover, for questions with extra figure input, crop the figure first, e.g.:
            extra_input_figures = [crop(input_figure, bbox) for bbox in query["extra_input_figure_bboxes"]]
    """

    # Replace with your model
    response = model.generate(input_text, input_figure)

    Responses[query_idx] = {
        "qtype": int(query["question_type_id"]),  # "question_type_id" is used for evaluation only!
        "answer": query["answer"],
        "question_id": query_idx,
        "response": response,
    }

with open("./model_response.json", "w", encoding="utf-8") as f:
    json.dump(Responses, f, indent=2, ensure_ascii=False)
```
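Once `model_response.json` is saved, you can sanity-check it with a simple scorer. The sketch below is only an illustration of consuming the saved file with a naive exact-match rule; it is not the benchmark's official evaluation, which may apply per-`question_type_id` matching rules:

```python
import json

def exact_match_accuracy(responses: dict) -> float:
    """Fraction of responses whose text exactly equals the ground-truth answer
    (case-insensitive, whitespace-stripped)."""
    if not responses:
        return 0.0
    correct = sum(
        r["response"].strip().lower() == r["answer"].strip().lower()
        for r in responses.values()
    )
    return correct / len(responses)

# Demo on two hand-made entries (hypothetical values):
demo = {
    "q1": {"answer": "42", "response": "42"},
    "q2": {"answer": "blue", "response": "red"},
}
print(exact_match_accuracy(demo))  # → 0.5

# Typical use: with open("./model_response.json", encoding="utf-8") as f:
#                  print(exact_match_accuracy(json.load(f)))
```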