You can find our dataset on huggingface: 🤗[InfoChartQA Dataset](https://huggi
Each question entry is arranged as:

```
{
    "question_id": id of the question,
    "qtype": type of the question, for example: "rank" questions,
    "figure_path": local path of the image if you download the image,
    ...
    "parent": the original image of the cropped image,
    "difficulty": difficulty level,
    "chart_type": type of the chart,
}
```
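Loaded into Python, an entry behaves like a plain dictionary. A minimal sketch with made-up values (every value below is illustrative, not taken from the actual dataset):

```python
# A made-up entry mirroring the fields listed above; all values are illustrative.
entry = {
    "question_id": 61,
    "qtype": "rank",                   # e.g. a "rank" question
    "figure_path": "figures/61.png",   # local path if you downloaded the image
    "parent": "figures/61_full.png",   # original image of the cropped image
    "difficulty": "easy",
    "chart_type": "bar",
}

# Fields are read like any Python dict.
print(entry["qtype"], entry["chart_type"])
```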
Each question is built on:

```
image_input: item["url"] (may need to download for models that don't support URL input)
text_input: item["question"] + item["instructions"] (if any)
```

where `item` is an entry of the dataset.
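The two inputs can be assembled with a small helper. `build_inputs` below is a hypothetical name, not part of any released tooling; it only follows the recipe above:

```python
def build_inputs(item):
    """Assemble the model inputs from one dataset entry.
    (Hypothetical helper; field names follow the entry layout above.)"""
    image_input = item["url"]  # may need downloading for models without URL support
    text_input = item["question"]
    if item.get("instructions"):  # instructions are optional
        text_input += "\n" + item["instructions"]
    return image_input, text_input
```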
# Evaluate

You should store and evaluate the model's response as:
```
for query in tqdm(queries):
    question_text = build_question(query)
    chart_figure = query["url"]  # This should be a list of URLs
    """
    Note that for models that do not support URL input, you may need to download the images first.
    For example, for a model like Qwen2.5-VL, you may need to download the image first and pass
    the local image path to the model, like: figure_path = query['figure_path'] (your local image path).
    """
    # Replace with your model
    response = model.generate(question_text, chart_figure)
```
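For models that cannot take URLs directly, one way to obtain a local path is a small caching downloader. `fetch_image` and the `figures/` directory below are assumptions for illustration, not part of the evaluation script:

```python
import os
import urllib.request

def fetch_image(url, cache_dir="figures"):
    """Download a chart image once and return its local path.
    (Illustrative helper; the filename is taken from the last URL segment.)"""
    os.makedirs(cache_dir, exist_ok=True)
    local_path = os.path.join(cache_dir, url.rstrip("/").split("/")[-1])
    if not os.path.exists(local_path):  # reuse a previously downloaded copy
        urllib.request.urlretrieve(url, local_path)
    return local_path
```

The returned path can then play the role of `query['figure_path']` in the snippet above.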