license: cc-by-4.0
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: question_type_id
      dtype: string
    - name: question_type_name
      dtype: string
    - name: figure_id
      list: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: instructions
      dtype: string
    - name: url
      dtype: string
    - name: extra_input_figure_ids
      list: string
    - name: extra_input_figure_bboxes
      sequence:
        sequence: int64
    - name: data_fact
      dtype: string
    - name: difficulty
      dtype: string
    - name: chart_type
      dtype: string
  splits:
    - name: text
      num_bytes: 25387749
      num_examples: 50920
    - name: visual_metaphor
      num_bytes: 407203
      num_examples: 462
    - name: visual_basic
      num_bytes: 7166829
      num_examples: 7475
  download_size: 10423864
  dataset_size: 32961781
configs:
  - config_name: default
    data_files:
      - split: text
        path: data/text-*
      - split: visual_metaphor
        path: data/visual_metaphor-*
      - split: visual_basic
        path: data/visual_basic-*

# InfoChartQA: Benchmark for Multimodal Question Answering on Infographic Charts


## Dataset

You can find our dataset on Hugging Face: 🤗InfoChartQA Dataset

## Usage

Each question entry is arranged as follows. Note that visual questions may include extra input figures, which are cropped from the original figure; their bounding boxes are given in `extra_input_figure_bboxes`.

```
{
    "question_id": id of the question,
    "question_type_name": question type name, e.g. "extreme" questions,
    "question_type_id": question type id, used for evaluation only (e.g. 72 means "extreme" questions),
    "figure_id": id of the figure,
    "question": question text,
    "answer": ground truth answer,
    "instructions": instructions,
    "url": url of the input image,
    "extra_input_figure_ids": ids of the extra input figures,
    "extra_input_figure_bboxes": bboxes of the extra input figures, in [x, y, w, h] format, unnormalized,
    "data_fact": data fact of the question, only for text-based questions,
    "difficulty": difficulty level,
    "chart_type": chart type,
}
```

Each question is built from an entry `item` of the dataset:

- `input_image`: `item["url"]` (you may need to download the image for models that don't support URL input)
- `extra_input_images`: crops of `input_image` using `item["extra_input_figure_bboxes"]`
- `input_text`: `item["question"] + item["instructions"]` (if any)
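The steps above can be sketched as plain helper functions (the function names here are illustrative, not part of the dataset). Note that Pillow's `Image.crop` expects a `(left, upper, right, lower)` box, so the dataset's `[x, y, w, h]` bboxes need converting first:

```python
def bbox_to_box(bbox):
    """Convert an unnormalized [x, y, w, h] bbox to a (left, upper, right, lower) box."""
    x, y, w, h = bbox
    return (x, y, x + w, y + h)


def build_inputs(item):
    """Assemble the text input, image URL, and crop boxes for one dataset entry."""
    # Concatenate question and instructions (if any), as described above.
    input_text = item["question"] + (item.get("instructions") or "")
    # Crop boxes for any extra input figures.
    crop_boxes = [bbox_to_box(b) for b in item.get("extra_input_figure_bboxes") or []]
    return input_text, item["url"], crop_boxes
```

With Pillow, the extra figures would then be obtained as `img.crop(box)` for each returned box.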

## Evaluation

Store and evaluate the model's responses as follows:

```python
# Example code for evaluation
import json

from tqdm import tqdm


def build_question(query):
    question = query["question"]
    if query.get("instructions"):
        question += query["instructions"]
    return question


# Run your model and save your answers
Responses = {}

for query in tqdm(ds):
    query_idx = query["question_id"]
    input_text = build_question(query)
    input_figure = query["url"]  # should be a list of URLs for models that support URL input

    # Note that for models that do not support URL input, you may need to download
    # the images first. For example, for a model like Qwen2.5-VL, download the image
    # and pass the local path to the model, e.g.:
    #     input_figure = YOUR_LOCAL_IMAGE_PATH of query["figure_id"]
    # Moreover, for questions with extra figure input, crop the figure first, e.g.:
    #     extra_input_figures = [crop(input_figure, bbox) for bbox in query["extra_input_figure_bboxes"]]

    # Replace with your model
    response = model.generate(input_text, input_figure)

    Responses[query_idx] = {
        "qtype": int(query["question_type_id"]),  # "question_type_id" is used for evaluation only!
        "answer": query["answer"],
        "question_id": query_idx,
        "response": response,
    }

with open("./model_response.json", "w", encoding="utf-8") as f:
    json.dump(Responses, f, indent=2, ensure_ascii=False)
```