---
dataset_info:
  features:
    - name: UID
      dtype: string
    - name: Question
      dtype: string
    - name: Answer
      dtype: string
    - name: Type
      dtype: string
    - name: PatientID
      dtype: string
    - name: Age
      dtype: int64
    - name: HeartSize
      dtype: int64
    - name: PulmonaryCongestion
      dtype: int64
    - name: PleuralEffusion_Right
      dtype: int64
    - name: PleuralEffusion_Left
      dtype: int64
    - name: PulmonaryOpacities_Right
      dtype: int64
    - name: PulmonaryOpacities_Left
      dtype: int64
    - name: Atelectasis_Right
      dtype: int64
    - name: Atelectasis_Left
      dtype: int64
    - name: Split
      dtype: string
    - name: PhysicianID
      dtype: string
    - name: StudyDate
      dtype: string
    - name: Sex
      dtype: string
    - name: Image
      dtype: image
  splits:
    - name: train
      num_bytes: 5622656901
      num_examples: 20288
    - name: val
      num_bytes: 1462315894
      num_examples: 5120
    - name: test
      num_bytes: 1783934753
      num_examples: 6592
  download_size: 363809891
  dataset_size: 8868907548
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: val
        path: data/val-*
      - split: test
        path: data/test-*
---

# TAIX-VQA Dataset

This dataset was used for the evaluations in "Evaluating Reasoning Faithfulness in Medical Vision-Language Models using Multimodal Perturbations". It contains 1,000 distinct chest X-rays from the TAIX-RAY dataset with structured annotations from human radiologists. For each image, we added 32 distinct, clinically realistic questions together with expert-annotated answers.
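As a quick consistency check, the split sizes declared in the metadata above sum to exactly one VQA pair per image-question combination:

```python
# Split sizes as declared in the dataset card metadata.
splits = {"train": 20288, "val": 5120, "test": 6592}

num_images = 1000         # distinct chest X-rays from TAIX-RAY
questions_per_image = 32  # clinically realistic questions per image

total = sum(splits.values())
assert total == num_images * questions_per_image  # 32,000 VQA pairs
print(total)  # → 32000
```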

## Dataset Details

For details, please check the paper, project page, and GitHub repository.
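A minimal loading sketch using the 🤗 Datasets library; note that the hub repository id used below is an assumption based on this card, so replace it with the dataset's actual hub path if it differs:

```python
def load_taix_vqa(split: str = "train"):
    """Load one TAIX-VQA split from the Hugging Face Hub.

    NOTE: the repository id below is an assumption; point it at the
    actual hub path of this dataset.
    """
    if split not in {"train", "val", "test"}:
        raise ValueError(f"unknown split: {split!r}")
    from datasets import load_dataset  # pip install datasets
    return load_dataset("jomoll/TAIX-VQA", split=split)
```

Each example then exposes the columns listed in the metadata above, e.g. `example["Question"]`, `example["Answer"]`, and the decoded `example["Image"]`.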

## ✏️ Citation

If you find this work useful, please cite:

```bibtex
@article{evaluating-2025,
  title={Evaluating Reasoning Faithfulness in Medical Vision-Language Models using Multimodal Perturbations},
  author={Moll, Johannes and Graf, Markus and Lemke, Tristan and Lenhart, Nicolas and Truhn, Daniel and Delbrouck, Jean-Benoit and Pan, Jiazhen and Rueckert, Daniel and Adams, Lisa C. and Bressem, Keno K.},
  journal={arXiv preprint arXiv:2510.11196},
  url={https://arxiv.org/abs/2510.11196},
  year={2025}
}
```