---
license: other
license_name: physionet-credentialed-health-data-license
license_link: https://physionet.org/content/mimiciii/view-license/1.4/
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - medical
  - radiology
  - chest-x-ray
  - hard-negatives
  - mimic-cxr
pretty_name: MIMIC-CXR-Diff
size_categories:
  - 1M<n<10M
---

# MIMIC-CXR-Diff

Hard-negative VQA pairs mined from MIMIC-Ext-VQA, a large-scale medical VQA dataset built on MIMIC-CXR chest X-rays.

Each row pairs two visually similar images that share a question type but differ in their answers, making the pairs useful for evaluating and improving model robustness to subtle visual differences.

## Processing Pipeline

1. Normalize the MIMIC-Ext-VQA training split (290K samples → 260K after removing invalid entries).
2. Embed all images using BiomedCLIP.
3. Mine candidate pairs via approximate nearest-neighbor search (FAISS) on the image embeddings.
4. Filter pairs:
   - Drop pairs where both sides reference the exact same image (`image_similarity = 1.0`).
   - Keep only pairs where the question type matches (`content_type_a == content_type_b`).
   - Keep only pairs where the answers differ (after whitespace stripping).
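The mining and filtering steps above can be sketched as follows. This is a minimal illustration on toy data: it uses brute-force cosine similarity in place of FAISS, and the embeddings and sample records are made up.

```python
import numpy as np

# Toy L2-normalized image embeddings (the real pipeline uses BiomedCLIP vectors)
emb = np.array([
    [1.0, 0.0],
    [0.99, 0.141],   # near-duplicate view of image 0
    [0.0, 1.0],
], dtype=np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Toy per-image VQA records (hypothetical field names mirroring the filters)
samples = [
    {"content_type": "presence", "answer": "yes"},
    {"content_type": "presence", "answer": "no"},
    {"content_type": "attribute", "answer": "left"},
]

# Step 3: pairwise cosine similarity; at dataset scale this would be an
# approximate nearest-neighbor search with FAISS instead of a full matrix.
sim = emb @ emb.T

pairs = []
for a in range(len(samples)):
    for b in range(a + 1, len(samples)):
        s = float(sim[a, b])
        # Step 4 filters:
        if s >= 1.0:                                                  # same image
            continue
        if samples[a]["content_type"] != samples[b]["content_type"]:  # type mismatch
            continue
        if samples[a]["answer"].strip() == samples[b]["answer"].strip():  # same answer
            continue
        pairs.append((a, b, round(s, 4)))

print(pairs)  # only the near-duplicate pair (0, 1) with differing answers survives
```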

Result: 1,739,965 hard-negative pairs covering 109,415 unique images.

## Schema

| Column | Type | Description |
|---|---|---|
| `pair_id` | int | Unique sequential identifier (0-indexed) |
| `image_path_a` | string | Relative path to image A within the MIMIC-CXR directory |
| `image_path_b` | string | Relative path to image B within the MIMIC-CXR directory |
| `question_a` | string | VQA question for image A |
| `question_b` | string | VQA question for image B |
| `answer_a` | string | Ground-truth answer for image A |
| `answer_b` | string | Ground-truth answer for image B |
| `meta` | string (JSON) | Nested metadata (see below) |

### Meta Structure

```json
{
  "question_type": "presence",
  "image_similarity": 0.9833,
  "question_similarity": 0.9118,
  "image_a": {
    "image_id": "d0d24188-dda41b64-...",
    "subject_id": "18386740",
    "study_id": "56503182",
    "semantic_type": "verify",
    "template": "Is the ${object} showing indications of ${attribute}?",
    "template_program": "program_2",
    "template_arguments": {"object": {"0": "right lower lung zone"}, ...},
    "mimic_ext_vqa_idx": 7040
  },
  "image_b": { ... }
}
```
| Meta Field | Description |
|---|---|
| `question_type` | Shared question category (e.g., presence, attribute, abnormality) |
| `image_similarity` | BiomedCLIP cosine similarity between the two images |
| `question_similarity` | Sentence-embedding cosine similarity between the two questions |
| `image_a` / `image_b` | Per-image metadata |
| `.image_id` | DICOM-derived image identifier |
| `.subject_id` | Patient identifier in MIMIC-CXR |
| `.study_id` | Study identifier in MIMIC-CXR |
| `.semantic_type` | Question semantic type (verify, query, choose) |
| `.template` | Question-generation template |
| `.template_program` | Template program identifier |
| `.template_arguments` | Template slot-fill arguments |
| `.mimic_ext_vqa_idx` | Index into the original MIMIC-Ext-VQA `train.json` |
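The `meta` column makes it easy to subset pairs by question type or similarity. A sketch of such a filter on toy rows that mimic the schema (real rows come from `load_dataset`, and the threshold `0.97` is an arbitrary example):

```python
import json

# Toy rows mimicking the dataset schema; meta is stored as a JSON string
rows = [
    {"pair_id": 0, "answer_a": "yes", "answer_b": "no",
     "meta": json.dumps({"question_type": "presence", "image_similarity": 0.98})},
    {"pair_id": 1, "answer_a": "left", "answer_b": "right",
     "meta": json.dumps({"question_type": "attribute", "image_similarity": 0.95})},
]

def keep(row, min_sim=0.97, qtype="presence"):
    """Keep only pairs of the given question type above a similarity floor."""
    m = json.loads(row["meta"])
    return m["question_type"] == qtype and m["image_similarity"] >= min_sim

hard_presence = [r["pair_id"] for r in rows if keep(r)]
print(hard_presence)  # → [0]
```

With the real dataset, the same predicate can be passed to `ds.filter(keep)`.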

## Usage

```python
from datasets import load_dataset
import json

ds = load_dataset("mtybilly/MIMIC-CXR-Diff", split="train")

row = ds[0]
meta = json.loads(row["meta"])
print(row["question_a"], "→", row["answer_a"])
print(row["question_b"], "→", row["answer_b"])
print("Image similarity:", meta["image_similarity"])
```

Note: This dataset contains metadata only. Images must be obtained separately from MIMIC-CXR via PhysioNet (requires credentialed access). Image paths are relative to the MIMIC-CXR root directory.
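Resolving a row's images therefore means joining its relative paths onto a local copy of MIMIC-CXR. A minimal sketch, where the root directory and the example relative path are hypothetical placeholders:

```python
from pathlib import Path

# Hypothetical local root of the credentialed MIMIC-CXR download
MIMIC_CXR_ROOT = Path("/data/mimic-cxr")

def resolve(relative_path: str) -> Path:
    """Join a dataset-relative image path onto the local MIMIC-CXR root."""
    return MIMIC_CXR_ROOT / relative_path

# Placeholder path shaped like row["image_path_a"], not a real file
p = resolve("files/p10/p10000001/s50000001/example-image-id.jpg")
print(p)
# From here the file can be opened with any image library, e.g. PIL.Image.open(p)
```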