---
pretty_name: MultiCaRe Images
license: cc-by-4.0
task_categories:
- image-classification
- image-to-text
language:
- en
size_categories:
- 100K<n<1M
---
# MultiCaRe: Open-Source Clinical Case Dataset
MultiCaRe is an open-source, multimodal clinical case dataset built from PubMed Central Open Access (OA) case report articles. It aggregates de-identified, open-access case narratives, figure images, captions, and rich article metadata across diverse specialties (radiology, pathology, surgery, ophthalmology, etc.). The data is normalized so that images, cases, and articles can be joined via stable IDs.
- Source and process: OA case reports were collected from PMC; article metadata and abstracts were parsed; figures were downloaded and split into subimages when needed; captions were aligned; and image labels were curated from a hierarchical medical taxonomy (>140 classes). The dataset maps every image to its case text and article metadata, enabling powerful cross-modal workflows.
- Scale (v2.0): 85k+ OA case reports, 110k+ patients, and 160k+ figures/subimages, authored by hundreds of thousands of clinicians and researchers.
- Tasks enabled: image classification (multiclass/multilabel), image-text retrieval, caption grounding, VQA/doc-QA, multimodal modeling, and text-only tasks (case narrative classification, retrieval, summarization).
- Citation: An Open-Source Clinical Case Dataset for Medical Image Classification and Multimodal AI Applications (MDPI DATA journal).
- Paper: https://www.mdpi.com/2306-5729/10/8/123
- Zenodo (v2.0): https://zenodo.org/records/13936721
This repository hosts the per-image MultiCaRe dataset: the actual images, captions, labels, and core metadata. Images are stored inside the dataset shards, so you can load and use them directly.
## Highlights
- 161k+ images across radiology, pathology, endoscopy, medical photographs, ophthalmic imaging, electrography, and charts.
- Supervised multilabel annotations (89-class reduced taxonomy) and optional semi-supervised labels.
- Stable join keys to link with cases and articles datasets.
## Schema
- file_id: unique row ID for the processed image file
- image: datasets.Image (PIL-compatible)
- file: processed image filename
- main_image: original figure ID (group identifier for subimages)
- image_component: subimage reference (e.g. undivided, a, b, …)
- caption: figure caption (full or segment)
- labels: list of labels for supervised training (strings)
- semi_labels: additional labels from the full taxonomy (strings; sparse)
- image_type, image_subtype, radiology_region, radiology_region_granular, radiology_view: multiclass attributes
- patient_id: case identifier (join to cases.case_id)
- license: per-article OA license
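For multilabel training, the string `labels` field has to be turned into fixed-length vectors. A minimal sketch in plain Python, using toy rows that stand in for real dataset examples (the field names follow the schema above; the label strings are invented for illustration):

```python
# Toy rows standing in for dataset examples (assumed schema: file_id + labels).
rows = [
    {"file_id": "a1", "labels": ["radiology", "ct", "chest"]},
    {"file_id": "a2", "labels": ["pathology", "stain"]},
]

# 1. Collect the vocabulary of all labels seen, in a stable order.
vocab = sorted({lab for row in rows for lab in row["labels"]})
index = {lab: i for i, lab in enumerate(vocab)}

# 2. Encode each row's labels as a multi-hot vector over that vocabulary.
def multi_hot(labels):
    vec = [0] * len(vocab)
    for lab in labels:
        vec[index[lab]] = 1
    return vec

vectors = [multi_hot(row["labels"]) for row in rows]
```

On the real dataset you would build `vocab` from the full train split (or the published 89-class taxonomy) and apply `multi_hot` via `ds.map`.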
## Quick start

```python
from datasets import load_dataset

ds = load_dataset("openmed-community/multicare-images", split="train")

img = ds[0]["image"]        # PIL.Image.Image
caption = ds[0]["caption"]
labels = ds[0]["labels"]    # list[str]
img.show()
```
## Join examples

```python
from datasets import load_dataset

imgs = load_dataset("openmed-community/multicare-images", split="train")
cases = load_dataset("openmed-community/multicare-cases", split="train")
articles = load_dataset("openmed-community/multicare-articles", split="train")

# Example: fetch one case and its first image
case = cases[0]
case_id = case["case_id"]
imgs_for_case = imgs.filter(lambda e: e["patient_id"] == case_id)

print(case["case_text"][:400])
imgs_for_case[0]["image"].show()
```
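Calling `.filter` re-scans the whole image split for every case; when joining many cases, it is cheaper to build a `patient_id` → row-indices index in one pass. A sketch with toy rows (the real image split joins on the same key):

```python
from collections import defaultdict

# Toy image rows; in practice, iterate over the loaded image split.
image_rows = [
    {"file_id": "f1", "patient_id": "p1"},
    {"file_id": "f2", "patient_id": "p1"},
    {"file_id": "f3", "patient_id": "p2"},
]

# One pass: map each patient_id to the row indices of its images.
by_patient = defaultdict(list)
for i, row in enumerate(image_rows):
    by_patient[row["patient_id"]].append(i)

# Afterwards each lookup is O(1), e.g. ds.select(by_patient[case_id]).
```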
## Splitting tips
- Avoid leakage by splitting at the patient_id (case) or article_id level, so that images from the same patient or article never span train and test.
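One simple way to get a leakage-free split is to hash the join key and assign each patient to a split deterministically. A sketch (function name and fractions are illustrative, not part of the dataset):

```python
import hashlib

def split_for(patient_id: str, test_frac: float = 0.1) -> str:
    # Hash the join key so the assignment is stable across runs and machines,
    # and so every image of one patient lands in the same split.
    h = int(hashlib.md5(patient_id.encode("utf-8")).hexdigest(), 16)
    return "test" if (h % 100) < int(test_frac * 100) else "train"
```

With `datasets`, this drops into a filter: `train = ds.filter(lambda e: split_for(e["patient_id"]) == "train")`.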
## License
- The dataset is CC-BY-4.0. Each item also retains the per-article OA license string. Respect per-article terms when redistributing.
## Cite
- DOI: 10.5281/zenodo.13936721