# OmniTumorData
A curated multi-source benchmark for text-prompted 3D tumor segmentation across CT and MRI, accompanying the OmniTumor paper.
11,326 subjects · 12 public cohorts · CT + MRI · 21 sub-region prompts
## Data access
Imaging data is not redistributed from this page. Several of the constituent cohorts (e.g., AbdomenCT-1K, ULS23, COVID-19 CT) are released under non-redistribution licenses, and a few originate from clinical sites with patient-privacy restrictions on derivative works. We therefore host only the dataset documentation here.
For access to the curated PNG dataset and the consolidated metadata file (dataset_metadata_v2.json):
- Request via Google Drive — a curated copy with the unified PNG layout, ontology, and split files is staged at https://drive.google.com/drive/folders/1Kd7NgrMCbzE0vidIj0SHKA7pSfRk_8n0?usp=sharing. Access is granted on a per-request basis after a brief usage statement.
- Or contact the authors directly via the GitHub repository linked at the bottom of this page; we will share download instructions once the request is reviewed.
We additionally provide the original sources below so that users with the appropriate licenses can rebuild the curated layout from scratch.
## Composition
| # | Cohort | Anatomy | Modality | # Subjects | License | Source |
|---|---|---|---|---|---|---|
| 1 | BraTS 2023 | Brain | MRI (T1c) | 2,350 | CC-BY-NC-SA 4.0 | synapse.org/brats2023 |
| 2 | MSD Task01 BrainTumour | Brain | MRI | 484 | CC-BY-SA 4.0 | medicaldecathlon.com |
| 3 | LGG Segmentation (Buda 2019) | Brain | MRI | 110 | CC-BY-SA 3.0 | kaggle.com/.../lgg-mri-segmentation |
| 4 | AbdomenCT-1K (tumor subset) | Abdomen | CT | 715 | CC-BY-NC-ND 4.0 | github.com/JunMa11/AbdomenCT-1K |
| 5 | MSD Task03 Liver | Liver | CT | 131 | CC-BY-SA 4.0 | medicaldecathlon.com |
| 6 | MSD Task08 (tumor only) | Liver | CT | 303 | CC-BY-SA 4.0 | medicaldecathlon.com |
| 7 | ULS23 Part 1 | Multi-organ | CT | 1,560 | CC-BY-NC 4.0 | uls23.grand-challenge.org |
| 8 | ULS23 Part 2 | Multi-organ | CT | 3,400 | CC-BY-NC 4.0 | uls23.grand-challenge.org |
| 9 | ULS23 Part 3 | Multi-organ | CT | 1,416 | CC-BY-NC 4.0 | uls23.grand-challenge.org |
| 10 | LUNA16 | Lung | CT | 601 | CC-BY 3.0 | luna16.grand-challenge.org |
| 11 | LNDb | Lung | CT | 236 | CC-BY 4.0 | lndb.grand-challenge.org |
| 12 | COVID-19 CT | Lung | CT | 20 | CC-BY-NC-SA 4.0 | zenodo.org/record/3757476 |
| | **Total** | | | **11,326** | | |
ULS23 Part 1–3 internally redistribute the following sources, also covered by the ULS23 license: DeepLesion3D, Radboudumc Bone, Radboudumc Pancreas, KiTS21, LIDC-IDRI, LiTS, MSD Lung/Pancreas/Colon, NIH Lymph Node. We load these only via ULS23 to avoid duplicate ingestion.
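The de-duplication rule above can be sketched as a small routing table. The function name and structure below are illustrative, not part of the released tooling:

```python
# Hypothetical routing table: sources redistributed inside ULS23 are
# loaded only through ULS23, never ingested a second time from their
# original release.
ULS23_INTERNAL = {
    "DeepLesion3D", "Radboudumc Bone", "Radboudumc Pancreas",
    "KiTS21", "LIDC-IDRI", "LiTS", "MSD Lung", "MSD Pancreas",
    "MSD Colon", "NIH Lymph Node",
}

def ingest_route(source_name: str) -> str:
    """Return the top-level cohort a source is ingested through."""
    return "ULS23" if source_name in ULS23_INTERNAL else source_name
```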
## Ontology
The 12 cohorts are mapped to 21 unique specific-object prompts following the BiomedParse ontology, with sub-region distinctions preserved (e.g., BraTS produces three prompts: necrotic tumor core in brain MRI, peritumoral edema in brain MRI, enhancing tumor in brain MRI). Each canonical prompt is expanded into 7 synonymous variations, yielding 147 unique training strings total.
The complete ontology and augmented prompt pool ship together with the curated dataset behind the access link above.
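The 21 × 7 = 147 expansion can be sketched as follows. The variation templates and the listed prompts are illustrative only; the actual prompt pool ships with the curated dataset:

```python
# Illustrative sketch of expanding canonical prompts into 7 synonymous
# variations each (21 prompts x 7 variations = 147 training strings).
canonical_prompts = [
    "necrotic tumor core in brain MRI",
    "peritumoral edema in brain MRI",
    "enhancing tumor in brain MRI",
    # ... 18 more canonical prompts in the full ontology
]

def expand(prompt: str, n_variations: int = 7) -> list[str]:
    """Return n_variations synonymous rewordings of one canonical prompt.
    These templates are hypothetical, not the shipped pool."""
    templates = [
        "{p}",
        "segment the {p}",
        "{p} region",
        "area of {p}",
        "mask of the {p}",
        "delineate the {p}",
        "{p} boundary",
    ]
    return [t.format(p=prompt) for t in templates[:n_variations]]

pool = [v for p in canonical_prompts for v in expand(p)]
# With all 21 canonical prompts present, len(pool) == 21 * 7 == 147.
```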
## Splits
Case-level random splits (80% train / 10% val / 10% test, fixed seed = 42) prevent volumetric leakage by assigning all slices from the same 3D volume to the same split. For cohorts with multiple lesions per patient (DeepLesion3D, BraTS, ULS23), a stricter patient-level grouping is additionally enforced so that no patient appears across splits.
| Split | # Subjects |
|---|---|
| Train | 9,057 |
| Val | 1,130 |
| Test | 1,139 |
The split files (splits/train.txt, splits/val.txt, splits/test.txt) are included in this repository so that any user who reconstructs the layout from raw cohorts can reproduce the exact partitioning.
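The shipped split files are authoritative; the grouping logic they encode can be sketched as below. The function name and the `patient_of` mapping are hypothetical helpers for illustration:

```python
import random

def patient_level_split(subject_ids, patient_of, seed=42,
                        fractions=(0.8, 0.1, 0.1)):
    """Assign subjects to train/val/test at the patient level, so that
    all subjects (and hence all slices) of one patient land in one
    split. `patient_of` maps subject id -> patient id (hypothetical)."""
    patients = sorted({patient_of[s] for s in subject_ids})
    rng = random.Random(seed)          # fixed seed = 42 per the card
    rng.shuffle(patients)
    n = len(patients)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    train_p = set(patients[:n_train])
    val_p = set(patients[n_train:n_train + n_val])
    splits = {"train": [], "val": [], "test": []}
    for s in subject_ids:
        p = patient_of[s]
        key = "train" if p in train_p else ("val" if p in val_p else "test")
        splits[key].append(s)
    return splits
```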
## Reproducing the layout from raw cohorts
After downloading each cohort to RAW_ROOT/<cohort>/, run:
```bash
bash scripts/convert_all.sh          # 12 cohorts -> unified PNG layout (1024x1024)
python scripts/build_metadata_v2.py  # consolidate ontology + ULS23 routing -> dataset_metadata_v2.json
```
Preprocessing applied during conversion:
- Resize axial slices to 1024×1024 (Lanczos for image, nearest for mask)
- CT: window center 40 HU, width 400 HU
- MRI: per-volume 1st–99th percentile min-max scaling
- Body mask threshold: −500 HU for CT, 5th-percentile intensity for MRI
- Mask encoding: label k → pixel value 50·k; recover via integer division by 50
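The preprocessing steps above can be sketched in NumPy. Function names are hypothetical; the resize step (Lanczos for images, nearest-neighbour for masks) is omitted here since it is a single PIL/OpenCV call:

```python
import numpy as np

def window_ct(slice_hu, center=40.0, width=400.0):
    """Clip a CT slice to the [center - width/2, center + width/2] HU
    window and rescale to [0, 255]."""
    lo, hi = center - width / 2, center + width / 2
    x = np.clip(slice_hu, lo, hi)
    return ((x - lo) / (hi - lo) * 255).astype(np.uint8)

def scale_mri(volume):
    """Per-volume 1st-99th percentile min-max scaling to [0, 255]."""
    lo, hi = np.percentile(volume, [1, 99])
    x = np.clip(volume, lo, hi)
    return ((x - lo) / max(hi - lo, 1e-8) * 255).astype(np.uint8)

def body_mask(x, modality):
    """Foreground mask: CT thresholds at -500 HU; MRI at the
    5th-percentile intensity (computed per volume in the pipeline)."""
    thr = -500.0 if modality == "CT" else np.percentile(x, 5)
    return x > thr

def encode_mask(labels):
    """Label k -> pixel value 50*k (assumes label ids small enough
    that 50*k fits in uint8)."""
    return labels.astype(np.uint8) * 50

def decode_mask(pixels):
    """Recover label ids via integer division by 50."""
    return pixels // 50
```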
## Citation
Please cite both OmniTumor and the original source datasets.
```bibtex
@article{omnitumor2025,
  title  = {A Spatial Vision-Language Foundation Model for Universal Volumetric Tumor Segmentation},
  author = {Zhao, Songlin and Sun, Lichao and Liu, Wei},
  year   = {2025},
}
```
## Contact
Songlin Zhao — see github.com/soz223/OmniTumor or open an issue there for data-access requests.