|
|
--- |
|
|
license: cc-by-nc-sa-4.0 |
|
|
task_categories: |
|
|
- image-segmentation |
|
|
tags: |
|
|
- medical |
|
|
- biomedical |
|
|
- 3d |
|
|
- cvpr2025 |
|
|
--- |
|
|
|
|
|
This repository contains the BiomedSegFM dataset, the dataset for the **CVPR 2025 Competition: Foundation Models for 3D Biomedical Image Segmentation**, which has two tracks: |
|
|
|
|
|
- Foundation Models for Interactive 3D Biomedical Image Segmentation ([Homepage](https://www.codabench.org/competitions/5263/)) |
|
|
|
|
|
- Foundation Models for Text-guided 3D Biomedical Image Segmentation ([Homepage](https://www.codabench.org/competitions/5651/)) |
|
|
|
|
|
# CVPR 2025 Competition: Foundation Models for 3D Biomedical Image Segmentation |
|
|
|
|
|
**Highly recommend watching the [webinar recording](https://www.youtube.com/playlist?list=PLWPTMGguY4Kh48ov6WTkAQDfKRrgXZqlh) to learn about the task settings and baseline methods.** |
|
|
|
|
|
The dataset covers five commonly used 3D biomedical image modalities: CT, MR, PET, Ultrasound, and Microscopy. All images come from public datasets whose licenses permit redistribution. |
|
|
To reduce the dataset size, all labeled slices are extracted and preprocessed into `npz` files. Each `npz` file contains: |
|
|
- `imgs`: image data; shape: (D, H, W); intensity range: [0, 255] |
|
|
- `gts`: ground truth; shape: (D, H, W) |
|
|
- `spacing`: voxel spacing |
|
|
|
|
|
Folder structure: |
|
|
|
|
|
- 3D_train_npz_all: complete training set |
|
|
- 3D_train_npz_random_10percent_16G: a random 10% of the cases from the complete training set (i.e., these cases are a subset of it). Participants may use other criteria to select their own 10% coreset. |
|
|
- 3D_val_npz: validation set |
|
|
- 3D_val_gt: ground truth of validation set |
|
|
- CVPR25_TextSegFMData_with_class.json: text prompts for the text-guided segmentation task |
|
|
|
|
|
|
|
|
## Sample Usage (Interactive 3D Segmentation) |
|
|
|
|
|
The training `npz` files contain three keys: `imgs`, `gts`, and `spacing`. |
|
|
The validation (and testing) `npz` files don't have the `gts` key. They instead provide an optional `boxes` key, where each box is defined by the 2D bounding box on the middle slice plus the top and bottom slices (a closed interval). |
|
|
Here is a demo to load the data: |
|
|
|
|
|
```python |
|
|
import numpy as np |
|
|
|
|
|
npz = np.load('path to npz file', allow_pickle=True) |
|
|
print(npz.keys()) |
|
|
imgs = npz['imgs'] |
|
|
gts = npz['gts'] # will not be in the npz for testing cases |
|
|
boxes = npz['boxes'] # a list of bounding box prompts |
|
|
print(boxes[0].keys()) # dict_keys(['z_min', 'z_max', 'z_mid', 'z_mid_x_min', 'z_mid_y_min', 'z_mid_x_max', 'z_mid_y_max']) |
|
|
``` |
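One way to turn such a box prompt into an initial 3D region is to extrude the mid-slice 2D box across the `[z_min, z_max]` interval. This is a minimal sketch for illustration only, not the official evaluation behavior (and, as noted below, the box may not cover the whole object):

```python
import numpy as np

def box_prompt_to_mask(box, shape):
    # Extrude the mid-slice 2D box across [z_min, z_max] (closed interval)
    # to obtain a coarse 3D region of interest.
    mask = np.zeros(shape, dtype=np.uint8)
    mask[box["z_min"]:box["z_max"] + 1,
         box["z_mid_y_min"]:box["z_mid_y_max"] + 1,
         box["z_mid_x_min"]:box["z_mid_x_max"] + 1] = 1
    return mask

# toy box prompt with the same field names as boxes[0] above
box = {"z_min": 2, "z_max": 4, "z_mid": 3,
       "z_mid_x_min": 1, "z_mid_y_min": 1,
       "z_mid_x_max": 3, "z_mid_y_max": 2}
m = box_prompt_to_mask(box, (6, 5, 5))
print(m.sum())  # 3 slices * 2 rows * 3 cols = 18 voxels
```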
|
|
|
|
|
Remarks: |
|
|
|
|
|
1. Using the box prompt is optional; the corresponding DSC and NSD scores are not used during ranking. |
|
|
|
|
|
2. Some objects don't have box prompts, such as vessels (filename contains `vessel`) and multi-component brain lesions (filename contains `brats`), because a box is not a suitable prompt for such targets. The evaluation script will generate a zero mask, and algorithms can start directly with the point prompt for segmentation. |
|
|
|
|
|
3. The provided box prompts are designed for annotator efficiency and may not cover the whole object. [Here](https://github.com/JunMa11/CVPR-MedSegFMCompetition/blob/main/get_boxes.py) is the script to generate box prompts from ground truth. |
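The essence of that generation script can be sketched as follows (a simplified illustration, not the official `get_boxes.py`; the field names follow the `boxes[0]` dictionary shown above):

```python
import numpy as np

def box_from_mask(gt, label=1):
    # z-range of the object (closed interval) and the middle labeled slice
    zs = np.where((gt == label).any(axis=(1, 2)))[0]
    z_min, z_max = int(zs.min()), int(zs.max())
    z_mid = int(zs[len(zs) // 2])
    # 2D bounding box on the middle slice
    ys, xs = np.where(gt[z_mid] == label)
    return {
        "z_min": z_min, "z_max": z_max, "z_mid": z_mid,
        "z_mid_x_min": int(xs.min()), "z_mid_y_min": int(ys.min()),
        "z_mid_x_max": int(xs.max()), "z_mid_y_max": int(ys.max()),
    }

# toy volume with one cuboid object of label 1
vol = np.zeros((10, 20, 20), dtype=np.uint8)
vol[4:7, 5:8, 10:13] = 1
print(box_from_mask(vol))
```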
|
|
|
|
|
|
|
|
## Sample Usage (Text-Guided Segmentation) |
|
|
|
|
|
For the training set, we provide a JSON file with dataset-wise prompts: `CVPR25_TextSegFMData_with_class.json`. |
|
|
|
|
|
In the text prompts, |
|
|
- `'instance_label': 1` denotes an instance mask where each label corresponds to one instance (e.g., lesions). It is generated by `tumor_instance = cc3d.connected_components(tumor_binary_mask > 0)` |
|
|
- `'instance_label': 0` denotes a common semantic mask |
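The instance-label convention can be illustrated with a tiny example. Here `scipy.ndimage.label` stands in for the `cc3d.connected_components` call above (note the default connectivities differ: 26-connectivity for cc3d vs. 6-connectivity for scipy in 3D):

```python
import numpy as np
from scipy import ndimage

# toy binary tumor mask with two separate lesions
tumor_binary_mask = np.zeros((1, 10, 10), dtype=np.uint8)
tumor_binary_mask[0, 1:3, 1:3] = 1
tumor_binary_mask[0, 6:9, 6:9] = 1

# each connected lesion receives a unique instance id (1, 2, ...)
tumor_instance, n = ndimage.label(tumor_binary_mask > 0)
print(n)  # 2 instances
```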
|
|
|
|
|
For the validation (and hidden testing) set, we provide a `text_prompts` key in each validation `npz` file: |
|
|
|
|
|
```python |
|
|
import numpy as np |
|
|
|
|
|
npz = np.load('path to npz file', allow_pickle=True) |
|
|
print(npz.keys()) |
|
|
imgs = npz['imgs'] |
|
|
print(npz['text_prompts']) |
|
|
``` |
|
|
|
|
|
Remarks: |
|
|
|
|
|
1. To ensure rotation consistency, all testing cases will be preprocessed to canonical orientation with [`nibabel.funcs.as_closest_canonical`](https://nipy.org/nibabel/reference/nibabel.funcs.html#nibabel.funcs.as_closest_canonical). |
|
|
2. Some datasets don't have text prompts; please simply exclude them during model training. |
|
|
3. For instance labels, the evaluation metric is the [F1 score](https://github.com/JunMa11/NeurIPS-CellSeg/blob/main/baseline/compute_metric.py), where the order of instance ids doesn't matter. |
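The order-invariance can be seen in a simplified sketch of instance-level F1 (not the official `compute_metric.py`; here a predicted instance is greedily matched to a ground-truth instance by IoU):

```python
import numpy as np

def instance_f1(gt, pred, iou_thr=0.5):
    # Match instances by overlap, not by id, so id ordering is irrelevant.
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    matched, tp = set(), 0
    for p in pred_ids:
        pm = pred == p
        for g in gt_ids:
            if g in matched:
                continue
            gm = gt == g
            iou = np.logical_and(pm, gm).sum() / np.logical_or(pm, gm).sum()
            if iou > iou_thr:
                matched.add(g)
                tp += 1
                break
    fp, fn = len(pred_ids) - tp, len(gt_ids) - tp
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

gt = np.array([[1, 1, 0, 2, 2]])
pred = np.array([[2, 2, 0, 1, 1]])  # same instances, ids swapped
print(instance_f1(gt, pred))  # 1.0
```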