---
license: cc-by-4.0
task_categories:
- image-segmentation
tags:
- medical
- MRI
- segmentation
- WMH_Segmentation_Challenge
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: test
    path: test.jsonl
---
# WMH Segmentation Challenge Dataset

## Dataset Description

The WMH Segmentation Challenge dataset for white matter hyperintensity segmentation. It contains brain MRI FLAIR scans with dense voxel-wise segmentation annotations.
## Dataset Details

- **Modality:** MRI FLAIR
- **Target:** White matter hyperintensities
- **Format:** NIfTI (`.nii.gz`)
## Dataset Structure

Each sample in the JSONL files contains:

```json
{
  "image": "path/to/image.nii.gz",
  "mask": "path/to/mask.nii.gz",
  "label": ["organ1", "organ2", ...],
  "modality": "MRI",
  "dataset": "WMH_Segmentation_Challenge",
  "official_split": "train",
  "patient_id": "patient_id"
}
```
## Usage

### Load Metadata

```python
from datasets import load_dataset

# Load the dataset
ds = load_dataset("Angelou0516/wmh-segmentation")

# Access a sample
sample = ds['train'][0]
print(f"Patient ID: {sample['patient_id']}")
print(f"Image: {sample['image']}")
print(f"Mask: {sample['mask']}")
print(f"Labels: {sample['label']}")
```
### Load Images

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download
import nibabel as nib
import os

# Download the full dataset files
local_path = snapshot_download(
    repo_id="Angelou0516/wmh-segmentation",
    repo_type="dataset"
)

# Load a sample's metadata, then open its NIfTI files
ds = load_dataset("Angelou0516/wmh-segmentation")
sample = ds['train'][0]
image = nib.load(os.path.join(local_path, sample['image']))
mask = nib.load(os.path.join(local_path, sample['mask']))

# Get numpy arrays
image_data = image.get_fdata()
mask_data = mask.get_fdata()
print(f"Image shape: {image_data.shape}")
print(f"Mask shape: {mask_data.shape}")
```
## Citation

```bibtex
@article{wmh_segmentation_challenge,
  title={White Matter Hyperintensities Segmentation Challenge},
  year={2023}
}
```
## License

CC-BY-4.0