Angelou0516 committed on
Commit 3ec0169 · verified · 1 Parent(s): 3d30440

Upload README.md with huggingface_hub

---
license: cc-by-4.0
task_categories:
- image-segmentation
tags:
- medical
- CT
- segmentation
- WORD
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: test
    path: test.jsonl
  - split: validation
    path: validation.jsonl
---

# WORD (Whole abdominal ORgan Dataset)

## Dataset Description

WORD (Whole abdominal ORgan Dataset) is a CT dataset for abdominal organ segmentation covering 16 organs. It contains CT scans with dense, voxel-level segmentation annotations.

### Dataset Details

- **Modality**: CT
- **Targets**: liver, spleen, left and right kidneys, stomach, gallbladder, esophagus, pancreas, duodenum, colon, intestine, adrenal gland, rectum, bladder, left and right femoral heads
- **Format**: NIfTI (`.nii.gz`)

### Dataset Structure

Each sample in the JSONL files contains:

```json
{
  "image": "path/to/image.nii.gz",
  "mask": "path/to/mask.nii.gz",
  "label": ["organ1", "organ2", ...],
  "modality": "CT",
  "dataset": "WORD",
  "official_split": "train",
  "patient_id": "patient_id"
}
```
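
Records with this schema can be parsed with Python's standard `json` module, one object per line. The record below is a hypothetical sample written to match the fields above; the file paths, labels, and patient ID are illustrative, not actual entries from the dataset.

```python
import json

# A hypothetical JSONL line matching the schema above (not a real record)
line = (
    '{"image": "imagesTr/word_0001.nii.gz", '
    '"mask": "labelsTr/word_0001.nii.gz", '
    '"label": ["liver", "spleen"], '
    '"modality": "CT", "dataset": "WORD", '
    '"official_split": "train", "patient_id": "word_0001"}'
)

# Each line of the JSONL file decodes to one sample dictionary
sample = json.loads(line)
print(sample["patient_id"])  # word_0001
print(sample["label"])       # ['liver', 'spleen']
```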

## Usage

### Load Metadata

```python
from datasets import load_dataset

# Load the JSONL metadata for all splits
ds = load_dataset("Angelou0516/word")

# Access a sample
sample = ds["train"][0]
print(f"Patient ID: {sample['patient_id']}")
print(f"Image: {sample['image']}")
print(f"Mask: {sample['mask']}")
print(f"Labels: {sample['label']}")
```

### Load Images

```python
import os

import nibabel as nib
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Download the full dataset (images, masks, and metadata)
local_path = snapshot_download(
    repo_id="Angelou0516/word",
    repo_type="dataset"
)

# Load the metadata and pick a sample
ds = load_dataset("Angelou0516/word")
sample = ds["train"][0]

# Load the NIfTI volumes referenced by the sample
image = nib.load(os.path.join(local_path, sample["image"]))
mask = nib.load(os.path.join(local_path, sample["mask"]))

# Get numpy arrays
image_data = image.get_fdata()
mask_data = mask.get_fdata()

print(f"Image shape: {image_data.shape}")
print(f"Mask shape: {mask_data.shape}")
```
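
Once `mask_data` is loaded, per-organ statistics can be computed with NumPy. The sketch below uses a small synthetic mask volume in place of a real WORD scan, and the integer label values (1, 2) are placeholders; the actual organ-to-index mapping is defined by the dataset release.

```python
import numpy as np

# Synthetic stand-in for a loaded mask volume: 0 = background,
# positive integers = organ labels (placeholder indices, not WORD's real mapping)
mask_data = np.zeros((4, 4, 4), dtype=np.int64)
mask_data[0:2, 0:2, 0:2] = 1  # e.g. one organ
mask_data[2:4, 2:4, 2:4] = 2  # e.g. another organ

# Count voxels per label, skipping background
labels, counts = np.unique(mask_data[mask_data > 0], return_counts=True)
voxel_counts = dict(zip(labels.tolist(), counts.tolist()))
print(voxel_counts)  # {1: 8, 2: 8}

# Binary mask for a single organ, e.g. label 1
organ_mask = (mask_data == 1)
print(organ_mask.sum())  # 8
```

Multiplying each voxel count by the voxel volume from the NIfTI header (`image.header.get_zooms()`) gives an approximate organ volume in physical units.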

## Citation

```bibtex
@article{word,
  title={WORD: A Large Scale Dataset for Whole Abdominal Organ Segmentation},
  year={2023}
}
```

## License

CC-BY-4.0

## Dataset Homepage

https://github.com/HiLab-git/WORD