Improve dataset card: add task categories, tags, GitHub link, and sample usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +63 -4
README.md CHANGED
@@ -1,13 +1,74 @@
  ---
  license: cc-by-4.0
  ---
  
  # CPath Preprocessed Patch for E2E Training
  
  <!-- Provide a quick summary of the dataset. -->
  
- This dataset is used for training E2E CPath model. SEE [Revisiting End-to-End Learning with Slide-level Supervision in Computational Pathology](https://huggingface.co/papers/2506.02408), NeurIPS 2025.
  
  
  ## Citation
  
@@ -24,6 +85,4 @@ This dataset is used for training E2E CPath model. SEE [Revisiting End-to-End Le
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.02408},
  }
- ```
- 
- 
 
  ---
  license: cc-by-4.0
+ task_categories:
+ - image-classification
+ tags:
+ - medical
+ - pathology
+ - whole-slide-imaging
+ - cancer-detection
  ---
  
  # CPath Preprocessed Patch for E2E Training
  
  <!-- Provide a quick summary of the dataset. -->
  
+ This dataset is used for training the E2E CPath model presented in [Revisiting End-to-End Learning with Slide-level Supervision in Computational Pathology](https://huggingface.co/papers/2506.02408), NeurIPS 2025.
  
+ **Code:** [https://github.com/DearCaat/E2E-WSI-ABMILX](https://github.com/DearCaat/E2E-WSI-ABMILX)
+ 
+ ## Sample Usage
+ 
+ This dataset provides preprocessed whole-slide image (WSI) patches in LMDB format. Below is an example of loading all patches of one slide from an LMDB file.
+ 
+ ```python
+ import pickle
+ 
+ import lmdb
+ import torch
+ 
+ from datasets.utils import imfrombytes  # helper from the accompanying GitHub repo
+ 
+ slide_name = "xxxx"  # example slide name
+ path_to_lmdb = "YOUR_PATH_TO_LMDB_FILE"  # e.g. "/path/to/my_dataset_256_level0.lmdb"
+ 
+ # Open the LMDB dataset (read-only)
+ env = lmdb.open(path_to_lmdb, subdir=False, readonly=True, lock=False,
+                 readahead=False, meminit=False, map_size=100 * (1024**3))
+ 
+ with env.begin(write=False) as txn:
+     # The '__pn__' key holds a pickled {slide_name: patch_count} dict
+     pn_dict = pickle.loads(txn.get(b'__pn__'))
+     if slide_name not in pn_dict:
+         raise ValueError(f"Slide ID {slide_name} not found in LMDB metadata.")
+     num_patches = pn_dict[slide_name]
+ 
+     # Patches are stored under keys of the form "{slide_name}-{index}"
+     patch_ids = [f"{slide_name}-{i}" for i in range(num_patches)]
+ 
+     # Allocate memory for the patches (adjust dimensions and dtype as needed)
+     patches_tensor = torch.empty((len(patch_ids), 3, 224, 224), dtype=torch.float32)
+ 
+     # Load and decode each patch into the tensor
+     for i, key_str in enumerate(patch_ids):
+         patch_bytes = txn.get(key_str.encode('ascii'))
+         if patch_bytes is None:
+             print(f"Warning: key {key_str} not found in LMDB.")
+             continue
+         # The stored value is a pickled buffer of encoded image bytes
+         img_array = imfrombytes(pickle.loads(patch_bytes).tobytes())
+         patches_tensor[i] = torch.from_numpy(img_array.transpose(2, 0, 1))  # HWC -> CHW
+ 
+ env.close()
+ 
+ # Normalize with ImageNet statistics; the * 255.0 assumes pixel values in [0, 255]
+ # (drop it if your patches are already scaled to [0, 1])
+ mean = torch.tensor([0.485, 0.456, 0.406]).view((1, 3, 1, 1)) * 255.0
+ std = torch.tensor([0.229, 0.224, 0.225]).view((1, 3, 1, 1)) * 255.0
+ patches_tensor = (patches_tensor - mean) / std
+ ```
  
  ## Citation
  
@@ -24,6 +85,4 @@
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2506.02408},
  }
+ ```
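
For readers without the LMDB files at hand, the key layout the sample usage snippet relies on can be illustrated with a plain dict standing in for the LMDB transaction. This is a minimal sketch of the convention only; the slide names and patch counts below are made up:

```python
import pickle

# Hypothetical in-memory stand-in for the LMDB store described in the card:
# b'__pn__' maps to a pickled {slide_name: patch_count} dict, and each patch
# is stored under an ASCII key of the form "{slide_name}-{index}".
store = {}
pn = {"slide_A": 3, "slide_B": 1}
store[b"__pn__"] = pickle.dumps(pn)
for slide, n in pn.items():
    for i in range(n):
        store[f"{slide}-{i}".encode("ascii")] = b"<encoded image bytes>"

# Reading back mirrors the txn.get(...) calls in the sample usage snippet
num_patches = pickle.loads(store[b"__pn__"])["slide_A"]
patch_keys = [f"slide_A-{i}" for i in range(num_patches)]
print(num_patches)  # 3
print(patch_keys)   # ['slide_A-0', 'slide_A-1', 'slide_A-2']
```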