Each dataset (**STHELAR_40x** or **STHELAR_20x**) consists of:
* **tissue**: Tissue type, provided as categorical labels (e.g., Breast, Lung, Colon).
* **image**: RGB color images of size 256×256 pixels, extracted from H&E-stained whole-slide images at 40x or 20x magnification, with a 64-pixel overlap between adjacent patches.
* **cell_id_map**: Sparse segmentation mask (a CSR matrix stored as `.npz` bytes) aligned with the H&E patch, where each nucleus pixel stores its biological `cell_id` integer (`0` = background). This is a cell-identity mask, not an instance-index mask: the integer values are stable per slide and are designed to join with the per-slide metadata parquets below.
* **Dice**: Dice similarity coefficient comparing the provided segmentation mask with the mask predicted by the pre-trained CellViT model (SAM-H encoder).
* **Jaccard**: Jaccard index comparing the provided segmentation mask with the mask predicted by the pre-trained CellViT model (SAM-H encoder).
* **bPQ**: Binary Panoptic Quality score comparing the provided segmentation mask with the mask predicted by the pre-trained CellViT model (SAM-H encoder).
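As a minimal sketch of working with `cell_id_map`, the snippet below decodes the `.npz` bytes back into a dense 256×256 array of cell IDs. It assumes the bytes were written with `scipy.sparse.save_npz` (the standard serialization for a SciPy CSR matrix), which is not stated explicitly above; the round-trip shown here is illustrative only.

```python
# Decode a `cell_id_map` (CSR matrix serialized as .npz bytes) into a dense
# array where each pixel holds its biological cell_id (0 = background).
# Assumption: the bytes were produced with scipy.sparse.save_npz.
import io

import numpy as np
from scipy import sparse


def decode_cell_id_map(npz_bytes: bytes) -> np.ndarray:
    """Return a dense (256, 256) integer array of cell IDs."""
    mask = sparse.load_npz(io.BytesIO(npz_bytes))
    return mask.toarray()


# Illustrative round-trip: build a synthetic mask with one nucleus (cell_id 42),
# serialize it the same way, decode it, and list the cell IDs present.
dense = np.zeros((256, 256), dtype=np.int64)
dense[10:20, 10:20] = 42
buf = io.BytesIO()
sparse.save_npz(buf, sparse.csr_matrix(dense))
decoded = decode_cell_id_map(buf.getvalue())
cell_ids = np.unique(decoded[decoded > 0])
print(cell_ids)  # -> [42]
```

The recovered integer values can then be used as join keys against the `cell_id` column of the per-slide metadata parquets.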
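To make the per-patch agreement scores concrete, here is a minimal sketch (not the repository's evaluation code) of computing Dice and Jaccard between a reference binary mask and a predicted binary mask; the toy masks below are invented for illustration.

```python
# Dice = 2|A ∩ B| / (|A| + |B|); Jaccard = |A ∩ B| / |A ∪ B|,
# computed over foreground pixels of two binary masks.
import numpy as np


def dice_jaccard(ref: np.ndarray, pred: np.ndarray) -> tuple[float, float]:
    ref = ref.astype(bool)
    pred = pred.astype(bool)
    inter = np.logical_and(ref, pred).sum()
    union = np.logical_or(ref, pred).sum()
    total = ref.sum() + pred.sum()
    dice = 2.0 * inter / total if total else 1.0
    jaccard = inter / union if union else 1.0
    return float(dice), float(jaccard)


# Toy example: 4 foreground pixels in each mask, 2 of them overlapping.
ref = np.zeros((4, 4), dtype=bool)
ref[:2, :2] = True
pred = np.zeros((4, 4), dtype=bool)
pred[:2, 1:3] = True
d, j = dice_jaccard(ref, pred)
print(d, j)  # -> 0.5 0.333...
```

bPQ additionally requires matching individual nucleus instances between the two masks (a detection step on top of the overlap computation), so it is not reproduced here.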
---