Commit 2fad864 (parent: 9d5a479)
docs: update README file to include initial setup

README.md (changed):
---
license: cc-by-nc-sa-4.0
language:
- en
library_name: sam2
pipeline_tag: image-segmentation
tags:
- whole-slide-imaging
- histopathology
- tissue-segmentation
- sam2
---

# AtlasPatch: Whole-Slide Image Tissue Segmentation

Segmentation model for whole-slide image (WSI) thumbnails, built on **Segment Anything 2 (SAM2) Tiny** with only the normalization layers fine-tuned. The model takes a **power-based WSI thumbnail (longest side clamped to 1024 px, internally resized to 1024×1024)** and predicts a binary tissue mask. Training used segmented thumbnails. AtlasPatch codebase (WSI preprocessing & tooling): https://github.com/AtlasAnalyticsLab/SlideProcessor
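The size convention above (clamp the longest side to 1024 px, then resize to the model's square 1024×1024 input) can be sketched with a small helper; `clamp_longest_side` is a hypothetical illustration, not part of the library:

```python
def clamp_longest_side(width, height, max_side=1024):
    """Return (w, h) scaled so the longest side equals max_side."""
    scale = max_side / max(width, height)
    return round(width * scale), round(height * scale)

# A 40000x30000 slide yields a 1024x768 thumbnail, which the model
# then resizes internally to its square 1024x1024 input.
print(clamp_longest_side(40000, 30000))  # -> (1024, 768)
```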

## Quickstart

Install dependencies:

```bash
pip install atlas-patch
```

Recommended: use the same components we ship in AtlasPatch/SlideProcessor. The segmentation service will (a) load your WSI with the registered backend, (b) build a 1.25× power thumbnail, (c) resize it to 1024×1024, (d) run SAM2 with a full-frame box, and (e) return a mask aligned to the thumbnail.

```python
import numpy as np
import torch
from pathlib import Path
from PIL import Image
from importlib.resources import files

from slide_processor.core.config import SegmentationConfig
from slide_processor.services.segmentation import SAM2SegmentationService
from slide_processor.core.wsi import WSIFactory

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 1) Config: packaged SAM2 Hiera-T config; leave checkpoint_path=None to auto-download from HF.
cfg_path = Path(files("slide_processor.configs") / "sam2.1_hiera_t.yaml")
seg_cfg = SegmentationConfig(
    checkpoint_path=None,  # downloads Atlas-Patch/model.pth from Hugging Face
    config_path=cfg_path,
    device=str(device),
    batch_size=1,
    thumbnail_power=1.25,
    thumbnail_max=1024,
    mask_threshold=0.0,
)
segmenter = SAM2SegmentationService(seg_cfg)

# 2) Load a WSI and segment the thumbnail.
wsi = WSIFactory.load("slide.svs")  # backend auto-detected (e.g., openslide)
mask = segmenter.segment_thumbnail(wsi)  # mask.data matches the thumbnail size

# 3) Save the mask.
mask_img = Image.fromarray((mask.data > 0).astype(np.uint8) * 255)
mask_img.save("thumbnail_mask.png")
```
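As a quick sanity check on the result, you can compute the fraction of thumbnail pixels predicted as tissue. This sketch only assumes `mask.data` is a 2-D array where positive values mean tissue, as in the comments above; the `tissue_fraction` helper is illustrative, not part of the library:

```python
import numpy as np

def tissue_fraction(mask_data):
    """Fraction of pixels predicted as tissue (positive mask values)."""
    return float((np.asarray(mask_data) > 0).mean())

# Synthetic stand-in for mask.data: top half tissue, bottom half background.
demo = np.zeros((4, 4), dtype=np.uint8)
demo[:2, :] = 255
print(tissue_fraction(demo))  # -> 0.5
```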

## Preparing the Thumbnail

AtlasPatch generates thumbnails at **1.25× objective power** (power-based downsampling) and then clamps the longest side to **1024 px**. Using the same helper the library uses:

```python
from slide_processor.core.wsi import WSIFactory

wsi = WSIFactory.load("slide.svs")
thumb = wsi.get_thumbnail_at_power(power=1.25, interpolation="optimise")
thumb.thumbnail((1024, 1024))  # in-place: clamps the longest side to 1024 px, preserving aspect ratio
thumb.save("thumbnail.png")
```
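Power-based downsampling means the downsample factor depends on the slide's native objective power, not on a fixed pyramid level. A hypothetical helper (not a library API) showing the arithmetic:

```python
def downsample_for_power(native_power, target_power=1.25):
    """Downsample factor from the slide's native objective power to target_power."""
    return native_power / target_power

# A 40x slide needs a 32x downsample to reach 1.25x; a 20x slide needs 16x.
print(downsample_for_power(40.0))  # -> 32.0
print(downsample_for_power(20.0))  # -> 16.0
```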

## License and Commercial Use

This model is released under **CC-BY-NC-SA-4.0**, which strictly disallows commercial use of the model weights or any derivative works. Commercial use includes selling the model, offering it as a paid service, embedding it in commercial products, or distributing modified versions for commercial gain. Non-commercial research, experimentation, educational use, and use by academic or non-profit organizations are permitted under the license terms. If you need commercial rights, please contact the authors to obtain a separate commercial license. See the LICENSE file in this repository for full terms.

## Citation

If you use this model, please cite SAM2 and the AtlasPatch project. A formal paper is forthcoming.