---
license: cc-by-nc-sa-4.0
language:
  - en
library_name: sam2
pipeline_tag: image-segmentation
tags:
  - whole-slide-imaging
  - histopathology
  - tissue-segmentation
  - sam2
---

# AtlasPatch: Whole-Slide Image Tissue Segmentation

A tissue segmentation model for whole-slide image (WSI) thumbnails, built on Segment Anything 2 (SAM2) Tiny with only the normalization layers fine-tuned. The model takes a power-based WSI thumbnail at 1.25× magnification (resized to 1024×1024) and predicts a binary tissue mask. Training used segmented thumbnails. AtlasPatch codebase (WSI preprocessing & tooling): https://github.com/AtlasAnalyticsLab/AtlasPatch

## Quickstart

Install dependencies:

```bash
pip install atlas-patch
```

Recommended: use the same components we ship in AtlasPatch. The segmentation service will (a) load your WSI with the registered backend, (b) build a 1.25× power thumbnail, (c) resize it to 1024×1024, (d) run SAM2 with a full-frame box, and (e) return a mask aligned to the thumbnail.

```python
import numpy as np
import torch
from pathlib import Path
from PIL import Image
from importlib.resources import files

from atlas_patch.core.config import SegmentationConfig
from atlas_patch.services.segmentation import SAM2SegmentationService
from atlas_patch.core.wsi import WSIFactory

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 1) Config: packaged SAM2 Hiera-T config; leave checkpoint_path=None to auto-download from HF.
cfg_path = Path(files("atlas_patch.configs") / "sam2.1_hiera_t.yaml")
seg_cfg = SegmentationConfig(
    checkpoint_path=None,          # downloads Atlas-Patch/model.pth from Hugging Face
    config_path=cfg_path,
    device=str(device),
    batch_size=1,
    thumbnail_power=1.25,
    thumbnail_max=1024,
    mask_threshold=0.0,
)
segmenter = SAM2SegmentationService(seg_cfg)

# 2) Load a WSI and segment the thumbnail.
wsi = WSIFactory.load("slide.svs")  # backend auto-detected (e.g., openslide)
mask = segmenter.segment_thumbnail(wsi)  # mask.data matches the thumbnail size

# 3) Save the mask.
mask_img = Image.fromarray((mask.data > 0).astype(np.uint8) * 255)
mask_img.save("thumbnail_mask.png")
```
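The returned mask is aligned to the thumbnail, not the full-resolution slide. To carry it back to level-0 coordinates, you can upscale it with nearest-neighbor interpolation so it stays strictly binary. A minimal sketch, with a synthetic mask standing in for `mask.data` and illustrative slide dimensions (in practice, read them from your WSI backend, e.g. OpenSlide's `dimensions`):

```python
import numpy as np
from PIL import Image

# Stand-in for the quickstart's mask.data: a binary thumbnail-sized mask.
thumb_mask = np.zeros((768, 1024), dtype=np.uint8)
thumb_mask[200:600, 300:900] = 255

# Illustrative level-0 slide size; read the real one from your WSI backend.
slide_w, slide_h = 8192, 6144

# Nearest-neighbor keeps the mask binary (no interpolated gray values
# appearing along tissue borders).
full_mask = Image.fromarray(thumb_mask).resize((slide_w, slide_h), Image.NEAREST)
print(full_mask.size)  # → (8192, 6144)
```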

## Preparing the Thumbnail

AtlasPatch generates thumbnails at 1.25× objective power (power-based downsampling) and then clamps the longest side to 1024 px. Using the same helper the library uses:

```python
from atlas_patch.core.wsi import WSIFactory

wsi = WSIFactory.load("slide.svs")
thumb = wsi.get_thumbnail_at_power(power=1.25, interpolation="optimise")
thumb.thumbnail((1024, 1024))  # in-place; clamps the longest side to 1024 px, preserving aspect ratio
thumb.save("thumbnail.png")
```
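The arithmetic behind power-based downsampling is simple: a slide scanned at base objective power `P` is downsampled by `P / 1.25` to reach 1.25×, and the result is clamped so its longest side is at most 1024 px. A sketch of the size calculation (the function name and defaults are illustrative, not part of the AtlasPatch API):

```python
def thumbnail_size(level0_dims, base_power, target_power=1.25, max_side=1024):
    """Compute a power-based thumbnail size (illustrative sketch)."""
    w, h = level0_dims
    ds = base_power / target_power          # e.g. 40 / 1.25 = 32x downsample
    w, h = w / ds, h / ds
    scale = min(1.0, max_side / max(w, h))  # clamp the longest side only
    return round(w * scale), round(h * scale)

# A 40x slide of 80,000 x 60,000 px is 2500 x 1875 at 1.25x,
# then clamped to fit within 1024 px on the longest side.
print(thumbnail_size((80_000, 60_000), base_power=40))  # → (1024, 768)
```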

## License and Commercial Use

This model is released under CC-BY-NC-SA-4.0, which strictly disallows commercial use of the model weights or any derivative works. Commercialization includes selling the model, offering it as a paid service, using it inside commercial products, or distributing modified versions for commercial gain. Non-commercial research, experimentation, educational use, and use by academic or non-profit organizations are permitted under the license terms. If you need commercial rights, please contact the authors to obtain a separate commercial license. See the LICENSE file in this repository for full terms.

## Citation

If you use this model, please cite SAM2 and the AtlasPatch project. A formal paper is forthcoming.