# SAM2-tiny – Cell Segmentation
Fine-tuned SAM2-tiny for instance segmentation of cells in fluorescence microscopy images. Part of the biomech-inference-serving pipeline (internal research project).
## Training

| Detail | Value |
|---|---|
| Base model | facebook/sam2.1-hiera-tiny |
| Training data | DnaRnaProteins/cell_seg_labeled |
| Fine-tuning | Full decoder fine-tune |
| Framework | sam2 |
## Usage
```python
import numpy as np
import torch
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("DnaRnaProteins/sam2-cells-seg")

image = np.array(Image.open("cell_image.png").convert("RGB"))
predictor.set_image(image)

with torch.inference_mode():
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[128, 256]]),  # [x, y] prompt point
        point_labels=np.array([1]),           # 1 = foreground point
        multimask_output=True,
    )

# masks:  (N, H, W) bool array
# scores: (N,) float confidence per mask
```
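With `multimask_output=True` the predictor returns several candidate masks. A minimal way to keep only the highest-scoring candidate, shown here on synthetic arrays standing in for the predictor's output:

```python
import numpy as np

# Synthetic stand-ins for predictor.predict() output: 3 candidate masks.
masks = np.zeros((3, 512, 512), dtype=bool)
masks[0, :64, :64] = True
masks[1, :128, :128] = True
masks[2, :32, :32] = True
scores = np.array([0.71, 0.94, 0.55])

best = masks[scores.argmax()]  # highest-confidence candidate, shape (H, W)
print(best.shape, int(best.sum()))  # (512, 512) 16384
```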
### Via Modal endpoint
```python
import base64

import modal

segment = modal.Function.from_name("biomech-inference-serving", "segment")

with open("cell_image.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

result = segment.remote(b64)
# {"masks": [[...]], "scores": [0.94, ...]}
```
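Because the endpoint returns plain JSON, the nested mask lists need converting back to arrays on the client. A sketch assuming the `{"masks": ..., "scores": ...}` shape shown above (the payload values here are illustrative):

```python
import numpy as np

# Illustrative payload in the shape returned by the endpoint.
result = {
    "masks": [[[0, 1], [1, 1]], [[1, 0], [0, 0]]],
    "scores": [0.94, 0.61],
}

masks = np.asarray(result["masks"], dtype=bool)  # (N, H, W)
scores = np.asarray(result["scores"])            # (N,)
areas = masks.sum(axis=(1, 2))                   # pixels per instance
print(masks.shape, areas.tolist())               # (2, 2, 2) [3, 1]
```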
## Limitations

- Optimised for fluorescence cell images; performance on brightfield or H&E may vary.
- Point prompts improve precision; promptless predictions use a default center point.
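The default center-point behaviour above can be reproduced explicitly if you want the prompt under your control. A sketch, assuming an `(H, W, 3)` image array like the one passed to `set_image()` (the image here is a hypothetical placeholder):

```python
import numpy as np

# Hypothetical image; in practice this is the array passed to set_image().
image = np.zeros((512, 768, 3), dtype=np.uint8)

h, w = image.shape[:2]
center_point = np.array([[w // 2, h // 2]])  # [x, y] order, as in predict()
center_label = np.array([1])                 # 1 = foreground prompt
print(center_point.tolist())                 # [[384, 256]]
```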