🔬 Gray Leaf Spot Segmentation Model

PyTorch U-Net models for segmenting gray leaf spot (Magnaporthe and related fungi) colonies on 90 mm petri-dish images.

▶ Try the live demo — upload images, run inference, and see overlays & growth charts in your browser.


Model Weights

| File | Architecture | Params | Area-Consistency Weight | Description |
|---|---|---|---|---|
| grayleafspot.pt | smp.Unet (ResNet-34) | ~24.4 M | — | Main encoder–decoder model |
| best_area_w_0.1.pt | SmallUNet | ~250 K | 0.1 | Light area regularisation |
| best_area_w_0.3.pt | SmallUNet | ~250 K | 0.3 | Moderate area regularisation |
| best_area_w_0.5.pt | SmallUNet | ~250 K | 0.5 | Balanced BCE + area |
| best_area_w_0.7.pt | SmallUNet | ~250 K | 0.7 | Strong area consistency (used by demo) ✅ recommended |

All SmallUNet variants share the same architecture:

```
Input (3 × 256 × 256)
  │
  ├─ enc1: ConvBlock(3 → 16)               ─── skip s1
  ├─ enc2: MaxPool2d → ConvBlock(16 → 32)  ─── skip s2
  ├─ enc3: MaxPool2d → ConvBlock(32 → 64)  ─── skip s3
  ├─ enc4: MaxPool2d → ConvBlock(64 → 128) ─── skip s4
  │
  ├─ bottleneck: MaxPool2d → ConvBlock(128 → 256)
  │
  ├─ up4: Upsample + cat(s4) → ConvBlock(384 → 128)
  ├─ up3: Upsample + cat(s3) → ConvBlock(192 → 64)
  ├─ up2: Upsample + cat(s2) → ConvBlock(96 → 32)
  ├─ up1: Upsample + cat(s1) → ConvBlock(48 → 16)
  │
  └─ head: Conv2d(16 → 1) → Sigmoid
```

Each ConvBlock = Conv3×3 (no bias) → ReLU → Conv3×3 (no bias) → ReLU.
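Following the description above, a ConvBlock can be sketched as a minimal reconstruction. Note that `padding=1` (to preserve spatial size) is an assumption, and the repo's actual class may differ in detail:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions (no bias), each followed by ReLU."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# With padding=1, spatial size is preserved: (1, 3, 256, 256) -> (1, 16, 256, 256)
y = ConvBlock(3, 16)(torch.zeros(1, 3, 256, 256))
```

This matches the channel arithmetic in the diagram: each decoder stage concatenates the upsampled feature map with its skip, so e.g. up4 sees 256 + 128 = 384 input channels.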

| Property | Value |
|---|---|
| Input | 256 × 256 RGB |
| Output | 1-channel sigmoid probability mask |
| Training loss | BCE + area-consistency loss |
| CPU compatible | ✅ Pure PyTorch — no custom CUDA kernels |
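The exact area-consistency formulation is not spelled out in this card; one plausible sketch is BCE plus a penalty on the gap between predicted and labelled foreground area, with the table's 0.1–0.7 values mapping to the weighting term. `area_consistency_loss`, the L1 area penalty, and the use of raw logits here are all assumptions — check the training code:

```python
import torch
import torch.nn.functional as F

def area_consistency_loss(logits: torch.Tensor, target: torch.Tensor,
                          area_weight: float = 0.7) -> torch.Tensor:
    """BCE plus a penalty on the mismatch between predicted and
    ground-truth foreground area (one plausible formulation)."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    # Mean foreground fraction per image, compared in L1
    pred_area = prob.mean(dim=(1, 2, 3))
    true_area = target.mean(dim=(1, 2, 3))
    area = F.l1_loss(pred_area, true_area)
    return bce + area_weight * area
```

Higher `area_weight` pushes the model to match labelled colony area even when per-pixel boundaries are ambiguous, which is consistent with the "strong area consistency" description of the 0.7 checkpoint.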

Quick Start

Download & Inference (SmallUNet)

```python
import numpy as np
import torch
from huggingface_hub import hf_hub_download
from PIL import Image

# Download weights
path = hf_hub_download("rotsl/grayleafspot-segmentation", "best_area_w_0.7.pt")

# Load checkpoint
ckpt = torch.load(path, map_location="cpu", weights_only=False)

# Build model (SmallUNet architecture — see demo repo for full class definition)
# https://huggingface.co/rotsl/grayleafspot-segmentation-demo/blob/main/app.py
from model import SmallUNet  # or copy the class from the demo app.py

model = SmallUNet(in_channels=3, out_channels=1, base_channels=16)
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Run inference on a 256×256 RGB tensor
img = Image.open("petri_dish.jpg").convert("RGB").resize((256, 256))
x = torch.from_numpy(np.array(img).transpose(2, 0, 1)).float() / 255.0
x = x.unsqueeze(0)

with torch.no_grad():
    prob = model(x)[0, 0].numpy()

mask = (prob > 0.5).astype(np.uint8) * 255
Image.fromarray(mask).save("colony_mask.png")
```
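For visual inspection of a predicted mask, an overlay can be blended onto the original image. This is a sketch only — the demo's own overlay rendering may differ, and `make_overlay` is an illustrative helper, not part of the repo:

```python
import numpy as np
from PIL import Image

def make_overlay(img: Image.Image, mask: np.ndarray, alpha: float = 0.4) -> Image.Image:
    """Blend a red tint over predicted colony pixels for visual inspection."""
    base = np.asarray(img.convert("RGB"), dtype=np.float32)
    red = np.zeros_like(base)
    red[..., 0] = 255.0
    fg = (mask > 0)[..., None]  # boolean foreground, broadcast over channels
    blended = np.where(fg, (1.0 - alpha) * base + alpha * red, base)
    return Image.fromarray(blended.astype(np.uint8))
```

Applied to the quick-start variables above: `make_overlay(img, mask).save("overlay.png")` (the mask and image must share the same height and width).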

Download & Inference (Main U-Net)

```python
import torch
from huggingface_hub import hf_hub_download

path = hf_hub_download("rotsl/grayleafspot-segmentation", "grayleafspot.pt")
model = torch.load(path, map_location="cpu", weights_only=False)
model.eval()
```
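smp.Unet encoders are usually ImageNet-pretrained, so the main model most likely expects ImageNet-normalized inputs rather than the plain 0–1 scaling used for SmallUNet above. This is an assumption to verify against the training code; `preprocess` below is an illustrative helper:

```python
import numpy as np
import torch
from PIL import Image

# ImageNet statistics commonly used with ResNet encoders (assumed here)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img: Image.Image, size: int = 256) -> torch.Tensor:
    """Resize, scale to [0, 1], normalize per channel, and add a batch dim."""
    arr = np.asarray(img.convert("RGB").resize((size, size)), dtype=np.float32) / 255.0
    arr = (arr - MEAN) / STD
    return torch.from_numpy(arr.transpose(2, 0, 1)).unsqueeze(0)
```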

Training & Inference Pipeline

Environment Setup (Apple Silicon recommended)

```bash
python3.10 -m venv trainenv
source trainenv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```

Dataset Preparation

Place raw images in raw/ and corresponding masks in masks/ (matching filenames). Expand the dataset with augmentations:

```bash
python src/build_augmented_dataset.py --copies-per-image 4 --clean
```
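The augmentation script itself is not shown here, but the essential requirement for segmentation data is that every geometric transform is applied identically to the image and its mask so annotations stay aligned. A minimal sketch of that idea (illustrative; `augment_pair` is not the repo's function):

```python
import numpy as np

def augment_pair(img: np.ndarray, mask: np.ndarray, rng: np.random.Generator):
    """Apply the same random flips/rotation to an image and its mask."""
    if rng.random() < 0.5:
        img, mask = img[:, ::-1], mask[:, ::-1]      # horizontal flip
    if rng.random() < 0.5:
        img, mask = img[::-1, :], mask[::-1, :]      # vertical flip
    k = int(rng.integers(0, 4))
    img, mask = np.rot90(img, k), np.rot90(mask, k)  # 0/90/180/270 degree turn
    return np.ascontiguousarray(img), np.ascontiguousarray(mask)
```

Running this `--copies-per-image` times per source image and writing the pairs out would reproduce the spirit of the expansion step.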

Training

Standard U-Net training:

```bash
python src/train.py \
    --image-dir augmented_dataset/raw \
    --mask-dir augmented_dataset/masks \
    --epochs 40 --batch-size 4 --lr 1e-4 \
    --image-size 256 --freeze-encoder-epochs 5
```

Area-consistency U-Net (LabelMe JSON polygons):

```bash
./trainenv/bin/python src/area_consistency/train_area.py
```
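The area-consistency trainer consumes LabelMe JSON polygon annotations. Rasterizing such polygons into a binary training mask can be sketched as follows (illustrative; this assumes the standard LabelMe layout with `shapes[i]["points"]`, and `labelme_to_mask` is not the repo's function):

```python
import json
import numpy as np
from PIL import Image, ImageDraw

def labelme_to_mask(data: dict, height: int, width: int) -> np.ndarray:
    """Rasterize every polygon in a parsed LabelMe JSON dict into one binary mask."""
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for shape in data.get("shapes", []):
        points = [tuple(p) for p in shape["points"]]
        draw.polygon(points, fill=255)
    return np.asarray(mask)

# Usage with a LabelMe file:
# with open("annotation.json") as f:
#     mask = labelme_to_mask(json.load(f), height=256, width=256)
```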

Inference (CLI)

Single image:

```bash
python src/predict.py --input raw/your_image.jpg --weights models/best_finetuned.pt --output-dir predictions
```

Folder:

```bash
python src/predict.py --input raw --weights models/best_finetuned.pt --output-dir predictions
```

Best Practices

  • Keep trusted human-labelled masks unchanged.
  • Use augmentations and area-consistency loss for improved generalisation.
  • Inspect overlay outputs to verify mask quality.
  • On Apple Silicon, MPS acceleration is used automatically if available.

Demo Spaces

✅ Recommended: rotsl/grayleafspot-segmentation-demo

Uses best_area_w_0.7.pt (SmallUNet with area-consistency loss). It produces more accurate segmentation with better boundary adherence thanks to the area-consistency regularisation.

Features: dish detection → colony segmentation → crack & hyphae analysis → 16 morphometric measurements → time-series growth charts → CSV/JSON export.

Source code: rotsl/grayleafspot-segmentation-demo (model repo)

Legacy: rotsl/fungal-colony-input

Uses grayleafspot.pt (smp.Unet with ResNet-34 encoder). This is the earlier, larger model trained with standard BCE loss only — it is less accurate than the area-consistency variant above, particularly for colony boundary delineation and area estimation. Kept available for reference and backward compatibility.


Citation

```bibtex
@misc{rohan_r_2026,
  author       = {rohan r},
  title        = {grayleafspot-segmentation (Revision 0e85f71)},
  year         = 2026,
  url          = {https://huggingface.co/rotsl/grayleafspot-segmentation},
  doi          = {10.57967/hf/8416},
  publisher    = {Hugging Face}
}
```

License

Apache License 2.0 — see LICENSE for details.
