---
license: apache-2.0
language: en
library_name: segmentation-models-pytorch
tags:
  - pytorch
  - leaf-segmentation
datasets:
  - LeafNet75/In_the_Lab_masks
base_model:
  - google/mobilenet_v2_1.0_224
pipeline_tag: image-segmentation
---

# 🌿 Leaf-Annotate-v2


Precise leaf segmentation with a CPU-friendly U-Net architecture.

NOTE: The model is well suited to single-leaf images, as it was trained only on such data. On "in-the-wild" multi-leaf images it may fail, predicting all the leaves it detects rather than only the indicated one.

## Model Description

This model is a U-Net with a lightweight MobileNetV2 backbone. It's designed for interactive segmentation: it takes a 4-channel input (RGB image + a single-channel user scribble) and outputs a binary segmentation mask of the indicated leaf. This model was trained on the LeafNet75/In_the_Lab_masks dataset.

## Purpose

This model was created for CPU-friendly auto-annotation of leaves, prioritizing precise segmentation over hardware requirements.

## Training Performance

The model was trained for 50 epochs with a final validation loss of 0.0736 and a final Dice score of 0.9856.
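For reference, the Dice score reported above measures mask overlap as 2·|P∩G| / (|P| + |G|) for a predicted mask P and ground-truth mask G. A tiny pure-Python illustration:

```python
def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
print(round(dice_score(pred, target), 3))  # 2*2 / (3+3) -> prints 0.667
```

A Dice score of 0.9856 therefore means predicted and ground-truth masks overlap almost perfectly on the validation set.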

## Important Note

This model provides automated image segmentation, but it may produce errors even on simple images and may not generate accurate segmentations for all image types. In such cases human intervention is required; corrections can be made using tools like CVAT.

You can run this model on a collection of unsegmented images using the `Batch_Inference.py` script.

- Place all your input images in a folder.
- Update the script paths as needed.
- Run the script; the masks will be saved in the `masks/` directory in `.png` format, with the same base filename as the original images.
- After generating the masks, run `prepare_annotate.py` to convert the image–mask pairs into a COCO 1.1-compatible JSON annotation file, which can be imported into CVAT for polygon editing.
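The batch step above can be sketched roughly as follows. This is not the published `Batch_Inference.py`; the folder names and the placeholder model are assumptions, and you would load the real Leaf-Annotate-v2 checkpoint in place of the stub:

```python
from pathlib import Path

import numpy as np
import torch
from PIL import Image

IMAGES_DIR = Path("images")   # placeholder input folder
MASKS_DIR = Path("masks")
IMAGES_DIR.mkdir(exist_ok=True)
MASKS_DIR.mkdir(exist_ok=True)

# Placeholder model stub: load the real Leaf-Annotate-v2 checkpoint instead.
model = lambda x: torch.zeros(1, 1, *x.shape[2:])

for img_path in sorted(IMAGES_DIR.glob("*.jpg")):
    rgb = np.asarray(Image.open(img_path).convert("RGB"), dtype=np.float32) / 255.0
    scribble = np.zeros(rgb.shape[:2], dtype=np.float32)     # or a real user scribble
    x = np.concatenate([rgb, scribble[..., None]], axis=-1)  # H x W x 4
    x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)    # 1 x 4 x H x W
    with torch.no_grad():
        mask = (model(x).sigmoid() > 0.5).squeeze(0).squeeze(0).numpy()
    # Same base filename as the input image, saved as .png
    Image.fromarray(mask.astype(np.uint8) * 255).save(MASKS_DIR / f"{img_path.stem}.png")
```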

This way, annotations can be corrected without redoing them from scratch.
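The COCO conversion step can be sketched as below. This is a simplified illustration of the JSON layout that a COCO 1.1 export targets, not the actual `prepare_annotate.py`; the category name and helper are hypothetical, and in practice the polygons would come from tracing each mask (e.g. with `cv2.findContours`):

```python
import json

def coco_skeleton(image_files, polygons_by_image):
    """Build a minimal COCO-style annotation dict.

    polygons_by_image maps filename -> list of flat [x1, y1, x2, y2, ...]
    polygons, e.g. extracted from each mask with cv2.findContours.
    """
    coco = {
        "images": [],
        "annotations": [],
        "categories": [{"id": 1, "name": "leaf"}],  # assumed single category
    }
    ann_id = 1
    for img_id, fname in enumerate(image_files, start=1):
        coco["images"].append({"id": img_id, "file_name": fname})
        for poly in polygons_by_image.get(fname, []):
            xs, ys = poly[0::2], poly[1::2]
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": 1,
                "segmentation": [poly],
                # COCO bbox format: [x, y, width, height]
                "bbox": [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)],
                "iscrowd": 0,
            })
            ann_id += 1
    return coco

doc = coco_skeleton(
    ["leaf_01.png"],
    {"leaf_01.png": [[10, 10, 50, 10, 50, 40, 10, 40]]},
)
json.dumps(doc)  # serializes cleanly for import into CVAT
```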

## Contributions

We welcome contributions of any kind, whether it's improving the model, enhancing datasets, or adding new tools and utilities.

Thanks