---
license: apache-2.0
language:
- en
pipeline_tag: image-segmentation
tags:
- ROI-extraction
---
# Leaf Disease Segmentation
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?style=flat&logo=PyTorch&logoColor=white)](https://pytorch.org/)
[![Hugging Face Model](https://img.shields.io/badge/%F0%9F%A4%97%20H%20F-Demo-darkred)](https://huggingface.co/spaces/LeafNet75/Diseased_ROI_extraction)
This model is fine-tuned on top of the `RF-DETR-seg-Preview` model using the [leaf-disease-segmentation](https://app.roboflow.com/daru-ka-adda/leaf-disease-seg-finhu/1) dataset, exported from Roboflow in COCO segmentation format.

Inference results:
![image](https://cdn-uploads.huggingface.co/production/uploads/66c6048d0bf40704e4159a23/PbvvP6L39Vf7V1PpC1AaB.png)
### Inference instructions (Colab T4)
```python
!pip install -q rfdetr==1.3.0 supervision==0.26.1 roboflow==1.2.10
```
```python
%mkdir output
%cd output
!wget https://huggingface.co/Subh775/Dis-Seg-Former/resolve/main/checkpoint_best_total.pth
```
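Optionally, a quick sanity check (standard-library only; the exact size printed depends on the checkpoint) confirms the download completed before loading the model:

```python
import os

ckpt = "checkpoint_best_total.pth"
if os.path.isfile(ckpt):
    # Report the size; a very small file usually means an interrupted download
    print(f"{ckpt}: {os.path.getsize(ckpt) / 1e6:.1f} MB")
else:
    print(f"{ckpt} not found - re-run the wget cell above")
```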
```python
from rfdetr import RFDETRSegPreview

# Load the fine-tuned checkpoint: a single "Disease" class plus the segmentation head
model = RFDETRSegPreview(
    pretrain_weights="checkpoint_best_total.pth",
    num_classes=1,
    segmentation_head=True
)
model.optimize_for_inference()
```
```python
import supervision as sv
from PIL import Image

def annotate(image: Image.Image, detections: sv.Detections, classes: dict[int, str]) -> Image.Image:
    color = sv.ColorPalette.from_hex([
        "#ffff00", "#ff9b00", "#ff8080", "#ff66b2", "#ff66ff", "#b266ff",
        "#9999ff", "#3399ff", "#66ffff", "#33ff99", "#66ff66", "#99ff00"
    ])
    text_scale = sv.calculate_optimal_text_scale(resolution_wh=image.size)
    mask_annotator = sv.MaskAnnotator(color=color)
    polygon_annotator = sv.PolygonAnnotator(color=sv.Color.WHITE)
    label_annotator = sv.LabelAnnotator(
        color=color,
        text_color=sv.Color.BLACK,
        text_scale=text_scale,
        text_position=sv.Position.CENTER_OF_MASS
    )
    labels = [
        f"{classes.get(class_id, 'unknown')} {confidence:.2f}"
        for class_id, confidence in zip(detections.class_id, detections.confidence)
    ]
    out = image.copy()
    out = mask_annotator.annotate(out, detections)
    out = polygon_annotator.annotate(out, detections)
    out = label_annotator.annotate(out, detections, labels)
    out.thumbnail((1000, 1000))
    return out
```
```python
# model.predict accepts a PIL image directly and handles preprocessing,
# so no manual torchvision transform is needed here
def run_inference(image_path):
    img = Image.open(image_path).convert("RGB")
    detections = model.predict(img, threshold=0.5)
    return img, detections
```
```python
classes = {
0: "Disease",
}
```
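The `annotate` helper builds its labels by looking class ids up in this dict, falling back to `'unknown'` for any id it does not contain. A minimal pure-Python illustration of that lookup (the example ids and confidences are made up):

```python
classes = {0: "Disease"}

# Same label-formatting expression used inside annotate()
labels = [
    f"{classes.get(class_id, 'unknown')} {confidence:.2f}"
    for class_id, confidence in zip([0, 3], [0.91, 0.47])
]
print(labels)  # ['Disease 0.91', 'unknown 0.47']
```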
```python
def segment_and_visualize(image_path):
    img, detections = run_inference(image_path)
    return annotate(img, detections, classes)

result = segment_and_visualize("res.jpg")
result
```
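To annotate a whole folder of leaf images rather than a single file, a small helper can wrap the pipeline above. This is a sketch: `process_folder`, the folder names, and the `*.jpg` glob are illustrative, not part of the model API.

```python
from pathlib import Path

def process_folder(folder, segment_fn, out_dir="annotated"):
    """Run `segment_fn` on every .jpg in `folder` and save the annotated copies."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    saved = []
    for img_path in sorted(Path(folder).glob("*.jpg")):
        annotated = segment_fn(str(img_path))  # e.g. segment_and_visualize
        annotated.save(out / img_path.name)
        saved.append(str(out / img_path.name))
    return saved
```

For example, `process_folder("samples", segment_and_visualize)` writes the annotated images to `annotated/`.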