---
license: apache-2.0
language:
- en
pipeline_tag: image-segmentation
tags:
- ROI-extraction
---
# Leaf Disease Segmentation
[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)
[PyTorch](https://pytorch.org/)
[Hugging Face Space](https://huggingface.co/spaces/LeafNet75/Diseased_ROI_extraction)
This model is fine-tuned on top of the `RF-DETR-seg-Preview` model using the [leaf-disease-segmentation](https://app.roboflow.com/daru-ka-adda/leaf-disease-seg-finhu/1) dataset from Roboflow, exported in `coco-segmentation` format.
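The fine-tuning run itself is not reproduced in this card. A minimal sketch of what it could look like with the `rfdetr` training API is below; the argument names and the local dataset path are assumptions for illustration and may differ between library versions:

```python
from rfdetr import RFDETRSegPreview

# Hypothetical fine-tuning sketch. `dataset_dir` would point at the
# Roboflow coco-segmentation export; all hyperparameters here are
# illustrative assumptions, not values confirmed by this card.
model = RFDETRSegPreview()
model.train(
    dataset_dir="leaf-disease-seg-1",  # hypothetical local export path
    epochs=20,
    batch_size=4,
    lr=1e-4,
)
```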
Inference results:

### Inference Instructions (Colab T4)
```python
!pip install -q rfdetr==1.3.0 supervision==0.26.1 roboflow==1.2.10
```
```python
%mkdir output
%cd output
!wget https://huggingface.co/Subh775/Dis-Seg-Former/resolve/main/checkpoint_best_total.pth
```
```python
from rfdetr import RFDETRSegPreview

model = RFDETRSegPreview(
    pretrain_weights="checkpoint_best_total.pth",
    num_classes=1,
    segmentation_head=True
)
model.optimize_for_inference()
```
```python
import supervision as sv
from PIL import Image
def annotate(image: Image.Image, detections: sv.Detections, classes: dict[int, str]) -> Image.Image:
    color = sv.ColorPalette.from_hex([
        "#ffff00", "#ff9b00", "#ff8080", "#ff66b2", "#ff66ff", "#b266ff",
        "#9999ff", "#3399ff", "#66ffff", "#33ff99", "#66ff66", "#99ff00"
    ])
    text_scale = sv.calculate_optimal_text_scale(resolution_wh=image.size)

    mask_annotator = sv.MaskAnnotator(color=color)
    polygon_annotator = sv.PolygonAnnotator(color=sv.Color.WHITE)
    label_annotator = sv.LabelAnnotator(
        color=color,
        text_color=sv.Color.BLACK,
        text_scale=text_scale,
        text_position=sv.Position.CENTER_OF_MASS
    )

    labels = [
        f"{classes.get(class_id, 'unknown')} {confidence:.2f}"
        for class_id, confidence in zip(detections.class_id, detections.confidence)
    ]

    out = image.copy()
    out = mask_annotator.annotate(out, detections)
    out = polygon_annotator.annotate(out, detections)
    out = label_annotator.annotate(out, detections, labels)
    out.thumbnail((1000, 1000))
    return out
```
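Note the `thumbnail` call at the end of `annotate`: unlike `resize`, it works in place, preserves aspect ratio, and never upscales. A quick standalone check of that behavior:

```python
from PIL import Image

# thumbnail() shrinks in place so the image fits inside the given box,
# keeping aspect ratio; images that already fit are left unchanged.
img = Image.new("RGB", (2000, 1500))
img.thumbnail((1000, 1000))
print(img.size)  # (1000, 750)

small = Image.new("RGB", (400, 300))
small.thumbnail((1000, 1000))
print(small.size)  # (400, 300)
```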
```python
def run_inference(image_path):
    # model.predict accepts a PIL image directly, so no manual
    # preprocessing is needed here.
    img = Image.open(image_path).convert("RGB")
    detections = model.predict(img, threshold=0.5)
    return img, detections
```
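The `threshold=0.5` argument controls which detections `predict` returns: anything scored below it is discarded. Conceptually (a plain-Python sketch of the idea, not the library's internals):

```python
# Conceptual sketch of confidence thresholding; passing `threshold`
# to predict() is assumed to apply the same kind of filter internally.
raw = [("Disease", 0.91), ("Disease", 0.34), ("Disease", 0.62)]
threshold = 0.5
kept = [(name, conf) for name, conf in raw if conf >= threshold]
print(kept)  # [('Disease', 0.91), ('Disease', 0.62)]
```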
```python
classes = {
    0: "Disease",
}
```
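The `annotate` helper above turns this mapping into per-detection labels via `classes.get(class_id, 'unknown')`, so ids missing from the mapping degrade gracefully. In isolation:

```python
classes = {0: "Disease"}

def format_label(class_id: int, confidence: float) -> str:
    # Mirrors the label construction inside annotate().
    return f"{classes.get(class_id, 'unknown')} {confidence:.2f}"

print(format_label(0, 0.873))  # Disease 0.87
print(format_label(3, 0.5))    # unknown 0.50
```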
```python
def segment_and_visualize(image_path):
    img, detections = run_inference(image_path)
    annotated = annotate(img, detections, classes)
    return annotated

result = segment_and_visualize("res.jpg")
result
```
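The returned `result` is a plain `PIL.Image`, so it can be saved like any other image. A hedged usage sketch (shown on a stand-in image, since `result` requires the model and an input photo; the filename is arbitrary):

```python
from PIL import Image

# Stand-in for the annotated result; any PIL image saves the same way.
result = Image.new("RGB", (640, 480))
result.save("annotated.jpg", quality=95)
print(Image.open("annotated.jpg").size)  # (640, 480)
```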