---
language:
- en
license: mit
library_name: pytorch
tags:
- computer-vision
- image-classification
- plants
- environment
- ai-for-good
pipeline_tag: image-classification
model_name: FloraGuard
model_type: ConvNeXt-Base
base_model: timm/convnext_base
author: Aarav
description: >
  FloraGuard is a deep learning–based image classification model designed to
  identify and classify forest and plant-related categories. It uses a
  ConvNeXt-Base backbone pretrained on ImageNet and a custom classification
  head for improved accuracy and generalization.
training:
  framework: PyTorch
  optimizer: AdamW
  epochs: 20
  scheduler: ReduceLROnPlateau
  device: CUDA / CPU
architecture:
  backbone: ConvNeXt-Base (pretrained on ImageNet)
  classification_head: |
    Linear → BatchNorm → ReLU → Dropout → Linear
dataset:
  format: ImageFolder
  structure: |
    ├── class_1/
    ├── class_2/
    └── ...
  description: |
    Multi-class forest and plant image dataset organized by class folders.
augmentation:
  train:
    - Resize (320x320)
    - Random Horizontal Flip
    - Random Rotation
    - Color Jitter
    - Normalization
  validation:
    - Resize (320x320)
    - Normalization
evaluation:
  metrics:
    - F1 Score
    - Confusion Matrix
    - Classification Report
    - Training & Validation Loss Curves
outputs:
  best_model: best_model.pth
use_cases:
  - Forest monitoring
  - Plant classification
  - Environmental AI applications
  - Research and education
requirements:
  - torch
  - torchvision
  - timm
  - numpy
  - matplotlib
  - scikit-learn
  - pillow
datasets:
- blanchon/FireRisk
---

## Evaluation Results

### Classification Report

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| Low | 0.40 | 0.32 | 0.36 | 2599 |
| Moderate | 0.20 | 0.16 | 0.17 | 1772 |
| Non-burnable | 0.82 | 0.89 | 0.85 | 5091 |
| Very_High | 0.36 | 0.54 | 0.43 | 1438 |
| Very_Low | 0.80 | 0.69 | 0.74 | 8448 |
| Water | 0.67 | 0.93 | 0.78 | 584 |
| macro avg | 0.51 | 0.57 | 0.53 | 21541 |
| weighted avg | 0.64 | 0.63 | 0.63 | 21541 |

### Confusion Matrix

Rows and columns follow the alphabetical class order High, Low, Moderate, Non-burnable, Very_High, Very_Low, Water (the ImageFolder convention); each row sums to that class's test support.

```
[[ 686   74  221   54  451  107   16]
 [ 317  842  302   76  296  733   33]
 [ 578  202  276   58  402  237   19]
 [  20   75   50 4536   37  309   64]
 [ 358   45  213    5  783   30    4]
 [ 282  877  329  786  213 5829  132]
 [   7    7    1    8    2   15  544]]
```

**DISCLOSURES**

- The model was developed with assistance from GitHub Copilot, only for the augmentation techniques (MixUp, CutMix).
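
A report and matrix like the ones above can be generated with scikit-learn (already in the requirements list). A minimal sketch with placeholder labels; the arrays below are illustrative, not the model's actual test-set predictions:

```python
# Sketch: generating a classification report and confusion matrix with
# scikit-learn. y_true / y_pred are tiny placeholders, not real predictions.
from sklearn.metrics import classification_report, confusion_matrix

LABELS = ["Low", "Moderate", "Water"]  # subset of the card's classes

y_true = ["Low", "Water", "Low", "Moderate", "Water"]
y_pred = ["Low", "Water", "Moderate", "Moderate", "Low"]

print(classification_report(y_true, y_pred, labels=LABELS))
print(confusion_matrix(y_true, y_pred, labels=LABELS))
```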

### Classification Report & Confusion Matrix (plot)

![Classification Report](https://cdn-uploads.huggingface.co/production/uploads/67dd0b47203b2a479b39f787/0AsYhtWtCgIUhcmFE_XBg.png)

### Notes

- Overall accuracy: 0.63 (63%)
- Best performing classes:
  - Non-burnable (F1: 0.85)
  - Very_Low (F1: 0.74)
  - Water (F1: 0.78)
- Weaker performance on:
  - Moderate
  - High
  - Low