agriscan-efficientnet model #1
by Vernard - opened

README.md CHANGED
---
library_name: transformers
tags:
- agriculture
- plant-disease-classification
- computer-vision
- efficientnet
- Africa
license: apache-2.0
datasets:
- BrandonFors/Plant-Diseases-PlantVillage-Dataset
language:
- en
pipeline_tag: image-classification
---

# AgriScan-EfficientNet

Crop Disease Classification Model

## Model Details

### Model Description

**AgriScan-EfficientNet** is a vision model fine-tuned to identify diseases in key crops critical to smallholder farmers in Africa. It is the core AI engine of the **AgriScan** platform, an initiative by **CDSA (Cross Domain Solution Architect)** to provide accessible, AI-powered agricultural intelligence.

- **Developed by:** [Vernard Sharbney Ngomane](https://huggingface.co/VCDSA) of CDSA
- **Funded by [optional]:** This model was developed as part of a pro-bono innovation project.
- **Model type:** Image Classification (Fine-tuned EfficientNet)
- **License:** Apache 2.0
- **Finetuned from model [optional]:** [google/efficientnet-b3](https://huggingface.co/google/efficientnet-b3) (via PyTorch Image Models)
- **Demo:** [AgriScan Live Demo Space](https://huggingface.co/spaces/VCDSA/agriscan-demo)

### Model Sources [optional]

- **Repository:** [CDSA AgriScan Project](https://github.com/[Your-Github-Org]/AgriScan) *(Replace with actual link)*
- **Paper (For EfficientNet):** [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)

## Uses

### Direct Use

This model is intended for **direct use in agricultural decision-support systems**. Its primary function is to analyze images of plant leaves and predict the presence and type of disease.

- **Primary Use Case:** Integration into mobile applications (via TFLite), web APIs, or messaging bots (WhatsApp) to provide instant disease diagnosis for farmers and agricultural extension officers.
- **Educational Use:** Can be used in training materials or apps to help students and farmers learn to identify common crop diseases.

### Out-of-Scope Use

- **Medical or Human Diagnosis:** This model is strictly for plant disease classification.
- **Absolute Decision-Making:** Predictions are advisory. Final agricultural decisions should involve human expertise and local context.
- **Non-Visual Data:** Cannot analyze soil samples, weather patterns, or textual descriptions alone.

## Bias, Risks, and Limitations

- **Geographic & Crop Bias:** Trained primarily on the PlantVillage dataset, which may not fully represent the diversity of disease manifestations, camera qualities, and environmental conditions (e.g., specific lighting, soil backgrounds) found across all African farms.
- **Limited Scope:** Classifies diseases present in the training set. It is not an "unknown disease detector" and may misclassify novel or regional-specific pathogens.
- **Field Condition Performance:** Lab/studio images from PlantVillage may lead to lower accuracy on blurry, angled, or occluded field photos.
- **Mitigation Strategy:** The AgriScan project employs **IBM AI Fairness 360 (AIF360)** for bias auditing and uses a confidence threshold. Low-confidence predictions are flagged for human review.

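The confidence-threshold review mentioned in the mitigation strategy can be sketched in a few lines. This is a minimal illustration, not the deployed logic; the function name and the 0.8 threshold are assumptions for the example:

```python
import numpy as np

def triage(logits: np.ndarray, threshold: float = 0.8):
    """Softmax the logits and flag low-confidence predictions for human review.

    The 0.8 threshold is illustrative; a deployed value would be tuned on
    validation data.
    """
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    idx = int(probs.argmax())
    confidence = float(probs[idx])
    needs_review = confidence < threshold
    return idx, confidence, needs_review
```

A prediction whose top softmax probability falls below the threshold is routed to a human expert rather than reported as a diagnosis.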
### Recommendations

Users should:

1. Treat outputs as a **first alert system**, not a definitive diagnosis.
2. Be aware of the model's training background and correlate predictions with local agricultural expert knowledge.
3. Report misclassifications to help improve future model versions.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Load model and processor from the Hub
processor = AutoImageProcessor.from_pretrained("VCDSA/agriscan-efficientnet")
model = AutoModelForImageClassification.from_pretrained("VCDSA/agriscan-efficientnet")

# Load and preprocess an image
image = Image.open("path_to_your_leaf_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Run inference
with torch.no_grad():
    logits = model(**inputs).logits

# Get prediction
predicted_class_idx = logits.argmax(-1).item()
label = model.config.id2label[predicted_class_idx]
print(f"Predicted disease: {label}")
```

## Training Details

### Training Data

- **Primary Dataset:** Plant-Diseases-PlantVillage-Dataset
- **Description:** Contains over 54,000 images of healthy and diseased leaves across 14 crop species and 38 disease classes.
- **Preprocessing:** Images were resized to 300x300 pixels, normalized using ImageNet statistics, and augmented with random horizontal flips and slight rotations to improve generalization.

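The resize-and-normalize step above can be sketched without any framework dependencies. The function name is an assumption for the example, and the training-time augmentations (flips, rotations) are omitted since they do not apply at inference:

```python
import numpy as np

# Standard ImageNet normalization statistics (RGB)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Scale a 300x300 RGB uint8 image to [0, 1], normalize with ImageNet
    statistics, and reorder HWC -> CHW as the model expects."""
    x = image_hwc_uint8.astype(np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return np.transpose(x, (2, 0, 1))

dummy = np.full((300, 300, 3), 128, dtype=np.uint8)  # stand-in for a leaf photo
print(preprocess(dummy).shape)  # (3, 300, 300)
```

In practice the `AutoImageProcessor` shown in the getting-started snippet performs this step automatically.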
### Training Procedure

- **Fine-tuning Regime:** Standard supervised fine-tuning on the PlantVillage classification task.
- **Optimizer:** AdamW
- **Learning Rate:** 5e-5 with linear decay
- **Epochs:** 10
- **Framework:** PyTorch, using Hugging Face `transformers` and `timm` libraries.

## Evaluation

### Testing Data, Factors & Metrics

- **Testing Data:** 20% hold-out split from the PlantVillage dataset.
- **Primary Metric:** Top-1 Accuracy
- **Secondary Metrics:** Precision, Recall, F1-Score (computed per class).

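The per-class precision, recall, and F1 listed above can be derived from a confusion matrix. A minimal sketch with an illustrative function name and toy labels (not the project's evaluation code):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, num_classes):
    """Compute per-class precision, recall, and F1 from integer labels."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows: true class, cols: predicted class
    tp = np.diag(cm).astype(np.float64)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # guard empty columns
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # guard empty rows
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return precision, recall, f1

# Toy example with 2 classes
p, r, f = per_class_metrics([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```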
### Results

- **Overall Accuracy:** 98.7% on the PlantVillage test set.

**Note on Real-World Performance:** This high accuracy reflects performance on curated lab images. Field performance is actively being evaluated through pilot studies and is expected to be lower. Continuous evaluation with real farmer data is a core part of the AgriScan development cycle.

## Model Examination [optional]

**Explainability:** To build trust and transparency, the AgriScan project uses IBM AI Explainability 360 (AIX360) to generate saliency maps (e.g., Grad-CAM). These maps visually highlight the regions of the leaf most influential to the model's prediction, helping users and developers understand the model's "reasoning."

## Environmental Impact

- **Hardware Type:** 1 x NVIDIA Tesla T4 GPU
- **Hours used:** ~4 hours
- **Cloud Provider:** Google Colab
- **Carbon Emitted:** Estimated < 0.5 kgCO2eq (based on the ML CO2 impact calculator)
- **Mitigation:** The model's efficiency (EfficientNet architecture) and intended use case, potentially reducing pesticide overuse and crop loss, aim to offset its training footprint with positive environmental impact.

## Technical Specifications [optional]

### Model Architecture and Objective

EfficientNet-B3 backbone with an image-classification head, trained with a supervised classification objective over the PlantVillage disease labels.

#### Input

3-channel RGB image (300x300 pixels)

#### Output

Logits over 38 disease classes (compatible with PlantVillage labels).

#### Export Formats

Available in PyTorch (.bin), ONNX (.onnx), and TensorFlow Lite (.tflite) formats for cross-platform deployment.

## Citation [optional]

```bibtex
@misc{agriscan2025,
  author    = {Ngomane, Vernard Sharbney and CDSA},
  title     = {AgriScan: An AI-Powered Agricultural Intelligence Platform},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/VCDSA/agriscan-efficientnet}
}
```

```bibtex
@inproceedings{tan2019efficientnet,
  title     = {EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author    = {Tan, Mingxing and Le, Quoc V.},
  booktitle = {International Conference on Machine Learning},
  pages     = {6105--6114},
  year      = {2019},
  publisher = {PMLR}
}
```

## Model Card Contact

For questions, comments, or partnership inquiries regarding the AgriScan model, please open a discussion on the AgriScan Hub community.