# Merge branch 'develop' into ops/clearml-setup

Files changed in this commit:

- .gitignore (+8 -0)
- README.md (+372 -1)
- docs/USAGE.md (+401 -0)
- docs/deployment_guide.md (+404 -0)
- quickstart.sh (+66 -0)
- requirements.txt (+17 -1)
- ui/app.py (+280 -0)
- ui/config.py (+15 -0)
- ui/model_loader.py (+47 -0)
- ui/utils.py (+210 -0)
## .gitignore (CHANGED, +8 -0)

```diff
@@ -1,3 +1,10 @@
+<<<<<<< HEAD
+.vscode/
+.venv/
+.vscode/
+.models/
+__pycache__/
+=======
 
 # Python environment
 venv/
@@ -11,3 +18,4 @@ __pycache__/
 
 # Generated files from data_preparation.py
 class_distribution.png
+>>>>>>> 04cb88662062ef6b880c627546d067fa0cedfa8b
```
## README.md (CHANGED, +372 -1)

# Plant Disease Detection - UI and Deployment

This directory contains the Gradio-based user interface and deployment code for the Plant Disease Detection project.

## Team Information

**Team Number:** [Add your team number]

**Team Members:**
- [Add team member names here]

## Links

- **GitHub Repository:** https://github.kcl.ac.uk/K23064919/smallGroupProject
- **Deployed App:** [Add Hugging Face Spaces URL here]
- **Trained Model:** [Add model download link or ClearML model ID here]

## Project Structure

```
plant-disease-ui/
├── ui/
│   ├── app.py              # Main Gradio application
│   ├── config.py           # Configuration (class names, paths, etc.)
│   ├── model_loader.py     # Model loading utilities
│   ├── utils.py            # Utility functions (preprocessing, etc.)
│   └── examples/           # Example images for gallery
├── models/
│   ├── mock_model.py       # Mock model for development
│   └── best_model.pth      # (To be added) Trained model weights
├── docs/
│   └── deployment_guide.md # Deployment instructions
├── requirements.txt        # Python dependencies
└── README.md               # This file
```

## Features

### Core Features
- ✅ **Image Upload:** Upload plant leaf images for disease detection
- ✅ **Top-K Predictions:** Display top 10 predictions with confidence scores
- ✅ **Formatted Output:** Clean, readable prediction results

### Advanced Features
- ✅ **Multiple Models:** Switch between different trained models (CNN, Transfer Learning)
- ✅ **Example Gallery:** Pre-loaded example images for quick testing
- ✅ **Batch Processing:** Upload and classify multiple images at once
- ✅ **Flag Predictions:** Report incorrect predictions
- ✅ **Confidence Threshold:** Filter predictions by minimum confidence level
- ✅ **Detailed Information:** View plant type, disease name, and health status

## Setup Instructions

### 1. Install Dependencies

```bash
# Create a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install required packages
pip install -r requirements.txt
```

### 2. Add Example Images (Optional)

To enable the example gallery feature:

```bash
# Create examples directory
mkdir -p ui/examples

# Add plant disease images to ui/examples/
# You can download sample images from the PlantVillage dataset
```

To download example images programmatically:

```python
from datasets import load_dataset

# Load PlantVillage dataset
dataset = load_dataset("EdBianchi/plant-village")

# Save some example images
import os
os.makedirs("ui/examples", exist_ok=True)

for i in range(10):  # Save 10 examples
    img = dataset['train'][i * 1000]['image']  # Sample every 1000th image
    img.save(f"ui/examples/example_{i}.jpg")
```

### 3. Run the App Locally

**Option A: Using Mock Model (for development)**

```bash
cd ui
python app.py
```

The app will start at `http://localhost:7860`

**Option B: Using Your Trained Model**

First, modify `app.py` to load your real model:

```python
# In app.py, change the last line:
demo = create_interface(use_mock=False)  # Change to False
```

Then run:

```bash
cd ui
python app.py
```

### 4. Configure for Real Model

When your team's model is ready, you have several options:

#### Option 1: Load from Local File

```python
# In model_loader.py, update the model path
MODEL_PATH = "models/best_model.pth"

# Then in app.py:
app = PlantDiseaseApp(use_mock=False)
```

#### Option 2: Load from ClearML

```python
# In app.py or model_loader.py:
loader = ModelLoader(use_mock=False)
model = loader.load_from_clearml(
    project_name="Plant Disease Detection",
    task_name="CNN Training"
)
```

#### Option 3: Load from Hugging Face Hub

```python
# First, upload your model to HF Hub
# Then in model_loader.py:
loader = ModelLoader(use_mock=False)
model = loader.load_from_huggingface("your-username/plant-disease-model")
```

## Deployment to Hugging Face Spaces

### Step 1: Create a Hugging Face Account

1. Go to https://huggingface.co/ and create an account
2. Verify your email address

### Step 2: Create a New Space

1. Click on your profile → "New Space"
2. Space name: `plant-disease-detection`
3. License: Apache 2.0
4. Select SDK: **Gradio**
5. Make it **Public**
6. Click "Create Space"

### Step 3: Prepare Files for Deployment

Create these files in the root of your Space:

**app.py** (Simplified version for HF Spaces)
```python
# Copy ui/app.py and modify the imports to work in the flat structure
```

**requirements.txt**
```
torch
torchvision
gradio
Pillow
numpy
huggingface-hub
```

**README.md** (for the Space)
```markdown
---
title: Plant Disease Detection
emoji: 🌱
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 4.0.0
app_file: app.py
pinned: false
---

# Plant Disease Detection

AI-powered plant disease detection from leaf images.
Developed by [Your Team Name] for King's College London.
```

### Step 4: Upload Your Model

**Option A: Upload weights to the Space**

1. Upload your `best_model.pth` to the Space
2. Modify `app.py` to load from this file

**Option B: Use Hugging Face Hub**

1. Upload model to HF Model Hub:
```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="models/best_model.pth",
    path_in_repo="model.pth",
    repo_id="your-username/plant-disease-model",
    repo_type="model"
)
```

2. Load in app:
```python
from huggingface_hub import hf_hub_download
model_path = hf_hub_download(
    repo_id="your-username/plant-disease-model",
    filename="model.pth"
)
```

**Option C: Fetch from ClearML**

1. Add ClearML credentials to Space Secrets
2. Use the `load_from_clearml()` function

### Step 5: Deploy

1. Upload all files to your HF Space repository
2. The app will automatically build and deploy
3. Test at: `https://huggingface.co/spaces/your-username/plant-disease-detection`

## Model Integration Guide

### Your CNN Model Structure

When integrating your actual trained model, make sure to update `model_loader.py` with your actual CNN architecture:

```python
class YourCNNModel(nn.Module):
    def __init__(self, num_classes=39):
        super(YourCNNModel, self).__init__()

        # Add your actual CNN architecture here
        # This should match what you used for training

    def forward(self, x):
        # Your forward pass
        return x
```
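As a reference point, here is a minimal sketch of what such an architecture could look like. Everything below except the 39-class output and the 256x256 input size (both from this project's config) is an illustrative assumption, not the network your team trained:

```python
import torch.nn as nn

class ExampleCNN(nn.Module):
    """Illustrative only: the layer widths and depths are assumptions."""
    def __init__(self, num_classes=39):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 256 -> 128
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 32 * 32, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

The only hard requirements are that the class definition matches the checkpoint you saved and that `num_classes` stays at 39.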

### Loading Trained Weights

```python
# Load model
model = YourCNNModel(num_classes=39)

# Load trained weights
checkpoint = torch.load('path/to/best_model.pth', map_location=device)

# If you saved the entire model:
model = checkpoint

# If you saved just state_dict:
model.load_state_dict(checkpoint)

# Or if you saved optimizer and other info:
model.load_state_dict(checkpoint['model_state_dict'])
```
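Whichever variant applies, switch the model to evaluation mode before serving predictions, so that dropout is disabled and batch norm uses its running statistics:

```python
model.eval()  # required before inference; training-mode layers behave differently
```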

## Testing the UI

### Manual Testing Checklist

- [ ] Upload a single image and get predictions
- [ ] Try different models from the dropdown
- [ ] Adjust confidence threshold slider
- [ ] Test example gallery (if images added)
- [ ] Upload multiple images for batch processing
- [ ] Flag a prediction
- [ ] Check all tabs load correctly
- [ ] Verify predictions match expected classes

### Automated Testing

```bash
# Run tests
cd ui
python -m pytest test_app.py  # (Create tests if needed)
```
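If you do create `test_app.py`, a minimal smoke test along these lines is a reasonable starting point (a sketch: it assumes the loader's `loadModel` returns the mock model during development and that `config.CLASS_NAMES` lists all 39 classes):

```python
# test_app.py (sketch; module and method names follow this repo's UI code)
import torch
import config
from model_loader import ModelLoader

def test_model_output_shape():
    loader = ModelLoader()                       # mock model by default in development
    model = loader.loadModel("CNN from Scratch")
    dummy = torch.randn(1, 3, 256, 256)          # one fake 256x256 RGB image
    with torch.no_grad():
        logits = model(dummy)
    assert logits.shape == (1, len(config.CLASS_NAMES))
```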

## Troubleshooting

### Common Issues

**1. ModuleNotFoundError**
```bash
# Make sure all dependencies are installed
pip install -r requirements.txt
```

**2. Model Loading Error**
```python
# Check that the model architecture matches the saved weights
# Make sure you're using the same num_classes (39)
```

**3. Image Size Issues**
```python
# Ensure images are being resized to (256, 256)
# Check config.py IMAGE_SIZE setting
```
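If you need to reproduce that preprocessing outside `utils.py`, a sketch looks like this (the resize size comes from `config.py`; whether training also applied normalization is not shown here, so check `utils.py` before relying on it):

```python
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),  # must match config.py IMAGE_SIZE
    transforms.ToTensor(),          # PIL image -> float tensor in [0, 1]
])

image = Image.open("leaf.jpg").convert("RGB")  # any test image
tensor = preprocess(image).unsqueeze(0)        # add batch dim -> (1, 3, 256, 256)
```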

**4. CUDA/GPU Errors**
```python
# The app automatically falls back to CPU
# Check: torch.cuda.is_available()
```
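The fallback is the standard PyTorch device pattern:

```python
import torch

# model: any nn.Module loaded earlier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # input tensors must be moved with .to(device) as well
```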

## Contributing

When contributing to this UI:

1. Create a new branch for your feature
2. Test locally with mock model first
3. Test with real model before pushing
4. Update this README if adding new features
5. Ensure code is well-commented

## TODO

- [ ] Add more example images to gallery
- [ ] Integrate with actual trained models
- [ ] Add disease information/treatment suggestions
- [ ] Implement persistent flagging system (database)
- [ ] Add data visualization for batch results
- [ ] Create comprehensive tests

## Resources

- [Gradio Documentation](https://gradio.app/docs/)
- [HuggingFace Spaces Guide](https://huggingface.co/docs/hub/spaces)
- [ClearML Python API](https://clear.ml/docs/latest/docs/references/sdk/)
- [PlantVillage Dataset](https://huggingface.co/datasets/EdBianchi/plant-village)

## License

[Specify your license here]

## Acknowledgments

- King's College London, 5CCSAGAP Course
- PlantVillage Dataset creators
- Course instructors and TAs
## docs/USAGE.md (ADDED, +401 -0)

# Usage Guide - Plant Disease Detection UI

This guide explains how to use the Plant Disease Detection application.

## For Developers

### Running Locally

**Quick Start:**
```bash
./quickstart.sh
```

**Manual Start:**
```bash
# Activate virtual environment
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Run the app
cd ui
python app.py
```

The app will be available at `http://localhost:7860`

### Development with Mock Model

During development, the app uses a mock model by default. This allows you to:
- Test the UI without waiting for model training
- Develop features in parallel with the ML team
- Verify the interface works correctly

To use the mock model:
```python
# In ui/app.py
demo = create_interface(use_mock=True)  # Default
```

### Switching to Real Model

Once your team has trained a model:

1. **Save your model:**
```python
# In your training script
torch.save(model.state_dict(), 'best_model.pth')
```

2. **Copy to models directory:**
```bash
cp path/to/best_model.pth models/
```

3. **Update model_loader.py with your architecture:**
```python
# Replace MockPlantDiseaseModel with your actual model
from your_training_code import YourCNNModel

def _load_real_model(self, model_name, model_path=None):
    model = YourCNNModel(num_classes=39)
    # ... rest of the code
```

4. **Change app.py to use real model:**
```python
demo = create_interface(use_mock=False)
```

### Testing

**Test individual components:**
```bash
# Test mock model
python models/mock_model.py

# Test model loader
python ui/model_loader.py

# Test utilities
python ui/utils.py
```

**Test with different images:**

1. Download example images:
```bash
python download_examples.py --num 20
```

2. Run the app and test each feature:
   - Single image prediction
   - Batch processing
   - Model switching
   - Confidence threshold
   - Flagging predictions

## For End Users

### Single Image Classification

1. Open the app in your browser
2. Go to the **"Single Image"** tab
3. Upload an image:
   - Click the image upload area
   - Select a plant leaf photo from your computer
   - Supported formats: JPG, PNG
4. (Optional) Select a different model from the dropdown
5. (Optional) Adjust the confidence threshold
6. Click **"Predict Disease"**
7. View the results:
   - Top predictions shown as a chart
   - Detailed information about the top prediction
   - Raw JSON data available in the accordion

### Using Example Images

1. Go to the **"Example Images"** tab
2. Click on any example image
3. The image will be loaded into the predictor
4. Go back to the "Single Image" tab
5. Click "Predict Disease"

### Batch Processing

To classify multiple images at once:

1. Go to the **"Batch Processing"** tab
2. Click "Upload Multiple Images"
3. Select multiple image files (use Ctrl/Cmd + Click)
4. Click "Predict All"
5. View results for all images

### Flagging Incorrect Predictions

If you notice a wrong prediction:

1. After getting a prediction, expand **"Flag Incorrect Prediction"**
2. Enter feedback (e.g., "This is actually Apple Scab, not Black Rot")
3. Click **"Submit Flag"**
4. Your feedback is recorded for the developers

### Adjusting Confidence Threshold

The confidence threshold filters out low-confidence predictions:

1. Use the slider at the top: **"Confidence Threshold (%)"**
2. Move it right to see only high-confidence predictions
3. Move it left to see more predictions (including uncertain ones)

**Example:**
- Set to 50%: Only shows predictions the model is at least 50% confident about
- Set to 1%: Shows almost all predictions

### Understanding Results

**Prediction Display:**
```
Tomato - Late blight: 85.2%
Tomato - Early blight: 8.3%
Tomato - Leaf Mold: 3.1%
...
```

**Detailed Info:**
- **Top Prediction:** The most likely disease
- **Confidence:** How certain the model is (0-100%)
- **Plant:** The type of plant detected
- **Status:** Whether the plant is healthy or diseased

### Tips for Best Results

1. **Image Quality:**
   - Use clear, well-lit photos
   - Focus on the leaf
   - Avoid blurry images

2. **Image Content:**
   - Show the diseased area clearly
   - Include the whole leaf if possible
   - One leaf per image works best

3. **File Size:**
   - The app automatically resizes images
   - But uploading smaller images (<5MB) is faster

4. **Interpreting Confidence:**
   - >80%: High confidence - likely correct
   - 50-80%: Moderate confidence - possible
   - <50%: Low confidence - uncertain

## Advanced Features

### Switching Between Models

If your team trained multiple models:

1. Use the **"Select Model"** dropdown at the top
2. Options might include:
   - CNN from Scratch
   - Transfer Learning (ResNet18)
3. Each model may perform differently
4. Try both and compare results

### Viewing Raw Predictions

For technical analysis:

1. After prediction, expand **"Advanced: View Raw Predictions"**
2. See the raw probability scores in JSON format
3. Useful for debugging or detailed analysis

### Batch Results Analysis

When processing multiple images:

1. Results show the top prediction for each image
2. Format: `Image 1: Disease Name (confidence%)`
3. Scroll through all results
4. Use this for analyzing a collection of plants
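Assembling that summary is straightforward if each image's prediction is a name-to-probability dict, as returned by `postprocess_predictions` (a sketch; `batch_predictions` here is a stand-in for the app's per-image results):

```python
batch_predictions = [
    {"Tomato - Late blight": 0.85, "Tomato - Early blight": 0.08},  # image 1 (example)
    {"Apple - healthy": 0.97, "Apple - Apple scab": 0.02},          # image 2 (example)
]

lines = []
for i, preds in enumerate(batch_predictions, start=1):
    name, conf = max(preds.items(), key=lambda kv: kv[1])  # top prediction per image
    lines.append(f"Image {i}: {name} ({conf * 100:.1f}%)")

print("\n".join(lines))
```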

## Integration with Training Pipeline

### For ML Team Members

**Updating the Model:**

After training a new model:

```python
# Option 1: Upload to ClearML (recommended)
from clearml import Task
task = Task.current_task()
# Model is automatically uploaded

# Then in UI:
loader.load_from_clearml(task_id="your_task_id")
```

```python
# Option 2: Save locally
torch.save(model.state_dict(), 'models/best_model.pth')

# Then in UI:
loader.load_model(model_path='models/best_model.pth')
```

```python
# Option 3: Upload to HuggingFace Hub
from huggingface_hub import HfApi
api = HfApi()
api.upload_file(
    path_or_fileobj="best_model.pth",
    path_in_repo="model.pth",
    repo_id="username/model-name"
)

# Then in UI:
loader.load_from_huggingface("username/model-name")
```

### Experiment Tracking

The UI can load any model from your ClearML experiments:

```python
# Get task ID from ClearML dashboard
# Then update model_loader.py or pass as parameter
```
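As a sketch, fetching a specific experiment's weights could look like this (the task ID is a placeholder, and `models["output"]` assumes the training task registered its checkpoint as an output model):

```python
from clearml import Task

# Fetch the training task by the ID shown in the ClearML dashboard
task = Task.get_task(task_id="YOUR_TASK_ID")

# Download the last model the task registered as output
model_path = task.models["output"][-1].get_local_copy()
```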

## Troubleshooting

### Common Issues

**"Please upload an image"**
- Solution: Make sure you've selected an image before clicking Predict

**"No predictions above confidence threshold"**
- Solution: Lower the confidence threshold slider
- Or the image might not be a plant leaf

**"Error during prediction"**
- Check the error message in the output
- Verify the image is valid (not corrupted)
- Try a different image

**Slow predictions**
- First prediction may be slow (model loading)
- Subsequent predictions should be faster
- Batch processing might take longer for many images

**Example gallery is empty**
- Run `python download_examples.py` to download examples
- Or manually add images to `ui/examples/`

### Getting Help

1. Check the error message displayed in the UI
2. Look at the terminal/console for detailed errors
3. Refer to README.md for setup issues
4. Check docs/deployment_guide.md for deployment issues
5. Contact your team members or course TAs

## Recording a Demo Video

For your project submission:

### What to Include

1. **Introduction** (10 sec)
   - "This is our Plant Disease Detection system..."

2. **Single Image Demo** (30-60 sec)
   - Upload an image
   - Show prediction results
   - Explain the output

3. **Advanced Features** (30-60 sec)
   - Show model selection
   - Demonstrate batch processing
   - Show flagging feature

4. **Example Gallery** (15-30 sec)
   - Browse example images
   - Select and predict

5. **Conclusion** (10-15 sec)
   - Summarize capabilities
   - Mention accuracy/performance

### Recording Tips

- Use screen recording software (QuickTime, OBS, etc.)
- Enable audio narration
- Show your face (optional, but it adds a personal touch)
- Keep it concise (2-3 minutes for basic, 5-6 for feature-rich)
- Test audio quality before final recording
- Practice once before recording

### Video Quality

- Resolution: At least 1080p
- Format: MP4 (most compatible)
- Audio: Clear voice, no background noise
- Editing: Simple cuts are fine, no need for fancy effects

## API Documentation

For programmatic use (advanced):

```python
from model_loader import get_model
from utils import preprocess_image, postprocess_predictions

# Load model
model, loader = get_model(use_mock=False)

# Prepare image
from PIL import Image
image = Image.open("path/to/leaf.jpg")
tensor = preprocess_image(image)

# Predict
import torch
with torch.no_grad():
    logits = model(tensor.to(loader.device))

# Get results
top_preds, all_preds = postprocess_predictions(logits)
print(top_preds)
```

## FAQ

**Q: Can I use this with my own plant images?**
A: Yes! Upload any plant leaf image. It works best with the plants/diseases in the training set.

**Q: How accurate is the model?**
A: Check the README or About tab for test accuracy. Typically 85-95% on the validation set.

**Q: Can I add more disease categories?**
A: You'd need to retrain the model with additional data for new categories.

**Q: Is my data saved?**
A: Images uploaded during use are not saved unless you flag a prediction. Flagged data stays in memory only.

**Q: Can I run this offline?**
A: Yes, once installed, the app runs locally and doesn't need internet (except for downloading the model from HF/ClearML).

**Q: How do I cite this in a report?**
A: Reference your team's GitHub repo and the deployed app URL.

## Next Steps

- **Test thoroughly:** Try various images, edge cases
- **Integrate real model:** Replace mock model with trained model
- **Deploy:** Follow deployment guide to put on HF Spaces
- **Record demo:** Create your submission video
- **Write report:** Document the UI features in your report

---

**Happy classifying! 🌱**
## docs/deployment_guide.md (ADDED, +404 -0)

# Deployment Guide - Hugging Face Spaces

This guide walks you through deploying the Plant Disease Detection app to Hugging Face Spaces.

## Prerequisites

- Hugging Face account (free): https://huggingface.co/join
- Trained model weights
- Git installed locally

## Deployment Options

You have three options for deploying your model:

### Option 1: Upload Model Weights to Space (Recommended)
**Pros:** Simple, no external dependencies
**Cons:** Weights are part of the repo

### Option 2: Load from Hugging Face Model Hub
**Pros:** Separate model versioning, smaller Space repo
**Cons:** Need to upload model separately

### Option 3: Fetch from ClearML
**Pros:** Direct integration with training pipeline
**Cons:** Requires ClearML credentials in Space secrets

## Step-by-Step: Option 1 (Upload Weights)

### 1. Create a Hugging Face Space

1. Go to https://huggingface.co/spaces
2. Click "Create new Space"
3. Fill in details:
   - **Name:** `plant-disease-detection`
   - **License:** Apache 2.0
   - **Space SDK:** Gradio
   - **Visibility:** Public
4. Click "Create Space"

### 2. Prepare Files for Deployment

Create a new directory for your Space:

```bash
mkdir hf-space-deployment
cd hf-space-deployment
```

Copy and flatten the UI structure:

```bash
# Copy main app file
cp ../ui/app.py ./

# Copy supporting files
cp ../ui/config.py ./
cp ../ui/model_loader.py ./
cp ../ui/utils.py ./
cp ../models/mock_model.py ./

# Copy requirements
cp ../requirements.txt ./

# Copy your trained model weights
cp ../path/to/best_model.pth ./
```

### 3. Modify app.py for Deployment

Edit `app.py` to work with the flat structure:

```python
# Change imports at the top of app.py
import config
from model_loader import ModelLoader
from utils import preprocess_image, postprocess_predictions, ...
from mock_model import create_mock_predictions

# At the bottom, change:
demo = create_interface(use_mock=False)  # Use real model
demo.launch()  # Remove server_name and server_port
```

### 4. Modify model_loader.py

Update the model loading to use your actual architecture:

```python
# In _load_real_model method, replace mock model with your actual model:

def _load_real_model(self, model_name, model_path=None):
    model_config = config.MODEL_CONFIGS[model_name]  # look up the selected model's settings (assumed; see config.py)
    if model_config["model_type"] == "cnn":
        # Import your actual model class
        from your_model_module import YourCNNModel
        model = YourCNNModel(num_classes=len(config.CLASS_NAMES))

        # Load weights
        if model_path:
            model.load_state_dict(torch.load(model_path, map_location=self.device))
        else:
            # Load default model
            model.load_state_dict(torch.load("best_model.pth", map_location=self.device))

    return model
```

### 5. Create README for Space

Create `README.md` in the deployment directory:

```markdown
---
title: Plant Disease Detection
emoji: 🌱
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 4.0.0
app_file: app.py
pinned: false
license: apache-2.0
---

# 🌱 Plant Disease Detection

AI-powered plant disease detection from leaf images.

## About

This application uses a Convolutional Neural Network (CNN) trained on the PlantVillage
dataset to identify plant diseases from leaf images.

**Developed by:** [Your Team Name]
**Course:** 5CCSAGAP - AI Group Project, King's College London
**Academic Year:** 2024-2025

## Features

- Upload plant leaf images for disease detection
- Support for 39 different plant disease categories
- Multiple model options (CNN, Transfer Learning)
- Batch processing capability
- Confidence threshold adjustment

## How to Use

1. Go to the "Single Image" tab
2. Upload a photo of a plant leaf
3. Click "Predict Disease"
4. View the top predictions with confidence scores

## Dataset

Trained on the [PlantVillage dataset](https://huggingface.co/datasets/EdBianchi/plant-village)
containing 55,400 images across 39 disease categories.

## Model Performance

- **Test Accuracy:** [Add your test accuracy]
- **Architecture:** Custom CNN / ResNet18 Transfer Learning
- **Training Framework:** PyTorch
- **Experiment Tracking:** ClearML

## Links

- **GitHub:** [Add your repo link]
- **Team Project:** [Add project documentation link]

---

**Disclaimer:** This is an educational project. Predictions should be verified by agricultural experts.
```

### 6. Upload to Hugging Face Space

Initialize Git and push:

```bash
# Initialize git
git init

# Add Hugging Face Space as remote
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/plant-disease-detection

# Create .gitignore
cat > .gitignore << EOF
__pycache__/
*.pyc
.DS_Store
EOF

# Add files
git add .

# Commit
git commit -m "Initial deployment of Plant Disease Detection app"

# Push to Hugging Face
git push origin main
```

You'll be prompted for credentials:
- **Username:** Your HF username
- **Password:** Your HF access token (create at https://huggingface.co/settings/tokens)

### 7. Verify Deployment

1. Go to your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/plant-disease-detection`
2. Wait for the build to complete (check "Building" status)
3. Test the app with sample images

## Step-by-Step: Option 2 (HF Model Hub)

### 1. Upload Model to HF Model Hub

````python
from huggingface_hub import HfApi, create_repo

# Create repository for model
repo_id = "YOUR_USERNAME/plant-disease-cnn"
create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)

# Upload model
api = HfApi()
api.upload_file(
    path_or_fileobj="path/to/best_model.pth",
    path_in_repo="pytorch_model.pth",
    repo_id=repo_id,
    repo_type="model"
)

# Create model card
model_card = """
---
license: apache-2.0
tags:
- image-classification
- plant-disease
- pytorch
---

# Plant Disease CNN

CNN model for plant disease detection, trained on PlantVillage dataset.

## Model Details

- **Architecture:** Custom CNN
- **Framework:** PyTorch
- **Classes:** 39 plant disease categories
- **Input Size:** 256x256 RGB images
- **Test Accuracy:** [Add your accuracy]

## Usage

```python
import torch
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="YOUR_USERNAME/plant-disease-cnn",
    filename="pytorch_model.pth"
)
model.load_state_dict(torch.load(model_path))
```
"""

api.upload_file(
    path_or_fileobj=model_card.encode(),
    path_in_repo="README.md",
    repo_id=repo_id,
    repo_type="model"
)
````

### 2. Modify model_loader.py

```python
# In model_loader.py, set default to load from HF:

def _load_real_model(self, model_name, model_path=None):
    if model_path is None:
        # Download from HF Hub
        from huggingface_hub import hf_hub_download
        model_path = hf_hub_download(
            repo_id="YOUR_USERNAME/plant-disease-cnn",
            filename="pytorch_model.pth"
        )

    # Load model architecture
    model = YourCNNModel(num_classes=39)
    model.load_state_dict(torch.load(model_path, map_location=self.device))

    return model
```

### 3. Deploy to Space

Follow steps 1-7 from Option 1, but you don't need to include the .pth file.

## Step-by-Step: Option 3 (ClearML)

### 1. Get ClearML Credentials

1. Log in to https://5ccsagap.er.kcl.ac.uk/
2. Go to Settings → Workspace → Create new credentials
3. Copy the credentials

### 2. Add Secrets to Space

1. Go to your Space settings
2. Click "Repository secrets"
3. Add three secrets:
   - `CLEARML_API_HOST`: `https://5ccsagap.er.kcl.ac.uk`
   - `CLEARML_API_ACCESS_KEY`: Your access key
   - `CLEARML_API_SECRET_KEY`: Your secret key
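Spaces expose repository secrets to the running app as environment variables, which is how the ClearML SDK (and the snippet in the next step) picks them up. A quick startup check makes misconfiguration obvious; a sketch:

```python
import os

# Fail early with a clear message if a ClearML secret was not configured
for var in ("CLEARML_API_HOST", "CLEARML_API_ACCESS_KEY", "CLEARML_API_SECRET_KEY"):
    if not os.environ.get(var):
        raise RuntimeError(f"Missing Space secret: {var}")
```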

### 3. Modify model_loader.py

```python
def _load_real_model(self, model_name, model_path=None):
    if model_path is None:
        # Load from ClearML
        from clearml import Task, Model

        # The ClearML SDK reads CLEARML_API_HOST, CLEARML_API_ACCESS_KEY and
        # CLEARML_API_SECRET_KEY from the environment (the Space secrets above),
        # so no credentials need to appear in code.
        Task.init(
            project_name="Plant Disease Detection",
            task_name="UI Model Loading"
        )

        # Get the model
        model_id = "YOUR_MODEL_ID"  # Get this from ClearML
        model_obj = Model(model_id)
        model_path = model_obj.get_local_copy()

    # Load model
    model = YourCNNModel(num_classes=39)
    model.load_state_dict(torch.load(model_path, map_location=self.device))

    return model
```

## Troubleshooting

### Build Failures

**Error: "Out of Memory"**
- Your model might be too large
- Try using a smaller model or quantization

**Error: "Module not found"**
- Check all dependencies are in requirements.txt
- Verify imports work with flat file structure

### Runtime Errors

**Error: "Model file not found"**
- Verify the .pth file is uploaded
- Check file path in model_loader.py

**Error: "Incompatible architecture"**
- Ensure your model class definition matches the saved weights
- Check num_classes=39

## Updating the Deployment

To update your deployed app:

```bash
# Make changes to files
# Commit and push
git add .
git commit -m "Update: description of changes"
git push origin main
```

The Space will automatically rebuild.

## Best Practices

1. **Test locally first:** Always test the app locally before deploying
2. **Use small example images:** Don't upload large images to the repo
3. **Version your models:** Tag model versions in HF Hub or ClearML
4. **Monitor usage:** Check Space analytics to see usage patterns
5. **Update README:** Keep the model card and Space README up to date

## Resources

- [HF Spaces Documentation](https://huggingface.co/docs/hub/spaces)
- [Gradio Documentation](https://gradio.app/docs/)
- [HF Hub Python API](https://huggingface.co/docs/huggingface_hub/)

## Support

If you encounter issues:
1. Check the Space build logs
2. Test locally with the exact same file structure
3. Consult the course TAs
4. Check HF Community forums
## quickstart.sh (ADDED, +66 -0)

```bash
#!/bin/bash

# Quickstart script for Plant Disease Detection UI
# This script sets up the environment and runs the app

echo "🌱 Plant Disease Detection - Quick Start"
echo "========================================"
echo ""

# Check if Python is installed
if ! command -v python3 &> /dev/null; then
    echo "❌ Error: Python 3 is not installed"
    echo "Please install Python 3.8 or higher"
    exit 1
fi

echo "✓ Python found: $(python3 --version)"
echo ""

# Check if virtual environment exists
if [ ! -d "venv" ]; then
    echo "📦 Creating virtual environment..."
    python3 -m venv venv
    echo "✓ Virtual environment created"
else
    echo "✓ Virtual environment already exists"
fi

echo ""

# Activate virtual environment
echo "🔧 Activating virtual environment..."
source venv/bin/activate

# Install/upgrade dependencies
echo "📥 Installing dependencies..."
pip install --upgrade pip > /dev/null 2>&1
pip install -r requirements.txt

echo "✓ Dependencies installed"
echo ""

# Check if example images exist
if [ ! -d "ui/examples" ] || [ -z "$(ls -A ui/examples 2>/dev/null)" ]; then
    echo "📸 No example images found"
    echo ""
    read -p "Would you like to download example images? (y/n): " -n 1 -r
    echo ""
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        echo "Downloading example images..."
        python3 download_examples.py
        echo ""
    fi
fi

# Run the app
echo "🚀 Starting the application..."
echo ""
echo "The app will be available at: http://localhost:7860"
echo "Press Ctrl+C to stop the server"
echo ""
echo "========================================"
echo ""

cd ui
python3 app.py
```
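Note that a fresh clone may not have the executable bit set on the script; run `chmod +x quickstart.sh` once before `./quickstart.sh`.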
## requirements.txt (CHANGED, +17 -1)

```diff
@@ -1,3 +1,18 @@
+<<<<<<< HEAD
+# Core dependencies
+torch>=2.0.0
+torchvision>=0.15.0
+gradio>=4.0.0
+numpy>=1.24.0
+Pillow>=10.0.0
+
+# For model deployment and tracking
+huggingface-hub>=0.19.0
+clearml>=1.14.0
+
+# Optional: for advanced features
+datasets>=2.14.0  # For loading PlantVillage dataset from HuggingFace
+=======
 # -- Data prep requirements --
 # Data Handling & Analysis
 numpy
@@ -12,4 +27,5 @@ torch
 torchvision
 
 # Experiment Tracking
-clearml
+clearml
+>>>>>>> 04cb88662062ef6b880c627546d067fa0cedfa8b
```
ui/app.py
ADDED
|
@@ -0,0 +1,280 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
"""
Plant Disease Detection Gradio App
Main UI application with advanced features
"""

import gradio as gr
import torch
import sys
import json
from pathlib import Path
from datetime import datetime
from PIL import Image  # added: used by the reconstructed predict_batch below

# Add current directory to path
sys.path.append(str(Path(__file__).parent))
sys.path.append(str(Path(__file__).parent.parent))

import config  # added: referenced throughout but missing from the original imports
from model_loader import ModelLoader
from utils import (  # added: these helpers are called below but were never imported
    preprocess_image,
    postprocess_predictions,
    get_disease_info,
    format_class_name,
)


class PlantDiseaseApp:
    def __init__(self):
        self.model_loader = ModelLoader()
        self.current_modelName = "CNN from Scratch"
        self.model = self.model_loader.loadModel(self.current_modelName)
        self.flagged_predictions = []

    def predict(self, image, modelName, confidence_threshold):
        if image is None:
            return None, "Please upload an image", ""

        try:
            # Swap models only when the dropdown selection changes
            if modelName != self.current_modelName:
                self.model = self.model_loader.loadModel(modelName)
                self.current_modelName = modelName

            # Preprocess image
            tensor = preprocess_image(image)
            tensor = tensor.to(self.model_loader.device)

            # Get prediction
            with torch.no_grad():
                logits = self.model(tensor)

            # Postprocess
            top_predictions, all_predictions = postprocess_predictions(
                logits, config.CLASS_NAMES, config.TOP_K_PREDICTIONS
            )

            # Filter by confidence threshold (the slider reports percent)
            filtered_predictions = {
                k: v for k, v in top_predictions.items() if v >= confidence_threshold / 100
            }

            # Get top prediction info
            if filtered_predictions:
                top_class = max(filtered_predictions.items(), key=lambda x: x[1])[0]
                top_prob = filtered_predictions[top_class]
                disease_info = get_disease_info(top_class)

                result_text = f"""
**Top Prediction:** {disease_info['formatted_name']}
**Confidence:** {top_prob*100:.2f}%
**Plant:** {disease_info['plant']}
**Status:** {'Healthy' if disease_info['is_healthy'] else 'Disease Detected'}
"""
            else:
                result_text = "No predictions above confidence threshold"

            # Format for Gradio Label component
            display_predictions = {
                format_class_name(k): v for k, v in filtered_predictions.items()
            }

            return display_predictions, result_text, json.dumps(filtered_predictions, indent=2)

        except Exception as e:
            return None, f"Error during prediction: {str(e)}", ""

    # The three methods below are wired up in create_interface() but were
    # absent from the captured diff; minimal reconstructions are provided so
    # the interface can actually be built.
    def flag_prediction(self, image, prediction_text, feedback):
        """Store a user-flagged prediction for later review."""
        if image is None:
            return "Run a prediction before flagging."
        self.flagged_predictions.append({
            "timestamp": datetime.now().isoformat(),
            "prediction": prediction_text,
            "feedback": feedback,
        })
        return f"Flag recorded ({len(self.flagged_predictions)} total). Thank you!"

    def get_example_images(self):
        """Return paths of bundled example images from ui/examples/, if any."""
        examples_dir = Path(__file__).parent / "examples"
        if not examples_dir.exists():
            return []
        return [
            str(p) for p in sorted(examples_dir.iterdir())
            if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
        ]

    def predict_batch(self, files, modelName, confidence_threshold):
        """Run predict() over uploaded file paths and format the results as Markdown."""
        if not files:
            return "Please upload at least one image."
        sections = []
        for f in files:
            _, result_text, _ = self.predict(Image.open(f), modelName, confidence_threshold)
            sections.append(f"### {Path(f).name}\n{result_text}")
        return "\n---\n".join(sections)


def create_interface():
    app = PlantDiseaseApp()

    custom_css = """
    .main-header {
        text-align: center;
        background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
        padding: 2rem;
        border-radius: 10px;
        color: white;
        margin-bottom: 2rem;
    }
    .prediction-box {
        border: 2px solid #667eea;
        border-radius: 10px;
        padding: 1rem;
        background: #f8f9fa;
    }
    """

    with gr.Blocks(css=custom_css, title="Plant Disease Detection") as demo:

        # Header
        gr.Markdown(
            """
            <div class="main-header">
                <h1>Plant Disease Detection System</h1>
                <p>Upload a plant leaf image to detect diseases using AI</p>
            </div>
            """
        )

        # Model selection (available to all tabs)
        with gr.Row():
            model_selector = gr.Dropdown(
                choices=list(config.MODEL_CONFIGS.keys()),
                value="CNN from Scratch",
                label="Select Model",
                info="Choose which model to use for predictions"
            )
            confidence_slider = gr.Slider(
                minimum=0,
                maximum=100,
                value=1,
                step=1,
                label="Confidence Threshold (%)",
                info="Only show predictions above this confidence"
            )

        # Tabs for different features
        with gr.Tabs():

            # Tab 1: Single Image Prediction
            with gr.Tab("Single Image"):
                with gr.Row():
                    with gr.Column(scale=1):
                        image_input = gr.Image(
                            label="Upload Plant Leaf Image",
                            type="pil"
                        )

                        predict_btn = gr.Button("Predict Disease", variant="primary", size="lg")

                        with gr.Accordion("Flag Incorrect Prediction", open=False):
                            feedback_text = gr.Textbox(
                                label="Your Feedback",
                                placeholder="What should the correct classification be?",
                                lines=2
                            )
                            flag_btn = gr.Button("Submit Flag")
                            flag_output = gr.Textbox(label="Status", interactive=False)

                    with gr.Column(scale=1):
                        prediction_output = gr.Label(
                            label="Top Predictions",
                            num_top_classes=10
                        )
                        result_info = gr.Markdown(label="Detailed Results")

                        with gr.Accordion("Advanced: View Raw Predictions", open=False):
                            raw_predictions = gr.Textbox(
                                label="Raw JSON Output",
                                lines=10,
                                interactive=False
                            )

                # Connect buttons
                predict_btn.click(
                    fn=app.predict,
                    inputs=[image_input, model_selector, confidence_slider],
                    outputs=[prediction_output, result_info, raw_predictions]
                )

                flag_btn.click(
                    fn=app.flag_prediction,
                    inputs=[image_input, result_info, feedback_text],
                    outputs=flag_output
                )

            # Tab 2: Example Images
            with gr.Tab("Example Images"):
                gr.Markdown("### Try these example plant images")
                gr.Markdown("Click on an example below to load it into the predictor")

                example_images = app.get_example_images()

                if example_images:
                    examples = gr.Examples(
                        examples=example_images,
                        inputs=image_input,
                        label="Example Plant Disease Images"
                    )
                else:
                    gr.Markdown(
                        """
                        **No example images found.**

                        To add example images:
                        1. Create a folder: `ui/examples/`
                        2. Add plant leaf images (.jpg, .png) to this folder
                        3. Restart the app
                        """
                    )

            # Tab 3: Batch Processing
            with gr.Tab("Batch Processing"):
                gr.Markdown("### Upload multiple images for batch processing")

                batch_input = gr.File(
                    label="Upload Multiple Images",
                    file_count="multiple",
                    type="filepath"
                )

                batch_predict_btn = gr.Button("Predict All", variant="primary")

                batch_output = gr.Markdown(label="Batch Results")

                batch_predict_btn.click(
                    fn=app.predict_batch,
                    inputs=[batch_input, model_selector, confidence_slider],
                    outputs=batch_output
                )

            # Tab 4: About
            with gr.Tab("About"):
                gr.Markdown(
                    """
                    ## About This Application

                    This Plant Disease Detection system was developed as part of the
                    5CCSAGAP Artificial Intelligence Group Project at King's College London.

                    ### Features
                    - **Single Image Prediction**: Upload and classify individual plant images
                    - **Multiple Models**: Switch between different trained models
                    - **Batch Processing**: Classify multiple images at once
                    - **Example Gallery**: Try pre-loaded example images
                    - **Flagging System**: Report incorrect predictions to help improve the model
                    - **Confidence Threshold**: Filter predictions by confidence level

                    ### Dataset
                    The model is trained on the PlantVillage dataset, which contains 55,400 images
                    across 39 different plant disease categories.

                    ### Model Architecture
                    - **CNN from Scratch**: Custom convolutional neural network
                    - **Transfer Learning**: Fine-tuned ResNet18 (if available)

                    ### Technology Stack
                    - **PyTorch**: Model training and inference
                    - **Gradio**: User interface
                    - **ClearML**: Experiment tracking
                    - **Hugging Face Spaces**: Deployment platform

                    ### Team
                    [Add your team members' names here]

                    ### Links
                    - [GitHub Repository](https://github.kcl.ac.uk/K23064919/smallGroupProject)
                    - [ClearML Dashboard](https://5ccsagap.er.kcl.ac.uk/)
                    """
                )

        gr.Markdown(
            """
            ---
            **Note:** This is an AI-powered system; predictions should be verified by experts.
            Built with love by KCL AI Students
            """
        )

    return demo


if __name__ == "__main__":
    print("Starting Plant Disease Detection App...")

    demo = create_interface()

    demo.launch(
        share=False,
        server_name="0.0.0.0",
        server_port=7860
    )
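For a quick end-to-end check outside the browser, the app class can be exercised headlessly. A minimal smoke-test sketch, assuming a reachable ClearML server and a local leaf photo; `smoke_test.py` and `test_leaf.jpg` are hypothetical names, not part of this diff:

# smoke_test.py — hypothetical helper, run from inside ui/
from PIL import Image
from app import PlantDiseaseApp

app = PlantDiseaseApp()  # loads the default "CNN from Scratch" model on startup
labels, summary, raw_json = app.predict(
    Image.open("test_leaf.jpg"),  # placeholder image path
    "CNN from Scratch",
    1,                            # confidence threshold in percent, as the UI slider passes it
)
print(summary)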
ui/config.py
ADDED
@@ -0,0 +1,15 @@
CLEARML_PROJECT_NAME = "LGT3 Plant Disease Classifier"
CLEARML_TASK_NAME_DEFAULT = "CNN Training (Latest)"

MODEL_CONFIGS = {
    "CNN from Scratch": {
        "description": "Custom CNN model trained from scratch",
        "model_type": "cnn",
        "clearml_task_id": "SET_ME_TO_YOUR_BEST_CNN_TASK_ID"
    },
    "Transfer Learning (ResNet18)": {
        "description": "Fine-tuned ResNet18 model",
        "model_type": "resnet18",
        "clearml_task_id": "SET_ME_TO_YOUR_RESNET_TASK_ID"
    }
}
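As captured, `ui/config.py` ends here, yet `ui/utils.py` and `ui/app.py` also read `config.IMAGE_SIZE`, `config.NORMALIZE_MEAN`, `config.NORMALIZE_STD`, `config.CLASS_NAMES`, `config.TOP_K_PREDICTIONS`, and `config.CONFIDENCE_THRESHOLD`, so imports would fail at startup. A minimal sketch of the missing block follows; the resolution and normalization statistics are assumptions (standard ImageNet values, not taken from the diff), and `CLASS_NAMES` must be filled with the project's actual 39 PlantVillage class names in training order:

# --- constants referenced elsewhere in ui/ but absent from this diff (sketch) ---
IMAGE_SIZE = (224, 224)                  # assumed input resolution
NORMALIZE_MEAN = [0.485, 0.456, 0.406]   # assumed: standard ImageNet statistics
NORMALIZE_STD = [0.229, 0.224, 0.225]
TOP_K_PREDICTIONS = 10                   # matches gr.Label(num_top_classes=10) in ui/app.py
CONFIDENCE_THRESHOLD = 0.01              # matches the UI slider default of 1%
CLASS_NAMES = [
    "Apple___healthy",
    "Tomato___Late_blight",
    # ... all 39 PlantVillage classes, in the order used at training time
]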
ui/model_loader.py
ADDED
@@ -0,0 +1,47 @@
import sys
from pathlib import Path

import torch
from clearml import Task  # added: needed to fetch trained weights from ClearML

sys.path.append(str(Path(__file__).parent.parent))

import config


class ModelLoader:
    def __init__(self):
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.modelCache = {}

    def loadFromClearml(self, modelName):
        modelConfig = config.MODEL_CONFIGS.get(modelName)

        if not modelConfig or 'clearml_task_id' not in modelConfig:
            raise ValueError(f"ClearML configuration not found for model: {modelName}")

        taskID = modelConfig['clearml_task_id']  # fixed: was misspelled 'clearmml_task_id'
        modelType = modelConfig['model_type']    # fixed: config.py defines 'model_type', not 'modelType'

        try:
            print(f"Attempting to fetch '{modelName}' from ClearML task: {taskID}")

            # Fetch the task's latest output model and download its weights.
            # (The original called Model(taskID=...), which is not a valid
            # clearml constructor; going through Task.get_task is the
            # documented route.)
            task = Task.get_task(task_id=taskID)
            modelObject = task.models['output'][-1]
            modelPath = modelObject.get_local_copy()

            model = self.loadRealModel(modelName, modelPath, modelType)

            return model

        except Exception as e:
            print(f"Error loading from ClearML for {modelName}: {e}")
            raise RuntimeError(f"Failed to load model from ClearML: {e}")

    def loadRealModel(self, modelName, modelPath, modelType):
        # Reconstructed: this method is called above but was missing from the
        # captured diff. The sketch assumes the checkpoint was saved with
        # torch.save(model, ...); if only a state_dict was saved, the matching
        # architecture for `modelType` must be instantiated here first.
        model = torch.load(modelPath, map_location=self.device)
        model.to(self.device)
        model.eval()
        return model

    def loadModel(self, modelName):
        if modelName in self.modelCache:
            return self.modelCache[modelName]

        try:
            model = self.loadFromClearml(modelName)
            self.modelCache[modelName] = model
            return model

        except Exception as e:
            raise RuntimeError(f"Could not load model {modelName}. Check ClearML connection.") from e
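Before pasting a task ID into `MODEL_CONFIGS`, it is worth confirming that the task actually registered an output model; otherwise `loadFromClearml` fails with an opaque index error. A small check, assuming valid ClearML credentials (via `clearml.conf` or the CLEARML_* environment variables); `check_task.py` is a hypothetical helper name:

# check_task.py — hypothetical helper, not part of this diff
from clearml import Task

task = Task.get_task(task_id="SET_ME_TO_YOUR_BEST_CNN_TASK_ID")  # placeholder ID from config.py
print(task.name, task.get_status())
for m in task.models['output']:  # each entry is a clearml Model with downloadable weights
    print("output model:", m.name, m.id)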
ui/utils.py
ADDED
@@ -0,0 +1,210 @@
"""
Utility functions for the Plant Disease Detection UI
"""

import torch
import numpy as np
from PIL import Image
import torchvision.transforms as transforms
import config


def preprocess_image(image, image_size=config.IMAGE_SIZE):
    """
    Preprocess image for model input

    Args:
        image: PIL Image or numpy array
        image_size: Target size (height, width)

    Returns:
        Preprocessed tensor ready for model
    """
    # Convert to PIL Image if numpy array
    if isinstance(image, np.ndarray):
        image = Image.fromarray(image.astype('uint8'))

    # Convert RGBA to RGB if necessary
    if image.mode == 'RGBA':
        image = image.convert('RGB')

    # Define preprocessing transforms
    transform = transforms.Compose([
        transforms.Resize(image_size),
        transforms.ToTensor(),
        transforms.Normalize(mean=config.NORMALIZE_MEAN, std=config.NORMALIZE_STD)
    ])

    # Apply transforms
    tensor = transform(image)

    # Add batch dimension
    tensor = tensor.unsqueeze(0)

    return tensor


def postprocess_predictions(logits, class_names=config.CLASS_NAMES, top_k=config.TOP_K_PREDICTIONS):
    """
    Convert model logits to human-readable predictions

    Args:
        logits: Raw model output
        class_names: List of class names
        top_k: Number of top predictions to return

    Returns:
        Dictionary of predictions with confidences
    """
    # Convert logits to probabilities using softmax
    probs = torch.nn.functional.softmax(logits, dim=1)

    # Convert to numpy
    probs = probs.cpu().detach().numpy()[0]

    # Create predictions dictionary
    predictions = {name: float(prob) for name, prob in zip(class_names, probs)}

    # Get top-k predictions
    top_predictions = sorted(predictions.items(), key=lambda x: x[1], reverse=True)[:top_k]

    return dict(top_predictions), predictions


def format_prediction_for_display(predictions):
    """
    Format predictions for Gradio display

    Args:
        predictions: Dictionary of class names and probabilities

    Returns:
        Dictionary formatted for Gradio Label component
    """
    # Filter out very low confidence predictions
    filtered = {k: v for k, v in predictions.items() if v >= config.CONFIDENCE_THRESHOLD}

    return filtered


def format_class_name(class_name):
    """
    Format class name for better readability

    Args:
        class_name: Original class name (e.g., "Tomato___Late_blight")

    Returns:
        Formatted class name (e.g., "Tomato - Late blight")
    """
    # Replace underscores with spaces and split on ___
    parts = class_name.split("___")

    if len(parts) == 2:
        plant, disease = parts
        plant = plant.replace("_", " ")
        disease = disease.replace("_", " ")
        return f"{plant} - {disease}"
    else:
        return class_name.replace("_", " ")


def get_disease_info(class_name):
    """
    Get information about a disease (for future enhancement)

    Args:
        class_name: Disease class name

    Returns:
        Dictionary with disease information
    """
    # This is a placeholder - you could expand this with actual disease information
    parts = class_name.split("___")

    info = {
        "plant": parts[0].replace("_", " ") if len(parts) > 0 else "Unknown",
        "disease": parts[1].replace("_", " ") if len(parts) > 1 else "Unknown",
        "is_healthy": "healthy" in class_name.lower(),
        "formatted_name": format_class_name(class_name)
    }

    return info


def batch_preprocess_images(images):
    """
    Preprocess multiple images for batch prediction

    Args:
        images: List of PIL Images or numpy arrays

    Returns:
        Batched tensor ready for model
    """
    tensors = [preprocess_image(img) for img in images]
    batch = torch.cat(tensors, dim=0)
    return batch


def create_confidence_label(predictions, top_k=5):
    """
    Create a formatted string showing top predictions

    Args:
        predictions: Dictionary of predictions
        top_k: Number of top predictions to show

    Returns:
        Formatted string
    """
    top_preds = sorted(predictions.items(), key=lambda x: x[1], reverse=True)[:top_k]

    lines = []
    for i, (class_name, prob) in enumerate(top_preds, 1):
        formatted_name = format_class_name(class_name)
        lines.append(f"{i}. {formatted_name}: {prob*100:.2f}%")

    return "\n".join(lines)


if __name__ == "__main__":
    # Test utilities
    print("Testing utility functions...")

    # Test class name formatting
    test_names = [
        "Tomato___Late_blight",
        "Apple___healthy",
        "Corn_(maize)___Common_rust_"
    ]

    print("\nClass name formatting:")
    for name in test_names:
        print(f"  {name} -> {format_class_name(name)}")

    # Test disease info
    print("\nDisease info:")
    for name in test_names:
        info = get_disease_info(name)
        print(f"  {name}:")
        print(f"    Plant: {info['plant']}")
        print(f"    Disease: {info['disease']}")
        print(f"    Healthy: {info['is_healthy']}")

    # Test image preprocessing
    print("\nImage preprocessing:")
    dummy_image = Image.new('RGB', (512, 512), color='red')
    tensor = preprocess_image(dummy_image)
    print(f"  Input size: {dummy_image.size}")
    print(f"  Output tensor shape: {tensor.shape}")

    # Test mock predictions
    print("\nMock predictions:")
    from models.mock_model import create_mock_predictions
    preds = create_mock_predictions(config.CLASS_NAMES)
    top_preds, all_preds = postprocess_predictions(
        torch.tensor([list(preds.values())]),
        config.CLASS_NAMES
    )
    print(create_confidence_label(top_preds))
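`batch_preprocess_images` is defined above but never exercised by the self-test. A short sketch of how it pairs with `postprocess_predictions` for batched inference; the blank images and the untrained linear model are stand-ins (in practice the model would come from `ModelLoader.loadModel`):

import torch
from PIL import Image
import config
from utils import batch_preprocess_images, postprocess_predictions

images = [Image.new('RGB', (256, 256), color='green') for _ in range(4)]  # placeholder inputs
batch = batch_preprocess_images(images)  # shape: (4, 3, H, W) after resize/normalize

h, w = config.IMAGE_SIZE
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * h * w, len(config.CLASS_NAMES)),  # untrained stand-in classifier
)

with torch.no_grad():
    logits = model(batch)

# postprocess_predictions expects a single logit row, so slice per image
for i in range(logits.size(0)):
    top, _ = postprocess_predictions(logits[i:i + 1], config.CLASS_NAMES)
    best = max(top, key=top.get)
    print(f"image {i}: {best} ({top[best]*100:.1f}%)")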