Mert Yerlikaya committed on
Commit
505fc99
·
1 Parent(s): a400927

Add feature-rich Gradio UI with mock model


- Implemented all core UI requirements
- Added 7 advanced features for extra marks
- Created comprehensive documentation
- Includes mock model for parallel development

README.md CHANGED
@@ -1 +1,372 @@
1
- Small Group Project
1
+ # Plant Disease Detection - UI and Deployment
2
+
3
+ This directory contains the Gradio-based user interface and deployment code for the Plant Disease Detection project.
4
+
5
+ ## Team Information
6
+
7
+ **Team Number:** [Add your team number]
8
+
9
+ **Team Members:**
10
+ - [Add team member names here]
11
+
12
+ ## Links
13
+
14
+ - **GitHub Repository:** https://github.kcl.ac.uk/K23064919/smallGroupProject
15
+ - **Deployed App:** [Add Hugging Face Spaces URL here]
16
+ - **Trained Model:** [Add model download link or ClearML model ID here]
17
+
18
+ ## Project Structure
19
+
20
+ ```
21
+ plant-disease-ui/
+ ├── ui/
+ │   ├── app.py              # Main Gradio application
+ │   ├── config.py           # Configuration (class names, paths, etc.)
+ │   ├── model_loader.py     # Model loading utilities
+ │   ├── utils.py            # Utility functions (preprocessing, etc.)
+ │   └── examples/           # Example images for gallery
+ ├── models/
+ │   ├── mock_model.py       # Mock model for development
+ │   └── best_model.pth      # (To be added) Trained model weights
+ ├── docs/
+ │   └── deployment_guide.md # Deployment instructions
+ ├── requirements.txt        # Python dependencies
+ └── README.md               # This file
35
+ ```
36
+
37
+ ## Features
38
+
39
+ ### Core Features
40
+ - ✅ **Image Upload:** Upload plant leaf images for disease detection
41
+ - ✅ **Top-K Predictions:** Display top 10 predictions with confidence scores
42
+ - ✅ **Formatted Output:** Clean, readable prediction results
43
+
44
+ ### Advanced Features
45
+ - ✅ **Multiple Models:** Switch between different trained models (CNN, Transfer Learning)
46
+ - ✅ **Example Gallery:** Pre-loaded example images for quick testing
47
+ - ✅ **Batch Processing:** Upload and classify multiple images at once
48
+ - ✅ **Flag Predictions:** Report incorrect predictions
49
+ - ✅ **Confidence Threshold:** Filter predictions by minimum confidence level
50
+ - ✅ **Detailed Information:** View plant type, disease name, and health status
51
+
52
+ ## Setup Instructions
53
+
54
+ ### 1. Install Dependencies
55
+
56
+ ```bash
57
+ # Create a virtual environment (recommended)
58
+ python -m venv venv
59
+ source venv/bin/activate # On Windows: venv\Scripts\activate
60
+
61
+ # Install required packages
62
+ pip install -r requirements.txt
63
+ ```
64
+
65
+ ### 2. Add Example Images (Optional)
66
+
67
+ To enable the example gallery feature:
68
+
69
+ ```bash
70
+ # Create examples directory
71
+ mkdir -p ui/examples
72
+
73
+ # Add plant disease images to ui/examples/
74
+ # You can download sample images from the PlantVillage dataset
75
+ ```
76
+
77
+ To download example images programmatically:
78
+
79
+ ```python
80
+ from datasets import load_dataset
81
+
82
+ # Load PlantVillage dataset
83
+ dataset = load_dataset("EdBianchi/plant-village")
84
+
85
+ # Save some example images
86
+ import os
87
+ os.makedirs("ui/examples", exist_ok=True)
88
+
89
+ for i in range(10):  # Save 10 examples
+     img = dataset['train'][i * 1000]['image']  # Sample every 1000th image
+     img.save(f"ui/examples/example_{i}.jpg")
92
+ ```
93
+
94
+ ### 3. Run the App Locally
95
+
96
+ **Option A: Using Mock Model (for development)**
97
+
98
+ ```bash
99
+ cd ui
100
+ python app.py
101
+ ```
102
+
103
+ The app will start at `http://localhost:7860`
104
+
105
+ **Option B: Using Your Trained Model**
106
+
107
+ First, modify `app.py` to load your real model:
108
+
109
+ ```python
110
+ # In app.py, change the last line:
111
+ demo = create_interface(use_mock=False) # Change to False
112
+ ```
113
+
114
+ Then run:
115
+
116
+ ```bash
117
+ cd ui
118
+ python app.py
119
+ ```
120
+
121
+ ### 4. Configure for Real Model
122
+
123
+ When your team's model is ready, you have several options:
124
+
125
+ #### Option 1: Load from Local File
126
+
127
+ ```python
128
+ # In model_loader.py, update the model path
129
+ MODEL_PATH = "models/best_model.pth"
130
+
131
+ # Then in app.py:
132
+ app = PlantDiseaseApp(use_mock=False)
133
+ ```
134
+
135
+ #### Option 2: Load from ClearML
136
+
137
+ ```python
138
+ # In app.py or model_loader.py:
139
+ loader = ModelLoader(use_mock=False)
140
+ model = loader.load_from_clearml(
+     project_name="Plant Disease Detection",
+     task_name="CNN Training"
+ )
144
+ ```
145
+
146
+ #### Option 3: Load from Hugging Face Hub
147
+
148
+ ```python
149
+ # First, upload your model to HF Hub
150
+ # Then in model_loader.py:
151
+ loader = ModelLoader(use_mock=False)
152
+ model = loader.load_from_huggingface("your-username/plant-disease-model")
153
+ ```
154
+
155
+ ## Deployment to Hugging Face Spaces
156
+
157
+ ### Step 1: Create a Hugging Face Account
158
+
159
+ 1. Go to https://huggingface.co/ and create an account
160
+ 2. Verify your email address
161
+
162
+ ### Step 2: Create a New Space
163
+
164
+ 1. Click on your profile → "New Space"
165
+ 2. Space name: `plant-disease-detection`
166
+ 3. License: Apache 2.0
167
+ 4. Select SDK: **Gradio**
168
+ 5. Make it **Public**
169
+ 6. Click "Create Space"
170
+
171
+ ### Step 3: Prepare Files for Deployment
172
+
173
+ Create these files in the root of your Space:
174
+
175
+ **app.py** (Simplified version for HF Spaces)
176
+ ```python
177
+ # Copy ui/app.py and modify the imports to work in the flat structure
178
+ ```
179
+
180
+ **requirements.txt**
181
+ ```
182
+ torch
183
+ torchvision
184
+ gradio
185
+ Pillow
186
+ numpy
187
+ huggingface-hub
188
+ ```
189
+
190
+ **README.md** (for the Space)
191
+ ```markdown
192
+ ---
193
+ title: Plant Disease Detection
194
+ emoji: 🌱
195
+ colorFrom: green
196
+ colorTo: blue
197
+ sdk: gradio
198
+ sdk_version: 4.0.0
199
+ app_file: app.py
200
+ pinned: false
201
+ ---
202
+
203
+ # Plant Disease Detection
204
+
205
+ AI-powered plant disease detection from leaf images.
206
+ Developed by [Your Team Name] for King's College London.
207
+ ```
208
+
209
+ ### Step 4: Upload Your Model
210
+
211
+ **Option A: Upload weights to the Space**
212
+
213
+ 1. Upload your `best_model.pth` to the Space
214
+ 2. Modify `app.py` to load from this file
215
+
216
+ **Option B: Use Hugging Face Hub**
217
+
218
+ 1. Upload model to HF Model Hub:
219
+ ```python
220
+ from huggingface_hub import HfApi
221
+
222
+ api = HfApi()
223
+ api.upload_file(
+     path_or_fileobj="models/best_model.pth",
+     path_in_repo="model.pth",
+     repo_id="your-username/plant-disease-model",
+     repo_type="model"
+ )
229
+ ```
230
+
231
+ 2. Load in app:
232
+ ```python
233
+ from huggingface_hub import hf_hub_download
234
+ model_path = hf_hub_download(
+     repo_id="your-username/plant-disease-model",
+     filename="model.pth"
+ )
238
+ ```
239
+
240
+ **Option C: Fetch from ClearML**
241
+
242
+ 1. Add ClearML credentials to Space Secrets
243
+ 2. Use the `load_from_clearml()` function
244
+
245
+ ### Step 5: Deploy
246
+
247
+ 1. Upload all files to your HF Space repository
248
+ 2. The app will automatically build and deploy
249
+ 3. Test at: `https://huggingface.co/spaces/your-username/plant-disease-detection`
250
+
251
+ ## Model Integration Guide
252
+
253
+ ### Your CNN Model Structure
254
+
255
+ When integrating your actual trained model, make sure to update `model_loader.py` with your actual CNN architecture:
256
+
257
+ ```python
258
+ class YourCNNModel(nn.Module):
+     def __init__(self, num_classes=39):
+         super(YourCNNModel, self).__init__()
+
+         # Add your actual CNN architecture here
+         # This should match what you used for training
+
+     def forward(self, x):
+         # Your forward pass
+         return x
268
+ ```
269
+
270
+ ### Loading Trained Weights
271
+
272
+ ```python
273
+ # Select a device and instantiate the model
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ model = YourCNNModel(num_classes=39)
275
+
276
+ # Load trained weights
277
+ checkpoint = torch.load('path/to/best_model.pth', map_location=device)
278
+
279
+ # If you saved the entire model:
280
+ model = checkpoint
281
+
282
+ # If you saved just state_dict:
283
+ model.load_state_dict(checkpoint)
284
+
285
+ # Or if you saved optimizer and other info:
286
+ model.load_state_dict(checkpoint['model_state_dict'])
287
+ ```
288
+
289
+ ## Testing the UI
290
+
291
+ ### Manual Testing Checklist
292
+
293
+ - [ ] Upload a single image and get predictions
294
+ - [ ] Try different models from the dropdown
295
+ - [ ] Adjust confidence threshold slider
296
+ - [ ] Test example gallery (if images added)
297
+ - [ ] Upload multiple images for batch processing
298
+ - [ ] Flag a prediction
299
+ - [ ] Check all tabs load correctly
300
+ - [ ] Verify predictions match expected classes
301
+
302
+ ### Automated Testing
303
+
304
+ ```bash
305
+ # Run tests
306
+ cd ui
307
+ python -m pytest test_app.py # (Create tests if needed)
308
+ ```
309
+
310
+ ## Troubleshooting
311
+
312
+ ### Common Issues
313
+
314
+ **1. ModuleNotFoundError**
315
+ ```bash
316
+ # Make sure all dependencies are installed
317
+ pip install -r requirements.txt
318
+ ```
319
+
320
+ **2. Model Loading Error**
321
+ ```python
322
+ # Check that the model architecture matches the saved weights
323
+ # Make sure you're using the same num_classes (39)
324
+ ```
325
+
326
+ **3. Image Size Issues**
327
+ ```python
328
+ # Ensure images are being resized to (256, 256)
329
+ # Check config.py IMAGE_SIZE setting
330
+ ```
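A quick way to check the resize step in isolation (a sketch, assuming the `(256, 256)` `IMAGE_SIZE` from `config.py`; the real `utils.py` preprocessing also normalizes and converts to a tensor):

```python
from PIL import Image

IMAGE_SIZE = (256, 256)  # should match IMAGE_SIZE in config.py

def prepare_image(img):
    """Resize and force RGB so every upload matches the model's expected input shape."""
    return img.convert("RGB").resize(IMAGE_SIZE)

# A grayscale image of the wrong size should come out as 256x256 RGB
out = prepare_image(Image.new("L", (640, 480)))
print(out.size, out.mode)
```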
331
+
332
+ **4. CUDA/GPU Errors**
333
+ ```python
334
+ # The app automatically falls back to CPU
335
+ # Check: torch.cuda.is_available()
336
+ ```
337
+
338
+ ## Contributing
339
+
340
+ When contributing to this UI:
341
+
342
+ 1. Create a new branch for your feature
343
+ 2. Test locally with mock model first
344
+ 3. Test with real model before pushing
345
+ 4. Update this README if adding new features
346
+ 5. Ensure code is well-commented
347
+
348
+ ## TODO
349
+
350
+ - [ ] Add more example images to gallery
351
+ - [ ] Integrate with actual trained models
352
+ - [ ] Add disease information/treatment suggestions
353
+ - [ ] Implement persistent flagging system (database)
354
+ - [ ] Add data visualization for batch results
355
+ - [ ] Create comprehensive tests
356
+
357
+ ## Resources
358
+
359
+ - [Gradio Documentation](https://gradio.app/docs/)
360
+ - [HuggingFace Spaces Guide](https://huggingface.co/docs/hub/spaces)
361
+ - [ClearML Python API](https://clear.ml/docs/latest/docs/references/sdk/)
362
+ - [PlantVillage Dataset](https://huggingface.co/datasets/EdBianchi/plant-village)
363
+
364
+ ## License
365
+
366
+ [Specify your license here]
367
+
368
+ ## Acknowledgments
369
+
370
+ - King's College London, 5CCSAGAP Course
371
+ - PlantVillage Dataset creators
372
+ - Course instructors and TAs
docs/USAGE.md ADDED
@@ -0,0 +1,401 @@
1
+ # Usage Guide - Plant Disease Detection UI
2
+
3
+ This guide explains how to use the Plant Disease Detection application.
4
+
5
+ ## For Developers
6
+
7
+ ### Running Locally
8
+
9
+ **Quick Start:**
10
+ ```bash
11
+ ./quickstart.sh
12
+ ```
13
+
14
+ **Manual Start:**
15
+ ```bash
16
+ # Activate virtual environment
17
+ source venv/bin/activate # On Windows: venv\Scripts\activate
18
+
19
+ # Run the app
20
+ cd ui
21
+ python app.py
22
+ ```
23
+
24
+ The app will be available at `http://localhost:7860`
25
+
26
+ ### Development with Mock Model
27
+
28
+ During development, the app uses a mock model by default. This allows you to:
29
+ - Test the UI without waiting for model training
30
+ - Develop features in parallel with the ML team
31
+ - Verify the interface works correctly
32
+
33
+ To use the mock model:
34
+ ```python
35
+ # In ui/app.py
36
+ demo = create_interface(use_mock=True) # Default
37
+ ```
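Conceptually, the mock model only has to return a plausible probability distribution over the 39 classes — a sketch of the idea (function and constant names are illustrative; see `models/mock_model.py` for the real implementation):

```python
import random

NUM_CLASSES = 39  # size of the PlantVillage label set

def mock_predict(image=None, seed=None):
    """Return a fake probability distribution so the UI can be exercised without a trained model."""
    rng = random.Random(seed)
    scores = [rng.random() for _ in range(NUM_CLASSES)]
    total = sum(scores)
    return [s / total for s in scores]

probs = mock_predict(seed=0)
print(len(probs), round(sum(probs), 6))
```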
38
+
39
+ ### Switching to Real Model
40
+
41
+ Once your team has trained a model:
42
+
43
+ 1. **Save your model:**
44
+ ```python
45
+ # In your training script
46
+ torch.save(model.state_dict(), 'best_model.pth')
47
+ ```
48
+
49
+ 2. **Copy to models directory:**
50
+ ```bash
51
+ cp path/to/best_model.pth models/
52
+ ```
53
+
54
+ 3. **Update model_loader.py with your architecture:**
55
+ ```python
56
+ # Replace MockPlantDiseaseModel with your actual model
57
+ from your_training_code import YourCNNModel
58
+
59
+ def _load_real_model(self, model_name, model_path=None):
+     model = YourCNNModel(num_classes=39)
+     # ... rest of the code
62
+ ```
63
+
64
+ 4. **Change app.py to use real model:**
65
+ ```python
66
+ demo = create_interface(use_mock=False)
67
+ ```
68
+
69
+ ### Testing
70
+
71
+ **Test individual components:**
72
+ ```bash
73
+ # Test mock model
74
+ python models/mock_model.py
75
+
76
+ # Test model loader
77
+ python ui/model_loader.py
78
+
79
+ # Test utilities
80
+ python ui/utils.py
81
+ ```
82
+
83
+ **Test with different images:**
84
+ 1. Download example images:
85
+ ```bash
86
+ python download_examples.py --num 20
87
+ ```
88
+
89
+ 2. Run the app and test each feature:
90
+ - Single image prediction
91
+ - Batch processing
92
+ - Model switching
93
+ - Confidence threshold
94
+ - Flagging predictions
95
+
96
+ ## For End Users
97
+
98
+ ### Single Image Classification
99
+
100
+ 1. Open the app in your browser
101
+ 2. Go to the **"Single Image"** tab
102
+ 3. Upload an image:
103
+ - Click the image upload area
104
+ - Select a plant leaf photo from your computer
105
+ - Supported formats: JPG, PNG
106
+ 4. (Optional) Select a different model from the dropdown
107
+ 5. (Optional) Adjust the confidence threshold
108
+ 6. Click **"Predict Disease"**
109
+ 7. View the results:
110
+ - Top predictions shown as a chart
111
+ - Detailed information about the top prediction
112
+ - Raw JSON data available in the accordion
113
+
114
+ ### Using Example Images
115
+
116
+ 1. Go to the **"Example Images"** tab
117
+ 2. Click on any example image
118
+ 3. The image will be loaded into the predictor
119
+ 4. Go back to the "Single Image" tab
120
+ 5. Click "Predict Disease"
121
+
122
+ ### Batch Processing
123
+
124
+ To classify multiple images at once:
125
+
126
+ 1. Go to the **"Batch Processing"** tab
127
+ 2. Click "Upload Multiple Images"
128
+ 3. Select multiple image files (use Ctrl/Cmd + Click)
129
+ 4. Click "Predict All"
130
+ 5. View results for all images
131
+
132
+ ### Flagging Incorrect Predictions
133
+
134
+ If you notice a wrong prediction:
135
+
136
+ 1. After getting a prediction, expand **"Flag Incorrect Prediction"**
137
+ 2. Enter feedback (e.g., "This is actually Apple Scab, not Black Rot")
138
+ 3. Click **"Submit Flag"**
139
+ 4. Your feedback is recorded for the developers
140
+
141
+ ### Adjusting Confidence Threshold
142
+
143
+ The confidence threshold filters out low-confidence predictions:
144
+
145
+ 1. Use the slider at the top: **"Confidence Threshold (%)"**
146
+ 2. Move it right to see only high-confidence predictions
147
+ 3. Move it left to see more predictions (including uncertain ones)
148
+
149
+ **Example:**
150
+ - Set to 50%: Only shows predictions the model is at least 50% confident about
151
+ - Set to 1%: Shows almost all predictions
152
+
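In code, the threshold is just a filter over the (label, confidence) pairs — a sketch (names hypothetical; the real filtering lives in the app's prediction handler):

```python
def filter_by_threshold(predictions, threshold_pct):
    """Drop predictions below the slider's minimum confidence (given in percent)."""
    return {label: conf for label, conf in predictions.items()
            if conf * 100 >= threshold_pct}

preds = {"Tomato - Late blight": 0.852, "Tomato - Early blight": 0.083}
print(filter_by_threshold(preds, 50))  # only the 85.2% prediction survives
```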
153
+ ### Understanding Results
154
+
155
+ **Prediction Display:**
156
+ ```
157
+ Tomato - Late blight: 85.2%
158
+ Tomato - Early blight: 8.3%
159
+ Tomato - Leaf Mold: 3.1%
160
+ ...
161
+ ```
162
+
163
+ **Detailed Info:**
164
+ - **Top Prediction:** The most likely disease
165
+ - **Confidence:** How certain the model is (0-100%)
166
+ - **Plant:** The type of plant detected
167
+ - **Status:** Whether the plant is healthy or diseased
168
+
169
+ ### Tips for Best Results
170
+
171
+ 1. **Image Quality:**
172
+ - Use clear, well-lit photos
173
+ - Focus on the leaf
174
+ - Avoid blurry images
175
+
176
+ 2. **Image Content:**
177
+ - Show the diseased area clearly
178
+ - Include the whole leaf if possible
179
+ - One leaf per image works best
180
+
181
+ 3. **File Size:**
182
+ - The app automatically resizes images
183
+ - But uploading smaller images (<5MB) is faster
184
+
185
+ 4. **Interpreting Confidence:**
186
+ - >80%: High confidence - likely correct
187
+ - 50-80%: Moderate confidence - possible
188
+ - <50%: Low confidence - uncertain
189
+
190
+ ## Advanced Features
191
+
192
+ ### Switching Between Models
193
+
194
+ If your team trained multiple models:
195
+
196
+ 1. Use the **"Select Model"** dropdown at the top
197
+ 2. Options might include:
198
+ - CNN from Scratch
199
+ - Transfer Learning (ResNet18)
200
+ 3. Each model may perform differently
201
+ 4. Try both and compare results
202
+
203
+ ### Viewing Raw Predictions
204
+
205
+ For technical analysis:
206
+
207
+ 1. After prediction, expand **"Advanced: View Raw Predictions"**
208
+ 2. See the raw probability scores in JSON format
209
+ 3. Useful for debugging or detailed analysis
210
+
211
+ ### Batch Results Analysis
212
+
213
+ When processing multiple images:
214
+
215
+ 1. Results show the top prediction for each image
216
+ 2. Format: `Image 1: Disease Name (confidence%)`
217
+ 3. Scroll through all results
218
+ 4. Use this for analyzing a collection of plants
219
+
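The `Image N: Disease Name (confidence%)` lines can be produced with one format string per image — a sketch (function name and labels are hypothetical):

```python
def format_batch_results(results):
    """Render one result line per image, matching the format shown above."""
    return "\n".join(
        f"Image {i}: {label} ({conf * 100:.1f}%)"
        for i, (label, conf) in enumerate(results, start=1)
    )

print(format_batch_results([("Apple Scab", 0.91), ("Tomato - healthy", 0.764)]))
```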
220
+ ## Integration with Training Pipeline
221
+
222
+ ### For ML Team Members
223
+
224
+ **Updating the Model:**
225
+
226
+ After training a new model:
227
+
228
+ ```python
229
+ # Option 1: Upload to ClearML (recommended)
230
+ from clearml import Task
231
+ task = Task.current_task()
232
+ # Model is automatically uploaded
233
+
234
+ # Then in UI:
235
+ loader.load_from_clearml(task_id="your_task_id")
236
+ ```
237
+
238
+ ```python
239
+ # Option 2: Save locally
240
+ torch.save(model.state_dict(), 'models/best_model.pth')
241
+
242
+ # Then in UI:
243
+ loader.load_model(model_path='models/best_model.pth')
244
+ ```
245
+
246
+ ```python
247
+ # Option 3: Upload to HuggingFace Hub
248
+ from huggingface_hub import HfApi
249
+ api = HfApi()
250
+ api.upload_file(
+     path_or_fileobj="best_model.pth",
+     path_in_repo="model.pth",
+     repo_id="username/model-name"
+ )
255
+
256
+ # Then in UI:
257
+ loader.load_from_huggingface("username/model-name")
258
+ ```
259
+
260
+ ### Experiment Tracking
261
+
262
+ The UI can load any model from your ClearML experiments:
263
+
264
+ ```python
265
+ # Get task ID from ClearML dashboard
266
+ # Then update model_loader.py or pass as parameter
267
+ ```
268
+
269
+ ## Troubleshooting
270
+
271
+ ### Common Issues
272
+
273
+ **"Please upload an image"**
274
+ - Solution: Make sure you've selected an image before clicking Predict
275
+
276
+ **"No predictions above confidence threshold"**
277
+ - Solution: Lower the confidence threshold slider
278
+ - Or the image might not be a plant leaf
279
+
280
+ **"Error during prediction"**
281
+ - Check the error message in the output
282
+ - Verify the image is valid (not corrupted)
283
+ - Try a different image
284
+
285
+ **Slow predictions**
286
+ - First prediction may be slow (model loading)
287
+ - Subsequent predictions should be faster
288
+ - Batch processing might take longer for many images
289
+
290
+ **Example gallery is empty**
291
+ - Run `python download_examples.py` to download examples
292
+ - Or manually add images to `ui/examples/`
293
+
294
+ ### Getting Help
295
+
296
+ 1. Check the error message displayed in the UI
297
+ 2. Look at the terminal/console for detailed errors
298
+ 3. Refer to README.md for setup issues
299
+ 4. Check docs/deployment_guide.md for deployment issues
300
+ 5. Contact your team members or course TAs
301
+
302
+ ## Recording a Demo Video
303
+
304
+ For your project submission:
305
+
306
+ ### What to Include
307
+
308
+ 1. **Introduction** (10 sec)
309
+ - "This is our Plant Disease Detection system..."
310
+
311
+ 2. **Single Image Demo** (30-60 sec)
312
+ - Upload an image
313
+ - Show prediction results
314
+ - Explain the output
315
+
316
+ 3. **Advanced Features** (30-60 sec)
317
+ - Show model selection
318
+ - Demonstrate batch processing
319
+ - Show flagging feature
320
+
321
+ 4. **Example Gallery** (15-30 sec)
322
+ - Browse example images
323
+ - Select and predict
324
+
325
+ 5. **Conclusion** (10-15 sec)
326
+ - Summarize capabilities
327
+ - Mention accuracy/performance
328
+
329
+ ### Recording Tips
330
+
331
+ - Use screen recording software (QuickTime, OBS, etc.)
332
+ - Enable audio narration
333
+ - Show your face (optional, but adds a personal touch)
334
+ - Keep it concise (2-3 minutes for basic, 5-6 for feature-rich)
335
+ - Test audio quality before final recording
336
+ - Practice once before recording
337
+
338
+ ### Video Quality
339
+
340
+ - Resolution: At least 1080p
341
+ - Format: MP4 (most compatible)
342
+ - Audio: Clear voice, no background noise
343
+ - Editing: Simple cuts are fine, no need for fancy effects
344
+
345
+ ## API Documentation
346
+
347
+ For programmatic use (advanced):
348
+
349
+ ```python
350
+ from model_loader import get_model
351
+ from utils import preprocess_image, postprocess_predictions
352
+
353
+ # Load model
354
+ model, loader = get_model(use_mock=False)
355
+
356
+ # Prepare image
357
+ from PIL import Image
358
+ image = Image.open("path/to/leaf.jpg")
359
+ tensor = preprocess_image(image)
360
+
361
+ # Predict
362
+ import torch
363
+ with torch.no_grad():
+     logits = model(tensor.to(loader.device))
365
+
366
+ # Get results
367
+ top_preds, all_preds = postprocess_predictions(logits)
368
+ print(top_preds)
369
+ ```
370
+
371
+ ## FAQ
372
+
373
+ **Q: Can I use this with my own plant images?**
374
+ A: Yes! Upload any plant leaf image. It works best with the plants and diseases covered by the training set.
375
+
376
+ **Q: How accurate is the model?**
377
+ A: Check the README or About tab for test accuracy. Typically 85-95% on validation set.
378
+
379
+ **Q: Can I add more disease categories?**
380
+ A: You'd need to retrain the model with additional data for new categories.
381
+
382
+ **Q: Is my data saved?**
383
+ A: Images uploaded during use are not saved unless you flag a prediction. Flagged data stays in memory only.
384
+
385
+ **Q: Can I run this offline?**
386
+ A: Yes, once installed, the app runs locally and doesn't need internet (except for downloading model from HF/ClearML).
387
+
388
+ **Q: How do I cite this in a report?**
389
+ A: Reference your team's GitHub repo and the deployed app URL.
390
+
391
+ ## Next Steps
392
+
393
+ - **Test thoroughly:** Try various images, edge cases
394
+ - **Integrate real model:** Replace mock model with trained model
395
+ - **Deploy:** Follow deployment guide to put on HF Spaces
396
+ - **Record demo:** Create your submission video
397
+ - **Write report:** Document the UI features in your report
398
+
399
+ ---
400
+
401
+ **Happy classifying! 🌱**
docs/deployment_guide.md ADDED
@@ -0,0 +1,404 @@
1
+ # Deployment Guide - Hugging Face Spaces
2
+
3
+ This guide walks you through deploying the Plant Disease Detection app to Hugging Face Spaces.
4
+
5
+ ## Prerequisites
6
+
7
+ - Hugging Face account (free): https://huggingface.co/join
8
+ - Trained model weights
9
+ - Git installed locally
10
+
11
+ ## Deployment Options
12
+
13
+ You have three options for deploying your model:
14
+
15
+ ### Option 1: Upload Model Weights to Space (Recommended)
16
+ **Pros:** Simple, no external dependencies
17
+ **Cons:** Weights are part of the repo
18
+
19
+ ### Option 2: Load from Hugging Face Model Hub
20
+ **Pros:** Separate model versioning, smaller Space repo
21
+ **Cons:** Need to upload model separately
22
+
23
+ ### Option 3: Fetch from ClearML
24
+ **Pros:** Direct integration with training pipeline
25
+ **Cons:** Requires ClearML credentials in Space secrets
26
+
27
+ ## Step-by-Step: Option 1 (Upload Weights)
28
+
29
+ ### 1. Create a Hugging Face Space
30
+
31
+ 1. Go to https://huggingface.co/spaces
32
+ 2. Click "Create new Space"
33
+ 3. Fill in details:
34
+ - **Name:** `plant-disease-detection`
35
+ - **License:** Apache 2.0
36
+ - **Space SDK:** Gradio
37
+ - **Visibility:** Public
38
+ 4. Click "Create Space"
39
+
40
+ ### 2. Prepare Files for Deployment
41
+
42
+ Create a new directory for your Space:
43
+
44
+ ```bash
45
+ mkdir hf-space-deployment
46
+ cd hf-space-deployment
47
+ ```
48
+
49
+ Copy and flatten the UI structure:
50
+
51
+ ```bash
52
+ # Copy main app file
53
+ cp ../ui/app.py ./
54
+
55
+ # Copy supporting files
56
+ cp ../ui/config.py ./
57
+ cp ../ui/model_loader.py ./
58
+ cp ../ui/utils.py ./
59
+ cp ../models/mock_model.py ./
60
+
61
+ # Copy requirements
62
+ cp ../requirements.txt ./
63
+
64
+ # Copy your trained model weights
65
+ cp ../path/to/best_model.pth ./
66
+ ```
67
+
68
+ ### 3. Modify app.py for Deployment
69
+
70
+ Edit `app.py` to work with the flat structure:
71
+
72
+ ```python
73
+ # Change imports at the top of app.py
74
+ import config
75
+ from model_loader import ModelLoader
76
+ from utils import preprocess_image, postprocess_predictions, ...
77
+ from mock_model import create_mock_predictions
78
+
79
+ # At the bottom, change:
80
+ demo = create_interface(use_mock=False) # Use real model
81
+ demo.launch() # Remove server_name and server_port
82
+ ```
83
+
84
+ ### 4. Modify model_loader.py
85
+
86
+ Update the model loading to use your actual architecture:
87
+
88
+ ```python
89
+ # In _load_real_model method, replace mock model with your actual model:
90
+
91
+ def _load_real_model(self, model_name, model_path=None):
92
+ if model_config["model_type"] == "cnn":
93
+ # Import your actual model class
94
+ from your_model_module import YourCNNModel
95
+ model = YourCNNModel(num_classes=len(config.CLASS_NAMES))
96
+
97
+ # Load weights
98
+ if model_path:
99
+ model.load_state_dict(torch.load(model_path, map_location=self.device))
100
+ else:
101
+ # Load default model
102
+ model.load_state_dict(torch.load("best_model.pth", map_location=self.device))
103
+
104
+ return model
105
+ ```
106
+
107
+ ### 5. Create README for Space
108
+
109
+ Create `README.md` in the deployment directory:
110
+
111
+ ```markdown
112
+ ---
113
+ title: Plant Disease Detection
114
+ emoji: 🌱
115
+ colorFrom: green
116
+ colorTo: blue
117
+ sdk: gradio
118
+ sdk_version: 4.0.0
119
+ app_file: app.py
120
+ pinned: false
121
+ license: apache-2.0
122
+ ---
123
+
124
+ # 🌱 Plant Disease Detection
125
+
126
+ AI-powered plant disease detection from leaf images.
127
+
128
+ ## About
129
+
130
+ This application uses a Convolutional Neural Network (CNN) trained on the PlantVillage
131
+ dataset to identify plant diseases from leaf images.
132
+
133
+ **Developed by:** [Your Team Name]
134
+ **Course:** 5CCSAGAP - AI Group Project, King's College London
135
+ **Academic Year:** 2024-2025
136
+
137
+ ## Features
138
+
139
+ - Upload plant leaf images for disease detection
140
+ - Support for 39 different plant disease categories
141
+ - Multiple model options (CNN, Transfer Learning)
142
+ - Batch processing capability
143
+ - Confidence threshold adjustment
144
+
145
+ ## How to Use
146
+
147
+ 1. Go to the "Single Image" tab
148
+ 2. Upload a photo of a plant leaf
149
+ 3. Click "Predict Disease"
150
+ 4. View the top predictions with confidence scores
151
+
152
+ ## Dataset
153
+
154
+ Trained on the [PlantVillage dataset](https://huggingface.co/datasets/EdBianchi/plant-village)
155
+ containing 55,400 images across 39 disease categories.
156
+
157
+ ## Model Performance
158
+
159
+ - **Test Accuracy:** [Add your test accuracy]
160
+ - **Architecture:** Custom CNN / ResNet18 Transfer Learning
161
+ - **Training Framework:** PyTorch
162
+ - **Experiment Tracking:** ClearML
163
+
164
+ ## Links
165
+
166
+ - **GitHub:** [Add your repo link]
167
+ - **Team Project:** [Add project documentation link]
168
+
169
+ ---
170
+
171
+ **Disclaimer:** This is an educational project. Predictions should be verified by agricultural experts.
172
+ ```
173
+
174
+ ### 6. Upload to Hugging Face Space
175
+
176
+ Initialize Git and push:
177
+
178
+ ```bash
179
+ # Initialize git
180
+ git init
181
+
182
+ # Add Hugging Face Space as remote
183
+ git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/plant-disease-detection
184
+
185
+ # Create .gitignore
186
+ cat > .gitignore << EOF
187
+ __pycache__/
188
+ *.pyc
189
+ .DS_Store
190
+ EOF
191
+
192
+ # Add files
193
+ git add .
194
+
195
+ # Commit
196
+ git commit -m "Initial deployment of Plant Disease Detection app"
197
+
198
+ # Push to Hugging Face
199
+ git push origin main
200
+ ```
201
+
202
+ You'll be prompted for credentials:
203
+ - **Username:** Your HF username
204
+ - **Password:** Your HF access token (create at https://huggingface.co/settings/tokens)
205
+
206
+ ### 7. Verify Deployment
207
+
208
+ 1. Go to your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/plant-disease-detection`
209
+ 2. Wait for the build to complete (check "Building" status)
210
+ 3. Test the app with sample images
211
+
212
+ ## Step-by-Step: Option 2 (HF Model Hub)
213
+
214
+ ### 1. Upload Model to HF Model Hub
215
+
216
+ ```python
217
+ from huggingface_hub import HfApi, create_repo
218
+
219
+ # Create repository for model
220
+ repo_id = "YOUR_USERNAME/plant-disease-cnn"
221
+ create_repo(repo_id=repo_id, repo_type="model", exist_ok=True)
222
+
223
+ # Upload model
224
+ api = HfApi()
225
+ api.upload_file(
226
+ path_or_fileobj="path/to/best_model.pth",
227
+ path_in_repo="pytorch_model.pth",
228
+ repo_id=repo_id,
229
+ repo_type="model"
230
+ )
231
+
232
+ # Create model card
233
+ model_card = """
234
+ ---
235
+ license: apache-2.0
236
+ tags:
237
+ - image-classification
238
+ - plant-disease
239
+ - pytorch
240
+ ---
241
+
242
+ # Plant Disease CNN
243
+
244
+ CNN model for plant disease detection, trained on PlantVillage dataset.
245
+
246
+ ## Model Details
247
+
248
+ - **Architecture:** Custom CNN
249
+ - **Framework:** PyTorch
250
+ - **Classes:** 39 plant disease categories
251
+ - **Input Size:** 256x256 RGB images
252
+ - **Test Accuracy:** [Add your accuracy]
253
+
254
+ ## Usage
255
+
256
+ ```python
257
+ import torch
258
+ from huggingface_hub import hf_hub_download
259
+
260
+ model_path = hf_hub_download(
261
+ repo_id="YOUR_USERNAME/plant-disease-cnn",
262
+ filename="pytorch_model.pth"
263
+ )
264
+ model.load_state_dict(torch.load(model_path))
265
+ ```
266
+ """
267
+
268
+ api.upload_file(
269
+ path_or_fileobj=model_card.encode(),
270
+ path_in_repo="README.md",
271
+ repo_id=repo_id,
272
+ repo_type="model"
273
+ )
274
+ ```
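
A stray edit to the `---` delimiters in the model card breaks how the Hub parses the license and tags, so it's worth checking the front matter before uploading. A minimal stdlib sketch of that check (the function name is illustrative):

```python
def has_valid_front_matter(card_text):
    """Return True if the card starts with a '---' ... '---' YAML block."""
    lines = card_text.strip().splitlines()
    if not lines or lines[0].strip() != "---":
        return False
    # A closing delimiter must appear somewhere after the opening one
    return any(line.strip() == "---" for line in lines[1:])
```

Call it on `model_card` right before the `api.upload_file(...)` call for `README.md`.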
+
+ ### 2. Modify model_loader.py
+
+ ```python
+ # In model_loader.py, set default to load from HF:
+
+ def _load_real_model(self, model_name, model_path=None):
+     if model_path is None:
+         # Download from HF Hub
+         from huggingface_hub import hf_hub_download
+         model_path = hf_hub_download(
+             repo_id="YOUR_USERNAME/plant-disease-cnn",
+             filename="pytorch_model.pth"
+         )
+
+     # Load model architecture
+     model = YourCNNModel(num_classes=39)
+     model.load_state_dict(torch.load(model_path, map_location=self.device))
+
+     return model
+ ```
+
+ ### 3. Deploy to Space
+
+ Follow steps 1-7 from Option 1, but you don't need to include the .pth file.
+
+ ## Step-by-Step: Option 3 (ClearML)
+
+ ### 1. Get ClearML Credentials
+
+ 1. Log in to https://5ccsagap.er.kcl.ac.uk/
+ 2. Go to Settings → Workspace → Create new credentials
+ 3. Copy the credentials
+
+ ### 2. Add Secrets to Space
+
+ 1. Go to your Space settings
+ 2. Click "Repository secrets"
+ 3. Add three secrets:
+    - `CLEARML_API_HOST`: `https://5ccsagap.er.kcl.ac.uk`
+    - `CLEARML_API_ACCESS_KEY`: Your access key
+    - `CLEARML_API_SECRET_KEY`: Your secret key
+
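Space secrets show up as environment variables inside the running container, and a missing one produces a confusing ClearML connection error at startup. A small stdlib sketch that fails fast instead (names match the three secrets above; the helper itself is illustrative):

```python
# Sketch: fail fast if the ClearML secrets are not visible to the Space.
import os

CLEARML_VARS = ("CLEARML_API_HOST", "CLEARML_API_ACCESS_KEY", "CLEARML_API_SECRET_KEY")

def read_clearml_env():
    """Return the three ClearML settings, raising if any secret is unset."""
    values = {name: os.environ.get(name) for name in CLEARML_VARS}
    missing = [name for name, value in values.items() if not value]
    if missing:
        raise RuntimeError(f"Missing ClearML secrets: {', '.join(missing)}")
    return values
```

Calling this once before any ClearML import makes the build log show exactly which secret was forgotten.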
+ ### 3. Modify model_loader.py
+
+ ```python
+ def _load_real_model(self, model_name, model_path=None):
+     if model_path is None:
+         # Load from ClearML
+         from clearml import Task, Model
+
+         # ClearML reads CLEARML_API_HOST, CLEARML_API_ACCESS_KEY and
+         # CLEARML_API_SECRET_KEY from the environment (the Space secrets),
+         # so no credentials need to be passed explicitly
+         Task.init(
+             project_name="Plant Disease Detection",
+             task_name="UI Model Loading"
+         )
+
+         # Get the model
+         model_id = "YOUR_MODEL_ID"  # Get this from ClearML
+         model_obj = Model(model_id)
+         model_path = model_obj.get_local_copy()
+
+     # Load model architecture
+     model = YourCNNModel(num_classes=39)
+     model.load_state_dict(torch.load(model_path, map_location=self.device))
+
+     return model
+ ```
+
+ ## Troubleshooting
+
+ ### Build Failures
+
+ **Error: "Out of Memory"**
+ - Your model might be too large
+ - Try using a smaller model or quantization
+
+ **Error: "Module not found"**
+ - Check that all dependencies are listed in requirements.txt
+ - Verify imports work with the flat file structure
+
+ ### Runtime Errors
+
+ **Error: "Model file not found"**
+ - Verify the .pth file is uploaded
+ - Check the file path in model_loader.py
+
+ **Error: "Incompatible architecture"**
+ - Ensure your model class definition matches the saved weights
+ - Check num_classes=39
+
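An "incompatible architecture" error is quickest to localize by comparing parameter shapes between the checkpoint and a freshly built model. A torch-free sketch of the comparison logic (plain `{name: shape}` dicts stand in for the two `state_dict()`s; with real tensors, use `tuple(t.shape)`):

```python
def shape_mismatches(saved, fresh):
    """Compare two {param_name: shape} mappings and report differences.

    `saved` stands in for the checkpoint's shapes, `fresh` for the
    shapes of a newly constructed model's state_dict().
    """
    problems = []
    for name in sorted(set(saved) | set(fresh)):
        if name not in fresh:
            problems.append(f"unexpected key in checkpoint: {name}")
        elif name not in saved:
            problems.append(f"missing from checkpoint: {name}")
        elif saved[name] != fresh[name]:
            problems.append(f"{name}: checkpoint {saved[name]} vs model {fresh[name]}")
    return problems
```

A wrong `num_classes` shows up here as a single mismatch on the final linear layer rather than a wall of load errors.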
+ ## Updating the Deployment
+
+ To update your deployed app:
+
+ ```bash
+ # Make changes to files
+ # Commit and push
+ git add .
+ git commit -m "Update: description of changes"
+ git push origin main
+ ```
+
+ The Space will automatically rebuild.
+
+ ## Best Practices
+
+ 1. **Test locally first:** Always test the app locally before deploying
+ 2. **Use small example images:** Don't upload large images to the repo
+ 3. **Version your models:** Tag model versions in HF Hub or ClearML
+ 4. **Monitor usage:** Check Space analytics to see usage patterns
+ 5. **Update README:** Keep the model card and Space README up to date
+
+ ## Resources
+
+ - [HF Spaces Documentation](https://huggingface.co/docs/hub/spaces)
+ - [Gradio Documentation](https://gradio.app/docs/)
+ - [HF Hub Python API](https://huggingface.co/docs/huggingface_hub/)
+
+ ## Support
+
+ If you encounter issues:
+ 1. Check the Space build logs
+ 2. Test locally with the exact same file structure
+ 3. Consult the course TAs
+ 4. Check the HF Community forums
download_examples.py ADDED
@@ -0,0 +1,80 @@
+ """
+ Script to download example images from PlantVillage dataset
+ Run this to populate the ui/examples/ directory with sample images
+ """
+
+ import os
+ from pathlib import Path
+
+
+ def download_examples(num_examples=15):
+     """
+     Download example images from PlantVillage dataset
+
+     Args:
+         num_examples: Number of example images to download
+     """
+     try:
+         from datasets import load_dataset
+     except ImportError:
+         print("Error: 'datasets' library not installed")
+         print("Install it with: pip install datasets")
+         return
+
+     print("Loading PlantVillage dataset...")
+     print("This might take a few minutes on first run...")
+
+     try:
+         # Load the dataset
+         dataset = load_dataset("LinusWallin/plant-village", split="train")
+
+         print(f"Dataset loaded! Total images: {len(dataset)}")
+
+         # Create examples directory
+         examples_dir = Path(__file__).parent / "ui" / "examples"
+         examples_dir.mkdir(parents=True, exist_ok=True)
+
+         print(f"\nDownloading {num_examples} example images...")
+
+         # Calculate step size to get diverse examples
+         step = len(dataset) // num_examples
+
+         # Download examples
+         for i in range(num_examples):
+             idx = i * step
+             item = dataset[idx]
+
+             # Get image and label
+             image = item['image']
+             label = item['labels']
+
+             # Save image
+             filename = f"example_{i:02d}_class_{label}.jpg"
+             filepath = examples_dir / filename
+
+             image.save(filepath)
+             print(f"  ✓ Saved {filename}")
+
+         print(f"\n✅ Successfully downloaded {num_examples} example images to {examples_dir}")
+         print("\nYou can now run the app with the example gallery populated!")
+
+     except Exception as e:
+         print(f"Error downloading dataset: {e}")
+         print("\nAlternative: Manually add images to ui/examples/")
+         print("You can use any plant leaf images from the PlantVillage dataset")
+
+
+ if __name__ == "__main__":
+     import argparse
+
+     parser = argparse.ArgumentParser(description="Download example images for the UI")
+     parser.add_argument(
+         "--num",
+         type=int,
+         default=15,
+         help="Number of example images to download (default: 15)"
+     )
+
+     args = parser.parse_args()
+
+     download_examples(num_examples=args.num)
models/__init__.py ADDED
@@ -0,0 +1,3 @@
+ """
+ Models package
+ """
models/__pycache__/__init__.cpython-39.pyc ADDED
Binary file (184 Bytes).
 
models/__pycache__/mock_model.cpython-39.pyc ADDED
Binary file (2.87 kB).
 
models/mock_model.py ADDED
@@ -0,0 +1,101 @@
+ """
+ Mock model for UI development
+ This allows parallel development while the actual model is being trained
+ """
+
+ import torch
+ import torch.nn as nn
+ import numpy as np
+
+
+ class MockPlantDiseaseModel(nn.Module):
+     """
+     Mock CNN model that mimics the structure of a real plant disease classifier
+     Returns realistic-looking predictions for UI testing
+     """
+
+     def __init__(self, num_classes=39):
+         super(MockPlantDiseaseModel, self).__init__()
+
+         # Simple architecture that matches expected input/output
+         self.features = nn.Sequential(
+             nn.Conv2d(3, 32, kernel_size=3, padding=1),
+             nn.ReLU(),
+             nn.MaxPool2d(2),
+             nn.Conv2d(32, 64, kernel_size=3, padding=1),
+             nn.ReLU(),
+             nn.MaxPool2d(2),
+             nn.AdaptiveAvgPool2d((1, 1))
+         )
+
+         self.classifier = nn.Sequential(
+             nn.Flatten(),
+             nn.Linear(64, num_classes)
+         )
+
+         self.num_classes = num_classes
+
+     def forward(self, x):
+         """
+         Forward pass returning raw logits; the network is untrained,
+         so outputs are arbitrary but correctly shaped for UI testing
+         """
+         x = self.features(x)
+         x = self.classifier(x)
+         return x
+
+
+ def create_mock_predictions(class_names):
+     """
+     Create realistic-looking mock predictions
+     Returns a dict with class names and probabilities
+     """
+     num_classes = len(class_names)
+
+     # Create random probabilities that sum to 1
+     # Give higher weight to a few "predicted" classes
+     logits = np.random.randn(num_classes)
+     logits[np.random.randint(0, num_classes)] += 3  # Make one class likely
+     logits[np.random.randint(0, num_classes)] += 1.5  # Make another somewhat likely
+
+     # Convert to probabilities using softmax
+     probs = np.exp(logits) / np.sum(np.exp(logits))
+
+     # Create prediction dict
+     predictions = {name: float(prob) for name, prob in zip(class_names, probs)}
+
+     return predictions
+
+
+ def get_mock_model():
+     """
+     Returns a mock model instance
+     """
+     model = MockPlantDiseaseModel(num_classes=39)
+     model.eval()  # Set to evaluation mode
+     return model
+
+
+ if __name__ == "__main__":
+     # Test the mock model
+     print("Testing mock model...")
+     model = get_mock_model()
+
+     # Test with random input
+     test_input = torch.randn(1, 3, 256, 256)
+     with torch.no_grad():
+         output = model(test_input)
+
+     print(f"Output shape: {output.shape}")
+     print(f"Sample logits: {output[0][:5]}")
+
+     # Test mock predictions
+     # config.py lives in ui/, so make it importable when running this file directly
+     import sys
+     from pathlib import Path
+     sys.path.append(str(Path(__file__).parent.parent / "ui"))
+     from config import CLASS_NAMES
+     predictions = create_mock_predictions(CLASS_NAMES)
+     top_5 = sorted(predictions.items(), key=lambda x: x[1], reverse=True)[:5]
+
+     print("\nTop 5 predictions:")
+     for name, prob in top_5:
+         print(f"  {name}: {prob:.4f}")
quickstart.sh ADDED
@@ -0,0 +1,66 @@
+ #!/bin/bash
+
+ # Quickstart script for Plant Disease Detection UI
+ # This script sets up the environment and runs the app
+
+ echo "🌱 Plant Disease Detection - Quick Start"
+ echo "========================================"
+ echo ""
+
+ # Check if Python is installed
+ if ! command -v python3 &> /dev/null; then
+     echo "❌ Error: Python 3 is not installed"
+     echo "Please install Python 3.8 or higher"
+     exit 1
+ fi
+
+ echo "✓ Python found: $(python3 --version)"
+ echo ""
+
+ # Check if virtual environment exists
+ if [ ! -d "venv" ]; then
+     echo "📦 Creating virtual environment..."
+     python3 -m venv venv
+     echo "✓ Virtual environment created"
+ else
+     echo "✓ Virtual environment already exists"
+ fi
+
+ echo ""
+
+ # Activate virtual environment
+ echo "🔧 Activating virtual environment..."
+ source venv/bin/activate
+
+ # Install/upgrade dependencies
+ echo "📥 Installing dependencies..."
+ pip install --upgrade pip > /dev/null 2>&1
+ pip install -r requirements.txt
+
+ echo "✓ Dependencies installed"
+ echo ""
+
+ # Check if example images exist
+ if [ ! -d "ui/examples" ] || [ -z "$(ls -A ui/examples 2>/dev/null)" ]; then
+     echo "📸 No example images found"
+     echo ""
+     read -p "Would you like to download example images? (y/n): " -n 1 -r
+     echo ""
+     if [[ $REPLY =~ ^[Yy]$ ]]; then
+         echo "Downloading example images..."
+         python3 download_examples.py
+         echo ""
+     fi
+ fi
+
+ # Run the app
+ echo "🚀 Starting the application..."
+ echo ""
+ echo "The app will be available at: http://localhost:7860"
+ echo "Press Ctrl+C to stop the server"
+ echo ""
+ echo "========================================"
+ echo ""
+
+ cd ui
+ python3 app.py
requirements.txt ADDED
@@ -0,0 +1,13 @@
+ # Core dependencies
+ torch>=2.0.0
+ torchvision>=0.15.0
+ gradio>=4.0.0
+ numpy>=1.24.0
+ Pillow>=10.0.0
+
+ # For model deployment and tracking
+ huggingface-hub>=0.19.0
+ clearml>=1.14.0
+
+ # Optional: for advanced features
+ datasets>=2.14.0  # For loading PlantVillage dataset from HuggingFace
ui/__init__.py ADDED
@@ -0,0 +1,5 @@
+ """
+ Plant Disease Detection UI Package
+ """
+
+ __version__ = "1.0.0"
ui/__pycache__/config.cpython-39.pyc ADDED
Binary file (1.91 kB).
 
ui/__pycache__/model_loader.cpython-39.pyc ADDED
Binary file (5.58 kB).
 
ui/__pycache__/utils.cpython-39.pyc ADDED
Binary file (5.64 kB).
 
ui/app.py ADDED
@@ -0,0 +1,416 @@
+ """
+ Plant Disease Detection Gradio App
+ Main UI application with advanced features
+ """
+
+ import gradio as gr
+ import torch
+ import sys
+ from pathlib import Path
+ import json
+ from datetime import datetime
+
+ from PIL import Image
+
+ # Add current directory to path
+ sys.path.append(str(Path(__file__).parent))
+ sys.path.append(str(Path(__file__).parent.parent))
+
+ import config
+ from model_loader import ModelLoader
+ from utils import (
+     preprocess_image,
+     postprocess_predictions,
+     format_class_name,
+     get_disease_info,
+     batch_preprocess_images
+ )
+ from models.mock_model import create_mock_predictions
+
+
+ class PlantDiseaseApp:
+     """
+     Main application class for Plant Disease Detection
+     """
+
+     def __init__(self, use_mock=True):
+         """
+         Initialize the application
+
+         Args:
+             use_mock: Whether to use mock model for development
+         """
+         self.use_mock = use_mock
+         self.model_loader = ModelLoader(use_mock=use_mock)
+         self.current_model_name = "CNN from Scratch"
+         self.model = self.model_loader.load_model(self.current_model_name)
+         self.flagged_predictions = []
+
+     def predict(self, image, model_name, confidence_threshold):
+         """
+         Make prediction on a single image
+
+         Args:
+             image: Input image
+             model_name: Name of model to use
+             confidence_threshold: Minimum confidence to display
+
+         Returns:
+             Predictions, formatted info, and detailed results
+         """
+         if image is None:
+             return None, "Please upload an image", ""
+
+         try:
+             # Switch model if needed
+             if model_name != self.current_model_name:
+                 self.model = self.model_loader.load_model(model_name)
+                 self.current_model_name = model_name
+
+             # Preprocess image
+             tensor = preprocess_image(image)
+             tensor = tensor.to(self.model_loader.device)
+
+             # Get prediction
+             with torch.no_grad():
+                 if self.use_mock:
+                     # Use mock predictions for development
+                     predictions = create_mock_predictions(config.CLASS_NAMES)
+                     logits = torch.tensor([list(predictions.values())])
+                 else:
+                     logits = self.model(tensor)
+
+             # Postprocess
+             top_predictions, all_predictions = postprocess_predictions(
+                 logits, config.CLASS_NAMES, config.TOP_K_PREDICTIONS
+             )
+
+             # Filter by confidence threshold
+             filtered_predictions = {
+                 k: v for k, v in top_predictions.items() if v >= confidence_threshold / 100
+             }
+
+             # Get top prediction info
+             if filtered_predictions:
+                 top_class = max(filtered_predictions.items(), key=lambda x: x[1])[0]
+                 top_prob = filtered_predictions[top_class]
+                 disease_info = get_disease_info(top_class)
+
+                 result_text = f"""
+ **Top Prediction:** {disease_info['formatted_name']}
+ **Confidence:** {top_prob*100:.2f}%
+ **Plant:** {disease_info['plant']}
+ **Status:** {'Healthy' if disease_info['is_healthy'] else 'Disease Detected'}
+ """
+             else:
+                 result_text = "No predictions above confidence threshold"
+
+             # Format for Gradio Label component
+             display_predictions = {
+                 format_class_name(k): v for k, v in filtered_predictions.items()
+             }
+
+             return display_predictions, result_text, json.dumps(filtered_predictions, indent=2)
+
+         except Exception as e:
+             return None, f"Error during prediction: {str(e)}", ""
+
+     def predict_batch(self, files, model_name, confidence_threshold):
+         """
+         Make predictions on multiple images
+
+         Args:
+             files: List of uploaded files
+             model_name: Name of model to use
+             confidence_threshold: Minimum confidence to display
+
+         Returns:
+             Results for each image
+         """
+         if not files:
+             return "Please upload at least one image"
+
+         results = []
+         for i, file in enumerate(files):
+             try:
+                 # The File component returns filepaths, so open each one as a PIL image
+                 image = Image.open(file)
+
+                 # Get predictions for this image
+                 preds, info, _ = self.predict(image, model_name, confidence_threshold)
+
+                 if preds:
+                     top_class = max(preds.items(), key=lambda x: x[1])[0]
+                     top_prob = preds[top_class]
+                     results.append(f"**Image {i+1}:** {top_class} ({top_prob*100:.2f}%)")
+                 else:
+                     results.append(f"**Image {i+1}:** No prediction")
+
+             except Exception as e:
+                 results.append(f"**Image {i+1}:** Error - {str(e)}")
+
+         return "\n\n".join(results)
+
+     def flag_prediction(self, image, prediction, user_feedback):
+         """
+         Flag a prediction as incorrect
+
+         Args:
+             image: The input image
+             prediction: The model's prediction
+             user_feedback: User's feedback text
+
+         Returns:
+             Confirmation message
+         """
+         if image is None:
+             return "No image to flag"
+
+         flag_entry = {
+             "timestamp": datetime.now().isoformat(),
+             "prediction": prediction,
+             "feedback": user_feedback
+         }
+
+         self.flagged_predictions.append(flag_entry)
+
+         # In a real deployment, you would save this to a file or database
+         # For now, we'll just keep it in memory
+         return f"Thank you! Flagged prediction #{len(self.flagged_predictions)}"
+
+     def get_example_images(self):
+         """
+         Get list of example images from examples directory
+
+         Returns:
+             List of example image paths
+         """
+         examples_dir = Path(__file__).parent / "examples"
+
+         if not examples_dir.exists():
+             return []
+
+         # Get all image files
+         image_extensions = ['.jpg', '.jpeg', '.png']
+         examples = []
+
+         for ext in image_extensions:
+             examples.extend(list(examples_dir.glob(f"*{ext}")))
+
+         return [str(path) for path in examples[:10]]  # Return max 10 examples
+
+
+ def create_interface(use_mock=True):
+     """
+     Create the Gradio interface
+
+     Args:
+         use_mock: Whether to use mock model
+
+     Returns:
+         Gradio Blocks interface
+     """
+     app = PlantDiseaseApp(use_mock=use_mock)
+
+     # Custom CSS for better styling
+     custom_css = """
+     .main-header {
+         text-align: center;
+         background: linear-gradient(90deg, #667eea 0%, #764ba2 100%);
+         padding: 2rem;
+         border-radius: 10px;
+         color: white;
+         margin-bottom: 2rem;
+     }
+     .prediction-box {
+         border: 2px solid #667eea;
+         border-radius: 10px;
+         padding: 1rem;
+         background: #f8f9fa;
+     }
+     """
+
+     with gr.Blocks(css=custom_css, title="Plant Disease Detection") as demo:
+
+         # Header
+         gr.Markdown(
+             """
+             <div class="main-header">
+             <h1>🌱 Plant Disease Detection System</h1>
+             <p>Upload a plant leaf image to detect diseases using AI</p>
+             </div>
+             """
+         )
+
+         # Model selection (available to all tabs)
+         with gr.Row():
+             model_selector = gr.Dropdown(
+                 choices=list(config.MODEL_CONFIGS.keys()),
+                 value="CNN from Scratch",
+                 label="Select Model",
+                 info="Choose which model to use for predictions"
+             )
+             confidence_slider = gr.Slider(
+                 minimum=0,
+                 maximum=100,
+                 value=1,
+                 step=1,
+                 label="Confidence Threshold (%)",
+                 info="Only show predictions above this confidence"
+             )
+
+         # Tabs for different features
+         with gr.Tabs():
+
+             # Tab 1: Single Image Prediction
+             with gr.Tab("Single Image"):
+                 with gr.Row():
+                     with gr.Column(scale=1):
+                         image_input = gr.Image(
+                             label="Upload Plant Leaf Image",
+                             type="pil"
+                         )
+
+                         predict_btn = gr.Button("Predict Disease", variant="primary", size="lg")
+
+                         with gr.Accordion("Flag Incorrect Prediction", open=False):
+                             feedback_text = gr.Textbox(
+                                 label="Your Feedback",
+                                 placeholder="What should the correct classification be?",
+                                 lines=2
+                             )
+                             flag_btn = gr.Button("Submit Flag")
+                             flag_output = gr.Textbox(label="Status", interactive=False)
+
+                     with gr.Column(scale=1):
+                         prediction_output = gr.Label(
+                             label="Top Predictions",
+                             num_top_classes=10
+                         )
+                         result_info = gr.Markdown(label="Detailed Results")
+
+                         with gr.Accordion("Advanced: View Raw Predictions", open=False):
+                             raw_predictions = gr.Textbox(
+                                 label="Raw JSON Output",
+                                 lines=10,
+                                 interactive=False
+                             )
+
+                 # Connect buttons
+                 predict_btn.click(
+                     fn=app.predict,
+                     inputs=[image_input, model_selector, confidence_slider],
+                     outputs=[prediction_output, result_info, raw_predictions]
+                 )
+
+                 # Markdown components can't be used as inputs, so pass the raw JSON instead
+                 flag_btn.click(
+                     fn=app.flag_prediction,
+                     inputs=[image_input, raw_predictions, feedback_text],
+                     outputs=flag_output
+                 )
+
+             # Tab 2: Example Gallery
+             with gr.Tab("Example Images"):
+                 gr.Markdown("### Try these example plant images")
+                 gr.Markdown("Click on an example below to load it into the predictor")
+
+                 example_images = app.get_example_images()
+
+                 if example_images:
+                     examples = gr.Examples(
+                         examples=example_images,
+                         inputs=image_input,
+                         label="Example Plant Disease Images"
+                     )
+                 else:
+                     gr.Markdown(
+                         """
+                         **No example images found.**
+
+                         To add example images:
+                         1. Create a folder: `ui/examples/`
+                         2. Add plant leaf images (.jpg, .png) to this folder
+                         3. Restart the app
+                         """
+                     )
+
+             # Tab 3: Batch Processing
+             with gr.Tab("Batch Processing"):
+                 gr.Markdown("### Upload multiple images for batch processing")
+
+                 batch_input = gr.File(
+                     label="Upload Multiple Images",
+                     file_count="multiple",
+                     type="filepath"
+                 )
+
+                 batch_predict_btn = gr.Button("Predict All", variant="primary")
+
+                 batch_output = gr.Markdown(label="Batch Results")
+
+                 batch_predict_btn.click(
+                     fn=app.predict_batch,
+                     inputs=[batch_input, model_selector, confidence_slider],
+                     outputs=batch_output
+                 )
+
+             # Tab 4: About
+             with gr.Tab("About"):
+                 gr.Markdown(
+                     """
+                     ## About This Application
+
+                     This Plant Disease Detection system was developed as part of the
+                     5CCSAGAP Artificial Intelligence Group Project at King's College London.
+
+                     ### Features
+                     - **Single Image Prediction**: Upload and classify individual plant images
+                     - **Multiple Models**: Switch between different trained models
+                     - **Batch Processing**: Classify multiple images at once
+                     - **Example Gallery**: Try pre-loaded example images
+                     - **Flagging System**: Report incorrect predictions to help improve the model
+                     - **Confidence Threshold**: Filter predictions by confidence level
+
+                     ### Dataset
+                     The model is trained on the PlantVillage dataset, which contains 55,400 images
+                     across 39 different plant disease categories.
+
+                     ### Model Architecture
+                     - **CNN from Scratch**: Custom convolutional neural network
+                     - **Transfer Learning**: Fine-tuned ResNet18 (if available)
+
+                     ### Technology Stack
+                     - **PyTorch**: Model training and inference
+                     - **Gradio**: User interface
+                     - **ClearML**: Experiment tracking
+                     - **Hugging Face Spaces**: Deployment platform
+
+                     ### Team
+                     [Add your team members' names here]
+
+                     ### Links
+                     - [GitHub Repository](https://github.kcl.ac.uk/K23064919/smallGroupProject)
+                     - [ClearML Dashboard](https://5ccsagap.er.kcl.ac.uk/)
+                     """
+                 )
+
+         # Footer
+         gr.Markdown(
+             """
+             ---
+             **Note:** This is an AI-powered system and predictions should be verified by experts.
+             Built with ❤️ by KCL AI Students
+             """
+         )
+
+     return demo
+
+
+ if __name__ == "__main__":
+     # Create and launch the app
+     print("Starting Plant Disease Detection App...")
+
+     # Use mock=True for development, mock=False when you have real models
+     demo = create_interface(use_mock=True)
+
+     # Launch the app
+     demo.launch(
+         share=False,  # Set to True to create a public link
+         server_name="0.0.0.0",  # Makes it accessible on your network
+         server_port=7860
+     )
ui/config.py ADDED
@@ -0,0 +1,77 @@
+ """
+ Configuration file for Plant Disease Detection UI
+ Contains class names, paths, and other settings
+ """
+
+ # PlantVillage dataset has 39 classes
+ CLASS_NAMES = [
+     "Apple___Apple_scab",
+     "Apple___Black_rot",
+     "Apple___Cedar_apple_rust",
+     "Apple___healthy",
+     "Blueberry___healthy",
+     "Cherry_(including_sour)___Powdery_mildew",
+     "Cherry_(including_sour)___healthy",
+     "Corn_(maize)___Cercospora_leaf_spot Gray_leaf_spot",
+     "Corn_(maize)___Common_rust_",
+     "Corn_(maize)___Northern_Leaf_Blight",
+     "Corn_(maize)___healthy",
+     "Grape___Black_rot",
+     "Grape___Esca_(Black_Measles)",
+     "Grape___Leaf_blight_(Isariopsis_Leaf_Spot)",
+     "Grape___healthy",
+     "Orange___Haunglongbing_(Citrus_greening)",
+     "Peach___Bacterial_spot",
+     "Peach___healthy",
+     "Pepper,_bell___Bacterial_spot",
+     "Pepper,_bell___healthy",
+     "Potato___Early_blight",
+     "Potato___Late_blight",
+     "Potato___healthy",
+     "Raspberry___healthy",
+     "Soybean___healthy",
+     "Squash___Powdery_mildew",
+     "Strawberry___Leaf_scorch",
+     "Strawberry___healthy",
+     "Tomato___Bacterial_spot",
+     "Tomato___Early_blight",
+     "Tomato___Late_blight",
+     "Tomato___Leaf_Mold",
+     "Tomato___Septoria_leaf_spot",
+     "Tomato___Spider_mites Two-spotted_spider_mite",
+     "Tomato___Target_Spot",
+     "Tomato___Tomato_Yellow_Leaf_Curl_Virus",
+     "Tomato___Tomato_mosaic_virus",
+     "Tomato___healthy"
+ ]
+
+ # Model configurations
+ MODEL_CONFIGS = {
+     "CNN from Scratch": {
+         "description": "Custom CNN model trained from scratch",
+         "input_size": (256, 256),
+         "model_type": "cnn"
+     },
+     "Transfer Learning (ResNet18)": {
+         "description": "Fine-tuned ResNet18 model",
+         "input_size": (256, 256),
+         "model_type": "resnet18"
+     }
+ }
+
+ # Image preprocessing settings
+ IMAGE_SIZE = (256, 256)
+ NORMALIZE_MEAN = [0.485, 0.456, 0.406]  # ImageNet mean
+ NORMALIZE_STD = [0.229, 0.224, 0.225]  # ImageNet std
+
+ # UI settings
+ TOP_K_PREDICTIONS = 10
+ CONFIDENCE_THRESHOLD = 0.01  # Minimum confidence to display
+
+ # Paths (will be updated when integrating with real model)
+ MODEL_PATH = "models/best_model.pth"
+ EXAMPLES_PATH = "ui/examples/"
+
+ # ClearML settings (for fetching model from ClearML)
+ CLEARML_PROJECT_NAME = "Plant Disease Detection"
+ CLEARML_TASK_NAME = "CNN Training"
ui/model_loader.py ADDED
@@ -0,0 +1,207 @@
+ """
+ Model loading utilities
+ Handles loading models from different sources: local files, HuggingFace, ClearML
+ """
+
+ import torch
+ import sys
+ from pathlib import Path
+
+ # Add parent directory to path to import from models
+ sys.path.append(str(Path(__file__).parent.parent))
+
+ from models.mock_model import MockPlantDiseaseModel, create_mock_predictions
+ import config
+
+
+ class ModelLoader:
+     """
+     Handles loading and managing plant disease models
+     """
+
+     def __init__(self, use_mock=True):
+         """
+         Initialize model loader
+
+         Args:
+             use_mock: If True, use mock model for development
+         """
+         self.use_mock = use_mock
+         self.model = None
+         self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+     def load_model(self, model_name="CNN from Scratch", model_path=None):
+         """
+         Load a model based on configuration
+
+         Args:
+             model_name: Name of the model configuration
+             model_path: Optional path to model weights
+
+         Returns:
+             Loaded model
+         """
+         if self.use_mock:
+             print("Loading mock model for development...")
+             self.model = self._load_mock_model()
+         else:
+             print(f"Loading real model: {model_name}")
+             self.model = self._load_real_model(model_name, model_path)
+
+         self.model.to(self.device)
+         self.model.eval()
+         return self.model
+
+     def _load_mock_model(self):
+         """Load the mock model"""
+         model = MockPlantDiseaseModel(num_classes=len(config.CLASS_NAMES))
+         return model
+
+     def _load_real_model(self, model_name, model_path=None):
+         """
+         Load a real trained model
+
+         Args:
+             model_name: Model configuration name
+             model_path: Path to model weights
+
+         Returns:
+             Loaded model
+         """
+         model_config = config.MODEL_CONFIGS.get(model_name)
+
+         if model_config is None:
+             raise ValueError(f"Unknown model: {model_name}")
+
+         # TODO: Replace this with your actual model architecture
+         # For now, using mock model structure
+         if model_config["model_type"] == "cnn":
+             model = MockPlantDiseaseModel(num_classes=len(config.CLASS_NAMES))
+         elif model_config["model_type"] == "resnet18":
+             # TODO: Load ResNet18 transfer learning model
+             import torchvision.models as models
+             model = models.resnet18(weights=None)  # weights come from the checkpoint below
+             model.fc = torch.nn.Linear(model.fc.in_features, len(config.CLASS_NAMES))
+         else:
+             raise ValueError(f"Unknown model type: {model_config['model_type']}")
+
+         # Load weights if path provided
+         if model_path:
+             print(f"Loading weights from {model_path}")
+             model.load_state_dict(torch.load(model_path, map_location=self.device))
+
+         return model
+
+     def load_from_clearml(self, task_id=None, project_name=None, task_name=None):
+         """
+         Load model from ClearML
+
+         Args:
+             task_id: ClearML task ID (if known)
+             project_name: ClearML project name
+             task_name: ClearML task name
+
+         Returns:
+             Loaded model
+         """
+         try:
+             from clearml import Task, Model
+
+             if task_id:
+                 task = Task.get_task(task_id=task_id)
+             elif project_name and task_name:
+                 # Get the latest task with this name
+                 task = Task.get_task(
+                     project_name=project_name,
+                     task_name=task_name
+                 )
+             else:
+                 raise ValueError("Must provide either task_id or (project_name and task_name)")
+
+             # Get the model from the task
+             model_id = task.models['output'][-1].id if task.models.get('output') else None
+
+             if model_id:
+                 model_obj = Model(model_id)
+                 model_path = model_obj.get_local_copy()
+
+                 # Load the model
+                 self.model = self._load_real_model("CNN from Scratch", model_path)
+                 print(f"Model loaded from ClearML task: {task_id or task_name}")
+
+                 return self.model
+             else:
+                 raise ValueError("No output model found in ClearML task")
+
+         except ImportError:
+             print("ClearML not installed. Install with: pip install clearml")
+ print("ClearML not installed. Install with: pip install clearml")
138
+ print("Falling back to mock model")
139
+ return self._load_mock_model()
140
+ except Exception as e:
141
+ print(f"Error loading from ClearML: {e}")
142
+ print("Falling back to mock model")
143
+ return self._load_mock_model()
144
+
145
+ def load_from_huggingface(self, model_id):
146
+ """
147
+ Load model from HuggingFace Hub
148
+
149
+ Args:
150
+ model_id: HuggingFace model ID (e.g., "username/model-name")
151
+
152
+ Returns:
153
+ Loaded model
154
+ """
155
+ try:
156
+ from huggingface_hub import hf_hub_download
157
+
158
+ # Download model file
159
+ model_path = hf_hub_download(repo_id=model_id, filename="model.pth")
160
+
161
+ # Load the model
162
+ self.model = self._load_real_model("CNN from Scratch", model_path)
163
+ print(f"Model loaded from HuggingFace: {model_id}")
164
+
165
+ return self.model
166
+
167
+ except ImportError:
168
+ print("huggingface_hub not installed. Install with: pip install huggingface_hub")
169
+ print("Falling back to mock model")
170
+ return self._load_mock_model()
171
+ except Exception as e:
172
+ print(f"Error loading from HuggingFace: {e}")
173
+ print("Falling back to mock model")
174
+ return self._load_mock_model()
175
+
176
+
177
+ def get_model(use_mock=True, **kwargs):
178
+ """
179
+ Convenience function to get a loaded model
180
+
181
+ Args:
182
+ use_mock: Whether to use mock model
183
+ **kwargs: Additional arguments for model loading
184
+
185
+ Returns:
186
+ Loaded model and model loader instance
187
+ """
188
+ loader = ModelLoader(use_mock=use_mock)
189
+ model = loader.load_model(**kwargs)
190
+ return model, loader
191
+
192
+
193
+ if __name__ == "__main__":
194
+ # Test model loading
195
+ print("Testing model loading...")
196
+
197
+ # Test mock model
198
+ print("\n1. Loading mock model:")
199
+ model, loader = get_model(use_mock=True)
200
+ print(f"Model type: {type(model).__name__}")
201
+ print(f"Device: {loader.device}")
202
+
203
+ # Test with dummy input
204
+ dummy_input = torch.randn(1, 3, 256, 256).to(loader.device)
205
+ with torch.no_grad():
206
+ output = model(dummy_input)
207
+ print(f"Output shape: {output.shape}")
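The try-import/fallback control flow used in `load_from_clearml` and `load_from_huggingface` can be exercised in isolation. The sketch below (hypothetical helper names, no torch or ClearML required) shows the same pattern: attempt a loader, and on a missing optional dependency or any runtime error, fall back to the mock:

```python
def load_with_fallback(loader_fn, fallback_fn):
    """Run loader_fn; on a missing dependency or any error, use fallback_fn."""
    try:
        return loader_fn()
    except ImportError as e:
        # Optional dependency (e.g. clearml, huggingface_hub) not installed
        print(f"Optional dependency missing: {e}. Falling back to mock model.")
        return fallback_fn()
    except Exception as e:
        # Any other failure (network, missing artifact, bad weights file)
        print(f"Error loading model: {e}. Falling back to mock model.")
        return fallback_fn()


def broken_loader():
    import clearml_not_installed  # simulates a missing optional dependency
    return "real model"


model = load_with_fallback(broken_loader, lambda: "mock model")
print(model)  # mock model
```

Keeping the fallback in one helper would also let `load_from_clearml` and `load_from_huggingface` share their duplicated `except` blocks.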
ui/utils.py ADDED
@@ -0,0 +1,210 @@
+ """
+ Utility functions for the Plant Disease Detection UI
+ """
+
+ import torch
+ import numpy as np
+ from PIL import Image
+ import torchvision.transforms as transforms
+ import config
+
+
+ def preprocess_image(image, image_size=config.IMAGE_SIZE):
+     """
+     Preprocess an image for model input.
+
+     Args:
+         image: PIL Image or numpy array
+         image_size: Target size (height, width)
+
+     Returns:
+         Preprocessed tensor ready for the model
+     """
+     # Convert to PIL Image if numpy array
+     if isinstance(image, np.ndarray):
+         image = Image.fromarray(image.astype('uint8'))
+
+     # Convert RGBA to RGB if necessary
+     if image.mode == 'RGBA':
+         image = image.convert('RGB')
+
+     # Define preprocessing transforms
+     transform = transforms.Compose([
+         transforms.Resize(image_size),
+         transforms.ToTensor(),
+         transforms.Normalize(mean=config.NORMALIZE_MEAN, std=config.NORMALIZE_STD)
+     ])
+
+     # Apply transforms
+     tensor = transform(image)
+
+     # Add batch dimension
+     tensor = tensor.unsqueeze(0)
+
+     return tensor
+
+
+ def postprocess_predictions(logits, class_names=config.CLASS_NAMES, top_k=config.TOP_K_PREDICTIONS):
+     """
+     Convert model logits to human-readable predictions.
+
+     Args:
+         logits: Raw model output
+         class_names: List of class names
+         top_k: Number of top predictions to return
+
+     Returns:
+         Tuple of (top-k predictions, all predictions), each a dict of name -> probability
+     """
+     # Convert logits to probabilities using softmax
+     probs = torch.nn.functional.softmax(logits, dim=1)
+
+     # Convert to numpy
+     probs = probs.detach().cpu().numpy()[0]
+
+     # Create predictions dictionary
+     predictions = {name: float(prob) for name, prob in zip(class_names, probs)}
+
+     # Get top-k predictions
+     top_predictions = sorted(predictions.items(), key=lambda x: x[1], reverse=True)[:top_k]
+
+     return dict(top_predictions), predictions
+
+
+ def format_prediction_for_display(predictions):
+     """
+     Format predictions for Gradio display.
+
+     Args:
+         predictions: Dictionary of class names and probabilities
+
+     Returns:
+         Dictionary formatted for the Gradio Label component
+     """
+     # Filter out very low confidence predictions
+     filtered = {k: v for k, v in predictions.items() if v >= config.CONFIDENCE_THRESHOLD}
+
+     return filtered
+
+
+ def format_class_name(class_name):
+     """
+     Format a class name for better readability.
+
+     Args:
+         class_name: Original class name (e.g., "Tomato___Late_blight")
+
+     Returns:
+         Formatted class name (e.g., "Tomato - Late blight")
+     """
+     # Split plant and disease on "___", then replace remaining underscores with spaces
+     parts = class_name.split("___")
+
+     if len(parts) == 2:
+         plant, disease = parts
+         plant = plant.replace("_", " ")
+         disease = disease.replace("_", " ")
+         return f"{plant} - {disease}"
+     else:
+         return class_name.replace("_", " ")
+
+
+ def get_disease_info(class_name):
+     """
+     Get information about a disease (for future enhancement).
+
+     Args:
+         class_name: Disease class name
+
+     Returns:
+         Dictionary with disease information
+     """
+     # This is a placeholder - you could expand this with actual disease information
+     parts = class_name.split("___")
+
+     info = {
+         "plant": parts[0].replace("_", " ") if len(parts) > 0 else "Unknown",
+         "disease": parts[1].replace("_", " ") if len(parts) > 1 else "Unknown",
+         "is_healthy": "healthy" in class_name.lower(),
+         "formatted_name": format_class_name(class_name)
+     }
+
+     return info
+
+
+ def batch_preprocess_images(images):
+     """
+     Preprocess multiple images for batch prediction.
+
+     Args:
+         images: List of PIL Images or numpy arrays
+
+     Returns:
+         Batched tensor ready for the model
+     """
+     tensors = [preprocess_image(img) for img in images]
+     batch = torch.cat(tensors, dim=0)
+     return batch
+
+
+ def create_confidence_label(predictions, top_k=5):
+     """
+     Create a formatted string showing the top predictions.
+
+     Args:
+         predictions: Dictionary of predictions
+         top_k: Number of top predictions to show
+
+     Returns:
+         Formatted string
+     """
+     top_preds = sorted(predictions.items(), key=lambda x: x[1], reverse=True)[:top_k]
+
+     lines = []
+     for i, (class_name, prob) in enumerate(top_preds, 1):
+         formatted_name = format_class_name(class_name)
+         lines.append(f"{i}. {formatted_name}: {prob*100:.2f}%")
+
+     return "\n".join(lines)
+
+
+ if __name__ == "__main__":
+     # Test utilities
+     print("Testing utility functions...")
+
+     # Test class name formatting
+     test_names = [
+         "Tomato___Late_blight",
+         "Apple___healthy",
+         "Corn_(maize)___Common_rust_"
+     ]
+
+     print("\nClass name formatting:")
+     for name in test_names:
+         print(f"  {name} -> {format_class_name(name)}")
+
+     # Test disease info
+     print("\nDisease info:")
+     for name in test_names:
+         info = get_disease_info(name)
+         print(f"  {name}:")
+         print(f"    Plant: {info['plant']}")
+         print(f"    Disease: {info['disease']}")
+         print(f"    Healthy: {info['is_healthy']}")
+
+     # Test image preprocessing
+     print("\nImage preprocessing:")
+     dummy_image = Image.new('RGB', (512, 512), color='red')
+     tensor = preprocess_image(dummy_image)
+     print(f"  Input size: {dummy_image.size}")
+     print(f"  Output tensor shape: {tensor.shape}")
+
+     # Test mock predictions (the mock probability values are treated as logits
+     # here, which is fine as a smoke test of the postprocessing path)
+     print("\nMock predictions:")
+     from models.mock_model import create_mock_predictions
+     preds = create_mock_predictions(config.CLASS_NAMES)
+     top_preds, all_preds = postprocess_predictions(
+         torch.tensor([list(preds.values())]),
+         config.CLASS_NAMES
+     )
+     print(create_confidence_label(top_preds))
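The softmax-plus-top-k logic inside `postprocess_predictions` can be sanity-checked without torch. This is an illustrative pure-Python mirror of that step (the real function operates on tensors; names here are for demonstration only):

```python
import math

def softmax_top_k(logits, class_names, top_k=3):
    """Softmax over raw scores, then return the top_k (name, prob) pairs."""
    # Subtract the max before exponentiating for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank classes by probability, descending, and keep the top_k
    ranked = sorted(zip(class_names, probs), key=lambda p: p[1], reverse=True)
    return dict(ranked[:top_k])

names = ["Tomato___Late_blight", "Apple___healthy", "Potato___Early_blight"]
top = softmax_top_k([2.0, 0.5, -1.0], names, top_k=2)
print(top)
```

Probabilities over all classes always sum to 1, so the top-k dict returned to the Gradio `Label` component is a truncated distribution, which matches how `postprocess_predictions` returns both the top-k and the full dictionaries.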