
# Plant Disease Detection - UI and Deployment

This directory contains the Gradio-based user interface and deployment code for the Plant Disease Detection project.

## Team Information

**Team Number:** [Add your team number]

**Team Members:**

- [Add team member names here]

## Links

## Project Structure

```
plant-disease-ui/
├── ui/
│   ├── app.py              # Main Gradio application
│   ├── config.py           # Configuration (class names, paths, etc.)
│   ├── model_loader.py     # Model loading utilities
│   ├── utils.py            # Utility functions (preprocessing, etc.)
│   └── examples/           # Example images for gallery
├── models/
│   ├── mock_model.py       # Mock model for development
│   └── best_model.pth      # (To be added) Trained model weights
├── docs/
│   └── deployment_guide.md # Deployment instructions
├── requirements.txt        # Python dependencies
└── README.md               # This file
```
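For orientation, `ui/config.py` might contain constants along these lines. This is a hypothetical sketch; the names and values below (`IMAGE_SIZE`, `NUM_CLASSES`, `MODEL_PATH`, `CLASS_NAMES`) are assumptions, not the actual file contents:

```python
# Hypothetical sketch of ui/config.py -- actual names and values may differ.

# Input size expected by the model (see Troubleshooting: images are 256x256)
IMAGE_SIZE = (256, 256)

# Number of PlantVillage classes the model predicts
NUM_CLASSES = 39

# Where trained weights are expected on disk
MODEL_PATH = "models/best_model.pth"

# A few example class labels in "Plant___Disease" form (illustrative only)
CLASS_NAMES = [
    "Apple___Apple_scab",
    "Apple___healthy",
    "Tomato___Late_blight",
]
```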

## Features

### Core Features

- ✅ **Image Upload:** Upload plant leaf images for disease detection
- ✅ **Top-K Predictions:** Display top 10 predictions with confidence scores
- ✅ **Formatted Output:** Clean, readable prediction results

### Advanced Features

- ✅ **Multiple Models:** Switch between different trained models (CNN, Transfer Learning)
- ✅ **Example Gallery:** Pre-loaded example images for quick testing
- ✅ **Batch Processing:** Upload and classify multiple images at once
- ✅ **Flag Predictions:** Report incorrect predictions
- ✅ **Confidence Threshold:** Filter predictions by minimum confidence level
- ✅ **Detailed Information:** View plant type, disease name, and health status
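The Top-K and confidence-threshold features can be combined in one small helper. The sketch below is illustrative only; the function name and data shapes are assumptions, not the project's actual code:

```python
def format_predictions(probs, class_names, top_k=10, min_confidence=0.0):
    """Return the top-k (label, probability) pairs above a confidence threshold.

    probs: per-class probabilities, same length as class_names.
    """
    pairs = sorted(zip(class_names, probs), key=lambda p: p[1], reverse=True)
    return [(label, round(p, 4)) for label, p in pairs[:top_k] if p >= min_confidence]

# Example with three classes:
probs = [0.7, 0.2, 0.1]
names = ["Tomato___Late_blight", "Tomato___healthy", "Apple___Apple_scab"]
print(format_predictions(probs, names, top_k=2, min_confidence=0.15))
# [('Tomato___Late_blight', 0.7), ('Tomato___healthy', 0.2)]
```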

## Setup Instructions

### 1. Install Dependencies

```bash
# Create a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install required packages
pip install -r requirements.txt
```

### 2. Add Example Images (Optional)

To enable the example gallery feature:

```bash
# Create examples directory
mkdir -p ui/examples

# Add plant disease images to ui/examples/
# You can download sample images from the PlantVillage dataset
```

To download example images programmatically:

```python
import os

from datasets import load_dataset

# Load the PlantVillage dataset from the Hugging Face Hub
dataset = load_dataset("EdBianchi/plant-village")

# Save 10 evenly spaced example images
os.makedirs("ui/examples", exist_ok=True)

for i in range(10):
    img = dataset["train"][i * 1000]["image"]  # Sample every 1000th image
    img.save(f"ui/examples/example_{i}.jpg")
```

### 3. Run the App Locally

**Option A: Using the mock model (for development)**

```bash
cd ui
python app.py
```

The app will start at http://localhost:7860.

**Option B: Using your trained model**

First, modify `app.py` to load your real model:

```python
# In app.py, change the last line:
demo = create_interface(use_mock=False)  # Change to False
```

Then run:

```bash
cd ui
python app.py
```
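For context, a development mock like `models/mock_model.py` can be as simple as returning a random probability distribution over the 39 classes. The sketch below is an assumption about what such a stand-in might look like, not the project's actual code:

```python
import random

NUM_CLASSES = 39  # PlantVillage class count used throughout this project

class MockModel:
    """Stand-in model that returns a random softmax-like distribution."""

    def predict(self, image):
        # Ignore the image entirely; produce random positive scores
        scores = [random.random() for _ in range(NUM_CLASSES)]
        total = sum(scores)
        return [s / total for s in scores]  # normalized to sum to 1

probs = MockModel().predict(None)
print(len(probs), round(sum(probs), 6))  # 39 1.0
```

Because the UI only consumes a list of probabilities, swapping in the real model later only changes where that list comes from.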

### 4. Configure for Real Model

When your team's model is ready, you have several options:

**Option 1: Load from a local file**

```python
# In model_loader.py, update the model path
MODEL_PATH = "models/best_model.pth"

# Then in app.py:
app = PlantDiseaseApp(use_mock=False)
```

**Option 2: Load from ClearML**

```python
# In app.py or model_loader.py:
loader = ModelLoader(use_mock=False)
model = loader.load_from_clearml(
    project_name="Plant Disease Detection",
    task_name="CNN Training",
)
```

**Option 3: Load from the Hugging Face Hub**

```python
# First, upload your model to the HF Hub
# Then in model_loader.py:
loader = ModelLoader(use_mock=False)
model = loader.load_from_huggingface("your-username/plant-disease-model")
```

## Deployment to Hugging Face Spaces

### Step 1: Create a Hugging Face Account

1. Go to https://huggingface.co/ and create an account
2. Verify your email address

### Step 2: Create a New Space

1. Click on your profile → "New Space"
2. Space name: `plant-disease-detection`
3. License: Apache 2.0
4. Select SDK: **Gradio**
5. Make it **Public**
6. Click "Create Space"

### Step 3: Prepare Files for Deployment

Create these files in the root of your Space:

**app.py** (simplified version for HF Spaces)

```python
# Copy ui/app.py and modify the imports to work in the flat structure
```

**requirements.txt**

```
torch
torchvision
gradio
Pillow
numpy
huggingface-hub
```

**README.md** (for the Space)

```markdown
---
title: Plant Disease Detection
emoji: 🌱
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 4.0.0
app_file: app.py
pinned: false
---

# Plant Disease Detection

AI-powered plant disease detection from leaf images. Developed by
[Your Team Name] for King's College London.
```

### Step 4: Upload Your Model

**Option A: Upload weights to the Space**

1. Upload your `best_model.pth` to the Space
2. Modify `app.py` to load from this file

**Option B: Use the Hugging Face Hub**

1. Upload the model to the HF Model Hub:

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="models/best_model.pth",
    path_in_repo="model.pth",
    repo_id="your-username/plant-disease-model",
    repo_type="model",
)
```

2. Load it in the app:

```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="your-username/plant-disease-model",
    filename="model.pth",
)
```

**Option C: Fetch from ClearML**

1. Add your ClearML credentials to the Space Secrets
2. Use the `load_from_clearml()` function

### Step 5: Deploy

1. Upload all files to your HF Space repository
2. The app will build and deploy automatically
3. Test at: https://huggingface.co/spaces/your-username/plant-disease-detection

## Model Integration Guide

### Your CNN Model Structure

When integrating your trained model, update `model_loader.py` with your actual CNN architecture:

```python
import torch.nn as nn

class YourCNNModel(nn.Module):
    def __init__(self, num_classes=39):
        super().__init__()
        # Add your actual CNN architecture here.
        # It must match the architecture used for training,
        # or the saved weights will not load.

    def forward(self, x):
        # Your forward pass
        return x
```
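As a concrete illustration only (this is not the team's architecture), a minimal CNN with the expected 39-way output might look like:

```python
import torch
import torch.nn as nn

class TinyPlantCNN(nn.Module):
    """Illustrative example only -- replace with your real architecture."""

    def __init__(self, num_classes=39):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),              # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # -> (32, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# One 256x256 RGB image -> 39 class scores
logits = TinyPlantCNN()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 39])
```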

### Loading Trained Weights

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Instantiate the architecture
model = YourCNNModel(num_classes=39)

# Load the checkpoint file
checkpoint = torch.load("path/to/best_model.pth", map_location=device)

# Use exactly one of the following, depending on how the model was saved:

# If you saved the entire model object:
model = checkpoint

# If you saved just the state_dict:
model.load_state_dict(checkpoint)

# If you saved a dict with optimizer state and other info:
model.load_state_dict(checkpoint["model_state_dict"])
```

## Testing the UI

### Manual Testing Checklist

- [ ] Upload a single image and get predictions
- [ ] Try different models from the dropdown
- [ ] Adjust the confidence threshold slider
- [ ] Test the example gallery (if images were added)
- [ ] Upload multiple images for batch processing
- [ ] Flag a prediction
- [ ] Check that all tabs load correctly
- [ ] Verify predictions match the expected classes

### Automated Testing

```bash
# Run tests
cd ui
python -m pytest test_app.py  # (create tests if needed)
```
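Since `test_app.py` does not exist yet, a starting point might look like the following. The helper under test is defined inline here for illustration; in practice it would be imported from `ui/utils.py` (a hypothetical location):

```python
# test_app.py -- illustrative starting point; helper defined inline for clarity.

def top_k(probs, k):
    """Return indices of the k highest probabilities, best first."""
    return sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

def test_top_k_orders_by_confidence():
    assert top_k([0.1, 0.7, 0.2], k=2) == [1, 2]

def test_top_k_handles_small_k():
    assert top_k([0.5, 0.5], k=1) == [0]
```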

## Troubleshooting

### Common Issues

**1. ModuleNotFoundError**

```bash
# Make sure all dependencies are installed
pip install -r requirements.txt
```

**2. Model loading errors**

Check that the model architecture matches the saved weights, and that you are using the same `num_classes` (39) as during training.

**3. Image size issues**

Ensure images are resized to (256, 256); check the `IMAGE_SIZE` setting in `config.py`.

**4. CUDA/GPU errors**

The app automatically falls back to CPU. You can verify GPU availability with `torch.cuda.is_available()`.
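The resize check from issue 3 can be reproduced in isolation. In this sketch the function name `preprocess` is an assumption (the real logic lives in `ui/utils.py`), but it shows the input shape the model expects:

```python
import numpy as np
from PIL import Image

IMAGE_SIZE = (256, 256)  # must match the IMAGE_SIZE setting in config.py

def preprocess(img: Image.Image) -> np.ndarray:
    """Resize to the model's input size and scale pixels to [0, 1]."""
    img = img.convert("RGB").resize(IMAGE_SIZE)
    return np.asarray(img, dtype=np.float32) / 255.0

# A wrongly sized image is fixed by preprocessing:
arr = preprocess(Image.new("RGB", (123, 456), color="green"))
print(arr.shape)  # (256, 256, 3)
```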

## Contributing

When contributing to this UI:

1. Create a new branch for your feature
2. Test locally with the mock model first
3. Test with the real model before pushing
4. Update this README if you add new features
5. Ensure the code is well commented

## TODO

- [ ] Add more example images to the gallery
- [ ] Integrate the actual trained models
- [ ] Add disease information / treatment suggestions
- [ ] Implement a persistent flagging system (database)
- [ ] Add data visualization for batch results
- [ ] Create comprehensive tests

## Resources

## License

[Specify your license here]

## Acknowledgments

- King's College London, 5CCSAGAP Course
- PlantVillage Dataset creators
- Course instructors and TAs