DeepActionPotential committed
Commit a4dd111 · verified · 1 parent: 0e4247a

🚀 Initial upload of my app

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ demo/demo.mp4 filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Eslam Tarek
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,12 +1,97 @@
- ---
- title: LeafNet
- emoji: 📉
- colorFrom: gray
- colorTo: blue
- sdk: gradio
- sdk_version: 5.47.2
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # LeafNet - Healthy vs. Unhealthy Tea Classifier
+ *A deep learning tool to classify tea leaves as healthy or unhealthy from images.*
+
+ ![MIT License](https://img.shields.io/badge/license-MIT-green)
+
+ ---
+
+ ## Table of Contents
+
+ - [Demo](#demo)
+ - [Features](#features)
+ - [Installation / Setup](#installation--setup)
+ - [Usage](#usage)
+ - [Configuration / Options](#configuration--options)
+ - [Contributing](#contributing)
+ - [License](#license)
+ - [Acknowledgements / Credits](#acknowledgements--credits)
+
+ ---
+
+ ## Demo
+
+ ![Demo Screenshot](./demo/demo.png)
+ *Main interface for uploading and classifying tea leaf images.*
+
+ [Demo Video](./demo/demo.mp4)
+ *Video walkthrough of the classification workflow.*
+
+ ---
+
+ ## Features
+
+ - Classifies tea leaf images as healthy or unhealthy using deep learning.
+ - Simple, interactive web-based UI for image upload and prediction.
+ - Modular codebase for easy extension and retraining.
+ - Fast inference for both single and batch image processing.
+
+ ---
+
+ ## Installation / Setup
+
+ ```bash
+ # Create a virtual environment
+ python -m venv .venv
+
+ # Activate it
+ # On Linux/Mac:
+ source .venv/bin/activate
+ # On Windows:
+ .venv\Scripts\activate
+
+ # Install dependencies
+ pip install -r requirements.txt
+ ```
+
+ ---
+
+ ## Usage
+
+ Run the application:
+
+ ```bash
+ python app.py
+ ```
+
+ This will launch the web interface in your browser.
+ Upload an image of a tea leaf to get a health classification.
+
+ ---
+
+ ## Configuration / Options
+
+ - UI and model configuration can be adjusted in the source files.
+ - For advanced settings (e.g., model path, thresholds), edit the relevant Python files.
+
+ ---
+
+ ## Contributing
+
+ Contributions are welcome!
+ - Open issues for bugs or feature requests.
+ - Submit pull requests for improvements.
+ - Please follow standard Python code style and include tests where possible.
+
+ ---
+
+ ## License
+
+ This project is licensed under the MIT License. See the [LICENSE](./LICENSE) file for details.
+
+ ---
+
+ ## Acknowledgements / Credits
+
+ - Developed by Eslam Tarek.
+ - Thanks to the open-source community for libraries and inspiration.
__pycache__/ui.cpython-311.pyc ADDED
Binary file (1.51 kB).
 
__pycache__/utils.cpython-311.pyc ADDED
Binary file (2.14 kB).
 
app.py ADDED
@@ -0,0 +1,5 @@
+ from ui import build_ui
+
+ if __name__ == "__main__":
+     demo = build_ui()
+     demo.launch()
demo/demo.mp4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b20c7a3446aa813a424a4c915f110ce9a7705d7401a4c009e1beafb97d27312a
+ size 1371590
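The file above is not the video itself but a Git LFS pointer: a small text stub recording the spec version, the SHA-256 object id, and the byte size, while the actual blob lives in LFS storage. A minimal sketch of reading such a pointer (the parser is illustrative, not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", separated by one space
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b20c7a3446aa813a424a4c915f110ce9a7705d7401a4c009e1beafb97d27312a
size 1371590
"""
info = parse_lfs_pointer(pointer)
```

This is why the `.gitattributes` rule for `demo/demo.mp4` matters: without it, Git would store the full 1.3 MB video in the repository history instead of the pointer.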
demo/demo.png ADDED
models/tea_leaves_model.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12da32b1907f8e8e4fc3b1820e143f695b949fcd628c6a50387c2460a34fed9b
+ size 94425194
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ gradio==4.25.0
+ torch==2.2.2
+ torchvision==0.17.2  # imported in utils.py; 0.17.2 is the build paired with torch 2.2.2
+ pillow==10.3.0
+ # Add any other dependencies found in your ui.py or model code
tea-leafs-disease-resnet-classifier-94-f1-score.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
ui.py ADDED
@@ -0,0 +1,19 @@
+ import gradio as gr
+ from utils import predict_image
+
+ def build_ui():
+     with gr.Blocks() as demo:
+         gr.Markdown("# 🍃 Tea Leaf Disease Classifier")
+         gr.Markdown("Upload a tea leaf image and get the predicted disease class.")
+
+         with gr.Row():
+             image_input = gr.Image(type="pil", label="Upload Leaf Image")
+             output_text = gr.Textbox(label="Prediction Result")
+
+         gr.Button("Predict").click(
+             fn=predict_image,
+             inputs=image_input,
+             outputs=output_text
+         )
+
+     return demo
utils.py ADDED
@@ -0,0 +1,38 @@
+ import torch
+ import torch.nn as nn
+ from torchvision import transforms
+ from PIL import Image
+
+ # Class names (must match the label order used during training)
+ class_names = [
+     "Anthracnose", "algal leaf", "bird eye spot",
+     "brown blight", "gray light", "healthy",
+     "red leaf spot", "white spot"
+ ]
+
+ # Device setup
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ # Load model (assumes you used torch.save(model, "model.pth"))
+ model = torch.load("models/tea_leaves_model.pth", map_location=device, weights_only=False)
+ model.eval()
+
+ # Image preprocessing
+ transform = transforms.Compose([
+     transforms.Resize((224, 224)),
+     transforms.ToTensor(),
+     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
+ ])
+
+ def predict_image(image: Image.Image) -> str:
+     """
+     Run inference on a single PIL image and return the predicted class label.
+     """
+     img_t = transform(image).unsqueeze(0).to(device)
+
+     with torch.no_grad():
+         outputs = model(img_t)
+         _, preds = torch.max(outputs, 1)
+         pred_class = class_names[preds.item()]
+
+     return pred_class
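`predict_image` returns only the arg-max label. If a confidence score were wanted alongside it, the raw logits could be passed through a softmax before picking the winner. A self-contained sketch of that step, with hypothetical logit values (the `class_names` list mirrors the one in `utils.py`; the logits here are made up for illustration):

```python
import math

class_names = [
    "Anthracnose", "algal leaf", "bird eye spot",
    "brown blight", "gray light", "healthy",
    "red leaf spot", "white spot"
]

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)                              # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one image: one raw score per class
logits = [0.2, -1.3, 0.5, 0.1, -0.7, 3.9, -0.2, 0.4]

probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)
label, confidence = class_names[best], probs[best]
```

In the real pipeline this would operate on `outputs[0].tolist()` from the model call (or use `torch.softmax` directly); the plain-Python version above just shows the arithmetic.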