AI-Solutions-KK committed
Commit 792a500 · 1 Parent(s): 7b2fd8b

igonore & readme added

Files changed (2)
  1. .dockerignore +4 -0
  2. README.md +230 -2
.dockerignore ADDED
@@ -0,0 +1,4 @@
+ .venv/
+ __pycache__/
+ *.pyc
+ .git/
README.md CHANGED
@@ -1,4 +1,4 @@
- ---
  title: Mango Disease Api
  emoji: 📚
  colorFrom: yellow
@@ -7,6 +7,234 @@ sdk: docker
  pinned: false
  license: mit
  short_description: Mango Disease Detection API - Backend
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

New README.md content:
```
title: Mango Disease Api
emoji: 📚
colorFrom: yellow
pinned: false
license: mit
short_description: Mango Disease Detection API - Backend
```
# api/README.md
# Mango Disease Detection API - Backend

## 🚀 Quick Start (Local Testing)

### 1. Install Dependencies
```bash
cd api
pip install -r requirements.txt
```
### 2. Ensure Model Files Exist
```
api/
├── models/
│   └── efficientnetv2_b0_embedding_512.tflite ✅
└── embeddings_cache/
    ├── svc_model.pkl ✅
    └── classes.npy ✅
```
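Before starting the server, the layout above can be verified with a small script. This is a sketch, not part of the repo; the paths are taken from the tree above and assume it is run from `api/`:

```python
from pathlib import Path

# Paths taken from the tree above; adjust if your layout differs.
REQUIRED = [
    "models/efficientnetv2_b0_embedding_512.tflite",
    "embeddings_cache/svc_model.pkl",
    "embeddings_cache/classes.npy",
]

def missing_files(base="."):
    """Return the required model files that are absent under `base`."""
    return [p for p in REQUIRED if not (Path(base) / p).is_file()]
```

An empty return value means all three files are in place.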
### 3. Run Server
```bash
python main.py
```

Server starts at: **http://localhost:8000**

---
## 📖 Swagger UI Testing

### Open Swagger Docs
```
http://localhost:8000/docs
```

### Test Endpoints

#### 1. Health Check
```http
GET /health
```
**Expected Response:**
```json
{
  "status": "healthy",
  "tflite_model": true,
  "svm_model": true
}
```
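A client should not treat any 200 response as healthy. A minimal sketch of checking the payload above, using the field names from the example response:

```python
def is_healthy(payload: dict) -> bool:
    """True only when the service reports healthy and both models are loaded."""
    return (
        payload.get("status") == "healthy"
        and bool(payload.get("tflite_model"))
        and bool(payload.get("svm_model"))
    )
```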
#### 2. Diagnose Image
```http
POST /diagnose
```

**Request Body:**
```json
{
  "image": "BASE64_ENCODED_IMAGE_HERE",
  "enable_voice": false
}
```

**Response:**
```json
{
  "status": "success",
  "predicted_label": "Anthracnose",
  "confidence": 0.9234,
  "cause": "Fungal infection causing dark sunken lesions...",
  "treatment": "Spray Carbendazim 0.1%...",
  "prevention": "Avoid overhead irrigation..."
}
```
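For a small display client (e.g. the Pi), the response above can be reduced to a one-line summary. A sketch using only the fields shown in the example:

```python
def summarize(resp: dict) -> str:
    """Format a diagnosis response as a short human-readable line."""
    if resp.get("status") != "success":
        return "Diagnosis failed"
    # e.g. "Anthracnose (92.3% confidence)"
    return f"{resp['predicted_label']} ({resp['confidence']:.1%} confidence)"
```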
---

## 🧪 Testing with Python

### Convert Image to Base64
```python
import base64
import requests

# Read image
with open("test_image.jpg", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

# Send request
response = requests.post(
    "http://localhost:8000/diagnose",
    json={
        "image": img_b64,
        "enable_voice": False,  # Set True for voice output
    },
)

print(response.json())
```
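Base64 payloads are easy to truncate or double-encode. A standalone helper (no server needed) that encodes image bytes for the `image` field and verifies the round trip before sending:

```python
import base64

def encode_image(data: bytes) -> str:
    """Base64-encode image bytes and sanity-check the round trip."""
    b64 = base64.b64encode(data).decode()
    assert base64.b64decode(b64) == data  # decodes back to the same bytes
    return b64
```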
---

## 🔊 Voice Output Testing

### Enable Voice (Local Only)
```python
response = requests.post(
    "http://localhost:8000/diagnose",
    json={
        "image": img_b64,
        "enable_voice": True,  # 🔊 Voice enabled
    },
)
```

**Voice Device Priority:**
1. Bluetooth speaker (if connected)
2. Wired/USB speaker
3. Built-in speaker (PC/Laptop)
4. Raspberry Pi 3.5mm jack
5. HDMI audio
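The actual selection logic lives in `voice.py` (not shown in this commit). Purely as an illustration of the priority order above, a sketch that picks the first available device (the device identifiers are hypothetical):

```python
from typing import Optional, Set

# Hypothetical identifiers; the real voice.py may name devices differently.
PRIORITY = ["bluetooth", "wired_usb", "builtin", "pi_35mm_jack", "hdmi"]

def pick_output(available: Set[str]) -> Optional[str]:
    """Return the highest-priority device present in `available`, or None."""
    return next((d for d in PRIORITY if d in available), None)
```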
**On Raspberry Pi:**
```bash
# Install espeak for better voice quality
sudo apt-get install espeak pulseaudio

# Test audio
speaker-test -t wav -c 2
```

---
## ☁️ Deploy to HuggingFace Spaces

### 1. Create Space
- Go to: https://huggingface.co/spaces
- Create a new Space (SDK: **Docker** or **Gradio**)

### 2. Upload Files
```
api/
├── main.py
├── inference.py
├── voice.py
├── requirements.txt
├── models/
└── embeddings_cache/
```
### 3. Create `Dockerfile` (for HF Spaces)
```dockerfile
FROM python:3.10-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
### 4. Update Raspberry Pi Client
Change the API endpoint in your Pi client:
```python
API_URL = "https://your-space.hf.space/diagnose"
```
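Rather than hard-coding the URL, the client can read it from an environment variable with a local fallback. A sketch; `MANGO_API_URL` is an assumed variable name, not something the repo defines:

```python
import os

def diagnose_url() -> str:
    """API endpoint: env override (MANGO_API_URL, assumed name) or local fallback."""
    base = os.environ.get("MANGO_API_URL", "http://localhost:8000")
    return base.rstrip("/") + "/diagnose"
```

This lets the same client script work against both the local server and the deployed Space.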
---

## 🐛 Troubleshooting

### Voice Not Working?
```bash
# Check audio devices (Linux)
pactl list sinks short

# Test pyttsx3
python -c "import pyttsx3; pyttsx3.speak('Test')"
```
### Model Not Loading?
- Check file paths in `inference.py`
- Ensure `.tflite` and `.pkl` files are not corrupted
- Verify permissions: `chmod +r models/* embeddings_cache/*`
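A corrupted `.pkl` usually surfaces as an exception at load time. A sketch of a guarded loader that reports the problem instead of crashing at startup (assumes the standard `pickle` module; the real loader in `inference.py` may differ):

```python
import pickle

def load_svm(path: str):
    """Load the SVC model, returning (model, error_message)."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f), None
    except FileNotFoundError:
        return None, f"missing file: {path}"
    except Exception as exc:  # truncated/corrupted pickle, wrong format, ...
        return None, f"corrupted model file {path}: {exc}"
```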
### Swagger UI Not Loading?
- Clear browser cache
- Try `http://localhost:8000/redoc` (alternative docs)

---

## 📊 API Performance

- **Inference Time:** ~300-500 ms (CPU)
- **Model Size:** ~16 MB (TFLite)
- **Memory Usage:** ~200 MB RAM
- **Voice Latency:** Non-blocking (background thread)
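The ~300-500 ms figure is hardware-dependent and worth re-measuring locally. A generic timing helper (a sketch; works for any callable, including a request to `/diagnose`):

```python
import time

def time_call(fn, *args, repeats=5):
    """Average wall-clock time of fn(*args) in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) * 1000 / repeats
```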
---

## 🔒 Production Checklist

- [ ] Change CORS origins to specific IPs
- [ ] Add API key authentication
- [ ] Set up HTTPS (Let's Encrypt)
- [ ] Monitor with logging
- [ ] Rate limiting (10 req/min per IP)
- [ ] Disable voice on cloud (keep for Pi only)
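The rate-limiting item can be prototyped without extra dependencies. A minimal sliding-window sketch of "10 req/min per IP" (in a real FastAPI deployment a dedicated library would usually handle this; the class below is purely illustrative):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds per client IP."""

    def __init__(self, limit=10, window=60.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] >= self.window:  # drop entries outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```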
---

## 📞 Support

**Issues?** Check:
1. Model files present
2. Dependencies installed
3. Port 8000 not blocked
4. Logs: `uvicorn main:app --log-level debug`
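Item 3 above can be checked from Python. A sketch that probes whether anything is accepting TCP connections on the port (it tests connectivity, not firewall rules):

```python
import socket

def port_in_use(port=8000, host="127.0.0.1"):
    """True if something accepts TCP connections on host:port."""
    with socket.socket() as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0
```

If this returns False while the server is supposedly running, the server likely failed to start; check the uvicorn logs.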