nusaibah0110 committed on
Commit 26c7cba · 1 Parent(s): 4f997f9

Add comprehensive LLM endpoints and enhance FastAPI backend
- Add Google Gemini AI integration for chat and report generation
- Implement /api/chat endpoint for conversational AI assistant
- Implement /api/generate-report endpoint for automated report generation
- Add Pydantic models for request validation
- Update health check endpoint to show AI model and LLM status
- Add google-generativeai and python-dotenv to requirements
- Create .env.example template for environment variables
- Add comprehensive API documentation (API_DOCUMENTATION.md)
- Create test_api.py script for endpoint testing
- Update .gitignore to exclude .env files
- Support both GEMINI_API_KEY and VITE_GEMINI_API_KEY env vars
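
The dual env-var support in the last bullet mirrors the lookup the backend diff below performs with `os.getenv("GEMINI_API_KEY") or os.getenv("VITE_GEMINI_API_KEY")`. A minimal, self-contained sketch of that fallback (the helper name is illustrative, not part of the commit):

```python
import os

def resolve_gemini_key(env=os.environ):
    """GEMINI_API_KEY takes precedence; VITE_GEMINI_API_KEY is the
    frontend-compatibility fallback, matching the backend's lookup."""
    return env.get("GEMINI_API_KEY") or env.get("VITE_GEMINI_API_KEY")
```

Because `or` short-circuits, an empty-string key also falls through to the VITE_ variant, which matches how the backend later checks `bool(GEMINI_API_KEY)`.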

.env.example ADDED
@@ -0,0 +1,7 @@
+ # Gemini API Key for LLM functionality
+ # Get your API key from: https://makersuite.google.com/app/apikey
+ GEMINI_API_KEY=your_gemini_api_key_here
+ VITE_GEMINI_API_KEY=your_gemini_api_key_here
+
+ # Optional: Model Configuration
+ # GEMINI_MODEL=gemini-1.5-flash
.gitignore CHANGED
@@ -19,6 +19,11 @@ __pycache__/
  *.pyd
  .venv/

+ # Environment variables
+ .env
+ .env.local
+ .env.*.local
+
  # Editor directories and files
  .vscode/*
  !.vscode/extensions.json
API_DOCUMENTATION.md ADDED
@@ -0,0 +1,363 @@
+ # Pathora Colposcopy API Documentation
+
+ ## Overview
+ FastAPI backend for the Pathora Colposcopy Assistant with AI model inference and LLM capabilities.
+
+ ## Base URL
+ - Local: `http://localhost:8000`
+ - Production: `https://huggingface.co/spaces/ManalifeAI/Pathora_Colposcopy_Assistant`
+
+ ---
+
+ ## Endpoints
+
+ ### Health Check
+ **GET** `/health`
+
+ Check API health status and verify that the AI models and LLM are available.
+
+ **Response:**
+ ```json
+ {
+   "status": "healthy",
+   "service": "Pathora Colposcopy API",
+   "ai_models": {
+     "acetowhite_model": "loaded",
+     "cervix_model": "loaded"
+   },
+   "llm": {
+     "gemini_available": true,
+     "api_key_configured": true
+   }
+ }
+ ```
+
+ ---
+
+ ## AI Model Endpoints
+
+ ### 1. Acetowhite Contour Detection
+ **POST** `/api/infer-aw-contour`
+
+ Detect acetowhite lesions and generate contour overlays.
+
+ **Parameters:**
+ - `file` (UploadFile): Image file
+ - `conf_threshold` (float, optional): Confidence threshold (0.0-1.0, default: 0.4)
+
+ **Response:**
+ ```json
+ {
+   "status": "success",
+   "message": "Inference completed successfully",
+   "result_image": "base64_encoded_image",
+   "contours": [
+     {
+       "points": [[x1, y1], [x2, y2], ...],
+       "area": 1234.5,
+       "confidence": 0.85
+     }
+   ],
+   "detections": 2,
+   "confidence_threshold": 0.4
+ }
+ ```
+
+ ### 2. Cervix Bounding Box Detection
+ **POST** `/api/infer-cervix-bbox`
+
+ Detect the cervix location and return bounding boxes.
+
+ **Parameters:**
+ - `file` (UploadFile): Image file
+ - `conf_threshold` (float, optional): Confidence threshold (0.0-1.0, default: 0.4)
+
+ **Response:**
+ ```json
+ {
+   "status": "success",
+   "message": "Cervix bounding box detection completed",
+   "result_image": "base64_encoded_image",
+   "bounding_boxes": [
+     {
+       "x1": 100,
+       "y1": 150,
+       "x2": 400,
+       "y2": 450,
+       "confidence": 0.92,
+       "class": "cervix"
+     }
+   ],
+   "detections": 1,
+   "frame_width": 1920,
+   "frame_height": 1080,
+   "confidence_threshold": 0.4
+ }
+ ```
+
+ ### 3. Batch Image Inference
+ **POST** `/api/batch-infer`
+
+ Process multiple images for acetowhite detection in one request.
+
+ **Parameters:**
+ - `files` (List[UploadFile]): Multiple image files
+ - `conf_threshold` (float, optional): Confidence threshold (default: 0.4)
+
+ **Response:**
+ ```json
+ {
+   "status": "completed",
+   "total_files": 3,
+   "results": [
+     {
+       "filename": "image1.jpg",
+       "status": "success",
+       "result_image": "base64...",
+       "contours": [...],
+       "detections": 2
+     }
+   ]
+ }
+ ```
+
+ ### 4. Single Frame Analysis
+ **POST** `/infer/image`
+
+ Analyze a single image for cervix quality assessment.
+
+ **Parameters:**
+ - `file` (UploadFile): Image file
+
+ **Response:**
+ ```json
+ {
+   "status": "Excellent",
+   "quality_percent": 95,
+   "cervix_detected": true,
+   "focus_score": 0.89,
+   "brightness_score": 0.92
+ }
+ ```
+
+ ### 5. Video Frame Analysis
+ **POST** `/infer/video`
+
+ Process video frames for quality assessment.
+
+ **Parameters:**
+ - `file` (UploadFile): Video file
+
+ **Response:**
+ ```json
+ {
+   "total_frames": 150,
+   "results": [
+     {
+       "frame": 0,
+       "status": "Excellent",
+       "quality_percent": 95
+     }
+   ]
+ }
+ ```
+
+ ---
+
+ ## LLM Endpoints
+
+ ### 6. Chat with AI Assistant
+ **POST** `/api/chat`
+
+ Conversational AI endpoint for colposcopy guidance.
+
+ **Request Body:**
+ ```json
+ {
+   "message": "What are the signs of high-grade lesions?",
+   "history": [
+     {
+       "role": "user",
+       "text": "Hello"
+     },
+     {
+       "role": "bot",
+       "text": "Hello! I'm Pathora AI..."
+     }
+   ],
+   "system_prompt": "Optional custom system prompt"
+ }
+ ```
+
+ **Response:**
+ ```json
+ {
+   "status": "success",
+   "response": "High-grade lesions typically show...",
+   "model": "gemini-1.5-flash"
+ }
+ ```
+
+ ### 7. Generate Colposcopy Report
+ **POST** `/api/generate-report`
+
+ Generate a comprehensive colposcopy report based on patient data and findings.
+
+ **Request Body:**
+ ```json
+ {
+   "patient_data": {
+     "age": 35,
+     "gravida": 2,
+     "para": 2,
+     "lmp": "2024-02-01",
+     "indication": "Abnormal Pap smear"
+   },
+   "exam_findings": {
+     "native": {
+       "cervix_visible": true,
+       "transformation_zone": "Type 1"
+     },
+     "acetic_acid": {
+       "acetowhite_lesions": true,
+       "location": "6-9 o'clock"
+     },
+     "green_filter": {
+       "vascular_patterns": "Punctation"
+     },
+     "lugol": {
+       "iodine_uptake": "Partial"
+     }
+   },
+   "images": [],
+   "system_prompt": "Optional custom prompt"
+ }
+ ```
+
+ **Response:**
+ ```json
+ {
+   "status": "success",
+   "report": "COLPOSCOPY REPORT\n\nCLINICAL SUMMARY:\n...",
+   "model": "gemini-1.5-flash"
+ }
+ ```
+
+ ---
+
+ ## Environment Variables
+
+ Required for LLM functionality:
+
+ ```bash
+ GEMINI_API_KEY=your_api_key_here
+ VITE_GEMINI_API_KEY=your_api_key_here  # For frontend compatibility
+ ```
+
+ Get your API key from: https://makersuite.google.com/app/apikey
+
+ ---
+
+ ## Error Responses
+
+ All endpoints return standardized error responses:
+
+ ```json
+ {
+   "detail": "Error message description"
+ }
+ ```
+
+ **Common HTTP Status Codes:**
+ - `400`: Bad Request (invalid file or parameters)
+ - `500`: Internal Server Error (AI model error, processing failure)
+ - `503`: Service Unavailable (LLM not configured, API key missing)
+
+ ---
+
+ ## Model Information
+
+ ### AI Models
+ - **Acetowhite Detection**: YOLO-based segmentation model (`AW_yolo.pt`)
+ - **Cervix Detection**: YOLO-based object detection model (`cervix_yolo.pt`)
+
+ ### LLM Model
+ - **Gemini 1.5 Flash**: Google's generative AI for chat and report generation
+ - Temperature: 0.4 (balanced between creativity and consistency)
+ - Max Output Tokens: 2048
+
+ ---
+
+ ## Usage Examples
+
+ ### Python
+ ```python
+ import requests
+
+ # AI Model Inference
+ with open('image.jpg', 'rb') as f:
+     response = requests.post(
+         'http://localhost:8000/api/infer-aw-contour',
+         files={'file': f},
+         data={'conf_threshold': 0.5}
+     )
+ result = response.json()
+
+ # Chat
+ response = requests.post(
+     'http://localhost:8000/api/chat',
+     json={
+         'message': 'What is Reid colposcopic index?',
+         'history': []
+     }
+ )
+ chat_result = response.json()
+ ```
+
+ ### JavaScript/TypeScript
+ ```typescript
+ // AI Model Inference
+ const formData = new FormData();
+ formData.append('file', imageFile);
+ formData.append('conf_threshold', '0.5');
+
+ const response = await fetch('/api/infer-aw-contour', {
+   method: 'POST',
+   body: formData
+ });
+ const result = await response.json();
+
+ // Chat
+ const chatResponse = await fetch('/api/chat', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({
+     message: 'Explain transformation zone types',
+     history: []
+   })
+ });
+ const chatResult = await chatResponse.json();
+ ```
+
+ ---
+
+ ## Development
+
+ ### Running Locally
+ ```bash
+ # Install dependencies
+ cd backend
+ pip install -r requirements.txt
+
+ # Set environment variables
+ export GEMINI_API_KEY=your_key
+
+ # Run server
+ uvicorn backend.app:app --reload --host 0.0.0.0 --port 8000
+ ```
+
+ ### Building with Docker
+ ```bash
+ docker build -t pathora-colpo .
+ docker run -p 7860:7860 -e GEMINI_API_KEY=your_key pathora-colpo
+ ```
backend/app.py CHANGED
@@ -1,7 +1,8 @@
- from fastapi import FastAPI, File, UploadFile, HTTPException
+ from fastapi import FastAPI, File, UploadFile, HTTPException, Body
  from fastapi.responses import JSONResponse
  from fastapi.middleware.cors import CORSMiddleware
  from fastapi.staticfiles import StaticFiles
+ from pydantic import BaseModel
  import cv2
  import numpy as np
  import tempfile
@@ -10,8 +11,18 @@ from io import BytesIO
  from PIL import Image
  import uvicorn
  import traceback
+ import json
+ from typing import List, Dict, Optional
  from .inference import infer_aw_contour, analyze_frame, analyze_video_frame, infer_cervix_bbox

+ # Import Google Gemini (optional - graceful degradation if not installed)
+ try:
+     import google.generativeai as genai
+     GEMINI_AVAILABLE = True
+ except ImportError:
+     GEMINI_AVAILABLE = False
+     print("⚠️ google-generativeai not installed. LLM endpoints will be unavailable.")
+
  app = FastAPI(title="Pathora Colposcopy API", version="1.0.0")

  # Add CORS middleware to allow requests from frontend
@@ -23,6 +34,34 @@ app.add_middleware(
      allow_headers=["*"],
  )

+ # Initialize Gemini if available
+ GEMINI_API_KEY = os.getenv("GEMINI_API_KEY") or os.getenv("VITE_GEMINI_API_KEY")
+ if GEMINI_AVAILABLE and GEMINI_API_KEY:
+     try:
+         genai.configure(api_key=GEMINI_API_KEY)
+         print("✅ Gemini AI configured successfully")
+     except Exception as e:
+         print(f"⚠️ Failed to configure Gemini: {e}")
+         GEMINI_AVAILABLE = False
+ elif GEMINI_AVAILABLE:
+     print("⚠️ GEMINI_API_KEY not found in environment variables")
+
+ # Pydantic models for LLM endpoints
+ class ChatMessage(BaseModel):
+     role: str
+     text: str
+
+ class ChatRequest(BaseModel):
+     message: str
+     history: List[ChatMessage] = []
+     system_prompt: Optional[str] = None
+
+ class ReportGenerationRequest(BaseModel):
+     patient_data: Dict
+     exam_findings: Dict
+     images: Optional[List[str]] = []  # base64 encoded images
+     system_prompt: Optional[str] = None
+

  class SPAStaticFiles(StaticFiles):
      async def get_response(self, path: str, scope):
@@ -35,7 +74,148 @@ class SPAStaticFiles:
  @app.get("/health")
  async def health_check():
      """Health check endpoint"""
-     return {"status": "healthy", "service": "Pathora Colposcopy API"}
+     return {
+         "status": "healthy",
+         "service": "Pathora Colposcopy API",
+         "ai_models": {
+             "acetowhite_model": "loaded",
+             "cervix_model": "loaded"
+         },
+         "llm": {
+             "gemini_available": GEMINI_AVAILABLE,
+             "api_key_configured": bool(GEMINI_API_KEY)
+         }
+     }
+
+
+ @app.post("/api/chat")
+ async def chat_endpoint(request: ChatRequest):
+     """
+     LLM Chat endpoint for conversational AI assistant
+
+     Args:
+         request: ChatRequest with message, history, and optional system_prompt
+
+     Returns:
+         JSON with AI response
+     """
+     if not GEMINI_AVAILABLE:
+         raise HTTPException(
+             status_code=503,
+             detail="Gemini AI is not available. Install google-generativeai package."
+         )
+
+     if not GEMINI_API_KEY:
+         raise HTTPException(
+             status_code=503,
+             detail="GEMINI_API_KEY not configured in environment variables"
+         )
+
+     try:
+         # Use system prompt or default
+         system_prompt = request.system_prompt or """You are Pathora AI — a specialist colposcopy assistant. \
+ Provide expert guidance on examination techniques, findings interpretation, and management guidelines. \
+ Be professional, evidence-based, and concise."""
+
+         # Initialize Gemini model
+         model = genai.GenerativeModel(
+             model_name="gemini-1.5-flash",
+             system_instruction=system_prompt
+         )
+
+         # Build conversation history
+         chat_history = []
+         for msg in request.history:
+             role = "model" if msg.role == "bot" else "user"
+             chat_history.append({
+                 "role": role,
+                 "parts": [msg.text]
+             })
+
+         # Start chat with history
+         chat = model.start_chat(history=chat_history)
+
+         # Send message and get response
+         response = chat.send_message(request.message)
+
+         return JSONResponse({
+             "status": "success",
+             "response": response.text,
+             "model": "gemini-1.5-flash"
+         })
+
+     except Exception as e:
+         print(f"❌ Chat error: {e}")
+         traceback.print_exc()
+         raise HTTPException(status_code=500, detail=f"Chat error: {str(e)}")
+
+
+ @app.post("/api/generate-report")
+ async def generate_report_endpoint(request: ReportGenerationRequest):
+     """
+     Generate colposcopy report using LLM based on patient data and exam findings
+
+     Args:
+         request: ReportGenerationRequest with patient data, exam findings, and images
+
+     Returns:
+         JSON with generated report
+     """
+     if not GEMINI_AVAILABLE:
+         raise HTTPException(
+             status_code=503,
+             detail="Gemini AI is not available. Install google-generativeai package."
+         )
+
+     if not GEMINI_API_KEY:
+         raise HTTPException(
+             status_code=503,
+             detail="GEMINI_API_KEY not configured in environment variables"
+         )
+
+     try:
+         # Use system prompt or default
+         system_prompt = request.system_prompt or """You are an expert colposcopy AI assistant acting as a specialist gynaecologist.
+ Analyse ALL the clinical data and the attached colposcopy images to generate a professional, evidence-based colposcopy report conclusion."""
+
+         # Build prompt with patient data and findings
+         prompt_parts = []
+
+         # Add patient data
+         prompt_parts.append("PATIENT DATA:")
+         prompt_parts.append(json.dumps(request.patient_data, indent=2))
+
+         # Add exam findings
+         prompt_parts.append("\nEXAMINATION FINDINGS:")
+         prompt_parts.append(json.dumps(request.exam_findings, indent=2))
+
+         # Add instruction
+         prompt_parts.append("\nBased on the above data, generate a professional colposcopy report with:")
+         prompt_parts.append("1. Summary of findings")
+         prompt_parts.append("2. Clinical impression")
+         prompt_parts.append("3. Recommendations")
+
+         full_prompt = "\n".join(prompt_parts)
+
+         # Initialize model
+         model = genai.GenerativeModel(
+             model_name="gemini-1.5-flash",
+             system_instruction=system_prompt
+         )
+
+         # Generate report
+         response = model.generate_content(full_prompt)
+
+         return JSONResponse({
+             "status": "success",
+             "report": response.text,
+             "model": "gemini-1.5-flash"
+         })
+
+     except Exception as e:
+         print(f"❌ Report generation error: {e}")
+         traceback.print_exc()
+         raise HTTPException(status_code=500, detail=f"Report generation error: {str(e)}")


  @app.post("/api/infer-aw-contour")
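
The `/api/chat` handler above maps the frontend's `bot` role onto Gemini's `model` role and wraps each message's text in a `parts` list before replaying the history. That conversion can be isolated and checked without a live API key (the standalone function name is illustrative; the mapping itself is taken from the diff):

```python
def to_gemini_history(history):
    """Convert [{'role': 'user'|'bot', 'text': ...}] dicts into the
    [{'role': 'user'|'model', 'parts': [...]}] shape expected by
    genai start_chat(history=...), as the /api/chat endpoint does."""
    return [
        {"role": "model" if m["role"] == "bot" else "user", "parts": [m["text"]]}
        for m in history
    ]
```

Keeping this as a pure function makes the role mapping unit-testable, while the endpoint stays responsible only for wiring it to `model.start_chat`.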
backend/requirements.txt CHANGED
@@ -8,3 +8,6 @@ ultralytics
  pillow==10.2.0
  python-multipart==0.0.6
  setuptools>=69.0.0
+ google-generativeai>=0.3.0
+ python-dotenv>=1.0.0
+ pydantic>=2.0.0
backend/test_api.py ADDED
@@ -0,0 +1,275 @@
+ """
+ Test script for Pathora Colposcopy API endpoints
+ Demonstrates how to use both AI model and LLM endpoints
+ """
+
+ import requests
+ import json
+ import base64
+ from pathlib import Path
+
+ # API Configuration
+ BASE_URL = "http://localhost:8000"  # Change to your deployment URL
+ API_KEY = "your_gemini_api_key_here"  # For local testing
+
+ def test_health_check():
+     """Test the health check endpoint"""
+     print("=" * 60)
+     print("Testing Health Check Endpoint")
+     print("=" * 60)
+
+     response = requests.get(f"{BASE_URL}/health")
+     print(f"Status Code: {response.status_code}")
+     print(f"Response: {json.dumps(response.json(), indent=2)}")
+     print()
+
+ def test_acetowhite_detection(image_path: str):
+     """Test acetowhite contour detection"""
+     print("=" * 60)
+     print("Testing Acetowhite Contour Detection")
+     print("=" * 60)
+
+     with open(image_path, 'rb') as f:
+         files = {'file': f}
+         data = {'conf_threshold': 0.5}
+
+         response = requests.post(
+             f"{BASE_URL}/api/infer-aw-contour",
+             files=files,
+             data=data
+         )
+
+     print(f"Status Code: {response.status_code}")
+     result = response.json()
+
+     # Print without base64 image for readability
+     print(f"Status: {result.get('status')}")
+     print(f"Detections: {result.get('detections')}")
+     print(f"Contours: {len(result.get('contours', []))}")
+     print(f"Confidence Threshold: {result.get('confidence_threshold')}")
+
+     # Save result image if available
+     if result.get('result_image'):
+         output_path = "test_output_aw.png"
+         img_data = base64.b64decode(result['result_image'])
+         with open(output_path, 'wb') as f:
+             f.write(img_data)
+         print(f"Result image saved to: {output_path}")
+     print()
+
+ def test_cervix_detection(image_path: str):
+     """Test cervix bounding box detection"""
+     print("=" * 60)
+     print("Testing Cervix Bounding Box Detection")
+     print("=" * 60)
+
+     with open(image_path, 'rb') as f:
+         files = {'file': f}
+         data = {'conf_threshold': 0.4}
+
+         response = requests.post(
+             f"{BASE_URL}/api/infer-cervix-bbox",
+             files=files,
+             data=data
+         )
+
+     print(f"Status Code: {response.status_code}")
+     result = response.json()
+
+     print(f"Status: {result.get('status')}")
+     print(f"Detections: {result.get('detections')}")
+     print(f"Bounding Boxes: {json.dumps(result.get('bounding_boxes', []), indent=2)}")
+
+     # Save result image if available
+     if result.get('result_image'):
+         output_path = "test_output_cervix.png"
+         img_data = base64.b64decode(result['result_image'])
+         with open(output_path, 'wb') as f:
+             f.write(img_data)
+         print(f"Result image saved to: {output_path}")
+     print()
+
+ def test_batch_inference(image_paths: list):
+     """Test batch inference on multiple images"""
+     print("=" * 60)
+     print("Testing Batch Inference")
+     print("=" * 60)
+
+     files = [('files', open(img, 'rb')) for img in image_paths]
+     data = {'conf_threshold': 0.5}
+
+     response = requests.post(
+         f"{BASE_URL}/api/batch-infer",
+         files=files,
+         data=data
+     )
+
+     # Close file handles
+     for _, f in files:
+         f.close()
+
+     print(f"Status Code: {response.status_code}")
+     result = response.json()
+
+     print(f"Status: {result.get('status')}")
+     print(f"Total Files: {result.get('total_files')}")
+
+     for i, res in enumerate(result.get('results', [])):
+         print(f"\nImage {i+1}: {res.get('filename')}")
+         print(f"  Status: {res.get('status')}")
+         print(f"  Detections: {res.get('detections')}")
+     print()
+
+ def test_chat():
+     """Test LLM chat endpoint"""
+     print("=" * 60)
+     print("Testing Chat Endpoint")
+     print("=" * 60)
+
+     payload = {
+         "message": "What are the typical signs of a high-grade squamous intraepithelial lesion (HSIL) on colposcopy?",
+         "history": []
+     }
+
+     response = requests.post(
+         f"{BASE_URL}/api/chat",
+         json=payload
+     )
+
+     print(f"Status Code: {response.status_code}")
+
+     if response.status_code == 200:
+         result = response.json()
+         print(f"Status: {result.get('status')}")
+         print(f"Model: {result.get('model')}")
+         print(f"Response:\n{result.get('response')}")
+     else:
+         print(f"Error: {response.json()}")
+     print()
+
+ def test_chat_with_history():
+     """Test chat with conversation history"""
+     print("=" * 60)
+     print("Testing Chat with History")
+     print("=" * 60)
+
+     payload = {
+         "message": "What about low-grade lesions?",
+         "history": [
+             {
+                 "role": "user",
+                 "text": "What are high-grade lesions?"
+             },
+             {
+                 "role": "bot",
+                 "text": "High-grade lesions (HSIL) show dense acetowhite epithelium, coarse punctation, and sharp borders."
+             }
+         ]
+     }
+
+     response = requests.post(
+         f"{BASE_URL}/api/chat",
+         json=payload
+     )
+
+     print(f"Status Code: {response.status_code}")
+
+     if response.status_code == 200:
+         result = response.json()
+         print(f"Response:\n{result.get('response')}")
+     else:
+         print(f"Error: {response.json()}")
+     print()
+
+ def test_report_generation():
+     """Test report generation endpoint"""
+     print("=" * 60)
+     print("Testing Report Generation")
+     print("=" * 60)
+
+     payload = {
+         "patient_data": {
+             "age": 35,
+             "gravida": 2,
+             "para": 2,
+             "lmp": "2024-02-01",
+             "indication": "Abnormal Pap smear - ASCUS",
+             "menstrual_status": "Regular"
+         },
+         "exam_findings": {
+             "native": {
+                 "cervix_visible": True,
+                 "transformation_zone": "Type 1 (fully visible)",
+                 "ectropion": "Mild",
+                 "discharge": "None"
+             },
+             "acetic_acid": {
+                 "acetowhite_lesions": True,
+                 "location": "6-9 o'clock position",
+                 "density": "Dense white",
+                 "borders": "Sharp, well-defined",
+                 "size": "Moderate (covering 2 quadrants)"
+             },
+             "green_filter": {
+                 "vascular_patterns": "Coarse punctation",
+                 "mosaic": "Present",
+                 "atypical_vessels": "None"
+             },
+             "lugol": {
+                 "iodine_uptake": "Partial iodine negative area",
+                 "pattern": "Corresponds to acetowhite area"
+             }
+         }
+     }
+
+     response = requests.post(
+         f"{BASE_URL}/api/generate-report",
+         json=payload
+     )
+
+     print(f"Status Code: {response.status_code}")
+
+     if response.status_code == 200:
+         result = response.json()
+         print(f"Status: {result.get('status')}")
+         print(f"Model: {result.get('model')}")
+         print(f"\nGenerated Report:\n{'-' * 60}")
+         print(result.get('report'))
+         print('-' * 60)
+     else:
+         print(f"Error: {response.json()}")
+     print()
+
+ def main():
+     """Run all tests"""
+     print("\n" + "=" * 60)
+     print("PATHORA COLPOSCOPY API TEST SUITE")
+     print("=" * 60 + "\n")
+
+     # Test health check
+     test_health_check()
+
+     # Test AI model endpoints (you'll need to provide actual image paths)
+     # Uncomment and add your image paths:
+     # test_acetowhite_detection("path/to/your/image.jpg")
+     # test_cervix_detection("path/to/your/image.jpg")
+     # test_batch_inference(["image1.jpg", "image2.jpg"])
+
+     # Test LLM endpoints
+     test_chat()
+     test_chat_with_history()
+     test_report_generation()
+
+     print("\n" + "=" * 60)
+     print("ALL TESTS COMPLETED")
+     print("=" * 60 + "\n")
+
+ if __name__ == "__main__":
+     # Check if requests is installed
+     try:
+         import requests
+     except ImportError:
+         print("Please install requests: pip install requests")
+         exit(1)
+
+     main()