# Pathora Colposcopy API Documentation

## Overview

FastAPI backend for the Pathora Colposcopy Assistant, providing AI model inference and LLM capabilities.
## Base URL

- Local: `http://localhost:8000`
- Production: `https://huggingface.co/spaces/ManalifeAI/Pathora_Colposcopy_Assistant`
## Endpoints

### Health Check

`GET /health`

Check API health status and verify that the AI models and the LLM are available.

Response:
```json
{
  "status": "healthy",
  "service": "Pathora Colposcopy API",
  "ai_models": {
    "acetowhite_model": "loaded",
    "cervix_model": "loaded"
  },
  "llm": {
    "gemini_available": true,
    "api_key_configured": true
  }
}
```
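As a quick sanity check, the health payload above can be evaluated client-side before sending images for inference. A minimal sketch — `is_healthy` is an illustrative helper, not part of the API:

```python
def is_healthy(payload: dict) -> bool:
    """True when the service reports healthy and every AI model is loaded."""
    models = payload.get("ai_models", {})
    return (
        payload.get("status") == "healthy"
        and bool(models)
        and all(state == "loaded" for state in models.values())
    )
```

Typical usage: `is_healthy(requests.get("http://localhost:8000/health", timeout=5).json())`.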
## AI Model Endpoints

### 1. Acetowhite Contour Detection

`POST /api/infer-aw-contour`

Detect acetowhite lesions and generate contour overlays.

Parameters:

- `file` (UploadFile): Image file
- `conf_threshold` (float, optional): Confidence threshold (0.0-1.0, default: 0.4)

Response:
```json
{
  "status": "success",
  "message": "Inference completed successfully",
  "result_image": "base64_encoded_image",
  "contours": [
    {
      "points": [[x1, y1], [x2, y2], ...],
      "area": 1234.5,
      "confidence": 0.85
    }
  ],
  "detections": 2,
  "confidence_threshold": 0.4
}
```
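The `result_image` field is a base64-encoded overlay. A small sketch for writing it to disk — `save_overlay` is an illustrative helper, and the string is assumed to be bare base64 with no `data:` URL prefix:

```python
import base64


def save_overlay(b64_image: str, path: str) -> int:
    """Decode a base64-encoded result_image and write it to disk.

    Returns the number of bytes written.
    """
    raw = base64.b64decode(b64_image)
    with open(path, "wb") as fh:
        fh.write(raw)
    return len(raw)
```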
### 2. Cervix Bounding Box Detection

`POST /api/infer-cervix-bbox`

Detect the cervix location and return bounding boxes.

Parameters:

- `file` (UploadFile): Image file
- `conf_threshold` (float, optional): Confidence threshold (0.0-1.0, default: 0.4)

Response:
```json
{
  "status": "success",
  "message": "Cervix bounding box detection completed",
  "result_image": "base64_encoded_image",
  "bounding_boxes": [
    {
      "x1": 100,
      "y1": 150,
      "x2": 400,
      "y2": 450,
      "confidence": 0.92,
      "class": "cervix"
    }
  ],
  "detections": 1,
  "frame_width": 1920,
  "frame_height": 1080,
  "confidence_threshold": 0.4
}
```
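The pixel coordinates can be combined with `frame_width`/`frame_height` to derive box size and frame coverage. A sketch — `bbox_metrics` is an illustrative helper, not part of the API:

```python
def bbox_metrics(box: dict, frame_w: int, frame_h: int) -> dict:
    """Derive width, height, and the fraction of the frame a box covers."""
    width = box["x2"] - box["x1"]
    height = box["y2"] - box["y1"]
    return {
        "width": width,
        "height": height,
        "frame_fraction": (width * height) / (frame_w * frame_h),
    }
```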
### 3. Batch Image Inference

`POST /api/batch-infer`

Process multiple images for acetowhite detection in a single request.

Parameters:

- `files` (List[UploadFile]): Multiple image files
- `conf_threshold` (float, optional): Confidence threshold (default: 0.4)

Response:
```json
{
  "status": "completed",
  "total_files": 3,
  "results": [
    {
      "filename": "image1.jpg",
      "status": "success",
      "result_image": "base64...",
      "contours": [...],
      "detections": 2
    }
  ]
}
```
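With `requests`, each image must be sent as a separate multipart part named `files`. A sketch that builds that list from in-memory bytes — `batch_parts` is an illustrative helper, and the JPEG MIME type is an assumption:

```python
def batch_parts(images):
    """Build the multipart list requests expects for /api/batch-infer.

    `images` is an iterable of (filename, raw_bytes) pairs; the result is
    one ("files", (name, data, mime)) tuple per image.
    """
    return [("files", (name, data, "image/jpeg")) for name, data in images]
```

The result is passed as `requests.post(url, files=batch_parts(...), data={"conf_threshold": 0.4})`.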
### 4. Single Frame Analysis

`POST /infer/image`

Analyze a single image for cervix quality assessment.

Parameters:

- `file` (UploadFile): Image file

Response:
```json
{
  "status": "Excellent",
  "quality_percent": 95,
  "cervix_detected": true,
  "focus_score": 0.89,
  "brightness_score": 0.92
}
```
### 5. Video Frame Analysis

`POST /infer/video`

Process video frames for quality assessment.

Parameters:

- `file` (UploadFile): Video file

Response:
```json
{
  "total_frames": 150,
  "results": [
    {
      "frame": 0,
      "status": "Excellent",
      "quality_percent": 95
    }
  ]
}
```
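Per-frame results can be aggregated client-side to judge overall capture quality. A sketch — `quality_summary` is an illustrative helper operating on the `results` array above:

```python
def quality_summary(results):
    """Fraction of frames rated 'Excellent' plus the mean quality_percent."""
    if not results:
        return {"excellent_ratio": 0.0, "mean_quality": 0.0}
    excellent = sum(1 for r in results if r["status"] == "Excellent")
    mean_quality = sum(r["quality_percent"] for r in results) / len(results)
    return {
        "excellent_ratio": excellent / len(results),
        "mean_quality": mean_quality,
    }
```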
## LLM Endpoints

### 6. Chat with AI Assistant

`POST /api/chat`

Conversational AI endpoint for colposcopy guidance.

Request Body:
```json
{
  "message": "What are the signs of high-grade lesions?",
  "history": [
    {
      "role": "user",
      "text": "Hello"
    },
    {
      "role": "bot",
      "text": "Hello! I'm Pathora AI..."
    }
  ],
  "system_prompt": "Optional custom system prompt"
}
```
Response:
```json
{
  "status": "success",
  "response": "High-grade lesions typically show...",
  "model": "gemini-1.5-flash"
}
```
### 7. Generate Colposcopy Report

`POST /api/generate-report`

Generate a comprehensive colposcopy report from patient data and examination findings.

Request Body:
```json
{
  "patient_data": {
    "age": 35,
    "gravida": 2,
    "para": 2,
    "lmp": "2024-02-01",
    "indication": "Abnormal Pap smear"
  },
  "exam_findings": {
    "native": {
      "cervix_visible": true,
      "transformation_zone": "Type 1"
    },
    "acetic_acid": {
      "acetowhite_lesions": true,
      "location": "6-9 o'clock"
    },
    "green_filter": {
      "vascular_patterns": "Punctation"
    },
    "lugol": {
      "iodine_uptake": "Partial"
    }
  },
  "images": [],
  "system_prompt": "Optional custom prompt"
}
```
Response:
```json
{
  "status": "success",
  "report": "COLPOSCOPY REPORT\n\nCLINICAL SUMMARY:\n...",
  "model": "gemini-1.5-flash"
}
```
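The nested request body above is easy to assemble incorrectly; a sketch that mirrors the schema — `report_payload` is an illustrative helper, and only the keys shown in the example are assumed:

```python
def report_payload(patient_data: dict, exam_findings: dict,
                   images=None, system_prompt=None) -> dict:
    """Build the /api/generate-report request body, omitting unused options."""
    payload = {
        "patient_data": patient_data,
        "exam_findings": exam_findings,
        "images": images or [],
    }
    if system_prompt:
        payload["system_prompt"] = system_prompt
    return payload
```

The resulting dict is sent as `requests.post(url, json=report_payload(...))`.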
## Environment Variables

Required for LLM functionality:

```bash
GEMINI_API_KEY=your_api_key_here
VITE_GEMINI_API_KEY=your_api_key_here  # For frontend compatibility
```

Get your API key from: https://makersuite.google.com/app/apikey
## Error Responses

All endpoints return standardized error responses:

```json
{
  "detail": "Error message description"
}
```

Common HTTP Status Codes:

- `400`: Bad Request (invalid file or parameters)
- `500`: Internal Server Error (AI model error, processing failure)
- `503`: Service Unavailable (LLM not configured, API key missing)
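Clients can map the status code plus the `detail` field to an actionable message. A sketch — the hint strings are illustrative, derived from the causes listed above:

```python
def explain_error(status_code: int, body: dict) -> str:
    """Combine an HTTP status code and error body into a human-readable hint."""
    hints = {
        400: "check the uploaded file and request parameters",
        500: "AI model error or processing failure server-side",
        503: "LLM not configured; is GEMINI_API_KEY set?",
    }
    detail = body.get("detail", "unknown error")
    return f"{status_code}: {detail} ({hints.get(status_code, 'unexpected status')})"
```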
## Model Information

### AI Models

- Acetowhite Detection: YOLO-based segmentation model (`AW_yolo.pt`)
- Cervix Detection: YOLO-based object detection model (`cervix_yolo.pt`)

### LLM Model

- Gemini 1.5 Flash: Google's generative AI for chat and report generation
- Temperature: 0.4 (balanced between creativity and consistency)
- Max Output Tokens: 2048
## Usage Examples

### Python

```python
import requests

# AI model inference
with open('image.jpg', 'rb') as f:
    response = requests.post(
        'http://localhost:8000/api/infer-aw-contour',
        files={'file': f},
        data={'conf_threshold': 0.5}
    )
result = response.json()

# Chat
response = requests.post(
    'http://localhost:8000/api/chat',
    json={
        'message': 'What is Reid colposcopic index?',
        'history': []
    }
)
chat_result = response.json()
```
### JavaScript/TypeScript

```javascript
// AI model inference
const formData = new FormData();
formData.append('file', imageFile);
formData.append('conf_threshold', '0.5');

const response = await fetch('/api/infer-aw-contour', {
  method: 'POST',
  body: formData
});
const result = await response.json();

// Chat
const chatResponse = await fetch('/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Explain transformation zone types',
    history: []
  })
});
const chatResult = await chatResponse.json();
```
## Development

### Running Locally

```bash
# Install dependencies
cd backend
pip install -r requirements.txt

# Set environment variables
export GEMINI_API_KEY=your_key

# Run server
uvicorn backend.app:app --reload --host 0.0.0.0 --port 8000
```
### Building with Docker

```bash
docker build -t pathora-colpo .
docker run -p 7860:7860 -e GEMINI_API_KEY=your_key pathora-colpo
```