JibexBanks committed on
Commit 5e0459d
·
1 Parent(s): 5619c17

trying to deploy

.dockerignore ADDED
@@ -0,0 +1,45 @@
+ # Python caches & venvs
+ __pycache__/
+ *.pyc
+ *.pyo
+ *.pyd
+ .Python
+ *.so
+ *.egg
+ *.egg-info/
+ dist/
+ build/
+ .env
+ .venv/
+ venv/
+ avenv/
+
+ # Tests & coverage
+ .pytest_cache/
+ .coverage
+ htmlcov/
+
+ # Git
+ .git/
+ .gitignore
+
+ # Node
+ node_modules/
+
+ # AI / large assets
+ models/
+ datasets/
+ *.bin
+ *.pt
+ *.onnx
+ *.ckpt
+
+ # Logs
+ *.log
+
+ # IDE / OS
+ .DS_Store
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
.gitignore ADDED
@@ -0,0 +1,74 @@
+ # =========================
+ # Environment variables
+ # =========================
+ .env
+ .env.*
+ !.env.example
+
+ # =========================
+ # Python
+ # =========================
+ __pycache__/
+ *.py[cod]
+ *.pyo
+ *.pyd
+ *.log
+
+ # =========================
+ # Virtual environments
+ # =========================
+ venv/
+ env/
+ avenv/
+ .venv/
+
+ # =========================
+ # OS files
+ # =========================
+ .DS_Store
+ Thumbs.db
+
+ # =========================
+ # Editor / IDE
+ # =========================
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+
+ # =========================
+ # Docker
+ # =========================
+ docker-compose.override.yml
+
+ # =========================
+ # Database / Local state
+ # =========================
+ *.sqlite3
+ *.db
+
+ # =========================
+ # Models & caches (IMPORTANT for HF)
+ # =========================
+ models/
+ .cache/
+ hf_cache/
+ transformers_cache/
+
+ # =========================
+ # Alembic
+ # =========================
+ alembic/versions/*.pyc
+
+ # =========================
+ # Logs
+ # =========================
+ logs/
+ *.log
+
+ # =========================
+ # Test / Temp files
+ # =========================
+ .tmp/
+ .temp/
+ *.bak
Dockerfile ADDED
@@ -0,0 +1,37 @@
+ # Optimized for Hugging Face Spaces
+ FROM python:3.11-slim
+
+ WORKDIR /app
+
+ # Install system dependencies (minimal)
+ RUN apt-get update && apt-get install -y \
+     curl \
+     postgresql-client \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements
+ COPY requirements.txt .
+
+ # Install Python packages (no pip cache, to keep the image small)
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy application
+ COPY . /app
+
+ # Create non-root user
+ RUN useradd -m -u 1000 user && \
+     mkdir -p /app/models && \
+     chown -R user:user /app
+
+ USER user
+
+ # Hugging Face Spaces needs port 7860
+ ENV PORT=7860
+ EXPOSE 7860
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=10s --start-period=180s \
+     CMD curl -f http://localhost:7860/health || exit 1
+
+ # Start server
+ CMD uvicorn main:app --host 0.0.0.0 --port 7860 --workers 1
README copy.md ADDED
@@ -0,0 +1,38 @@
+ # Afiya Care Backend - N-ATLaS Powered Medical Assistant
+
+ AI-powered medical assistance with N-ATLaS multilingual support for the Awarri Hackathon.
+
+ ## Features
+ - 🇳🇬 N-ATLaS integration (Yoruba, Hausa, Igbo, Pidgin, English)
+ - 🔍 Vector-based medical knowledge search
+ - ⚠️ Red flag detection for emergencies
+ - 🔒 JWT authentication
+ - 📱 Offline sync support
+ - 🐳 Docker deployment ready
+
+ ## Quick Start
+ ```bash
+ # Setup
+ chmod +x setup.sh
+ ./setup.sh
+
+ # Start
+ docker-compose up -d
+
+ # API Docs
+ open http://localhost:8000/docs
+ ```
+
+ ## Test
+ ```bash
+ curl -X POST "http://localhost:8000/api/v1/diagnose" \
+   -H "Content-Type: application/json" \
+   -d '{"symptoms": "Mo ni irora ori", "language": "yo"}'
+ ```
+
+ ## Model
+ - **N-ATLaS**: NCAIR1/N-ATLaS (Llama-3 8B)
+ - **Embeddings**: paraphrase-multilingual-MiniLM-L12-v2
+
+ ## License
+ MIT
README.md CHANGED
@@ -1,10 +1,36 @@
  ---
- title: Afiya Care
- emoji: 🌖
- colorFrom: green
- colorTo: red
+ title: Afiya Care Backend
+ emoji: 🏥
+ colorFrom: blue
+ colorTo: green
  sdk: docker
  pinned: false
+ license: mit
  ---

+ # Afiya Care - AI Health Assistant
+
+ Powered by N-ATLaS (NCAIR1/N-ATLaS)
+
+ ## API Endpoints
+
+ - `GET /` - Welcome message
+ - `GET /health` - Health check
+ - `POST /api/v1/diagnose` - Symptom diagnosis
+ - `GET /api/v1/languages` - Supported languages
+ - `GET /docs` - Interactive API documentation
+
+ ## Supported Languages
+ - English
+ - Yoruba
+ - Hausa
+ - Igbo
+ - Nigerian Pidgin
+
+ ## Usage
+ ```bash
+ curl -X POST "https://YOUR_USERNAME-afiya-care-backend.hf.space/api/v1/diagnose" \
+   -H "Content-Type: application/json" \
+   -d '{"symptoms": "I have a headache", "language": "en"}'
+ ```
  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
alembic.ini ADDED
@@ -0,0 +1,56 @@
+ [alembic]
+ script_location = alembic
+ prepend_sys_path = .
+ version_path_separator = os
+
+ sqlalchemy.url = driver://user:pass@localhost/dbname
+
+ [loggers]
+ keys = root,sqlalchemy,alembic
+
+ [handlers]
+ keys = console
+
+ [formatters]
+ keys = generic
+
+ [logger_root]
+ level = WARN
+ handlers = console
+ qualname =
+
+ [logger_sqlalchemy]
+ level = WARN
+ handlers =
+ qualname = sqlalchemy.engine
+
+ [logger_alembic]
+ level = INFO
+ handlers =
+ qualname = alembic
+
+ [handler_console]
+ class = StreamHandler
+ args = (sys.stderr,)
+ level = NOTSET
+ formatter = generic
+
+ [formatter_generic]
+ format = %(levelname)-5.5s [%(name)s] %(message)s
+ datefmt = %H:%M:%S
+
+ ; Initialize Alembic:
+ ;
+ ;   alembic init alembic                                  # initialize alembic
+ ;   alembic revision --autogenerate -m "Initial tables"   # create initial migration
+ ;   alembic upgrade head                                  # apply migration
core/config.py ADDED
@@ -0,0 +1,67 @@
+ from pydantic_settings import BaseSettings
+
+ class Settings(BaseSettings):
+     # Application
+     APP_NAME: str = "Afiya Care"
+     ENV: str = "development"
+     DEBUG: bool = True
+     API_VERSION: str = "v1"
+
+     # Server
+     HOST: str = "0.0.0.0"
+     PORT: int = 7860
+
+     # Database
+     DATABASE_URL: str
+     DB_POOL_SIZE: int = 5
+     DB_MAX_OVERFLOW: int = 10
+
+     # Vector Database
+     QDRANT_HOST: str
+     QDRANT_PORT: int = 6333
+     QDRANT_API_KEY: str
+     QDRANT_HTTPS: bool = True
+     QDRANT_COLLECTION: str = "medical_knowledge"
+
+     # Authentication
+     SECRET_KEY: str
+     ALGORITHM: str = "HS256"
+     ACCESS_TOKEN_EXPIRE_MINUTES: int = 30
+
+     # N-ATLaS Configuration
+     NATLAS_MODEL: str = "NCAIR1/N-ATLaS"
+     NATLAS_MAX_LENGTH: int = 2048
+     NATLAS_TEMPERATURE: float = 0.7
+     NATLAS_TOP_P: float = 0.9
+     HF_TOKEN: str
+
+     # Embedding Model
+     EMBEDDING_MODEL: str = "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
+     MODEL_CACHE_DIR: str = "./models"
+
+     # Redis
+     REDIS_URL: str
+
+     # Language Settings
+     DEFAULT_LANGUAGE: str = "en"
+     SUPPORTED_LANGUAGES: str = "en,yo,ha,ig,pcm"
+     ENABLE_AUTO_LANGUAGE_DETECTION: bool = True
+
+     # Safety & Compliance
+     ENABLE_RED_FLAG_DETECTION: bool = True
+     REQUIRE_DISCLAIMER: bool = True
+     LOG_ANONYMIZATION: bool = True
+
+     # Rate Limiting
+     RATE_LIMIT_PER_MINUTE: int = 60
+
+     # Monitoring
+     ENABLE_METRICS: bool = True
+     PROMETHEUS_PORT: int = 9090
+
+     class Config:
+         env_file = ".env"
+         case_sensitive = True
+
+ settings = Settings()
core/database.py ADDED
@@ -0,0 +1,22 @@
+ from sqlalchemy import create_engine
+ from sqlalchemy.ext.declarative import declarative_base
+ from sqlalchemy.orm import sessionmaker
+ from .config import settings
+
+ engine = create_engine(
+     settings.DATABASE_URL,
+     pool_size=settings.DB_POOL_SIZE,
+     max_overflow=settings.DB_MAX_OVERFLOW,
+     echo=settings.DEBUG
+ )
+
+ SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+ Base = declarative_base()
+
+ def get_db():
+     """Database session dependency"""
+     db = SessionLocal()
+     try:
+         yield db
+     finally:
+         db.close()
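FastAPI drives `get_db` as a generator dependency: it advances the generator once to obtain the session, injects it into the route, and finalizes the generator after the response so `db.close()` always runs. A toy sketch of that lifecycle, using a stand-in session object (no SQLAlchemy or FastAPI required):

```python
class FakeSession:
    """Stand-in for a SQLAlchemy session (hypothetical)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def get_db():
    db = FakeSession()
    try:
        yield db
    finally:
        db.close()  # runs even if the request handler raised

# Roughly what the framework does under the hood:
gen = get_db()
db = next(gen)      # dependency value injected into the route
assert not db.closed
gen.close()         # after the response, GeneratorExit triggers the finally block
assert db.closed
```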
core/security.py ADDED
@@ -0,0 +1,56 @@
+ from datetime import datetime, timedelta
+ from typing import Optional
+ from jose import JWTError, jwt
+ from passlib.context import CryptContext
+ from fastapi import Depends, HTTPException, status
+ from fastapi.security import OAuth2PasswordBearer
+ from core.config import settings
+
+ pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
+ oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
+
+ def verify_password(plain_password: str, hashed_password: str) -> bool:
+     """Verify a password against a hash"""
+     return pwd_context.verify(plain_password, hashed_password)
+
+ def get_password_hash(password: str) -> str:
+     """Generate password hash"""
+     return pwd_context.hash(password)
+
+ def create_access_token(data: dict, expires_delta: Optional[timedelta] = None):
+     """Create JWT access token"""
+     to_encode = data.copy()
+     if expires_delta:
+         expire = datetime.utcnow() + expires_delta
+     else:
+         expire = datetime.utcnow() + timedelta(
+             minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES
+         )
+
+     to_encode.update({"exp": expire})
+     encoded_jwt = jwt.encode(
+         to_encode,
+         settings.SECRET_KEY,
+         algorithm=settings.ALGORITHM
+     )
+     return encoded_jwt
+
+ def verify_token(token: str = Depends(oauth2_scheme)):
+     """Verify JWT token"""
+     credentials_exception = HTTPException(
+         status_code=status.HTTP_401_UNAUTHORIZED,
+         detail="Could not validate credentials",
+         headers={"WWW-Authenticate": "Bearer"},
+     )
+     try:
+         payload = jwt.decode(
+             token,
+             settings.SECRET_KEY,
+             algorithms=[settings.ALGORITHM]
+         )
+         user_id: str = payload.get("sub")
+         if user_id is None:
+             raise credentials_exception
+         return user_id
+     except JWTError:
+         raise credentials_exception
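Under the hood, `jwt.encode(..., algorithm="HS256")` from python-jose produces `base64url(header).base64url(payload).HMAC-SHA256-signature`. A stdlib-only sketch of that signing and verification step (illustrative only, not a replacement for python-jose, and it skips `exp` handling):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_encode(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def hs256_verify(token: str, secret: str) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    pad = "=" * (-len(body) % 4)  # restore stripped padding before decoding
    return json.loads(base64.urlsafe_b64decode(body + pad))

token = hs256_encode({"sub": "42"}, "dev-secret")
assert hs256_verify(token, "dev-secret")["sub"] == "42"
```

A token signed with one secret fails verification under another, which is why `SECRET_KEY` must stay out of version control (hence `.env` in both ignore files).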
db/models.py ADDED
@@ -0,0 +1,57 @@
+ from sqlalchemy import Column, Integer, String, Text, DateTime, Boolean, JSON
+ from datetime import datetime
+ from core.database import Base
+
+ class User(Base):
+     """User model for authentication"""
+     __tablename__ = "users"
+
+     id = Column(Integer, primary_key=True, index=True)
+     email = Column(String, unique=True, index=True, nullable=False)
+     hashed_password = Column(String, nullable=False)
+     is_active = Column(Boolean, default=True)
+     is_admin = Column(Boolean, default=False)
+     created_at = Column(DateTime, default=datetime.utcnow)
+     updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
+
+ class MedicalCondition(Base):
+     """Medical condition knowledge base"""
+     __tablename__ = "medical_conditions"
+
+     id = Column(Integer, primary_key=True, index=True)
+     title = Column(String, nullable=False, index=True)
+     symptoms = Column(JSON)
+     description = Column(Text)
+     treatments = Column(JSON)
+     red_flags = Column(JSON)
+     tags = Column(JSON)
+     severity_level = Column(String)
+     version = Column(String, default="1.0.0")
+     created_at = Column(DateTime, default=datetime.utcnow)
+     updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
+
+ class DiagnosisLog(Base):
+     """Log of diagnosis requests"""
+     __tablename__ = "diagnosis_logs"
+
+     id = Column(Integer, primary_key=True, index=True)
+     user_id = Column(Integer, nullable=True)
+     session_id = Column(String, index=True)
+     symptoms_text = Column(Text)
+     detected_language = Column(String)
+     matched_conditions = Column(JSON)
+     red_flags_detected = Column(JSON)
+     response_time_ms = Column(Integer)
+     created_at = Column(DateTime, default=datetime.utcnow)
+
+ class OfflineSync(Base):
+     """Offline synchronization tracking"""
+     __tablename__ = "offline_sync"
+
+     id = Column(Integer, primary_key=True, index=True)
+     user_id = Column(Integer, nullable=True)
+     device_id = Column(String, index=True)
+     pending_queries = Column(JSON)
+     client_kb_version = Column(String)
+     synced_at = Column(DateTime, default=datetime.utcnow)
+     sync_status = Column(String, default="pending")
db/schemas.py ADDED
@@ -0,0 +1,88 @@
+ from pydantic import BaseModel, EmailStr, Field
+ from typing import List, Optional
+ from datetime import datetime
+
+ # Authentication Schemas
+ class UserCreate(BaseModel):
+     email: EmailStr
+     password: str = Field(..., min_length=8)
+
+ class UserLogin(BaseModel):
+     email: EmailStr
+     password: str
+
+ class Token(BaseModel):
+     access_token: str
+     token_type: str
+
+ # Diagnosis Schemas
+ class DiagnosisRequest(BaseModel):
+     symptoms: str = Field(..., min_length=10, max_length=1000)
+     age: Optional[int] = Field(None, ge=0, le=150)
+     gender: Optional[str] = Field(None, pattern="^(male|female|other)$")
+     additional_info: Optional[str] = Field(None, max_length=500)
+     language: Optional[str] = Field(
+         None,
+         description="Language code (en, yo, ha, ig, pcm)"
+     )
+
+ class ConditionMatch(BaseModel):
+     title: str
+     description: str
+     symptoms: List[str]
+     treatments: List[str]
+     severity: str
+     confidence: float
+
+ class DiagnosisResponse(BaseModel):
+     conditions: List[ConditionMatch]
+     red_flags: List[str]
+     disclaimer: str
+     response_id: str
+     processing_time_ms: int
+     recommendations: List[str]
+     detected_language: Optional[str] = None
+     natlas_analysis: Optional[str] = None
+
+ # Embedding Schemas
+ class EmbeddingRequest(BaseModel):
+     text: str = Field(..., min_length=1, max_length=1000)
+     language: Optional[str] = None
+
+ class EmbeddingResponse(BaseModel):
+     embedding: List[float]
+     dimension: int
+     model_used: str
+
+ # Offline Sync Schemas
+ class OfflineSyncRequest(BaseModel):
+     device_id: str
+     pending_queries: List[DiagnosisRequest]
+     client_kb_version: str
+     last_sync_timestamp: Optional[datetime] = None
+
+ class OfflineSyncResponse(BaseModel):
+     kb_update_required: bool
+     kb_version: str
+     processed_queries: List[DiagnosisResponse]
+     sync_timestamp: datetime
+
+ # Admin Schemas
+ class MedicalConditionCreate(BaseModel):
+     title: str
+     symptoms: List[str]
+     description: str
+     treatments: List[str]
+     red_flags: List[str]
+     tags: List[str]
+     severity_level: str
+
+ class KnowledgeBaseUpload(BaseModel):
+     conditions: List[MedicalConditionCreate]
+
+ class KnowledgeBaseResponse(BaseModel):
+     status: str
+     conditions_added: int
+     conditions_updated: int
+     version: str
+     timestamp: datetime
docker-compose.yml ADDED
@@ -0,0 +1,91 @@
+ version: '3.8'
+
+ services:
+   postgres:
+     image: postgres:15-alpine
+     container_name: afiya_postgres
+     environment:
+       POSTGRES_USER: ${POSTGRES_USER:-afiya_user}
+       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-afiya_pass}
+       POSTGRES_DB: ${POSTGRES_DB:-afiya_care}
+     ports:
+       - "5432:5432"
+     volumes:
+       - postgres_data:/var/lib/postgresql/data
+     healthcheck:
+       test: ["CMD-SHELL", "pg_isready -U afiya_user"]
+       interval: 10s
+       timeout: 5s
+       retries: 5
+     networks:
+       - afiya_network
+
+   qdrant:
+     image: qdrant/qdrant:latest
+     container_name: afiya_qdrant
+     ports:
+       - "6333:6333"
+       - "6334:6334"
+     volumes:
+       - qdrant_data:/qdrant/storage
+     environment:
+       - QDRANT__SERVICE__GRPC_PORT=6334
+     healthcheck:
+       test: ["CMD", "curl", "-f", "http://localhost:6333/health"]
+       interval: 10s
+       timeout: 5s
+       retries: 5
+     networks:
+       - afiya_network
+
+   redis:
+     image: redis:7-alpine
+     container_name: afiya_redis
+     ports:
+       - "6379:6379"
+     volumes:
+       - redis_data:/data
+     command: redis-server --appendonly yes
+     healthcheck:
+       test: ["CMD", "redis-cli", "ping"]
+       interval: 10s
+       timeout: 5s
+       retries: 5
+     networks:
+       - afiya_network
+
+   backend:
+     build: .
+     container_name: afiya_backend
+     ports:
+       - "8000:8000"
+     environment:
+       - DATABASE_URL=postgresql://afiya_user:afiya_pass@postgres:5432/afiya_care
+       - QDRANT_HOST=qdrant
+       - QDRANT_PORT=6333
+       - REDIS_URL=redis://redis:6379/0
+       - SECRET_KEY=${SECRET_KEY:-change-this-in-production}
+       - NATLAS_MODEL=NCAIR1/N-ATLaS
+       - ENV=production
+       - DEBUG=False
+     depends_on:
+       postgres:
+         condition: service_healthy
+       qdrant:
+         condition: service_healthy
+       redis:
+         condition: service_healthy
+     volumes:
+       - ./models:/app/models
+     networks:
+       - afiya_network
+     restart: unless-stopped
+
+ volumes:
+   postgres_data:
+   qdrant_data:
+   redis_data:
+
+ networks:
+   afiya_network:
+     driver: bridge
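The `${POSTGRES_USER:-afiya_user}` entries use shell-style parameter defaults, which Compose also honors when interpolating: a value from the host environment (or a `.env` file) wins, otherwise the fallback after `:-` applies. The same syntax in plain shell:

```shell
unset POSTGRES_USER
echo "${POSTGRES_USER:-afiya_user}"   # unset -> prints the default, afiya_user

POSTGRES_USER=jane
echo "${POSTGRES_USER:-afiya_user}"   # set -> prints jane
```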
main.py ADDED
@@ -0,0 +1,102 @@
+ from fastapi import FastAPI
+ from fastapi.middleware.cors import CORSMiddleware
+ from contextlib import asynccontextmanager
+ from prometheus_client import make_asgi_app
+
+ from core.config import settings
+ from core.database import engine, Base
+ from routers import diagnose, embedding, offline, admin, auth
+ from services.ml_service import MLService
+ from services.vector_service import VectorService
+
+ @asynccontextmanager
+ async def lifespan(app: FastAPI):
+     # Startup
+     print("=" * 60)
+     print("🚀 Starting Afiya Care Backend with N-ATLaS")
+     print("=" * 60)
+
+     # Initialize database tables
+     print("📊 Creating database tables...")
+     Base.metadata.create_all(bind=engine)
+     print("✅ Database tables created")
+
+     # Initialize ML service (includes N-ATLaS)
+     print("🤖 Initializing ML Services...")
+     app.state.ml_service = MLService()
+     await app.state.ml_service.initialize()
+     print("✅ ML Services initialized")
+
+     # Initialize vector service
+     print("💾 Initializing Vector Database...")
+     app.state.vector_service = VectorService()
+     await app.state.vector_service.initialize()
+     print("✅ Vector Database initialized")
+
+     print("=" * 60)
+     print("✅ Afiya Care Backend Ready!")
+     print(f"📚 API Docs: http://localhost:{settings.PORT}/docs")
+     print("🌍 N-ATLaS Languages: Yoruba, Hausa, Igbo, Pidgin, English")
+     print("=" * 60)
+
+     yield
+
+     # Shutdown
+     print("\n🛑 Shutting down services...")
+     await app.state.vector_service.close()
+     print("✅ Shutdown complete")
+
+ # Get port from environment (HF Spaces uses 7860)
+ PORT = settings.PORT
+
+ app = FastAPI(
+     title=settings.APP_NAME,
+     description="AI-powered medical assistance with N-ATLaS multilingual support",
+     version=settings.API_VERSION,
+     lifespan=lifespan
+ )
+
+ # CORS middleware
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],  # Configure properly in production
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Prometheus metrics
+ metrics_app = make_asgi_app()
+ app.mount("/metrics", metrics_app)
+
+ # Include routers
+ app.include_router(auth.router, prefix=f"/api/{settings.API_VERSION}/auth", tags=["Authentication"])
+ app.include_router(diagnose.router, prefix=f"/api/{settings.API_VERSION}", tags=["Diagnosis"])
+ app.include_router(embedding.router, prefix=f"/api/{settings.API_VERSION}", tags=["Embeddings"])
+ app.include_router(offline.router, prefix=f"/api/{settings.API_VERSION}/offline", tags=["Offline Sync"])
+ app.include_router(admin.router, prefix=f"/api/{settings.API_VERSION}/admin", tags=["Admin"])
+
+ @app.get("/")
+ async def root():
+     return {
+         "message": "Welcome to Afiya Care API",
+         "version": settings.API_VERSION,
+         "model": "N-ATLaS (NCAIR1/N-ATLaS)",
+         "languages": ["English", "Yoruba", "Hausa", "Igbo", "Nigerian Pidgin"],
+         "status": "operational",
+         "docs": "/docs"
+     }
+
+ @app.get("/health")
+ async def health_check():
+     return {
+         "status": "healthy",
+         "database": "connected",
+         "ml_service": "ready",
+         "natlas": "ready",
+         "vector_db": "ready"
+     }
+
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=PORT)
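The `lifespan` async context manager runs everything before its `yield` at startup and everything after it at shutdown, bracketing the serving phase. A minimal stdlib sketch of the same pattern (toy app object, no FastAPI):

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def lifespan(app):
    events.append("startup")      # e.g. create tables, load models
    yield
    events.append("shutdown")     # e.g. close vector DB connections

async def serve():
    # FastAPI enters the context before accepting requests
    # and exits it when the server stops.
    async with lifespan(object()):
        events.append("handling requests")

asyncio.run(serve())
assert events == ["startup", "handling requests", "shutdown"]
```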
requirements.txt ADDED
@@ -0,0 +1,55 @@
+ # Core FastAPI
+ fastapi==0.104.1
+ uvicorn[standard]==0.24.0
+ python-dotenv==1.0.0
+ pydantic==2.5.0
+ pydantic-settings==2.1.0
+
+ # Database
+ sqlalchemy==2.0.23
+ psycopg2-binary==2.9.9
+ alembic==1.12.1
+
+ # Authentication
+ python-jose[cryptography]==3.3.0
+ passlib[bcrypt]==1.7.4
+ python-multipart==0.0.6
+
+ # N-ATLaS and ML Dependencies
+ transformers>=4.40.0
+ tokenizers>=0.15.0
+ torch==2.2.0
+ accelerate==0.25.0
+ sentencepiece==0.2.0
+ protobuf==3.20.3
+
+ # Vector Database
+ qdrant-client==1.7.0
+
+ # Sentence Transformers (for fallback embeddings)
+ sentence-transformers==5.2.0
+
+ # Additional ML
+ numpy==1.24.3
+ pandas==2.1.3
+ scikit-learn==1.3.2
+
+ # Redis & Celery
+ redis==5.0.1
+ celery==5.3.4
+
+ # Language Detection
+ langdetect==1.0.9
+ fasttext-wheel==0.9.2
+ # fasttext==0.9.2
+
+ # Monitoring
+ prometheus-client==0.19.0
+
+ # Testing
+ pytest==7.4.3
+ pytest-asyncio==0.21.1
+ httpx==0.25.2
+ locust==2.18.0
routers/admin.py ADDED
@@ -0,0 +1,77 @@
+ from fastapi import APIRouter, Depends, HTTPException, Request
+ from sqlalchemy.orm import Session
+ from datetime import datetime
+
+ from core.database import get_db
+ from core.security import verify_token
+ from db.schemas import KnowledgeBaseUpload, KnowledgeBaseResponse
+ from db.models import User, MedicalCondition
+
+ router = APIRouter()
+
+ def verify_admin(user_id: str = Depends(verify_token), db: Session = Depends(get_db)):
+     user = db.query(User).filter(User.id == int(user_id)).first()
+     if not user or not user.is_admin:
+         raise HTTPException(status_code=403, detail="Admin required")
+     return user
+
+ @router.post("/upload-kb", response_model=KnowledgeBaseResponse)
+ async def upload_kb(kb_data: KnowledgeBaseUpload, req: Request, db: Session = Depends(get_db), admin: User = Depends(verify_admin)):
+     """Upload knowledge base"""
+     try:
+         # Import paths match the flat layout used elsewhere (services/, not app.services/)
+         from services.ml_service import MLService
+         from services.vector_service import VectorService
+
+         ml_service: MLService = req.app.state.ml_service
+         vector_service: VectorService = req.app.state.vector_service
+
+         added, updated = 0, 0
+         version = f"1.0.{int(datetime.utcnow().timestamp())}"
+
+         texts, condition_objs = [], []
+
+         for cond in kb_data.conditions:
+             existing = db.query(MedicalCondition).filter(MedicalCondition.title == cond.title).first()
+
+             if existing:
+                 existing.symptoms = cond.symptoms
+                 existing.description = cond.description
+                 existing.treatments = cond.treatments
+                 existing.version = version
+                 updated += 1
+                 cond_obj = existing
+             else:
+                 cond_obj = MedicalCondition(**cond.dict(), version=version)
+                 db.add(cond_obj)
+                 added += 1
+
+             texts.append(f"{cond.title}. {', '.join(cond.symptoms)}. {cond.description}")
+             condition_objs.append(cond_obj)
+
+         db.commit()
+
+         embeddings = await ml_service.generate_embeddings_batch(texts)
+         payloads = [
+             {
+                 "condition_id": c.id,
+                 "title": c.title,
+                 "symptoms": c.symptoms,
+                 "description": c.description,
+                 "treatments": c.treatments,
+                 "severity_level": c.severity_level
+             }
+             for c in condition_objs
+         ]
+
+         await vector_service.insert(embeddings, payloads)
+
+         return KnowledgeBaseResponse(
+             status="success",
+             conditions_added=added,
+             conditions_updated=updated,
+             version=version,
+             timestamp=datetime.utcnow()
+         )
+     except Exception as e:
+         db.rollback()
+         raise HTTPException(status_code=500, detail=str(e))
routers/auth.py ADDED
@@ -0,0 +1,41 @@
+ from fastapi import APIRouter, Depends, HTTPException
+ from sqlalchemy.orm import Session
+ from datetime import timedelta
+
+ from core.database import get_db
+ from core.security import verify_password, get_password_hash, create_access_token
+ from core.config import settings
+ from db.models import User
+ from db.schemas import UserCreate, UserLogin, Token
+
+ router = APIRouter()
+
+ @router.post("/register", response_model=Token)
+ async def register(user_data: UserCreate, db: Session = Depends(get_db)):
+     """Register new user"""
+     if db.query(User).filter(User.email == user_data.email).first():
+         raise HTTPException(status_code=400, detail="Email already registered")
+
+     new_user = User(
+         email=user_data.email,
+         hashed_password=get_password_hash(user_data.password)
+     )
+     db.add(new_user)
+     db.commit()
+     db.refresh(new_user)
+
+     token = create_access_token(
+         data={"sub": str(new_user.id)},
+         expires_delta=timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES)
+     )
+     return {"access_token": token, "token_type": "bearer"}
+
+ @router.post("/login", response_model=Token)
+ async def login(user_data: UserLogin, db: Session = Depends(get_db)):
+     """Login"""
+     user = db.query(User).filter(User.email == user_data.email).first()
+     if not user or not verify_password(user_data.password, user.hashed_password):
+         raise HTTPException(status_code=401, detail="Incorrect credentials")
+
+     token = create_access_token(data={"sub": str(user.id)})
+     return {"access_token": token, "token_type": "bearer"}
routers/diagnose.py ADDED
@@ -0,0 +1,90 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from fastapi import APIRouter, Depends, HTTPException, Request
2
+ from sqlalchemy.orm import Session
3
+ import time
4
+ import uuid
5
+ from typing import List
6
+
7
+ from core.database import get_db
8
+ from db.schemas import DiagnosisRequest, DiagnosisResponse, ConditionMatch
9
+ from services.ml_service import MLService
10
+ from services.vector_service import VectorService
11
+ from services.safety_service import SafetyService
12
+ from db.models import DiagnosisLog
13
+
14
+ router = APIRouter()
15
+
16
+ @router.post("/diagnose", response_model=DiagnosisResponse)
17
+ async def diagnose_symptoms(request: DiagnosisRequest, req: Request, db: Session = Depends(get_db)):
18
+ """πŸ‡³πŸ‡¬ N-ATLaS powered diagnosis - Supports EN, YO, HA, IG, PCM"""
19
+ start_time = time.time()
20
+ session_id = str(uuid.uuid4())
21
+
22
+ try:
23
+ ml_service: MLService = req.app.state.ml_service
24
+ vector_service: VectorService = req.app.state.vector_service
25
+ safety_service = SafetyService()
26
+
27
+ # Detect language
28
+ detected_lang = request.language or ml_service.detect_language(request.symptoms)
29
+ print(f"🌍 Language: {detected_lang}")
30
+
31
+         # N-ATLaS analysis
+         natlas_analysis = await ml_service.analyze_with_natlas(request.symptoms, detected_lang)
+
+         # Generate embedding
+         embedding = await ml_service.generate_embedding(request.symptoms)
+
+         # Search knowledge base
+         search_results = await vector_service.search(embedding, top_k=5)
+
+         # Check red flags
+         red_flags = safety_service.detect_red_flags(request.symptoms)
+
+         # Format conditions
+         conditions: List[ConditionMatch] = []
+         for result in search_results:
+             p = result["payload"]
+             conditions.append(ConditionMatch(
+                 title=p.get("title", "Unknown"),
+                 description=p.get("description", ""),
+                 symptoms=p.get("symptoms", []),
+                 treatments=p.get("treatments", []),
+                 severity=p.get("severity_level", "moderate"),
+                 confidence=round(result["score"], 3)
+             ))
+
+         recommendations = safety_service.get_recommendations(red_flags)
+         disclaimer = safety_service.get_disclaimer()
+         processing_time = int((time.time() - start_time) * 1000)
+
+         # Log
+         log = DiagnosisLog(
+             session_id=session_id,
+             symptoms_text=request.symptoms[:100],
+             detected_language=detected_lang,
+             matched_conditions=[c.title for c in conditions],
+             red_flags_detected=[f["category"] for f in red_flags],
+             response_time_ms=processing_time
+         )
+         db.add(log)
+         db.commit()
+
+         return DiagnosisResponse(
+             conditions=conditions,
+             red_flags=[f["message"] for f in red_flags],
+             disclaimer=disclaimer,
+             response_id=session_id,
+             processing_time_ms=processing_time,
+             recommendations=recommendations,
+             detected_language=detected_lang,
+             natlas_analysis=natlas_analysis[:200]
+         )
+
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=str(e))
+
+ @router.get("/languages")
+ async def get_supported_languages(req: Request):
+     """Get supported languages"""
+     ml_service: MLService = req.app.state.ml_service
+     return ml_service.get_model_info()["natlas"]["supported_languages"]
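The condition-formatting loop above can be exercised in isolation. Below is a minimal sketch, assuming hypothetical search hits shaped like Qdrant results (a `score` plus a `payload` dict); the `.get` defaults protect against knowledge-base entries with missing fields:

```python
def format_conditions(search_results):
    """Turn raw vector-search hits into condition dicts with safe defaults."""
    conditions = []
    for result in search_results:
        p = result["payload"]
        conditions.append({
            "title": p.get("title", "Unknown"),
            "severity": p.get("severity_level", "moderate"),
            "confidence": round(result["score"], 3),
        })
    return conditions

# Hypothetical hits: one complete payload, one empty payload
hits = [
    {"score": 0.87654, "payload": {"title": "Malaria", "severity_level": "high"}},
    {"score": 0.6543, "payload": {}},
]
print(format_conditions(hits))
```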
routers/embedding.py ADDED
@@ -0,0 +1,21 @@
+ from fastapi import APIRouter, Request, HTTPException
+ from db.schemas import EmbeddingRequest, EmbeddingResponse
+
+ router = APIRouter()
+
+ @router.post("/embedding", response_model=EmbeddingResponse)
+ async def generate_embedding(request: EmbeddingRequest, req: Request):
+     """Generate embedding"""
+     try:
+         from services.ml_service import MLService
+         ml_service: MLService = req.app.state.ml_service
+
+         embedding = await ml_service.generate_embedding(request.text, request.language)
+
+         return EmbeddingResponse(
+             embedding=embedding,
+             dimension=len(embedding),
+             model_used=ml_service.get_model_info()["embedding_model"]
+         )
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=str(e))
routers/offline.py ADDED
@@ -0,0 +1,39 @@
+ from fastapi import APIRouter, Depends, Request
+ from sqlalchemy.orm import Session
+ from datetime import datetime
+
+ from core.database import get_db
+ from db.schemas import OfflineSyncRequest, OfflineSyncResponse
+ from db.models import OfflineSync
+
+ router = APIRouter()
+ KB_VERSION = "1.0.0"
+
+ @router.post("/sync", response_model=OfflineSyncResponse)
+ async def sync_offline_data(request: OfflineSyncRequest, req: Request, db: Session = Depends(get_db)):
+     """Sync offline data"""
+     from routers.diagnose import diagnose_symptoms
+
+     processed = []
+     for query in request.pending_queries:
+         try:
+             result = await diagnose_symptoms(query, req, db)
+             processed.append(result)
+         except Exception:
+             # Skip queries that fail rather than aborting the whole sync
+             continue
+
+     sync_log = OfflineSync(
+         device_id=request.device_id,
+         pending_queries=[q.dict() for q in request.pending_queries],
+         client_kb_version=request.client_kb_version,
+         sync_status="completed"
+     )
+     db.add(sync_log)
+     db.commit()
+
+     return OfflineSyncResponse(
+         kb_update_required=request.client_kb_version != KB_VERSION,
+         kb_version=KB_VERSION,
+         processed_queries=processed,
+         sync_timestamp=datetime.utcnow()
+     )
services/ml_service.py ADDED
@@ -0,0 +1,71 @@
+ from sentence_transformers import SentenceTransformer
+ from typing import List, Optional
+ import torch
+ from core.config import settings
+ from services.natlas_service import NATLaSService
+
+ class MLService:
+     """ML Service with N-ATLaS and embeddings"""
+
+     def __init__(self):
+         self.embedding_model = None
+         self.natlas_service = None
+         self.device = "cuda" if torch.cuda.is_available() else "cpu"
+
+     async def initialize(self):
+         """Initialize all ML services"""
+         print(f"πŸ€– Initializing ML Services on {self.device}")
+
+         # Load embedding model
+         print(f"πŸ“Š Loading: {settings.EMBEDDING_MODEL}")
+         self.embedding_model = SentenceTransformer(
+             settings.EMBEDDING_MODEL,
+             device=self.device
+         )
+         print("βœ… Embedding model loaded")
+
+         # Initialize N-ATLaS
+         self.natlas_service = NATLaSService()
+         await self.natlas_service.initialize()
+
+     async def generate_embedding(self, text: str, language: Optional[str] = None) -> List[float]:
+         """Generate embedding"""
+         if self.embedding_model is None:
+             raise RuntimeError("Embedding model not initialized")
+
+         embedding = self.embedding_model.encode(
+             text.strip(),
+             convert_to_numpy=True,
+             normalize_embeddings=True
+         )
+         return embedding.tolist()
+
+     async def generate_embeddings_batch(self, texts: List[str]) -> List[List[float]]:
+         """Generate embeddings in batch"""
+         if self.embedding_model is None:
+             raise RuntimeError("Embedding model not initialized")
+
+         embeddings = self.embedding_model.encode(
+             [t.strip() for t in texts],
+             convert_to_numpy=True,
+             batch_size=32,
+             normalize_embeddings=True
+         )
+         return embeddings.tolist()
+
+     async def analyze_with_natlas(self, symptoms: str, language: str = "en") -> str:
+         """Use N-ATLaS for analysis"""
+         # NATLaSService exposes analyze_symptoms_natlas, which returns a dict;
+         # unwrap the analysis text so callers get a plain string
+         result = await self.natlas_service.analyze_symptoms_natlas(symptoms, language)
+         return result["analysis"]
+
+     def detect_language(self, text: str) -> str:
+         """Detect language"""
+         return self.natlas_service.detect_language(text)
+
+     def get_model_info(self) -> dict:
+         """Get model information"""
+         return {
+             "embedding_model": settings.EMBEDDING_MODEL,
+             "device": self.device,
+             "dimension": self.embedding_model.get_sentence_embedding_dimension(),
+             "natlas": self.natlas_service.get_model_info()
+         }
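The `normalize_embeddings=True` flag is what makes the downstream cosine search cheap: on unit-length vectors, cosine similarity reduces to a plain dot product. A pure-Python sketch of that property (no sentence-transformers needed):

```python
import math

def normalize(vec):
    """Scale a vector to unit length, as normalize_embeddings=True does."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def dot(a, b):
    """Dot product; equals cosine similarity when both inputs are unit vectors."""
    return sum(x * y for x, y in zip(a, b))

a = normalize([3.0, 4.0])
b = normalize([4.0, 3.0])

# Each normalized vector has unit self-similarity
assert abs(dot(a, a) - 1.0) < 1e-9
# Dot product of unit vectors is their cosine similarity
print(round(dot(a, b), 4))  # β†’ 0.96
```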
services/natlas_service.py ADDED
@@ -0,0 +1,226 @@
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+ from typing import List, Optional, Dict
+ from core.config import settings
+
+
+ class NATLaSService:
+     """
+     N-ATLaS Language Model Service
+     Nigerian Atlas for Languages & AI at Scale
+     """
+
+     def __init__(self):
+         self.model = None
+         self.tokenizer = None
+         self.device = "cuda" if torch.cuda.is_available() else "cpu"
+         self.supported_languages = {
+             'en': 'English (Nigerian accent)',
+             'yo': 'Yoruba',
+             'ha': 'Hausa',
+             'ig': 'Igbo',
+             'pcm': 'Nigerian Pidgin'
+         }
+
+     async def initialize(self):
+         """Load N-ATLaS model and tokenizer"""
+         print(f"πŸ‡³πŸ‡¬ Loading N-ATLaS model: {settings.NATLAS_MODEL}")
+         print(f"πŸ”§ Using device: {self.device}")
+
+         try:
+             # Load tokenizer
+             self.tokenizer = AutoTokenizer.from_pretrained(
+                 settings.NATLAS_MODEL,
+                 trust_remote_code=True,
+                 use_fast=False,
+                 token=settings.HF_TOKEN,
+             )
+
+             # Load model
+             self.model = AutoModelForCausalLM.from_pretrained(
+                 settings.NATLAS_MODEL,
+                 torch_dtype=torch.float16 if self.device == "cuda" else torch.float32,
+                 device_map="auto" if self.device == "cuda" else None,
+                 trust_remote_code=True
+             )
+
+             if self.device == "cpu":
+                 self.model = self.model.to(self.device)
+
+             print("βœ… N-ATLaS model loaded successfully")
+             print("πŸ“Š Model size: ~8B parameters (Llama-3 based)")
+             print(f"🌍 Languages: {', '.join(self.supported_languages.values())}")
+
+         except Exception as e:
+             print(f"❌ Error loading N-ATLaS: {e}")
+             raise RuntimeError(f"Failed to load N-ATLaS model: {e}")
+
+     async def generate_response(
+         self,
+         prompt: str,
+         language: Optional[str] = "en",
+         max_length: Optional[int] = None,
+         temperature: Optional[float] = None,
+         top_p: Optional[float] = None
+     ) -> str:
+         """
+         Generate response using N-ATLaS
+
+         Args:
+             prompt: Input prompt/question
+             language: Language code (en, yo, ha, ig, pcm)
+             max_length: Maximum number of generated tokens
+             temperature: Sampling temperature
+             top_p: Nucleus sampling parameter
+         """
+         if self.model is None or self.tokenizer is None:
+             raise RuntimeError("N-ATLaS service not initialized")
+
+         # Set defaults
+         max_length = max_length or settings.NATLAS_MAX_LENGTH
+         temperature = temperature or settings.NATLAS_TEMPERATURE
+         top_p = top_p or settings.NATLAS_TOP_P
+
+         # Format prompt for N-ATLaS
+         formatted_prompt = self._format_prompt(prompt, language)
+
+         # Tokenize
+         inputs = self.tokenizer(
+             formatted_prompt,
+             return_tensors="pt",
+             truncation=True,
+             max_length=max_length
+         ).to(self.device)
+
+         # Generate. Use max_new_tokens rather than max_length: max_length counts
+         # the prompt tokens too, so a long prompt would leave no room for output.
+         with torch.no_grad():
+             outputs = self.model.generate(
+                 **inputs,
+                 max_new_tokens=max_length,
+                 temperature=temperature,
+                 top_p=top_p,
+                 do_sample=True,
+                 pad_token_id=self.tokenizer.eos_token_id
+             )
+
+         # Decode
+         response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+         # Remove the prompt from the response
+         response = response.replace(formatted_prompt, "").strip()
+
+         return response
+
+     def _format_prompt(self, prompt: str, language: str) -> str:
+         """
+         Format prompt for N-ATLaS based on language
+         """
+         lang_prefixes = {
+             'en': '',  # Default
+             'yo': '[Yoruba] ',
+             'ha': '[Hausa] ',
+             'ig': '[Igbo] ',
+             'pcm': '[Pidgin] '
+         }
+
+         prefix = lang_prefixes.get(language, '')
+         return f"{prefix}{prompt}"
+
+     async def analyze_symptoms_natlas(
+         self,
+         symptoms: str,
+         language: str = "en"
+     ) -> Dict:
+         """
+         Use N-ATLaS to analyze symptoms in local languages
+
+         Args:
+             symptoms: Patient symptom description
+             language: Language of the symptoms
+         """
+         # Create medical analysis prompt
+         prompt = f"""As a medical assistant, analyze these symptoms and provide:
+ 1. Possible conditions
+ 2. Severity level
+ 3. Recommendations
+
+ Symptoms: {symptoms}
+
+ Provide a clear, helpful response."""
+
+         # Generate response
+         response = await self.generate_response(
+             prompt=prompt,
+             language=language,
+             max_length=1024,
+             temperature=0.7
+         )
+
+         return {
+             "analysis": response,
+             "language": language,
+             "model": "N-ATLaS"
+         }
+
+     async def translate_medical_info(
+         self,
+         text: str,
+         target_language: str
+     ) -> str:
+         """
+         Translate medical information to a local language
+
+         Args:
+             text: Text to translate
+             target_language: Target language code
+         """
+         prompt = f"Translate the following medical information to {self.supported_languages.get(target_language, 'English')}:\n\n{text}"
+
+         translation = await self.generate_response(
+             prompt=prompt,
+             language=target_language,
+             max_length=512
+         )
+
+         return translation
+
+     def get_model_info(self) -> Dict:
+         """Get N-ATLaS model information"""
+         return {
+             "model_name": settings.NATLAS_MODEL,
+             "architecture": "Llama-3 8B (Fine-tuned)",
+             "device": self.device,
+             "supported_languages": self.supported_languages,
+             "developer": "Awarri Technologies + FMCIDE Nigeria",
+             "training_tokens": "391M+ multilingual tokens",
+             "release": "September 2025"
+         }
+
+     def detect_language(self, text: str) -> str:
+         """
+         Detect language from text (basic heuristic implementation)
+         """
+         text_lower = text.lower()
+
+         # Yoruba markers
+         yoruba_markers = ['ẹ', 'ọ', 'ṣ', 'bawo', 'e se', 'omo']
+         if any(marker in text_lower for marker in yoruba_markers):
+             return 'yo'
+
+         # Hausa markers
+         hausa_markers = ['sannu', 'yaya', 'ina', 'Ζ™', 'Ι—']
+         if any(marker in text_lower for marker in hausa_markers):
+             return 'ha'
+
+         # Igbo markers
+         igbo_markers = ['kedu', 'ndewo', 'α»‹', 'α»₯']
+         if any(marker in text_lower for marker in igbo_markers):
+             return 'ig'
+
+         # Pidgin markers are common English words, so require two whole-word hits
+         pidgin_markers = ['wetin', 'dey', 'no', 'go', 'fit']
+         pidgin_count = sum(1 for marker in pidgin_markers if marker in text_lower.split())
+         if pidgin_count >= 2:
+             return 'pcm'
+
+         return 'en'  # Default to English
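The marker-based detector is easy to sanity-check standalone. Here is a condensed sketch of the same heuristic (marker lists abbreviated for illustration; the real service checks more markers, including diacritic characters):

```python
def detect_language(text):
    """Guess language: substring markers for Yoruba/Hausa/Igbo, word count for Pidgin."""
    text_lower = text.lower()
    if any(m in text_lower for m in ['bawo', 'e se']):
        return 'yo'
    if any(m in text_lower for m in ['sannu', 'yaya']):
        return 'ha'
    if any(m in text_lower for m in ['kedu', 'ndewo']):
        return 'ig'
    # Pidgin markers overlap with everyday English, so require two whole-word hits
    words = text_lower.split()
    if sum(1 for m in ['wetin', 'dey', 'no', 'go', 'fit'] if m in words) >= 2:
        return 'pcm'
    return 'en'

print(detect_language("Bawo ni"))            # β†’ yo
print(detect_language("wetin dey happen"))   # β†’ pcm
print(detect_language("I have a headache"))  # β†’ en
```

Note the asymmetry: the first three checks match substrings anywhere in the text, while the Pidgin check counts whole words, since a single hit on "no" or "go" would misclassify ordinary English.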
services/safety_service.py ADDED
@@ -0,0 +1,141 @@
+ from typing import List, Dict
+ import re
+
+ class SafetyService:
+     """Medical safety and red flag detection service"""
+
+     # Critical red flags that require immediate medical attention
+     RED_FLAGS = {
+         "chest_pain": {
+             "patterns": [
+                 r"chest pain",
+                 r"heart pain",
+                 r"tightness in chest",
+                 r"crushing sensation",
+                 r"pressure in chest"
+             ],
+             "severity": "EMERGENCY",
+             "message": "Chest pain may indicate a heart attack. Call emergency services immediately."
+         },
+         "breathing": {
+             "patterns": [
+                 r"can't breathe",
+                 r"difficulty breathing",
+                 r"shortness of breath",
+                 r"gasping for air",
+                 r"unable to breathe"
+             ],
+             "severity": "EMERGENCY",
+             "message": "Severe breathing difficulty requires immediate medical attention."
+         },
+         "consciousness": {
+             "patterns": [
+                 r"unconscious",
+                 r"passed out",
+                 r"losing consciousness",
+                 r"fainting repeatedly",
+                 r"blacking out"
+             ],
+             "severity": "EMERGENCY",
+             "message": "Loss of consciousness is a medical emergency. Call 911/emergency services."
+         },
+         "severe_bleeding": {
+             "patterns": [
+                 r"heavy bleeding",
+                 r"won't stop bleeding",
+                 r"bleeding profusely",
+                 r"blood won't clot"
+             ],
+             "severity": "EMERGENCY",
+             "message": "Uncontrolled bleeding requires immediate medical attention."
+         },
+         "stroke": {
+             "patterns": [
+                 r"face drooping",
+                 r"arm weakness",
+                 r"speech difficulty",
+                 r"sudden confusion",
+                 r"vision loss sudden"
+             ],
+             "severity": "EMERGENCY",
+             "message": "These symptoms may indicate a stroke. Call emergency services immediately. Remember FAST: Face drooping, Arm weakness, Speech difficulty, Time to call 911."
+         },
+         "mental_health_crisis": {
+             "patterns": [
+                 r"want to die",
+                 r"kill myself",
+                 r"end my life",
+                 r"suicide",
+                 r"not worth living"
+             ],
+             "severity": "CRISIS",
+             "message": "Please contact a crisis helpline immediately. National Suicide Prevention Lifeline: 988. You're not alone, and help is available."
+         },
+         "severe_abdominal_pain": {
+             "patterns": [
+                 r"severe abdominal pain",
+                 r"intense stomach pain",
+                 r"sharp belly pain",
+                 r"vomiting blood"
+             ],
+             "severity": "URGENT",
+             "message": "Severe abdominal pain may indicate a serious condition. Seek medical attention promptly."
+         },
+         "head_injury": {
+             "patterns": [
+                 r"head injury",
+                 r"hit my head hard",
+                 r"concussion",
+                 r"severe headache after trauma"
+             ],
+             "severity": "URGENT",
+             "message": "Head injuries should be evaluated by a medical professional."
+         }
+     }
+
+     def detect_red_flags(self, symptoms_text: str) -> List[Dict]:
+         """Detect emergency red flags in symptom text"""
+         detected_flags = []
+         symptoms_lower = symptoms_text.lower()
+
+         for flag_category, flag_data in self.RED_FLAGS.items():
+             for pattern in flag_data["patterns"]:
+                 if re.search(pattern, symptoms_lower):
+                     detected_flags.append({
+                         "category": flag_category,
+                         "severity": flag_data["severity"],
+                         "message": f"⚠️ {flag_data['severity']}: {flag_data['message']}"
+                     })
+                     break  # Only add once per category
+
+         return detected_flags
+
+     def get_disclaimer(self) -> str:
+         """Get medical disclaimer"""
+         return (
+             "βš•οΈ IMPORTANT MEDICAL DISCLAIMER: This AI-powered tool is for "
+             "informational and educational purposes only. It does NOT provide "
+             "medical advice, diagnosis, or treatment. Always consult with a "
+             "qualified healthcare professional for medical concerns. In case of "
+             "emergency, call your local emergency services immediately."
+         )
+
+     def get_recommendations(self, red_flags: List[Dict]) -> List[str]:
+         """Get safety recommendations based on detected red flags"""
+         recommendations = []
+
+         if any(flag["severity"] == "EMERGENCY" for flag in red_flags):
+             recommendations.append("🚨 CALL EMERGENCY SERVICES IMMEDIATELY")
+             recommendations.append("Do not wait - this could be life-threatening")
+         elif any(flag["severity"] == "CRISIS" for flag in red_flags):
+             recommendations.append("πŸ“ž Contact a crisis helpline now - help is available 24/7")
+             recommendations.append("National Suicide Prevention Lifeline: 988")
+         elif any(flag["severity"] == "URGENT" for flag in red_flags):
+             recommendations.append("⏰ Seek medical attention within 24 hours")
+             recommendations.append("Consider visiting urgent care or emergency department")
+         else:
+             recommendations.append("βœ… Monitor your symptoms")
+             recommendations.append("Consult a healthcare provider if symptoms worsen")
+             recommendations.append("Keep a symptom diary to share with your doctor")
+
+         return recommendations
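The red-flag scan is a plain case-insensitive regex pass with at most one hit per category. A minimal sketch with a trimmed flag table, assuming the same structure as `RED_FLAGS`:

```python
import re

# Trimmed-down flag table for illustration; same shape as SafetyService.RED_FLAGS
RED_FLAGS = {
    "chest_pain": {
        "patterns": [r"chest pain", r"tightness in chest"],
        "severity": "EMERGENCY",
    },
    "head_injury": {
        "patterns": [r"head injury", r"concussion"],
        "severity": "URGENT",
    },
}

def detect_red_flags(symptoms_text):
    """Return at most one flag per category, matched against lowercased text."""
    detected = []
    lower = symptoms_text.lower()
    for category, data in RED_FLAGS.items():
        for pattern in data["patterns"]:
            if re.search(pattern, lower):
                detected.append({"category": category, "severity": data["severity"]})
                break  # only add once per category
    return detected

# Two chest_pain patterns match, but the break keeps one flag per category
flags = detect_red_flags("Sudden chest pain and tightness in chest after a fall")
print(flags)
```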
services/vector_service.py ADDED
@@ -0,0 +1,123 @@
+ from qdrant_client import QdrantClient
+ from qdrant_client.models import Distance, VectorParams, PointStruct, Filter, FieldCondition, MatchValue
+ from typing import List, Dict, Optional
+ from core.config import settings
+ import uuid
+
+ class VectorService:
+     """Vector database service using Qdrant"""
+
+     def __init__(self):
+         self.client = None
+
+     async def initialize(self):
+         """Initialize Qdrant client and create collection"""
+         print(f"πŸ”— Connecting to Qdrant at {settings.QDRANT_HOST}:{settings.QDRANT_PORT}")
+
+         self.client = QdrantClient(
+             host=settings.QDRANT_HOST,
+             port=settings.QDRANT_PORT,
+             timeout=30
+         )
+
+         # Create collection if it doesn't exist
+         try:
+             collections = self.client.get_collections().collections
+             collection_names = [c.name for c in collections]
+
+             if settings.QDRANT_COLLECTION not in collection_names:
+                 self.client.create_collection(
+                     collection_name=settings.QDRANT_COLLECTION,
+                     vectors_config=VectorParams(
+                         size=384,  # all-MiniLM-L6-v2 dimension
+                         distance=Distance.COSINE
+                     )
+                 )
+                 print(f"βœ… Created Qdrant collection: {settings.QDRANT_COLLECTION}")
+             else:
+                 print(f"βœ… Qdrant collection exists: {settings.QDRANT_COLLECTION}")
+
+         except Exception as e:
+             print(f"❌ Error initializing Qdrant: {e}")
+             raise
+
+     async def insert(
+         self,
+         embeddings: List[List[float]],
+         payloads: List[Dict]
+     ) -> int:
+         """Insert embeddings with metadata"""
+         points = [
+             PointStruct(
+                 id=str(uuid.uuid4()),
+                 vector=embedding,
+                 payload=payload
+             )
+             for embedding, payload in zip(embeddings, payloads)
+         ]
+
+         self.client.upsert(
+             collection_name=settings.QDRANT_COLLECTION,
+             points=points
+         )
+
+         return len(points)
+
+     async def search(
+         self,
+         query_embedding: List[float],
+         top_k: int = 5,
+         filters: Optional[Dict] = None
+     ) -> List[Dict]:
+         """Search for similar vectors"""
+
+         # Build filter if provided
+         search_filter = None
+         if filters:
+             conditions = []
+             for key, value in filters.items():
+                 conditions.append(
+                     FieldCondition(
+                         key=key,
+                         match=MatchValue(value=value)
+                     )
+                 )
+             search_filter = Filter(must=conditions)
+
+         results = self.client.search(
+             collection_name=settings.QDRANT_COLLECTION,
+             query_vector=query_embedding,
+             limit=top_k,
+             query_filter=search_filter
+         )
+
+         return [
+             {
+                 "id": result.id,
+                 "score": result.score,
+                 "payload": result.payload
+             }
+             for result in results
+         ]
+
+     async def delete_by_id(self, point_id: str):
+         """Delete a point by ID"""
+         self.client.delete(
+             collection_name=settings.QDRANT_COLLECTION,
+             points_selector=[point_id]
+         )
+
+     async def get_collection_info(self) -> Dict:
+         """Get collection information"""
+         info = self.client.get_collection(settings.QDRANT_COLLECTION)
+         return {
+             "vectors_count": info.vectors_count,
+             "points_count": info.points_count,
+             "status": info.status
+         }
+
+     async def close(self):
+         """Close the client connection"""
+         if self.client:
+             self.client.close()
+             print("βœ… Qdrant connection closed")
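Conceptually, the cosine search over the 384-dim collection is a similarity ranking; what `search(top_k=...)` returns can be sketched with a brute-force pure-Python version (toy 3-dim vectors here, not the real 384, and no Qdrant server required):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query, points, top_k=5):
    """Rank stored points by cosine similarity to the query, best first."""
    scored = [
        {"id": p["id"], "score": cosine(query, p["vector"]), "payload": p["payload"]}
        for p in points
    ]
    return sorted(scored, key=lambda r: r["score"], reverse=True)[:top_k]

# Toy knowledge base: id + vector + payload, mirroring Qdrant's point shape
points = [
    {"id": "a", "vector": [1.0, 0.0, 0.0], "payload": {"title": "Malaria"}},
    {"id": "b", "vector": [0.0, 1.0, 0.0], "payload": {"title": "Typhoid"}},
    {"id": "c", "vector": [0.9, 0.1, 0.0], "payload": {"title": "Fever"}},
]
print(search([1.0, 0.0, 0.0], points, top_k=2))
```

Qdrant does the same ranking with an approximate index instead of a linear scan, which is what keeps search fast as the knowledge base grows.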
setup.bat ADDED
@@ -0,0 +1,63 @@
+ # #!/bin/bash
+ # echo "πŸš€ Afiya Care Setup"
+ # echo "==================="
+
+ # # Create directories
+ # mkdir -p models alembic/versions
+
+ # # Copy .env
+ # if [ ! -f .env ]; then
+ #     cp .env.example .env
+ #     echo "βœ… Created .env file"
+ # fi
+
+ # # Start Docker services
+ # echo "🐳 Starting services..."
+ # docker-compose up -d postgres qdrant redis
+ # sleep 10
+
+ # # Install dependencies
+ # echo "πŸ“¦ Installing dependencies..."
+ # pip install -r requirements.txt
+
+ # # Run migrations
+ # echo "πŸ—„οΈ Running migrations..."
+ # alembic upgrade head
+
+ # echo "βœ… Setup complete!"
+ # echo "Start backend: uvicorn app.main:app --reload"
+
+
+ # CMD
+
+ @echo off
+ echo πŸš€ Afiya Care Setup
+ echo ===================
+
+ REM --- Create directories ---
+ if not exist models mkdir models
+ if not exist alembic mkdir alembic
+ if not exist alembic\versions mkdir alembic\versions
+
+ REM --- Copy .env if missing ---
+ if not exist .env (
+     copy .env.example .env
+     echo βœ… Created .env file
+ )
+
+ REM --- Start Docker services ---
+ echo 🐳 Starting services...
+ docker-compose up -d postgres qdrant redis
+ timeout /t 10 >nul
+
+ REM --- Install dependencies ---
+ echo πŸ“¦ Installing dependencies...
+ pip install -r requirements.txt
+
+ REM --- Run migrations ---
+ echo πŸ—„οΈ Running migrations...
+ alembic upgrade head
+
+ echo βœ… Setup complete!
+ echo Start backend: uvicorn app.main:app --reload
+ pause