---
title: Dermatolog AI Scan
emoji: π
colorFrom: blue
colorTo: green
sdk: docker
pinned: true
---
# Dermatolog AI Scan
A privacy-first, free, and easy-to-use dermatology scan app powered by the latest AI models.
## Live Demo

- Live Demo (Hugging Face)
- Blog
## Features
- Local Models: Direct interface with the MedSigLIP model, running locally or on Cloud Run.
- Lesion Detection: Uses YOLOv8-Nano to automatically identify and localise skin lesions for optimized preprocessing.
- Session-based Photo Management:
  - Local-Only Storage: Images are processed and stored entirely within your browser's memory using DataURLs. No image files are ever written to the server's disk, ensuring maximum patient privacy.
  - Drag & Drop Upload: Upload multiple images easily.
  - Clipboard Paste Support: Paste images directly from your clipboard (Ctrl+V) to preview them instantly.
  - Smart Timeline: Photos are automatically grouped into "Virtual Directories" based on their creation date (extracted from EXIF).
  - Privacy: All data is scoped to your browser session.
- Zero-Shot Dermatology Analysis:
  - Uses Google Health's MedSigLIP (`google/medsiglip-448`) model for localized analysis.
  - Classifies images against a focused set of 12 dermatological conditions relevant to EU medical practice.
  - Rationale: The label set focuses on high-mortality cancers (Melanoma), high-prevalence conditions (Eczema, Acne), and common differential diagnoses to aid in effective triage.
- Explainable AI:
  - Generates Grad-CAM heatmaps (saliency maps) to visually indicate the specific areas of an image the AI focused on when making a prediction.
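The Smart Timeline grouping described above can be sketched in a few lines of Python. This is a minimal illustration under assumed names (`group_by_exif_date` is not the app's actual API); the timestamps follow the standard EXIF `YYYY:MM:DD HH:MM:SS` format that cameras write into `DateTimeOriginal`:

```python
from collections import defaultdict
from datetime import datetime

def group_by_exif_date(photos):
    """Group (filename, exif_datetime) pairs into 'virtual directories' by day.

    `exif_datetime` uses the EXIF format 'YYYY:MM:DD HH:MM:SS';
    photos without EXIF data (None) fall into an 'undated' bucket.
    """
    groups = defaultdict(list)
    for name, stamp in photos:
        if stamp is None:
            groups["undated"].append(name)
            continue
        day = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S").date().isoformat()
        groups[day].append(name)
    return dict(groups)

photos = [
    ("mole_a.jpg", "2024:05:01 09:13:22"),
    ("mole_b.jpg", "2024:05:01 17:40:05"),
    ("arm.jpg", "2024:06:12 08:00:00"),
    ("scan.png", None),
]
print(group_by_exif_date(photos))
```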
## Confidence & Interpretation Logic
The application uses specialized logic to convert raw model scores into clinical insights:
- Cancerous Tumor Consolidation: If the top-ranked results are malignant tumor diseases (Melanoma, Basal Cell Carcinoma, Squamous Cell Carcinoma, Bowen's Disease), the confidence margin is calculated as the difference between the sum of these top tumor scores and the first non-tumor result. This ensures high confidence is reported when the AI is certain of malignancy, even if it is debating the specific tumor subtype.
- Predictive Entropy: The system calculates Shannon Entropy across all predictions. If entropy is high (e.g., above 2.0 bits), the result is flagged as unreliable regardless of the top score.
- Interpretation Margin: For mixed cases (Tumor vs. Non-Tumor), if the margin is below the configurable threshold (default 5%), the application flags the result as "Not clear" to prompt manual review.
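The three rules above can be sketched as a single interpretation function. This is a minimal illustration, not the app's actual code: the tumor label set, the 2.0-bit entropy cutoff, and the 5% margin default come from the description, while the function and variable names are illustrative. Predictions are assumed to arrive as (label, probability) pairs sorted by score.

```python
import math

TUMOR_LABELS = {"Melanoma", "Basal Cell Carcinoma",
                "Squamous Cell Carcinoma", "Bowen's Disease"}

def interpret(predictions, margin_threshold=0.05, entropy_threshold=2.0):
    """predictions: list of (label, probability), sorted descending."""
    # Predictive entropy (Shannon, in bits) across all predictions.
    entropy = -sum(p * math.log2(p) for _, p in predictions if p > 0)
    if entropy > entropy_threshold:
        return "Unreliable"

    # Cancerous tumor consolidation: sum the contiguous run of top-ranked
    # tumor scores, then take the margin against the first non-tumor result.
    tumor_sum, first_non_tumor = 0.0, None
    for label, p in predictions:
        if first_non_tumor is not None:
            break
        if label in TUMOR_LABELS:
            tumor_sum += p
        else:
            first_non_tumor = p
    if first_non_tumor is None:
        first_non_tumor = 0.0

    if tumor_sum > 0:
        margin = tumor_sum - first_non_tumor
    else:
        # No tumor on top: plain top-1 vs. top-2 margin.
        margin = predictions[0][1] - predictions[1][1]

    return "Not clear" if margin < margin_threshold else "Confident"

ranked = [("Melanoma", 0.45), ("Basal Cell Carcinoma", 0.35),
          ("Eczema", 0.15), ("Acne Vulgaris", 0.05)]
print(interpret(ranked))  # → Confident
```

Note how the consolidation rule fires here: Melanoma and BCC together hold 0.80 of the mass, so the margin over Eczema (0.15) is large even though neither tumor subtype dominates alone.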
## Supported Dermatological Conditions
The system is tuned to detect the following conditions based on EU referral guidelines and prevalence statistics:
| Category | Conditions | Rationale |
|---|---|---|
| Malignant / Pre-malignant | Melanoma, Basal Cell Carcinoma (BCC) | Priority for early detection due to mortality risk (Melanoma) or high prevalence impacting healthcare resources (BCC). |
| Inflammatory | Psoriasis, Atopic Dermatitis (Eczema), Acne Vulgaris, Rosacea | Represents the highest burden of disease on quality of life in the EU population. |
| Infectious | Herpes Zoster (Shingles), Warts, Molluscum Contagiosum | Contagious nature requires accurate identification, often with distinct morphologies. |
| Benign / Differential | Melanocytic Nevus, Seborrheic Keratosis | Crucial for distinguishing from malignant lesions to reduce unnecessary anxiety and referrals. |
| Baseline | Normal Skin | Provides a control basis for healthy skin. |
## Privacy & Security
Dermatolog AI Scan is built with a Privacy-First architecture:
- Browser-Side Image Handling: When you select an image, it is read by the `FileReader` API and converted to a Base64 DataURL.
- No Server-Side Persistence: The backend receives the image data only for the duration of the analysis request. It processes the image in-memory and returns the results. No temporary or permanent image files are created on the server's filesystem.
- Local Memory State: Image data is pinned to the JavaScript state of your current browser tab. Refreshing the page or closing the tab clears the local image memory.
- Session Isolation: Each user is assigned a unique, random session ID to isolate their requests and analysis cache.
## Getting Started

### Prerequisites
- Docker and Docker Compose installed.
- VS Code with the Dev Containers extension.
- Node.js (v18+) and npm (for frontend tests).
### Development Setup
The project is designed to be developed inside a Dev Container. This ensures a consistent environment with all dependencies pre-installed.
Clone the Repository:
git clone <repository-url> cd dermatolog-ai-scanHuggingFace Configuration: Access to the MedSigLIP model is gated. You must provide a token in your
.envfile to download/load the model.Environment Variables (
.env):Create a
.envfile in the root directory to store configuration variables. This file is automatically loaded by:- Docker Compose: Used to populate
environment:variables indocker-compose.yml. - Development Container: To set workspace environment variables.
- Deployment Script:
bin/deploy.shreadsPROJECT_IDfrom this file.
Template
.env:# GCP Project Configuration (for deployment) PROJECT_ID=your-gcp-project-id REGION=europe-west1 REPOSITORY=repo-name SERVICE_NAME=app-name # --- Hardware (Optional Overrides) --- MEMORY=8Gi CPU=4 # Optional: HuggingFace Token for Gated Models (Local MedSigLIP) HF_TOKEN=your_hf_tokenTo obtain
HF_TOKENforgoogle/medsiglip-448:- Create a Hugging Face account.
- Visit the google/medsiglip-448 model page and check if you need to accept a license agreement (gated access).
- Go to your Settings > Access Tokens page.
- Create a new token with Read permissions.
- Copy the token and paste it into your
.envfile asHF_TOKEN.
- Docker Compose: Used to populate
Start Dev Container:
- Open the folder in VS Code.
- When prompted, click "Reopen in Container" (or run standard command
Dev Containers: Reopen in Container).
VS Code will build the container and install all dependencies defined in
requirements-dev.txtandpackage.json.CLI Alternative: If you cannot find the "Rebuild" option in the UI, you can force a rebuild of the environment from your local terminal:
# Ensure HF_TOKEN is exported for the build export HF_TOKEN=$(cat .env | grep HF_TOKEN | cut -d'=' -f2) docker compose up -d --buildInside the integrated terminal of VS Code (running in the container):
source venv/bin/activate # If using a local virtual environment npm install # If not run automatically uvicorn app.main:app --reload --host 0.0.0.0 --port 8080- The API will be available at: http://localhost:8080 (docs at http://localhost:8080/docs/)
- Frontend: http://localhost:8080/
- Debug Mode: Append
?debugto the URL (e.g., http://localhost:8080/?debug) to reveal detailed model logs, execution timers, saliency maps, and preprocessing calibration settings.
## Running with Docker (Manual)
If you prefer to run the container manually (outside VS Code):
1. Build the Image:

   You MUST pass your `HF_TOKEN` as a build argument to download the gated model.

   ```bash
   # Load the token from .env, or export it however your environment requires
   export HF_TOKEN=your_token_here
   docker build --build-arg HF_TOKEN=$HF_TOKEN -t medgemma-app .
   ```

2. Run the Container: Pass the token as an environment variable for runtime checks (optional if baked into the image, but recommended).

   ```bash
   docker run -p 8080:8080 -e HF_TOKEN=$HF_TOKEN medgemma-app
   ```
## Running Tests
We use pytest for Python tests and Jest for JavaScript unit tests.
- Unit Tests: Test the core logic (image preprocessing, result interpretation, API endpoints) with external dependencies like AI models mocked for speed.

  ```bash
  pytest tests/unit
  ```

- YOLO Tests: Specifically test the lesion detection and cropping logic.

  ```bash
  pytest tests/yolo_tests
  ```

- Running All Backend Tests: You can run the core logic and YOLO tests together:

  ```bash
  pytest tests/unit tests/yolo_tests
  ```

  Note on YOLO State: Currently, both unit and YOLO tests mock the actual YOLOv8 inference. This allows the test suite to run in seconds without requiring model weights or high-performance hardware.

- Integration & E2E Tests: These tests interact with a live server and require `playwright` for browser automation.

  ```bash
  # Run UI and E2E tests
  pytest tests/test_ui.py tests/test_e2e_local.py
  ```

- JavaScript Unit Tests:

  ```bash
  npm test
  ```
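The mocked-YOLO approach mentioned above can be illustrated with `unittest.mock`. This is a hedged sketch, not the project's test code: `crop_largest` and the detector's return shape `[(x1, y1, x2, y2, confidence), ...]` are assumptions made for the example.

```python
from unittest.mock import MagicMock

def crop_largest(image_size, detector):
    """Return the box of the largest detected lesion; `detector` mimics a
    YOLO model returning [(x1, y1, x2, y2, confidence), ...]."""
    boxes = detector(image_size)
    if not boxes:
        return (0, 0, *image_size)  # fall back to the full image
    x1, y1, x2, y2, _ = max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    return (x1, y1, x2, y2)

# Stand-in for YOLOv8: no weights, no GPU, instant results.
fake_yolo = MagicMock(return_value=[(10, 10, 50, 60, 0.9), (0, 0, 20, 20, 0.8)])
assert crop_largest((448, 448), fake_yolo) == (10, 10, 50, 60)
fake_yolo.assert_called_once_with((448, 448))
```

Because the mock stands in for the whole model, such a test exercises only the cropping logic, which is what keeps the suite fast.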
## API Documentation
Since the application is built on FastAPI, it automatically provides an interactive API documentation interface.
Once the server is running, simply navigate to the following URL in your browser to explore the API, view schemas, and test endpoints directly:
- Swagger UI: http://localhost:8080/docs
- OpenAPI JSON: http://localhost:8080/openapi.json
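Since the frontend submits images as Base64 DataURLs (mirroring the browser's `FileReader` output), a non-browser client would build its request body the same way. A sketch in Python: the field names and any endpoint path are illustrative assumptions, so check the Swagger UI above for the actual schema.

```python
import base64
import json

def make_payload(image_bytes, session_id, mime="image/png"):
    """Encode raw image bytes as a DataURL and wrap them in a JSON body.
    The 'session_id' and 'image' field names are hypothetical."""
    data_url = f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"session_id": session_id, "image": data_url})

# PNG magic bytes stand in for a real image file here.
body = make_payload(b"\x89PNG\r\n\x1a\n", "demo-session")
print(body[:60])
```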
## Legal & Regulatory Status

### Not Medical Advice
Dermatolog AI Scan is for educational and research purposes only. The results generated by the AI models are NOT medical advice. They are not intended to be used for clinical diagnosis, patient management, or treatment decisions. Always seek the advice of a physician or other qualified health provider with any questions you may have regarding a medical condition.
### Non-Regulated Software

This application is NOT a regulated medical device. It has NOT been reviewed, cleared, or approved by:
- FDA (U.S. Food and Drug Administration)
- European Health Authorities (no CE mark)
### HAI-DEF Terms of Use

The Google Health MedSigLIP model used in this application is part of the Health AI Developer Foundations (HAI-DEF). Its use is subject to the Health AI Developer Foundations Terms of Use.
## Deployment
The application is containerized and can be deployed to Google Cloud Run, AWS, or Kubernetes.
See DEPLOY.md for full deployment instructions.