ELIAS — Eyelid Lesion Intelligent Analysis System

眼瞼疾病智慧分析系統 (Intelligent Eyelid Disease Analysis System)

🏆 Entry in the 2026 Ministry of Economic Affairs (MOEA) Smart Innovation Award, Student Division


Model Description

ELIAS is a clinician-guided deep learning classifier for automated detection of epiblepharon (an eyelid condition in which a skin fold turns the eyelashes inward against the eye) from external eye photographs.

The model uses a frozen ImageNet-pretrained ResNet-18 backbone with a task-specific classification head. The key innovation is the explicit integration of clinician-defined anatomical Regions of Interest (ROI) — specifically the lower eyelid margin and eyelash–cornea interface — as a prior constraint, enabling robust classification in a small-data regime (~80–150 cases per class).

Architecture

```
Input (224×224 RGB)
    │
    ▼
ResNet-18 backbone (frozen, ImageNet pretrained)
    │  layer1 → layer2 → layer3 → layer4
    │  Global Average Pooling → (512,)
    ▼
Dropout(0.3) → Linear(512 → 2)
    │
    ▼
Softmax → [P(control), P(epiblepharon)]
```
| Component | Detail |
|---|---|
| Backbone | ResNet-18 (ImageNet pretrained, fully frozen) |
| Classification head | Dropout(0.3) + Linear(512 → 2) |
| Loss function | CrossEntropyLoss |
| Optimizer | Adam(lr=1e-3), head parameters only |
| Input size | 224 × 224 px, RGB (grayscale → 3-channel conversion applied) |
| Normalization | ImageNet mean/std [0.485, 0.456, 0.406] / [0.229, 0.224, 0.225] |

Performance

Evaluated by stratified 5-fold cross-validation (random_state=42, 20 epochs/fold).

| Metric | Mean (5-fold) |
|---|---|
| AUC | 0.93 |
| Accuracy | High |
| Sensitivity | High |
| Specificity | Moderate |
| F1 score | High |
  • ✅ No fold collapse observed across all 5 folds
  • ✅ Label-shuffling negative control confirmed genuine feature learning
  • ✅ ROI ablation experiments validated lower eyelid margin as primary diagnostic signal
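The cross-validation protocol above can be sketched with scikit-learn. The label array below is a placeholder standing in for the per-image dataset labels (0 = control, 1 = epiblepharon); the per-fold training loop is elided.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = np.array([0] * 80 + [1] * 80)  # placeholder label array
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
    # Each fold preserves the control/epiblepharon ratio of the full dataset.
    pos_ratio = labels[train_idx].mean()
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}, "
          f"positive ratio={pos_ratio:.2f}")
    # for epoch in range(20): ...train the head-only parameters with Adam(lr=1e-3)
```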

ROI Ablation Summary

| Condition | Performance vs. baseline |
|---|---|
| Full image (baseline) | ✅ Optimal |
| ROI ablated (lower eyelid blurred) | ❌ Significant drop |
| Non-ROI ablated (ROI preserved) | ✅ Near-baseline |

Diagnostic features are spatially localized to the clinically defined lower eyelid margin — consistent with clinical examination principles for epiblepharon.
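The ablation conditions above can be sketched as a region blur. The rectangular box standing in for the annotated lower-eyelid ROI, the blur radius, and the placeholder image are all illustrative, not the study's actual annotations.

```python
from PIL import Image, ImageFilter

def ablate_roi(img: Image.Image, box: tuple) -> Image.Image:
    """Return a copy of `img` with the region `box` (left, top, right, bottom)
    Gaussian-blurred, removing fine detail from that region only."""
    out = img.copy()
    blurred = out.crop(box).filter(ImageFilter.GaussianBlur(radius=12))
    out.paste(blurred, box)
    return out

img = Image.new("RGB", (224, 224), color=(120, 90, 70))  # placeholder image
ablated = ablate_roi(img, (40, 140, 184, 200))  # hypothetical eyelid box
```

The non-ROI condition is the complement: blur everything outside the box and leave the ROI intact.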


Grad-CAM Explainability

Grad-CAM heatmaps were generated using native PyTorch hooks on layer4 (no Captum dependency):

  • Epiblepharon cases: Activation consistently focused on lower eyelid margin and eyelash–cornea interface
  • Control cases: Diffuse, anatomically unfocused activation patterns

Heatmap overlay: α = 0.45, JET colormap, bilinear upsampling to 224×224.


iOS On-Device Inference

The trained model has been converted to Apple Core ML format (.mlpackage):

| Property | Value |
|---|---|
| Model size | < 50 MB |
| Inference latency | < 1 second / image |
| Supported devices | iPhone 12 or later (A14+ Neural Engine) |
| Network required | ❌ None (fully on-device) |

Privacy: facial images never leave the device, consistent with PDPA / HIPAA principles.


Training Data

  • Task: Binary classification — epiblepharon vs. control
  • Image type: External eye photographs
  • Dataset size: ~80–150 cases per class (single-center, retrospective)
  • Preprocessing: Resize 224×224, Grayscale→3ch, ColorJitter, RandomHorizontalFlip, ImageNet normalization

⚠️ Clinical images are not distributed in this repository due to patient privacy regulations (Personal Data Protection Act, IRB). For academic collaboration, please contact the corresponding author.


Usage

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load model
model = models.resnet18(weights=None)
for param in model.parameters():
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("pytorch_model.pt", map_location="cpu"))
model.eval()

# Preprocess
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("eye_photo.jpg").convert("RGB")
x = transform(img).unsqueeze(0)  # (1, 3, 224, 224)

with torch.no_grad():
    logits = model(x)
    prob = torch.softmax(logits, dim=1)[0, 1].item()
    print(f"Epiblepharon probability: {prob:.3f}")
```

Files in This Repository

| File | Description |
|---|---|
| README.md | This model card |
| model.py | Model architecture definition |
| train.py | 5-fold cross-validation training script |
| config.json | Model configuration |
| requirements.txt | Python dependencies |
| pytorch_model.pt | (Checkpoint — upload separately after training) |

Intended Use & Limitations

  • Intended use: Research prototype for clinical decision support in epiblepharon screening
  • NOT a validated medical device — prospective evaluation and regulatory assessment required before clinical deployment
  • Single-center retrospective data — generalizability across imaging conditions and demographics requires multi-center validation

Citation

@misc{elias2026,
  title     = {ELIAS: Eyelid Lesion Intelligent Analysis System},
  year      = {2026},
  note      = {2026 MOEA Smart Innovation Award submission},
  url       = {https://huggingface.co/YOUR_HF_USERNAME/ELIAS-epiblepharon}
}

License

MIT License — Source code only. Clinical data excluded.
