KD-OCT: Knowledge Distillation for OCT Image Classification

This model is part of the KD-OCT project for retinal OCT image classification using knowledge distillation.

Model Description

Student Model: a compact network trained via knowledge distillation from a larger teacher network.

Training Details

  • Framework: PyTorch
  • Task: Multi-class classification of retinal OCT images
  • Classes: CNV (choroidal neovascularization), DME (diabetic macular edema), DRUSEN, NORMAL
  • Training Method: Knowledge Distillation
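As a rough illustration of the training method, the standard knowledge-distillation objective combines a soft-target KL term (teacher vs. student logits at temperature T) with the usual hard-label cross-entropy. The sketch below shows this generic loss; the temperature `T` and weight `alpha` are illustrative defaults, not the values used for this model.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic KD loss: alpha * soft-target KL + (1 - alpha) * hard-label CE.

    T and alpha are placeholder hyperparameters, not the ones from the paper.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale by T^2 to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```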

Usage

import torch
from PIL import Image
from torchvision import transforms

# Load the serialized student model (map to CPU for portability)
model = torch.load("model.pth", map_location="cpu")
model.eval()

# Prepare image: 224x224 input with ImageNet normalization
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
# Replace "oct_scan.jpeg" with the path to your OCT B-scan
image = transform(Image.open("oct_scan.jpeg").convert("RGB")).unsqueeze(0)

# Inference
with torch.no_grad():
    output = model(image)
    prediction = torch.argmax(output, dim=1)
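To turn the predicted index into a human-readable label, you can map it onto the four class names. The sketch below assumes the classes are indexed in alphabetical order (the common `ImageFolder` convention); verify the ordering against the training setup before relying on it.

```python
import torch

# Assumed alphabetical class ordering; confirm against the training data layout.
CLASSES = ["CNV", "DME", "DRUSEN", "NORMAL"]

def decode(output):
    """Convert model logits of shape (1, 4) to (class name, softmax confidence)."""
    probs = torch.softmax(output, dim=1)
    conf, idx = probs.max(dim=1)
    return CLASSES[idx.item()], conf.item()
```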

Citation

If you use this model, please cite:

@article{nourbakhsh2025kd,
  title={KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification},
  author={Nourbakhsh, Erfan and Sanjari, Nasrin and Nourbakhsh, Ali},
  journal={arXiv preprint arXiv:2512.09069},
  year={2025}
}

Repository

🔗 GitHub Repository

License

MIT License

