KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification
Paper: arXiv:2512.09069
This model is part of the KD-OCT project for retinal OCT image classification using knowledge distillation.
Student Model: Compressed model trained via knowledge distillation from the teacher.
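As a hedged illustration of how a student is trained from a teacher, the standard soft-target distillation loss (Hinton-style KL divergence plus cross-entropy) can be sketched as follows. The temperature `T` and weighting `alpha` below are illustrative defaults, not the paper's reported hyperparameters:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target knowledge-distillation loss: a weighted sum of the
    temperature-softened KL divergence to the teacher and the ordinary
    cross-entropy to the ground-truth labels. T and alpha are
    illustrative values, not the paper's settings."""
    soft_teacher = F.log_softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    distill = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean", log_target=True) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1 - alpha) * hard
```

During training, the teacher's logits are computed with gradients disabled and only the student's parameters are updated.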
```python
import torch
from PIL import Image
from torchvision import transforms

# Load the serialized student model
# (weights_only=False is needed on PyTorch >= 2.6 to load a full model object)
model = torch.load("model.pth", map_location="cpu", weights_only=False)
model.eval()

# Preprocessing: resize to the network input size and apply
# ImageNet normalization statistics
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Prepare image: convert to RGB and add a batch dimension
# ("oct_scan.jpeg" is a placeholder path)
image = transform(Image.open("oct_scan.jpeg").convert("RGB")).unsqueeze(0)

# Inference
with torch.no_grad():
    output = model(image)
    prediction = torch.argmax(output, dim=1)
```
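The predicted index can be mapped to a human-readable label. The class names below follow the common four-class OCT layout (Kermany OCT2017) and are an assumption; verify the label order against the project repository before relying on it:

```python
import torch

# Assumed label order (OCT2017 convention) -- verify against the repo.
CLASS_NAMES = ["CNV", "DME", "DRUSEN", "NORMAL"]

def decode_prediction(prediction: torch.Tensor) -> list[str]:
    """Map a tensor of predicted class indices to label strings."""
    return [CLASS_NAMES[i] for i in prediction.tolist()]
```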
If you use this model, please cite:
```bibtex
@article{nourbakhsh2025kd,
  title={KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification},
  author={Nourbakhsh, Erfan and Sanjari, Nasrin and Nourbakhsh, Ali},
  journal={arXiv preprint arXiv:2512.09069},
  year={2025}
}
```
🔗 GitHub Repository
MIT License