MEDiC: Multi-objective Exploration of Distillation from CLIP
Paper: arXiv 2603.29009
This model was trained using the MEDiC codebase, implementing the method from "MEDiC: Multi-objective Exploration of Distillation from CLIP".
MEDiC extends MaskDistill by combining three complementary training objectives (named Token, Pixel, and CLS in the ablation below):
- Token distillation: the student's masked patch tokens regress the CLIP teacher's patch features.
- Pixel reconstruction: masked patches are additionally reconstructed in pixel space.
- CLS distillation: the student's global [CLS] token is aligned with the teacher's image-level embedding.

Architecture: ViT-Base/16 student (224×224 input, patch size 16, 768-dim embeddings, 12 layers, 12 heads), distilled from a CLIP teacher.
Results:

| Evaluation | Result |
|---|---|
| k-NN (k=20) | 73.9% top-1 |
| Linear Probe | 60.5% top-1 |
| Finetuning (ImageNet-1K) | 85.1% top-1 |
| Sem. Seg. (ADE20K, UPerNet) | 52.5 mIoU |
Ablation of the three objectives (k-NN, k=20, top-1):

| Configuration | k-NN top-1 |
|---|---|
| Token only | 68.6% |
| + Pixel | 71.4% |
| + CLS | 72.3% |
| + Pixel + CLS (MEDiC) | 73.9% |
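The ablation above corresponds to summing the three objectives. A minimal sketch of how such a combined loss could be formed (the loss weights and the cosine/MSE choices here are illustrative assumptions, not the repository's exact settings):

```python
import torch
import torch.nn.functional as F

def medic_loss(student_tokens, teacher_tokens, pred_pixels, target_pixels,
               student_cls, teacher_cls, w_token=1.0, w_pixel=1.0, w_cls=1.0):
    """Combine token, pixel, and CLS objectives (illustrative weighting)."""
    # Token distillation: align masked student patch tokens with teacher features
    token_loss = 1 - F.cosine_similarity(student_tokens, teacher_tokens, dim=-1).mean()
    # Pixel reconstruction: regress raw pixel values of masked patches
    pixel_loss = F.mse_loss(pred_pixels, target_pixels)
    # CLS distillation: align global image-level embeddings
    cls_loss = 1 - F.cosine_similarity(student_cls, teacher_cls, dim=-1).mean()
    return w_token * token_loss + w_pixel * pixel_loss + w_cls * cls_loss
```

With equal weights the three terms contribute symmetrically; the table suggests each term adds measurable k-NN accuracy on top of token-only distillation.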
Usage:

```python
import torch
from src.models.vision_transformer import VisionTransformerMIM

# Build the ViT-Base/16 student backbone
model = VisionTransformerMIM(
    img_size=224, patch_size=16, embed_dim=768, depth=12, num_heads=12,
    use_abs_pos_emb=True, use_mask_tokens=False,
)

# Load the pretrained checkpoint and keep only the student weights,
# stripping the DDP/student prefix from parameter names
ckpt = torch.load("medic_vit_base_ep299.pth", map_location="cpu")
state = {k.replace("module.student.", ""): v
         for k, v in ckpt["model"].items() if "student" in k}
# strict=False: teacher and projection-head keys are absent from the backbone
model.load_state_dict(state, strict=False)
```
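The k-NN result in the table comes from classifying frozen features by similarity to the training set. A self-contained sketch of k-NN evaluation with k=20 under cosine similarity, run here on synthetic features (the normalization and voting details are assumptions, not the repository's exact protocol):

```python
import numpy as np

def knn_predict(train_feats, train_labels, test_feats, k=20):
    """Classify each test feature by majority vote among its k nearest
    training features under cosine similarity."""
    # L2-normalize so the dot product equals cosine similarity
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = test @ train.T                    # (n_test, n_train) similarity matrix
    idx = np.argsort(-sims, axis=1)[:, :k]   # indices of top-k neighbors
    votes = train_labels[idx]                # (n_test, k) neighbor labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Tiny synthetic example: two well-separated feature clusters
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(1.0, 0.1, (50, 8)), rng.normal(-1.0, 0.1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
test = np.vstack([rng.normal(1.0, 0.1, (10, 8)), rng.normal(-1.0, 0.1, (10, 8))])
pred = knn_predict(train, labels, test, k=20)
```

In the real evaluation the features would be the backbone's outputs on ImageNet train/val images rather than synthetic vectors.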
See the GitHub repo for full training and evaluation code.
Citation:

```bibtex
@article{georgiou2025medic,
  title={MEDiC: Multi-objective Exploration of Distillation from CLIP},
  author={Georgiou, Kostas},
  journal={arXiv preprint arXiv:2603.29009},
  year={2025}
}
```