EUPE-transformers
This repository contains a converted EUPE checkpoint (from the original Facebook release) in safetensors format, prepared under BiliSakura for downstream upload and reuse.
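Because the converted weights ship as a single safetensors file, the tensor names and shapes can be listed without loading the model. Below is a stdlib-only sketch; the official `safetensors` library is the recommended way to do this in practice. The header layout used here (an 8-byte little-endian length followed by a JSON table) follows the published safetensors format spec.

```python
# Minimal sketch: read tensor names/shapes from a .safetensors file using only
# the standard library. Per the safetensors spec, the file begins with an
# 8-byte little-endian unsigned header length, followed by a JSON header
# mapping each tensor name to {"dtype", "shape", "data_offsets"}.
import json
import struct

def read_safetensors_header(path):
    """Return the tensor table from a safetensors file (metadata excluded)."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return {k: v for k, v in header.items() if k != "__metadata__"}

# Example: list tensor names and shapes in the converted checkpoint.
# for name, info in read_safetensors_header("model.safetensors").items():
#     print(name, info["dtype"], info["shape"])
```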
Files:

- `model.safetensors`: converted checkpoint weights
- `config.json`: architecture/config parameters
- `preprocessor_config.json`: image preprocessing setup
- `transformers_eupe.py`: local EUPE Transformers registration wrapper
- `eupe/`: vendored ConvNeXt backbone used by `transformers_eupe.py`

The backbone is ConvNeXt-Small with a 768-dimensional embedding. `preprocessor_config.json` uses:

- Resize: 256 × 256
- Rescale factor: 1/255
- Normalization mean: [0.485, 0.456, 0.406]
- Normalization std: [0.229, 0.224, 0.225]

Usage:

```python
import sys

import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_dir = "./EUPE-ConvNeXt-S"
sys.path.insert(0, model_dir)

from transformers_eupe import register_eupe_transformers

register_eupe_transformers()

processor = AutoImageProcessor.from_pretrained(model_dir)
model = AutoModel.from_pretrained(model_dir).eval()

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape, outputs.pooler_output.shape)
```
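As a sanity check on the preprocessing values listed above, the rescale-then-normalize step can be reproduced by hand. This pure-Python sketch mirrors what the image processor does per channel (illustrative only; the real processor operates on full resized tensors):

```python
# Reproduce the preprocessor's pixel transform from preprocessor_config.json:
# rescale by 1/255, then normalize with the ImageNet mean/std per channel.
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Apply rescale (1/255) and per-channel normalization to one RGB pixel."""
    return [((c / 255.0) - m) / s for c, m, s in zip(rgb, MEAN, STD)]

# A pixel near the ImageNet mean color maps to values close to zero.
print(normalize_pixel([124, 116, 104]))
```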
If you use this model, please cite EUPE:
```bibtex
@misc{zhu2026eupe,
  title={Efficient Universal Perception Encoder},
  author={Zhu, Chenchen and Suri, Saksham and Jose, Cijo and Oquab, Maxime and Szafraniec, Marc and Wen, Wei and Xiong, Yunyang and Labatut, Patrick and Bojanowski, Piotr and Krishnamoorthi, Raghuraman and Chandra, Vikas},
  year={2026},
  eprint={2603.22387},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.22387},
}
```