# EDJE: Efficient Discriminative Joint Encoders for Large Scale Vision-Language Reranking
A multimodal vision-language model that combines the SigLIP vision encoder with BERT for efficient image-text matching and retrieval.
## Model Description
This model performs image-text matching and retrieval tasks by fusing visual features from SigLIP with textual representations from BERT.
### Architecture
- Vision Encoder: SigLIP (google/siglip2-base-patch16-224)
- Language Model: BERT (google-bert/bert-base-uncased)
- Fusion: Multimodal projection with optional token compression (see the sketch below)
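The actual fusion module is defined in the repository's `pretrain_model.py`. As a rough illustration of the idea only, here is a minimal sketch of projecting SigLIP patch tokens into the BERT embedding space with optional token compression; the class name, dimensions, and pooling-based compression are assumptions for illustration, not the repository's implementation (only `multimodal_projection_hidden_dim=8192` comes from the usage example below):

```python
import torch
import torch.nn as nn

class VisualProjection(nn.Module):
    """Hypothetical sketch: map SigLIP patch tokens into the BERT embedding
    space, optionally shortening the token sequence by average pooling."""

    def __init__(self, vision_dim=768, text_dim=768, hidden_dim=8192, compress=4):
        super().__init__()
        # Two-layer MLP; hidden_dim mirrors multimodal_projection_hidden_dim.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, text_dim),
        )
        self.compress = compress  # keep one token per `compress` patches

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, vision_dim); num_patches must
        # be divisible by `compress` for this simple pooling scheme.
        if self.compress > 1:
            b, n, d = patch_tokens.shape
            patch_tokens = patch_tokens.view(b, n // self.compress, self.compress, d).mean(dim=2)
        return self.proj(patch_tokens)  # (batch, n // compress, text_dim)
```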
## Usage
```python
import torch
from huggingface_hub import hf_hub_download
from pretrain_model import MultimodalPretrainModel  # from this repository

# Download the trained checkpoint from the Hub
checkpoint_path = hf_hub_download(
    repo_id="shahafw/edje",
    filename="pytorch_model.pth",
)

# Initialize the model architecture
model = MultimodalPretrainModel(
    siglip_path="google/siglip2-base-patch16-224",
    base_language_model_path="google-bert/bert-base-uncased",
    multimodal_projection_hidden_dim=8192,
)

# Load the trained weights
checkpoint = torch.load(checkpoint_path, map_location="cpu")
model.load_state_dict(checkpoint["model"])
model.eval()
```
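With the weights loaded, an image-text pair can be scored for matching. The preprocessing below uses the stock Hugging Face processors for the two encoders; the forward call itself is a hypothetical sketch, since the real signature lives in the repository's `pretrain_model.py`, so adjust the argument names to the actual API:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer

# Stock processors for the vision and language encoders.
image_processor = AutoImageProcessor.from_pretrained("google/siglip2-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

image = Image.open("example.jpg")  # placeholder path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
text = tokenizer("a photo of a dog", return_tensors="pt")

# Hypothetical forward call: check pretrain_model.py for the real signature.
with torch.no_grad():
    outputs = model(
        pixel_values=pixel_values,
        input_ids=text.input_ids,
        attention_mask=text.attention_mask,
    )
```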
## Training

The model was trained with a combination of objectives:

- Image-Text Matching (ITM) loss
- Image-Text Contrastive (ITC) loss (see the sketch below)
- Masked Language Modeling (MLM) loss
- Knowledge distillation
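For reference, the ITC objective is the standard symmetric InfoNCE loss over the matched pairs in a batch. The function below is a generic sketch of that loss, not the repository's exact implementation:

```python
import torch
import torch.nn.functional as F

def itc_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of matched image-text pairs
    (generic sketch of ITC, not the repository's exact code)."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Contrast each image against all texts, and each text against all images.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```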
## Evaluation

The model was evaluated on standard image-text retrieval benchmarks; a sketch of the Recall@K metric follows the list:

- Flickr30k retrieval
- COCO retrieval
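Flickr30k and COCO retrieval results are conventionally reported as Recall@K. Here is a minimal, illustrative implementation over a precomputed query-by-candidate similarity matrix (not the repository's evaluation script):

```python
import torch

def recall_at_k(similarity: torch.Tensor, k: int = 1) -> float:
    """Fraction of queries whose ground-truth match (assumed to sit on the
    diagonal) appears among the top-k retrieved candidates."""
    topk = similarity.topk(k, dim=1).indices                 # (num_queries, k)
    targets = torch.arange(similarity.size(0)).unsqueeze(1)  # (num_queries, 1)
    return (topk == targets).any(dim=1).float().mean().item()
```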
## Citation

```bibtex
@misc{simple-efficient-fusion,
  author    = {Mitchell Keren Taraday and Shahaf Wagner and Chaim Baskin},
  title     = {Simple Efficient Fusion},
  year      = {2025},
  publisher = {HuggingFace},
}
```
## License
BSD-3-Clause