SigLIP 2
Part of the collection: OpenCLIP and timm SigLIP 2 models (47 items)
How to use timm/vit_base_patch16_siglip_384.v2_webli with timm:

import timm
model = timm.create_model("hf_hub:timm/vit_base_patch16_siglip_384.v2_webli", pretrained=True)

How to use timm/vit_base_patch16_siglip_384.v2_webli with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("image-feature-extraction", model="timm/vit_base_patch16_siglip_384.v2_webli")

# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("timm/vit_base_patch16_siglip_384.v2_webli", dtype="auto")

A SigLIP 2 ViT (image encoder only) for timm. Equivalent to the image tower from https://huggingface.co/timm/ViT-B-16-SigLIP2-384.
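For the direct-loading path, a sketch of end-to-end feature extraction, assuming a recent Transformers release with timm-wrapper support so that AutoImageProcessor resolves the model's preprocessing; the dummy image is a stand-in and `pooler_output` is the wrapper's pooled feature field:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "timm/vit_base_patch16_siglip_384.v2_webli"

# The processor applies the same resize/normalize steps as the timm transform
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, dtype="auto").eval()

img = Image.new("RGB", (384, 384))  # blank image stands in for real input
inputs = processor(images=img, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.pooler_output.shape)  # pooled image features, one row per image
```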
@article{tschannen2025siglip,
  title={SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features},
  author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and H{\'e}naff, Olivier and Harmsen, Jeremiah and Steiner, Andreas and Zhai, Xiaohua},
  year={2025},
  journal={arXiv preprint arXiv:2502.14786}
}

@inproceedings{zhai2023sigmoid,
  title={Sigmoid loss for language image pre-training},
  author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={11975--11986},
  year={2023}
}