---
license: other
license_name: apple-ascl
license_link: LICENSE
library_name: mobileclip
---

# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training (Converted to CoreML)

MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024) by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, and Oncel Tuzel.

This repository contains [MobileCLIP-S0](https://huggingface.co/apple/MobileCLIP-S0) split into separate text and image encoders, which were converted to CoreML using [this Notebook](https://huggingface.co/Norod78/CoreML-MobileCLIP-S0/blob/main/PyTorch2CoreML-mobileclip.ipynb).

# Usage example

For an example of how to use these converted models in Swift, please refer to [Queryable-MC](https://github.com/Norod/Queryable-mobileclip).
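
Loading the two converted encoders from an app bundle can be sketched as below. This is a minimal sketch, not the Queryable-MC implementation; the model file names (`clip_image_s0`, `clip_text_s0`) are assumptions — check the actual names produced by the conversion notebook, and inspect each model's input/output feature descriptions in Xcode before calling `prediction(from:)`.

```swift
import CoreML

// Minimal sketch of loading the converted encoders on-device.
// NOTE: the resource names below are assumptions, not the repository's
// actual file names — adjust them to match the converted models.
func loadEncoders() throws -> (image: MLModel, text: MLModel) {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // allow CPU, GPU, and Neural Engine

    // When an .mlpackage is added to an Xcode target it is compiled to an
    // .mlmodelc bundle, which is what MLModel loads at runtime.
    guard let imageURL = Bundle.main.url(forResource: "clip_image_s0",
                                         withExtension: "mlmodelc"),
          let textURL = Bundle.main.url(forResource: "clip_text_s0",
                                        withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let imageEncoder = try MLModel(contentsOf: imageURL, configuration: config)
    let textEncoder = try MLModel(contentsOf: textURL, configuration: config)
    return (imageEncoder, textEncoder)
}
```

At inference time, feed a preprocessed image to the image encoder and tokenized text to the text encoder via `prediction(from:)`, then rank matches by cosine similarity between the two embedding vectors.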