Upload folder using huggingface_hub
- .gitattributes +2 -0
- README.md +60 -0
- fig_accuracy_latency.png +3 -0
- mobileclip_blt_image.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_blt_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_blt_image.mlpackage/Manifest.json +18 -0
- mobileclip_blt_text.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_blt_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_blt_text.mlpackage/Manifest.json +18 -0
- mobileclip_s0_image.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_s0_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_s0_image.mlpackage/Manifest.json +18 -0
- mobileclip_s0_text.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_s0_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_s0_text.mlpackage/Manifest.json +18 -0
- mobileclip_s1_image.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_s1_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_s1_image.mlpackage/Manifest.json +18 -0
- mobileclip_s1_text.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_s1_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_s1_text.mlpackage/Manifest.json +18 -0
- mobileclip_s2_image.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_s2_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_s2_image.mlpackage/Manifest.json +18 -0
- mobileclip_s2_text.mlpackage/Data/com.apple.CoreML/model.mlmodel +3 -0
- mobileclip_s2_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin +3 -0
- mobileclip_s2_text.mlpackage/Manifest.json +18 -0
.gitattributes
CHANGED

@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+mce_example.gif filter=lfs diff=lfs merge=lfs -text
+fig_accuracy_latency.png filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,60 @@
---
library_name: coreml
license: other
license_name: apple-ascl
license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data
datasets:
- apple/DataCompDR-1B
---
# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024) by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, and Oncel Tuzel.

This repository contains the text and image encoders of all MobileCLIP variants exported to Core ML. These Core ML models can be plugged into the demo app provided in the official [MobileCLIP repo](https://github.com/apple/ml-mobileclip).

<img src="mce_example.gif" width="240" height="540" />

### Highlights

* Our smallest variant, `MobileCLIP-S0`, obtains zero-shot performance similar to [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better average zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and was trained on 3x fewer seen samples.
* `MobileCLIP-B (LT)` attains a zero-shot ImageNet top-1 accuracy of **77.2%**, significantly better than recent works such as [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343) with similar architectures, and even better than [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).
## Checkpoints

| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0)       | 13 | 11.4 + 42.4 | 1.5 + 1.6 | 67.8 | 58.1 |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1)       | 13 | 21.5 + 63.4 | 2.5 + 3.3 | 72.6 | 61.3 |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2)       | 13 | 35.7 + 63.4 | 3.6 + 3.3 | 74.4 | 63.7 |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B)         | 13 | 86.3 + 63.4 | 10.4 + 3.3 | 76.8 | 65.2 |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36 | 86.3 + 63.4 | 10.4 + 3.3 | 77.2 | 65.8 |
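The parameter and latency columns report the image and text encoders separately; combined per-model totals can be derived with a few lines of Python (an illustrative sketch, with the numbers copied from the checkpoints table above):

```python
# Per-model (image, text) figures, copied from the checkpoints table above.
SPECS = {
    "MobileCLIP-S0": {"params_m": (11.4, 42.4), "latency_ms": (1.5, 1.6)},
    "MobileCLIP-S1": {"params_m": (21.5, 63.4), "latency_ms": (2.5, 3.3)},
    "MobileCLIP-S2": {"params_m": (35.7, 63.4), "latency_ms": (3.6, 3.3)},
    "MobileCLIP-B":  {"params_m": (86.3, 63.4), "latency_ms": (10.4, 3.3)},
}

def combined(model: str) -> tuple:
    """Return (total params in M, total latency in ms) for one variant."""
    spec = SPECS[model]
    return (round(sum(spec["params_m"]), 1), round(sum(spec["latency_ms"]), 1))

print(combined("MobileCLIP-S0"))  # (53.8, 3.1)
```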
## Download

Install `huggingface-cli`:

```bash
brew install huggingface-cli
```

Then download the Core ML models:

```bash
huggingface-cli download --local-dir models apple/coreml-mobileclip
```
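The repo can also be fetched programmatically with `huggingface_hub`'s `snapshot_download` (shown in a comment, since it pulls several hundred MB). The `package_paths` helper below is a hypothetical sketch that merely builds the expected `.mlpackage` paths for a variant under the chosen local directory:

```python
from pathlib import Path

# Programmatic alternative to the CLI (not executed here):
#   from huggingface_hub import snapshot_download
#   snapshot_download("apple/coreml-mobileclip", local_dir="models")

def package_paths(variant: str, local_dir: str = "models") -> tuple:
    """Expected (image, text) .mlpackage paths for a variant such as 's0' or 'blt'."""
    root = Path(local_dir)
    return (root / f"mobileclip_{variant}_image.mlpackage",
            root / f"mobileclip_{variant}_text.mlpackage")

img, txt = package_paths("s0")
print(img)  # models/mobileclip_s0_image.mlpackage
```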
## Citation

**[MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024)**
*Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.*

```bibtex
@InProceedings{mobileclip2024,
  author    = {Pavan Kumar Anasosalu Vasu and Hadi Pouransari and Fartash Faghri and Raviteja Vemulapalli and Oncel Tuzel},
  title     = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
}
```
fig_accuracy_latency.png
ADDED
Git LFS Details
mobileclip_blt_image.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3acaec5c9eca2f27b7dc6d3bffb19cbb94d34e97cdd8aec70987e4ae7de09fae
+size 136798
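The `.mlmodel` and `weight.bin` entries above are Git LFS pointer files; their three-line format (`version`, `oid`, `size`) can be parsed with a few lines of stdlib Python (an illustrative sketch, not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # 'oid' is prefixed with the hash algorithm, e.g. 'sha256:<hex digest>'
    algo, _, digest = fields["oid"].partition(":")
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3acaec5c9eca2f27b7dc6d3bffb19cbb94d34e97cdd8aec70987e4ae7de09fae
size 136798"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 136798
```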
mobileclip_blt_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c12ec418eadf5d536f11e2e575b26c0d0bbc1270a7080d97f218a0a11595c289
+size 172707392
mobileclip_blt_image.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "9D0B0D8C-A92C-4048-972A-F216AE35ADCD": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        },
+        "F030A6CD-0EE5-41C8-BEEB-87EB2DC00BAB": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        }
+    },
+    "rootModelIdentifier": "F030A6CD-0EE5-41C8-BEEB-87EB2DC00BAB"
+}
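Each `.mlpackage` carries a `Manifest.json` that maps item UUIDs to paths under `Data/`; resolving the root model spec is a dictionary lookup. A minimal sketch, using the manifest just shown:

```python
import json

# The blt_image manifest from above, embedded as a string for illustration.
MANIFEST = json.loads("""{
  "fileFormatVersion": "1.0.0",
  "itemInfoEntries": {
    "9D0B0D8C-A92C-4048-972A-F216AE35ADCD": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Weights",
      "name": "weights",
      "path": "com.apple.CoreML/weights"
    },
    "F030A6CD-0EE5-41C8-BEEB-87EB2DC00BAB": {
      "author": "com.apple.CoreML",
      "description": "CoreML Model Specification",
      "name": "model.mlmodel",
      "path": "com.apple.CoreML/model.mlmodel"
    }
  },
  "rootModelIdentifier": "F030A6CD-0EE5-41C8-BEEB-87EB2DC00BAB"
}""")

def root_model_path(manifest: dict) -> str:
    """Path of the entry named by rootModelIdentifier, relative to Data/."""
    root_id = manifest["rootModelIdentifier"]
    return manifest["itemInfoEntries"][root_id]["path"]

print(root_model_path(MANIFEST))  # com.apple.CoreML/model.mlmodel
```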
mobileclip_blt_text.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a96162d32dcf3ba80c313d6126152a1f70e102a64547e4c8d1f69ce7f9b2384
+size 130291

mobileclip_blt_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd0a34acafc0fd5fd142afd2a0c27aacd16edcce8aa7410870f82541a3fec000
+size 126878848

mobileclip_blt_text.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "33EC9AB6-4248-4192-957C-43971D774DC0": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        },
+        "F9218945-054B-4065-A433-A98BE465BE19": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        }
+    },
+    "rootModelIdentifier": "F9218945-054B-4065-A433-A98BE465BE19"
+}
mobileclip_s0_image.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c1afa132c41c6535817cc67894bd7484bc2cbd084ed5e2f12b24f611af17591
+size 153260

mobileclip_s0_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87d8f63997bbd2f38ba7defeaaa2c571928bdece56aa9629542198b3ce906ed6
+size 22717696

mobileclip_s0_image.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "CD146415-3A4F-4A31-922F-453338F52D17": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        },
+        "F472A9FF-F93F-4BD2-BE9A-100E411F6DE2": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        }
+    },
+    "rootModelIdentifier": "F472A9FF-F93F-4BD2-BE9A-100E411F6DE2"
+}
mobileclip_s0_text.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81eba836ff4dbc8ae021d70006288b533ba7eed3c2973d245b0d5ea047305bfd
+size 57953

mobileclip_s0_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:34723e51445b2630106e94e1fdbebed80e7676b404fb839f4eb9bec97bdcad68
+size 84871616

mobileclip_s0_text.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "5FE5957F-3F75-4966-8C7E-1B7913FD0B76": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        },
+        "9073107D-294B-4CCE-A1A0-7D9A18658D3C": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        }
+    },
+    "rootModelIdentifier": "5FE5957F-3F75-4966-8C7E-1B7913FD0B76"
+}
mobileclip_s1_image.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b1cc781d6d0af08d95d338b083ae6fb97315cc5810037ceb34bc4b19ea41219
+size 277025

mobileclip_s1_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d817354a9b98b17f289d1f3e398c1f21d1f7e659ae04d18aa7f94e5a3283da2
+size 42921984

mobileclip_s1_image.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "0BB3E4FD-3A5B-4392-8864-DF82EE5A68F5": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        },
+        "3B678D65-914C-4986-B205-8D14FBF5B8B2": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        }
+    },
+    "rootModelIdentifier": "0BB3E4FD-3A5B-4392-8864-DF82EE5A68F5"
+}
mobileclip_s1_text.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8651b6d030bae419a9548b41c8fae11f96b59cfa21b6e532a4c4434522b4b80
+size 128127

mobileclip_s1_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d54c20d8c105221d16532ded16765532b015db8d730b00d605252bc03cdd8ff
+size 126866880

mobileclip_s1_text.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "A3B5166A-526B-43E6-8EF1-708E632A1BAF": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        },
+        "B071EF55-9CA6-4A87-B563-305828969206": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        }
+    },
+    "rootModelIdentifier": "B071EF55-9CA6-4A87-B563-305828969206"
+}
mobileclip_s2_image.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2aeb3359f6cde65e9f9248ec2a742e9939bd4bbf48c2f55fcd255b4504d96a1b
+size 299056

mobileclip_s2_image.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cbc7fb06b6072c1cae9c4496d67e0e6217adbf726dfeb82e44d4efe87c34c00
+size 71397632

mobileclip_s2_image.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "0FC07BEB-5990-4420-986E-A2C6804FBB52": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        },
+        "58408CDA-22F8-492C-A7B9-E1172DBD1283": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        }
+    },
+    "rootModelIdentifier": "58408CDA-22F8-492C-A7B9-E1172DBD1283"
+}
mobileclip_s2_text.mlpackage/Data/com.apple.CoreML/model.mlmodel
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8651b6d030bae419a9548b41c8fae11f96b59cfa21b6e532a4c4434522b4b80
+size 128127

mobileclip_s2_text.mlpackage/Data/com.apple.CoreML/weights/weight.bin
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e8d5454f104b6cbb58d98bf11e038ff1f1943599efea111260a832f094cd0ce
+size 126866880

mobileclip_s2_text.mlpackage/Manifest.json
ADDED

@@ -0,0 +1,18 @@
+{
+    "fileFormatVersion": "1.0.0",
+    "itemInfoEntries": {
+        "5DAD6BBB-8BD5-4AA7-BF03-1B66E983B9DE": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Weights",
+            "name": "weights",
+            "path": "com.apple.CoreML/weights"
+        },
+        "6EA38C9B-AEDD-4AD0-B0FA-AC98337A4C29": {
+            "author": "com.apple.CoreML",
+            "description": "CoreML Model Specification",
+            "name": "model.mlmodel",
+            "path": "com.apple.CoreML/model.mlmodel"
+        }
+    },
+    "rootModelIdentifier": "6EA38C9B-AEDD-4AD0-B0FA-AC98337A4C29"
+}