narugo1992 committed:
Export model 'vit_base_patch16_siglip_224.webli', on 2025-01-20 20:49:19 JST
README.md
CHANGED

@@ -79,6 +79,7 @@ base_model:
 - timm/test_vit2.r160_in1k
 - timm/test_vit3.r160_in1k
 - timm/test_vit.r160_in1k
+- timm/vit_base_patch16_siglip_224.webli
 - timm/vit_base_patch16_siglip_256.webli_i18n
 - timm/vit_base_patch16_siglip_384.webli
 - timm/vit_base_patch16_siglip_512.webli
@@ -96,7 +97,6 @@ base_model:
 - timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k
 - timm/vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k
 - timm/vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k
-- timm/vit_mediumd_patch16_reg4_gap_256.sbb_in12k_ft_in1k
 - timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k
 - timm/vit_mediumd_patch16_rope_reg1_gap_256.sbb_in1k
 - timm/vit_pwee_patch16_reg1_gap_256.sbb_in1k
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+902 models exported from TIMM in total.
 
 ## Beit
 
@@ -1240,7 +1240,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+110 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:---|:---|:---|---:|:---|---:|---:|:---|:---|:---|:---|
@@ -1273,6 +1273,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [flexivit_base.600ep_in1k](https://huggingface.co/timm/flexivit_base.600ep_in1k) | 86.4M | 19.3G | 240 | True | 768 | 1000 | imagenet-1k | VisionTransformer | flexivit_base | 2022-12-22 |
 | [flexivit_base.300ep_in1k](https://huggingface.co/timm/flexivit_base.300ep_in1k) | 86.4M | 19.3G | 240 | True | 768 | 1000 | imagenet-1k | VisionTransformer | flexivit_base | 2022-12-22 |
 | [vit_base_patch32_clip_448.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_448.laion2b_ft_in12k_in1k) | 88.2M | 17.2G | 448 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_448 | 2022-11-06 |
+| [vit_base_patch16_siglip_224.webli](https://huggingface.co/timm/vit_base_patch16_siglip_224.webli) | 92.7M | 17.0G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_224 | 2024-12-24 |
 | [vit_base_r50_s16_224.orig_in21k](https://huggingface.co/timm/vit_base_r50_s16_224.orig_in21k) | 85.8M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_r50_s16_224 | 2022-12-23 |
 | [vit_base_patch16_clip_224.openai_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-22 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-10 |
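The README above catalogs timm models exported to ONNX. As a rough sketch of how such an export can be produced (this is not the repo's actual export script; `export_timm_model`, `onnx_dynamic_axes`, and their defaults are illustrative assumptions), using the real `timm` and `torch` libraries:

```python
# Hypothetical sketch of a timm -> ONNX export, matching this repo's
# per-model layout (model.onnx alongside meta.json / preprocess.json).
# Function names and defaults here are illustrative, not from this repo.

def onnx_dynamic_axes(input_name: str = "input", output_name: str = "output") -> dict:
    """Dynamic-batch axes mapping passed to torch.onnx.export."""
    return {input_name: {0: "batch"}, output_name: {0: "batch"}}

def export_timm_model(model_name: str, out_path: str, input_size: int = 224) -> None:
    """Create a pretrained timm model and export it to ONNX at out_path."""
    import timm
    import torch

    model = timm.create_model(model_name, pretrained=True)
    model.eval()
    dummy = torch.randn(1, 3, input_size, input_size)  # NCHW dummy input
    torch.onnx.export(
        model,
        dummy,
        out_path,
        input_names=["input"],
        output_names=["output"],
        dynamic_axes=onnx_dynamic_axes(),
    )
```

For the model in this commit the call would be something like `export_timm_model("vit_base_patch16_siglip_224.webli", "model.onnx", input_size=224)`, with the 224 input size taken from the model name.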
|
models.parquet
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size 
+oid sha256:d7a1f1d1707f7541e27918fbf2489621270a7fa408935ce521016ae98114c696
+size 51926
|
vit_base_patch16_siglip_224.webli/meta.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36865c730a86f5ed56056f1d74f8104f66e50c27f7a1a64dbc92244aa124560e
+size 484
vit_base_patch16_siglip_224.webli/model.onnx
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea225826fb39bb0df8f2631dd8af871d03eae4f2d510829f9188740e58097d55
+size 371719190
vit_base_patch16_siglip_224.webli/preprocess.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c033d894bbd7f7cf6880042701e974ed810733c52b2db8b094efeebf78fed2
+size 642
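The binary files in this commit are stored as Git LFS pointers: a `version` line, the blob's SHA-256 `oid`, and its `size` in bytes. A minimal stdlib-only sketch that builds such a spec-v1 pointer for a local file (handy for checking a downloaded blob against the pointer in the repo; `lfs_pointer` is an illustrative helper, not part of this repo):

```python
# Build a Git LFS spec-v1 pointer (version / oid sha256 / size) for a file.
import hashlib
import os

def lfs_pointer(path: str) -> str:
    """Return the Git LFS pointer text for the file at `path`."""
    sha = hashlib.sha256()
    with open(path, "rb") as fh:
        # Hash in 1 MiB chunks so large blobs (e.g. a 370 MB model.onnx)
        # are not read into memory at once.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            sha.update(chunk)
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{sha.hexdigest()}\n"
        f"size {os.path.getsize(path)}\n"
    )
```

Comparing this output against a pointer file in the repo verifies both the hash and the byte size of a downloaded blob.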