narugo1992 committed:
Export model 'vit_base_patch32_clip_448.laion2b_ft_in12k_in1k', on 2025-01-20 19:35:13 JST
README.md CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+792 models exported from TIMM in total.
 
 ## Beit
 
@@ -1145,7 +1145,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+99 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -1177,6 +1177,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [flexivit_base.1200ep_in1k](https://huggingface.co/timm/flexivit_base.1200ep_in1k) | 86.4M | 19.3G | 240 | True | 768 | 1000 | imagenet-1k | VisionTransformer | flexivit_base | 2022-12-22 |
 | [flexivit_base.600ep_in1k](https://huggingface.co/timm/flexivit_base.600ep_in1k) | 86.4M | 19.3G | 240 | True | 768 | 1000 | imagenet-1k | VisionTransformer | flexivit_base | 2022-12-22 |
 | [flexivit_base.300ep_in1k](https://huggingface.co/timm/flexivit_base.300ep_in1k) | 86.4M | 19.3G | 240 | True | 768 | 1000 | imagenet-1k | VisionTransformer | flexivit_base | 2022-12-22 |
+| [vit_base_patch32_clip_448.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_448.laion2b_ft_in12k_in1k) | 88.2M | 17.2G | 448 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_448 | 2022-11-06 |
 | [vit_base_r50_s16_224.orig_in21k](https://huggingface.co/timm/vit_base_r50_s16_224.orig_in21k) | 85.8M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_r50_s16_224 | 2022-12-23 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-10 |
 | [vit_base_patch16_224_miil.in21k](https://huggingface.co/timm/vit_base_patch16_224_miil.in21k) | 94.2M | 16.9G | 224 | True | 768 | 11221 | imagenet-21k-miil | VisionTransformer | vit_base_patch16_224_miil | 2022-12-22 |
models.parquet CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:41fe6e2cec73fda4550ea639def9084d543d62b497c3aaa67091286e6d68b2d2
+size 47560
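The committed files above are Git LFS pointer files (per the `git-lfs` spec/v1 format), not the binaries themselves. A minimal sketch of parsing one; `parse_lfs_pointer` is a hypothetical helper written for illustration, not part of any tooling used here:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields.

    Each line of a pointer file is `<key> <value>`, e.g.
    `oid sha256:<hex digest>` and `size <bytes>`.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer committed for models.parquet, copied from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:41fe6e2cec73fda4550ea639def9084d543d62b497c3aaa67091286e6d68b2d2
size 47560"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # prints: 47560
```

The `oid` is the SHA-256 digest of the actual file content, which Git LFS stores out of band; the pointer itself is all that lives in the git history.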
vit_base_patch32_clip_448.laion2b_ft_in12k_in1k/meta.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f93294e81184aaa960b1d3e4f33483011251f017f1adcd3fa802cd5b4a4b102
+size 169874
vit_base_patch32_clip_448.laion2b_ft_in12k_in1k/model.onnx ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87e282d3b7e76927510757e9cc4a6ab93c9753616094b4069eb8667f82947712
+size 353525069
vit_base_patch32_clip_448.laion2b_ft_in12k_in1k/preprocess.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c565446d790de0e73eb5712a7bda9ec157ccbb26d7563358eb63f8ebd8b81b9
+size 736