narugo1992 committed on 2025-01-21 14:40:50 JST
Export model 'vit_base_patch16_224.augreg_in21k'
README.md CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+1058 models exported from TIMM in total.
 
 ## Beit
 
@@ -1381,7 +1381,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+125 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -1423,6 +1423,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch32_clip_448.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_448.laion2b_ft_in12k_in1k) | 88.2M | 17.2G | 448 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_448 | 2022-11-06 |
 | [vit_base_patch16_siglip_224.webli](https://huggingface.co/timm/vit_base_patch16_siglip_224.webli) | 92.7M | 17.0G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_224 | 2024-12-24 |
 | [vit_base_r50_s16_224.orig_in21k](https://huggingface.co/timm/vit_base_r50_s16_224.orig_in21k) | 85.8M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_r50_s16_224 | 2022-12-23 |
+| [vit_base_patch16_224.augreg_in21k](https://huggingface.co/timm/vit_base_patch16_224.augreg_in21k) | 102.4M | 16.9G | 224 | True | 768 | 21843 | imagenet-21k-goog | VisionTransformer | vit_base_patch16_224 | 2022-12-22 |
 | [vit_base_patch16_clip_224.openai_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-22 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k) | 94.7M | 16.9G | 224 | True | 768 | 11821 | imagenet-12k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-10 |
 | [vit_base_patch16_224_miil.in21k](https://huggingface.co/timm/vit_base_patch16_224_miil.in21k) | 94.2M | 16.9G | 224 | True | 768 | 11221 | imagenet-21k-miil | VisionTransformer | vit_base_patch16_224_miil | 2022-12-22 |
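The row added to the table above describes the exported `model.onnx`. As a minimal sketch of exercising it — assuming `numpy` and `onnxruntime` are installed and `model.onnx` has been fetched from this repo (it is an LFS file) — the 224 input size, NCHW layout, and 21843-class output follow the table's columns; the input tensor name is read from the session rather than assumed:

```python
# Sketch: run the exported ONNX model on a random dummy batch.
# The 224 input size and 21843 classes come from the README table above;
# the actual file must first be downloaded from this repository.
import numpy as np

def dummy_batch(input_size: int, batch: int = 1) -> np.ndarray:
    """Random float32 NCHW batch matching the table's Input Size column."""
    return np.random.rand(batch, 3, input_size, input_size).astype(np.float32)

def run_onnx(onnx_path: str, x: np.ndarray) -> np.ndarray:
    import onnxruntime as ort  # lazy import: only needed when actually running
    sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name  # read from the model, not assumed
    return sess.run(None, {input_name: x})[0]

# Usage (requires the downloaded file):
# logits = run_onnx("vit_base_patch16_224.augreg_in21k/model.onnx",
#                   dummy_batch(224))
```
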
models.parquet CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:37fe942f30741ee3a31b8cf3210958ea8ee24fb3fe45417d6e6e988d18696276
+size 59064
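The files changed and added in this commit are Git LFS pointer files, not the payloads themselves: three lines giving the spec version, the object's SHA-256 digest, and its byte size. A small parser, as a sketch of that three-field format:

```python
# Sketch: parse a Git LFS pointer file (version / oid / size),
# as seen in the models.parquet diff above.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # "key value" per line
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),  # strip the hash-algo prefix
        "size": int(fields["size"]),
    }

# The new models.parquet pointer from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:37fe942f30741ee3a31b8cf3210958ea8ee24fb3fe45417d6e6e988d18696276
size 59064"""
info = parse_lfs_pointer(pointer)
```
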
vit_base_patch16_224.augreg_in21k/meta.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08be9e9f1669a8284d6834713a7193951b0b8aa8298611055774e383c353104d
+size 3714053
vit_base_patch16_224.augreg_in21k/model.onnx ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54a783133f9736b691fd96f07fce3475a555adb6251bdd479039889375f44bb8
+size 410555950
vit_base_patch16_224.augreg_in21k/preprocess.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c033d894bbd7f7cf6880042701e974ed810733c52b2db8b094efeebf78fed2
+size 642
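Once the real payloads behind these pointers are fetched, the `oid` line is what lets a consumer verify the download: hash the received bytes with SHA-256 and compare hex digests. A sketch (the well-known digest of `b"hello"` stands in for a real file in the test; for the actual files you would use the oids listed above):

```python
# Sketch: verify fetched LFS content against the pointer's sha256 oid.
import hashlib

def sha256_matches(data: bytes, expected_oid: str) -> bool:
    """Return True when the SHA-256 hex digest of `data` equals the oid."""
    return hashlib.sha256(data).hexdigest() == expected_oid

# e.g. for the downloaded model.onnx bytes, check against this commit's oid:
# sha256_matches(onnx_bytes,
#     "54a783133f9736b691fd96f07fce3475a555adb6251bdd479039889375f44bb8")
```
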