narugo1992 committed on 2025-01-20 20:03:59 JST

Export model 'vit_base_patch16_224.mae'

Files changed:
- README.md (+3 -2)
- models.parquet (+2 -2)
- vit_base_patch16_224.mae/meta.json (+3 -0)
- vit_base_patch16_224.mae/model.onnx (+3 -0)
- vit_base_patch16_224.mae/preprocess.json (+3 -0)
README.md
CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+834 models exported from TIMM in total.
 
 ## Beit
 
@@ -1177,7 +1177,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+107 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -1226,6 +1226,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch16_224.augreg_in1k](https://huggingface.co/timm/vit_base_patch16_224.augreg_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_224 | 2022-12-22 |
 | [vit_base_patch16_224.augreg2_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch16_224.augreg2_in21k_ft_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_224 | 2022-12-22 |
 | [vit_base_patch16_224.orig_in21k](https://huggingface.co/timm/vit_base_patch16_224.orig_in21k) | 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_224 | 2023-11-17 |
+| [vit_base_patch16_224.mae](https://huggingface.co/timm/vit_base_patch16_224.mae) | 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_224 | 2023-05-10 |
 | [vit_base_patch16_224.dino](https://huggingface.co/timm/vit_base_patch16_224.dino) | 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_224 | 2022-12-22 |
 | [vit_base_patch16_rpn_224.sw_in1k](https://huggingface.co/timm/vit_base_patch16_rpn_224.sw_in1k) | 86.4M | 16.8G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_rpn_224 | 2022-12-22 |
 | [vit_base_patch16_siglip_gap_224.webli](https://huggingface.co/timm/vit_base_patch16_siglip_gap_224.webli) | 85.6M | 16.8G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_gap_224 | 2024-12-24 |
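The rows in the README diff above are pipe-delimited Markdown table cells. As a minimal sketch (the `parse_row` helper and `COLUMNS` list here are illustrative, not part of the repository), one row can be split into named fields like this:

```python
# Parse one row of the README's markdown model table into a dict.
# COLUMNS follows the header row shown in the diff above.
COLUMNS = ["Name", "Params", "Flops", "Input Size", "Can Classify",
           "Features", "Classes", "Dataset", "Model", "Architecture", "Created At"]

def parse_row(row: str) -> dict:
    # Strip the leading/trailing pipes, then split into cells.
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    return dict(zip(COLUMNS, cells))

row = ("| [vit_base_patch16_224.mae](https://huggingface.co/timm/vit_base_patch16_224.mae) "
       "| 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer "
       "| vit_base_patch16_224 | 2023-05-10 |")
fields = parse_row(row)
print(fields["Params"])      # 85.6M
print(fields["Created At"])  # 2023-05-10
```

Note this naive split works only because none of the cell values in this table contain an escaped pipe.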
models.parquet
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e58f4f32d5dfd780773dbd52fa0c86f23a2e396164caecd4f59cb57394230ec8
+size 49371
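`models.parquet` is stored via Git LFS, so the diff above changes only the pointer file: three `key value` lines (`version`, `oid`, `size`). A minimal sketch of parsing such a pointer, assuming the three-line format shown:

```python
# Parse a Git LFS pointer file (the three "key value" lines shown above).
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e58f4f32d5dfd780773dbd52fa0c86f23a2e396164caecd4f59cb57394230ec8
size 49371
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 49371
```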
vit_base_patch16_224.mae/meta.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c26afb19ecbbadd301f46e7f19cfef717dd56ab1228e9f7e9c8ba9798998578c
+size 459
vit_base_patch16_224.mae/model.onnx
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8443449ec6f094864d924a566b4b2be4866ead8c86a7558990e95d8b387721d
+size 343366700
vit_base_patch16_224.mae/preprocess.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c893c9365d4dd7675e5a744c55cbf3af06c8aeeabcbe2db46ba48f5fae256c5
+size 734
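The `oid sha256:` line in each pointer records the SHA-256 digest of the actual file content, which a client can recompute to verify a download. A minimal sketch of producing an oid in LFS's format, on stand-in bytes rather than the real 343 MB `model.onnx`:

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    # Git LFS identifies content as "sha256:" + hex SHA-256 digest of the bytes.
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Stand-in payload: a real check would read e.g. model.onnx in binary mode
# and compare the result against the oid line in its pointer file.
payload = b"example payload"
print(lfs_oid(payload))
```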