narugo1992 committed on 2025-01-20

Export model 'vit_small_patch32_224.augreg_in21k_ft_in1k', on 2025-01-20 20:49:50 JST
README.md CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+903 models exported from TIMM in total.
 
 ## Beit
 
@@ -1240,7 +1240,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+111 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -1343,6 +1343,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_small_r26_s32_384.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_r26_s32_384.augreg_in21k_ft_in1k) | 22.5M | 3.2G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_r26_s32_384 | 2022-12-23 |
 | [vit_tiny_patch16_384.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_tiny_patch16_384.augreg_in21k_ft_in1k) | 5.7M | 3.2G | 384 | True | 192 | 1000 | imagenet-1k | VisionTransformer | vit_tiny_patch16_384 | 2022-12-22 |
 | [vit_small_patch32_224.augreg_in21k](https://huggingface.co/timm/vit_small_patch32_224.augreg_in21k) | 30.9M | 1.1G | 224 | True | 384 | 21843 | imagenet-21k-goog | VisionTransformer | vit_small_patch32_224 | 2022-12-22 |
+| [vit_small_patch32_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_patch32_224.augreg_in21k_ft_in1k) | 22.9M | 1.1G | 224 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch32_224 | 2022-12-22 |
 | [vit_small_r26_s32_224.augreg_in21k](https://huggingface.co/timm/vit_small_r26_s32_224.augreg_in21k) | 30.5M | 1.1G | 224 | True | 384 | 21843 | imagenet-21k-goog | VisionTransformer | vit_small_r26_s32_224 | 2022-12-23 |
 | [vit_small_r26_s32_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_r26_s32_224.augreg_in21k_ft_in1k) | 22.5M | 1.1G | 224 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_r26_s32_224 | 2022-12-23 |
 | [vit_tiny_patch16_224.augreg_in21k](https://huggingface.co/timm/vit_tiny_patch16_224.augreg_in21k) | 9.7M | 1.1G | 224 | True | 192 | 21843 | imagenet-21k-goog | VisionTransformer | vit_tiny_patch16_224 | 2022-12-22 |
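Each exported model in the table is paired with a `preprocess.json` describing its input pipeline. As an illustration only (the mean/std constants below are timm's common ImageNet defaults, assumed here; the real values for any given model live in its exported `preprocess.json`), a 224×224 input for a model such as `vit_small_patch32_224.augreg_in21k_ft_in1k` could be prepared roughly like this:

```python
import numpy as np

# Assumed normalization constants (timm's usual ImageNet defaults);
# the actual values ship in the model's preprocess.json.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8):
    """Scale an HxWx3 uint8 image to [0, 1], normalize per channel,
    and reorder to the NCHW batch layout ONNX vision models typically take."""
    x = image_hwc_uint8.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return np.transpose(x, (2, 0, 1))[np.newaxis, ...]

batch = preprocess(np.zeros((224, 224, 3), dtype=np.uint8))
print(batch.shape)  # (1, 3, 224, 224)
```

The resulting `(1, 3, 224, 224)` tensor matches the 224 input size listed for that row of the table.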
models.parquet CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8a5ef765ec5d4c56a72908c681e0ecdc12eb64f5fbb716c4e3b98678a5c052ef
+size 51949
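`models.parquet` is the machine-readable index behind the README tables. Assuming it carries the same columns as the table above (`Name`, `Model`, `Classes`, and so on; the exact column names are an assumption here, not confirmed by this commit), it can be filtered with pandas. A tiny stand-in frame is used below in place of `pd.read_parquet("models.parquet")`:

```python
import pandas as pd

# Stand-in for: df = pd.read_parquet("models.parquet")
# Two rows copied from the README table, with assumed column names.
df = pd.DataFrame(
    {
        "Name": [
            "vit_small_patch32_224.augreg_in21k_ft_in1k",
            "vit_tiny_patch16_224.augreg_in21k",
        ],
        "Model": ["VisionTransformer", "VisionTransformer"],
        "Classes": [1000, 21843],
    }
)

# Select the ImageNet-1k fine-tuned checkpoints of a given model class.
vits_1k = df[(df["Model"] == "VisionTransformer") & (df["Classes"] == 1000)]
print(vits_1k["Name"].tolist())  # ['vit_small_patch32_224.augreg_in21k_ft_in1k']
```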
vit_small_patch32_224.augreg_in21k_ft_in1k/meta.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d53cd058385a0743df4b7be7617f72a50b1b17147ae8a26bdbf0266c0b6e853
+size 169859
vit_small_patch32_224.augreg_in21k_ft_in1k/model.onnx ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e9c71d285399b0da098ac706897069936681b3f2f63dd5f17255ad244872675
+size 91687384
vit_small_patch32_224.augreg_in21k_ft_in1k/preprocess.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c033d894bbd7f7cf6880042701e974ed810733c52b2db8b094efeebf78fed2
+size 642
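The ADDED files above are Git LFS pointers, not the payloads themselves: each records the spec version, a `sha256` oid, and the payload size in bytes. A minimal sketch of parsing such a pointer and checking downloaded bytes against it (the helper names are illustrative, not part of any LFS tooling):

```python
import hashlib

def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its 'key value' fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_lfs_object(data, pointer):
    """True if raw bytes match the oid and size recorded in the pointer."""
    algo, _, digest = pointer["oid"].partition(":")
    return (
        algo == "sha256"
        and len(data) == int(pointer["size"])
        and hashlib.sha256(data).hexdigest() == digest
    )

# The meta.json pointer added by this commit:
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:6d53cd058385a0743df4b7be7617f72a50b1b17147ae8a26bdbf0266c0b6e853\n"
    "size 169859"
)
print(pointer["size"])  # 169859
```

Cloning the repo with `git lfs` resolves these pointers to the real `model.onnx`, `meta.json`, and `preprocess.json` payloads automatically.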