narugo1992 committed:
Export model 'vit_small_patch16_224.augreg_in21k', on 2025-01-20 06:31:18 UTC
README.md CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+382 models exported from TIMM in total.
 
 ## Beit
 
@@ -793,7 +793,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+50 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -839,6 +839,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch32_clip_224.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_224.laion2b_ft_in12k_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_224 | 2022-11-05 |
 | [vit_base_patch32_224.sam_in1k](https://huggingface.co/timm/vit_base_patch32_224.sam_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
 | [vit_base_patch32_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch32_224.augreg_in21k_ft_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
+| [vit_small_patch16_224.augreg_in21k](https://huggingface.co/timm/vit_small_patch16_224.augreg_in21k) | 30.0M | 4.3G | 224 | True | 384 | 21843 | imagenet-21k-goog | VisionTransformer | vit_small_patch16_224 | 2022-12-22 |
 | [vit_small_patch16_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_patch16_224.augreg_in21k_ft_in1k) | 22.0M | 4.2G | 224 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_224 | 2022-12-22 |
 | [vit_small_patch16_224.augreg_in1k](https://huggingface.co/timm/vit_small_patch16_224.augreg_in1k) | 22.0M | 4.2G | 224 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_224 | 2022-12-22 |
 | [vit_small_patch16_224.dino](https://huggingface.co/timm/vit_small_patch16_224.dino) | 21.6M | 4.2G | 224 | False | 384 | 384 | | VisionTransformer | vit_small_patch16_224 | 2022-12-22 |
models.parquet CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9e05392d0d67984de21c86032d2f578af095fceb78d7ee6aed44c8940db7d386
+size 30565
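The refreshed models.parquet is the machine-readable index behind the README tables. A hedged sketch of querying it: `pandas` (with a parquet engine such as pyarrow) is an assumption, as are the exact column names, which are taken from the README table in this diff; a two-row stand-in keeps the example runnable without a checkout.

```python
import pandas as pd

def models_for_architecture(df: pd.DataFrame, arch: str) -> pd.DataFrame:
    """Filter the model index down to a single TIMM architecture."""
    return df[df["Architecture"] == arch]

# With a local clone one would load the real index via
# pd.read_parquet("models.parquet"); this miniature mimics its columns:
index = pd.DataFrame(
    {
        "Name": [
            "vit_small_patch16_224.augreg_in21k",
            "vit_base_patch32_224.sam_in1k",
        ],
        "Architecture": ["vit_small_patch16_224", "vit_base_patch32_224"],
        "Classes": [21843, 1000],
    }
)
small = models_for_architecture(index, "vit_small_patch16_224")
```

The same filter applied to the real index would return all `vit_small_patch16_224` variants listed in the README, including the checkpoint added by this commit.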
vit_small_patch16_224.augreg_in21k/meta.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba2f11fcd9189a46d18f3f6a920f8af811a5f95b06156adf334d95fa2edf8ee6
+size 3714054
vit_small_patch16_224.augreg_in21k/model.onnx ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e934aa6716763ca516740e6f6cd84a696382e168a1abbae5dffc47d549034a8a
+size 120473083
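model.onnx appears in the diff only as its LFS pointer (about 120 MB once fetched). A hedged sketch of running the exported classifier with `onnxruntime`: the mean/std constants and the assumption that the input is a 224x224 NCHW float batch are illustrative defaults, not read from the repo — the authoritative values ship in preprocess.json and the ONNX graph itself.

```python
import numpy as np

# Assumed ImageNet normalization; the real constants live in preprocess.json.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb: np.ndarray) -> np.ndarray:
    """HWC uint8 image, already resized to 224x224 -> NCHW float32 batch."""
    x = rgb.astype(np.float32) / 255.0
    x = (x - MEAN) / STD                     # broadcasts over the channel axis
    return x.transpose(2, 0, 1)[np.newaxis, ...]

def classify(model_path: str, rgb: np.ndarray) -> np.ndarray:
    """Return raw logits (21843 classes for this in21k checkpoint)."""
    import onnxruntime as ort                # deferred: preprocess() works without it
    sess = ort.InferenceSession(model_path)
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: preprocess(rgb)})[0]
```

In practice the model file would first be fetched from the Hub (e.g. with `huggingface_hub`) rather than read from a local path.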
vit_small_patch16_224.augreg_in21k/preprocess.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c033d894bbd7f7cf6880042701e974ed810733c52b2db8b094efeebf78fed2
+size 642
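Every file in this commit is stored through Git LFS, so the diff shows only three-line pointer files (`version`, `oid`, `size`) rather than the binary payloads. A minimal sketch of reading such a pointer, using the model.onnx pointer above as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer added for vit_small_patch16_224.augreg_in21k/model.onnx:
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:e934aa6716763ca516740e6f6cd84a696382e168a1abbae5dffc47d549034a8a\n"
    "size 120473083\n"
)
info = parse_lfs_pointer(pointer)
size_mb = int(info["size"]) / 1e6   # the ONNX graph is roughly 120 MB
```

The `oid` field is the SHA-256 of the actual payload, which is what `git lfs` (or the Hub's resolve endpoint) uses to fetch and verify the real file.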