narugo1992 committed on 2025-01-20 05:35:21 UTC

Export model 'vit_medium_patch16_gap_256.sw_in12k_ft_in1k'
README.md
CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+276 models exported from TIMM in total.
 
 ## Beit
 
@@ -679,7 +679,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+35 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -705,6 +705,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch32_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k) | 88.2M | 12.7G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_384 | 2022-11-05 |
 | [vit_small_patch16_384.augreg_in1k](https://huggingface.co/timm/vit_small_patch16_384.augreg_in1k) | 22.0M | 12.4G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_384 | 2022-12-22 |
 | [vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 38.7M | 9.9G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_reg4_gap_256 | 2024-05-20 |
+| [vit_medium_patch16_gap_256.sw_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_gap_256.sw_in12k_ft_in1k) | 38.7M | 9.8G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_gap_256 | 2022-12-02 |
 | [deit3_medium_patch16_224.fb_in1k](https://huggingface.co/timm/deit3_medium_patch16_224.fb_in1k) | 38.7M | 7.5G | 224 | True | 512 | 1000 | imagenet-1k | VisionTransformer | deit3_medium_patch16_224 | 2023-03-28 |
 | [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 22.4M | 5.7G | 256 | True | 320 | 1000 | imagenet-1k | VisionTransformer | vit_little_patch16_reg4_gap_256 | 2024-05-10 |
 | [vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k) | 22.4M | 5.7G | 256 | True | 320 | 1000 | imagenet-1k | VisionTransformer | vit_little_patch16_reg1_gap_256 | 2024-05-27 |
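The rows above are also mirrored in the repository's `models.parquet` index. As an illustration only, the same kind of lookup can be sketched in pure Python against a few rows transcribed from the table (the dict keys are hypothetical; the full index has 276 entries):

```python
# A few VisionTransformer rows transcribed from the README table above.
models = [
    {"name": "vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k", "flops_g": 9.9, "input_size": 256},
    {"name": "vit_medium_patch16_gap_256.sw_in12k_ft_in1k",       "flops_g": 9.8, "input_size": 256},
    {"name": "deit3_medium_patch16_224.fb_in1k",                  "flops_g": 7.5, "input_size": 224},
    {"name": "vit_little_patch16_reg4_gap_256.sbb_in1k",          "flops_g": 5.7, "input_size": 256},
]

# The table is ordered by FLOPs descending; the newly exported 9.8G model
# slots between the 9.9G reg4 variant and the 7.5G deit3 variant.
at_256 = sorted(
    (m for m in models if m["input_size"] == 256),
    key=lambda m: m["flops_g"],
    reverse=True,
)
```

This mirrors the placement of the new row in the diff: sorted by FLOPs among the 256-pixel models, `vit_medium_patch16_gap_256.sw_in12k_ft_in1k` lands in second position.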
models.parquet
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:81fd1fb386b61ec32e316abf4b8aa0a34a3280c5105e47a144c57c750b120b09
+size 25409
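The binary files in this commit are tracked with Git LFS, so the repository stores only a three-line pointer (the `version` line, the `oid sha256:` digest of the payload, and its `size` in bytes) rather than the file itself. A minimal sketch of how such a pointer is derived from a local file (the helper name is hypothetical, not part of this repo):

```python
import hashlib
from pathlib import Path


def lfs_pointer(path: str) -> str:
    """Build a Git LFS v1 pointer for a local file: the sha256 digest of
    the raw bytes plus the byte count, in the three-line format above."""
    data = Path(path).read_bytes()
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )
```

Re-hashing a downloaded `models.parquet` and comparing the digest against the pointer's `oid` is a quick integrity check.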
vit_medium_patch16_gap_256.sw_in12k_ft_in1k/meta.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:920b63b8896d6d5145d5a532ecc1ae7bea39e489cbda9b239be13093123f80a3
+size 169866
vit_medium_patch16_gap_256.sw_in12k_ft_in1k/model.onnx
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b14bedd7995ee2669f8ae3e3a9ac0acdd10a74eabe96717d42e5d7a0338811f
+size 155618857
vit_medium_patch16_gap_256.sw_in12k_ft_in1k/preprocess.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4c3ceed99ae49aa023c8c7cbc224be5ba023f409004ac0091862f1e398d784e
+size 642
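The commit stores only the LFS pointer for `preprocess.json`, not its contents. As an illustration only: timm classification exports typically scale pixels to [0, 1] and then normalize per channel; the mean/std below are the standard ImageNet statistics and are an assumption about, not a dump of, this file:

```python
# Assumed ImageNet normalization constants (NOT read from preprocess.json).
MEAN = (0.485, 0.456, 0.406)
STD = (0.229, 0.224, 0.225)


def normalize(pixels):
    """Map an HxWx3 image of 0-255 values (nested lists) to normalized
    floats: each channel value v becomes ((v / 255) - mean) / std."""
    return [
        [
            [((v / 255.0) - MEAN[c]) / STD[c] for c, v in enumerate(px)]
            for px in row
        ]
        for row in pixels
    ]
```

Per the README table, this export's input size is 256, so a real pipeline would resize/crop to 256x256 before this normalization step.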