narugo1992 committed
Commit 3d18d69 · verified · 1 parent: 8dd5463

Export model 'vit_base_patch16_384.orig_in21k_ft_in1k', on 2025-01-20 06:12:08 UTC

README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-341 models exported from TIMM in total.
+342 models exported from TIMM in total.
 
 ## Beit
 
@@ -762,7 +762,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-42 models with model class `VisionTransformer`.
+43 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -772,6 +772,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch16_clip_384.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.openai_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-30 |
 | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
 | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
+| [vit_base_patch16_384.orig_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch16_384.orig_in21k_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_384 | 2022-12-22 |
 | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 64.0M | 36.8G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_384 | 2024-08-21 |
 | [vit_small_patch14_dinov2.lvd142m](https://huggingface.co/timm/vit_small_patch14_dinov2.lvd142m) | 21.5M | 29.5G | 518 | False | 384 | 384 | | VisionTransformer | vit_small_patch14_dinov2 | 2023-05-09 |
 | [vit_base_patch16_siglip_256.webli_i18n](https://huggingface.co/timm/vit_base_patch16_siglip_256.webli_i18n) | 92.7M | 22.2G | 256 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_256 | 2024-12-24 |
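The table rows above describe 384×384 classifiers; before inference, an image must be converted to the layout the exported graph expects. The sketch below shows a common NumPy-only preprocessing step. The exact resize mode, mean, and std for this model live in its `preprocess.json`; the ImageNet defaults used here are an assumption, not values read from that file.

```python
import numpy as np

# Assumed ImageNet normalization constants; the model's real values are
# stored in its preprocess.json and may differ.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Map an HxWx3 uint8 image to a 1x3xHxW float32 batch."""
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                            # normalize per channel
    x = np.transpose(x, (2, 0, 1))                  # HWC -> CHW
    return x[np.newaxis, ...]                       # add batch dimension

img = np.zeros((384, 384, 3), dtype=np.uint8)       # placeholder 384x384 image
batch = preprocess(img)
print(batch.shape)  # prints (1, 3, 384, 384)
```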
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:41c0fda412b423320f936d7f537cffc0ccffa46c6198da267c9260d1a2b64ec1
-size 28543
+oid sha256:f98fa04498555b7ddfea3dfeb7670778896415cc836fd7d7ac41be822f4cab17
+size 28577
vit_base_patch16_384.orig_in21k_ft_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45645cffa6267a3e3b5f92544d5315ae2017e4d7ff34d81055d4089c06cd7763
+size 169853
vit_base_patch16_384.orig_in21k_ft_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37e8377caf4ee232c856c68635ddc0553cd789ed2c21e751a370cf09201e1266
+size 347610236
vit_base_patch16_384.orig_in21k_ft_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a08b6782724d3a3274363ac60488e44bc3f5df57b1c3d6ae9caad44d03e625e
+size 642
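The added files above are Git LFS pointers rather than the binaries themselves: three `key value` lines giving the spec version, a sha256 object id, and the byte size of the real file. A minimal parser for this format, using one of the pointers from this commit as input, might look like:

```python
# Minimal sketch: parsing a Git LFS pointer file (three "key value" lines,
# as in the files added by this commit).
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content taken verbatim from model.onnx in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:37e8377caf4ee232c856c68635ddc0553cd789ed2c21e751a370cf09201e1266
size 347610236
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # prints 347610236
```

The `size` field here shows the actual `model.onnx` weighs about 347 MB, which is why only the pointer is stored in Git.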