narugo1992 committed
Commit 913e104 · verified · 1 Parent(s): db69100

Export model 'vit_small_patch16_384.augreg_in21k_ft_in1k', on 2025-01-20 05:40:26 UTC

README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  # Models
 
- 287 models exported from TIMM in total.
+ 288 models exported from TIMM in total.
 
  ## Beit
 
@@ -694,7 +694,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  ## VisionTransformer
 
- 37 models with model class `VisionTransformer`.
+ 38 models with model class `VisionTransformer`.
 
  | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
  |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -718,6 +718,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
  | [vit_base_patch16_siglip_gap_224.webli](https://huggingface.co/timm/vit_base_patch16_siglip_gap_224.webli) | 85.6M | 16.8G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_gap_224 | 2024-12-24 |
  | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 60.2M | 15.3G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg1_gap_256 | 2024-05-10 |
  | [vit_base_patch32_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k) | 88.2M | 12.7G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_384 | 2022-11-05 |
+ | [vit_small_patch16_384.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_patch16_384.augreg_in21k_ft_in1k) | 22.0M | 12.4G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_384 | 2022-12-22 |
  | [vit_small_patch16_384.augreg_in1k](https://huggingface.co/timm/vit_small_patch16_384.augreg_in1k) | 22.0M | 12.4G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_384 | 2022-12-22 |
  | [vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 38.7M | 9.9G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_reg4_gap_256 | 2024-05-20 |
  | [vit_medium_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg1_gap_256.sbb_in1k) | 38.7M | 9.8G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_reg1_gap_256 | 2024-05-10 |
models.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:31f4390f60b9a54f83b518ac75ea6cf2f35b92974a94d6a5cf2e6922884cbde7
- size 25901
+ oid sha256:abc0cd1623b5cce31d9e2722617aa6bd7a431d7aa3c9955a3f0d43d63c7d67eb
+ size 25935
vit_small_patch16_384.augreg_in21k_ft_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eb0a1e835216bac397a156439c2462119a359f1b3a8e42eee1fd8e1ba3563b82
+ size 169860
vit_small_patch16_384.augreg_in21k_ft_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78af220b0397f09436adc9532e55454cfa93c58a8bfd9a8b6906890622b0d5bf
+ size 88958535
vit_small_patch16_384.augreg_in21k_ft_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a08b6782724d3a3274363ac60488e44bc3f5df57b1c3d6ae9caad44d03e625e
+ size 642
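
The files added in this commit are stored via Git LFS, so the diff shows only pointer files (a `version` line, a `sha256` object ID, and a byte size) rather than the binary contents. A minimal sketch of reading that three-line pointer format with the Python standard library, using one of the pointers above as input (the helper name `parse_lfs_pointer` is ours, not part of git-lfs):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "key value", e.g. "size 88958535".
        key, _, value = line.partition(" ")
        fields[key] = value
    if not fields.get("version", "").startswith("https://git-lfs.github.com/spec/"):
        raise ValueError("not a Git LFS pointer")
    # The oid field combines the hash algorithm and the digest, e.g. "sha256:78af...".
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

# The pointer for vit_small_patch16_384.augreg_in21k_ft_in1k/model.onnx above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:78af220b0397f09436adc9532e55454cfa93c58a8bfd9a8b6906890622b0d5bf
size 88958535
"""
info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])  # sha256 88958535
```

The `size` field is the length of the real object (here roughly 89 MB for the ONNX model), which is why the tracked file in the repository stays a few bytes long.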