narugo1992 committed
Commit 14bb8c1 · verified · 1 Parent(s): 5b479a0

Export model 'vit_medium_patch16_reg1_gap_256.sbb_in1k', on 2025-01-20 05:36:35 UTC
README.md CHANGED
@@ -16,7 +16,6 @@ base_model:
  - timm/convnext_nano.r384_ad_in12k
  - timm/convnext_nano.r384_in12k_ft_in1k
  - timm/convnext_zepto_rms.ra4_e3600_r224_in1k
- - timm/dla102x.in1k
  - timm/efficientvit_b1.r256_in1k
  - timm/efficientvit_b2.r256_in1k
  - timm/efficientvit_l1.r224_in1k
@@ -98,6 +97,7 @@ base_model:
  - timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k
  - timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k
  - timm/vit_little_patch16_reg4_gap_256.sbb_in1k
+ - timm/vit_medium_patch16_reg1_gap_256.sbb_in1k
  - timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k
  - timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k
  - timm/vit_small_patch14_dinov2.lvd142m
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).

  # Models

- 279 models exported from TIMM in total.
+ 280 models exported from TIMM in total.

  ## Beit

@@ -681,7 +681,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).

  ## VisionTransformer

- 35 models with model class `VisionTransformer`.
+ 36 models with model class `VisionTransformer`.

  | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
  |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:-------------|:------------------|:---------------------------------|:-------------|
@@ -707,6 +707,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
  | [vit_base_patch32_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k) | 88.2M | 12.7G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_384 | 2022-11-05 |
  | [vit_small_patch16_384.augreg_in1k](https://huggingface.co/timm/vit_small_patch16_384.augreg_in1k) | 22.0M | 12.4G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_384 | 2022-12-22 |
  | [vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 38.7M | 9.9G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_reg4_gap_256 | 2024-05-20 |
+ | [vit_medium_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_medium_patch16_reg1_gap_256.sbb_in1k) | 38.7M | 9.8G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_reg1_gap_256 | 2024-05-10 |
  | [vit_medium_patch16_gap_256.sw_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_gap_256.sw_in12k_ft_in1k) | 38.7M | 9.8G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_gap_256 | 2022-12-02 |
  | [deit3_medium_patch16_224.fb_in1k](https://huggingface.co/timm/deit3_medium_patch16_224.fb_in1k) | 38.7M | 7.5G | 224 | True | 512 | 1000 | imagenet-1k | VisionTransformer | deit3_medium_patch16_224 | 2023-03-28 |
  | [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 22.4M | 5.7G | 256 | True | 320 | 1000 | imagenet-1k | VisionTransformer | vit_little_patch16_reg4_gap_256 | 2024-05-10 |
models.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6ccc489069acdb46891afa9508412506e41384d6d5e0d23be9493fc4ceb65ced
- size 25491
+ oid sha256:1b6931710001b828c647a7eaef27c0d683898910f9095974178cd0a8653759aa
+ size 25531
vit_medium_patch16_reg1_gap_256.sbb_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b081cf3553627d5b1aac829ee200bd1ab631f3f6152e633cb18e3a78b41aecc
+ size 169865
vit_medium_patch16_reg1_gap_256.sbb_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df7d09b1729e6cff873cd52eb6b939848a5ee1fa2bb4f7ef285cf64b320bb3ae
+ size 155752268
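The `model.onnx` file added above is the complete exported network, so it can be run directly with `onnxruntime`. A minimal sketch follows; the file path comes from this commit, but the preprocessing details (ImageNet mean/std normalization, NCHW layout) are assumptions based on typical timm exports, not values read from this repo's `preprocess.json`:

```python
# Hedged sketch of running the exported ONNX model with onnxruntime.
# The normalization constants are the usual ImageNet values (an assumption,
# not taken from this commit's preprocess.json).
import numpy as np


def preprocess(image: np.ndarray) -> np.ndarray:
    """Convert an HWC uint8 image to an NCHW float32 batch of size 1."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = image.astype(np.float32) / 255.0
    x = (x - mean) / std                  # per-channel normalization
    return x.transpose(2, 0, 1)[None]     # HWC -> NCHW, add batch dim


def run_model(image: np.ndarray,
              model_path: str = "vit_medium_patch16_reg1_gap_256.sbb_in1k/model.onnx") -> np.ndarray:
    """Run one image through the exported model; requires the LFS file locally."""
    import onnxruntime as ort  # pip install onnxruntime
    sess = ort.InferenceSession(model_path)
    input_name = sess.get_inputs()[0].name
    (output,) = sess.run(None, {input_name: preprocess(image)})
    return output
```

Per the table above, the model takes 256×256 inputs and has a 1000-class imagenet-1k head, so `run_model` on a `(256, 256, 3)` image should yield a `(1, 1000)` logits array.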
vit_medium_patch16_reg1_gap_256.sbb_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4c3ceed99ae49aa023c8c7cbc224be5ba023f409004ac0091862f1e398d784e
+ size 642
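Each of the files added in this commit is stored via Git LFS, so what the diff actually shows is a small pointer file: a `version` line naming the LFS spec, the `oid` (SHA-256 of the real content), and its `size` in bytes. A minimal sketch of parsing that format (an illustrative helper, not part of the official git-lfs tooling):

```python
# Minimal sketch: parse a Git LFS pointer file (spec v1) into its fields.
# Illustrative only -- real git-lfs clients also validate key order and format.

def parse_lfs_pointer(text: str) -> dict:
    """Split the 'key value' lines of an LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer content as it appears in this commit for models.parquet.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1b6931710001b828c647a7eaef27c0d683898910f9095974178cd0a8653759aa
size 25531
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # -> sha256:1b6931710001b828c647a7eaef27c0d683898910f9095974178cd0a8653759aa
print(int(info["size"]))  # -> 25531
```

This is why the diff for the 155 MB `model.onnx` is only three lines: git tracks the pointer, while the blob itself lives in LFS storage keyed by its `oid`.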