narugo1992 committed (verified)
Commit 4132e0f · Parent(s): 756b327

Export model 'vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k', on 2025-01-20 06:57:56 UTC
README.md CHANGED

@@ -53,7 +53,6 @@ base_model:
  - timm/nextvit_base.bd_in1k
  - timm/nextvit_small.bd_in1k_384
  - timm/nextvit_small.bd_ssld_6m_in1k
- - timm/poolformerv2_s12.sail_in1k
  - timm/repghostnet_050.in1k
  - timm/repghostnet_058.in1k
  - timm/repghostnet_080.in1k
@@ -92,6 +91,7 @@ base_model:
  - timm/vit_base_patch16_siglip_gap_384.webli
  - timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k
  - timm/vit_betwixt_patch16_reg4_gap_256.sbb_in1k
+ - timm/vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k
  - timm/vit_little_patch16_reg1_gap_256.sbb_in12k
  - timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k
  - timm/vit_little_patch16_reg4_gap_256.sbb_in1k
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  # Models
 
- 431 models exported from TIMM in total.
+ 432 models exported from TIMM in total.
 
  ## Beit
 
@@ -835,7 +835,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  ## VisionTransformer
 
- 54 models with model class `VisionTransformer`.
+ 55 models with model class `VisionTransformer`.
 
  | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
  |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -866,6 +866,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
  | [vit_base_patch16_rpn_224.sw_in1k](https://huggingface.co/timm/vit_base_patch16_rpn_224.sw_in1k) | 86.4M | 16.8G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_rpn_224 | 2022-12-22 |
  | [vit_base_patch16_siglip_gap_224.webli](https://huggingface.co/timm/vit_base_patch16_siglip_gap_224.webli) | 85.6M | 16.8G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_gap_224 | 2024-12-24 |
  | [vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_256.sbb2_e200_in12k_ft_in1k) | 64.0M | 16.5G | 256 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_256 | 2024-08-21 |
+ | [vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k) | 60.2M | 15.5G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg4_gap_256 | 2024-05-10 |
  | [vit_betwixt_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg4_gap_256.sbb_in1k) | 60.2M | 15.5G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg4_gap_256 | 2024-05-10 |
  | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 60.2M | 15.3G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg1_gap_256 | 2024-05-10 |
  | [vit_base_patch32_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k) | 88.2M | 12.7G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_384 | 2022-11-05 |
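The newly exported model takes a 256×256 input, per the table above, and ships with a `preprocess.json` describing its input pipeline. As a minimal sketch of what feeding such an ONNX export typically involves — using the standard timm ImageNet mean/std defaults as assumptions, since the authoritative values live in each model's `preprocess.json`:

```python
import numpy as np

# ImageNet normalization constants -- timm's usual defaults; treat these as
# assumptions, the exported preprocess.json is the source of truth.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def to_model_input(img_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an already-resized HxWx3 uint8 RGB image into the
    1x3xHxW float32 batch layout an ONNX vision model expects."""
    x = img_hwc_uint8.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return x.transpose(2, 0, 1)[None]  # HWC -> CHW, then add batch dim

# Example: a dummy 256x256 RGB frame, matching the model's 256 input size
dummy = np.zeros((256, 256, 3), dtype=np.uint8)
batch = to_model_input(dummy)
print(batch.shape)  # (1, 3, 256, 256)
```

The resulting `batch` would then be passed to an `onnxruntime.InferenceSession` opened on the repository's `model.onnx`; resizing/cropping to 256 beforehand is left out here for brevity.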
models.parquet CHANGED

@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:04c0fd7efba97ca7d7e0fc94a0029ce91c61f2f415befba9f4fba84300307a0d
- size 32660
+ oid sha256:50776db5d7069f07a750540ab52af74e31aad9a265092ac55ead0fe720709bc0
+ size 32681
vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k/meta.json ADDED

@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:798f8c992da815d1fd950c8dfa37ca6fa42306e7360cf36ae786bf7b1b45d921
+ size 169887
vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k/model.onnx ADDED

@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f72b4c49b85c481244209f78510f75e4201ec1e780077ad8a525552af4d729a
+ size 241773579
vit_betwixt_patch16_reg4_gap_256.sbb_in12k_ft_in1k/preprocess.json ADDED

@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4c3ceed99ae49aa023c8c7cbc224be5ba023f409004ac0091862f1e398d784e
+ size 642