narugo1992 committed
Commit 49f3640 · verified · 1 Parent(s): 5424d77

Export model 'vit_base_patch32_384.augreg_in21k_ft_in1k', on 2025-01-20 20:46:22 JST

README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-897 models exported from TIMM in total.
+898 models exported from TIMM in total.
 
 ## Beit
 
@@ -1237,7 +1237,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-108 models with model class `VisionTransformer`.
+109 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -1302,6 +1302,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 60.2M | 15.3G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg1_gap_256 | 2024-05-11 |
 | [vit_base_patch32_clip_384.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_384.openai_ft_in12k_in1k) | 88.2M | 12.7G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_384 | 2022-11-11 |
 | [vit_base_patch32_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k) | 88.2M | 12.7G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_384 | 2022-11-06 |
+| [vit_base_patch32_384.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch32_384.augreg_in21k_ft_in1k) | 88.2M | 12.7G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_384 | 2022-12-22 |
 | [deit3_small_patch16_384.fb_in22k_ft_in1k](https://huggingface.co/timm/deit3_small_patch16_384.fb_in22k_ft_in1k) | 22.0M | 12.4G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | deit3_small_patch16_384 | 2023-03-28 |
 | [deit3_small_patch16_384.fb_in1k](https://huggingface.co/timm/deit3_small_patch16_384.fb_in1k) | 22.0M | 12.4G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | deit3_small_patch16_384 | 2023-03-28 |
 | [vit_small_patch16_384.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_small_patch16_384.augreg_in21k_ft_in1k) | 22.0M | 12.4G | 384 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_384 | 2022-12-22 |
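The table rows above all describe 384-px classifiers with 1000 ImageNet-1k output classes. As a hedged sketch of what feeding one of these exports would involve (this is not the repo's own pipeline — the mean/std values below are an assumption typical of TIMM's augreg ViTs; the model's actual values live in its `preprocess.json`):

```python
import numpy as np

# Prepare an input tensor for a 384-px ONNX export such as
# vit_base_patch32_384.augreg_in21k_ft_in1k: one RGB image,
# channels-first, float32, normalized. The mean/std here are the
# (0.5, 0.5, 0.5) values common to TIMM augreg ViTs — an assumption;
# check the model's preprocess.json for the real ones.
MEAN = np.array([0.5, 0.5, 0.5], dtype=np.float32)
STD = np.array([0.5, 0.5, 0.5], dtype=np.float32)

def to_input_tensor(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """HWC uint8 (384, 384, 3) -> NCHW float32 (1, 3, 384, 384)."""
    x = image_hwc_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - MEAN) / STD                              # normalize per channel
    return x.transpose(2, 0, 1)[np.newaxis, ...]      # HWC -> NCHW, add batch dim

# Random stand-in for a decoded 384x384 RGB image.
image = np.random.randint(0, 256, size=(384, 384, 3), dtype=np.uint8)
batch = to_input_tensor(image)

# With onnxruntime, this tensor would feed the exported graph, roughly:
#   sess = onnxruntime.InferenceSession("model.onnx")
#   logits, = sess.run(None, {sess.get_inputs()[0].name: batch})
```

The resulting tensor has shape `(1, 3, 384, 384)`, matching the 384 "Input Size" column; the graph's output would then have one logit per entry in the 1000-class "Classes" column.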
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1c95d42841a108a4af95c51655376841c8bb4ea5d6ab874062f772d7947bc4f5
-size 51735
+oid sha256:f619d68f7d25d40d28b9911069d4b2589864d5db3ad6ddbfc83ad1b214811bbf
+size 51763
vit_base_patch32_384.augreg_in21k_ft_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4de8122a7da84758ae60a843aa1665d68b92e24875f21abbe50d7523fdbbefa3
+size 169857
vit_base_patch32_384.augreg_in21k_ft_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c581a2b0138231cc263ebbb6b82d461323cd4cccc19cdd730dba897e00e53d34
+size 353361020
vit_base_patch32_384.augreg_in21k_ft_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a08b6782724d3a3274363ac60488e44bc3f5df57b1c3d6ae9caad44d03e625e
+size 642
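Each binary artifact in this commit is stored as a three-line Git LFS pointer stub rather than the blob itself; the actual bytes (e.g. the ~353 MB `model.onnx`) are fetched by LFS using the `oid` and `size` fields. A minimal sketch of parsing such a pointer — the helper name `parse_lfs_pointer` is illustrative, not part of any tool used here:

```python
import re

# A Git LFS pointer file has exactly this shape: a spec version line,
# a sha256 oid of the real blob, and the blob's size in bytes.
POINTER_RE = re.compile(
    r"version https://git-lfs\.github\.com/spec/v1\n"
    r"oid sha256:(?P<oid>[0-9a-f]{64})\n"
    r"size (?P<size>\d+)\n?"
)

def parse_lfs_pointer(text: str) -> tuple[str, int]:
    """Return (oid, size) from an LFS pointer file's text."""
    m = POINTER_RE.match(text)
    if m is None:
        raise ValueError("not a Git LFS pointer")
    return m.group("oid"), int(m.group("size"))

# The pointer stored for models.parquet in this commit:
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f619d68f7d25d40d28b9911069d4b2589864d5db3ad6ddbfc83ad1b214811bbf\n"
    "size 51763\n"
)
oid, size = parse_lfs_pointer(pointer)
```

The `oid` is the sha256 of the real file's contents, which is why a one-row change to `models.parquet` rewrites both the `oid` and `size` lines in the diff above.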