narugo1992 committed commit b55d22d (verified, parent b217644):

Export model 'vit_medium_patch16_gap_384.sw_in12k_ft_in1k', on 2025-01-20 04:09:20 UTC
README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-106 models exported from TIMM in total.
+107 models exported from TIMM in total.
 
 ## ByobNet
 
@@ -416,7 +416,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-7 models with model class `VisionTransformer`.
+8 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------|:------------------|:---------------------------------|:-------------|
@@ -424,6 +424,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
 | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
 | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 64.0M | 36.8G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_384 | 2024-08-21 |
+| [vit_medium_patch16_gap_384.sw_in12k_ft_in1k](https://huggingface.co/timm/vit_medium_patch16_gap_384.sw_in12k_ft_in1k) | 38.7M | 22.0G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_medium_patch16_gap_384 | 2022-12-02 |
 | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 60.2M | 15.3G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg1_gap_256 | 2024-05-10 |
 | [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 22.4M | 5.7G | 256 | True | 320 | 1000 | imagenet-1k | VisionTransformer | vit_little_patch16_reg4_gap_256 | 2024-05-10 |
 | [flexivit_small.600ep_in1k](https://huggingface.co/timm/flexivit_small.600ep_in1k) | 22.0M | 4.9G | 240 | True | 384 | 1000 | imagenet-1k | VisionTransformer | flexivit_small | 2022-12-22 |
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9951045b7e362f15df7aa62f7923798bfc8d61a1deda6e09ebf18d8cebcf700b
-size 15740
+oid sha256:0207f791b93e6938758034659afc632399836d52cb347a64d6569a6a267cad10
+size 15809
vit_medium_patch16_gap_384.sw_in12k_ft_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e07ed71f01100ee43daba30750ed67f0a2be0273c2c7a5ab24afa911a3f6ada0
+size 169867
vit_medium_patch16_gap_384.sw_in12k_ft_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a14b7a9590a98f9c1ae3f2ce06d14ca38ab8faad1816dcd0ee05ff30ab92add
+size 156274217
vit_medium_patch16_gap_384.sw_in12k_ft_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd9cd9922b0941bf75a8cb381ae002aba1a0dc3d1c66334f434df8bec4c70d87
+size 695
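Every file touched by this commit is stored as a git-LFS pointer (a three-line `version`/`oid`/`size` stub) rather than as raw content. Such pointers are easy to read programmatically; below is a minimal sketch in Python, using the pointer text of the `meta.json` entry above (`parse_lfs_pointer` is a hypothetical helper written for illustration, not part of any library):

```python
# Parse a git-LFS pointer file of the kind shown in this diff.
# The pointer text is copied verbatim from the meta.json entry above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:e07ed71f01100ee43daba30750ed67f0a2be0273c2c7a5ab24afa911a3f6ada0
size 169867
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict entry."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_lfs_pointer(pointer_text)
# fields["oid"] holds the sha256 digest of the real file content;
# fields["size"] is the payload size in bytes as a string.
print(fields["oid"])
print(int(fields["size"]))
```

The `oid` digest can be used to verify a downloaded payload, and `size` gives the true file size (e.g. the `model.onnx` pointer above points at roughly 156 MB of ONNX weights).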