narugo1992 committed on
Commit 359f393 · verified · 1 Parent(s): 6d3e146

Export model 'vit_base_patch16_siglip_gap_384.webli', on 2025-01-20 06:17:41 UTC
README.md CHANGED
@@ -2,7 +2,6 @@
  pipeline_tag: image-classification
  base_model:
  - timm/beitv2_base_patch16_224.in1k_ft_in1k
- - timm/caformer_b36.sail_in1k
  - timm/caformer_m36.sail_in1k
  - timm/caformer_m36.sail_in1k_384
  - timm/caformer_m36.sail_in22k_ft_in1k_384
@@ -93,6 +92,7 @@ base_model:
  - timm/vit_base_patch16_siglip_384.webli
  - timm/vit_base_patch16_siglip_gap_224.webli
  - timm/vit_base_patch16_siglip_gap_256.webli_i18n
+ - timm/vit_base_patch16_siglip_gap_384.webli
  - timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k
  - timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k
  - timm/vit_little_patch16_reg4_gap_256.sbb_in1k
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  # Models
 
- 352 models exported from TIMM in total.
+ 353 models exported from TIMM in total.
 
  ## Beit
 
@@ -771,7 +771,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  ## VisionTransformer
 
- 43 models with model class `VisionTransformer`.
+ 44 models with model class `VisionTransformer`.
 
  | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
  |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -782,6 +782,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
  | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
  | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
  | [vit_base_patch16_384.orig_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch16_384.orig_in21k_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_384 | 2022-12-22 |
+ | [vit_base_patch16_siglip_gap_384.webli](https://huggingface.co/timm/vit_base_patch16_siglip_gap_384.webli) | 85.6M | 49.3G | 384 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_gap_384 | 2024-12-24 |
  | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 64.0M | 36.8G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_384 | 2024-08-21 |
  | [vit_small_patch14_dinov2.lvd142m](https://huggingface.co/timm/vit_small_patch14_dinov2.lvd142m) | 21.5M | 29.5G | 518 | False | 384 | 384 | | VisionTransformer | vit_small_patch14_dinov2 | 2023-05-09 |
  | [vit_base_patch16_siglip_256.webli_i18n](https://huggingface.co/timm/vit_base_patch16_siglip_256.webli_i18n) | 92.7M | 22.2G | 256 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_256 | 2024-12-24 |
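The new `vit_base_patch16_siglip_gap_384.webli` entry takes a 384×384 input and is exported alongside a `preprocess.json`. As an illustration only, here is a minimal numpy sketch of SigLIP-style preprocessing: it uses a nearest-neighbour resize in place of timm's usual bicubic interpolation, and the mean=std=0.5 normalisation that SigLIP configs typically use — both are assumptions, so check the exported `preprocess.json` for the model's actual values:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 384) -> np.ndarray:
    """Turn an HWC uint8 image into the NCHW float32 tensor layout
    commonly expected by ONNX vision exports (assumed layout)."""
    h, w, _ = image.shape
    # nearest-neighbour resize (stand-in for the bicubic resize timm uses)
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    x = resized.astype(np.float32) / 255.0
    # assumed SigLIP normalisation: mean = std = 0.5 per channel
    x = (x - 0.5) / 0.5
    # HWC -> CHW, then add a batch dimension
    return x.transpose(2, 0, 1)[None]

# a black 500x400 image maps to a (1, 3, 384, 384) tensor of -1.0
batch = preprocess(np.zeros((500, 400, 3), dtype=np.uint8))
```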
models.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9a4d9ec4e871a69097010f55026b52fb1ead84d8d331d34ae645932499ce177a
- size 29087
+ oid sha256:f6cd040f9e746953f506f4437c432acd2f818aa8a5e02a793455728a1832df31
+ size 29176
vit_base_patch16_siglip_gap_384.webli/meta.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:054ec619cb0de627821d130e5dbe5b303ddd5f6cfc2f357fdd12812c05c059ed
+ size 496
vit_base_patch16_siglip_gap_384.webli/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcc32887286f9e0b62350e18bd8030e545df889388ab439dee5ee506da6790f1
+ size 344526200
vit_base_patch16_siglip_gap_384.webli/preprocess.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12eb69f461d904bd37771631fbd78ee5e3f973ce8269097856c99756d57dd898
+ size 642
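The large payloads in this commit (including the ~344 MB `model.onnx`) are stored in Git LFS; the three-line files in the diffs above are only LFS pointers. A minimal sketch of reading such a pointer into its fields, shown here on the `model.onnx` pointer from this commit:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # "size" is the byte count of the real payload; make it an int
    fields["size"] = int(fields["size"])
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:bcc32887286f9e0b62350e18bd8030e545df889388ab439dee5ee506da6790f1
size 344526200
"""
info = parse_lfs_pointer(pointer)
# info["oid"] identifies the blob; info["size"] is 344526200 bytes (~344 MB)
```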