narugo1992 committed commit 660cdf2 (verified, 1 parent: d070f80)

Export model 'vit_base_patch16_clip_224.laion2b_ft_in1k', on 2025-01-20 19:28:02 JST

README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-780 models exported from TIMM in total.
+781 models exported from TIMM in total.
 
 ## Beit
 
@@ -1138,7 +1138,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-95 models with model class `VisionTransformer`.
+96 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:---|:---|:---|---:|:---|---:|---:|:---|:---|:---|:---|
@@ -1175,6 +1175,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch16_clip_224.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-28 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-28 |
 | [vit_base_patch16_clip_224.openai_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-09 |
+| [vit_base_patch16_clip_224.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-09 |
 | [deit_base_patch16_224.fb_in1k](https://huggingface.co/timm/deit_base_patch16_224.fb_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit_base_patch16_224 | 2023-03-28 |
 | [deit3_base_patch16_224.fb_in1k](https://huggingface.co/timm/deit3_base_patch16_224.fb_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit3_base_patch16_224 | 2023-03-28 |
 | [vit_base_patch16_224.sam_in1k](https://huggingface.co/timm/vit_base_patch16_224.sam_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_224 | 2022-12-22 |
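Every entry in the model index above follows the same pipe-delimited schema. As a minimal sketch (a hypothetical helper, not part of this repository), splitting one row into named columns might look like:

```python
# Split a pipe-delimited markdown table row into named columns.
# The header names come from the table above; the helper itself is hypothetical.

HEADER = ["Name", "Params", "Flops", "Input Size", "Can Classify",
          "Features", "Classes", "Dataset", "Model", "Architecture", "Created At"]

def parse_row(row: str) -> dict:
    # Drop the leading/trailing pipes, then strip whitespace from each cell.
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    return dict(zip(HEADER, cells))

# The row added in this commit, copied from the diff above.
row = ("| [vit_base_patch16_clip_224.laion2b_ft_in1k]"
       "(https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in1k) "
       "| 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k "
       "| VisionTransformer | vit_base_patch16_clip_224 | 2022-11-09 |")

parsed = parse_row(row)
print(parsed["Params"], parsed["Dataset"])  # 86.4M imagenet-1k
```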
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2f914b444ceda0fbf83dd96dc4130fe157a22ebb73c393e8d476d38751c2aab2
-size 47119
+oid sha256:bcb6c649690709820e05b0c38603f60c53a11875b054d926a101dc2463617e80
+size 47158
vit_base_patch16_clip_224.laion2b_ft_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9828526cc3b729a98d3d708820177821bbdd2316c287dc19f4824a20055f3c9e
+size 169862
vit_base_patch16_clip_224.laion2b_ft_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3bce20c9eaa0cc03bf69716ad13230ee72caf82b1418956152516d03cf8bd459
+size 346447181
vit_base_patch16_clip_224.laion2b_ft_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3682d4797f20efb977406f35d3a5b2093108d65194bd539fcc185b5fcc73d161
+size 736
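The `version`/`oid`/`size` entries above are Git LFS pointer files: three `key value` lines committed in place of the real blob. A minimal sketch of parsing that three-line format (the helper name is hypothetical):

```python
# Parse a Git LFS pointer file (the "version / oid / size" format shown above)
# into a dict. Hypothetical helper, assuming exactly that key-value layout.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # size is a byte count; convert it to an int for convenience
    fields["size"] = int(fields["size"])
    return fields

# Pointer contents for preprocess.json, copied from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3682d4797f20efb977406f35d3a5b2093108d65194bd539fcc185b5fcc73d161
size 736
"""

info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```

Fetching the actual 346 MB `model.onnx` blob still goes through `git lfs pull` or the Hugging Face download APIs; the pointer only records which object to fetch and how large it is.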