narugo1992 committed
Commit df3d448 · verified · 1 Parent(s): d2e46e9

Export model 'vit_base_patch32_clip_224.openai_ft_in1k', on 2025-01-20 06:55:00 UTC

README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-427 models exported from TIMM in total.
+428 models exported from TIMM in total.
 
 ## Beit
 
@@ -833,7 +833,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-52 models with model class `VisionTransformer`.
+53 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -878,6 +878,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [flexivit_small.1200ep_in1k](https://huggingface.co/timm/flexivit_small.1200ep_in1k) | 22.0M | 4.9G | 240 | True | 384 | 1000 | imagenet-1k | VisionTransformer | flexivit_small | 2022-12-22 |
 | [flexivit_small.600ep_in1k](https://huggingface.co/timm/flexivit_small.600ep_in1k) | 22.0M | 4.9G | 240 | True | 384 | 1000 | imagenet-1k | VisionTransformer | flexivit_small | 2022-12-22 |
 | [flexivit_small.300ep_in1k](https://huggingface.co/timm/flexivit_small.300ep_in1k) | 22.0M | 4.9G | 240 | True | 384 | 1000 | imagenet-1k | VisionTransformer | flexivit_small | 2022-12-22 |
+| [vit_base_patch32_clip_224.openai_ft_in1k](https://huggingface.co/timm/vit_base_patch32_clip_224.openai_ft_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_224 | 2022-11-10 |
 | [vit_base_patch32_clip_224.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_224.laion2b_ft_in12k_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_224 | 2022-11-05 |
 | [vit_base_patch32_224.sam_in1k](https://huggingface.co/timm/vit_base_patch32_224.sam_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
 | [vit_base_patch32_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch32_224.augreg_in21k_ft_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
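The table row for the new export lists a 224x224 input size and 1000 ImageNet-1k classes. A minimal sketch of the ImageNet-style preprocessing such an export typically expects; the mean/std constants here are assumptions, since the authoritative values ship in the model's `preprocess.json`:

```python
# Sketch only: ImageNet-style normalization for a 224x224 RGB input.
# The mean/std values below are the common ImageNet defaults, NOT read
# from this model's preprocess.json.
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """uint8 HWC array of shape (224, 224, 3) -> float32 NCHW (1, 3, 224, 224)."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = image.astype(np.float32) / 255.0   # scale to [0, 1]
    x = (x - mean) / std                   # per-channel normalization
    return x.transpose(2, 0, 1)[None]      # HWC -> NCHW, add batch dim
```

The resulting array matches the NCHW input shape an ONNX classifier exported at 224x224 would consume.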
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb5cec8522f522c85ff3138c3592d47229ad9bea78c40e4b8b3a1d8744690a83
-size 32541
+oid sha256:a0f74ce7c158b4198edcd7d877ad4f7ef2250a1df3478a42f13390418b770315
+size 32568
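The pointer files in this commit all use the three-line git-LFS format: `version`, `oid`, and `size` key/value lines separated by a single space. A small sketch of parsing one, using the updated `models.parquet` pointer as input:

```python
# Minimal parser for the three-line git-LFS pointer format shown in this diff.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # "oid sha256:..." -> key/value
        fields[key] = value
    fields["size"] = int(fields["size"])     # size is the object's byte count
    return fields

# The pointer committed for models.parquet in this change:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a0f74ce7c158b4198edcd7d877ad4f7ef2250a1df3478a42f13390418b770315
size 32568
"""
```

Only the pointer is stored in git; the 32568-byte parquet object itself lives in LFS storage under that oid.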
vit_base_patch32_clip_224.openai_ft_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b26d78ccf316be79f6a92d33cb75804b90f9ef674723329e8e18b9a87e690806
+size 169859
vit_base_patch32_clip_224.openai_ft_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:063ea220e6b78de6039f7768387a592307c80bafab362254fb6f23bc475bed01
+size 353072851
vit_base_patch32_clip_224.openai_ft_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:574b771c20861320cb80c928011fd91d4b6bda6830376ee87b045e2191b1bf4c
+size 736
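Each pointer above records both an `oid sha256:<hex>` and a `size`, so a fetched LFS object can be checked against its pointer. A hedged sketch using only `hashlib` (no LFS client APIs assumed):

```python
# Sketch: verify downloaded bytes against the oid/size recorded in a
# git-LFS pointer file like the ones added in this commit.
import hashlib

def sha256_oid(data: bytes) -> str:
    """Return the oid string git-LFS would record for these bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def matches_pointer(data: bytes, recorded_oid: str, recorded_size: int) -> bool:
    # A pointer is satisfied when both the byte count and the digest agree.
    return len(data) == recorded_size and sha256_oid(data) == recorded_oid
```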