narugo1992 committed:
Export model 'vit_base_patch16_clip_384.openai_ft_in1k', on 2025-01-20 06:59:55 UTC
README.md
CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+436 models exported from TIMM in total.
 
 ## Beit
 
@@ -838,7 +838,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+56 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -847,6 +847,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_r50_s16_384.orig_in21k_ft_in1k](https://huggingface.co/timm/vit_base_r50_s16_384.orig_in21k_ft_in1k) | 86.6M | 49.5G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_r50_s16_384 | 2022-12-23 |
 | [vit_base_patch16_clip_384.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.openai_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-30 |
 | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
+| [vit_base_patch16_clip_384.openai_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.openai_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
 | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
 | [deit_base_patch16_384.fb_in1k](https://huggingface.co/timm/deit_base_patch16_384.fb_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit_base_patch16_384 | 2023-03-28 |
 | [deit3_base_patch16_384.fb_in1k](https://huggingface.co/timm/deit3_base_patch16_384.fb_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit3_base_patch16_384 | 2023-03-28 |
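Both counts added to the README come straight from the model index updated in the next diff. A minimal sketch of reproducing them from models.parquet, assuming its columns mirror the README table headers (with `Model` holding the model class); the actual parquet schema is not visible in this commit:

```python
import pandas as pd

# models.parquet is tracked via Git LFS, so clone the repo and run
# `git lfs pull` first to materialize the real file.
df = pd.read_parquet("models.parquet")

print(len(df))                                     # expected: 436 models in total
print((df["Model"] == "VisionTransformer").sum())  # expected: 56
```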
models.parquet
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5731dabe315a74e11e309499859f419182c7edf8dd34f63e6b9e6461bb8d2963
+size 32866
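What the diff shows for models.parquet, and for the three files added below, is not the binary content but a Git LFS pointer: three lines carrying the spec version, the sha256 of the real blob, and its size in bytes. A minimal sketch of parsing one such pointer:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5731dabe315a74e11e309499859f419182c7edf8dd34f63e6b9e6461bb8d2963
size 32866"""

info = parse_lfs_pointer(pointer)
assert info["oid"].startswith("sha256:")
assert info["size"] == "32866"  # the new models.parquet blob is ~32 KB
```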
vit_base_patch16_clip_384.openai_ft_in1k/meta.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50223d8d63318b9a3638014dd64e0a27efacd6b6e5485631ed0382151544295f
+size 169860
vit_base_patch16_clip_384.openai_ft_in1k/model.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b490fb1c4f81255ed7b207c5ce75c94433024363fe72ece25170e1d519cf3167
+size 347614541
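This ~347 MB file is the exported network itself. A minimal sketch of running it with onnxruntime, assuming a standard TIMM ONNX classification export with a single NCHW image input; the 384x384 size and 1000 classes follow the README table, the tensor name is queried from the graph rather than assumed, and the real preprocessing stats live in preprocess.json, whose contents this diff does not show:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "vit_base_patch16_clip_384.openai_ft_in1k/model.onnx",
    providers=["CPUExecutionProvider"],
)
inp = sess.get_inputs()[0]  # query the input name instead of guessing it

# Dummy batch; a real pipeline would resize to 384x384 and normalize
# with the mean/std recorded in preprocess.json.
x = np.random.rand(1, 3, 384, 384).astype(np.float32)

logits = sess.run(None, {inp.name: x})[0]
print(logits.shape)  # expected (1, 1000): imagenet-1k classes
```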
vit_base_patch16_clip_384.openai_ft_in1k/preprocess.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3fe8c8024a120fc5ab528b277d31feb09fe9a0997e76fba27a1ac0f77651353
+size 789