narugo1992 committed on 2025-01-19 18:32:35 UTC:
Export model 'vit_base_patch16_clip_384.laion2b_ft_in1k'
README.md CHANGED

@@ -69,6 +69,7 @@ base_model:
 - timm/swin_s3_small_224.ms_in1k
 - timm/test_convnext2.r160_in1k
 - timm/twins_pcpvt_base.in1k
+- timm/vit_base_patch16_clip_384.laion2b_ft_in1k
 - timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k
 - timm/vit_base_r50_s16_384.orig_in21k_ft_in1k
 - timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k
@@ -88,7 +89,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-74 models exported from TIMM in total.
+75 models exported from TIMM in total.
 
 ## ByobNet
 
@@ -348,12 +349,13 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-4 models with model class `VisionTransformer`.
+5 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------|:------------------|:---------------------------------|:-------------|
 | [vit_base_r50_s16_384.orig_in21k_ft_in1k](https://huggingface.co/timm/vit_base_r50_s16_384.orig_in21k_ft_in1k) | 86.6M | 49.5G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_r50_s16_384 | 2022-12-23 |
 | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
+| [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
 | [vit_betwixt_patch16_reg1_gap_256.sbb_in1k](https://huggingface.co/timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k) | 60.2M | 15.3G | 256 | True | 640 | 1000 | imagenet-1k | VisionTransformer | vit_betwixt_patch16_reg1_gap_256 | 2024-05-10 |
 | [vit_little_patch16_reg4_gap_256.sbb_in1k](https://huggingface.co/timm/vit_little_patch16_reg4_gap_256.sbb_in1k) | 22.4M | 5.7G | 256 | True | 320 | 1000 | imagenet-1k | VisionTransformer | vit_little_patch16_reg4_gap_256 | 2024-05-10 |
 
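The VisionTransformer table updated above can also be queried programmatically, e.g. to pick a model that fits a compute budget. A minimal sketch with the table's rows copied into a plain list (the list shape and the helper name `under_flops_budget` are my own, not part of the repository):

```python
# Rows from the VisionTransformer table: (name, params in M, FLOPs in G, input size).
MODELS = [
    ("vit_base_r50_s16_384.orig_in21k_ft_in1k", 86.6, 49.5, 384),
    ("vit_base_patch16_clip_384.laion2b_ft_in12k_in1k", 86.4, 49.4, 384),
    ("vit_base_patch16_clip_384.laion2b_ft_in1k", 86.4, 49.4, 384),
    ("vit_betwixt_patch16_reg1_gap_256.sbb_in1k", 60.2, 15.3, 256),
    ("vit_little_patch16_reg4_gap_256.sbb_in1k", 22.4, 5.7, 256),
]

def under_flops_budget(models, max_gflops):
    """Return names of models whose per-inference FLOPs fit the budget (in GFLOPs)."""
    return [name for name, _params, gflops, _size in models if gflops <= max_gflops]

# Only the two 256px sbb_in1k models (15.3G and 5.7G) fit a 20 GFLOP budget.
print(under_flops_budget(MODELS, 20.0))
```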
models.parquet CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:db91437ad8a57091b56efdcdcf81b8fa4e2f0a20d2b236283c63f70eb1dfb40a
+size 13648
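The `models.parquet` entry above is a Git LFS pointer file, not the parquet data itself: three `key value` lines giving the spec version, the blob's sha256 digest, and its byte size. A small sketch parsing the new pointer from this commit (the helper name `parse_lfs_pointer` is my own):

```python
# The new models.parquet pointer committed in this change.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:db91437ad8a57091b56efdcdcf81b8fa4e2f0a20d2b236283c63f70eb1dfb40a
size 13648
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    # The oid is prefixed with the hash algorithm, e.g. "sha256:<64 hex chars>".
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

info = parse_lfs_pointer(POINTER)
print(info["algo"], info["size"])
```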
vit_base_patch16_clip_384.laion2b_ft_in1k/meta.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba186254af5cf27659a39d60d031718f95e781bd6d9ae240471dc63252863491
+size 169862
vit_base_patch16_clip_384.laion2b_ft_in1k/model.onnx ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1817c76f3d91e7dc9a796a492f2b616773888a1cfc26ec14b3b35ddd798341c0
+size 347614541
vit_base_patch16_clip_384.laion2b_ft_in1k/preprocess.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3fe8c8024a120fc5ab528b277d31feb09fe9a0997e76fba27a1ac0f77651353
+size 789
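After downloading one of these LFS-tracked files, its contents can be checked against the sha256 oid recorded in the pointer. A minimal verification sketch (the helper name `verify_lfs_blob` and the local path in the usage comment are hypothetical):

```python
import hashlib

def verify_lfs_blob(path: str, expected_digest: str, chunk_size: int = 1 << 20) -> bool:
    """Hash the file at `path` in chunks and compare with the pointer's sha256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_digest

# Usage (hypothetical local copy): check preprocess.json against the oid
# recorded in this commit's pointer file.
# verify_lfs_blob("preprocess.json",
#                 "c3fe8c8024a120fc5ab528b277d31feb09fe9a0997e76fba27a1ac0f77651353")
```

Hashing in fixed-size chunks keeps memory flat, which matters for the 347 MB `model.onnx` blob.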