narugo1992 committed: Export model 'vit_base_patch32_clip_224.laion2b_ft_in12k_in1k', on 2025-01-20 05:39:54 UTC
README.md CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+287 models exported from TIMM in total.
 
 ## Beit
 
@@ -694,7 +694,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+37 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-----|:-------|:------|-----------:|:-------------|---------:|--------:|:--------|:------|:-------------|:-----------|
@@ -728,6 +728,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [flexivit_small.1200ep_in1k](https://huggingface.co/timm/flexivit_small.1200ep_in1k) | 22.0M | 4.9G | 240 | True | 384 | 1000 | imagenet-1k | VisionTransformer | flexivit_small | 2022-12-22 |
 | [flexivit_small.600ep_in1k](https://huggingface.co/timm/flexivit_small.600ep_in1k) | 22.0M | 4.9G | 240 | True | 384 | 1000 | imagenet-1k | VisionTransformer | flexivit_small | 2022-12-22 |
 | [flexivit_small.300ep_in1k](https://huggingface.co/timm/flexivit_small.300ep_in1k) | 22.0M | 4.9G | 240 | True | 384 | 1000 | imagenet-1k | VisionTransformer | flexivit_small | 2022-12-22 |
+| [vit_base_patch32_clip_224.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch32_clip_224.laion2b_ft_in12k_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_clip_224 | 2022-11-05 |
 | [vit_base_patch32_224.sam_in1k](https://huggingface.co/timm/vit_base_patch32_224.sam_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
 | [vit_base_patch32_224.augreg_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch32_224.augreg_in21k_ft_in1k) | 88.2M | 4.4G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch32_224 | 2022-12-22 |
 | [vit_small_patch16_224.augreg_in1k](https://huggingface.co/timm/vit_small_patch16_224.augreg_in1k) | 22.0M | 4.2G | 224 | True | 384 | 1000 | imagenet-1k | VisionTransformer | vit_small_patch16_224 | 2022-12-22 |
models.parquet CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:31f4390f60b9a54f83b518ac75ea6cf2f35b92974a94d6a5cf2e6922884cbde7
+size 25901
|
vit_base_patch32_clip_224.laion2b_ft_in12k_in1k/meta.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17d6403856f673ac883a61af7b42ffb1dc4a39fa8faf87bf340df031ecf9cdaf
+size 169873
vit_base_patch32_clip_224.laion2b_ft_in12k_in1k/model.onnx ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a688c50432d1d038ff933c24319baa1793ca59d9c51c3b1fd458201f5adbb91c
+size 353072851
vit_base_patch32_clip_224.laion2b_ft_in12k_in1k/preprocess.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:574b771c20861320cb80c928011fd91d4b6bda6830376ee87b045e2191b1bf4c
+size 736