narugo1992 committed:
Export model 'vit_base_patch16_224.sam_in1k', on 2025-01-20 06:21:42 UTC
README.md
CHANGED

@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-
+357 models exported from TIMM in total.
 
 ## Beit
 
@@ -773,7 +773,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-
+46 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -796,6 +796,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch16_clip_224.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.openai_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-27 |
 | [vit_base_patch16_clip_224.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_224.laion2b_ft_in12k_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_224 | 2022-11-27 |
 | [deit_base_patch16_224.fb_in1k](https://huggingface.co/timm/deit_base_patch16_224.fb_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit_base_patch16_224 | 2023-03-28 |
+| [vit_base_patch16_224.sam_in1k](https://huggingface.co/timm/vit_base_patch16_224.sam_in1k) | 86.4M | 16.9G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_224 | 2022-12-22 |
 | [vit_base_patch16_224.orig_in21k](https://huggingface.co/timm/vit_base_patch16_224.orig_in21k) | 85.6M | 16.9G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_224 | 2023-11-16 |
 | [vit_base_patch16_rpn_224.sw_in1k](https://huggingface.co/timm/vit_base_patch16_rpn_224.sw_in1k) | 86.4M | 16.8G | 224 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_rpn_224 | 2022-12-22 |
 | [vit_base_patch16_siglip_gap_224.webli](https://huggingface.co/timm/vit_base_patch16_siglip_gap_224.webli) | 85.6M | 16.8G | 224 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_gap_224 | 2024-12-24 |
models.parquet
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:aea4b5da05ddfd52e3778f86c95dce77fd08051fbd33ffa46540f98e4a9bcb96
+size 29394
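The changed and added files above are Git LFS pointers: a `version` line, an `oid sha256:<hash>` line, and a `size <bytes>` line standing in for the real file. A minimal sketch of parsing such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of this repo; the sample values are the new `models.parquet` pointer from this commit):

```python
# Parse a Git LFS pointer file into its fields.
# Per the git-lfs pointer spec, each line is "<key> <value>":
# a "version" URL, an "oid" ("sha256:<hex digest>"), and a "size" in bytes.

def parse_lfs_pointer(text: str) -> dict:
    """Return {'version': ..., 'oid': ..., 'size': ...} for an LFS pointer."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),  # keep just the hex digest
        "size": int(fields["size"]),
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:aea4b5da05ddfd52e3778f86c95dce77fd08051fbd33ffa46540f98e4a9bcb96
size 29394
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 29394
```

The `size` field is the length of the real object, so the pointer alone tells you the exported `model.onnx` below weighs in at roughly 346 MB before any download happens.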
vit_base_patch16_224.sam_in1k/meta.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14dfb8944bb2f802055081b4ce36e1ffbebf542aa1642f915b6816ee993c70a4
+size 169833
vit_base_patch16_224.sam_in1k/model.onnx
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e07170afe29e5b82908206c202fcb3bbb78555f40fed86ed937bbcbae3c8c6c5
+size 346442876
vit_base_patch16_224.sam_in1k/preprocess.json
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c033d894bbd7f7cf6880042701e974ed810733c52b2db8b094efeebf78fed2
+size 642
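The `preprocess.json` added alongside the ONNX file presumably stores the input pipeline for the 224x224 model. A minimal numpy sketch of the typical shape handling for such an export, going from an HxWx3 uint8 image to the 1x3x224x224 float batch the ONNX graph expects; the mean/std of 0.5 used here are an assumption (the actual values live in `preprocess.json`):

```python
import numpy as np

# Typical input preparation for a 224x224 ViT ONNX export:
# scale uint8 RGB to [0, 1], normalize channel-wise, reorder HWC -> NCHW.
# mean/std of 0.5 are assumed here; the real config ships in preprocess.json.

def preprocess(image: np.ndarray, mean: float = 0.5, std: float = 0.5) -> np.ndarray:
    """image: 224x224x3 uint8 array (already resized)."""
    x = image.astype(np.float32) / 255.0       # [0, 255] -> [0, 1]
    x = (x - mean) / std                       # normalize, roughly [-1, 1]
    x = x.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> 1x3xHxW batch
    return x

dummy = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 3, 224, 224)
```

Feeding `batch` to the exported `model.onnx` through an ONNX Runtime `InferenceSession` would then produce the 1000-class ImageNet logits listed for this model in the table above.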