narugo1992 committed
Commit b507488 · verified · 1 Parent(s): 561afa1

Export model 'deit3_base_patch16_384.fb_in1k', on 2025-01-20 06:20:56 UTC
README.md CHANGED
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 # Models
 
-355 models exported from TIMM in total.
+356 models exported from TIMM in total.
 
 ## Beit
 
@@ -773,7 +773,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
 ## VisionTransformer
 
-44 models with model class `VisionTransformer`.
+45 models with model class `VisionTransformer`.
 
 | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
 |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------------|:------------------|:---------------------------------|:-------------|
@@ -783,6 +783,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 | [vit_base_patch16_clip_384.openai_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.openai_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-30 |
 | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
 | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
+| [deit3_base_patch16_384.fb_in1k](https://huggingface.co/timm/deit3_base_patch16_384.fb_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | deit3_base_patch16_384 | 2023-03-28 |
 | [vit_base_patch16_384.orig_in21k_ft_in1k](https://huggingface.co/timm/vit_base_patch16_384.orig_in21k_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_384 | 2022-12-22 |
 | [vit_base_patch16_siglip_gap_384.webli](https://huggingface.co/timm/vit_base_patch16_siglip_gap_384.webli) | 85.6M | 49.3G | 384 | False | 768 | 768 | | VisionTransformer | vit_base_patch16_siglip_gap_384 | 2024-12-24 |
 | [vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k](https://huggingface.co/timm/vit_mediumd_patch16_reg4_gap_384.sbb2_e200_in12k_ft_in1k) | 64.0M | 36.8G | 384 | True | 512 | 1000 | imagenet-1k | VisionTransformer | vit_mediumd_patch16_reg4_gap_384 | 2024-08-21 |
deit3_base_patch16_384.fb_in1k/meta.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e3cea30d101a86f8ef3b86926265d1e6731dc93994b52d9176f7107e059cec9
+size 169837
deit3_base_patch16_384.fb_in1k/model.onnx ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c88fba5012b2bbb699627a420a57aa9247cdf2418f00b873dc91da30fa32fa43
+size 347739657
deit3_base_patch16_384.fb_in1k/preprocess.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3d905649f9af19e946d208250dca795cd7a9609676ccbe6e5db51131a5b3c7e
+size 734
models.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:07ec2f720c887463bfdd35cf3399102beaccff604191d6fc3070a029b7dded1c
-size 29333
+oid sha256:46ff886f86642a12ef20039ba20fc6a938b98563e58c6cbad97eeb979cd9d96c
+size 29360
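The added `meta.json`, `model.onnx`, and `preprocess.json` files (and the updated `models.parquet`) are stored as Git LFS pointers rather than raw payloads: each pointer is three `key value` lines recording the LFS spec version, the `sha256:` object id, and the payload size in bytes. A minimal sketch of reading such a pointer — the helper name `parse_lfs_pointer` is illustrative, not part of any Git LFS tooling:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", e.g. "size 347739657".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The model.onnx pointer exactly as it appears in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c88fba5012b2bbb699627a420a57aa9247cdf2418f00b873dc91da30fa32fa43
size 347739657"""

info = parse_lfs_pointer(pointer)
# info["size"] is the byte size of the real ONNX payload (~348 MB here),
# which the LFS smudge filter fetches on checkout.
```

A pointer this small is what `git show` displays for LFS-tracked files; the actual 348 MB ONNX graph lives in the LFS object store keyed by the `oid`.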