Commit b74130d (verified) by narugo1992 · Parent(s): 57d6de5

Export model 'vit_base_patch14_reg4_dinov2.lvd142m', on 2025-01-20 04:20:02 UTC
README.md CHANGED
@@ -11,7 +11,6 @@ base_model:
  - timm/convnext_base.clip_laion2b_augreg_ft_in12k
  - timm/convnext_nano.r384_in12k_ft_in1k
  - timm/convnext_zepto_rms.ra4_e3600_r224_in1k
- - timm/convnextv2_nano.fcmae_ft_in22k_in1k
  - timm/cs3edgenet_x.c2_in1k
  - timm/cs3sedarknet_x.c2ns_in1k
  - timm/cspresnet50.ra_in1k
@@ -93,6 +92,7 @@ base_model:
  - timm/test_convnext2.r160_in1k
  - timm/tresnet_v2_l.miil_in21k_ft_in1k
  - timm/twins_pcpvt_base.in1k
+ - timm/vit_base_patch14_reg4_dinov2.lvd142m
  - timm/vit_betwixt_patch16_reg1_gap_256.sbb_in1k
  - timm/vit_little_patch16_reg1_gap_256.sbb_in12k_ft_in1k
  - timm/vit_little_patch16_reg4_gap_256.sbb_in1k
@@ -114,7 +114,7 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  # Models
 
- 121 models exported from TIMM in total.
+ 122 models exported from TIMM in total.
 
  ## ByobNet
 
@@ -448,10 +448,11 @@ ONNX export version from [TIMM](https://huggingface.co/timm).
 
  ## VisionTransformer
 
- 10 models with model class `VisionTransformer`.
+ 11 models with model class `VisionTransformer`.
 
  | Name | Params | Flops | Input Size | Can Classify | Features | Classes | Dataset | Model | Architecture | Created At |
  |:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------|:--------|-------------:|:---------------|-----------:|----------:|:------------|:------------------|:---------------------------------|:-------------|
+ | [vit_base_patch14_reg4_dinov2.lvd142m](https://huggingface.co/timm/vit_base_patch14_reg4_dinov2.lvd142m) | 85.5M | 117.4G | 518 | False | 768 | 768 | | VisionTransformer | vit_base_patch14_reg4_dinov2 | 2023-10-30 |
  | [vit_base_r50_s16_384.orig_in21k_ft_in1k](https://huggingface.co/timm/vit_base_r50_s16_384.orig_in21k_ft_in1k) | 86.6M | 49.5G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_r50_s16_384 | 2022-12-23 |
  | [vit_base_patch16_clip_384.laion2b_ft_in12k_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in12k_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-11 |
  | [vit_base_patch16_clip_384.laion2b_ft_in1k](https://huggingface.co/timm/vit_base_patch16_clip_384.laion2b_ft_in1k) | 86.4M | 49.4G | 384 | True | 768 | 1000 | imagenet-1k | VisionTransformer | vit_base_patch16_clip_384 | 2022-11-09 |
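As a quick sanity check on the new table row: `vit_base_patch14_reg4_dinov2` at input size 518 divides evenly into 14-pixel patches (518 / 14 = 37 per side). The sketch below computes the resulting transformer sequence length, assuming the standard ViT layout of one class token plus, per the `reg4` tag, four register tokens; this helper is illustrative, not part of the export.

```python
def vit_token_count(input_size: int, patch_size: int, reg_tokens: int = 0,
                    cls_token: bool = True) -> int:
    """Number of tokens entering the transformer for a square input image."""
    if input_size % patch_size != 0:
        raise ValueError("input size must be a multiple of the patch size")
    patches_per_side = input_size // patch_size
    # patch tokens + register tokens + optional class token
    return patches_per_side ** 2 + reg_tokens + (1 if cls_token else 0)

# vit_base_patch14_reg4_dinov2 at 518px: 37*37 patches + 4 reg + 1 cls
print(vit_token_count(518, 14, reg_tokens=4))  # 1374
```

The long sequence (1374 tokens vs. 197 for a typical patch-16 model at 224px) is consistent with the comparatively high 117.4G FLOPs in the row above.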
models.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d4e9347cc434eadd4031af6d3e5c0354386cac6639969c8ccf03b475b8586d74
- size 16698
+ oid sha256:a951e162fab7df548e575ab34b0546f589a14cc3e589e7ac8debe8d9fd589de3
+ size 16800
vit_base_patch14_reg4_dinov2.lvd142m/meta.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aec5772c7c43ceec07829ed08fc056926d46ecd18914dd2e71116d268b7b598c
+ size 492
vit_base_patch14_reg4_dinov2.lvd142m/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a0e2b136c1baeb13af03836bc780850d1c4cf454269f588b16c318a99c8e520
+ size 346559204
vit_base_patch14_reg4_dinov2.lvd142m/preprocess.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b02dd3c1fcb9b7594cf55c4319cf7b0e2a308622254976f030b1335d241b0d5
+ size 734
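The three added files are stored as Git LFS pointer files: three lines giving the spec version, a `sha256` object id, and the byte size of the real blob. A minimal sketch of parsing one such pointer (the field names follow the LFS spec URL shown in the pointers themselves; the parser here is an illustration, not part of this repository):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "oid": fields["oid"],          # e.g. "sha256:8a0e2b13..."
        "size": int(fields["size"]),   # size of the actual blob in bytes
    }

# The pointer added for model.onnx in this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:8a0e2b136c1baeb13af03836bc780850d1c4cf454269f588b16c318a99c8e520
size 346559204
"""
print(parse_lfs_pointer(pointer)["size"])  # 346559204
```

The 346,559,204-byte size confirms the ~347 MB ONNX graph is the only large artifact in the commit; `meta.json` and `preprocess.json` are small JSON sidecars.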