hc99 committed on
Commit dee9fba · verified · 1 Parent(s): 08887ec

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full change set.

Files changed (50)
  1. testbed/huggingface__pytorch-image-models/docs/javascripts/tables.js +6 -0
  2. testbed/huggingface__pytorch-image-models/docs/models/.pages +1 -0
  3. testbed/huggingface__pytorch-image-models/docs/models/.templates/code_snippets.md +62 -0
  4. testbed/huggingface__pytorch-image-models/docs/models/.templates/generate_readmes.py +64 -0
  5. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/adversarial-inception-v3.md +98 -0
  6. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/csp-resnet.md +76 -0
  7. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/csp-resnext.md +77 -0
  8. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/densenet.md +305 -0
  9. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/dla.md +545 -0
  10. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ecaresnet.md +236 -0
  11. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/efficientnet-pruned.md +145 -0
  12. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/efficientnet.md +325 -0
  13. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ensemble-adversarial.md +98 -0
  14. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ese-vovnet.md +92 -0
  15. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/fbnet.md +76 -0
  16. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-inception-v3.md +78 -0
  17. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-resnet.md +504 -0
  18. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-resnext.md +142 -0
  19. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-senet.md +63 -0
  20. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-seresnext.md +136 -0
  21. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-xception.md +66 -0
  22. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/hrnet.md +358 -0
  23. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ig-resnext.md +209 -0
  24. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/inception-resnet-v2.md +72 -0
  25. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/inception-v3.md +85 -0
  26. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/inception-v4.md +71 -0
  27. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/legacy-se-resnet.md +257 -0
  28. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/legacy-se-resnext.md +167 -0
  29. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/legacy-senet.md +74 -0
  30. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mixnet.md +164 -0
  31. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mnasnet.md +109 -0
  32. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mobilenet-v2.md +210 -0
  33. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mobilenet-v3.md +138 -0
  34. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/nasnet.md +70 -0
  35. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/noisy-student.md +510 -0
  36. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/pnasnet.md +71 -0
  37. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/regnetx.md +492 -0
  38. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/regnety.md +506 -0
  39. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/res2net.md +260 -0
  40. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/res2next.md +75 -0
  41. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnest.md +408 -0
  42. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnet-d.md +263 -0
  43. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnet.md +378 -0
  44. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnext.md +183 -0
  45. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/rexnet.md +197 -0
  46. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/se-resnet.md +122 -0
  47. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/selecsls.md +136 -0
  48. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/seresnext.md +167 -0
  49. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/skresnet.md +112 -0
  50. testbed/huggingface__pytorch-image-models/docs/models/.templates/models/skresnext.md +70 -0
testbed/huggingface__pytorch-image-models/docs/javascripts/tables.js ADDED
@@ -0,0 +1,6 @@
+ app.location$.subscribe(function() {
+   var tables = document.querySelectorAll("article table")
+   tables.forEach(function(table) {
+     new Tablesort(table)
+   })
+ })
testbed/huggingface__pytorch-image-models/docs/models/.pages ADDED
@@ -0,0 +1 @@
+ title: Model Pages
testbed/huggingface__pytorch-image-models/docs/models/.templates/code_snippets.md ADDED
@@ -0,0 +1,62 @@
+ ## How do I use this model on an image?
+ To load a pretrained model:
+
+ ```python
+ import timm
+ model = timm.create_model('{{ model_name }}', pretrained=True)
+ model.eval()
+ ```
+
+ To load and preprocess the image:
+ ```python
+ import urllib.request
+ from PIL import Image
+ from timm.data import resolve_data_config
+ from timm.data.transforms_factory import create_transform
+
+ config = resolve_data_config({}, model=model)
+ transform = create_transform(**config)
+
+ url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
+ urllib.request.urlretrieve(url, filename)
+ img = Image.open(filename).convert('RGB')
+ tensor = transform(img).unsqueeze(0)  # transform and add batch dimension
+ ```
+
+ To get the model predictions:
+ ```python
+ import torch
+ with torch.no_grad():
+     out = model(tensor)
+ probabilities = torch.nn.functional.softmax(out[0], dim=0)
+ print(probabilities.shape)
+ # prints: torch.Size([1000])
+ ```
+
+ To get the top-5 prediction class names:
+ ```python
+ # Get ImageNet class mappings
+ url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
+ urllib.request.urlretrieve(url, filename)
+ with open("imagenet_classes.txt", "r") as f:
+     categories = [s.strip() for s in f.readlines()]
+
+ # Print top categories per image
+ top5_prob, top5_catid = torch.topk(probabilities, 5)
+ for i in range(top5_prob.size(0)):
+     print(categories[top5_catid[i]], top5_prob[i].item())
+ # prints class names and probabilities like:
+ # [('Samoyed', 0.6425196528434753), ('Pomeranian', 0.04062102362513542), ('keeshond', 0.03186424449086189), ('white wolf', 0.01739676296710968), ('Eskimo dog', 0.011717947199940681)]
+ ```
+
+ Replace the model name with the variant you want to use, e.g. `{{ model_name }}`. You can find the IDs in the model summaries at the top of this page.
+
+ To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use.
+
+ ## How do I finetune this model?
+ You can finetune any of the pre-trained models just by changing the classifier (the last layer).
+ ```python
+ model = timm.create_model('{{ model_name }}', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
+ ```
+ To finetune on your own dataset, you have to write a training loop or adapt [timm's training script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
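The softmax and top-k steps in the snippets above don't depend on timm itself. As an editorial aside, a minimal pure-Python sketch of the same math (dummy logits and made-up class names, no pretrained model or downloads) looks like:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def topk(probs, names, k=2):
    """Return the k (name, probability) pairs with the highest probability."""
    return sorted(zip(names, probs), key=lambda p: p[1], reverse=True)[:k]

# Dummy logits for three hypothetical classes (not real model output)
names = ["Samoyed", "Pomeranian", "keeshond"]
probs = softmax([3.0, 1.0, 0.5])
print(topk(probs, names, k=2))
```

The real snippet does the same thing with `torch.nn.functional.softmax` and `torch.topk` on the 1000-class output tensor.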
testbed/huggingface__pytorch-image-models/docs/models/.templates/generate_readmes.py ADDED
@@ -0,0 +1,64 @@
+ """
+ Run this script to generate the model-index files in `models` from the templates in `.templates/models`.
+ """
+
+ import argparse
+ from pathlib import Path
+
+ from jinja2 import Environment, FileSystemLoader
+
+ import modelindex
+
+
+ def generate_readmes(templates_path: Path, dest_path: Path):
+     """Add the code snippet template to the readmes"""
+     readme_templates_path = templates_path / "models"
+     code_template_path = templates_path / "code_snippets.md"
+
+     env = Environment(
+         loader=FileSystemLoader([readme_templates_path, readme_templates_path.parent]),
+     )
+
+     for readme in readme_templates_path.iterdir():
+         if readme.suffix == ".md":
+             template = env.get_template(readme.name)
+
+             # get the first model_name for this model family
+             mi = modelindex.load(str(readme))
+             model_name = mi.models[0].name
+
+             full_content = template.render(model_name=model_name)
+
+             # generate full_readme
+             with open(dest_path / readme.name, "w") as f:
+                 f.write(full_content)
+
+
+ def main():
+     parser = argparse.ArgumentParser(description="Model index generation config")
+     parser.add_argument(
+         "-t",
+         "--templates",
+         default=Path(__file__).parent / ".templates",
+         type=str,
+         help="Location of the markdown templates",
+     )
+     parser.add_argument(
+         "-d",
+         "--dest",
+         default=Path(__file__).parent / "models",
+         type=str,
+         help="Destination folder that contains the generated model-index files.",
+     )
+     args = parser.parse_args()
+     templates_path = Path(args.templates)
+     dest_readmes_path = Path(args.dest)
+
+     generate_readmes(
+         templates_path,
+         dest_readmes_path,
+     )
+
+
+ if __name__ == "__main__":
+     main()
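The script above leans on Jinja for two things: the `{% include 'code_snippets.md' %}` tag and the `{{ model_name }}` placeholder. As an illustration only, a toy regex-based stand-in for the placeholder substitution (not real Jinja; it assumes templates use nothing but simple `{{ name }}` variables) is:

```python
import re

def render(template: str, **context) -> str:
    """Toy stand-in for Jinja variable substitution: replace {{ name }} placeholders
    with values from context, leaving unknown placeholders untouched."""
    def sub(match):
        key = match.group(1)
        return str(context.get(key, match.group(0)))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

snippet = "model = timm.create_model('{{ model_name }}', pretrained=True)"
print(render(snippet, model_name="cspresnet50"))
# model = timm.create_model('cspresnet50', pretrained=True)
```

The real script uses `Environment.get_template(...).render(model_name=...)`, which also resolves the include against the template search path.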
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/adversarial-inception-v3.md ADDED
@@ -0,0 +1,98 @@
+ # Adversarial Inception v3
+
+ **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements, including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the sidehead). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module).
+
+ This particular model was trained for the study of adversarial examples (adversarial training).
+
+ The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/abs-1804-00097,
+   author    = {Alexey Kurakin and
+                Ian J. Goodfellow and
+                Samy Bengio and
+                Yinpeng Dong and
+                Fangzhou Liao and
+                Ming Liang and
+                Tianyu Pang and
+                Jun Zhu and
+                Xiaolin Hu and
+                Cihang Xie and
+                Jianyu Wang and
+                Zhishuai Zhang and
+                Zhou Ren and
+                Alan L. Yuille and
+                Sangxia Huang and
+                Yao Zhao and
+                Yuzhe Zhao and
+                Zhonglin Han and
+                Junjiajia Long and
+                Yerkebulan Berdibekov and
+                Takuya Akiba and
+                Seiya Tokui and
+                Motoki Abe},
+   title     = {Adversarial Attacks and Defences Competition},
+   journal   = {CoRR},
+   volume    = {abs/1804.00097},
+   year      = {2018},
+   url       = {http://arxiv.org/abs/1804.00097},
+   archivePrefix = {arXiv},
+   eprint    = {1804.00097},
+   timestamp = {Thu, 31 Oct 2019 16:31:22 +0100},
+   biburl    = {https://dblp.org/rec/journals/corr/abs-1804-00097.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Adversarial Inception v3
+   Paper:
+     Title: Adversarial Attacks and Defences Competition
+     URL: https://paperswithcode.com/paper/adversarial-attacks-and-defences-competition
+ Models:
+ - Name: adv_inception_v3
+   In Collection: Adversarial Inception v3
+   Metadata:
+     FLOPs: 7352418880
+     Parameters: 23830000
+     File Size: 95549439
+     Architecture:
+     - 1x1 Convolution
+     - Auxiliary Classifier
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Inception-v3 Module
+     - Max Pooling
+     - ReLU
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Data:
+     - ImageNet
+     ID: adv_inception_v3
+     Crop Pct: '0.875'
+     Image Size: '299'
+     Interpolation: bicubic
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L456
+     Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/adv_inception_v3-9e27bd63.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.58%
+       Top 5 Accuracy: 93.74%
+ -->
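Each model page embeds its model-index metadata in an HTML comment, which tooling such as `modelindex` parses back out. As a rough sketch of that extraction step (regex-based and hypothetical; the real `modelindex.load` parses the YAML and builds model objects), one could do:

```python
import re

# Hypothetical miniature model page mirroring the layout above
PAGE = """# Adversarial Inception v3

Some prose.

<!--
Type: model-index
Models:
- Name: adv_inception_v3
-->
"""

def extract_model_index(markdown: str) -> str:
    """Return the body of the first HTML comment that starts with 'Type: model-index'."""
    match = re.search(r"<!--\s*(Type: model-index.*?)-->", markdown, flags=re.DOTALL)
    return match.group(1).strip() if match else ""

block = extract_model_index(PAGE)
print(block.splitlines()[0])
# Type: model-index
```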
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/csp-resnet.md ADDED
@@ -0,0 +1,76 @@
+ # CSP-ResNet
+
+ **CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{wang2019cspnet,
+   title={CSPNet: A New Backbone that can Enhance Learning Capability of CNN},
+   author={Chien-Yao Wang and Hong-Yuan Mark Liao and I-Hau Yeh and Yueh-Hua Wu and Ping-Yang Chen and Jun-Wei Hsieh},
+   year={2019},
+   eprint={1911.11929},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: CSP ResNet
+   Paper:
+     Title: 'CSPNet: A New Backbone that can Enhance Learning Capability of CNN'
+     URL: https://paperswithcode.com/paper/cspnet-a-new-backbone-that-can-enhance
+ Models:
+ - Name: cspresnet50
+   In Collection: CSP ResNet
+   Metadata:
+     FLOPs: 5924992000
+     Parameters: 21620000
+     File Size: 86679303
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Bottleneck Residual Block
+     - Convolution
+     - Global Average Pooling
+     - Max Pooling
+     - ReLU
+     - Residual Block
+     - Residual Connection
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - Polynomial Learning Rate Decay
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     ID: cspresnet50
+     LR: 0.1
+     Layers: 50
+     Crop Pct: '0.887'
+     Momentum: 0.9
+     Batch Size: 128
+     Image Size: '256'
+     Weight Decay: 0.005
+     Interpolation: bilinear
+     Training Steps: 8000000
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/cspnet.py#L415
+     Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/cspresnet50_ra-d3e8d487.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 79.57%
+       Top 5 Accuracy: 94.71%
+ -->
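The split-and-merge idea described above can be illustrated on a toy "feature map" — a plain list standing in for channels. This is only a conceptual sketch; real CSPNet splits and concatenates tensors inside the network, with the transformed half passing through a stack of residual blocks:

```python
def csp_block(channels, transform):
    """Toy cross-stage partial block: split the channels in two,
    send only one part through the transform, then merge the parts."""
    half = len(channels) // 2
    part1, part2 = channels[:half], channels[half:]
    processed = [transform(c) for c in part2]  # only half the features are transformed
    return part1 + processed                   # cross-stage merge

features = [1, 2, 3, 4]
print(csp_block(features, transform=lambda c: c * 10))
# [1, 2, 30, 40]
```

Because one part bypasses the transform entirely, gradients reach the base layer through two distinct paths, which is the "more gradient flow" the description refers to.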
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/csp-resnext.md ADDED
@@ -0,0 +1,77 @@
+ # CSP-ResNeXt
+
+ **CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{wang2019cspnet,
+   title={CSPNet: A New Backbone that can Enhance Learning Capability of CNN},
+   author={Chien-Yao Wang and Hong-Yuan Mark Liao and I-Hau Yeh and Yueh-Hua Wu and Ping-Yang Chen and Jun-Wei Hsieh},
+   year={2019},
+   eprint={1911.11929},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: CSP ResNeXt
+   Paper:
+     Title: 'CSPNet: A New Backbone that can Enhance Learning Capability of CNN'
+     URL: https://paperswithcode.com/paper/cspnet-a-new-backbone-that-can-enhance
+ Models:
+ - Name: cspresnext50
+   In Collection: CSP ResNeXt
+   Metadata:
+     FLOPs: 3962945536
+     Parameters: 20570000
+     File Size: 82562887
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Global Average Pooling
+     - Grouped Convolution
+     - Max Pooling
+     - ReLU
+     - ResNeXt Block
+     - Residual Connection
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - Polynomial Learning Rate Decay
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 1x GPU
+     ID: cspresnext50
+     LR: 0.1
+     Layers: 50
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 128
+     Image Size: '224'
+     Weight Decay: 0.005
+     Interpolation: bilinear
+     Training Steps: 8000000
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/cspnet.py#L430
+     Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/cspresnext50_ra_224-648b4713.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 80.05%
+       Top 5 Accuracy: 94.94%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/densenet.md ADDED
@@ -0,0 +1,305 @@
+ # DenseNet
+
+ **DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers.
+
+ The **DenseNet Blur** variant in this collection by Ross Wightman employs [Blur Pooling](http://www.paperswithcode.com/method/blur-pooling).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/HuangLW16a,
+   author    = {Gao Huang and
+                Zhuang Liu and
+                Kilian Q. Weinberger},
+   title     = {Densely Connected Convolutional Networks},
+   journal   = {CoRR},
+   volume    = {abs/1608.06993},
+   year      = {2016},
+   url       = {http://arxiv.org/abs/1608.06993},
+   archivePrefix = {arXiv},
+   eprint    = {1608.06993},
+   timestamp = {Mon, 10 Sep 2018 15:49:32 +0200},
+   biburl    = {https://dblp.org/rec/journals/corr/HuangLW16a.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ ```BibTeX
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: DenseNet
+   Paper:
+     Title: Densely Connected Convolutional Networks
+     URL: https://paperswithcode.com/paper/densely-connected-convolutional-networks
+ Models:
+ - Name: densenet121
+   In Collection: DenseNet
+   Metadata:
+     FLOPs: 3641843200
+     Parameters: 7980000
+     File Size: 32376726
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Block
+     - Dense Connections
+     - Dropout
+     - Max Pooling
+     - ReLU
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Kaiming Initialization
+     - Nesterov Accelerated Gradient
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     ID: densenet121
+     LR: 0.1
+     Epochs: 90
+     Layers: 121
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 256
+     Image Size: '224'
+     Weight Decay: 0.0001
+     Interpolation: bicubic
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L295
+     Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenet121_ra-50efcf5c.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 75.56%
+       Top 5 Accuracy: 92.65%
+ - Name: densenet161
+   In Collection: DenseNet
+   Metadata:
+     FLOPs: 9931959264
+     Parameters: 28680000
+     File Size: 115730790
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Block
+     - Dense Connections
+     - Dropout
+     - Max Pooling
+     - ReLU
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Kaiming Initialization
+     - Nesterov Accelerated Gradient
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     ID: densenet161
+     LR: 0.1
+     Epochs: 90
+     Layers: 161
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 256
+     Image Size: '224'
+     Weight Decay: 0.0001
+     Interpolation: bicubic
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L347
+     Weights: https://download.pytorch.org/models/densenet161-8d451a50.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.36%
+       Top 5 Accuracy: 93.63%
+ - Name: densenet169
+   In Collection: DenseNet
+   Metadata:
+     FLOPs: 4316945792
+     Parameters: 14150000
+     File Size: 57365526
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Block
+     - Dense Connections
+     - Dropout
+     - Max Pooling
+     - ReLU
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Kaiming Initialization
+     - Nesterov Accelerated Gradient
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     ID: densenet169
+     LR: 0.1
+     Epochs: 90
+     Layers: 169
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 256
+     Image Size: '224'
+     Weight Decay: 0.0001
+     Interpolation: bicubic
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L327
+     Weights: https://download.pytorch.org/models/densenet169-b2777c0a.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 75.9%
+       Top 5 Accuracy: 93.02%
+ - Name: densenet201
+   In Collection: DenseNet
+   Metadata:
+     FLOPs: 5514321024
+     Parameters: 20010000
+     File Size: 81131730
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Block
+     - Dense Connections
+     - Dropout
+     - Max Pooling
+     - ReLU
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Kaiming Initialization
+     - Nesterov Accelerated Gradient
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     ID: densenet201
+     LR: 0.1
+     Epochs: 90
+     Layers: 201
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 256
+     Image Size: '224'
+     Weight Decay: 0.0001
+     Interpolation: bicubic
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L337
+     Weights: https://download.pytorch.org/models/densenet201-c1103571.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.29%
+       Top 5 Accuracy: 93.48%
+ - Name: densenetblur121d
+   In Collection: DenseNet
+   Metadata:
+     FLOPs: 3947812864
+     Parameters: 8000000
+     File Size: 32456500
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Blur Pooling
+     - Convolution
+     - Dense Block
+     - Dense Connections
+     - Dropout
+     - Max Pooling
+     - ReLU
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Data:
+     - ImageNet
+     ID: densenetblur121d
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L305
+     Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/densenetblur121d_ra-100dcfbc.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 76.59%
+       Top 5 Accuracy: 93.2%
+ - Name: tv_densenet121
+   In Collection: DenseNet
+   Metadata:
+     FLOPs: 3641843200
+     Parameters: 7980000
+     File Size: 32342954
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Block
+     - Dense Connections
+     - Dropout
+     - Max Pooling
+     - ReLU
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     ID: tv_densenet121
+     LR: 0.1
+     Epochs: 90
+     Crop Pct: '0.875'
+     LR Gamma: 0.1
+     Momentum: 0.9
+     Batch Size: 32
+     Image Size: '224'
+     LR Step Size: 30
+     Weight Decay: 0.0001
+     Interpolation: bicubic
+     Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/densenet.py#L379
+     Weights: https://download.pytorch.org/models/densenet121-a639ec97.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 74.74%
+       Top 5 Accuracy: 92.15%
+ -->
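One thing the model-index metadata makes easy is side-by-side comparison of variants. For instance, ranking the DenseNet variants listed on this page by their reported Top-1 accuracy (numbers copied from the metadata above):

```python
# Top-1 accuracies as reported in the model-index metadata on this page
top1 = {
    "densenet121": 75.56,
    "densenet161": 77.36,
    "densenet169": 75.9,
    "densenet201": 77.29,
    "densenetblur121d": 76.59,
    "tv_densenet121": 74.74,
}

# Sort variants from highest to lowest reported Top-1 accuracy
ranked = sorted(top1.items(), key=lambda kv: kv[1], reverse=True)
for name, acc in ranked:
    print(f"{name}: {acc:.2f}%")
```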
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/dla.md ADDED
@@ -0,0 +1,545 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Deep Layer Aggregation
2
+
3
+ Extending “shallow” skip connections, **Dense Layer Aggregation (DLA)** incorporates more depth and sharing. The authors introduce two structures for deep layer aggregation (DLA): iterative deep aggregation (IDA) and hierarchical deep aggregation (HDA). These structures are expressed through an architectural framework, independent of the choice of backbone, for compatibility with current and future networks.
4
+
5
+ IDA focuses on fusing resolutions and scales while HDA focuses on merging features from all modules and channels. IDA follows the base hierarchy to refine resolution and aggregate scale stage-bystage. HDA assembles its own hierarchy of tree-structured connections that cross and merge stages to aggregate different levels of representation.

{% include 'code_snippets.md' %}

## How do I train this model?

You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.

## Citation

```BibTeX
@misc{yu2019deep,
      title={Deep Layer Aggregation},
      author={Fisher Yu and Dequan Wang and Evan Shelhamer and Trevor Darrell},
      year={2019},
      eprint={1707.06484},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

<!--
Type: model-index
Collections:
- Name: DLA
  Paper:
    Title: Deep Layer Aggregation
    URL: https://paperswithcode.com/paper/deep-layer-aggregation
Models:
- Name: dla102
  In Collection: DLA
  Metadata:
    FLOPs: 7192952808
    Parameters: 33270000
    File Size: 135290579
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    Training Resources: 8x GPUs
    ID: dla102
    LR: 0.1
    Epochs: 120
    Layers: 102
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L410
  Weights: http://dl.yf.io/dla/models/imagenet/dla102-d94d9790.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.03%
      Top 5 Accuracy: 93.95%
- Name: dla102x
  In Collection: DLA
  Metadata:
    FLOPs: 5886821352
    Parameters: 26310000
    File Size: 107552695
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    Training Resources: 8x GPUs
    ID: dla102x
    LR: 0.1
    Epochs: 120
    Layers: 102
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L418
  Weights: http://dl.yf.io/dla/models/imagenet/dla102x-ad62be81.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.51%
      Top 5 Accuracy: 94.23%
- Name: dla102x2
  In Collection: DLA
  Metadata:
    FLOPs: 9343847400
    Parameters: 41280000
    File Size: 167645295
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    Training Resources: 8x GPUs
    ID: dla102x2
    LR: 0.1
    Epochs: 120
    Layers: 102
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L426
  Weights: http://dl.yf.io/dla/models/imagenet/dla102x2-262837b6.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 79.44%
      Top 5 Accuracy: 94.65%
- Name: dla169
  In Collection: DLA
  Metadata:
    FLOPs: 11598004200
    Parameters: 53390000
    File Size: 216547113
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    Training Resources: 8x GPUs
    ID: dla169
    LR: 0.1
    Epochs: 120
    Layers: 169
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L434
  Weights: http://dl.yf.io/dla/models/imagenet/dla169-0914e092.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.69%
      Top 5 Accuracy: 94.33%
- Name: dla34
  In Collection: DLA
  Metadata:
    FLOPs: 3070105576
    Parameters: 15740000
    File Size: 63228658
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla34
    LR: 0.1
    Epochs: 120
    Layers: 32
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L362
  Weights: http://dl.yf.io/dla/models/imagenet/dla34-ba72cf86.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 74.62%
      Top 5 Accuracy: 92.06%
- Name: dla46_c
  In Collection: DLA
  Metadata:
    FLOPs: 583277288
    Parameters: 1300000
    File Size: 5307963
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla46_c
    LR: 0.1
    Epochs: 120
    Layers: 46
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L369
  Weights: http://dl.yf.io/dla/models/imagenet/dla46_c-2bfd52c3.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 64.87%
      Top 5 Accuracy: 86.29%
- Name: dla46x_c
  In Collection: DLA
  Metadata:
    FLOPs: 544052200
    Parameters: 1070000
    File Size: 4387641
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla46x_c
    LR: 0.1
    Epochs: 120
    Layers: 46
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L378
  Weights: http://dl.yf.io/dla/models/imagenet/dla46x_c-d761bae7.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 65.98%
      Top 5 Accuracy: 86.99%
- Name: dla60
  In Collection: DLA
  Metadata:
    FLOPs: 4256251880
    Parameters: 22040000
    File Size: 89560235
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla60
    LR: 0.1
    Epochs: 120
    Layers: 60
    Dropout: 0.2
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L394
  Weights: http://dl.yf.io/dla/models/imagenet/dla60-24839fc4.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 77.04%
      Top 5 Accuracy: 93.32%
- Name: dla60_res2net
  In Collection: DLA
  Metadata:
    FLOPs: 4147578504
    Parameters: 20850000
    File Size: 84886593
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla60_res2net
    Layers: 60
    Crop Pct: '0.875'
    Image Size: '224'
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L346
  Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net_dla60_4s-d88db7f9.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.46%
      Top 5 Accuracy: 94.21%
- Name: dla60_res2next
  In Collection: DLA
  Metadata:
    FLOPs: 3485335272
    Parameters: 17030000
    File Size: 69639245
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla60_res2next
    Layers: 60
    Crop Pct: '0.875'
    Image Size: '224'
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L354
  Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2next_dla60_4s-d327927b.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.44%
      Top 5 Accuracy: 94.16%
- Name: dla60x
  In Collection: DLA
  Metadata:
    FLOPs: 3544204264
    Parameters: 17350000
    File Size: 70883139
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla60x
    LR: 0.1
    Epochs: 120
    Layers: 60
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L402
  Weights: http://dl.yf.io/dla/models/imagenet/dla60x-d15cacda.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.25%
      Top 5 Accuracy: 94.02%
- Name: dla60x_c
  In Collection: DLA
  Metadata:
    FLOPs: 593325032
    Parameters: 1320000
    File Size: 5454396
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Convolution
    - DLA Bottleneck Residual Block
    - DLA Residual Block
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: dla60x_c
    LR: 0.1
    Epochs: 120
    Layers: 60
    Crop Pct: '0.875'
    Momentum: 0.9
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bilinear
  Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/dla.py#L386
  Weights: http://dl.yf.io/dla/models/imagenet/dla60x_c-b870c45c.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 67.91%
      Top 5 Accuracy: 88.42%
-->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ecaresnet.md ADDED
# ECA-ResNet

An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) that reduces model complexity without dimensionality reduction.
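
In place of the SE block's dimensionality-reducing MLP, ECA applies a fast 1D convolution over the pooled channel descriptor, with a kernel size that adapts to the number of channels. A sketch of that adaptive kernel-size rule from the ECA-Net paper (using its default gamma=2, b=1; illustrative only, not timm's exact code):

```python
import math

# Sketch of ECA's adaptive 1D-conv kernel-size rule (ECA-Net paper defaults
# gamma=2, b=1); illustrative only, not timm's exact implementation. The
# attention is a 1D convolution over channel descriptors, so the kernel
# size k grows logarithmically with the channel count C.

def eca_kernel_size(channels, gamma=2, b=1):
    t = int(abs((math.log2(channels) + b) / gamma))
    return t if t % 2 else t + 1  # kernel size must be odd
```

For example, a 256-channel stage gets a kernel of size 5 under this rule, so each channel's weight depends only on a handful of neighbouring channels, keeping the module's parameter count tiny.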

{% include 'code_snippets.md' %}

## How do I train this model?

You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.

## Citation

```BibTeX
@misc{wang2020ecanet,
      title={ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks},
      author={Qilong Wang and Banggu Wu and Pengfei Zhu and Peihua Li and Wangmeng Zuo and Qinghua Hu},
      year={2020},
      eprint={1910.03151},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

<!--
Type: model-index
Collections:
- Name: ECAResNet
  Paper:
    Title: 'ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks'
    URL: https://paperswithcode.com/paper/eca-net-efficient-channel-attention-for-deep
Models:
- Name: ecaresnet101d
  In Collection: ECAResNet
  Metadata:
    FLOPs: 10377193728
    Parameters: 44570000
    File Size: 178815067
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Bottleneck Residual Block
    - Convolution
    - Efficient Channel Attention
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    - Squeeze-and-Excitation Block
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    Training Resources: 4x RTX 2080Ti GPUs
    ID: ecaresnet101d
    LR: 0.1
    Epochs: 100
    Layers: 101
    Crop Pct: '0.875'
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1087
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet101D_281c5844.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 82.18%
      Top 5 Accuracy: 96.06%
- Name: ecaresnet101d_pruned
  In Collection: ECAResNet
  Metadata:
    FLOPs: 4463972081
    Parameters: 24880000
    File Size: 99852736
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Bottleneck Residual Block
    - Convolution
    - Efficient Channel Attention
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    - Squeeze-and-Excitation Block
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: ecaresnet101d_pruned
    Layers: 101
    Crop Pct: '0.875'
    Image Size: '224'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1097
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45610/outputs/ECAResNet101D_P_75a3370e.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 80.82%
      Top 5 Accuracy: 95.64%
- Name: ecaresnet50d
  In Collection: ECAResNet
  Metadata:
    FLOPs: 5591090432
    Parameters: 25580000
    File Size: 102579290
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Bottleneck Residual Block
    - Convolution
    - Efficient Channel Attention
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    - Squeeze-and-Excitation Block
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    Training Resources: 4x RTX 2080Ti GPUs
    ID: ecaresnet50d
    LR: 0.1
    Epochs: 100
    Layers: 50
    Crop Pct: '0.875'
    Batch Size: 256
    Image Size: '224'
    Weight Decay: 0.0001
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1045
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNet50D_833caf58.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 80.61%
      Top 5 Accuracy: 95.31%
- Name: ecaresnet50d_pruned
  In Collection: ECAResNet
  Metadata:
    FLOPs: 3250730657
    Parameters: 19940000
    File Size: 79990436
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Bottleneck Residual Block
    - Convolution
    - Efficient Channel Attention
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    - Squeeze-and-Excitation Block
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: ecaresnet50d_pruned
    Layers: 50
    Crop Pct: '0.875'
    Image Size: '224'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1055
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45899/outputs/ECAResNet50D_P_9c67f710.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 79.71%
      Top 5 Accuracy: 94.88%
- Name: ecaresnetlight
  In Collection: ECAResNet
  Metadata:
    FLOPs: 5276118784
    Parameters: 30160000
    File Size: 120956612
    Architecture:
    - 1x1 Convolution
    - Batch Normalization
    - Bottleneck Residual Block
    - Convolution
    - Efficient Channel Attention
    - Global Average Pooling
    - Max Pooling
    - ReLU
    - Residual Block
    - Residual Connection
    - Softmax
    - Squeeze-and-Excitation Block
    Tasks:
    - Image Classification
    Training Techniques:
    - SGD with Momentum
    - Weight Decay
    Training Data:
    - ImageNet
    ID: ecaresnetlight
    Crop Pct: '0.875'
    Image Size: '224'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1077
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45402/outputs/ECAResNetLight_4f34b35b.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 80.46%
      Top 5 Accuracy: 95.25%
-->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/efficientnet-pruned.md ADDED
# EfficientNet (Knapsack Pruned)

**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use $2^N$ times more computational resources, then we can simply increase the network depth by $\alpha ^ N$, width by $\beta ^ N$, and image size by $\gamma ^ N$, where $\alpha, \beta, \gamma$ are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient $\phi$ to uniformly scale network width, depth, and resolution in a principled way.

The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.

The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block).

This collection consists of pruned EfficientNet models.

{% include 'code_snippets.md' %}

## How do I train this model?

You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.

## Citation

```BibTeX
@misc{tan2020efficientnet,
      title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
      author={Mingxing Tan and Quoc V. Le},
      year={2020},
      eprint={1905.11946},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

```BibTeX
@misc{aflalo2020knapsack,
      title={Knapsack Pruning with Inner Distillation},
      author={Yonathan Aflalo and Asaf Noy and Ming Lin and Itamar Friedman and Lihi Zelnik},
      year={2020},
      eprint={2002.08258},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

<!--
Type: model-index
Collections:
- Name: EfficientNet Pruned
  Paper:
    Title: Knapsack Pruning with Inner Distillation
    URL: https://paperswithcode.com/paper/knapsack-pruning-with-inner-distillation
Models:
- Name: efficientnet_b1_pruned
  In Collection: EfficientNet Pruned
  Metadata:
    FLOPs: 489653114
    Parameters: 6330000
    File Size: 25595162
    Architecture:
    - 1x1 Convolution
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inverted Residual Block
    - Squeeze-and-Excitation Block
    - Swish
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: efficientnet_b1_pruned
    Crop Pct: '0.882'
    Image Size: '240'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1208
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45403/outputs/effnetb1_pruned_9ebb3fe6.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.25%
      Top 5 Accuracy: 93.84%
- Name: efficientnet_b2_pruned
  In Collection: EfficientNet Pruned
  Metadata:
    FLOPs: 878133915
    Parameters: 8310000
    File Size: 33555005
    Architecture:
    - 1x1 Convolution
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inverted Residual Block
    - Squeeze-and-Excitation Block
    - Swish
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: efficientnet_b2_pruned
    Crop Pct: '0.89'
    Image Size: '260'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1219
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45403/outputs/effnetb2_pruned_203f55bc.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 79.91%
      Top 5 Accuracy: 94.86%
- Name: efficientnet_b3_pruned
  In Collection: EfficientNet Pruned
  Metadata:
    FLOPs: 1239590641
    Parameters: 9860000
    File Size: 39770812
    Architecture:
    - 1x1 Convolution
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inverted Residual Block
    - Squeeze-and-Excitation Block
    - Swish
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: efficientnet_b3_pruned
    Crop Pct: '0.904'
    Image Size: '300'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1230
  Weights: https://imvl-automl-sh.oss-cn-shanghai.aliyuncs.com/darts/hyperml/hyperml/job_45403/outputs/effnetb3_pruned_5abcc29f.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 80.86%
      Top 5 Accuracy: 95.24%
-->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/efficientnet.md ADDED
# EfficientNet

**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network width, depth, and resolution with a set of fixed scaling coefficients. For example, if we want to use $2^N$ times more computational resources, then we can simply increase the network depth by $\alpha ^ N$, width by $\beta ^ N$, and image size by $\gamma ^ N$, where $\alpha, \beta, \gamma$ are constant coefficients determined by a small grid search on the original small model. EfficientNet uses a compound coefficient $\phi$ to uniformly scale network width, depth, and resolution in a principled way.

The compound scaling method is justified by the intuition that if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image.
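
As a concrete illustration, with the paper's coefficients for the B0 baseline (alpha=1.2, beta=1.1, gamma=1.15, found by grid search under the constraint that alpha * beta^2 * gamma^2 is roughly 2), choosing a single compound coefficient phi scales all three dimensions together. The sketch below is illustrative arithmetic only, not timm code:

```python
# Sketch of compound scaling using the EfficientNet paper's base coefficients
# (alpha=1.2, beta=1.1, gamma=1.15); illustrative only, not timm code.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_resolution=224):
    depth_mult = ALPHA ** phi        # multiplier on the number of layers
    width_mult = BETA ** phi         # multiplier on channel counts
    resolution = round(base_resolution * GAMMA ** phi)
    return depth_mult, width_mult, resolution

# Increasing phi by 1 costs roughly alpha * beta**2 * gamma**2 ~ 2x the FLOPs.
```

For phi = 1, this gives a depth multiplier of 1.2, a width multiplier of 1.1, and an input resolution of about 258, so one extra unit of phi roughly doubles the compute budget.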

The base EfficientNet-B0 network is based on the inverted bottleneck residual blocks of [MobileNetV2](https://paperswithcode.com/method/mobilenetv2), in addition to [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block).

{% include 'code_snippets.md' %}

## How do I train this model?

You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.

## Citation

```BibTeX
@misc{tan2020efficientnet,
      title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
      author={Mingxing Tan and Quoc V. Le},
      year={2020},
      eprint={1905.11946},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

<!--
Type: model-index
Collections:
- Name: EfficientNet
  Paper:
    Title: 'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks'
    URL: https://paperswithcode.com/paper/efficientnet-rethinking-model-scaling-for
Models:
- Name: efficientnet_b0
  In Collection: EfficientNet
  Metadata:
    FLOPs: 511241564
    Parameters: 5290000
    File Size: 21376743
    Architecture:
    - 1x1 Convolution
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inverted Residual Block
    - Squeeze-and-Excitation Block
    - Swish
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: efficientnet_b0
    Layers: 18
    Crop Pct: '0.875'
    Image Size: '224'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1002
  Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b0_ra-3dd342df.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 77.71%
      Top 5 Accuracy: 93.52%
- Name: efficientnet_b1
  In Collection: EfficientNet
  Metadata:
    FLOPs: 909691920
    Parameters: 7790000
    File Size: 31502706
    Architecture:
    - 1x1 Convolution
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inverted Residual Block
    - Squeeze-and-Excitation Block
    - Swish
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: efficientnet_b1
    Crop Pct: '0.875'
    Image Size: '240'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1011
  Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b1-533bc792.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 78.71%
      Top 5 Accuracy: 94.15%
- Name: efficientnet_b2
  In Collection: EfficientNet
  Metadata:
    FLOPs: 1265324514
    Parameters: 9110000
    File Size: 36788104
    Architecture:
    - 1x1 Convolution
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inverted Residual Block
    - Squeeze-and-Excitation Block
    - Swish
    Tasks:
    - Image Classification
    Training Data:
    - ImageNet
    ID: efficientnet_b2
    Crop Pct: '0.875'
    Image Size: '260'
    Interpolation: bicubic
  Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1020
  Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b2_ra-bcdf34b7.pth
  Results:
  - Task: Image Classification
    Dataset: ImageNet
    Metrics:
      Top 1 Accuracy: 80.38%
      Top 5 Accuracy: 95.08%
- Name: efficientnet_b2a
  In Collection: EfficientNet
  Metadata:
    FLOPs: 1452041554
    Parameters: 9110000
    File Size: 49369973
    Architecture:
    - 1x1 Convolution
    - Average Pooling
    - Batch Normalization
    - Convolution
    - Dense Connections
    - Dropout
    - Inverted Residual Block
147
+ - Squeeze-and-Excitation Block
148
+ - Swish
149
+ Tasks:
150
+ - Image Classification
151
+ Training Data:
152
+ - ImageNet
153
+ ID: efficientnet_b2a
154
+ Crop Pct: '1.0'
155
+ Image Size: '288'
156
+ Interpolation: bicubic
157
+ Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1029
158
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b3_ra2-cf984f9c.pth
159
+ Results:
160
+ - Task: Image Classification
161
+ Dataset: ImageNet
162
+ Metrics:
163
+ Top 1 Accuracy: 80.61%
164
+ Top 5 Accuracy: 95.32%
165
+ - Name: efficientnet_b3
166
+ In Collection: EfficientNet
167
+ Metadata:
168
+ FLOPs: 2327905920
169
+ Parameters: 12230000
170
+ File Size: 49369973
171
+ Architecture:
172
+ - 1x1 Convolution
173
+ - Average Pooling
174
+ - Batch Normalization
175
+ - Convolution
176
+ - Dense Connections
177
+ - Dropout
178
+ - Inverted Residual Block
179
+ - Squeeze-and-Excitation Block
180
+ - Swish
181
+ Tasks:
182
+ - Image Classification
183
+ Training Data:
184
+ - ImageNet
185
+ ID: efficientnet_b3
186
+ Crop Pct: '0.904'
187
+ Image Size: '300'
188
+ Interpolation: bicubic
189
+ Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1038
190
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b3_ra2-cf984f9c.pth
191
+ Results:
192
+ - Task: Image Classification
193
+ Dataset: ImageNet
194
+ Metrics:
195
+ Top 1 Accuracy: 82.08%
196
+ Top 5 Accuracy: 96.03%
197
+ - Name: efficientnet_b3a
198
+ In Collection: EfficientNet
199
+ Metadata:
200
+ FLOPs: 2600628304
201
+ Parameters: 12230000
202
+ File Size: 49369973
203
+ Architecture:
204
+ - 1x1 Convolution
205
+ - Average Pooling
206
+ - Batch Normalization
207
+ - Convolution
208
+ - Dense Connections
209
+ - Dropout
210
+ - Inverted Residual Block
211
+ - Squeeze-and-Excitation Block
212
+ - Swish
213
+ Tasks:
214
+ - Image Classification
215
+ Training Data:
216
+ - ImageNet
217
+ ID: efficientnet_b3a
218
+ Crop Pct: '1.0'
219
+ Image Size: '320'
220
+ Interpolation: bicubic
221
+ Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1047
222
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_b3_ra2-cf984f9c.pth
223
+ Results:
224
+ - Task: Image Classification
225
+ Dataset: ImageNet
226
+ Metrics:
227
+ Top 1 Accuracy: 82.25%
228
+ Top 5 Accuracy: 96.11%
229
+ - Name: efficientnet_em
230
+ In Collection: EfficientNet
231
+ Metadata:
232
+ FLOPs: 3935516480
233
+ Parameters: 6900000
234
+ File Size: 27927309
235
+ Architecture:
236
+ - 1x1 Convolution
237
+ - Average Pooling
238
+ - Batch Normalization
239
+ - Convolution
240
+ - Dense Connections
241
+ - Dropout
242
+ - Inverted Residual Block
243
+ - Squeeze-and-Excitation Block
244
+ - Swish
245
+ Tasks:
246
+ - Image Classification
247
+ Training Data:
248
+ - ImageNet
249
+ ID: efficientnet_em
250
+ Crop Pct: '0.882'
251
+ Image Size: '240'
252
+ Interpolation: bicubic
253
+ Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1118
254
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_em_ra2-66250f76.pth
255
+ Results:
256
+ - Task: Image Classification
257
+ Dataset: ImageNet
258
+ Metrics:
259
+ Top 1 Accuracy: 79.26%
260
+ Top 5 Accuracy: 94.79%
261
+ - Name: efficientnet_es
262
+ In Collection: EfficientNet
263
+ Metadata:
264
+ FLOPs: 2317181824
265
+ Parameters: 5440000
266
+ File Size: 22003339
267
+ Architecture:
268
+ - 1x1 Convolution
269
+ - Average Pooling
270
+ - Batch Normalization
271
+ - Convolution
272
+ - Dense Connections
273
+ - Dropout
274
+ - Inverted Residual Block
275
+ - Squeeze-and-Excitation Block
276
+ - Swish
277
+ Tasks:
278
+ - Image Classification
279
+ Training Data:
280
+ - ImageNet
281
+ ID: efficientnet_es
282
+ Crop Pct: '0.875'
283
+ Image Size: '224'
284
+ Interpolation: bicubic
285
+ Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1110
286
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_es_ra-f111e99c.pth
287
+ Results:
288
+ - Task: Image Classification
289
+ Dataset: ImageNet
290
+ Metrics:
291
+ Top 1 Accuracy: 78.09%
292
+ Top 5 Accuracy: 93.93%
293
+ - Name: efficientnet_lite0
294
+ In Collection: EfficientNet
295
+ Metadata:
296
+ FLOPs: 510605024
297
+ Parameters: 4650000
298
+ File Size: 18820005
299
+ Architecture:
300
+ - 1x1 Convolution
301
+ - Average Pooling
302
+ - Batch Normalization
303
+ - Convolution
304
+ - Dense Connections
305
+ - Dropout
306
+ - Inverted Residual Block
307
+ - Squeeze-and-Excitation Block
308
+ - Swish
309
+ Tasks:
310
+ - Image Classification
311
+ Training Data:
312
+ - ImageNet
313
+ ID: efficientnet_lite0
314
+ Crop Pct: '0.875'
315
+ Image Size: '224'
316
+ Interpolation: bicubic
317
+ Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/efficientnet.py#L1163
318
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/efficientnet_lite0_ra-37913777.pth
319
+ Results:
320
+ - Task: Image Classification
321
+ Dataset: ImageNet
322
+ Metrics:
323
+ Top 1 Accuracy: 75.5%
324
+ Top 5 Accuracy: 92.51%
325
+ -->
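The compound scaling rule from the cited paper can be sketched in plain Python. The coefficients `ALPHA`, `BETA`, `GAMMA` below are the values reported for EfficientNet-B0 in the paper; `phi` is the user-chosen compound coefficient (this is an illustrative sketch, not timm's implementation):

```python
# Compound scaling: depth, width and resolution are scaled jointly
# by a single coefficient phi, instead of being tuned independently.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # grid-searched on B0 in the paper

def compound_scaling(phi):
    """Return (depth, width, resolution) multipliers for a given phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# The paper constrains alpha * beta^2 * gamma^2 ~= 2, so increasing phi
# by one roughly doubles the FLOPs budget.
flops_factor = ALPHA * BETA ** 2 * GAMMA ** 2
print(compound_scaling(1.0))   # multipliers at phi = 1 (EfficientNet-B1)
print(round(flops_factor, 2))  # close to the target of 2
```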
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ensemble-adversarial.md ADDED
@@ -0,0 +1,98 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Ensemble Adversarial Inception ResNet v2
2
+
3
+ **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture).
4
+
5
+ This particular model was trained for the study of adversarial examples (adversarial training).
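An adversarial example perturbs the input to maximise the model's loss. As a hedged, library-free illustration of the idea (a toy 1-D logistic model with an analytic gradient, not the attack used in the competition paper below):

```python
import math

# FGSM-style sketch on p = sigmoid(w * x) with label y = 1.
# The adversarial input nudges x by epsilon in the direction that
# *increases* the loss: x_adv = x + epsilon * sign(dL/dx).
w, x, y, eps = 2.0, 1.0, 1.0, 0.25

def loss(x_val):
    p = 1.0 / (1.0 + math.exp(-w * x_val))
    return -math.log(p)  # cross-entropy for y = 1

# dL/dx = (p - y) * w, derived analytically for this tiny model.
p = 1.0 / (1.0 + math.exp(-w * x))
grad_x = (p - y) * w
x_adv = x + eps * math.copysign(1.0, grad_x)

print(loss(x), loss(x_adv))  # the adversarial input has strictly higher loss
```

Adversarial training then mixes such perturbed inputs into the training batches so the model learns to resist them.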
6
+
7
+ The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).
8
+
9
+ {% include 'code_snippets.md' %}
10
+
11
+ ## How do I train this model?
12
+
13
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
14
+
15
+ ## Citation
16
+
17
+ ```BibTeX
18
+ @article{DBLP:journals/corr/abs-1804-00097,
19
+ author = {Alexey Kurakin and
20
+ Ian J. Goodfellow and
21
+ Samy Bengio and
22
+ Yinpeng Dong and
23
+ Fangzhou Liao and
24
+ Ming Liang and
25
+ Tianyu Pang and
26
+ Jun Zhu and
27
+ Xiaolin Hu and
28
+ Cihang Xie and
29
+ Jianyu Wang and
30
+ Zhishuai Zhang and
31
+ Zhou Ren and
32
+ Alan L. Yuille and
33
+ Sangxia Huang and
34
+ Yao Zhao and
35
+ Yuzhe Zhao and
36
+ Zhonglin Han and
37
+ Junjiajia Long and
38
+ Yerkebulan Berdibekov and
39
+ Takuya Akiba and
40
+ Seiya Tokui and
41
+ Motoki Abe},
42
+ title = {Adversarial Attacks and Defences Competition},
43
+ journal = {CoRR},
44
+ volume = {abs/1804.00097},
45
+ year = {2018},
46
+ url = {http://arxiv.org/abs/1804.00097},
47
+ archivePrefix = {arXiv},
48
+ eprint = {1804.00097},
49
+ timestamp = {Thu, 31 Oct 2019 16:31:22 +0100},
50
+ biburl = {https://dblp.org/rec/journals/corr/abs-1804-00097.bib},
51
+ bibsource = {dblp computer science bibliography, https://dblp.org}
52
+ }
53
+ ```
54
+
55
+ <!--
56
+ Type: model-index
57
+ Collections:
58
+ - Name: Ensemble Adversarial
59
+ Paper:
60
+ Title: Adversarial Attacks and Defences Competition
61
+ URL: https://paperswithcode.com/paper/adversarial-attacks-and-defences-competition
62
+ Models:
63
+ - Name: ens_adv_inception_resnet_v2
64
+ In Collection: Ensemble Adversarial
65
+ Metadata:
66
+ FLOPs: 16959133120
67
+ Parameters: 55850000
68
+ File Size: 223774238
69
+ Architecture:
70
+ - 1x1 Convolution
71
+ - Auxiliary Classifier
72
+ - Average Pooling
73
+ - Average Pooling
74
+ - Batch Normalization
75
+ - Convolution
76
+ - Dense Connections
77
+ - Dropout
78
+ - Inception-v3 Module
79
+ - Max Pooling
80
+ - ReLU
81
+ - Softmax
82
+ Tasks:
83
+ - Image Classification
84
+ Training Data:
85
+ - ImageNet
86
+ ID: ens_adv_inception_resnet_v2
87
+ Crop Pct: '0.897'
88
+ Image Size: '299'
89
+ Interpolation: bicubic
90
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_resnet_v2.py#L351
91
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ens_adv_inception_resnet_v2-2592a550.pth
92
+ Results:
93
+ - Task: Image Classification
94
+ Dataset: ImageNet
95
+ Metrics:
96
+ Top 1 Accuracy: 1.0%
97
+ Top 5 Accuracy: 17.32%
98
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ese-vovnet.md ADDED
@@ -0,0 +1,92 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # ESE-VoVNet
2
+
3
+ **VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once, in the last feature map, which keeps each layer's input size constant and makes it possible to enlarge the output channels.
4
+
5
+ Read about [one-shot aggregation here](https://paperswithcode.com/method/one-shot-aggregation).
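The efficiency argument can be made concrete with simple channel arithmetic (a hedged sketch with illustrative channel counts, not the paper's exact configuration): dense aggregation feeds every earlier feature map into every layer, so input width grows linearly with depth, whereas one-shot aggregation keeps each layer's input fixed and concatenates only once at the end:

```python
# Input channel width per layer: DenseNet-style dense aggregation vs
# VoVNet-style one-shot aggregation (OSA). Numbers are illustrative only.
c_in, growth, n_layers = 64, 32, 5

# Dense aggregation: layer i sees the stem plus all i earlier outputs.
dense_inputs = [c_in + growth * i for i in range(n_layers)]

# OSA: each layer sees only the previous layer's output (constant width);
# everything is concatenated a single time at the end of the block.
osa_inputs = [c_in if i == 0 else growth for i in range(n_layers)]
osa_concat = c_in + growth * n_layers  # the one-shot concatenation

print(dense_inputs)  # grows every layer
print(osa_inputs)    # stays constant after the stem
print(osa_concat)
```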
6
+
7
+ {% include 'code_snippets.md' %}
8
+
9
+ ## How do I train this model?
10
+
11
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
12
+
13
+ ## Citation
14
+
15
+ ```BibTeX
16
+ @misc{lee2019energy,
17
+ title={An Energy and GPU-Computation Efficient Backbone Network for Real-Time Object Detection},
18
+ author={Youngwan Lee and Joong-won Hwang and Sangrok Lee and Yuseok Bae and Jongyoul Park},
19
+ year={2019},
20
+ eprint={1904.09730},
21
+ archivePrefix={arXiv},
22
+ primaryClass={cs.CV}
23
+ }
24
+ ```
25
+
26
+ <!--
27
+ Type: model-index
28
+ Collections:
29
+ - Name: ESE VovNet
30
+ Paper:
31
+ Title: 'CenterMask : Real-Time Anchor-Free Instance Segmentation'
32
+ URL: https://paperswithcode.com/paper/centermask-real-time-anchor-free-instance-1
33
+ Models:
34
+ - Name: ese_vovnet19b_dw
35
+ In Collection: ESE VovNet
36
+ Metadata:
37
+ FLOPs: 1711959904
38
+ Parameters: 6540000
39
+ File Size: 26243175
40
+ Architecture:
41
+ - Batch Normalization
42
+ - Convolution
43
+ - Max Pooling
44
+ - One-Shot Aggregation
45
+ - ReLU
46
+ Tasks:
47
+ - Image Classification
48
+ Training Data:
49
+ - ImageNet
50
+ ID: ese_vovnet19b_dw
51
+ Layers: 19
52
+ Crop Pct: '0.875'
53
+ Image Size: '224'
54
+ Interpolation: bicubic
55
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/vovnet.py#L361
56
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ese_vovnet19b_dw-a8741004.pth
57
+ Results:
58
+ - Task: Image Classification
59
+ Dataset: ImageNet
60
+ Metrics:
61
+ Top 1 Accuracy: 76.82%
62
+ Top 5 Accuracy: 93.28%
63
+ - Name: ese_vovnet39b
64
+ In Collection: ESE VovNet
65
+ Metadata:
66
+ FLOPs: 9089259008
67
+ Parameters: 24570000
68
+ File Size: 98397138
69
+ Architecture:
70
+ - Batch Normalization
71
+ - Convolution
72
+ - Max Pooling
73
+ - One-Shot Aggregation
74
+ - ReLU
75
+ Tasks:
76
+ - Image Classification
77
+ Training Data:
78
+ - ImageNet
79
+ ID: ese_vovnet39b
80
+ Layers: 39
81
+ Crop Pct: '0.875'
82
+ Image Size: '224'
83
+ Interpolation: bicubic
84
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/vovnet.py#L371
85
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/ese_vovnet39b-f912fe73.pth
86
+ Results:
87
+ - Task: Image Classification
88
+ Dataset: ImageNet
89
+ Metrics:
90
+ Top 1 Accuracy: 79.31%
91
+ Top 5 Accuracy: 94.72%
92
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/fbnet.md ADDED
@@ -0,0 +1,76 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # FBNet
2
+
3
+ **FBNet** is a type of convolutional neural architecture discovered through [DNAS](https://paperswithcode.com/method/dnas) neural architecture search. It utilises a basic image model block inspired by [MobileNetv2](https://paperswithcode.com/method/mobilenetv2) that employs depthwise convolutions and an inverted residual structure (see components).
4
+
5
+ The principal building block is the [FBNet Block](https://paperswithcode.com/method/fbnet-block).
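DNAS relaxes the discrete choice of block into a softmax-weighted sum of candidate operations, so the architecture parameters can be trained by gradient descent alongside the weights. A hedged, stdlib-only sketch of that relaxation (toy scalar ops stand in for the real candidate conv blocks):

```python
import math

# Differentiable NAS relaxation: output = sum_i softmax(theta)_i * op_i(x).
ops = [lambda x: x * 0.5, lambda x: x * 2.0, lambda x: x + 1.0]
theta = [0.1, 2.0, 0.3]  # architecture parameters; learned during search

def softmax(v):
    m = max(v)  # subtract the max for numerical stability
    e = [math.exp(t - m) for t in v]
    s = sum(e)
    return [t / s for t in e]

def mixed_op(x):
    weights = softmax(theta)
    return sum(wi * op(x) for wi, op in zip(weights, ops))

# After search, the candidate with the largest theta is kept
# as the concrete block in the final network.
best = max(range(len(theta)), key=theta.__getitem__)
print(mixed_op(1.0), best)  # best == 1
```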
6
+
7
+ {% include 'code_snippets.md' %}
8
+
9
+ ## How do I train this model?
10
+
11
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
12
+
13
+ ## Citation
14
+
15
+ ```BibTeX
16
+ @misc{wu2019fbnet,
17
+ title={FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search},
18
+ author={Bichen Wu and Xiaoliang Dai and Peizhao Zhang and Yanghan Wang and Fei Sun and Yiming Wu and Yuandong Tian and Peter Vajda and Yangqing Jia and Kurt Keutzer},
19
+ year={2019},
20
+ eprint={1812.03443},
21
+ archivePrefix={arXiv},
22
+ primaryClass={cs.CV}
23
+ }
24
+ ```
25
+
26
+ <!--
27
+ Type: model-index
28
+ Collections:
29
+ - Name: FBNet
30
+ Paper:
31
+ Title: 'FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural
32
+ Architecture Search'
33
+ URL: https://paperswithcode.com/paper/fbnet-hardware-aware-efficient-convnet-design
34
+ Models:
35
+ - Name: fbnetc_100
36
+ In Collection: FBNet
37
+ Metadata:
38
+ FLOPs: 508940064
39
+ Parameters: 5570000
40
+ File Size: 22525094
41
+ Architecture:
42
+ - 1x1 Convolution
43
+ - Convolution
44
+ - Dense Connections
45
+ - Dropout
46
+ - FBNet Block
47
+ - Global Average Pooling
48
+ - Softmax
49
+ Tasks:
50
+ - Image Classification
51
+ Training Techniques:
52
+ - SGD with Momentum
53
+ - Weight Decay
54
+ Training Data:
55
+ - ImageNet
56
+ Training Resources: 8x GPUs
57
+ ID: fbnetc_100
58
+ LR: 0.1
59
+ Epochs: 360
60
+ Layers: 22
61
+ Dropout: 0.2
62
+ Crop Pct: '0.875'
63
+ Momentum: 0.9
64
+ Batch Size: 256
65
+ Image Size: '224'
66
+ Weight Decay: 0.0005
67
+ Interpolation: bilinear
68
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L985
69
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/fbnetc_100-c345b898.pth
70
+ Results:
71
+ - Task: Image Classification
72
+ Dataset: ImageNet
73
+ Metrics:
74
+ Top 1 Accuracy: 75.12%
75
+ Top 5 Accuracy: 92.37%
76
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-inception-v3.md ADDED
@@ -0,0 +1,78 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # (Gluon) Inception v3
2
+
3
+ **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module).
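Of these improvements, label smoothing is simple to state exactly: the one-hot target is mixed with a uniform distribution over the K classes, q_i = (1 - eps) * [i = y] + eps / K. A minimal sketch:

```python
def smooth_labels(y, num_classes, eps=0.1):
    """Label smoothing: mix the one-hot target for class y with a
    uniform distribution over all classes."""
    uniform = eps / num_classes
    return [1.0 - eps + uniform if i == y else uniform
            for i in range(num_classes)]

q = smooth_labels(y=2, num_classes=5, eps=0.1)
print(q)       # roughly [0.02, 0.02, 0.92, 0.02, 0.02]
print(sum(q))  # still a valid distribution: sums to 1 (up to float rounding)
```

This keeps the model from becoming over-confident on the training labels, which the paper reports as a small but consistent regularization gain.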
4
+
5
+ The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).
6
+
7
+ {% include 'code_snippets.md' %}
8
+
9
+ ## How do I train this model?
10
+
11
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
12
+
13
+ ## Citation
14
+
15
+ ```BibTeX
16
+ @article{DBLP:journals/corr/SzegedyVISW15,
17
+ author = {Christian Szegedy and
18
+ Vincent Vanhoucke and
19
+ Sergey Ioffe and
20
+ Jonathon Shlens and
21
+ Zbigniew Wojna},
22
+ title = {Rethinking the Inception Architecture for Computer Vision},
23
+ journal = {CoRR},
24
+ volume = {abs/1512.00567},
25
+ year = {2015},
26
+ url = {http://arxiv.org/abs/1512.00567},
27
+ archivePrefix = {arXiv},
28
+ eprint = {1512.00567},
29
+ timestamp = {Mon, 13 Aug 2018 16:49:07 +0200},
30
+ biburl = {https://dblp.org/rec/journals/corr/SzegedyVISW15.bib},
31
+ bibsource = {dblp computer science bibliography, https://dblp.org}
32
+ }
33
+ ```
34
+
35
+ <!--
36
+ Type: model-index
37
+ Collections:
38
+ - Name: Gloun Inception v3
39
+ Paper:
40
+ Title: Rethinking the Inception Architecture for Computer Vision
41
+ URL: https://paperswithcode.com/paper/rethinking-the-inception-architecture-for
42
+ Models:
43
+ - Name: gluon_inception_v3
44
+ In Collection: Gloun Inception v3
45
+ Metadata:
46
+ FLOPs: 7352418880
47
+ Parameters: 23830000
48
+ File Size: 95567055
49
+ Architecture:
50
+ - 1x1 Convolution
51
+ - Auxiliary Classifier
52
+ - Average Pooling
53
+ - Average Pooling
54
+ - Batch Normalization
55
+ - Convolution
56
+ - Dense Connections
57
+ - Dropout
58
+ - Inception-v3 Module
59
+ - Max Pooling
60
+ - ReLU
61
+ - Softmax
62
+ Tasks:
63
+ - Image Classification
64
+ Training Data:
65
+ - ImageNet
66
+ ID: gluon_inception_v3
67
+ Crop Pct: '0.875'
68
+ Image Size: '299'
69
+ Interpolation: bicubic
70
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L464
71
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_inception_v3-9f746940.pth
72
+ Results:
73
+ - Task: Image Classification
74
+ Dataset: ImageNet
75
+ Metrics:
76
+ Top 1 Accuracy: 78.8%
77
+ Top 5 Accuracy: 94.38%
78
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-resnet.md ADDED
@@ -0,0 +1,504 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # (Gluon) ResNet
2
+
3
+ **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form networks: e.g. a ResNet-50 has fifty layers using these blocks.
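The residual reformulation, y = F(x) + x, can be sketched without any framework; `block` below is a hypothetical stand-in for the conv/BN/ReLU stack inside a real residual block:

```python
# Residual learning sketch: instead of asking a block to learn the full
# mapping H(x), it learns the residual F(x) = H(x) - x, and the layer
# outputs F(x) + x via a skip connection.
def residual_layer(block, x):
    return block(x) + x

# If the optimal mapping is close to identity, F only needs to be small:
# driving the block's output toward zero recovers the identity exactly,
# which is what makes very deep stacks of these layers trainable.
identity_block = lambda x: 0.0
print(residual_layer(identity_block, 3.5))  # 3.5 -- identity for free
```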
4
+
5
+ The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).
6
+
7
+ {% include 'code_snippets.md' %}
8
+
9
+ ## How do I train this model?
10
+
11
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
12
+
13
+ ## Citation
14
+
15
+ ```BibTeX
16
+ @article{DBLP:journals/corr/HeZRS15,
17
+ author = {Kaiming He and
18
+ Xiangyu Zhang and
19
+ Shaoqing Ren and
20
+ Jian Sun},
21
+ title = {Deep Residual Learning for Image Recognition},
22
+ journal = {CoRR},
23
+ volume = {abs/1512.03385},
24
+ year = {2015},
25
+ url = {http://arxiv.org/abs/1512.03385},
26
+ archivePrefix = {arXiv},
27
+ eprint = {1512.03385},
28
+ timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
29
+ biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
30
+ bibsource = {dblp computer science bibliography, https://dblp.org}
31
+ }
32
+ ```
33
+
34
+ <!--
35
+ Type: model-index
36
+ Collections:
37
+ - Name: Gloun ResNet
38
+ Paper:
39
+ Title: Deep Residual Learning for Image Recognition
40
+ URL: https://paperswithcode.com/paper/deep-residual-learning-for-image-recognition
41
+ Models:
42
+ - Name: gluon_resnet101_v1b
43
+ In Collection: Gloun ResNet
44
+ Metadata:
45
+ FLOPs: 10068547584
46
+ Parameters: 44550000
47
+ File Size: 178723172
48
+ Architecture:
49
+ - 1x1 Convolution
50
+ - Batch Normalization
51
+ - Bottleneck Residual Block
52
+ - Convolution
53
+ - Global Average Pooling
54
+ - Max Pooling
55
+ - ReLU
56
+ - Residual Block
57
+ - Residual Connection
58
+ - Softmax
59
+ Tasks:
60
+ - Image Classification
61
+ Training Data:
62
+ - ImageNet
63
+ ID: gluon_resnet101_v1b
64
+ Crop Pct: '0.875'
65
+ Image Size: '224'
66
+ Interpolation: bicubic
67
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L89
68
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1b-3b017079.pth
69
+ Results:
70
+ - Task: Image Classification
71
+ Dataset: ImageNet
72
+ Metrics:
73
+ Top 1 Accuracy: 79.3%
74
+ Top 5 Accuracy: 94.53%
75
+ - Name: gluon_resnet101_v1c
76
+ In Collection: Gloun ResNet
77
+ Metadata:
78
+ FLOPs: 10376567296
79
+ Parameters: 44570000
80
+ File Size: 178802575
81
+ Architecture:
82
+ - 1x1 Convolution
83
+ - Batch Normalization
84
+ - Bottleneck Residual Block
85
+ - Convolution
86
+ - Global Average Pooling
87
+ - Max Pooling
88
+ - ReLU
89
+ - Residual Block
90
+ - Residual Connection
91
+ - Softmax
92
+ Tasks:
93
+ - Image Classification
94
+ Training Data:
95
+ - ImageNet
96
+ ID: gluon_resnet101_v1c
97
+ Crop Pct: '0.875'
98
+ Image Size: '224'
99
+ Interpolation: bicubic
100
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L113
101
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1c-1f26822a.pth
102
+ Results:
103
+ - Task: Image Classification
104
+ Dataset: ImageNet
105
+ Metrics:
106
+ Top 1 Accuracy: 79.53%
107
+ Top 5 Accuracy: 94.59%
108
+ - Name: gluon_resnet101_v1d
109
+ In Collection: Gloun ResNet
110
+ Metadata:
111
+ FLOPs: 10377018880
112
+ Parameters: 44570000
113
+ File Size: 178802755
114
+ Architecture:
115
+ - 1x1 Convolution
116
+ - Batch Normalization
117
+ - Bottleneck Residual Block
118
+ - Convolution
119
+ - Global Average Pooling
120
+ - Max Pooling
121
+ - ReLU
122
+ - Residual Block
123
+ - Residual Connection
124
+ - Softmax
125
+ Tasks:
126
+ - Image Classification
127
+ Training Data:
128
+ - ImageNet
129
+ ID: gluon_resnet101_v1d
130
+ Crop Pct: '0.875'
131
+ Image Size: '224'
132
+ Interpolation: bicubic
133
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L138
134
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1d-0f9c8644.pth
135
+ Results:
136
+ - Task: Image Classification
137
+ Dataset: ImageNet
138
+ Metrics:
139
+ Top 1 Accuracy: 80.4%
140
+ Top 5 Accuracy: 95.02%
141
+ - Name: gluon_resnet101_v1s
142
+ In Collection: Gloun ResNet
143
+ Metadata:
144
+ FLOPs: 11805511680
145
+ Parameters: 44670000
146
+ File Size: 179221777
147
+ Architecture:
148
+ - 1x1 Convolution
149
+ - Batch Normalization
150
+ - Bottleneck Residual Block
151
+ - Convolution
152
+ - Global Average Pooling
153
+ - Max Pooling
154
+ - ReLU
155
+ - Residual Block
156
+ - Residual Connection
157
+ - Softmax
158
+ Tasks:
159
+ - Image Classification
160
+ Training Data:
161
+ - ImageNet
162
+ ID: gluon_resnet101_v1s
163
+ Crop Pct: '0.875'
164
+ Image Size: '224'
165
+ Interpolation: bicubic
166
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L166
167
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet101_v1s-60fe0cc1.pth
168
+ Results:
169
+ - Task: Image Classification
170
+ Dataset: ImageNet
171
+ Metrics:
172
+ Top 1 Accuracy: 80.29%
173
+ Top 5 Accuracy: 95.16%
174
+ - Name: gluon_resnet152_v1b
175
+ In Collection: Gloun ResNet
176
+ Metadata:
177
+ FLOPs: 14857660416
178
+ Parameters: 60190000
179
+ File Size: 241534001
180
+ Architecture:
181
+ - 1x1 Convolution
182
+ - Batch Normalization
183
+ - Bottleneck Residual Block
184
+ - Convolution
185
+ - Global Average Pooling
186
+ - Max Pooling
187
+ - ReLU
188
+ - Residual Block
189
+ - Residual Connection
190
+ - Softmax
191
+ Tasks:
192
+ - Image Classification
193
+ Training Data:
194
+ - ImageNet
195
+ ID: gluon_resnet152_v1b
196
+ Crop Pct: '0.875'
197
+ Image Size: '224'
198
+ Interpolation: bicubic
199
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L97
200
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1b-c1edb0dd.pth
201
+ Results:
202
+ - Task: Image Classification
203
+ Dataset: ImageNet
204
+ Metrics:
205
+ Top 1 Accuracy: 79.69%
206
+ Top 5 Accuracy: 94.73%
207
+ - Name: gluon_resnet152_v1c
208
+ In Collection: Gloun ResNet
209
+ Metadata:
210
+ FLOPs: 15165680128
211
+ Parameters: 60210000
212
+ File Size: 241613404
213
+ Architecture:
214
+ - 1x1 Convolution
215
+ - Batch Normalization
216
+ - Bottleneck Residual Block
217
+ - Convolution
218
+ - Global Average Pooling
219
+ - Max Pooling
220
+ - ReLU
221
+ - Residual Block
222
+ - Residual Connection
223
+ - Softmax
224
+ Tasks:
225
+ - Image Classification
226
+ Training Data:
227
+ - ImageNet
228
+ ID: gluon_resnet152_v1c
229
+ Crop Pct: '0.875'
230
+ Image Size: '224'
231
+ Interpolation: bicubic
232
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L121
233
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1c-a3bb0b98.pth
234
+ Results:
235
+ - Task: Image Classification
236
+ Dataset: ImageNet
237
+ Metrics:
238
+ Top 1 Accuracy: 79.91%
239
+ Top 5 Accuracy: 94.85%
240
+ - Name: gluon_resnet152_v1d
241
+ In Collection: Gloun ResNet
242
+ Metadata:
243
+ FLOPs: 15166131712
244
+ Parameters: 60210000
245
+ File Size: 241613584
246
+ Architecture:
247
+ - 1x1 Convolution
248
+ - Batch Normalization
249
+ - Bottleneck Residual Block
250
+ - Convolution
251
+ - Global Average Pooling
252
+ - Max Pooling
253
+ - ReLU
254
+ - Residual Block
255
+ - Residual Connection
256
+ - Softmax
257
+ Tasks:
258
+ - Image Classification
259
+ Training Data:
260
+ - ImageNet
261
+ ID: gluon_resnet152_v1d
262
+ Crop Pct: '0.875'
263
+ Image Size: '224'
264
+ Interpolation: bicubic
265
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L147
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1d-bd354e12.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.48%
+ Top 5 Accuracy: 95.2%
+ - Name: gluon_resnet152_v1s
+ In Collection: Gloun ResNet
+ Metadata:
+ FLOPs: 16594624512
+ Parameters: 60320000
+ File Size: 242032606
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnet152_v1s
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L175
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet152_v1s-dcc41b81.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 81.02%
+ Top 5 Accuracy: 95.42%
+ - Name: gluon_resnet18_v1b
+ In Collection: Gloun ResNet
+ Metadata:
+ FLOPs: 2337073152
+ Parameters: 11690000
+ File Size: 46816736
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnet18_v1b
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L65
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet18_v1b-0757602b.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 70.84%
+ Top 5 Accuracy: 89.76%
+ - Name: gluon_resnet34_v1b
+ In Collection: Gloun ResNet
+ Metadata:
+ FLOPs: 4718469120
+ Parameters: 21800000
+ File Size: 87295112
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnet34_v1b
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L73
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet34_v1b-c6d82d59.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 74.59%
+ Top 5 Accuracy: 92.0%
+ - Name: gluon_resnet50_v1b
+ In Collection: Gloun ResNet
+ Metadata:
+ FLOPs: 5282531328
+ Parameters: 25560000
+ File Size: 102493763
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnet50_v1b
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L81
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1b-0ebe02e2.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.58%
+ Top 5 Accuracy: 93.72%
+ - Name: gluon_resnet50_v1c
+ In Collection: Gloun ResNet
+ Metadata:
+ FLOPs: 5590551040
+ Parameters: 25580000
+ File Size: 102573166
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnet50_v1c
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L105
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1c-48092f55.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.01%
+ Top 5 Accuracy: 93.99%
+ - Name: gluon_resnet50_v1d
+ In Collection: Gloun ResNet
+ Metadata:
+ FLOPs: 5591002624
+ Parameters: 25580000
+ File Size: 102573346
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnet50_v1d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L129
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1d-818a1b1b.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.06%
+ Top 5 Accuracy: 94.46%
+ - Name: gluon_resnet50_v1s
+ In Collection: Gloun ResNet
+ Metadata:
+ FLOPs: 7019495424
+ Parameters: 25680000
+ File Size: 102992368
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnet50_v1s
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L156
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnet50_v1s-1762acc0.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.7%
+ Top 5 Accuracy: 94.25%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-resnext.md ADDED
@@ -0,0 +1,142 @@
+ # (Gluon) ResNeXt
+
+ A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width.
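+ Cardinality is implemented with grouped convolutions: the channels of a 3x3 convolution are split into $C$ independent paths of identical topology, which divides its weight count by $C$. A minimal parameter-count sketch (the channel and kernel sizes below are illustrative, not taken from a specific ResNeXt stage):
+
+ ```python
+ def conv2d_params(c_in, c_out, k, groups=1):
+     # Weight count of a k x k convolution; a grouped convolution splits
+     # the input channels into `groups` (cardinality C) independent paths.
+     return (c_in // groups) * k * k * c_out
+
+ dense = conv2d_params(128, 128, 3)               # one wide 3x3 conv: 147456 weights
+ grouped = conv2d_params(128, 128, 3, groups=32)  # 32 parallel paths: 4608 weights
+ assert dense // grouped == 32                    # cardinality C divides the cost
+ ```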
+
+ The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/XieGDTH16,
+ author = {Saining Xie and
+ Ross B. Girshick and
+ Piotr Doll{\'{a}}r and
+ Zhuowen Tu and
+ Kaiming He},
+ title = {Aggregated Residual Transformations for Deep Neural Networks},
+ journal = {CoRR},
+ volume = {abs/1611.05431},
+ year = {2016},
+ url = {http://arxiv.org/abs/1611.05431},
+ archivePrefix = {arXiv},
+ eprint = {1611.05431},
+ timestamp = {Mon, 13 Aug 2018 16:45:58 +0200},
+ biburl = {https://dblp.org/rec/journals/corr/XieGDTH16.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Gloun ResNeXt
+ Paper:
+ Title: Aggregated Residual Transformations for Deep Neural Networks
+ URL: https://paperswithcode.com/paper/aggregated-residual-transformations-for-deep
+ Models:
+ - Name: gluon_resnext101_32x4d
+ In Collection: Gloun ResNeXt
+ Metadata:
+ FLOPs: 10298145792
+ Parameters: 44180000
+ File Size: 177367414
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnext101_32x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L193
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext101_32x4d-b253c8c4.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.33%
+ Top 5 Accuracy: 94.91%
+ - Name: gluon_resnext101_64x4d
+ In Collection: Gloun ResNeXt
+ Metadata:
+ FLOPs: 19954172928
+ Parameters: 83460000
+ File Size: 334737852
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnext101_64x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L201
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext101_64x4d-f9a8e184.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.63%
+ Top 5 Accuracy: 95.0%
+ - Name: gluon_resnext50_32x4d
+ In Collection: Gloun ResNeXt
+ Metadata:
+ FLOPs: 5472648192
+ Parameters: 25030000
+ File Size: 100441719
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_resnext50_32x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L185
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_resnext50_32x4d-e6a097c1.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.35%
+ Top 5 Accuracy: 94.42%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-senet.md ADDED
@@ -0,0 +1,63 @@
+ # (Gluon) SENet
+
+ A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
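+ The recalibration is simple: global-average-pool each channel ("squeeze"), pass the resulting vector through a two-layer bottleneck with a sigmoid ("excitation"), and rescale the channels by the resulting gates. A minimal NumPy sketch, assuming illustrative shapes and a reduction ratio of 4 (the paper uses 16):
+
+ ```python
+ import numpy as np
+
+ def se_block(x, w1, w2):
+     # x: (C, H, W) feature map
+     z = x.mean(axis=(1, 2))            # squeeze: global average pool -> (C,)
+     s = np.maximum(w1 @ z, 0)          # excitation FC1 + ReLU -> (C // r,)
+     s = 1 / (1 + np.exp(-(w2 @ s)))    # excitation FC2 + sigmoid -> gates in (0, 1)
+     return x * s[:, None, None]        # recalibrate: rescale each channel
+
+ rng = np.random.default_rng(0)
+ C, r = 16, 4
+ x = rng.standard_normal((C, 8, 8))
+ w1 = rng.standard_normal((C // r, C)) * 0.1   # hypothetical weights
+ w2 = rng.standard_normal((C, C // r)) * 0.1
+ y = se_block(x, w1, w2)
+ assert y.shape == x.shape             # gating preserves the feature-map shape
+ ```
+
+ Because the gates are sigmoid outputs in (0, 1), the block can only attenuate channels, never amplify them.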
+
+ The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{hu2019squeezeandexcitation,
+ title={Squeeze-and-Excitation Networks},
+ author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
+ year={2019},
+ eprint={1709.01507},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Gloun SENet
+ Paper:
+ Title: Squeeze-and-Excitation Networks
+ URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
+ Models:
+ - Name: gluon_senet154
+ In Collection: Gloun SENet
+ Metadata:
+ FLOPs: 26681705136
+ Parameters: 115090000
+ File Size: 461546622
+ Architecture:
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_senet154
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L239
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_senet154-70a1a3c0.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 81.23%
+ Top 5 Accuracy: 95.35%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-seresnext.md ADDED
@@ -0,0 +1,136 @@
+ # (Gluon) SE-ResNeXt
+
+ **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
+
+ The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{hu2019squeezeandexcitation,
+ title={Squeeze-and-Excitation Networks},
+ author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
+ year={2019},
+ eprint={1709.01507},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Gloun SEResNeXt
+ Paper:
+ Title: Squeeze-and-Excitation Networks
+ URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
+ Models:
+ - Name: gluon_seresnext101_32x4d
+ In Collection: Gloun SEResNeXt
+ Metadata:
+ FLOPs: 10302923504
+ Parameters: 48960000
+ File Size: 196505510
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_seresnext101_32x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L219
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext101_32x4d-cf52900d.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.87%
+ Top 5 Accuracy: 95.29%
+ - Name: gluon_seresnext101_64x4d
+ In Collection: Gloun SEResNeXt
+ Metadata:
+ FLOPs: 19958950640
+ Parameters: 88230000
+ File Size: 353875948
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_seresnext101_64x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L229
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext101_64x4d-f9926f93.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.88%
+ Top 5 Accuracy: 95.31%
+ - Name: gluon_seresnext50_32x4d
+ In Collection: Gloun SEResNeXt
+ Metadata:
+ FLOPs: 5475179184
+ Parameters: 27560000
+ File Size: 110578827
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_seresnext50_32x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_resnet.py#L209
+ Weights: https://github.com/rwightman/pytorch-pretrained-gluonresnet/releases/download/v0.1/gluon_seresnext50_32x4d-90cf2d6e.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.92%
+ Top 5 Accuracy: 94.82%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/gloun-xception.md ADDED
@@ -0,0 +1,66 @@
+ # (Gluon) Xception
+
+ **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers.
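+ A depthwise separable convolution factors a standard convolution into a per-channel k x k depthwise step followed by a 1x1 pointwise projection, which sharply reduces the weight count. A minimal parameter-count sketch (channel and kernel sizes below are illustrative):
+
+ ```python
+ def standard_conv_params(c_in, c_out, k):
+     # one dense k x k convolution mixing all channels at once
+     return k * k * c_in * c_out
+
+ def separable_conv_params(c_in, c_out, k):
+     # depthwise k x k per input channel, then a 1x1 pointwise projection
+     return k * k * c_in + c_in * c_out
+
+ dense = standard_conv_params(256, 256, 3)     # 589824 weights
+ separable = separable_conv_params(256, 256, 3)  # 67840 weights, ~8.7x fewer
+ assert separable < dense
+ ```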
+
+ The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{chollet2017xception,
+ title={Xception: Deep Learning with Depthwise Separable Convolutions},
+ author={François Chollet},
+ year={2017},
+ eprint={1610.02357},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Gloun Xception
+ Paper:
+ Title: 'Xception: Deep Learning with Depthwise Separable Convolutions'
+ URL: https://paperswithcode.com/paper/xception-deep-learning-with-depthwise
+ Models:
+ - Name: gluon_xception65
+ In Collection: Gloun Xception
+ Metadata:
+ FLOPs: 17594889728
+ Parameters: 39920000
+ File Size: 160551306
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Depthwise Separable Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: gluon_xception65
+ Crop Pct: '0.903'
+ Image Size: '299'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/gluon_xception.py#L241
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_xception-7015a15c.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.7%
+ Top 5 Accuracy: 94.87%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/hrnet.md ADDED
@@ -0,0 +1,358 @@
+ # HRNet
+
+ **HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution convolution stream, gradually add high-to-low resolution convolution streams one by one, and connect the multi-resolution streams in parallel. The resulting network consists of several ($4$ in the paper) stages and the $n$th stage contains $n$ streams corresponding to $n$ resolutions. The authors conduct repeated multi-resolution fusions by exchanging the information across the parallel streams over and over.
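+ The repeated fusion step can be sketched in NumPy: each stream receives the other stream's features resampled to its own resolution and adds them in. This is a two-stream toy with average-pool downsampling and nearest-neighbour upsampling, purely illustrative of the exchange pattern (HRNet itself uses strided convolutions and learned fusions):
+
+ ```python
+ import numpy as np
+
+ def down2(x):
+     # stride-2 downsample via 2x2 average pooling
+     h, w = x.shape
+     return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
+
+ def up2(x):
+     # 2x upsample via nearest-neighbour repetition
+     return x.repeat(2, axis=0).repeat(2, axis=1)
+
+ def fuse(high, low):
+     # each stream absorbs the other stream at its own resolution
+     return high + up2(low), low + down2(high)
+
+ rng = np.random.default_rng(0)
+ high = rng.standard_normal((8, 8))   # high-resolution stream
+ low = rng.standard_normal((4, 4))    # low-resolution stream
+ new_high, new_low = fuse(high, low)
+ assert new_high.shape == high.shape and new_low.shape == low.shape
+ ```
+
+ Because fusion preserves each stream's resolution, it can be applied repeatedly, which is how HRNet keeps a high-resolution representation alive through the whole network.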
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{sun2019highresolution,
+ title={High-Resolution Representations for Labeling Pixels and Regions},
+ author={Ke Sun and Yang Zhao and Borui Jiang and Tianheng Cheng and Bin Xiao and Dong Liu and Yadong Mu and Xinggang Wang and Wenyu Liu and Jingdong Wang},
+ year={2019},
+ eprint={1904.04514},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: HRNet
+ Paper:
+ Title: Deep High-Resolution Representation Learning for Visual Recognition
+ URL: https://paperswithcode.com/paper/190807919
+ Models:
+ - Name: hrnet_w18
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 5547205500
+ Parameters: 21300000
+ File Size: 85718883
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ ID: hrnet_w18
+ Epochs: 100
+ Layers: 18
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L800
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnetv2_w18-8cb57bb9.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 76.76%
+ Top 5 Accuracy: 93.44%
+ - Name: hrnet_w18_small
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 2071651488
+ Parameters: 13190000
+ File Size: 52934302
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ ID: hrnet_w18_small
+ Epochs: 100
+ Layers: 18
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L790
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnet_w18_small_v1-f460c6bc.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 72.34%
+ Top 5 Accuracy: 90.68%
+ - Name: hrnet_w18_small_v2
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 3360023160
+ Parameters: 15600000
+ File Size: 62682879
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ ID: hrnet_w18_small_v2
+ Epochs: 100
+ Layers: 18
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L795
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnet_w18_small_v2-4c50a8cb.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 75.11%
+ Top 5 Accuracy: 92.41%
+ - Name: hrnet_w30
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 10474119492
+ Parameters: 37710000
+ File Size: 151452218
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ ID: hrnet_w30
+ Epochs: 100
+ Layers: 30
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L805
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnetv2_w30-8d7f8dab.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.21%
+ Top 5 Accuracy: 94.22%
+ - Name: hrnet_w32
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 11524528320
+ Parameters: 41230000
+ File Size: 165547812
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ Training Time: 60 hours
+ ID: hrnet_w32
+ Epochs: 100
+ Layers: 32
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L810
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnetv2_w32-90d8c5fb.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.45%
+ Top 5 Accuracy: 94.19%
+ - Name: hrnet_w40
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 16381182192
+ Parameters: 57560000
+ File Size: 230899236
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ ID: hrnet_w40
+ Epochs: 100
+ Layers: 40
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L815
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnetv2_w40-7cd397a4.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.93%
+ Top 5 Accuracy: 94.48%
+ - Name: hrnet_w44
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 19202520264
+ Parameters: 67060000
+ File Size: 268957432
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ ID: hrnet_w44
+ Epochs: 100
+ Layers: 44
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L820
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnetv2_w44-c9ac8c18.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.89%
+ Top 5 Accuracy: 94.37%
+ - Name: hrnet_w48
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 22285865760
+ Parameters: 77470000
+ File Size: 310603710
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ Training Time: 80 hours
+ ID: hrnet_w48
+ Epochs: 100
+ Layers: 48
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L825
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnetv2_w48-abd2e6ab.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.32%
+ Top 5 Accuracy: 94.51%
+ - Name: hrnet_w64
+ In Collection: HRNet
+ Metadata:
+ FLOPs: 37239321984
+ Parameters: 128060000
+ File Size: 513071818
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - ReLU
+ - Residual Connection
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x NVIDIA V100 GPUs
+ ID: hrnet_w64
+ Epochs: 100
+ Layers: 64
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L830
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-hrnet/hrnetv2_w64-b47cc881.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.46%
+ Top 5 Accuracy: 94.65%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/ig-resnext.md ADDED
@@ -0,0 +1,209 @@
+ # Instagram ResNeXt WSL
+
+ A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width.
+
+ These models were trained on billions of Instagram images using thousands of distinct hashtags as labels, and they exhibit excellent transfer learning performance.
+
+ Please note the CC-BY-NC 4.0 license on these weights: they are for non-commercial use only.
+
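The cost trade-off behind cardinality can be seen with a little parameter arithmetic. The sketch below is illustrative only (the function name and the 256-d block sizes are assumptions based on the ResNeXt paper's 32x4d bottleneck template, not code from timm); it counts the weights of a bottleneck whose 3x3 stage is split into `C` grouped paths:

```python
def bottleneck_params(c_in, width, cardinality, c_out):
    """Weight count of a ResNeXt-style bottleneck:
    1x1 reduce -> 3x3 grouped conv -> 1x1 expand (biases/BN ignored)."""
    d = width * cardinality                 # total width of the 3x3 stage
    reduce_ = c_in * d                      # 1x1 convolution
    grouped = cardinality * (d // cardinality) ** 2 * 3 * 3  # 3x3 conv in C groups
    expand = d * c_out                      # 1x1 convolution
    return reduce_ + grouped + expand

# A 32x4d block (32 paths, 4-d each) costs about the same as a plain
# 64-d ResNet bottleneck (cardinality 1), so cardinality is "free" capacity:
resnext = bottleneck_params(256, 4, 32, 256)   # 70144
resnet = bottleneck_params(256, 64, 1, 256)    # 69632
```

Under this accounting, raising cardinality adds representational paths at roughly constant parameter cost, which is the trade the WSL models exploit at very large scale.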
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{mahajan2018exploring,
+ title={Exploring the Limits of Weakly Supervised Pretraining},
+ author={Dhruv Mahajan and Ross Girshick and Vignesh Ramanathan and Kaiming He and Manohar Paluri and Yixuan Li and Ashwin Bharambe and Laurens van der Maaten},
+ year={2018},
+ eprint={1805.00932},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: IG ResNeXt
+ Paper:
+ Title: Exploring the Limits of Weakly Supervised Pretraining
+ URL: https://paperswithcode.com/paper/exploring-the-limits-of-weakly-supervised
+ Models:
+ - Name: ig_resnext101_32x16d
+ In Collection: IG ResNeXt
+ Metadata:
+ FLOPs: 46623691776
+ Parameters: 194030000
+ File Size: 777518664
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - IG-3.5B-17k
+ - ImageNet
+ Training Resources: 336x GPUs
+ ID: ig_resnext101_32x16d
+ Epochs: 100
+ Layers: 101
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8064
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L874
+ Weights: https://download.pytorch.org/models/ig_resnext101_32x16-c6f796b0.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 84.16%
+ Top 5 Accuracy: 97.19%
+ - Name: ig_resnext101_32x32d
+ In Collection: IG ResNeXt
+ Metadata:
+ FLOPs: 112225170432
+ Parameters: 468530000
+ File Size: 1876573776
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - IG-3.5B-17k
+ - ImageNet
+ Training Resources: 336x GPUs
+ ID: ig_resnext101_32x32d
+ Epochs: 100
+ Layers: 101
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8064
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Minibatch Size: 8064
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L885
+ Weights: https://download.pytorch.org/models/ig_resnext101_32x32-e4b90b00.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 85.09%
+ Top 5 Accuracy: 97.44%
+ - Name: ig_resnext101_32x48d
+ In Collection: IG ResNeXt
+ Metadata:
+ FLOPs: 197446554624
+ Parameters: 828410000
+ File Size: 3317136976
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - IG-3.5B-17k
+ - ImageNet
+ Training Resources: 336x GPUs
+ ID: ig_resnext101_32x48d
+ Epochs: 100
+ Layers: 101
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8064
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L896
+ Weights: https://download.pytorch.org/models/ig_resnext101_32x48-3e41cc8a.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 85.42%
+ Top 5 Accuracy: 97.58%
+ - Name: ig_resnext101_32x8d
+ In Collection: IG ResNeXt
+ Metadata:
+ FLOPs: 21180417024
+ Parameters: 88790000
+ File Size: 356056638
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Nesterov Accelerated Gradient
+ - Weight Decay
+ Training Data:
+ - IG-3.5B-17k
+ - ImageNet
+ Training Resources: 336x GPUs
+ ID: ig_resnext101_32x8d
+ Epochs: 100
+ Layers: 101
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8064
+ Image Size: '224'
+ Weight Decay: 0.001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L863
+ Weights: https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 82.7%
+ Top 5 Accuracy: 96.64%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/inception-resnet-v2.md ADDED
@@ -0,0 +1,72 @@
+ # Inception ResNet v2
+
+ **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{szegedy2016inceptionv4,
+ title={Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning},
+ author={Christian Szegedy and Sergey Ioffe and Vincent Vanhoucke and Alex Alemi},
+ year={2016},
+ eprint={1602.07261},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Inception ResNet v2
+ Paper:
+ Title: Inception-v4, Inception-ResNet and the Impact of Residual Connections on
+ Learning
+ URL: https://paperswithcode.com/paper/inception-v4-inception-resnet-and-the-impact
+ Models:
+ - Name: inception_resnet_v2
+ In Collection: Inception ResNet v2
+ Metadata:
+ FLOPs: 16959133120
+ Parameters: 55850000
+ File Size: 223774238
+ Architecture:
+ - Average Pooling
+ - Dropout
+ - Inception-ResNet-v2 Reduction-B
+ - Inception-ResNet-v2-A
+ - Inception-ResNet-v2-B
+ - Inception-ResNet-v2-C
+ - Reduction-A
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - RMSProp
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 20x NVIDIA Kepler GPUs
+ ID: inception_resnet_v2
+ LR: 0.045
+ Dropout: 0.2
+ Crop Pct: '0.897'
+ Momentum: 0.9
+ Image Size: '299'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_resnet_v2.py#L343
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/inception_resnet_v2-940b1cd6.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 0.95%
+ Top 5 Accuracy: 17.29%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/inception-v3.md ADDED
@@ -0,0 +1,85 @@
+ # Inception v3
+
+ **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements, including the use of [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with batch normalization for layers in the side head). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module).
+
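Label smoothing, one of the training techniques listed below, mixes the one-hot target with a uniform distribution over the `K` classes. A minimal sketch (the function name `smooth_labels` is illustrative, not a timm API):

```python
def smooth_labels(target, num_classes, eps=0.1):
    """Return the smoothed target distribution used when training Inception v3:
    each class receives eps/K mass, and the true class keeps 1 - eps + eps/K."""
    off = eps / num_classes
    on = 1.0 - eps + off
    return [on if k == target else off for k in range(num_classes)]

dist = smooth_labels(target=2, num_classes=4, eps=0.1)
# true class gets 0.925, every other class 0.025; the masses still sum to 1
```

Training against this softened distribution discourages the network from becoming over-confident in a single logit.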
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/SzegedyVISW15,
+ author = {Christian Szegedy and
+ Vincent Vanhoucke and
+ Sergey Ioffe and
+ Jonathon Shlens and
+ Zbigniew Wojna},
+ title = {Rethinking the Inception Architecture for Computer Vision},
+ journal = {CoRR},
+ volume = {abs/1512.00567},
+ year = {2015},
+ url = {http://arxiv.org/abs/1512.00567},
+ archivePrefix = {arXiv},
+ eprint = {1512.00567},
+ timestamp = {Mon, 13 Aug 2018 16:49:07 +0200},
+ biburl = {https://dblp.org/rec/journals/corr/SzegedyVISW15.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Inception v3
+ Paper:
+ Title: Rethinking the Inception Architecture for Computer Vision
+ URL: https://paperswithcode.com/paper/rethinking-the-inception-architecture-for
+ Models:
+ - Name: inception_v3
+ In Collection: Inception v3
+ Metadata:
+ FLOPs: 7352418880
+ Parameters: 23830000
+ File Size: 108857766
+ Architecture:
+ - 1x1 Convolution
+ - Auxiliary Classifier
+ - Average Pooling
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Dropout
+ - Inception-v3 Module
+ - Max Pooling
+ - ReLU
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Gradient Clipping
+ - Label Smoothing
+ - RMSProp
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 50x NVIDIA Kepler GPUs
+ ID: inception_v3
+ LR: 0.045
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Image Size: '299'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v3.py#L442
+ Weights: https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.46%
+ Top 5 Accuracy: 93.48%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/inception-v4.md ADDED
@@ -0,0 +1,71 @@
+ # Inception v4
+
+ **Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{szegedy2016inceptionv4,
+ title={Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning},
+ author={Christian Szegedy and Sergey Ioffe and Vincent Vanhoucke and Alex Alemi},
+ year={2016},
+ eprint={1602.07261},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Inception v4
+ Paper:
+ Title: Inception-v4, Inception-ResNet and the Impact of Residual Connections on
+ Learning
+ URL: https://paperswithcode.com/paper/inception-v4-inception-resnet-and-the-impact
+ Models:
+ - Name: inception_v4
+ In Collection: Inception v4
+ Metadata:
+ FLOPs: 15806527936
+ Parameters: 42680000
+ File Size: 171082495
+ Architecture:
+ - Average Pooling
+ - Dropout
+ - Inception-A
+ - Inception-B
+ - Inception-C
+ - Reduction-A
+ - Reduction-B
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - RMSProp
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 20x NVIDIA Kepler GPUs
+ ID: inception_v4
+ LR: 0.045
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Image Size: '299'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/inception_v4.py#L313
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/inceptionv4-8e4777a0.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 1.01%
+ Top 5 Accuracy: 16.85%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/legacy-se-resnet.md ADDED
@@ -0,0 +1,257 @@
+ # (Legacy) SE-ResNet
+
+ **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
+
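The recalibration performed by a squeeze-and-excitation block can be sketched in a few lines of plain Python. This is a minimal illustration, not code from timm: `w1`/`w2` stand in for the two learned excitation layers (reduce then expand), and the tiny weights in the example are hypothetical.

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """Squeeze-and-excitation on a C x H x W nested list.
    w1: (C/r) x C reduce weights, w2: C x (C/r) expand weights."""
    # Squeeze: global average pooling gives one descriptor per channel
    z = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0])) for fm in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid produces per-channel gates in (0, 1)
    h = [max(0.0, sum(wi * zi for wi, zi in zip(row, z))) for row in w1]
    s = [1 / (1 + math.exp(-sum(wi * hi for wi, hi in zip(row, h)))) for row in w2]
    # Recalibrate: rescale every channel by its gate
    return [[[v * s[c] for v in row] for row in fm] for c, fm in enumerate(feature_maps)]

# Two 2x2 channels with means 1.0 and 2.0, and toy excitation weights:
fm = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
recalibrated = squeeze_excite(fm, w1=[[1.0, 0.0]], w2=[[1.0], [0.0]])
```

Because the gates depend on the pooled input itself, the block can amplify or suppress whole channels per example, which is what "dynamic channel-wise feature recalibration" refers to.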
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{hu2019squeezeandexcitation,
+ title={Squeeze-and-Excitation Networks},
+ author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
+ year={2019},
+ eprint={1709.01507},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Legacy SE ResNet
+ Paper:
+ Title: Squeeze-and-Excitation Networks
+ URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
+ Models:
+ - Name: legacy_seresnet101
+ In Collection: Legacy SE ResNet
+ Metadata:
+ FLOPs: 9762614000
+ Parameters: 49330000
+ File Size: 197822624
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnet101
+ LR: 0.6
+ Epochs: 100
+ Layers: 101
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L426
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/se_resnet101-7e38fcc6.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.38%
+ Top 5 Accuracy: 94.26%
+ - Name: legacy_seresnet152
+ In Collection: Legacy SE ResNet
+ Metadata:
+ FLOPs: 14553578160
+ Parameters: 66819999
+ File Size: 268033864
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnet152
+ LR: 0.6
+ Epochs: 100
+ Layers: 152
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L433
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/se_resnet152-d17c99b7.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.67%
+ Top 5 Accuracy: 94.38%
+ - Name: legacy_seresnet18
+ In Collection: Legacy SE ResNet
+ Metadata:
+ FLOPs: 2328876024
+ Parameters: 11780000
+ File Size: 47175663
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnet18
+ LR: 0.6
+ Epochs: 100
+ Layers: 18
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L405
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet18-4bb0ce65.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 71.74%
+ Top 5 Accuracy: 90.34%
+ - Name: legacy_seresnet34
+ In Collection: Legacy SE ResNet
+ Metadata:
+ FLOPs: 4706201004
+ Parameters: 21960000
+ File Size: 87958697
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnet34
+ LR: 0.6
+ Epochs: 100
+ Layers: 34
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L412
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet34-a4004e63.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 74.79%
+ Top 5 Accuracy: 92.13%
+ - Name: legacy_seresnet50
+ In Collection: Legacy SE ResNet
+ Metadata:
+ FLOPs: 4974351024
+ Parameters: 28090000
+ File Size: 112611220
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnet50
+ LR: 0.6
+ Epochs: 100
+ Layers: 50
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Image Size: '224'
+ Interpolation: bilinear
+ Minibatch Size: 1024
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L419
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/se_resnet50-ce0d4300.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.64%
+ Top 5 Accuracy: 93.74%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/legacy-se-resnext.md ADDED
@@ -0,0 +1,167 @@
+ # (Legacy) SE-ResNeXt
+
+ **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{hu2019squeezeandexcitation,
+ title={Squeeze-and-Excitation Networks},
+ author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
+ year={2019},
+ eprint={1709.01507},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Legacy SE ResNeXt
+ Paper:
+ Title: Squeeze-and-Excitation Networks
+ URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
+ Models:
+ - Name: legacy_seresnext101_32x4d
+ In Collection: Legacy SE ResNeXt
+ Metadata:
+ FLOPs: 10287698672
+ Parameters: 48960000
+ File Size: 196466866
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnext101_32x4d
+ LR: 0.6
+ Epochs: 100
+ Layers: 101
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L462
+ Weights: http://data.lip6.fr/cadene/pretrainedmodels/se_resnext101_32x4d-3b2fe3d8.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.23%
+ Top 5 Accuracy: 95.02%
+ - Name: legacy_seresnext26_32x4d
+ In Collection: Legacy SE ResNeXt
+ Metadata:
+ FLOPs: 3187342304
+ Parameters: 16790000
+ File Size: 67346327
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnext26_32x4d
+ LR: 0.6
+ Epochs: 100
+ Layers: 26
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L448
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext26_32x4d-65ebdb501.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.11%
+ Top 5 Accuracy: 93.31%
+ - Name: legacy_seresnext50_32x4d
+ In Collection: Legacy SE ResNeXt
+ Metadata:
+ FLOPs: 5459954352
+ Parameters: 27560000
+ File Size: 110559176
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_seresnext50_32x4d
+ LR: 0.6
+ Epochs: 100
+ Layers: 50
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L455
+ Weights: http://data.lip6.fr/cadene/pretrainedmodels/se_resnext50_32x4d-a260b3a4.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.08%
+ Top 5 Accuracy: 94.43%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/legacy-senet.md ADDED
@@ -0,0 +1,74 @@
+ # (Legacy) SENet
+
+ A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
+
+ The weights from this model were ported from Gluon.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{hu2019squeezeandexcitation,
+ title={Squeeze-and-Excitation Networks},
+ author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
+ year={2019},
+ eprint={1709.01507},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Legacy SENet
+ Paper:
+ Title: Squeeze-and-Excitation Networks
+ URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
+ Models:
+ - Name: legacy_senet154
+ In Collection: Legacy SENet
+ Metadata:
+ FLOPs: 26659556016
+ Parameters: 115090000
+ File Size: 461488402
+ Architecture:
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - Softmax
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - Label Smoothing
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA Titan X GPUs
+ ID: legacy_senet154
+ LR: 0.6
+ Epochs: 100
+ Layers: 154
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/senet.py#L440
+ Weights: http://data.lip6.fr/cadene/pretrainedmodels/senet154-c7b49a05.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 81.33%
+ Top 5 Accuracy: 95.51%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mixnet.md ADDED
@@ -0,0 +1,164 @@
1
+ # MixNet
2
+
3
+ **MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution).
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{tan2019mixconv,
+       title={MixConv: Mixed Depthwise Convolutional Kernels},
+       author={Mingxing Tan and Quoc V. Le},
+       year={2019},
+       eprint={1907.09595},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: MixNet
+   Paper:
+     Title: 'MixConv: Mixed Depthwise Convolutional Kernels'
+     URL: https://paperswithcode.com/paper/mixnet-mixed-depthwise-convolutional-kernels
+ Models:
+ - Name: mixnet_l
+   In Collection: MixNet
+   Metadata:
+     FLOPs: 738671316
+     Parameters: 7330000
+     File Size: 29608232
+     Architecture:
+     - Batch Normalization
+     - Dense Connections
+     - Dropout
+     - Global Average Pooling
+     - Grouped Convolution
+     - MixConv
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - MNAS
+     Training Data:
+     - ImageNet
+     ID: mixnet_l
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1669
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_l-5a9a2ed8.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 78.98%
+       Top 5 Accuracy: 94.18%
+ - Name: mixnet_m
+   In Collection: MixNet
+   Metadata:
+     FLOPs: 454543374
+     Parameters: 5010000
+     File Size: 20298347
+     Architecture:
+     - Batch Normalization
+     - Dense Connections
+     - Dropout
+     - Global Average Pooling
+     - Grouped Convolution
+     - MixConv
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - MNAS
+     Training Data:
+     - ImageNet
+     ID: mixnet_m
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1660
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_m-4647fc68.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.27%
+       Top 5 Accuracy: 93.42%
+ - Name: mixnet_s
+   In Collection: MixNet
+   Metadata:
+     FLOPs: 321264910
+     Parameters: 4130000
+     File Size: 16727982
+     Architecture:
+     - Batch Normalization
+     - Dense Connections
+     - Dropout
+     - Global Average Pooling
+     - Grouped Convolution
+     - MixConv
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - MNAS
+     Training Data:
+     - ImageNet
+     ID: mixnet_s
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1651
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_s-a907afbc.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 75.99%
+       Top 5 Accuracy: 92.79%
+ - Name: mixnet_xl
+   In Collection: MixNet
+   Metadata:
+     FLOPs: 1195880424
+     Parameters: 11900000
+     File Size: 48001170
+     Architecture:
+     - Batch Normalization
+     - Dense Connections
+     - Dropout
+     - Global Average Pooling
+     - Grouped Convolution
+     - MixConv
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - MNAS
+     Training Data:
+     - ImageNet
+     ID: mixnet_xl
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1678
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mixnet_xl_ra-aac3c00c.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 80.47%
+       Top 5 Accuracy: 94.93%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mnasnet.md ADDED
@@ -0,0 +1,109 @@
+ # MnasNet
+
+ **MnasNet** is a type of convolutional neural network optimized for mobile devices that is discovered through mobile neural architecture search, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency. The main building block is an [inverted residual block](https://paperswithcode.com/method/inverted-residual-block) (from [MobileNetV2](https://paperswithcode.com/method/mobilenetv2)).
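
The latency-aware objective can be sketched as a scalar reward that scales accuracy by a soft latency penalty, `acc * (lat / target) ** w`; the MnasNet paper reports using the exponent setting `alpha = beta = -0.07`, and the concrete numbers below are purely illustrative.

```python
# Sketch of the platform-aware search reward from the MnasNet paper:
# accuracy weighted by a soft latency penalty relative to a latency target.

def mnas_reward(accuracy, latency_ms, target_ms, alpha=-0.07, beta=-0.07):
    # alpha applies when the model meets the target, beta when it exceeds it.
    w = alpha if latency_ms <= target_ms else beta
    return accuracy * (latency_ms / target_ms) ** w

# A model exactly on the latency target keeps its raw accuracy...
on_target = mnas_reward(0.75, 80.0, 80.0)
# ...while a model twice as slow is discounted.
too_slow = mnas_reward(0.75, 160.0, 80.0)
```

With a negative exponent the penalty is smooth, so the search can still explore slightly-too-slow models instead of discarding them outright.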
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{tan2019mnasnet,
+       title={MnasNet: Platform-Aware Neural Architecture Search for Mobile},
+       author={Mingxing Tan and Bo Chen and Ruoming Pang and Vijay Vasudevan and Mark Sandler and Andrew Howard and Quoc V. Le},
+       year={2019},
+       eprint={1807.11626},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: MNASNet
+   Paper:
+     Title: 'MnasNet: Platform-Aware Neural Architecture Search for Mobile'
+     URL: https://paperswithcode.com/paper/mnasnet-platform-aware-neural-architecture
+ Models:
+ - Name: mnasnet_100
+   In Collection: MNASNet
+   Metadata:
+     FLOPs: 416415488
+     Parameters: 4380000
+     File Size: 17731774
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Depthwise Separable Convolution
+     - Dropout
+     - Global Average Pooling
+     - Inverted Residual Block
+     - Max Pooling
+     - ReLU
+     - Residual Connection
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     ID: mnasnet_100
+     Layers: 100
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 4000
+     Image Size: '224'
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L894
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mnasnet_b1-74cb7081.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 74.67%
+       Top 5 Accuracy: 92.1%
+ - Name: semnasnet_100
+   In Collection: MNASNet
+   Metadata:
+     FLOPs: 414570766
+     Parameters: 3890000
+     File Size: 15731489
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Depthwise Separable Convolution
+     - Dropout
+     - Global Average Pooling
+     - Inverted Residual Block
+     - Max Pooling
+     - ReLU
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Data:
+     - ImageNet
+     ID: semnasnet_100
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L928
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mnasnet_a1-d9418771.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 75.45%
+       Top 5 Accuracy: 92.61%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mobilenet-v2.md ADDED
@@ -0,0 +1,210 @@
+ # MobileNet v2
+
+ **MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an [inverted residual structure](https://paperswithcode.com/method/inverted-residual-block) where the residual connections are between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. As a whole, the architecture of MobileNetV2 contains an initial fully convolutional layer with 32 filters, followed by 19 residual bottleneck layers.
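
To see why the expand/depthwise/project structure is cheap, it helps to count weights per stage. The sketch below is back-of-the-envelope arithmetic (batch-norm parameters ignored, expansion factor 6 as in the paper), not a reimplementation of timm's block.

```python
# Weight counts for one inverted residual block:
# 1x1 expansion -> 3x3 depthwise -> 1x1 linear bottleneck projection.

def inverted_residual_params(c_in, c_out, expansion=6, kernel=3):
    """Per-stage weight counts, ignoring batch-norm parameters."""
    hidden = c_in * expansion
    return {
        "expand_1x1": c_in * hidden,        # pointwise expansion
        "depthwise": hidden * kernel ** 2,  # one k x k filter per channel
        "project_1x1": hidden * c_out,      # linear bottleneck projection
    }

counts = inverted_residual_params(32, 32)
# The depthwise stage filters all 192 expanded channels spatially with only
# 192 * 9 = 1,728 weights; a dense 3x3 conv over 192 channels would need
# 192 * 192 * 9 = 331,776. Almost all of the block's cost sits in the 1x1 layers.
```

This is the core trade MobileNetV2 makes: spatial filtering is nearly free, and the pointwise layers carry the capacity.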
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/abs-1801-04381,
+   author    = {Mark Sandler and
+                Andrew G. Howard and
+                Menglong Zhu and
+                Andrey Zhmoginov and
+                Liang{-}Chieh Chen},
+   title     = {Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification,
+                Detection and Segmentation},
+   journal   = {CoRR},
+   volume    = {abs/1801.04381},
+   year      = {2018},
+   url       = {http://arxiv.org/abs/1801.04381},
+   archivePrefix = {arXiv},
+   eprint    = {1801.04381},
+   timestamp = {Tue, 12 Jan 2021 15:30:06 +0100},
+   biburl    = {https://dblp.org/rec/journals/corr/abs-1801-04381.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: MobileNet V2
+   Paper:
+     Title: 'MobileNetV2: Inverted Residuals and Linear Bottlenecks'
+     URL: https://paperswithcode.com/paper/mobilenetv2-inverted-residuals-and-linear
+ Models:
+ - Name: mobilenetv2_100
+   In Collection: MobileNet V2
+   Metadata:
+     FLOPs: 401920448
+     Parameters: 3500000
+     File Size: 14202571
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Depthwise Separable Convolution
+     - Dropout
+     - Inverted Residual Block
+     - Max Pooling
+     - ReLU6
+     - Residual Connection
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 16x GPUs
+     ID: mobilenetv2_100
+     LR: 0.045
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1536
+     Image Size: '224'
+     Weight Decay: 4.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L955
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_100_ra-b33bc2c4.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 72.95%
+       Top 5 Accuracy: 91.0%
+ - Name: mobilenetv2_110d
+   In Collection: MobileNet V2
+   Metadata:
+     FLOPs: 573958832
+     Parameters: 4520000
+     File Size: 18316431
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Depthwise Separable Convolution
+     - Dropout
+     - Inverted Residual Block
+     - Max Pooling
+     - ReLU6
+     - Residual Connection
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 16x GPUs
+     ID: mobilenetv2_110d
+     LR: 0.045
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1536
+     Image Size: '224'
+     Weight Decay: 4.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L969
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_110d_ra-77090ade.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 75.05%
+       Top 5 Accuracy: 92.19%
+ - Name: mobilenetv2_120d
+   In Collection: MobileNet V2
+   Metadata:
+     FLOPs: 888510048
+     Parameters: 5830000
+     File Size: 23651121
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Depthwise Separable Convolution
+     - Dropout
+     - Inverted Residual Block
+     - Max Pooling
+     - ReLU6
+     - Residual Connection
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 16x GPUs
+     ID: mobilenetv2_120d
+     LR: 0.045
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1536
+     Image Size: '224'
+     Weight Decay: 4.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L977
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_120d_ra-5987e2ed.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.28%
+       Top 5 Accuracy: 93.51%
+ - Name: mobilenetv2_140
+   In Collection: MobileNet V2
+   Metadata:
+     FLOPs: 770196784
+     Parameters: 6110000
+     File Size: 24673555
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Depthwise Separable Convolution
+     - Dropout
+     - Inverted Residual Block
+     - Max Pooling
+     - ReLU6
+     - Residual Connection
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 16x GPUs
+     ID: mobilenetv2_140
+     LR: 0.045
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1536
+     Image Size: '224'
+     Weight Decay: 4.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L962
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv2_140_ra-21a4e913.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 76.51%
+       Top 5 Accuracy: 93.0%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/mobilenet-v3.md ADDED
@@ -0,0 +1,138 @@
+ # MobileNet v3
+
+ **MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) modules in the [MBConv blocks](https://paperswithcode.com/method/inverted-residual-block).
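
The hard-swish activation mentioned above has a simple closed form, `h_swish(x) = x * relu6(x + 3) / 6`, which approximates swish (`x * sigmoid(x)`) using only piecewise-linear operations that are cheap on mobile CPUs. Written out in plain Python:

```python
# Hard swish, the activation used throughout MobileNetV3's later stages.

def relu6(x):
    """ReLU clipped at 6, i.e. min(max(x, 0), 6)."""
    return min(max(x, 0.0), 6.0)

def hard_swish(x):
    """x * relu6(x + 3) / 6: zero for x <= -3, identity for x >= 3."""
    return x * relu6(x + 3.0) / 6.0
```

The function is exactly zero below -3 and exactly the identity above +3, so only the interval in between differs from ReLU-like behaviour.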
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/abs-1905-02244,
+   author    = {Andrew Howard and
+                Mark Sandler and
+                Grace Chu and
+                Liang{-}Chieh Chen and
+                Bo Chen and
+                Mingxing Tan and
+                Weijun Wang and
+                Yukun Zhu and
+                Ruoming Pang and
+                Vijay Vasudevan and
+                Quoc V. Le and
+                Hartwig Adam},
+   title     = {Searching for MobileNetV3},
+   journal   = {CoRR},
+   volume    = {abs/1905.02244},
+   year      = {2019},
+   url       = {http://arxiv.org/abs/1905.02244},
+   archivePrefix = {arXiv},
+   eprint    = {1905.02244},
+   timestamp = {Tue, 12 Jan 2021 15:30:06 +0100},
+   biburl    = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: MobileNet V3
+   Paper:
+     Title: Searching for MobileNetV3
+     URL: https://paperswithcode.com/paper/searching-for-mobilenetv3
+ Models:
+ - Name: mobilenetv3_large_100
+   In Collection: MobileNet V3
+   Metadata:
+     FLOPs: 287193752
+     Parameters: 5480000
+     File Size: 22076443
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Depthwise Separable Convolution
+     - Dropout
+     - Global Average Pooling
+     - Hard Swish
+     - Inverted Residual Block
+     - ReLU
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 4x4 TPU Pod
+     ID: mobilenetv3_large_100
+     LR: 0.1
+     Dropout: 0.8
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 4096
+     Image Size: '224'
+     Weight Decay: 1.0e-05
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/mobilenetv3.py#L363
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_large_100_ra-f55367f5.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 75.77%
+       Top 5 Accuracy: 92.54%
+ - Name: mobilenetv3_rw
+   In Collection: MobileNet V3
+   Metadata:
+     FLOPs: 287190638
+     Parameters: 5480000
+     File Size: 22064048
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Depthwise Separable Convolution
+     - Dropout
+     - Global Average Pooling
+     - Hard Swish
+     - Inverted Residual Block
+     - ReLU
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 4x4 TPU Pod
+     ID: mobilenetv3_rw
+     LR: 0.1
+     Dropout: 0.8
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 4096
+     Image Size: '224'
+     Weight Decay: 1.0e-05
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/mobilenetv3.py#L384
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_100-35495452.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 75.62%
+       Top 5 Accuracy: 92.71%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/nasnet.md ADDED
@@ -0,0 +1,70 @@
+ # NASNet
+
+ **NASNet** is a type of convolutional neural network discovered through neural architecture search. The building blocks consist of normal and reduction cells.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{zoph2018learning,
+       title={Learning Transferable Architectures for Scalable Image Recognition},
+       author={Barret Zoph and Vijay Vasudevan and Jonathon Shlens and Quoc V. Le},
+       year={2018},
+       eprint={1707.07012},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: NASNet
+   Paper:
+     Title: Learning Transferable Architectures for Scalable Image Recognition
+     URL: https://paperswithcode.com/paper/learning-transferable-architectures-for
+ Models:
+ - Name: nasnetalarge
+   In Collection: NASNet
+   Metadata:
+     FLOPs: 30242402862
+     Parameters: 88750000
+     File Size: 356056626
+     Architecture:
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Depthwise Separable Convolution
+     - Dropout
+     - ReLU
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - RMSProp
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 50x Tesla K40 GPUs
+     ID: nasnetalarge
+     Dropout: 0.5
+     Crop Pct: '0.911'
+     Momentum: 0.9
+     Image Size: '331'
+     Interpolation: bicubic
+     Label Smoothing: 0.1
+     RMSProp $\epsilon$: 1.0
+   Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/nasnet.py#L562
+   Weights: http://data.lip6.fr/cadene/pretrainedmodels/nasnetalarge-a1897284.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 82.63%
+       Top 5 Accuracy: 96.05%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/noisy-student.md ADDED
@@ -0,0 +1,510 @@
+ # Noisy Student (EfficientNet)
+
+ **Noisy Student Training** is a semi-supervised learning approach. It extends the idea of self-training
+ and distillation with the use of equal-or-larger student models and noise added to the student during learning. It has three main steps:
+
+ 1. train a teacher model on labeled images
+ 2. use the teacher to generate pseudo labels on unlabeled images
+ 3. train a student model on the combination of labeled images and pseudo labeled images.
+
+ The algorithm is iterated a few times by treating the student as a teacher to relabel the unlabeled data and training a new student.
+
+ Noisy Student Training seeks to improve on self-training and distillation in two ways. First, it makes the student larger than, or at least equal to, the teacher so the student can better learn from a larger dataset. Second, it adds noise to the student so the noised student is forced to learn harder from the pseudo labels. To noise the student, it uses input noise such as RandAugment data augmentation, and model noise such as dropout and stochastic depth during training.
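
The three steps above, iterated, can be sketched as a toy self-training loop. Everything concrete here (the 1-D nearest-centroid "model", the data, Gaussian input noise) is a stand-in chosen to keep the sketch runnable and only illustrates the control flow; the paper uses EfficientNets with RandAugment, dropout, and stochastic depth as the noise.

```python
# Toy Noisy Student loop: train teacher -> pseudo-label -> train noised student,
# then reuse the student as the next teacher.
import random

def fit(points):
    """'Train' a model: one centroid per class label."""
    groups = {}
    for x, y in points:
        groups.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in groups.items()}

def predict(model, x):
    """Classify by nearest centroid."""
    return min(model, key=lambda y: abs(x - model[y]))

def noisy_student(labeled, unlabeled, rounds=3, noise=0.1, seed=0):
    rng = random.Random(seed)
    teacher = fit(labeled)                                      # step 1: teacher
    for _ in range(rounds):
        pseudo = [(x, predict(teacher, x)) for x in unlabeled]  # step 2: pseudo labels
        noised = [(x + rng.gauss(0.0, noise), y) for x, y in labeled + pseudo]
        teacher = fit(noised)                                   # step 3: noised student,
    return teacher                                              # reused as next teacher

labeled = [(-1.0, "a"), (-0.8, "a"), (0.9, "b"), (1.1, "b")]
unlabeled = [-1.2, -0.9, 1.0, 1.3]
model = noisy_student(labeled, unlabeled)
```

Note that the noise is applied only while training the student; pseudo labels are always generated by the un-noised teacher, mirroring the paper's setup.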
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{xie2020selftraining,
+       title={Self-training with Noisy Student improves ImageNet classification},
+       author={Qizhe Xie and Minh-Thang Luong and Eduard Hovy and Quoc V. Le},
+       year={2020},
+       eprint={1911.04252},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Noisy Student
+   Paper:
+     Title: Self-training with Noisy Student improves ImageNet classification
+     URL: https://paperswithcode.com/paper/self-training-with-noisy-student-improves
+ Models:
+ - Name: tf_efficientnet_b0_ns
+   In Collection: Noisy Student
+   Metadata:
+     FLOPs: 488688572
+     Parameters: 5290000
+     File Size: 21386709
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Inverted Residual Block
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - AutoAugment
+     - FixRes
+     - Label Smoothing
+     - Noisy Student
+     - RMSProp
+     - RandAugment
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     - JFT-300M
+     Training Resources: Cloud TPU v3 Pod
+     ID: tf_efficientnet_b0_ns
+     LR: 0.128
+     Epochs: 700
+     Dropout: 0.5
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 2048
+     Image Size: '224'
+     Weight Decay: 1.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+     Label Smoothing: 0.1
+     BatchNorm Momentum: 0.99
+     Stochastic Depth Survival: 0.8
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1427
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b0_ns-c0e6a31c.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 78.66%
+       Top 5 Accuracy: 94.37%
+ - Name: tf_efficientnet_b1_ns
+   In Collection: Noisy Student
+   Metadata:
+     FLOPs: 883633200
+     Parameters: 7790000
+     File Size: 31516408
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Inverted Residual Block
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - AutoAugment
+     - FixRes
+     - Label Smoothing
+     - Noisy Student
+     - RMSProp
+     - RandAugment
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     - JFT-300M
+     Training Resources: Cloud TPU v3 Pod
+     ID: tf_efficientnet_b1_ns
+     LR: 0.128
+     Epochs: 700
+     Dropout: 0.5
+     Crop Pct: '0.882'
+     Momentum: 0.9
+     Batch Size: 2048
+     Image Size: '240'
+     Weight Decay: 1.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+     Label Smoothing: 0.1
+     BatchNorm Momentum: 0.99
+     Stochastic Depth Survival: 0.8
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1437
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b1_ns-99dd0c41.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 81.39%
+       Top 5 Accuracy: 95.74%
+ - Name: tf_efficientnet_b2_ns
+   In Collection: Noisy Student
+   Metadata:
+     FLOPs: 1234321170
+     Parameters: 9110000
+     File Size: 36801803
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Inverted Residual Block
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - AutoAugment
+     - FixRes
+     - Label Smoothing
+     - Noisy Student
+     - RMSProp
+     - RandAugment
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     - JFT-300M
+     Training Resources: Cloud TPU v3 Pod
+     ID: tf_efficientnet_b2_ns
+     LR: 0.128
+     Epochs: 700
+     Dropout: 0.5
+     Crop Pct: '0.89'
+     Momentum: 0.9
+     Batch Size: 2048
+     Image Size: '260'
+     Weight Decay: 1.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+     Label Smoothing: 0.1
+     BatchNorm Momentum: 0.99
+     Stochastic Depth Survival: 0.8
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1447
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b2_ns-00306e48.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 82.39%
+       Top 5 Accuracy: 96.24%
+ - Name: tf_efficientnet_b3_ns
+   In Collection: Noisy Student
+   Metadata:
+     FLOPs: 2275247568
+     Parameters: 12230000
+     File Size: 49385734
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Inverted Residual Block
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - AutoAugment
+     - FixRes
+     - Label Smoothing
+     - Noisy Student
+     - RMSProp
+     - RandAugment
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     - JFT-300M
+     Training Resources: Cloud TPU v3 Pod
+     ID: tf_efficientnet_b3_ns
+     LR: 0.128
+     Epochs: 700
+     Dropout: 0.5
+     Crop Pct: '0.904'
+     Momentum: 0.9
+     Batch Size: 2048
+     Image Size: '300'
+     Weight Decay: 1.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+     Label Smoothing: 0.1
+     BatchNorm Momentum: 0.99
+     Stochastic Depth Survival: 0.8
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1457
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b3_ns-9d44bf68.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 84.04%
+       Top 5 Accuracy: 96.91%
+ - Name: tf_efficientnet_b4_ns
+   In Collection: Noisy Student
+   Metadata:
+     FLOPs: 5749638672
+     Parameters: 19340000
+     File Size: 77995057
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Inverted Residual Block
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - AutoAugment
+     - FixRes
+     - Label Smoothing
+     - Noisy Student
+     - RMSProp
+     - RandAugment
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     - JFT-300M
+     Training Resources: Cloud TPU v3 Pod
+     ID: tf_efficientnet_b4_ns
+     LR: 0.128
+     Epochs: 700
+     Dropout: 0.5
+     Crop Pct: '0.922'
+     Momentum: 0.9
+     Batch Size: 2048
+     Image Size: '380'
+     Weight Decay: 1.0e-05
+     Interpolation: bicubic
+     RMSProp Decay: 0.9
+     Label Smoothing: 0.1
+     BatchNorm Momentum: 0.99
+     Stochastic Depth Survival: 0.8
+   Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1467
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b4_ns-d6313a46.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 85.15%
+       Top 5 Accuracy: 97.47%
+ - Name: tf_efficientnet_b5_ns
+   In Collection: Noisy Student
+   Metadata:
+     FLOPs: 13176501888
+     Parameters: 30390000
+     File Size: 122404944
+     Architecture:
+     - 1x1 Convolution
+     - Average Pooling
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Inverted Residual Block
+     - Squeeze-and-Excitation Block
+     - Swish
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - AutoAugment
+     - FixRes
+     - Label Smoothing
+     - Noisy Student
+     - RMSProp
+     - RandAugment
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     - JFT-300M
+     Training Resources: Cloud TPU v3 Pod
+     ID: tf_efficientnet_b5_ns
+     LR: 0.128
+     Epochs: 350
+     Dropout: 0.5
+     Crop Pct: '0.934'
+     Momentum: 0.9
+     Batch Size: 2048
+     Image Size: '456'
+     Weight Decay: 1.0e-05
+ Weight Decay: 1.0e-05
340
+ Interpolation: bicubic
341
+ RMSProp Decay: 0.9
342
+ Label Smoothing: 0.1
343
+ BatchNorm Momentum: 0.99
344
+ Stochastic Depth Survival: 0.8
345
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1477
346
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b5_ns-6f26d0cf.pth
347
+ Results:
348
+ - Task: Image Classification
349
+ Dataset: ImageNet
350
+ Metrics:
351
+ Top 1 Accuracy: 86.08%
352
+ Top 5 Accuracy: 97.75%
353
+ - Name: tf_efficientnet_b6_ns
354
+ In Collection: Noisy Student
355
+ Metadata:
356
+ FLOPs: 24180518488
357
+ Parameters: 43040000
358
+ File Size: 173239537
359
+ Architecture:
360
+ - 1x1 Convolution
361
+ - Average Pooling
362
+ - Batch Normalization
363
+ - Convolution
364
+ - Dense Connections
365
+ - Dropout
366
+ - Inverted Residual Block
367
+ - Squeeze-and-Excitation Block
368
+ - Swish
369
+ Tasks:
370
+ - Image Classification
371
+ Training Techniques:
372
+ - AutoAugment
373
+ - FixRes
374
+ - Label Smoothing
375
+ - Noisy Student
376
+ - RMSProp
377
+ - RandAugment
378
+ - Weight Decay
379
+ Training Data:
380
+ - ImageNet
381
+ - JFT-300M
382
+ Training Resources: Cloud TPU v3 Pod
383
+ ID: tf_efficientnet_b6_ns
384
+ LR: 0.128
385
+ Epochs: 350
386
+ Dropout: 0.5
387
+ Crop Pct: '0.942'
388
+ Momentum: 0.9
389
+ Batch Size: 2048
390
+ Image Size: '528'
391
+ Weight Decay: 1.0e-05
392
+ Interpolation: bicubic
393
+ RMSProp Decay: 0.9
394
+ Label Smoothing: 0.1
395
+ BatchNorm Momentum: 0.99
396
+ Stochastic Depth Survival: 0.8
397
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1487
398
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b6_ns-51548356.pth
399
+ Results:
400
+ - Task: Image Classification
401
+ Dataset: ImageNet
402
+ Metrics:
403
+ Top 1 Accuracy: 86.45%
404
+ Top 5 Accuracy: 97.88%
405
+ - Name: tf_efficientnet_b7_ns
406
+ In Collection: Noisy Student
407
+ Metadata:
408
+ FLOPs: 48205304880
409
+ Parameters: 66349999
410
+ File Size: 266853140
411
+ Architecture:
412
+ - 1x1 Convolution
413
+ - Average Pooling
414
+ - Batch Normalization
415
+ - Convolution
416
+ - Dense Connections
417
+ - Dropout
418
+ - Inverted Residual Block
419
+ - Squeeze-and-Excitation Block
420
+ - Swish
421
+ Tasks:
422
+ - Image Classification
423
+ Training Techniques:
424
+ - AutoAugment
425
+ - FixRes
426
+ - Label Smoothing
427
+ - Noisy Student
428
+ - RMSProp
429
+ - RandAugment
430
+ - Weight Decay
431
+ Training Data:
432
+ - ImageNet
433
+ - JFT-300M
434
+ Training Resources: Cloud TPU v3 Pod
435
+ ID: tf_efficientnet_b7_ns
436
+ LR: 0.128
437
+ Epochs: 350
438
+ Dropout: 0.5
439
+ Crop Pct: '0.949'
440
+ Momentum: 0.9
441
+ Batch Size: 2048
442
+ Image Size: '600'
443
+ Weight Decay: 1.0e-05
444
+ Interpolation: bicubic
445
+ RMSProp Decay: 0.9
446
+ Label Smoothing: 0.1
447
+ BatchNorm Momentum: 0.99
448
+ Stochastic Depth Survival: 0.8
449
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1498
450
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_b7_ns-1dbc32de.pth
451
+ Results:
452
+ - Task: Image Classification
453
+ Dataset: ImageNet
454
+ Metrics:
455
+ Top 1 Accuracy: 86.83%
456
+ Top 5 Accuracy: 98.08%
457
+ - Name: tf_efficientnet_l2_ns
458
+ In Collection: Noisy Student
459
+ Metadata:
460
+ FLOPs: 611646113804
461
+ Parameters: 480310000
462
+ File Size: 1925950424
463
+ Architecture:
464
+ - 1x1 Convolution
465
+ - Average Pooling
466
+ - Batch Normalization
467
+ - Convolution
468
+ - Dense Connections
469
+ - Dropout
470
+ - Inverted Residual Block
471
+ - Squeeze-and-Excitation Block
472
+ - Swish
473
+ Tasks:
474
+ - Image Classification
475
+ Training Techniques:
476
+ - AutoAugment
477
+ - FixRes
478
+ - Label Smoothing
479
+ - Noisy Student
480
+ - RMSProp
481
+ - RandAugment
482
+ - Weight Decay
483
+ Training Data:
484
+ - ImageNet
485
+ - JFT-300M
486
+ Training Resources: Cloud TPU v3 Pod
487
+ Training Time: 6 days
488
+ ID: tf_efficientnet_l2_ns
489
+ LR: 0.128
490
+ Epochs: 350
491
+ Dropout: 0.5
492
+ Crop Pct: '0.96'
493
+ Momentum: 0.9
494
+ Batch Size: 2048
495
+ Image Size: '800'
496
+ Weight Decay: 1.0e-05
497
+ Interpolation: bicubic
498
+ RMSProp Decay: 0.9
499
+ Label Smoothing: 0.1
500
+ BatchNorm Momentum: 0.99
501
+ Stochastic Depth Survival: 0.8
502
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/efficientnet.py#L1520
503
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_efficientnet_l2_ns-df73bb44.pth
504
+ Results:
505
+ - Task: Image Classification
506
+ Dataset: ImageNet
507
+ Metrics:
508
+ Top 1 Accuracy: 88.35%
509
+ Top 5 Accuracy: 98.66%
510
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/pnasnet.md ADDED
@@ -0,0 +1,71 @@
1
+ # PNASNet
2
+
3
+ **Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to complex ones, pruning out unpromising structures as we go.
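The progressive SMBO search described above can be sketched in a few lines: start from the simplest cell, expand each survivor by one operation, score expansions with a cheap surrogate predictor, and keep only the most promising few. This is a toy illustration, not the PNAS implementation; the operation names and the surrogate below are hypothetical.

```python
def progressive_search(ops, surrogate, beam=2, depth=3):
    # SMBO-style progressive search (toy sketch): grow cell structures one
    # operation at a time, scoring candidates with a surrogate predictor
    # and pruning all but the top `beam` at each step.
    candidates = [()]                           # the empty (simplest) cell
    for _ in range(depth):
        expanded = [c + (op,) for c in candidates for op in ops]
        expanded.sort(key=surrogate, reverse=True)
        candidates = expanded[:beam]            # prune unpromising structures
    return candidates

# Hypothetical operation set and a surrogate that favours 'sep3x3' ops.
ops = ['sep3x3', 'maxpool', 'identity']
best = progressive_search(ops, surrogate=lambda c: c.count('sep3x3'))
print(best[0])  # ('sep3x3', 'sep3x3', 'sep3x3')
```

In the real method the surrogate is a learned predictor of validation accuracy, which is what makes it cheap to evaluate many candidate cells per step.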
4
+
5
+ {% include 'code_snippets.md' %}
6
+
7
+ ## How do I train this model?
8
+
9
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
10
+
11
+ ## Citation
12
+
13
+ ```BibTeX
14
+ @misc{liu2018progressive,
15
+ title={Progressive Neural Architecture Search},
16
+ author={Chenxi Liu and Barret Zoph and Maxim Neumann and Jonathon Shlens and Wei Hua and Li-Jia Li and Li Fei-Fei and Alan Yuille and Jonathan Huang and Kevin Murphy},
17
+ year={2018},
18
+ eprint={1712.00559},
19
+ archivePrefix={arXiv},
20
+ primaryClass={cs.CV}
21
+ }
22
+ ```
23
+
24
+ <!--
25
+ Type: model-index
26
+ Collections:
27
+ - Name: PNASNet
28
+ Paper:
29
+ Title: Progressive Neural Architecture Search
30
+ URL: https://paperswithcode.com/paper/progressive-neural-architecture-search
31
+ Models:
32
+ - Name: pnasnet5large
33
+ In Collection: PNASNet
34
+ Metadata:
35
+ FLOPs: 31458865950
36
+ Parameters: 86060000
37
+ File Size: 345153926
38
+ Architecture:
39
+ - Average Pooling
40
+ - Batch Normalization
41
+ - Convolution
42
+ - Depthwise Separable Convolution
43
+ - Dropout
44
+ - ReLU
45
+ Tasks:
46
+ - Image Classification
47
+ Training Techniques:
48
+ - Label Smoothing
49
+ - RMSProp
50
+ - Weight Decay
51
+ Training Data:
52
+ - ImageNet
53
+ Training Resources: 100x NVIDIA P100 GPUs
54
+ ID: pnasnet5large
55
+ LR: 0.015
56
+ Dropout: 0.5
57
+ Crop Pct: '0.911'
58
+ Momentum: 0.9
59
+ Batch Size: 1600
60
+ Image Size: '331'
61
+ Interpolation: bicubic
62
+ Label Smoothing: 0.1
63
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/pnasnet.py#L343
64
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-cadene/pnasnet5large-bf079911.pth
65
+ Results:
66
+ - Task: Image Classification
67
+ Dataset: ImageNet
68
+ Metrics:
69
+ Top 1 Accuracy: 82.78%
70
+ Top 5 Accuracy: 96.04%
71
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/regnetx.md ADDED
@@ -0,0 +1,492 @@
1
+ # RegNetX
2
+
3
+ **RegNetX** is a convolutional network design space of simple, regular models parameterised by depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$; it generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet family of models is a linear parameterisation of block widths (the design space contains only models with this linear structure):
4
+
5
+ $$ u\_{j} = w\_{0} + w\_{a}\cdot{j} $$
6
+
7
+ For **RegNetX** there are additional restrictions: $b = 1$ (the bottleneck ratio), $12 \leq d \leq 28$, and $w\_{m} \geq 2$ (the width multiplier).
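The linear width rule above is easy to evaluate directly. The parameter values below ($w_0 = 24$, $w_a = 36$, $d = 6$) are illustrative only, not the configuration of any released `regnetx_*` model; in the paper these raw widths are subsequently quantised into a small number of per-stage widths.

```python
def block_widths(w0, wa, d):
    # Linear block-width rule from the RegNet design space: u_j = w0 + wa * j
    # for each block index j < d.
    return [w0 + wa * j for j in range(d)]

widths = block_widths(w0=24, wa=36, d=6)
print(widths)  # [24, 60, 96, 132, 168, 204]
```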
8
+
9
+ {% include 'code_snippets.md' %}
10
+
11
+ ## How do I train this model?
12
+
13
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
14
+
15
+ ## Citation
16
+
17
+ ```BibTeX
18
+ @misc{radosavovic2020designing,
19
+ title={Designing Network Design Spaces},
20
+ author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár},
21
+ year={2020},
22
+ eprint={2003.13678},
23
+ archivePrefix={arXiv},
24
+ primaryClass={cs.CV}
25
+ }
26
+ ```
27
+
28
+ <!--
29
+ Type: model-index
30
+ Collections:
31
+ - Name: RegNetX
32
+ Paper:
33
+ Title: Designing Network Design Spaces
34
+ URL: https://paperswithcode.com/paper/designing-network-design-spaces
35
+ Models:
36
+ - Name: regnetx_002
37
+ In Collection: RegNetX
38
+ Metadata:
39
+ FLOPs: 255276032
40
+ Parameters: 2680000
41
+ File Size: 10862199
42
+ Architecture:
43
+ - 1x1 Convolution
44
+ - Batch Normalization
45
+ - Convolution
46
+ - Dense Connections
47
+ - Global Average Pooling
48
+ - Grouped Convolution
49
+ - ReLU
50
+ Tasks:
51
+ - Image Classification
52
+ Training Techniques:
53
+ - SGD with Momentum
54
+ - Weight Decay
55
+ Training Data:
56
+ - ImageNet
57
+ Training Resources: 8x NVIDIA V100 GPUs
58
+ ID: regnetx_002
59
+ Epochs: 100
60
+ Crop Pct: '0.875'
61
+ Momentum: 0.9
62
+ Batch Size: 1024
63
+ Image Size: '224'
64
+ Weight Decay: 5.0e-05
65
+ Interpolation: bicubic
66
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L337
67
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_002-e7e85e5c.pth
68
+ Results:
69
+ - Task: Image Classification
70
+ Dataset: ImageNet
71
+ Metrics:
72
+ Top 1 Accuracy: 68.75%
73
+ Top 5 Accuracy: 88.56%
74
+ - Name: regnetx_004
75
+ In Collection: RegNetX
76
+ Metadata:
77
+ FLOPs: 510619136
78
+ Parameters: 5160000
79
+ File Size: 20841309
80
+ Architecture:
81
+ - 1x1 Convolution
82
+ - Batch Normalization
83
+ - Convolution
84
+ - Dense Connections
85
+ - Global Average Pooling
86
+ - Grouped Convolution
87
+ - ReLU
88
+ Tasks:
89
+ - Image Classification
90
+ Training Techniques:
91
+ - SGD with Momentum
92
+ - Weight Decay
93
+ Training Data:
94
+ - ImageNet
95
+ Training Resources: 8x NVIDIA V100 GPUs
96
+ ID: regnetx_004
97
+ Epochs: 100
98
+ Crop Pct: '0.875'
99
+ Momentum: 0.9
100
+ Batch Size: 1024
101
+ Image Size: '224'
102
+ Weight Decay: 5.0e-05
103
+ Interpolation: bicubic
104
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L343
105
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_004-7d0e9424.pth
106
+ Results:
107
+ - Task: Image Classification
108
+ Dataset: ImageNet
109
+ Metrics:
110
+ Top 1 Accuracy: 72.39%
111
+ Top 5 Accuracy: 90.82%
112
+ - Name: regnetx_006
113
+ In Collection: RegNetX
114
+ Metadata:
115
+ FLOPs: 771659136
116
+ Parameters: 6200000
117
+ File Size: 24965172
118
+ Architecture:
119
+ - 1x1 Convolution
120
+ - Batch Normalization
121
+ - Convolution
122
+ - Dense Connections
123
+ - Global Average Pooling
124
+ - Grouped Convolution
125
+ - ReLU
126
+ Tasks:
127
+ - Image Classification
128
+ Training Techniques:
129
+ - SGD with Momentum
130
+ - Weight Decay
131
+ Training Data:
132
+ - ImageNet
133
+ Training Resources: 8x NVIDIA V100 GPUs
134
+ ID: regnetx_006
135
+ Epochs: 100
136
+ Crop Pct: '0.875'
137
+ Momentum: 0.9
138
+ Batch Size: 1024
139
+ Image Size: '224'
140
+ Weight Decay: 5.0e-05
141
+ Interpolation: bicubic
142
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L349
143
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_006-85ec1baa.pth
144
+ Results:
145
+ - Task: Image Classification
146
+ Dataset: ImageNet
147
+ Metrics:
148
+ Top 1 Accuracy: 73.84%
149
+ Top 5 Accuracy: 91.68%
150
+ - Name: regnetx_008
151
+ In Collection: RegNetX
152
+ Metadata:
153
+ FLOPs: 1027038208
154
+ Parameters: 7260000
155
+ File Size: 29235944
156
+ Architecture:
157
+ - 1x1 Convolution
158
+ - Batch Normalization
159
+ - Convolution
160
+ - Dense Connections
161
+ - Global Average Pooling
162
+ - Grouped Convolution
163
+ - ReLU
164
+ Tasks:
165
+ - Image Classification
166
+ Training Techniques:
167
+ - SGD with Momentum
168
+ - Weight Decay
169
+ Training Data:
170
+ - ImageNet
171
+ Training Resources: 8x NVIDIA V100 GPUs
172
+ ID: regnetx_008
173
+ Epochs: 100
174
+ Crop Pct: '0.875'
175
+ Momentum: 0.9
176
+ Batch Size: 1024
177
+ Image Size: '224'
178
+ Weight Decay: 5.0e-05
179
+ Interpolation: bicubic
180
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L355
181
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_008-d8b470eb.pth
182
+ Results:
183
+ - Task: Image Classification
184
+ Dataset: ImageNet
185
+ Metrics:
186
+ Top 1 Accuracy: 75.05%
187
+ Top 5 Accuracy: 92.34%
188
+ - Name: regnetx_016
189
+ In Collection: RegNetX
190
+ Metadata:
191
+ FLOPs: 2059337856
192
+ Parameters: 9190000
193
+ File Size: 36988158
194
+ Architecture:
195
+ - 1x1 Convolution
196
+ - Batch Normalization
197
+ - Convolution
198
+ - Dense Connections
199
+ - Global Average Pooling
200
+ - Grouped Convolution
201
+ - ReLU
202
+ Tasks:
203
+ - Image Classification
204
+ Training Techniques:
205
+ - SGD with Momentum
206
+ - Weight Decay
207
+ Training Data:
208
+ - ImageNet
209
+ Training Resources: 8x NVIDIA V100 GPUs
210
+ ID: regnetx_016
211
+ Epochs: 100
212
+ Crop Pct: '0.875'
213
+ Momentum: 0.9
214
+ Batch Size: 1024
215
+ Image Size: '224'
216
+ Weight Decay: 5.0e-05
217
+ Interpolation: bicubic
218
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L361
219
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_016-65ca972a.pth
220
+ Results:
221
+ - Task: Image Classification
222
+ Dataset: ImageNet
223
+ Metrics:
224
+ Top 1 Accuracy: 76.95%
225
+ Top 5 Accuracy: 93.43%
226
+ - Name: regnetx_032
227
+ In Collection: RegNetX
228
+ Metadata:
229
+ FLOPs: 4082555904
230
+ Parameters: 15300000
231
+ File Size: 61509573
232
+ Architecture:
233
+ - 1x1 Convolution
234
+ - Batch Normalization
235
+ - Convolution
236
+ - Dense Connections
237
+ - Global Average Pooling
238
+ - Grouped Convolution
239
+ - ReLU
240
+ Tasks:
241
+ - Image Classification
242
+ Training Techniques:
243
+ - SGD with Momentum
244
+ - Weight Decay
245
+ Training Data:
246
+ - ImageNet
247
+ Training Resources: 8x NVIDIA V100 GPUs
248
+ ID: regnetx_032
249
+ Epochs: 100
250
+ Crop Pct: '0.875'
251
+ Momentum: 0.9
252
+ Batch Size: 512
253
+ Image Size: '224'
254
+ Weight Decay: 5.0e-05
255
+ Interpolation: bicubic
256
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L367
257
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_032-ed0c7f7e.pth
258
+ Results:
259
+ - Task: Image Classification
260
+ Dataset: ImageNet
261
+ Metrics:
262
+ Top 1 Accuracy: 78.15%
263
+ Top 5 Accuracy: 94.09%
264
+ - Name: regnetx_040
265
+ In Collection: RegNetX
266
+ Metadata:
267
+ FLOPs: 5095167744
268
+ Parameters: 22120000
269
+ File Size: 88844824
270
+ Architecture:
271
+ - 1x1 Convolution
272
+ - Batch Normalization
273
+ - Convolution
274
+ - Dense Connections
275
+ - Global Average Pooling
276
+ - Grouped Convolution
277
+ - ReLU
278
+ Tasks:
279
+ - Image Classification
280
+ Training Techniques:
281
+ - SGD with Momentum
282
+ - Weight Decay
283
+ Training Data:
284
+ - ImageNet
285
+ Training Resources: 8x NVIDIA V100 GPUs
286
+ ID: regnetx_040
287
+ Epochs: 100
288
+ Crop Pct: '0.875'
289
+ Momentum: 0.9
290
+ Batch Size: 512
291
+ Image Size: '224'
292
+ Weight Decay: 5.0e-05
293
+ Interpolation: bicubic
294
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L373
295
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_040-73c2a654.pth
296
+ Results:
297
+ - Task: Image Classification
298
+ Dataset: ImageNet
299
+ Metrics:
300
+ Top 1 Accuracy: 78.48%
301
+ Top 5 Accuracy: 94.25%
302
+ - Name: regnetx_064
303
+ In Collection: RegNetX
304
+ Metadata:
305
+ FLOPs: 8303405824
306
+ Parameters: 26210000
307
+ File Size: 105184854
308
+ Architecture:
309
+ - 1x1 Convolution
310
+ - Batch Normalization
311
+ - Convolution
312
+ - Dense Connections
313
+ - Global Average Pooling
314
+ - Grouped Convolution
315
+ - ReLU
316
+ Tasks:
317
+ - Image Classification
318
+ Training Techniques:
319
+ - SGD with Momentum
320
+ - Weight Decay
321
+ Training Data:
322
+ - ImageNet
323
+ Training Resources: 8x NVIDIA V100 GPUs
324
+ ID: regnetx_064
325
+ Epochs: 100
326
+ Crop Pct: '0.875'
327
+ Momentum: 0.9
328
+ Batch Size: 512
329
+ Image Size: '224'
330
+ Weight Decay: 5.0e-05
331
+ Interpolation: bicubic
332
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L379
333
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_064-29278baa.pth
334
+ Results:
335
+ - Task: Image Classification
336
+ Dataset: ImageNet
337
+ Metrics:
338
+ Top 1 Accuracy: 79.06%
339
+ Top 5 Accuracy: 94.47%
340
+ - Name: regnetx_080
341
+ In Collection: RegNetX
342
+ Metadata:
343
+ FLOPs: 10276726784
344
+ Parameters: 39570000
345
+ File Size: 158720042
346
+ Architecture:
347
+ - 1x1 Convolution
348
+ - Batch Normalization
349
+ - Convolution
350
+ - Dense Connections
351
+ - Global Average Pooling
352
+ - Grouped Convolution
353
+ - ReLU
354
+ Tasks:
355
+ - Image Classification
356
+ Training Techniques:
357
+ - SGD with Momentum
358
+ - Weight Decay
359
+ Training Data:
360
+ - ImageNet
361
+ Training Resources: 8x NVIDIA V100 GPUs
362
+ ID: regnetx_080
363
+ Epochs: 100
364
+ Crop Pct: '0.875'
365
+ Momentum: 0.9
366
+ Batch Size: 512
367
+ Image Size: '224'
368
+ Weight Decay: 5.0e-05
369
+ Interpolation: bicubic
370
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L385
371
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_080-7c7fcab1.pth
372
+ Results:
373
+ - Task: Image Classification
374
+ Dataset: ImageNet
375
+ Metrics:
376
+ Top 1 Accuracy: 79.21%
377
+ Top 5 Accuracy: 94.55%
378
+ - Name: regnetx_120
379
+ In Collection: RegNetX
380
+ Metadata:
381
+ FLOPs: 15536378368
382
+ Parameters: 46110000
383
+ File Size: 184866342
384
+ Architecture:
385
+ - 1x1 Convolution
386
+ - Batch Normalization
387
+ - Convolution
388
+ - Dense Connections
389
+ - Global Average Pooling
390
+ - Grouped Convolution
391
+ - ReLU
392
+ Tasks:
393
+ - Image Classification
394
+ Training Techniques:
395
+ - SGD with Momentum
396
+ - Weight Decay
397
+ Training Data:
398
+ - ImageNet
399
+ Training Resources: 8x NVIDIA V100 GPUs
400
+ ID: regnetx_120
401
+ Epochs: 100
402
+ Crop Pct: '0.875'
403
+ Momentum: 0.9
404
+ Batch Size: 512
405
+ Image Size: '224'
406
+ Weight Decay: 5.0e-05
407
+ Interpolation: bicubic
408
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L391
409
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_120-65d5521e.pth
410
+ Results:
411
+ - Task: Image Classification
412
+ Dataset: ImageNet
413
+ Metrics:
414
+ Top 1 Accuracy: 79.61%
415
+ Top 5 Accuracy: 94.73%
416
+ - Name: regnetx_160
417
+ In Collection: RegNetX
418
+ Metadata:
419
+ FLOPs: 20491740672
420
+ Parameters: 54280000
421
+ File Size: 217623862
422
+ Architecture:
423
+ - 1x1 Convolution
424
+ - Batch Normalization
425
+ - Convolution
426
+ - Dense Connections
427
+ - Global Average Pooling
428
+ - Grouped Convolution
429
+ - ReLU
430
+ Tasks:
431
+ - Image Classification
432
+ Training Techniques:
433
+ - SGD with Momentum
434
+ - Weight Decay
435
+ Training Data:
436
+ - ImageNet
437
+ Training Resources: 8x NVIDIA V100 GPUs
438
+ ID: regnetx_160
439
+ Epochs: 100
440
+ Crop Pct: '0.875'
441
+ Momentum: 0.9
442
+ Batch Size: 512
443
+ Image Size: '224'
444
+ Weight Decay: 5.0e-05
445
+ Interpolation: bicubic
446
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L397
447
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_160-c98c4112.pth
448
+ Results:
449
+ - Task: Image Classification
450
+ Dataset: ImageNet
451
+ Metrics:
452
+ Top 1 Accuracy: 79.84%
453
+ Top 5 Accuracy: 94.82%
454
+ - Name: regnetx_320
455
+ In Collection: RegNetX
456
+ Metadata:
457
+ FLOPs: 40798958592
458
+ Parameters: 107810000
459
+ File Size: 431962133
460
+ Architecture:
461
+ - 1x1 Convolution
462
+ - Batch Normalization
463
+ - Convolution
464
+ - Dense Connections
465
+ - Global Average Pooling
466
+ - Grouped Convolution
467
+ - ReLU
468
+ Tasks:
469
+ - Image Classification
470
+ Training Techniques:
471
+ - SGD with Momentum
472
+ - Weight Decay
473
+ Training Data:
474
+ - ImageNet
475
+ Training Resources: 8x NVIDIA V100 GPUs
476
+ ID: regnetx_320
477
+ Epochs: 100
478
+ Crop Pct: '0.875'
479
+ Momentum: 0.9
480
+ Batch Size: 256
481
+ Image Size: '224'
482
+ Weight Decay: 5.0e-05
483
+ Interpolation: bicubic
484
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L403
485
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnetx_320-8ea38b93.pth
486
+ Results:
487
+ - Task: Image Classification
488
+ Dataset: ImageNet
489
+ Metrics:
490
+ Top 1 Accuracy: 80.25%
491
+ Top 5 Accuracy: 95.03%
492
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/regnety.md ADDED
@@ -0,0 +1,506 @@
1
+ # RegNetY
2
+
3
+ **RegNetY** is a convolutional network design space of simple, regular models parameterised by depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$; it generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet family of models is a linear parameterisation of block widths (the design space contains only models with this linear structure):
4
+
5
+ $$ u\_{j} = w\_{0} + w\_{a}\cdot{j} $$
6
+
7
+ For **RegNetX**, the authors impose additional restrictions: $b = 1$ (the bottleneck ratio), $12 \leq d \leq 28$, and $w\_{m} \geq 2$ (the width multiplier).
8
+
9
+ For **RegNetY**, the authors make one change: the inclusion of [Squeeze-and-Excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block).
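The squeeze-and-excitation mechanism that distinguishes RegNetY can be sketched in plain Python: globally average-pool each channel ("squeeze"), pass the result through a bottleneck MLP with a sigmoid ("excitation"), and use the outputs as per-channel gates. The weight matrices below are toy values for illustration, not trained parameters.

```python
import math

def se_gate(channel_means, w_down, w_up):
    # "Squeeze": channel_means are the globally average-pooled activations,
    # one scalar per channel.
    # "Excitation": bottleneck linear -> ReLU -> linear -> sigmoid; the
    # resulting gates rescale each channel of the feature map.
    hidden = [max(0.0, sum(w * m for w, m in zip(row, channel_means)))
              for row in w_down]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in w_up]
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]

# Three channels reduced to a single hidden unit, then expanded back.
gates = se_gate([0.5, -1.0, 2.0],
                w_down=[[0.1, 0.2, 0.3]],
                w_up=[[1.0], [0.5], [-0.5]])
print(gates)  # one sigmoid gate per channel, each in (0, 1)
```

In the network the gates multiply the corresponding channels element-wise, letting the block emphasise informative channels and suppress others at negligible FLOP cost.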
10
+
11
+ {% include 'code_snippets.md' %}
12
+
13
+ ## How do I train this model?
14
+
15
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
16
+
17
+ ## Citation
18
+
19
+ ```BibTeX
20
+ @misc{radosavovic2020designing,
21
+ title={Designing Network Design Spaces},
22
+ author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár},
23
+ year={2020},
24
+ eprint={2003.13678},
25
+ archivePrefix={arXiv},
26
+ primaryClass={cs.CV}
27
+ }
28
+ ```
29
+
30
+ <!--
31
+ Type: model-index
32
+ Collections:
33
+ - Name: RegNetY
34
+ Paper:
35
+ Title: Designing Network Design Spaces
36
+ URL: https://paperswithcode.com/paper/designing-network-design-spaces
37
+ Models:
38
+ - Name: regnety_002
39
+ In Collection: RegNetY
40
+ Metadata:
41
+ FLOPs: 255754236
42
+ Parameters: 3160000
43
+ File Size: 12782926
44
+ Architecture:
45
+ - 1x1 Convolution
46
+ - Batch Normalization
47
+ - Convolution
48
+ - Dense Connections
49
+ - Global Average Pooling
50
+ - Grouped Convolution
51
+ - ReLU
52
+ - Squeeze-and-Excitation Block
53
+ Tasks:
54
+ - Image Classification
55
+ Training Techniques:
56
+ - SGD with Momentum
57
+ - Weight Decay
58
+ Training Data:
59
+ - ImageNet
60
+ Training Resources: 8x NVIDIA V100 GPUs
61
+ ID: regnety_002
62
+ Epochs: 100
63
+ Crop Pct: '0.875'
64
+ Momentum: 0.9
65
+ Batch Size: 1024
66
+ Image Size: '224'
67
+ Weight Decay: 5.0e-05
68
+ Interpolation: bicubic
69
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L409
70
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_002-e68ca334.pth
71
+ Results:
72
+ - Task: Image Classification
73
+ Dataset: ImageNet
74
+ Metrics:
75
+ Top 1 Accuracy: 70.28%
76
+ Top 5 Accuracy: 89.55%
77
+ - Name: regnety_004
78
+ In Collection: RegNetY
79
+ Metadata:
80
+ FLOPs: 515664568
81
+ Parameters: 4340000
82
+ File Size: 17542753
83
+ Architecture:
84
+ - 1x1 Convolution
85
+ - Batch Normalization
86
+ - Convolution
87
+ - Dense Connections
88
+ - Global Average Pooling
89
+ - Grouped Convolution
90
+ - ReLU
91
+ - Squeeze-and-Excitation Block
92
+ Tasks:
93
+ - Image Classification
94
+ Training Techniques:
95
+ - SGD with Momentum
96
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_004
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L415
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_004-0db870e6.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 74.02%
+ Top 5 Accuracy: 91.76%
+ - Name: regnety_006
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 771746928
+ Parameters: 6060000
+ File Size: 24394127
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_006
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L421
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_006-c67e57ec.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 75.27%
+ Top 5 Accuracy: 92.53%
+ - Name: regnety_008
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 1023448952
+ Parameters: 6260000
+ File Size: 25223268
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_008
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L427
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_008-dc900dbe.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 76.32%
+ Top 5 Accuracy: 93.07%
+ - Name: regnety_016
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 2070895094
+ Parameters: 11200000
+ File Size: 45115589
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_016
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 1024
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L433
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_016-54367f74.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.87%
+ Top 5 Accuracy: 93.73%
+ - Name: regnety_032
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 4081118714
+ Parameters: 19440000
+ File Size: 78084523
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_032
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 512
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L439
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/regnety_032_ra-7f2439f9.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 82.01%
+ Top 5 Accuracy: 95.91%
+ - Name: regnety_040
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 5105933432
+ Parameters: 20650000
+ File Size: 82913909
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_040
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 512
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L445
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_040-f0d569f9.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.23%
+ Top 5 Accuracy: 94.64%
+ - Name: regnety_064
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 8167730444
+ Parameters: 30580000
+ File Size: 122751416
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_064
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 512
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L451
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_064-0a48325c.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.73%
+ Top 5 Accuracy: 94.76%
+ - Name: regnety_080
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 10233621420
+ Parameters: 39180000
+ File Size: 157124671
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_080
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 512
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L457
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_080-e7f3eb93.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.87%
+ Top 5 Accuracy: 94.83%
+ - Name: regnety_120
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 15542094856
+ Parameters: 51820000
+ File Size: 207743949
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_120
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 512
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L463
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_120-721ba79a.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.38%
+ Top 5 Accuracy: 95.12%
+ - Name: regnety_160
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 20450196852
+ Parameters: 83590000
+ File Size: 334916722
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_160
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 512
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L469
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_160-d64013cd.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.28%
+ Top 5 Accuracy: 94.97%
+ - Name: regnety_320
+ In Collection: RegNetY
+ Metadata:
+ FLOPs: 41492618394
+ Parameters: 145050000
+ File Size: 580891965
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Grouped Convolution
+ - ReLU
+ - Squeeze-and-Excitation Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 8x NVIDIA V100 GPUs
+ ID: regnety_320
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 5.0e-05
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/regnet.py#L475
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-regnet/regnety_320-ba464b29.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.8%
+ Top 5 Accuracy: 95.25%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/res2net.md ADDED
@@ -0,0 +1,260 @@
+ # Res2Net
+
+ **Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-like connections within a single residual block, representing multi-scale features at a granular level and increasing the range of receptive fields for each network layer.
+
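The hierarchical connections described above can be sketched in a few lines. This is a toy illustration on plain Python numbers, not timm's actual convolutional implementation: `conv` stands in for the per-split 3x3 convolution, and the function name is hypothetical.

```python
def res2net_hierarchy(splits, conv):
    """Toy Res2Net hierarchy: y1 = x1, y2 = conv(x2), yi = conv(xi + y_{i-1}).

    Each later output mixes in the previous one, so the effective
    receptive field grows from split to split.
    """
    outputs = []
    prev = None
    for i, x in enumerate(splits):
        if i == 0:
            y = x               # first split: identity
        elif i == 1:
            y = conv(x)         # second split: plain conv
        else:
            y = conv(x + prev)  # later splits reuse the previous output
        outputs.append(y)
        prev = y
    return outputs  # the real block concatenates these along channels


# With a stand-in "conv" that doubles its input:
print(res2net_hierarchy([1.0, 2.0, 3.0, 4.0], conv=lambda v: 2 * v))
# [1.0, 4.0, 14.0, 36.0]
```

Note how the last output (36.0) depends on every earlier split, which is the "hierarchical residual-like connections within one block" idea in miniature.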
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{Gao_2021,
+ title={Res2Net: A New Multi-Scale Backbone Architecture},
+ volume={43},
+ ISSN={1939-3539},
+ url={http://dx.doi.org/10.1109/TPAMI.2019.2938758},
+ DOI={10.1109/tpami.2019.2938758},
+ number={2},
+ journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
+ publisher={Institute of Electrical and Electronics Engineers (IEEE)},
+ author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
+ year={2021},
+ month={Feb},
+ pages={652–662}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Res2Net
+ Paper:
+ Title: 'Res2Net: A New Multi-scale Backbone Architecture'
+ URL: https://paperswithcode.com/paper/res2net-a-new-multi-scale-backbone
+ Models:
+ - Name: res2net101_26w_4s
+ In Collection: Res2Net
+ Metadata:
+ FLOPs: 10415881200
+ Parameters: 45210000
+ File Size: 181456059
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - ReLU
+ - Res2Net Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x Titan Xp GPUs
+ ID: res2net101_26w_4s
+ LR: 0.1
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L152
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net101_26w_4s-02a759a1.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.19%
+ Top 5 Accuracy: 94.43%
+ - Name: res2net50_14w_8s
+ In Collection: Res2Net
+ Metadata:
+ FLOPs: 5403546768
+ Parameters: 25060000
+ File Size: 100638543
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - ReLU
+ - Res2Net Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x Titan Xp GPUs
+ ID: res2net50_14w_8s
+ LR: 0.1
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L196
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_14w_8s-6527dddc.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.14%
+ Top 5 Accuracy: 93.86%
+ - Name: res2net50_26w_4s
+ In Collection: Res2Net
+ Metadata:
+ FLOPs: 5499974064
+ Parameters: 25700000
+ File Size: 103110087
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - ReLU
+ - Res2Net Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x Titan Xp GPUs
+ ID: res2net50_26w_4s
+ LR: 0.1
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L141
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_4s-06e79181.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.99%
+ Top 5 Accuracy: 93.85%
+ - Name: res2net50_26w_6s
+ In Collection: Res2Net
+ Metadata:
+ FLOPs: 8130156528
+ Parameters: 37050000
+ File Size: 148603239
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - ReLU
+ - Res2Net Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x Titan Xp GPUs
+ ID: res2net50_26w_6s
+ LR: 0.1
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L163
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_6s-19041792.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.57%
+ Top 5 Accuracy: 94.12%
+ - Name: res2net50_26w_8s
+ In Collection: Res2Net
+ Metadata:
+ FLOPs: 10760338992
+ Parameters: 48400000
+ File Size: 194085165
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - ReLU
+ - Res2Net Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x Titan Xp GPUs
+ ID: res2net50_26w_8s
+ LR: 0.1
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L174
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_26w_8s-2c7c9f12.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.19%
+ Top 5 Accuracy: 94.37%
+ - Name: res2net50_48w_2s
+ In Collection: Res2Net
+ Metadata:
+ FLOPs: 5375291520
+ Parameters: 25290000
+ File Size: 101421406
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - ReLU
+ - Res2Net Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x Titan Xp GPUs
+ ID: res2net50_48w_2s
+ LR: 0.1
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L185
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2net50_48w_2s-afed724a.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.53%
+ Top 5 Accuracy: 93.56%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/res2next.md ADDED
@@ -0,0 +1,75 @@
+ # Res2NeXt
+
+ **Res2NeXt** is an image model that employs a variation on [ResNeXt](https://paperswithcode.com/method/resnext) bottleneck residual blocks. The motivation is to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-like connections within a single residual block, representing multi-scale features at a granular level and increasing the range of receptive fields for each network layer.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{Gao_2021,
+ title={Res2Net: A New Multi-Scale Backbone Architecture},
+ volume={43},
+ ISSN={1939-3539},
+ url={http://dx.doi.org/10.1109/TPAMI.2019.2938758},
+ DOI={10.1109/tpami.2019.2938758},
+ number={2},
+ journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
+ publisher={Institute of Electrical and Electronics Engineers (IEEE)},
+ author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
+ year={2021},
+ month={Feb},
+ pages={652–662}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: Res2NeXt
+ Paper:
+ Title: 'Res2Net: A New Multi-scale Backbone Architecture'
+ URL: https://paperswithcode.com/paper/res2net-a-new-multi-scale-backbone
+ Models:
+ - Name: res2next50
+ In Collection: Res2NeXt
+ Metadata:
+ FLOPs: 5396798208
+ Parameters: 24670000
+ File Size: 99019592
+ Architecture:
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - ReLU
+ - Res2NeXt Block
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 4x Titan Xp GPUs
+ ID: res2next50
+ LR: 0.1
+ Epochs: 100
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 256
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/res2net.py#L207
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-res2net/res2next50_4s-6ef7e7bf.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.24%
+ Top 5 Accuracy: 93.91%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnest.md ADDED
@@ -0,0 +1,408 @@
+ # ResNeSt
+
+ A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks [Split-Attention blocks](https://paperswithcode.com/method/split-attention). The cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}\{V^{1},V^{2},\cdots,V^{K}\}$. As in standard residual blocks, the final output $Y$ of the Split-Attention block is produced using a shortcut connection: $Y=V+X$, if the input and output feature-maps share the same shape. For blocks with a stride, an appropriate transformation $\mathcal{T}$ is applied to the shortcut connection to align the output shapes: $Y=V+\mathcal{T}(X)$. For example, $\mathcal{T}$ can be strided convolution or combined convolution-with-pooling.
+
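The concatenation-plus-shortcut step above can be sketched as follows. This is a minimal toy on plain Python lists, not timm's implementation: the function name is hypothetical, `cardinal_outputs` stands for the $K$ per-group features $V^{k}$, and `transform` plays the role of $\mathcal{T}$.

```python
def resnest_block_output(cardinal_outputs, x, transform=None):
    """Toy version of Y = Concat(V^1, ..., V^K) + X, or Y = Concat(...) + T(X).

    `transform` models the shortcut transformation T used when a stride
    changes the shape (a strided conv or conv-with-pooling in the real model).
    """
    v = [c for group in cardinal_outputs for c in group]  # channel-wise concat
    shortcut = transform(x) if transform is not None else x
    if len(shortcut) != len(v):
        raise ValueError("shapes differ: supply a transform T for the shortcut")
    return [a + b for a, b in zip(v, shortcut)]


# Same shape: plain identity shortcut, Y = V + X.
print(resnest_block_output([[1, 2], [3, 4]], [10, 10, 10, 10]))
# [11, 12, 13, 14]

# Strided block: T (here a stride-2 subsample) aligns X with V, Y = V + T(X).
print(resnest_block_output([[1, 2]], [10, 20, 30, 40], transform=lambda s: s[::2]))
# [11, 32]
```

The second call shows why $\mathcal{T}$ is needed: without it the shortcut and the concatenated groups would not line up element-wise.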
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{zhang2020resnest,
+ title={ResNeSt: Split-Attention Networks},
+ author={Hang Zhang and Chongruo Wu and Zhongyue Zhang and Yi Zhu and Haibin Lin and Zhi Zhang and Yue Sun and Tong He and Jonas Mueller and R. Manmatha and Mu Li and Alexander Smola},
+ year={2020},
+ eprint={2004.08955},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: ResNeSt
+ Paper:
+ Title: 'ResNeSt: Split-Attention Networks'
+ URL: https://paperswithcode.com/paper/resnest-split-attention-networks
+ Models:
+ - Name: resnest101e
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 17423183648
+ Parameters: 48280000
+ File Size: 193782911
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - AutoAugment
+ - DropBlock
+ - Label Smoothing
+ - Mixup
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 64x NVIDIA V100 GPUs
+ ID: resnest101e
+ LR: 0.1
+ Epochs: 270
+ Layers: 101
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 4096
+ Image Size: '256'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L182
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest101-22405ba7.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 82.88%
+ Top 5 Accuracy: 96.31%
+ - Name: resnest14d
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 3548594464
+ Parameters: 10610000
+ File Size: 42562639
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - AutoAugment
+ - DropBlock
+ - Label Smoothing
+ - Mixup
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 64x NVIDIA V100 GPUs
+ ID: resnest14d
+ LR: 0.1
+ Epochs: 270
+ Layers: 14
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8192
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L148
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_resnest14-9c8fe254.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 75.51%
+ Top 5 Accuracy: 92.52%
+ - Name: resnest200e
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 45954387872
+ Parameters: 70200000
+ File Size: 193782911
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - AutoAugment
+ - DropBlock
+ - Label Smoothing
+ - Mixup
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 64x NVIDIA V100 GPUs
+ ID: resnest200e
+ LR: 0.1
+ Epochs: 270
+ Layers: 200
+ Dropout: 0.2
+ Crop Pct: '0.909'
+ Momentum: 0.9
+ Batch Size: 2048
+ Image Size: '320'
+ Weight Decay: 0.0001
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L194
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest101-22405ba7.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 83.85%
+ Top 5 Accuracy: 96.89%
+ - Name: resnest269e
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 100830307104
+ Parameters: 110930000
+ File Size: 445402691
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - AutoAugment
+ - DropBlock
+ - Label Smoothing
+ - Mixup
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 64x NVIDIA V100 GPUs
+ ID: resnest269e
+ LR: 0.1
+ Epochs: 270
+ Layers: 269
+ Dropout: 0.2
+ Crop Pct: '0.928'
+ Momentum: 0.9
+ Batch Size: 2048
+ Image Size: '416'
+ Weight Decay: 0.0001
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L206
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest269-0cc87c48.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 84.53%
+ Top 5 Accuracy: 96.99%
+ - Name: resnest26d
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 4678918720
+ Parameters: 17070000
+ File Size: 68470242
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - AutoAugment
+ - DropBlock
+ - Label Smoothing
+ - Mixup
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 64x NVIDIA V100 GPUs
+ ID: resnest26d
+ LR: 0.1
+ Epochs: 270
+ Layers: 26
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8192
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L159
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gluon_resnest26-50eb607c.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.48%
+ Top 5 Accuracy: 94.3%
+ - Name: resnest50d
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 6937106336
+ Parameters: 27480000
+ File Size: 110273258
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - AutoAugment
+ - DropBlock
+ - Label Smoothing
+ - Mixup
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 64x NVIDIA V100 GPUs
+ ID: resnest50d
+ LR: 0.1
+ Epochs: 270
+ Layers: 50
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8192
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L170
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50-528c19ca.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.96%
+ Top 5 Accuracy: 95.38%
+ - Name: resnest50d_1s4x24d
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 5686764544
+ Parameters: 25680000
+ File Size: 103045531
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - AutoAugment
+ - DropBlock
+ - Label Smoothing
+ - Mixup
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ Training Resources: 64x NVIDIA V100 GPUs
+ ID: resnest50d_1s4x24d
+ LR: 0.1
+ Epochs: 270
+ Layers: 50
+ Dropout: 0.2
+ Crop Pct: '0.875'
+ Momentum: 0.9
+ Batch Size: 8192
+ Image Size: '224'
+ Weight Decay: 0.0001
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L229
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50_fast_1s4x24d-d4a4f76f.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 81.0%
+ Top 5 Accuracy: 95.33%
+ - Name: resnest50d_4s2x40d
+ In Collection: ResNeSt
+ Metadata:
+ FLOPs: 5657064720
+ Parameters: 30420000
+ File Size: 122133282
+ Architecture:
+ - 1x1 Convolution
+ - Convolution
+ - Dense Connections
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Connection
+ - Softmax
+ - Split Attention
+ Tasks:
+ - Image Classification
+ Training Techniques:
380
+ - AutoAugment
381
+ - DropBlock
382
+ - Label Smoothing
383
+ - Mixup
384
+ - SGD with Momentum
385
+ - Weight Decay
386
+ Training Data:
387
+ - ImageNet
388
+ Training Resources: 64x NVIDIA V100 GPUs
389
+ ID: resnest50d_4s2x40d
390
+ LR: 0.1
391
+ Epochs: 270
392
+ Layers: 50
393
+ Dropout: 0.2
394
+ Crop Pct: '0.875'
395
+ Momentum: 0.9
396
+ Batch Size: 8192
397
+ Image Size: '224'
398
+ Weight Decay: 0.0001
399
+ Interpolation: bicubic
400
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnest.py#L218
401
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-resnest/resnest50_fast_4s2x40d-41d14ed0.pth
402
+ Results:
403
+ - Task: Image Classification
404
+ Dataset: ImageNet
405
+ Metrics:
406
+ Top 1 Accuracy: 81.11%
407
+ Top 5 Accuracy: 95.55%
408
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnet-d.md ADDED
@@ -0,0 +1,263 @@
+ # ResNet-D
+
+ **ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the [1×1 convolution](https://paperswithcode.com/method/1x1-convolution) for the downsampling block ignores 3/4 of the input feature maps, so the block is modified so that no information is ignored.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{he2018bag,
+ title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
+ author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
+ year={2018},
+ eprint={1812.01187},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
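The motivation behind the "D" tweak can be made concrete with a small counting sketch. This is a toy illustration, not timm's implementation: a 1×1 convolution with stride 2 only ever reads one position per 2×2 window, discarding 3/4 of the spatial positions, while average pooling first lets every position contribute before the 1×1 projection.

```python
# Toy illustration (not timm's implementation) of the ResNet-D downsampling tweak.

def strided_1x1_positions(h, w, stride=2):
    """Spatial positions actually read by a stride-2 1x1 convolution."""
    return {(i, j) for i in range(0, h, stride) for j in range(0, w, stride)}

def avgpool_then_1x1_positions(h, w, k=2):
    """Positions read when a 2x2 average pool precedes a stride-1 1x1 conv."""
    read = set()
    for i in range(0, h, k):
        for j in range(0, w, k):
            read.update((i + di, j + dj) for di in range(k) for dj in range(k))
    return read

h = w = 4
assert len(strided_1x1_positions(h, w)) == 4        # only 4 of 16 positions used
assert len(avgpool_then_1x1_positions(h, w)) == 16  # all 16 positions contribute
```

Both paths halve the spatial resolution; the difference is purely in how many input positions influence the result.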
+ <!--
+ Type: model-index
+ Collections:
+ - Name: ResNet-D
+ Paper:
+ Title: Bag of Tricks for Image Classification with Convolutional Neural Networks
+ URL: https://paperswithcode.com/paper/bag-of-tricks-for-image-classification-with
+ Models:
+ - Name: resnet101d
+ In Collection: ResNet-D
+ Metadata:
+ FLOPs: 13805639680
+ Parameters: 44570000
+ File Size: 178791263
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet101d
+ Crop Pct: '0.94'
+ Image Size: '256'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L716
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet101d_ra2-2803ffab.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 82.31%
+ Top 5 Accuracy: 96.06%
+ - Name: resnet152d
+ In Collection: ResNet-D
+ Metadata:
+ FLOPs: 20155275264
+ Parameters: 60210000
+ File Size: 241596837
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet152d
+ Crop Pct: '0.94'
+ Image Size: '256'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L724
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet152d_ra2-5cac0439.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 83.13%
+ Top 5 Accuracy: 96.35%
+ - Name: resnet18d
+ In Collection: ResNet-D
+ Metadata:
+ FLOPs: 2645205760
+ Parameters: 11710000
+ File Size: 46893231
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet18d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L649
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet18d_ra2-48a79e06.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 72.27%
+ Top 5 Accuracy: 90.69%
+ - Name: resnet200d
+ In Collection: ResNet-D
+ Metadata:
+ FLOPs: 26034378752
+ Parameters: 64690000
+ File Size: 259662933
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet200d
+ Crop Pct: '0.94'
+ Image Size: '256'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L749
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet200d_ra2-bdba9bf9.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 83.24%
+ Top 5 Accuracy: 96.49%
+ - Name: resnet26d
+ In Collection: ResNet-D
+ Metadata:
+ FLOPs: 3335276032
+ Parameters: 16010000
+ File Size: 64209122
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet26d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L683
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26d-69e92c46.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 76.69%
+ Top 5 Accuracy: 93.15%
+ - Name: resnet34d
+ In Collection: ResNet-D
+ Metadata:
+ FLOPs: 5026601728
+ Parameters: 21820000
+ File Size: 87369807
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet34d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L666
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34d_ra2-f8dcfcaf.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.11%
+ Top 5 Accuracy: 93.38%
+ - Name: resnet50d
+ In Collection: ResNet-D
+ Metadata:
+ FLOPs: 5591002624
+ Parameters: 25580000
+ File Size: 102567109
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet50d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L699
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 80.55%
+ Top 5 Accuracy: 95.16%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnet.md ADDED
@@ -0,0 +1,378 @@
+ # ResNet
+
+ **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form networks: e.g. a ResNet-50 has fifty layers using these blocks.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/HeZRS15,
+ author = {Kaiming He and
+ Xiangyu Zhang and
+ Shaoqing Ren and
+ Jian Sun},
+ title = {Deep Residual Learning for Image Recognition},
+ journal = {CoRR},
+ volume = {abs/1512.03385},
+ year = {2015},
+ url = {http://arxiv.org/abs/1512.03385},
+ archivePrefix = {arXiv},
+ eprint = {1512.03385},
+ timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
+ biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
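The residual mapping can be sketched numerically. This is a toy example, not timm's implementation: a block computes `relu(F(x) + x)`, so if the stacked layers drive the residual function F toward zero, the block degenerates to a (rectified) identity, which is what makes very deep stacks trainable.

```python
# Toy numeric sketch (not timm's implementation) of a residual block:
# the stacked layers fit a residual F(x); the block outputs relu(F(x) + x).

def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, f):
    """Apply a residual function f, add the identity shortcut, then ReLU."""
    fx = f(x)
    return relu([a + b for a, b in zip(fx, x)])

x = [1.0, -2.0, 3.0]
zero_f = lambda v: [0.0] * len(v)            # residual collapsed to zero
assert residual_block(x, zero_f) == relu(x)  # block reduces to identity + ReLU
```

In the real networks F is a stack of convolutions and batch norms, and a ResNet is simply many such blocks composed in sequence.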
+ <!--
+ Type: model-index
+ Collections:
+ - Name: ResNet
+ Paper:
+ Title: Deep Residual Learning for Image Recognition
+ URL: https://paperswithcode.com/paper/deep-residual-learning-for-image-recognition
+ Models:
+ - Name: resnet18
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 2337073152
+ Parameters: 11690000
+ File Size: 46827520
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet18
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L641
+ Weights: https://download.pytorch.org/models/resnet18-5c106cde.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 69.74%
+ Top 5 Accuracy: 89.09%
+ - Name: resnet26
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 3026804736
+ Parameters: 16000000
+ File Size: 64129972
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet26
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L675
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet26-9aa10e23.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 75.29%
+ Top 5 Accuracy: 92.57%
+ - Name: resnet34
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 4718469120
+ Parameters: 21800000
+ File Size: 87290831
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet34
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L658
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet34-43635321.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 75.11%
+ Top 5 Accuracy: 92.28%
+ - Name: resnet50
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 5282531328
+ Parameters: 25560000
+ File Size: 102488165
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnet50
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L691
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50_ram-a26f946b.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.04%
+ Top 5 Accuracy: 94.39%
+ - Name: resnetblur50
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 6621606912
+ Parameters: 25560000
+ File Size: 102488165
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Blur Pooling
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnetblur50
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/resnet.py#L1160
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnetblur50-84f4748f.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.29%
+ Top 5 Accuracy: 94.64%
+ - Name: tv_resnet101
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 10068547584
+ Parameters: 44550000
+ File Size: 178728960
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ ID: tv_resnet101
+ LR: 0.1
+ Epochs: 90
+ Crop Pct: '0.875'
+ LR Gamma: 0.1
+ Momentum: 0.9
+ Batch Size: 32
+ Image Size: '224'
+ LR Step Size: 30
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L761
+ Weights: https://download.pytorch.org/models/resnet101-5d3b4d8f.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.37%
+ Top 5 Accuracy: 93.56%
+ - Name: tv_resnet152
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 14857660416
+ Parameters: 60190000
+ File Size: 241530880
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ ID: tv_resnet152
+ LR: 0.1
+ Epochs: 90
+ Crop Pct: '0.875'
+ LR Gamma: 0.1
+ Momentum: 0.9
+ Batch Size: 32
+ Image Size: '224'
+ LR Step Size: 30
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L769
+ Weights: https://download.pytorch.org/models/resnet152-b121ed2d.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 78.32%
+ Top 5 Accuracy: 94.05%
+ - Name: tv_resnet34
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 4718469120
+ Parameters: 21800000
+ File Size: 87306240
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ ID: tv_resnet34
+ LR: 0.1
+ Epochs: 90
+ Crop Pct: '0.875'
+ LR Gamma: 0.1
+ Momentum: 0.9
+ Batch Size: 32
+ Image Size: '224'
+ LR Step Size: 30
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L745
+ Weights: https://download.pytorch.org/models/resnet34-333f7ec4.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 73.3%
+ Top 5 Accuracy: 91.42%
+ - Name: tv_resnet50
+ In Collection: ResNet
+ Metadata:
+ FLOPs: 5282531328
+ Parameters: 25560000
+ File Size: 102502400
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Bottleneck Residual Block
+ - Convolution
+ - Global Average Pooling
+ - Max Pooling
+ - ReLU
+ - Residual Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ ID: tv_resnet50
+ LR: 0.1
+ Epochs: 90
+ Crop Pct: '0.875'
+ LR Gamma: 0.1
+ Momentum: 0.9
+ Batch Size: 32
+ Image Size: '224'
+ LR Step Size: 30
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L753
+ Weights: https://download.pytorch.org/models/resnet50-19c8e357.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 76.16%
+ Top 5 Accuracy: 92.88%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/resnext.md ADDED
@@ -0,0 +1,183 @@
+ # ResNeXt
+
+ A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{DBLP:journals/corr/XieGDTH16,
+ author = {Saining Xie and
+ Ross B. Girshick and
+ Piotr Doll{\'{a}}r and
+ Zhuowen Tu and
+ Kaiming He},
+ title = {Aggregated Residual Transformations for Deep Neural Networks},
+ journal = {CoRR},
+ volume = {abs/1611.05431},
+ year = {2016},
+ url = {http://arxiv.org/abs/1611.05431},
+ archivePrefix = {arXiv},
+ eprint = {1611.05431},
+ timestamp = {Mon, 13 Aug 2018 16:45:58 +0200},
+ biburl = {https://dblp.org/rec/journals/corr/XieGDTH16.bib},
+ bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
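The aggregated transformation can be sketched as follows. This is toy code, not timm's implementation: a block computes $y = x + \sum_{i=1}^{C} T_i(x)$ over $C$ branches of identical topology (in the real network each branch is a 1×1 → 3×3 → 1×1 convolution stack, implemented efficiently as a grouped convolution).

```python
# Toy sketch (not timm's implementation) of a ResNeXt aggregated transformation:
# C same-topology branches are applied and summed, then the shortcut is added.

def aggregate(x, branches):
    """Sum the outputs of the parallel branches, then add the identity shortcut."""
    out = [0.0] * len(x)
    for t in branches:
        out = [o + v for o, v in zip(out, t(x))]
    return [o + xi for o, xi in zip(out, x)]

cardinality = 32
# Each hypothetical branch here just scales its input; real branches are
# small bottleneck convolution stacks sharing the same shape.
branches = [lambda v, s=0.01: [s * e for e in v] for _ in range(cardinality)]
y = aggregate([1.0, 2.0], branches)
assert all(abs(a - b) < 1e-9 for a, b in zip(y, [1.32, 2.64]))
```

Increasing the cardinality $C$ adds branches without changing each branch's shape, which is the new axis the paper trades against depth and width.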
+ <!--
+ Type: model-index
+ Collections:
+ - Name: ResNeXt
+ Paper:
+ Title: Aggregated Residual Transformations for Deep Neural Networks
+ URL: https://paperswithcode.com/paper/aggregated-residual-transformations-for-deep
+ Models:
+ - Name: resnext101_32x8d
+ In Collection: ResNeXt
+ Metadata:
+ FLOPs: 21180417024
+ Parameters: 88790000
+ File Size: 356082095
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnext101_32x8d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnet.py#L877
+ Weights: https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.3%
+ Top 5 Accuracy: 94.53%
+ - Name: resnext50_32x4d
+ In Collection: ResNeXt
+ Metadata:
+ FLOPs: 5472648192
+ Parameters: 25030000
+ File Size: 100435887
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnext50_32x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnet.py#L851
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnext50_32x4d_ra-d733960d.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.79%
+ Top 5 Accuracy: 94.61%
+ - Name: resnext50d_32x4d
+ In Collection: ResNeXt
+ Metadata:
+ FLOPs: 5781119488
+ Parameters: 25050000
+ File Size: 100515304
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Data:
+ - ImageNet
+ ID: resnext50d_32x4d
+ Crop Pct: '0.875'
+ Image Size: '224'
+ Interpolation: bicubic
+ Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/resnet.py#L869
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnext50d_32x4d-103e99f8.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 79.67%
+ Top 5 Accuracy: 94.87%
+ - Name: tv_resnext50_32x4d
+ In Collection: ResNeXt
+ Metadata:
+ FLOPs: 5472648192
+ Parameters: 25030000
+ File Size: 100441675
+ Architecture:
+ - 1x1 Convolution
+ - Batch Normalization
+ - Convolution
+ - Global Average Pooling
+ - Grouped Convolution
+ - Max Pooling
+ - ReLU
+ - ResNeXt Block
+ - Residual Connection
+ - Softmax
+ Tasks:
+ - Image Classification
+ Training Techniques:
+ - SGD with Momentum
+ - Weight Decay
+ Training Data:
+ - ImageNet
+ ID: tv_resnext50_32x4d
+ LR: 0.1
+ Epochs: 90
+ Crop Pct: '0.875'
+ LR Gamma: 0.1
+ Momentum: 0.9
+ Batch Size: 32
+ Image Size: '224'
+ LR Step Size: 30
+ Weight Decay: 0.0001
+ Interpolation: bilinear
+ Code: https://github.com/rwightman/pytorch-image-models/blob/9a25fdf3ad0414b4d66da443fe60ae0aa14edc84/timm/models/resnet.py#L842
+ Weights: https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth
+ Results:
+ - Task: Image Classification
+ Dataset: ImageNet
+ Metrics:
+ Top 1 Accuracy: 77.61%
+ Top 5 Accuracy: 93.68%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/rexnet.md ADDED
@@ -0,0 +1,197 @@
+ # RexNet
+
+ **Rank Expansion Networks** (ReXNets) follow a set of new design principles for designing bottlenecks in image classification models. The authors refine each layer by 1) expanding the input channel size of the convolution layer and 2) replacing the [ReLU6](https://www.paperswithcode.com/method/relu6) activations.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{han2020rexnet,
+ title={ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network},
+ author={Dongyoon Han and Sangdoo Yun and Byeongho Heo and YoungJoon Yoo},
+ year={2020},
+ eprint={2007.00992},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+ }
+ ```
+
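Principle 1), expanding channel sizes, amounts to growing the output width by a roughly constant increment per block rather than doubling it at a few stage boundaries. The sketch below is a simplified illustration under assumed constants (`base`, `final`, `divisor` are hypothetical; the exact schedule in timm's `rexnet.py` differs):

```python
# Simplified sketch (assumed constants, not timm's exact configuration) of a
# linearly expanding channel schedule in the spirit of ReXNet: channel counts
# grow by a constant step per block, rounded to a hardware-friendly multiple.

def linear_channel_schedule(num_blocks, base=16, final=176, divisor=8):
    """Linearly interpolate channel counts from base to final, rounded to divisor."""
    chs = []
    for i in range(num_blocks):
        c = base + (final - base) * i / max(num_blocks - 1, 1)
        chs.append(int(round(c / divisor) * divisor))
    return chs

schedule = linear_channel_schedule(16)
assert schedule[0] == 16 and schedule[-1] == 176
assert all(a <= b for a, b in zip(schedule, schedule[1:]))  # monotone expansion
```

The gradual expansion is what the paper argues diminishes the representational (rank) bottleneck that abrupt, stage-wise widening leaves behind.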
+ <!--
25
+ Type: model-index
26
+ Collections:
27
+ - Name: RexNet
28
+ Paper:
29
+ Title: 'ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
30
+ Network'
31
+ URL: https://paperswithcode.com/paper/rexnet-diminishing-representational
32
+ Models:
33
+ - Name: rexnet_100
34
+ In Collection: RexNet
35
+ Metadata:
36
+ FLOPs: 509989377
37
+ Parameters: 4800000
38
+ File Size: 19417552
39
+ Architecture:
40
+ - Batch Normalization
41
+ - Convolution
42
+ - Dropout
43
+ - ReLU6
44
+ - Residual Connection
45
+ Tasks:
46
+ - Image Classification
47
+ Training Techniques:
48
+ - Label Smoothing
49
+ - Linear Warmup With Cosine Annealing
50
+ - Nesterov Accelerated Gradient
51
+ - Weight Decay
52
+ Training Data:
53
+ - ImageNet
54
+ Training Resources: 4x NVIDIA V100 GPUs
55
+ ID: rexnet_100
56
+ LR: 0.5
57
+ Epochs: 400
58
+ Dropout: 0.2
59
+ Crop Pct: '0.875'
60
+ Momentum: 0.9
61
+ Batch Size: 512
62
+ Image Size: '224'
63
+ Weight Decay: 1.0e-05
64
+ Interpolation: bicubic
65
+ Label Smoothing: 0.1
66
+ Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/rexnet.py#L212
67
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rexnet/rexnetv1_100-1b4dddf4.pth
68
+ Results:
69
+ - Task: Image Classification
70
+ Dataset: ImageNet
71
+ Metrics:
72
+ Top 1 Accuracy: 77.86%
73
+ Top 5 Accuracy: 93.88%
74
+ - Name: rexnet_130
75
+ In Collection: RexNet
76
+ Metadata:
77
+ FLOPs: 848364461
78
+ Parameters: 7560000
79
+ File Size: 30508197
80
+ Architecture:
81
+ - Batch Normalization
82
+ - Convolution
83
+ - Dropout
84
+ - ReLU6
85
+ - Residual Connection
86
+ Tasks:
87
+ - Image Classification
88
+ Training Techniques:
89
+ - Label Smoothing
90
+ - Linear Warmup With Cosine Annealing
91
+ - Nesterov Accelerated Gradient
92
+ - Weight Decay
93
+ Training Data:
94
+ - ImageNet
95
+ Training Resources: 4x NVIDIA V100 GPUs
96
+ ID: rexnet_130
97
+ LR: 0.5
98
+ Epochs: 400
99
+ Dropout: 0.2
100
+ Crop Pct: '0.875'
101
+ Momentum: 0.9
102
+ Batch Size: 512
103
+ Image Size: '224'
104
+ Weight Decay: 1.0e-05
105
+ Interpolation: bicubic
106
+ Label Smoothing: 0.1
107
+ Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/rexnet.py#L218
108
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rexnet/rexnetv1_130-590d768e.pth
109
+ Results:
110
+ - Task: Image Classification
111
+ Dataset: ImageNet
112
+ Metrics:
113
+ Top 1 Accuracy: 79.49%
114
+ Top 5 Accuracy: 94.67%
115
+ - Name: rexnet_150
116
+ In Collection: RexNet
117
+ Metadata:
118
+ FLOPs: 1122374469
119
+ Parameters: 9730000
120
+ File Size: 39227315
121
+ Architecture:
122
+ - Batch Normalization
123
+ - Convolution
124
+ - Dropout
125
+ - ReLU6
126
+ - Residual Connection
127
+ Tasks:
128
+ - Image Classification
129
+ Training Techniques:
130
+ - Label Smoothing
131
+ - Linear Warmup With Cosine Annealing
132
+ - Nesterov Accelerated Gradient
133
+ - Weight Decay
134
+ Training Data:
135
+ - ImageNet
136
+ Training Resources: 4x NVIDIA V100 GPUs
137
+ ID: rexnet_150
138
+ LR: 0.5
139
+ Epochs: 400
140
+ Dropout: 0.2
141
+ Crop Pct: '0.875'
142
+ Momentum: 0.9
143
+ Batch Size: 512
144
+ Image Size: '224'
145
+ Weight Decay: 1.0e-05
146
+ Interpolation: bicubic
147
+ Label Smoothing: 0.1
148
+ Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/rexnet.py#L224
149
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rexnet/rexnetv1_150-bd1a6aa8.pth
150
+ Results:
151
+ - Task: Image Classification
152
+ Dataset: ImageNet
153
+ Metrics:
154
+ Top 1 Accuracy: 80.31%
155
+ Top 5 Accuracy: 95.16%
156
+ - Name: rexnet_200
157
+ In Collection: RexNet
158
+ Metadata:
159
+ FLOPs: 1960224938
160
+ Parameters: 16370000
161
+ File Size: 65862221
162
+ Architecture:
163
+ - Batch Normalization
164
+ - Convolution
165
+ - Dropout
166
+ - ReLU6
167
+ - Residual Connection
168
+ Tasks:
169
+ - Image Classification
170
+ Training Techniques:
171
+ - Label Smoothing
172
+ - Linear Warmup With Cosine Annealing
173
+ - Nesterov Accelerated Gradient
174
+ - Weight Decay
175
+ Training Data:
176
+ - ImageNet
177
+ Training Resources: 4x NVIDIA V100 GPUs
178
+ ID: rexnet_200
179
+ LR: 0.5
180
+ Epochs: 400
181
+ Dropout: 0.2
182
+ Crop Pct: '0.875'
183
+ Momentum: 0.9
184
+ Batch Size: 512
185
+ Image Size: '224'
186
+ Weight Decay: 1.0e-05
187
+ Interpolation: bicubic
188
+ Label Smoothing: 0.1
189
+ Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/rexnet.py#L230
190
+ Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-rexnet/rexnetv1_200-8c0b7f2d.pth
191
+ Results:
192
+ - Task: Image Classification
193
+ Dataset: ImageNet
194
+ Metrics:
195
+ Top 1 Accuracy: 81.63%
196
+ Top 5 Accuracy: 95.67%
197
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/se-resnet.md ADDED
@@ -0,0 +1,122 @@
+ # SE-ResNet
+
+ **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
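+ The recalibration mechanism itself is small: globally average-pool each channel to a descriptor, pass it through a bottleneck MLP, and rescale the feature map channel-wise. A minimal PyTorch sketch of the block (illustrative only, not the timm code):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: pool to a per-channel descriptor,
    excite through a reduction MLP with sigmoid gating, then rescale
    the input feature map channel-wise. Minimal sketch."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        scale = self.fc(x.mean(dim=(2, 3)))   # squeeze (GAP) + excite
        return x * scale.view(b, c, 1, 1)     # channel-wise recalibration

x = torch.randn(2, 64, 8, 8)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 8, 8])
```

In an SE-ResNet, one such block sits at the end of each residual branch before the skip addition.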
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{hu2019squeezeandexcitation,
+       title={Squeeze-and-Excitation Networks},
+       author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
+       year={2019},
+       eprint={1709.01507},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: SE ResNet
+   Paper:
+     Title: Squeeze-and-Excitation Networks
+     URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
+ Models:
+ - Name: seresnet152d
+   In Collection: SE ResNet
+   Metadata:
+     FLOPs: 20161904304
+     Parameters: 66840000
+     File Size: 268144497
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Bottleneck Residual Block
+     - Convolution
+     - Global Average Pooling
+     - Max Pooling
+     - ReLU
+     - Residual Block
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 8x NVIDIA Titan X GPUs
+     ID: seresnet152d
+     LR: 0.6
+     Epochs: 100
+     Layers: 152
+     Dropout: 0.2
+     Crop Pct: '0.94'
+     Momentum: 0.9
+     Batch Size: 1024
+     Image Size: '256'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1206
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet152d_ra2-04464dd2.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 83.74%
+       Top 5 Accuracy: 96.77%
+ - Name: seresnet50
+   In Collection: SE ResNet
+   Metadata:
+     FLOPs: 5285062320
+     Parameters: 28090000
+     File Size: 112621903
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Bottleneck Residual Block
+     - Convolution
+     - Global Average Pooling
+     - Max Pooling
+     - ReLU
+     - Residual Block
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 8x NVIDIA Titan X GPUs
+     ID: seresnet50
+     LR: 0.6
+     Epochs: 100
+     Layers: 50
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1024
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1180
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnet50_ra_224-8efdb4bb.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 80.26%
+       Top 5 Accuracy: 95.07%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/selecsls.md ADDED
@@ -0,0 +1,136 @@
+ # SelecSLS
+
+ **SelecSLS** uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @article{Mehta_2020,
+    title={XNect},
+    volume={39},
+    ISSN={1557-7368},
+    url={http://dx.doi.org/10.1145/3386569.3392410},
+    DOI={10.1145/3386569.3392410},
+    number={4},
+    journal={ACM Transactions on Graphics},
+    publisher={Association for Computing Machinery (ACM)},
+    author={Mehta, Dushyant and Sotnychenko, Oleksandr and Mueller, Franziska and Xu, Weipeng and Elgharib, Mohamed and Fua, Pascal and Seidel, Hans-Peter and Rhodin, Helge and Pons-Moll, Gerard and Theobalt, Christian},
+    year={2020},
+    month={Jul}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: SelecSLS
+   Paper:
+     Title: 'XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera'
+     URL: https://paperswithcode.com/paper/xnect-real-time-multi-person-3d-human-pose
+ Models:
+ - Name: selecsls42b
+   In Collection: SelecSLS
+   Metadata:
+     FLOPs: 3824022528
+     Parameters: 32460000
+     File Size: 129948954
+     Architecture:
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Global Average Pooling
+     - ReLU
+     - SelecSLS Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Cosine Annealing
+     - Random Erasing
+     Training Data:
+     - ImageNet
+     ID: selecsls42b
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/selecsls.py#L335
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls42b-8af30141.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.18%
+       Top 5 Accuracy: 93.39%
+ - Name: selecsls60
+   In Collection: SelecSLS
+   Metadata:
+     FLOPs: 4610472600
+     Parameters: 30670000
+     File Size: 122839714
+     Architecture:
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Global Average Pooling
+     - ReLU
+     - SelecSLS Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Cosine Annealing
+     - Random Erasing
+     Training Data:
+     - ImageNet
+     ID: selecsls60
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/selecsls.py#L342
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls60-bbf87526.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.99%
+       Top 5 Accuracy: 93.83%
+ - Name: selecsls60b
+   In Collection: SelecSLS
+   Metadata:
+     FLOPs: 4657653144
+     Parameters: 32770000
+     File Size: 131252898
+     Architecture:
+     - Batch Normalization
+     - Convolution
+     - Dense Connections
+     - Dropout
+     - Global Average Pooling
+     - ReLU
+     - SelecSLS Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Cosine Annealing
+     - Random Erasing
+     Training Data:
+     - ImageNet
+     ID: selecsls60b
+     Crop Pct: '0.875'
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/b9843f954b0457af2db4f9dea41a8538f51f5d78/timm/models/selecsls.py#L349
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-selecsls/selecsls60b-94e619b5.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 78.41%
+       Top 5 Accuracy: 94.18%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/seresnext.md ADDED
@@ -0,0 +1,167 @@
+ # SE-ResNeXt
+
+ **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resneXt) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{hu2019squeezeandexcitation,
+       title={Squeeze-and-Excitation Networks},
+       author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
+       year={2019},
+       eprint={1709.01507},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: SEResNeXt
+   Paper:
+     Title: Squeeze-and-Excitation Networks
+     URL: https://paperswithcode.com/paper/squeeze-and-excitation-networks
+ Models:
+ - Name: seresnext26d_32x4d
+   In Collection: SEResNeXt
+   Metadata:
+     FLOPs: 3507053024
+     Parameters: 16810000
+     File Size: 67425193
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Global Average Pooling
+     - Grouped Convolution
+     - Max Pooling
+     - ReLU
+     - ResNeXt Block
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 8x NVIDIA Titan X GPUs
+     ID: seresnext26d_32x4d
+     LR: 0.6
+     Epochs: 100
+     Layers: 26
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1024
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1234
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext26d_32x4d-80fa48a3.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.59%
+       Top 5 Accuracy: 93.61%
+ - Name: seresnext26t_32x4d
+   In Collection: SEResNeXt
+   Metadata:
+     FLOPs: 3466436448
+     Parameters: 16820000
+     File Size: 67414838
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Global Average Pooling
+     - Grouped Convolution
+     - Max Pooling
+     - ReLU
+     - ResNeXt Block
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 8x NVIDIA Titan X GPUs
+     ID: seresnext26t_32x4d
+     LR: 0.6
+     Epochs: 100
+     Layers: 26
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1024
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1246
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext26tn_32x4d-569cb627.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 77.99%
+       Top 5 Accuracy: 93.73%
+ - Name: seresnext50_32x4d
+   In Collection: SEResNeXt
+   Metadata:
+     FLOPs: 5475179184
+     Parameters: 27560000
+     File Size: 110569859
+     Architecture:
+     - 1x1 Convolution
+     - Batch Normalization
+     - Convolution
+     - Global Average Pooling
+     - Grouped Convolution
+     - Max Pooling
+     - ReLU
+     - ResNeXt Block
+     - Residual Connection
+     - Softmax
+     - Squeeze-and-Excitation Block
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - Label Smoothing
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 8x NVIDIA Titan X GPUs
+     ID: seresnext50_32x4d
+     LR: 0.6
+     Epochs: 100
+     Layers: 50
+     Dropout: 0.2
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 1024
+     Image Size: '224'
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/resnet.py#L1267
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/seresnext50_32x4d_racm-a304a460.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 81.27%
+       Top 5 Accuracy: 95.62%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/skresnet.md ADDED
@@ -0,0 +1,112 @@
+ # SK-ResNet
+
+ **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner.
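+ The adaptive receptive-field idea can be sketched in a few lines: run branches with different kernel sizes, then let a softmax attention over the branches pick a per-channel mixture. This is my own simplified sketch of an SK unit (two branches, dilation approximating the 5x5 kernel), not timm's `sknet.py`:

```python
import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    """Simplified SK unit: a 3x3 branch and a dilated-3x3 branch
    (effective 5x5), fused by per-channel softmax attention so the
    network chooses its receptive field adaptively. Illustrative only."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2, bias=False)
        mid = max(channels // reduction, 8)
        self.reduce = nn.Linear(channels, mid)       # fuse -> compact descriptor
        self.select = nn.Linear(mid, channels * 2)   # one logit per (branch, channel)

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))               # fuse branches, squeeze spatially
        z = torch.relu(self.reduce(s))
        attn = self.select(z).view(x.size(0), 2, x.size(1))
        attn = torch.softmax(attn, dim=1)            # branches compete per channel
        a3 = attn[:, 0].unsqueeze(-1).unsqueeze(-1)
        a5 = attn[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a3 * u3 + a5 * u5

x = torch.randn(1, 32, 16, 16)
print(SelectiveKernel(32)(x).shape)  # torch.Size([1, 32, 16, 16])
```

In SK-ResNets this unit stands in for the 3x3 convolution of each bottleneck block.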
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{li2019selective,
+       title={Selective Kernel Networks},
+       author={Xiang Li and Wenhai Wang and Xiaolin Hu and Jian Yang},
+       year={2019},
+       eprint={1903.06586},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: SKResNet
+   Paper:
+     Title: Selective Kernel Networks
+     URL: https://paperswithcode.com/paper/selective-kernel-networks
+ Models:
+ - Name: skresnet18
+   In Collection: SKResNet
+   Metadata:
+     FLOPs: 2333467136
+     Parameters: 11960000
+     File Size: 47923238
+     Architecture:
+     - Convolution
+     - Dense Connections
+     - Global Average Pooling
+     - Max Pooling
+     - Residual Connection
+     - Selective Kernel
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 8x GPUs
+     ID: skresnet18
+     LR: 0.1
+     Epochs: 100
+     Layers: 18
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 256
+     Image Size: '224'
+     Weight Decay: 4.0e-05
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/sknet.py#L148
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/skresnet18_ra-4eec2804.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 73.03%
+       Top 5 Accuracy: 91.17%
+ - Name: skresnet34
+   In Collection: SKResNet
+   Metadata:
+     FLOPs: 4711849952
+     Parameters: 22280000
+     File Size: 89299314
+     Architecture:
+     - Convolution
+     - Dense Connections
+     - Global Average Pooling
+     - Max Pooling
+     - Residual Connection
+     - Selective Kernel
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Techniques:
+     - SGD with Momentum
+     - Weight Decay
+     Training Data:
+     - ImageNet
+     Training Resources: 8x GPUs
+     ID: skresnet34
+     LR: 0.1
+     Epochs: 100
+     Layers: 34
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 256
+     Image Size: '224'
+     Weight Decay: 4.0e-05
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/sknet.py#L165
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/skresnet34_ra-bdc0ccde.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 76.93%
+       Top 5 Accuracy: 93.32%
+ -->
testbed/huggingface__pytorch-image-models/docs/models/.templates/models/skresnext.md ADDED
@@ -0,0 +1,70 @@
+ # SK-ResNeXt
+
+ **SK ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNeXt are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner.
+
+ {% include 'code_snippets.md' %}
+
+ ## How do I train this model?
+
+ You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model afresh.
+
+ ## Citation
+
+ ```BibTeX
+ @misc{li2019selective,
+       title={Selective Kernel Networks},
+       author={Xiang Li and Wenhai Wang and Xiaolin Hu and Jian Yang},
+       year={2019},
+       eprint={1903.06586},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+ ```
+
+ <!--
+ Type: model-index
+ Collections:
+ - Name: SKResNeXt
+   Paper:
+     Title: Selective Kernel Networks
+     URL: https://paperswithcode.com/paper/selective-kernel-networks
+ Models:
+ - Name: skresnext50_32x4d
+   In Collection: SKResNeXt
+   Metadata:
+     FLOPs: 5739845824
+     Parameters: 27480000
+     File Size: 110340975
+     Architecture:
+     - Convolution
+     - Dense Connections
+     - Global Average Pooling
+     - Grouped Convolution
+     - Max Pooling
+     - Residual Connection
+     - Selective Kernel
+     - Softmax
+     Tasks:
+     - Image Classification
+     Training Data:
+     - ImageNet
+     Training Resources: 8x GPUs
+     ID: skresnext50_32x4d
+     LR: 0.1
+     Epochs: 100
+     Layers: 50
+     Crop Pct: '0.875'
+     Momentum: 0.9
+     Batch Size: 256
+     Image Size: '224'
+     Weight Decay: 0.0001
+     Interpolation: bicubic
+   Code: https://github.com/rwightman/pytorch-image-models/blob/a7f95818e44b281137503bcf4b3e3e94d8ffa52f/timm/models/sknet.py#L210
+   Weights: https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/skresnext50_ra-f40e40bf.pth
+   Results:
+   - Task: Image Classification
+     Dataset: ImageNet
+     Metrics:
+       Top 1 Accuracy: 80.15%
+       Top 5 Accuracy: 94.64%
+ -->