---
license: mit
library_name: transformers
pipeline_tag: image-classification
---
# SPG: Sequential Policy Gradient for Adaptive Hyperparameter Optimization
This repository contains the models described in the paper [Sequential Policy Gradient for Adaptive Hyperparameter Optimization](https://huggingface.co/papers/2506.15051).
[Project page](https://huggingface.co/UniversalAlgorithmic/SPG)
[GitHub repository](https://github.com/SafeAILab/EAGLE)
> 💡 If you're using Jupyter or Colab, you can follow the demo and run it on a single GPU:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https%3A//huggingface.co/UniversalAlgorithmic/SPG/blob/main/demo_nas.ipynb)
## Model Zoo: Adaptive Hyperparameter Optimization (HPO) via SPG Algorithm
`Table 1: Performance of pre-trained vs. SPG-retrained models on ImageNet-1K`
| Model | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights | Command to reproduce |
|-------|------|----------|-----------|-----------|---------|----------------------|
| MobileNet-V2 | ❌ | 3.5 M | 71.878 | 90.286 | <a href='https://download.pytorch.org/models/mobilenet_v2-b0353104.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#mobilenetv2'>Recipe</a> |
| MobileNet-V2 | ✅ | 3.5 M | 72.104 | 90.316 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/mobilenetv2/model_32.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/mobilenet_v2-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
| ResNet-50 | ❌ | 25.6 M | 76.130 | 92.862 | <a href='https://download.pytorch.org/models/resnet50-0676ba61.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-50 | ✅ | 25.6 M | 77.234 | 93.322 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/resnet50/model_35.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet50-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
| EfficientNet-V2-M | ❌ | 54.1 M | 85.112 | 97.156 | <a href='https://download.pytorch.org/models/efficientnet_v2_m-dc08266a.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#efficientnet-v2'>Recipe</a> |
| EfficientNet-V2-M | ✅ | 54.1 M | 85.218 | 97.208 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/efficientnet_v2_m/model_7.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/efficientnet_v2_m-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
| ViT-B16 | ❌ | 86.6 M | 81.072 | 95.318 | <a href='https://download.pytorch.org/models/vit_b_16-c867db91.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#vit_b_16'>Recipe</a> |
| ViT-B16 | ✅ | 86.6 M | 81.092 | 95.304 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/image-classification/vit_b_16/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/vit_b_16-yellow'></a> | [examples/image-classification/run.sh](#retrain-model-on-imagenet-1k) |
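The SPG checkpoints in the table are plain `.pth` files. Below is a minimal loading sketch; it assumes each file stores a `state_dict` compatible with the matching torchvision architecture (if the checkpoint nests the weights under a key such as `"model"`, unwrap it first):

```python
import torch
from torchvision.models import mobilenet_v2

# SPG-retrained MobileNet-V2 weights (URL copied from the table above).
url = ("https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/"
       "examples/image-classification/mobilenetv2/model_32.pth")

model = mobilenet_v2()  # same architecture as the pre-trained baseline
state_dict = torch.hub.load_state_dict_from_url(url, map_location="cpu")
model.load_state_dict(state_dict)  # assumes a plain state_dict
model.eval()
```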
`Table 2: Performance of pre-trained vs. SPG-retrained models. All models are evaluated on a subset of COCO val2017, restricted to the 21/20 categories present in the Pascal VOC dataset.`
> ⚠️ `All models reported by TorchVision (with the COCO_WITH_VOC_LABELS_V1 weights) were benchmarked using only 20 categories. Researchers should first download the pre-trained model from TorchVision and re-evaluate it under the 21-category (including "background") protocol. A sketch of the distinction follows the table.`
| Model | SPG | # Params | mIoU (%, 21/20 cat.) | Pixelwise Acc (%, 21/20 cat.) | Weights | Command to reproduce |
|---------------------|-----|----------|------------|---------------------|---------|----------------------|
| FCN-ResNet50 | ❌ | 35.3 M | 58.9/60.5 | 90.9/91.4 | <a href='https://download.pytorch.org/models/fcn_resnet50_coco-1167a1af.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#fcn_resnet50'>Recipe</a> |
| FCN-ResNet50 | ✅ | 35.3 M | 59.4/60.9 | 90.9/91.6 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/fcn_resnet50/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/fcn_resnet50-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
| FCN-ResNet101 | ❌ | 54.3 M | 62.2/63.7 | 91.1/91.9 | <a href='https://download.pytorch.org/models/fcn_resnet101_coco-7ecb50ca.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#fcn_resnet101'>Recipe</a> |
| FCN-ResNet101 | ✅ | 54.3 M | 62.4/64.3 | 91.1/91.9 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/fcn_resnet101/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/fcn_resnet101-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
| DeepLabV3-ResNet50 | ❌ | 42.0 M | 63.8/66.4 | 91.5/92.4 | <a href='https://download.pytorch.org/models/deeplabv3_resnet50_coco-cd0a2569.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#deeplabv3_resnet50'>Recipe</a> |
| DeepLabV3-ResNet50 | ✅ | 42.0 M | 64.2/66.6 | 91.6/92.5 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/deeplabv3_resnet50/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/deeplabv3_resnet50-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
| DeepLabV3-ResNet101 | ❌ | 61.0 M | 65.3/67.4 | 91.7/92.4 | <a href='https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth'><img src='https://img.shields.io/badge/PyTorch-COCO_WITH_VOC_LABELS_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/segmentation#deeplabv3_resnet101'>Recipe</a> |
| DeepLabV3-ResNet101 | ✅ | 61.0 M | 65.7/67.8 | 91.8/92.5 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/semantic-segmentation/deeplabv3_resnet101/model_4.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/deeplabv3_resnet101-yellow'></a> | [examples/semantic-segmentation/run.sh](#retrain-model-on-ms-coco-2017) |
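To make the 21/20-category protocol concrete, here is an illustrative sketch (not the official evaluation script). The `COCO_WITH_VOC_LABELS_V1` heads already output 21 channels, with index 0 being `background`; the paired numbers in the table differ only in whether that class enters the averages:

```python
import torch
from torchvision.models.segmentation import (
    DeepLabV3_ResNet50_Weights, deeplabv3_resnet50)

model = deeplabv3_resnet50(
    weights=DeepLabV3_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1).eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 520, 520))["out"]
print(out.shape)  # torch.Size([1, 21, 520, 520]); channel 0 is "background"

# Given per-class IoUs, the two reported numbers differ only in whether
# class 0 enters the mean (placeholder IoUs used here for illustration):
ious = torch.rand(21)
miou_21 = ious.mean()      # 21-category mIoU (background included)
miou_20 = ious[1:].mean()  # 20-category mIoU (background excluded)
```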
`Table 3: Performance comparison of fine-tuned vs. SPG-retrained models across NLP and speech benchmarks.`
- GLUE (Text classification: BERT on the CoLA, SST-2, MRPC, QQP, QNLI, and RTE tasks)
- SQuAD (Question answering: BERT)
- SUPERB (Speech classification: Wav2Vec2 for Audio Classification (AC))
| Task | SPG | Metric Type | Performance (%) | Weights | Command to reproduce |
|-------|------|-------------------|-----------------|---------|----------------------|
| CoLA | ❌ | Matthews corr. | 56.53 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| CoLA | ✅ | Matthews corr. | 62.13 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/cola'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/CoLA-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| SST-2 | ❌ | Accuracy | 92.32 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| SST-2 | ✅ | Accuracy | 92.54 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/sst2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/SST2-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| MRPC | ❌ | F1/Accuracy | 88.85/84.09 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| MRPC | ✅ | F1/Accuracy | 91.10/87.25 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/mrpc'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/MRPC-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| QQP | ❌ | F1/Accuracy | 87.49/90.71 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| QQP | ✅ | F1/Accuracy | 89.72/90.88 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/qqp'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QQP-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| QNLI | ❌ | Accuracy | 90.66 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| QNLI | ✅ | Accuracy | 91.10 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/qnli'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QNLI-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| RTE | ❌ | Accuracy | 65.70 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-text_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#glue-tasks'>Recipe</a> |
| RTE | ✅ | Accuracy | 72.56 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/text-classification/rte'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/RTE-yellow'></a> | [examples/text-classification/run.sh](#transfer-learning-on-glue) |
| Q/A* | ❌ | F1/Exact match | 88.52/81.22 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-question_answering-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering#fine-tuning-bert-on-squad10'>Recipe</a> |
| Q/A* | ✅ | F1/Exact match | 88.67/81.51 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/question-answering/qa'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/QA-yellow'></a> | [examples/question-answering/run.sh](#transfer-learning-on-squad) |
| AC† | ❌ | Accuracy | 98.26 | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-audio_classification-yellow'></a> | <a href='https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification#single-gpu'>Recipe</a> |
| AC† | ✅ | Accuracy | 98.31 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/tree/main/examples/audio-classification/ac'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/AC-yellow'></a> | [examples/audio-classification/run.sh](#transfer-learning-on-superb) |
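The NLP checkpoints are stored as subfolders of this repo. A hedged loading sketch, assuming each subfolder (here `examples/text-classification/cola`) holds a standard `save_pretrained()` layout (`config.json`, tokenizer files, and weights):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo, sub = "UniversalAlgorithmic/SPG", "examples/text-classification/cola"
tokenizer = AutoTokenizer.from_pretrained(repo, subfolder=sub)
model = AutoModelForSequenceClassification.from_pretrained(repo, subfolder=sub)

inputs = tokenizer("The book was written by John.", return_tensors="pt")
print(model(**inputs).logits.argmax(-1))  # CoLA: 1 = acceptable, 0 = unacceptable
```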
## Model Zoo: Neural Architecture Search (NAS) via SPG Algorithm
`Table 4: Performance of pre-trained vs. SPG-retrained models on ImageNet-1K`
Depending on the base model, we explore the following architectures:
- ResNet-18: ResNet-18, ResNet-27, ResNet-36, ResNet-45
- ResNet-34: ResNet-34, ResNet-40, ResNet-46, ResNet-52
- ResNet-50: ResNet-50, ResNet-53, ResNet-56, ResNet-59
> ⚠️ `Our SPG differs from most NAS algorithms, which typically rely on a gating network for architecture selection. We employ neither a gating network nor a proxy network. Instead, after policy optimization, we keep only the base architecture (ResNet-18, ResNet-34, or ResNet-50) and discard all the others (ResNet-27/36/45, ResNet-40/46/52, and ResNet-53/56/59). A loading sketch follows the table.`
| Model | SPG | # Params | Acc@1 (%) | Acc@5 (%) | Weights | Command to reproduce |
|-------|------|----------|-----------|-----------|---------|----------------------|
| ResNet-18 | ❌ | 11.7 M | 69.758 | 89.078 | <a href='https://download.pytorch.org/models/resnet18-f37072fd.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-18 | ✅ | 11.7 M | 70.092 | 89.314 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/neural-archicture-search/resnet18/model_3.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet18-yellow'></a> | [examples/neural-architecture-search/run.sh](#neural-architecture-search-for-resnet-on-imagenet-1k) |
| ResNet-34 | ❌ | 21.8 M | 73.314 | 91.420 | <a href='https://download.pytorch.org/models/resnet34-b627a593.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-34 | ✅ | 21.8 M | 73.900 | 93.536 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/neural-archicture-search/resnet34/model_8.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet34-yellow'></a> | [examples/neural-architecture-search/run.sh](#neural-architecture-search-for-resnet-on-imagenet-1k) |
| ResNet-50 | ❌ | 25.6 M | 76.130 | 92.862 | <a href='https://download.pytorch.org/models/resnet50-0676ba61.pth'><img src='https://img.shields.io/badge/PyTorch-IMAGENET1K_V1-FFA500?style=flat&logo=pytorch&logoColor=orange&labelColor=00000000'></a> | <a href='https://github.com/pytorch/vision/tree/main/references/classification#resnet'>Recipe</a> |
| ResNet-50 | ✅ | 25.6 M | 77.234 | 93.322 | <a href='https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/examples/neural-archicture-search/resnet50/model_9.pth'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Huggingface-SPG/resnet50-yellow'></a> | [examples/neural-architecture-search/run.sh](#neural-architecture-search-for-resnet-on-imagenet-1k) |
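As the note above says, only the base architecture survives policy optimization, so the released NAS checkpoints should load into the unmodified torchvision ResNets, with the parameter counts from the table. A sketch, again assuming a plain `state_dict` (URL copied from the table):

```python
import torch
from torchvision.models import resnet18

url = ("https://huggingface.co/UniversalAlgorithmic/SPG/resolve/main/"
       "examples/neural-archicture-search/resnet18/model_3.pth")

model = resnet18()  # no gating or proxy modules needed at inference time
model.load_state_dict(torch.hub.load_state_dict_from_url(url, map_location="cpu"))
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f} M params")  # ~11.7 M
```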
## Requirements
1. Install `torch>=2.0.0+cu118`.
2. Install the remaining pip packages:
```setup
cd examples
pip install -r requirements.txt
```
3. Prepare the [ImageNet](http://image-net.org/) dataset manually and place it in `/path/to/imagenet`. For the image classification examples, pass the argument `--data-path=/path/to/imagenet` to the training script. The extracted dataset directory should follow this structure (a quick layout check follows the tree):
```setup
/path/to/imagenet/:
train/:
n01440764:
n01440764_18.JPEG ...
n01443537:
n01443537_2.JPEG ...
val/:
n01440764:
ILSVRC2012_val_00000293.JPEG ...
n01443537:
ILSVRC2012_val_00000236.JPEG ...
```
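As a quick layout check, torchvision's `ImageFolder` should resolve 1,000 classes from both splits:

```python
from torchvision.datasets import ImageFolder

train = ImageFolder("/path/to/imagenet/train")
val = ImageFolder("/path/to/imagenet/val")
print(len(train.classes), len(val.classes))  # expected: 1000 1000
```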
4. Prepare the [MS-COCO 2017](https://cocodataset.org/#home) dataset manually and place it in `/path/to/coco`. For the semantic segmentation examples, pass the argument `--data-path=/path/to/coco` to the training script. The extracted dataset directory should follow this structure (again, a quick check follows the tree):
```setup
/path/to/coco/:
annotations:
many_json_files.json ...
train2017:
000000000009.jpg ...
val2017:
000000000139.jpg ...
```
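A similar quick check for COCO, assuming the standard 2017 annotation files (e.g. `instances_val2017.json`) are in place:

```python
from pycocotools.coco import COCO

coco = COCO("/path/to/coco/annotations/instances_val2017.json")
print(len(coco.getImgIds()))  # expected: 5000 val2017 images
```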
5. For the [🗣️ Keyword Spotting subset](https://huggingface.co/datasets/s3prl/superb#ks), [Common Language](https://huggingface.co/datasets/speechbrain/common_language), [SQuAD](https://huggingface.co/datasets/rajpurkar/squad), [Common Voice](https://huggingface.co/datasets/legacy-datasets/common_voice), [GLUE](https://gluebenchmark.com/) and [WMT](https://huggingface.co/datasets/wmt/wmt17) datasets, manual downloading is not required: they are loaded automatically via the Hugging Face Datasets library when running our `audio-classification`, `question-answering`, `speech-recognition`, `text-classification`, or `translation` examples (see the example below).
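For example, the GLUE CoLA task, SQuAD, and the SUPERB Keyword Spotting subset all resolve straight from the Hub:

```python
from datasets import load_dataset

cola = load_dataset("glue", "cola")  # GLUE CoLA task
squad = load_dataset("squad")        # SQuAD v1.1
# SUPERB ships a loading script; recent `datasets` versions may require
# trust_remote_code=True (or an older `datasets` release) to run it.
ks = load_dataset("superb", "ks")    # Keyword Spotting subset
print(cola["train"][0])
```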
## Training
### Retrain model on ImageNet-1K
We use training recipes similar to those in [PyTorch Vision's classification reference](https://github.com/pytorch/vision/blob/main/references/classification/README.md) to retrain MobileNet-V2, ResNet, EfficientNet-V2, and ViT with our SPG on ImageNet-1K. The following commands can be used:
```bash
cd ./examples/image-classification
# MobileNet-V2
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model mobilenet_v2 --output-dir mobilenet_v2 --weights MobileNet_V2_Weights.IMAGENET1K_V1\
--batch-size 192 --epochs 40 --lr 0.0004 --lr-step-size 10 --lr-gamma 0.5 --wd 0.00004 --apply-trp --trp-depths 1 --trp-p 0.15 --trp-lambdas 0.4 0.2 0.1
# ResNet-50
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model resnet50 --output-dir resnet50 --weights ResNet50_Weights.IMAGENET1K_V1\
--batch-size 64 --epochs 40 --lr 0.0004 --lr-step-size 10 --lr-gamma 0.5 --print-freq 100\
--apply-trp --trp-depths 1 --trp-p 0.2 --trp-lambdas 0.4 0.2 0.1
# EfficientNet-V2 M
torchrun --nproc_per_node=4 train.py \
--data-path /path/to/imagenet/\
--model efficientnet_v2_m --output-dir efficientnet_v2_m --weights EfficientNet_V2_M_Weights.IMAGENET1K_V1\
--epochs 10 --batch-size 64 --lr 5e-9 --lr-scheduler cosineannealinglr --weight-decay 0.00002 \
--lr-warmup-method constant --lr-warmup-epochs 8 --lr-warmup-decay 0. \
--auto-augment ta_wide --random-erase 0.1 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 --norm-weight-decay 0.0 \
--train-crop-size 384 --val-crop-size 480 --val-resize-size 480 --ra-sampler --ra-reps 4 --print-freq 100\
--apply-trp --trp-depths 1 --trp-p 0.2 --trp-lambdas 0.4 0.2 0.1
# ViT-B-16
torchrun --nproc_per_node=4 train.py\
--data-path /path/to/imagenet/\
--model vit_b_16 --output-dir vit_b_16 --weights ViT_B_16_Weights.IMAGENET1K_V1\
--epochs 5 --batch-size 196 --opt adamw --lr 5e-9 --lr-scheduler cosineannealinglr --wd 0.3\
--lr-warmup-method constant --lr-warmup-epochs 3 --lr-warmup-decay 0. \
--amp --label-smoothing 0.11 --mixup-alpha 0.2 --auto-augment ra --clip-grad-norm 1 --cutmix-alpha 1.0\
--apply-trp --trp-depths 1 --trp-p 0.1 --trp-lambdas 0.4 0.2 0.1 --print-freq 100
```
### Retrain model on MS-COCO 2017
We use training recipes similar to those in [PyTorch Vision's segmentation reference](https://github.com/pytorch/vision/blob/main/references/segmentation/README.md) to retrain FCN and DeepLabV3 with our SPG on the COCO dataset. The following commands can be used:
```bash
cd ./examples/semantic-segmentation
# FCN-ResNet50
torchrun --nproc_per_node=4 train.py\
--workers 4 --dataset coco --data-path /path/to/coco/\
    --model fcn_resnet50 --aux-loss --output-dir fcn_resnet50 --weights FCN_ResNet50_Weights.COCO_WITH_VOC_LABELS_V1
```