# Mixed Precision ImageNet Training in PyTorch

`main_amp.py` is based on [https://github.com/pytorch/examples/tree/master/imagenet](https://github.com/pytorch/examples/tree/master/imagenet).
It implements Automatic Mixed Precision (Amp) training of popular model architectures, such as ResNet, AlexNet, and VGG, on the ImageNet dataset. Command-line flags forwarded to `amp.initialize` are used to easily manipulate and switch between various pure and mixed precision "optimization levels," or `opt_level`s. For a detailed explanation of `opt_level`s, see the [updated API guide](https://nvidia.github.io/apex/amp.html).

Three lines enable Amp:
```
# Added after model and optimizer construction
model, optimizer = amp.initialize(model, optimizer, flags...)
...
# loss.backward() changed to:
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
```

With the new Amp API **you never need to explicitly convert your model, or the input data, to `half()`.**
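
For orientation, here is a minimal sketch of how those three lines sit inside an otherwise ordinary training loop. This is not the script's actual setup; `loader` is a placeholder `DataLoader`, and `O1` is just one possible `opt_level`:
```
import torch
import torchvision.models as models
from apex import amp

model = models.resnet50().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss().cuda()

# Amp takes over all precision decisions from this point on
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for images, targets in loader:  # `loader` is a placeholder DataLoader
    images, targets = images.cuda(), targets.cuda()
    loss = criterion(model(images), targets)
    optimizer.zero_grad()
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```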

## Requirements

- Download the ImageNet dataset and move validation images to labeled subfolders
    - The following script may be helpful: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh

## Training

To train a model, create softlinks to the ImageNet dataset, then run `main_amp.py` with the desired model architecture, as shown in `Example commands` below.

The default learning rate schedule is set for ResNet50. The `main_amp.py` script rescales the learning rate according to the global batch size (number of distributed processes \* per-process minibatch size).
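
A sketch of that rescaling, assuming a base learning rate tuned for a reference global batch size of 256 (the reference value and the `args` names are illustrative assumptions, not necessarily what the script does):
```
import torch.distributed as dist

# Global batch size = number of distributed processes * per-process minibatch size
world_size = dist.get_world_size() if dist.is_initialized() else 1
global_batch_size = world_size * args.batch_size

# Linear scaling against an assumed reference batch size of 256
args.lr = args.lr * global_batch_size / 256.0
```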
28
+
29
+ ## Example commands
30
+
31
+ **Note:** batch size `--b 224` assumes your GPUs have >=16GB of onboard memory. You may be able to increase this to 256, but that's cutting it close, so it may out-of-memory for different Pytorch versions.
32
+
33
+ **Note:** All of the following use 4 dataloader subprocesses (`--workers 4`) to reduce potential
34
+ CPU data loading bottlenecks.
35
+
36
+ **Note:** `--opt-level` `O1` and `O2` both use dynamic loss scaling by default unless manually overridden.
37
+ `--opt-level` `O0` and `O3` (the "pure" training modes) do not use loss scaling by default.
38
+ `O0` and `O3` can be told to use loss scaling via manual overrides, but using loss scaling with `O0`
39
+ (pure FP32 training) does not really make sense, and will trigger a warning.
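
As an example of such an override, `amp.initialize` accepts a `loss_scale` argument directly, so a "pure" mode can be given loss scaling in code. A sketch, where `128.0` is an arbitrary static value:
```
from apex import amp

# Manual override: static loss scaling on top of "pure FP16" O3.
# loss_scale="dynamic" would request dynamic loss scaling instead.
model, optimizer = amp.initialize(model, optimizer, opt_level="O3", loss_scale=128.0)
```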

Softlink training and validation datasets into the current directory:
```
$ ln -sf /data/imagenet/train-jpeg/ train
$ ln -sf /data/imagenet/val-jpeg/ val
```

### Summary

Amp allows easy experimentation with various pure and mixed precision options.
```
$ python main_amp.py -a resnet50 --b 128 --workers 4 --opt-level O0 ./
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O3 ./
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O3 --keep-batchnorm-fp32 True ./
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O1 ./
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O1 --loss-scale 128.0 ./
$ python -m torch.distributed.launch --nproc_per_node=2 main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O1 ./
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O2 ./
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O2 --loss-scale 128.0 ./
$ python -m torch.distributed.launch --nproc_per_node=2 main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O2 ./
```

Options are explained below. Again, the [updated API guide](https://nvidia.github.io/apex/amp.html) provides more detail.

#### `--opt-level O0` (FP32 training) and `O3` (FP16 training)

"Pure FP32" training:
```
$ python main_amp.py -a resnet50 --b 128 --workers 4 --opt-level O0 ./
```
"Pure FP16" training:
```
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O3 ./
```
FP16 training with FP32 batchnorm:
```
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O3 --keep-batchnorm-fp32 True ./
```
Keeping the batchnorms in FP32 improves stability and allows PyTorch
to use cudnn batchnorms, which significantly increases speed for ResNet50.

The `O3` options might not converge, because they are not true mixed precision.
However, they can be useful to establish "speed of light" performance for
your model, which provides a baseline for comparison with `O1` and `O2`.
For ResNet50 in particular, `--opt-level O3 --keep-batchnorm-fp32 True` establishes
the "speed of light." (Without `--keep-batchnorm-fp32`, it's slower, because it does
not use cudnn batchnorm.)

#### `--opt-level O1` (Official Mixed Precision recipe, recommended for typical use)

`O1` patches Torch functions to cast inputs according to a whitelist-blacklist model.
FP16-friendly (Tensor Core) ops like gemms and convolutions run in FP16, while ops
that benefit from FP32, like batchnorm and softmax, run in FP32.
Also, dynamic loss scaling is used by default.
```
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O1 ./
```
`O1` overridden to use static loss scaling:
```
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O1 --loss-scale 128.0 ./
```
Distributed training with 2 processes (1 GPU per process; see **Distributed training** below
for more detail):
```
$ python -m torch.distributed.launch --nproc_per_node=2 main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O1 ./
```
For best performance, set `--nproc_per_node` equal to the total number of GPUs on the node
to use all available resources.

#### `--opt-level O2` ("Almost FP16" mixed precision. More dangerous than O1.)

`O2` exists mainly to support some internal use cases. Please prefer `O1`.

`O2` casts the model to FP16, keeps batchnorms in FP32,
maintains master weights in FP32, and implements
dynamic loss scaling by default. (Unlike `--opt-level O1`, `--opt-level O2`
does not patch Torch functions.)
```
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O2 ./
```
"Fast mixed precision" overridden to use static loss scaling:
```
$ python main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O2 --loss-scale 128.0 ./
```
Distributed training with 2 processes (1 GPU per process):
```
$ python -m torch.distributed.launch --nproc_per_node=2 main_amp.py -a resnet50 --b 224 --workers 4 --opt-level O2 ./
```
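
In terms of the Amp API, `O2`'s behavior corresponds to property overrides along the following lines. A sketch for illustration only: these keyword arguments do exist in `amp.initialize`, but the `opt_level` already sets them as defaults, so spelling them out is redundant in real code:
```
from apex import amp

model, optimizer = amp.initialize(
    model, optimizer,
    opt_level="O2",            # cast the model itself to FP16
    keep_batchnorm_fp32=True,  # batchnorms stay in FP32
    master_weights=True,       # optimizer maintains FP32 master weights
    loss_scale="dynamic",      # dynamic loss scaling
)
```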

## Distributed training

`main_amp.py` optionally uses `apex.parallel.DistributedDataParallel` (DDP) for multiprocess training with one GPU per process.
```
model = apex.parallel.DistributedDataParallel(model)
```
is a drop-in replacement for
```
model = torch.nn.parallel.DistributedDataParallel(model,
                                                  device_ids=[args.local_rank],
                                                  output_device=args.local_rank)
```
(Because Torch DDP permits multiple GPUs per process, it requires you to manually specify
the device to run on and the output device.
Apex DDP simply uses the current device by default.)

The choice of DDP wrapper (Torch or Apex) is orthogonal to the use of Amp and other Apex tools. It is safe to use `apex.amp` with either `torch.nn.parallel.DistributedDataParallel` or `apex.parallel.DistributedDataParallel`. In the future, I may add some features that permit optional tighter integration between `Amp` and `apex.parallel.DistributedDataParallel` for marginal performance benefits, but currently, there's no compelling reason to use Apex DDP versus Torch DDP for most models.

To use DDP with `apex.amp`, the only gotcha is that
```
model, optimizer = amp.initialize(model, optimizer, flags...)
```
must precede
```
model = DDP(model)
```
If DDP wrapping occurs before `amp.initialize`, `amp.initialize` will raise an error.

With both Apex DDP and Torch DDP, you must also call `torch.cuda.set_device(args.local_rank)` within
each process prior to initializing your model or any other tensors.
More information can be found in the docs for the
PyTorch multiprocess launcher module [torch.distributed.launch](https://pytorch.org/docs/stable/distributed.html#launch-utility).
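
Putting the ordering rules together, each launched process does roughly the following (a sketch, assuming the `nccl` backend and the `env://` initialization that `torch.distributed.launch` sets up; names mirror the script's conventions):
```
import torch
import torchvision.models as models
import apex
from apex import amp

# 1. Bind this process to its GPU before creating any tensors
torch.cuda.set_device(args.local_rank)
torch.distributed.init_process_group(backend="nccl", init_method="env://")

model = models.resnet50().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# 2. amp.initialize must come before DDP wrapping
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
model = apex.parallel.DistributedDataParallel(model)
```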

`main_amp.py` is written to interact with
[torch.distributed.launch](https://pytorch.org/docs/master/distributed.html#launch-utility),
which spawns multiprocess jobs using the following syntax:
```
python -m torch.distributed.launch --nproc_per_node=NUM_GPUS main_amp.py args...
```
`NUM_GPUS` should be less than or equal to the number of visible GPU devices on the node. The use of `torch.distributed.launch` is unrelated to the choice of DDP wrapper. It is safe to use either Apex DDP or Torch DDP with `torch.distributed.launch`.

Optionally, one can run ImageNet with synchronized batch normalization across processes by adding
`--sync_bn` to the `args...`.
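
Under the hood, Apex's synchronized batchnorm is applied by converting the model's batchnorm layers before wrapping and initialization. A sketch of the usual pattern; `convert_syncbn_model` is the Apex utility for this, though whether `main_amp.py` invokes it exactly this way is an assumption:
```
import apex

# Replace torch.nn.BatchNorm*d layers with apex.parallel.SyncBatchNorm,
# which reduces batchnorm statistics across processes during training
model = apex.parallel.convert_syncbn_model(model)
```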

## Deterministic training (for debugging purposes)

Running with the `--deterministic` flag should produce bitwise identical outputs run-to-run,
regardless of what other options are used (see the [PyTorch docs on reproducibility](https://pytorch.org/docs/stable/notes/randomness.html)).
Since `--deterministic` disables `torch.backends.cudnn.benchmark`, `--deterministic` may
cause a modest performance decrease.
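
The flag maps onto PyTorch's standard reproducibility settings. A sketch of the usual pattern; whether `main_amp.py` sets exactly these, or seeds differently, is an assumption:
```
import torch

torch.backends.cudnn.benchmark = False     # disable autotuning (the source of the slowdown)
torch.backends.cudnn.deterministic = True  # force deterministic cudnn kernels
torch.manual_seed(0)                       # hypothetical fixed seed for illustration
```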

## Profiling

If you're curious how the network actually looks on the CPU and GPU timelines (for example: how good is the overall utilization?
Is the prefetcher really overlapping data transfers?), try profiling `main_amp.py`.
[Detailed instructions can be found here](https://gist.github.com/mcarilli/213a4e698e4a0ae2234ddee56f4f3f95).