# Mixed Precision DCGAN Training in PyTorch

`main_amp.py` is based on [https://github.com/pytorch/examples/tree/master/dcgan](https://github.com/pytorch/examples/tree/master/dcgan).
It implements Automatic Mixed Precision (Amp) training of the DCGAN example for different datasets. Command-line flags forwarded to `amp.initialize` are used to easily manipulate and switch between various pure and mixed precision "optimization levels", or `opt_level`s. For a detailed explanation of `opt_level`s, see the [updated API guide](https://nvidia.github.io/apex/amp.html).

We introduce these changes to the PyTorch DCGAN example, as described in the [Multiple models/optimizers/losses](https://nvidia.github.io/apex/advanced.html#multiple-models-optimizers-losses) section of the documentation:
```
# Added after models and optimizers construction
[netD, netG], [optimizerD, optimizerG] = amp.initialize(
    [netD, netG], [optimizerD, optimizerG], opt_level=opt.opt_level, num_losses=3)
...
# loss.backward() changed to:
with amp.scale_loss(errD_real, optimizerD, loss_id=0) as errD_real_scaled:
    errD_real_scaled.backward()
...
with amp.scale_loss(errD_fake, optimizerD, loss_id=1) as errD_fake_scaled:
    errD_fake_scaled.backward()
...
with amp.scale_loss(errG, optimizerG, loss_id=2) as errG_scaled:
    errG_scaled.backward()
```
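The `amp.scale_loss` context above exists because, in FP16, small gradient values can underflow to zero; multiplying the loss (and hence, by the chain rule, every gradient) by a scale factor shifts them back into representable range. A quick, apex-free illustration of the underflow using Python's built-in half-precision round-trip:

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE-754 half precision
    # (struct's 'e' format, available since Python 3.6).
    return struct.unpack('e', struct.pack('e', x))[0]

grad = 1e-8                   # a plausibly small gradient value
print(to_fp16(grad))          # 0.0 -- underflows in half precision

scale = 2.0 ** 16             # a typical loss-scale magnitude
print(to_fp16(grad * scale))  # nonzero -- survives once the loss is scaled
```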

Note that we use a different loss scaler for each computed loss.
Using a separate loss scaler per loss is [optional, not required](https://nvidia.github.io/apex/advanced.html#optionally-have-amp-use-a-different-loss-scaler-per-loss).
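Conceptually, each per-loss scaler maintains its own dynamic scale factor: it backs off when scaled gradients overflow and cautiously grows otherwise. The toy class below is an illustrative sketch of that behavior only — the class name, defaults, and update rule are invented here and are not apex's actual loss-scaler implementation:

```python
class ToyLossScaler:
    """Toy sketch of dynamic loss scaling (not apex's actual implementation)."""
    def __init__(self, init_scale=2.0 ** 15, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def scale_loss(self, loss):
        return loss * self.scale

    def update(self, grads_overflowed):
        if grads_overflowed:
            self.scale /= 2.0      # back off; the optimizer step is skipped
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps % self.growth_interval == 0:
                self.scale *= 2.0  # try a larger scale again

# One scaler per loss, mirroring loss_id=0/1/2 above:
scalers = [ToyLossScaler() for _ in range(3)]
scalers[0].update(grads_overflowed=True)
print(scalers[0].scale, scalers[1].scale)  # first scaler halved; others untouched
```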

To improve numerical stability, we replaced `nn.Sigmoid() + nn.BCELoss()` with `nn.BCEWithLogitsLoss()`.
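The reason is that computing `log(sigmoid(x))` in two steps loses all precision once the sigmoid saturates, while working directly on logits allows a numerically stable rearrangement (`max(x, 0) - x*y + log(1 + exp(-|x|))` is one standard form). A small plain-Python demonstration of the failure mode:

```python
import math

def bce_naive(logit, target):
    # Separate sigmoid + BCE, as in nn.Sigmoid() followed by nn.BCELoss().
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def bce_with_logits(logit, target):
    # Stable rearrangement: max(x, 0) - x*y + log(1 + exp(-|x|)).
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

# For moderate logits the two forms agree closely:
print(abs(bce_naive(2.0, 1.0) - bce_with_logits(2.0, 1.0)) < 1e-9)  # True

# For a confident wrong prediction the naive form breaks: sigmoid(40.0)
# rounds to exactly 1.0 in double precision, so log(1 - p) is log(0).
try:
    bce_naive(40.0, 0.0)
except ValueError:
    print("naive form raised a domain error")
print(bce_with_logits(40.0, 0.0))  # ~40.0, computed without trouble
```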

With the new Amp API **you never need to explicitly convert your model, or the input data, to `half()`.**

"Pure FP32" training:
```
$ python main_amp.py --opt_level O0
```
Recommended mixed precision training:
```
$ python main_amp.py --opt_level O1
```

Have a look at the original [DCGAN example](https://github.com/pytorch/examples/tree/master/dcgan) for more information about the arguments used.

To enable mixed precision training, we introduce the `--opt_level` argument.
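Forwarding such a flag into `amp.initialize` can be sketched as below; the flag name matches this README, but the parser snippet is a hypothetical reduction, not the actual contents of `main_amp.py`:

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical subset of the script's flags; only --opt_level is shown.
parser.add_argument('--opt_level', type=str, default='O1',
                    choices=['O0', 'O1', 'O2', 'O3'],
                    help='pure/mixed precision optimization level for amp.initialize')
opt = parser.parse_args(['--opt_level', 'O2'])
print(opt.opt_level)  # O2

# The parsed value would then be forwarded, e.g.:
# [netD, netG], [optimizerD, optimizerG] = amp.initialize(
#     [netD, netG], [optimizerD, optimizerG], opt_level=opt.opt_level, num_losses=3)
```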