.. role:: hidden
    :class: hidden-section

Advanced Amp Usage
===================================

GANs
----

GANs are an interesting synthesis of several topics below. A `comprehensive example`_
is under construction.

.. _`comprehensive example`:
    https://github.com/NVIDIA/apex/tree/master/examples/dcgan

Gradient clipping
-----------------

Amp calls the params owned directly by the optimizer's ``param_groups`` the "master params."

These master params may be fully or partially distinct from ``model.parameters()``.
For example, with `opt_level="O2"`_, ``amp.initialize`` casts most model params to FP16,
creates an FP32 master param outside the model for each newly-FP16 model param,
and updates the optimizer's ``param_groups`` to point to these FP32 params.

The master params owned by the optimizer's ``param_groups`` may also fully coincide with the
model params, which is typically true for ``opt_level``\ s ``O0``, ``O1``, and ``O3``.

In all cases, correct practice is to clip the gradients of the params that are guaranteed to be
owned **by the optimizer's** ``param_groups``, instead of those retrieved via ``model.parameters()``.

Also, if Amp uses loss scaling, gradients must be clipped after they have been unscaled
(which occurs during exit from the ``amp.scale_loss`` context manager).

The following pattern should be correct for any ``opt_level``::

    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    # Gradients are unscaled during context manager exit.
    # Now it's safe to clip. Replace
    # torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    # with
    torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), max_norm)
    # or
    torch.nn.utils.clip_grad_value_(amp.master_params(optimizer), clip_value)

Note the use of the utility function ``amp.master_params(optimizer)``,
which returns a generator expression that iterates over the
params in the optimizer's ``param_groups``.

Also note that ``clip_grad_norm_(amp.master_params(optimizer), max_norm)`` is invoked
*instead of*, not *in addition to*, ``clip_grad_norm_(model.parameters(), max_norm)``.

.. _`opt_level="O2"`:
    https://nvidia.github.io/apex/amp.html#o2-fast-mixed-precision
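
The ordering requirement (unscale first, then clip) can be illustrated with plain numbers. The following is a toy sketch, not Amp's internals; ``clip_by_norm`` is a hypothetical stand-in for the global-norm rule used by ``torch.nn.utils.clip_grad_norm_``:

```python
# Toy numbers only -- this is not Amp's implementation.  With loss scale S,
# the grads produced by scaled_loss.backward() are S times the true grads
# until Amp unscales them on exit from the scale_loss context manager.
loss_scale = 1024.0
true_grads = [0.5, -1.0, 2.0]
scaled_grads = [g * loss_scale for g in true_grads]

def clip_by_norm(grads, max_norm):
    # Global-norm rule, same idea as torch.nn.utils.clip_grad_norm_.
    total_norm = sum(g * g for g in grads) ** 0.5
    if total_norm > max_norm:
        return [g * max_norm / total_norm for g in grads]
    return list(grads)

max_norm = 5.0
# Wrong order: clipping the still-scaled grads compares S*grad against
# max_norm, crushing the gradients once they are finally unscaled.
wrong = [g / loss_scale for g in clip_by_norm(scaled_grads, max_norm)]
# Right order: unscale first (what the context-manager exit does), then clip.
right = clip_by_norm([g / loss_scale for g in scaled_grads], max_norm)
# Here right equals true_grads (their norm is under max_norm), while wrong
# comes out roughly three orders of magnitude too small.
```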

Custom/user-defined autograd functions
--------------------------------------

The old Amp API for `registering user functions`_ is still considered correct. Functions must
be registered before calling ``amp.initialize``.

.. _`registering user functions`:
    https://github.com/NVIDIA/apex/tree/master/apex/amp#annotating-user-functions
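
As a quick sketch of that workflow (the full annotation options are described at the link above), a registration call such as ``amp.register_float_function`` is made before ``amp.initialize``. The snippet below assumes apex is installed and that ``model`` and ``optimizer`` are already constructed:

```python
import torch
from apex import amp

# Force torch.sigmoid to always run in FP32, regardless of opt_level.
# Registration must happen before amp.initialize is called.
amp.register_float_function(torch, 'sigmoid')

model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
```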

Forcing particular layers/functions to a desired type
-----------------------------------------------------

I'm still working on a generalizable exposure for this that won't require user-side code divergence
across different ``opt_level``\ s.

Multiple models/optimizers/losses
---------------------------------

Initialization with multiple models/optimizers
**********************************************

``amp.initialize``'s optimizer argument may be a single optimizer or a list of optimizers,
as long as you accept the returned optimizer(s) in the same format.
Similarly, the ``model`` argument may be a single model or a list of models, as long as you
accept the returned model(s) in the same format. The following calls are all legal::

    model, optim = amp.initialize(model, optim, ...)
    model, [optim0, optim1] = amp.initialize(model, [optim0, optim1], ...)
    [model0, model1], optim = amp.initialize([model0, model1], optim, ...)
    [model0, model1], [optim0, optim1] = amp.initialize([model0, model1], [optim0, optim1], ...)

Backward passes with multiple optimizers
****************************************

Whenever you invoke a backward pass, the ``amp.scale_loss`` context manager must receive
**all the optimizers that own any params for which the current backward pass is creating gradients.**
This is true even if each optimizer owns only some, but not all, of the params that are about to
receive gradients.

If, for a given backward pass, there's only one optimizer whose params are about to receive gradients,
you may pass that optimizer directly to ``amp.scale_loss``. Otherwise, you must pass the
list of optimizers whose params are about to receive gradients. Example with 3 losses and 2 optimizers::

    # loss0 accumulates gradients only into params owned by optim0:
    with amp.scale_loss(loss0, optim0) as scaled_loss:
        scaled_loss.backward()

    # loss1 accumulates gradients only into params owned by optim1:
    with amp.scale_loss(loss1, optim1) as scaled_loss:
        scaled_loss.backward()

    # loss2 accumulates gradients into some params owned by optim0
    # and some params owned by optim1
    with amp.scale_loss(loss2, [optim0, optim1]) as scaled_loss:
        scaled_loss.backward()

Optionally have Amp use a different loss scaler per-loss
********************************************************

By default, Amp maintains a single global loss scaler that will be used for all backward passes
(all invocations of ``with amp.scale_loss(...)``). No additional arguments to ``amp.initialize``
or ``amp.scale_loss`` are required to use the global loss scaler. The code snippets above with
multiple optimizers/backward passes use the single global loss scaler under the hood,
and they should "just work."

However, you can optionally tell Amp to maintain a loss scaler per-loss, which gives Amp increased
numerical flexibility. This is accomplished by supplying the ``num_losses`` argument to
``amp.initialize`` (which tells Amp how many backward passes you plan to invoke, and therefore
how many loss scalers Amp should create), then supplying the ``loss_id`` argument to each of your
backward passes (which tells Amp the loss scaler to use for this particular backward pass)::

    model, [optim0, optim1] = amp.initialize(model, [optim0, optim1], ..., num_losses=3)

    with amp.scale_loss(loss0, optim0, loss_id=0) as scaled_loss:
        scaled_loss.backward()

    with amp.scale_loss(loss1, optim1, loss_id=1) as scaled_loss:
        scaled_loss.backward()

    with amp.scale_loss(loss2, [optim0, optim1], loss_id=2) as scaled_loss:
        scaled_loss.backward()

``num_losses`` and ``loss_id``\ s should be specified purely based on the set of
losses/backward passes. The use of multiple optimizers, or association of single or
multiple optimizers with each backward pass, is unrelated.
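
For intuition about what a loss scaler does, here is a toy dynamic loss scaler in plain Python. This is an illustration of the general technique only; the class name, constants, and ``update`` method are invented for this sketch and are not Amp's implementation. Keeping one such object per loss lets each loss back off or grow its scale independently, which is the flexibility ``num_losses``/``loss_id`` buys:

```python
# Toy dynamic loss scaler: halve the scale whenever a backward pass overflows,
# and probe a higher scale after a run of clean steps.  Not Amp's actual code.
class ToyLossScaler:
    def __init__(self, init_scale=2.0 ** 16, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._clean_steps = 0

    def update(self, found_overflow):
        if found_overflow:
            self.scale /= 2.0        # back off immediately
            self._clean_steps = 0
        else:
            self._clean_steps += 1
            if self._clean_steps == self.growth_interval:
                self.scale *= 2.0    # try a higher scale again
                self._clean_steps = 0

scaler = ToyLossScaler(init_scale=8.0, growth_interval=2)
scaler.update(found_overflow=True)   # overflow: scale 8 -> 4
scaler.update(found_overflow=False)
scaler.update(found_overflow=False)  # two clean steps: scale 4 -> 8
```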

Gradient accumulation across iterations
---------------------------------------

The following should "just work," and properly accommodate multiple models/optimizers/losses, as well as
gradient clipping via the `instructions above`_::

    # If your intent is to simulate a larger batch size using gradient accumulation,
    # you can divide the loss by the number of accumulation iterations (so that gradients
    # will be averaged over that many iterations):
    loss = loss / iters_to_accumulate

    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()

    # Every iters_to_accumulate iterations, call step() and reset gradients:
    if iter % iters_to_accumulate == 0:
        # Gradient clipping if desired:
        # torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), max_norm)
        optimizer.step()
        optimizer.zero_grad()
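
The effect of the ``loss = loss / iters_to_accumulate`` line can be checked with plain arithmetic (a toy sketch with made-up gradient values, no torch involved):

```python
# Gradients are linear in the loss, so dividing the loss by iters_to_accumulate
# divides each iteration's gradient contribution the same way, and the
# accumulated total becomes an average over the accumulation window.
iters_to_accumulate = 4
per_iter_grads = [1.0, 3.0, 5.0, 7.0]  # grad each small batch alone would produce

accumulated = 0.0
for g in per_iter_grads:
    accumulated += g / iters_to_accumulate  # same effect as loss = loss / iters_to_accumulate

average = sum(per_iter_grads) / len(per_iter_grads)
# accumulated == average == 4.0: the simulated "big batch" gradient matches
# the mean of the per-iteration gradients.
```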

As a minor performance optimization, you can pass ``delay_unscale=True``
to ``amp.scale_loss`` until you're ready to ``step()``. You should only attempt ``delay_unscale=True``
if you're sure you know what you're doing, because the interaction with gradient clipping and
multiple models/optimizers/losses can become tricky::

    if iter % iters_to_accumulate == 0:
        # Every iters_to_accumulate iterations, unscale and step
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    else:
        # Otherwise, accumulate gradients, don't unscale or step.
        with amp.scale_loss(loss, optimizer, delay_unscale=True) as scaled_loss:
            scaled_loss.backward()

.. _`instructions above`:
    https://nvidia.github.io/apex/advanced.html#gradient-clipping

Custom data batch types
-----------------------

The intention of Amp is that you never need to cast your input data manually, regardless of
``opt_level``. Amp accomplishes this by patching any models' ``forward`` methods to cast
incoming data appropriately for the ``opt_level``. But to cast incoming data,
Amp needs to know how. The patched ``forward`` will recognize and cast floating-point Tensors
(non-floating-point Tensors like IntTensors are not touched) and
Python containers of floating-point Tensors. However, if you wrap your Tensors in a custom class,
the casting logic doesn't know how to drill
through the tough custom shell to access and cast the juicy Tensor meat within. You need to tell
Amp how to cast your custom batch class, by assigning it a ``to`` method that accepts a ``torch.dtype``
(e.g., ``torch.float16`` or ``torch.float32``) and returns an instance of the custom batch cast to
``dtype``. The patched ``forward`` checks for the presence of your ``to`` method, and will
invoke it with the correct type for the ``opt_level``.

Example::

    class CustomData(object):
        def __init__(self):
            self.tensor = torch.cuda.FloatTensor([1, 2, 3])

        def to(self, dtype):
            self.tensor = self.tensor.to(dtype)
            return self
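
The dispatch rule described above can be sketched in a few lines of plain Python. This is an illustration of the duck-typed ``to`` protocol, not Amp's actual patching code; ``cast_input`` and ``FakeBatch`` are invented names for this sketch, and real Amp additionally checks that a Tensor is floating-point before casting it:

```python
# Anything with a .to method is cast via that method, containers are handled
# recursively, and everything else passes through untouched.
def cast_input(data, dtype):
    if hasattr(data, 'to'):            # Tensors and custom batch classes
        return data.to(dtype)
    if isinstance(data, (list, tuple)):
        return type(data)(cast_input(d, dtype) for d in data)
    if isinstance(data, dict):
        return {k: cast_input(v, dtype) for k, v in data.items()}
    return data                        # ints, strings, etc. are not touched

class FakeBatch:
    # Stand-in for CustomData above, using a plain string as the "dtype".
    def __init__(self):
        self.dtype = 'float32'

    def to(self, dtype):
        self.dtype = dtype
        return self

batch = cast_input([FakeBatch(), 7], 'float16')
# batch[0].dtype is now 'float16'; the int 7 is forwarded unchanged.
```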

.. warning::

    Amp also forwards numpy ndarrays without casting them. If you send input data as a raw, unwrapped
    ndarray, then later use it to create a Tensor within your ``model.forward``, this Tensor's type will
    not depend on the ``opt_level``, and may or may not be correct. Users are encouraged to pass
    castable data inputs (Tensors, collections of Tensors, or custom classes with a ``to`` method)
    wherever possible.

.. note::

    Amp does not call ``.cuda()`` on any Tensors for you. Amp assumes that your original script
    is already set up to move Tensors from the host to the device as needed.