.. role:: hidden
    :class: hidden-section

apex.amp
===================================

This page documents the updated API for Amp (Automatic Mixed Precision),
a tool to enable Tensor Core-accelerated training in only 3 lines of Python.

A `runnable, comprehensive Imagenet example`_ demonstrating good practices can be found
on the Github page.

GANs are a tricky case that many people have requested.  A `comprehensive DCGAN example`_
is under construction.

If you already implemented Amp based on the instructions below, but it isn't behaving as expected,
please review `Advanced Amp Usage`_ to see if any topics match your use case.  If that doesn't help,
`file an issue`_.

.. _`file an issue`:
    https://github.com/NVIDIA/apex/issues

``opt_level``\ s and Properties
-------------------------------

Amp allows users to easily experiment with different pure and mixed precision modes.
Commonly-used default modes are chosen by
selecting an "optimization level" or ``opt_level``; each ``opt_level`` establishes a set of
properties that govern Amp's implementation of pure or mixed precision training.
Finer-grained control of how a given ``opt_level`` behaves can be achieved by passing values for
particular properties directly to ``amp.initialize``.  These manually specified values
override the defaults established by the ``opt_level``.

Example::

    # Declare model and optimizer as usual, with default (FP32) precision
    model = torch.nn.Linear(D_in, D_out).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # Allow Amp to perform casts as required by the opt_level
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
    ...
    # loss.backward() becomes:
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    ...

Users **should not** manually cast their model or data to ``.half()``, regardless of what ``opt_level``
or properties are chosen.  Amp intends that users start with an existing default (FP32) script,
add the three lines corresponding to the Amp API, and begin training with mixed precision.
Amp can also be disabled, in which case the original script will behave exactly as it used to.
In this way, there's no risk in adhering to the Amp API, and a lot of potential performance benefit.

.. note::
    Because it's never necessary to manually cast your model (aside from the call to ``amp.initialize``)
    or input data, a script that adheres to the new API
    can switch between different ``opt_level``\ s without having to make any other changes.

.. _`runnable, comprehensive Imagenet example`:
    https://github.com/NVIDIA/apex/tree/master/examples/imagenet

.. _`comprehensive DCGAN example`:
    https://github.com/NVIDIA/apex/tree/master/examples/dcgan

.. _`Advanced Amp Usage`:
    https://nvidia.github.io/apex/advanced.html

Properties
**********

Currently, the under-the-hood properties that govern pure or mixed precision training are the following:

- ``cast_model_type``: Casts your model's parameters and buffers to the desired type.
- ``patch_torch_functions``: Patches all Torch functions and Tensor methods to perform Tensor Core-friendly ops like GEMMs and convolutions in FP16, and ops that benefit from FP32 precision in FP32.
- ``keep_batchnorm_fp32``: To enhance precision and enable cudnn batchnorm (which improves performance), it's often beneficial to keep batchnorm weights in FP32 even if the rest of the model is FP16.
- ``master_weights``: Maintain FP32 master weights to accompany any FP16 model weights.  FP32 master weights are stepped by the optimizer to enhance precision and capture small gradients.
- ``loss_scale``: If ``loss_scale`` is a float value, use this value as the static (fixed) loss scale.  If ``loss_scale`` is the string ``"dynamic"``, adaptively adjust the loss scale over time.  Dynamic loss scale adjustments are performed by Amp automatically.
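
Dynamic loss scaling boils down to a feedback loop: back off the scale when gradients overflow, and cautiously grow it again after a sustained run of clean steps.  The sketch below shows that loop in plain Python; the class name, constants, and structure are illustrative assumptions, not Amp's actual implementation.

```python
# Illustrative sketch of dynamic loss scaling, not apex's actual code.
# Back off when gradients overflow; grow again after a run of clean steps.

class DynamicLossScaler:
    def __init__(self, init_scale=2.0 ** 16, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._unskipped = 0  # consecutive overflow-free steps

    def update(self, found_overflow):
        """Returns False when the optimizer step should be skipped."""
        if found_overflow:
            self.scale /= 2.0          # back off
            self._unskipped = 0
            return False
        self._unskipped += 1
        if self._unskipped == self.growth_interval:
            self.scale *= 2.0          # try a larger scale again
            self._unskipped = 0
        return True

scaler = DynamicLossScaler(init_scale=8.0, growth_interval=4)
overflows = [False, True, False, False, False, False]
stepped = [scaler.update(o) for o in overflows]
print(stepped)       # [True, False, True, True, True, True]
print(scaler.scale)  # 8.0: halved by the overflow, doubled back after 4 clean steps
```

In a real training loop, a ``False`` return would mean "skip ``optimizer.step()`` for this iteration."  Amp performs this bookkeeping for you whenever ``loss_scale="dynamic"``.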

Again, you often don't need to specify these properties by hand.  Instead, select an ``opt_level``,
which will set them up for you.  After selecting an ``opt_level``, you can optionally pass property
kwargs as manual overrides.

If you attempt to override a property that does not make sense for the selected ``opt_level``,
Amp will raise an error with an explanation.  For example, selecting ``opt_level="O1"`` combined with
the override ``master_weights=True`` does not make sense.  ``O1`` inserts casts
around Torch functions rather than model weights.  Data, activations, and weights are recast
out-of-place on the fly as they flow through patched functions.  Therefore, the model weights themselves
can (and should) remain FP32, and there is no need to maintain separate FP32 master weights.
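
Conceptually, property resolution is a defaults-plus-overrides merge with a sanity check.  The following sketch is hypothetical: the table mirrors the per-``opt_level`` defaults documented below, but ``resolve_properties`` and its error handling are illustrative stand-ins, not apex's real logic.

```python
# Hypothetical sketch of how opt_level defaults combine with overrides.
# The defaults mirror the tables in this document; the merge/validation
# logic is illustrative only.

OPT_LEVEL_DEFAULTS = {
    "O0": {"cast_model_type": "torch.float32", "patch_torch_functions": False,
           "keep_batchnorm_fp32": None, "master_weights": False, "loss_scale": 1.0},
    "O1": {"cast_model_type": None, "patch_torch_functions": True,
           "keep_batchnorm_fp32": None, "master_weights": None, "loss_scale": "dynamic"},
    "O2": {"cast_model_type": "torch.float16", "patch_torch_functions": False,
           "keep_batchnorm_fp32": True, "master_weights": True, "loss_scale": "dynamic"},
    "O3": {"cast_model_type": "torch.float16", "patch_torch_functions": False,
           "keep_batchnorm_fp32": False, "master_weights": False, "loss_scale": 1.0},
}

def resolve_properties(opt_level, **overrides):
    props = dict(OPT_LEVEL_DEFAULTS[opt_level])
    for name, value in overrides.items():
        if name not in props:
            raise ValueError("unrecognized property: %s" % name)
        if opt_level == "O1" and name == "master_weights" and value:
            # O1 keeps model weights in FP32, so separate master weights
            # make no sense -- mirroring the error Amp raises.
            raise ValueError("master_weights does not make sense with O1")
        props[name] = value
    return props

# An O3 run with FP32 batchnorm, as suggested later on this page:
print(resolve_properties("O3", keep_batchnorm_fp32=True)["keep_batchnorm_fp32"])  # True
```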

``opt_level``\ s
****************

Recognized ``opt_level``\ s are ``"O0"``, ``"O1"``, ``"O2"``, and ``"O3"``.

``O0`` and ``O3`` are not true mixed precision, but they are useful for establishing accuracy and
speed baselines, respectively.

``O1`` and ``O2`` are different implementations of mixed precision.  Try both, and see
what gives the best speedup and accuracy for your model.

``O0``:  FP32 training
^^^^^^^^^^^^^^^^^^^^^^
Your incoming model should be FP32 already, so this is likely a no-op.
``O0`` can be useful to establish an accuracy baseline.

| Default properties set by ``O0``:
| ``cast_model_type=torch.float32``
| ``patch_torch_functions=False``
| ``keep_batchnorm_fp32=None`` (effectively, "not applicable"; everything is FP32)
| ``master_weights=False``
| ``loss_scale=1.0``

``O1``:  Mixed Precision (recommended for typical use)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Patches all Torch functions and Tensor methods to cast their inputs according to a whitelist-blacklist
model.  Whitelist ops (for example, Tensor Core-friendly ops like GEMMs and convolutions) are performed
in FP16.  Blacklist ops that benefit from FP32 precision (for example, softmax)
are performed in FP32.  ``O1`` also uses dynamic loss scaling, unless overridden.

| Default properties set by ``O1``:
| ``cast_model_type=None`` (not applicable)
| ``patch_torch_functions=True``
| ``keep_batchnorm_fp32=None`` (again, not applicable; all model weights remain FP32)
| ``master_weights=None`` (not applicable; model weights remain FP32)
| ``loss_scale="dynamic"``
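
The patching idea behind ``O1`` can be pictured with toy stand-ins: each "tensor" is a (value, dtype) pair, and patching wraps a function so its inputs are cast on the way in.  This is a conceptual sketch only; the names and two-op "lists" are invented for illustration, while Amp's real whitelist/blacklist covers the actual ``torch`` namespace.

```python
import functools

# Toy model of O1-style casting: tensors are (value, dtype) pairs, and
# "patching" wraps each function so inputs are cast before the op runs.

def cast(t, dtype):
    return (t[0], dtype)

def patched(dtype):
    """Decorator: cast every input tensor to dtype, then call the op."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*tensors):
            return fn(*(cast(t, dtype) for t in tensors))
        return wrapper
    return decorate

@patched("fp16")            # whitelist op: Tensor Core friendly
def matmul(a, b):
    return (a[0] * b[0], a[1])

@patched("fp32")            # blacklist op: benefits from FP32 accuracy
def softmax(a):
    return (a[0], a[1])

x, y = (3.0, "fp32"), (2.0, "fp16")
print(matmul(x, y))  # (6.0, 'fp16') -- inputs cast to FP16 regardless of origin
print(softmax(y))    # (2.0, 'fp32') -- recast out-of-place to FP32
```

Note that the original "tensors" are never modified: casts happen out-of-place as data flows through the patched functions, which is why the model weights themselves can stay FP32 under ``O1``.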

``O2``:  "Almost FP16" Mixed Precision
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``O2`` casts the model weights to FP16,
patches the model's ``forward`` method to cast input
data to FP16, keeps batchnorms in FP32, maintains FP32 master weights,
updates the optimizer's ``param_groups`` so that ``optimizer.step()``
acts directly on the FP32 weights (followed by FP32 master weight->FP16 model weight
copies if necessary),
and implements dynamic loss scaling (unless overridden).
Unlike ``O1``, ``O2`` does not patch Torch functions or Tensor methods.

| Default properties set by ``O2``:
| ``cast_model_type=torch.float16``
| ``patch_torch_functions=False``
| ``keep_batchnorm_fp32=True``
| ``master_weights=True``
| ``loss_scale="dynamic"``
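
The value of FP32 master weights can be seen with a toy precision model: if updates are applied directly to a low-precision weight, small gradients can be rounded away entirely.  The sketch below simulates limited precision by rounding to 3 significant digits, which is an assumption standing in for FP16's limited mantissa, not real FP16 arithmetic.

```python
# Toy demonstration of why FP32 master weights help.  Low precision is
# simulated by rounding to 3 significant digits.

def to_low_precision(x):
    return float("%.3g" % x)

lr, grad, steps = 1.0, 1e-4, 50

# Without master weights: each tiny update is rounded away immediately.
w = to_low_precision(1.0)
for _ in range(steps):
    w = to_low_precision(w - lr * grad)
print(w)  # 1.0 -- no progress at all

# With FP32 master weights: accumulate in full precision, copy down after.
master = 1.0
for _ in range(steps):
    master -= lr * grad
w = to_low_precision(master)
print(w)  # 0.995 -- the small gradients were captured
```

Real FP16 carries roughly 3-4 decimal digits of precision, so updates much smaller than a weight's magnitude are similarly lost unless they are accumulated in FP32 and copied back, which is exactly what ``master_weights=True`` arranges.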

``O3``:  FP16 training
^^^^^^^^^^^^^^^^^^^^^^
``O3`` may not achieve the stability of the true mixed precision options ``O1`` and ``O2``.
However, it can be useful to establish a speed baseline for your model, against which
the performance of ``O1`` and ``O2`` can be compared.  If your model uses batch normalization,
to establish "speed of light" you can try ``O3`` with the additional property override
``keep_batchnorm_fp32=True`` (which enables cudnn batchnorm, as stated earlier).

| Default properties set by ``O3``:
| ``cast_model_type=torch.float16``
| ``patch_torch_functions=False``
| ``keep_batchnorm_fp32=False``
| ``master_weights=False``
| ``loss_scale=1.0``

Unified API
-----------

.. automodule:: apex.amp
.. currentmodule:: apex.amp

.. autofunction:: initialize

.. autofunction:: scale_loss

.. autofunction:: master_params

Checkpointing
-------------

To properly save and load your Amp training state, we introduce ``amp.state_dict()``, which contains all ``loss_scaler``\ s and their corresponding unskipped steps, as well as ``amp.load_state_dict()`` to restore these attributes.

To get bitwise accuracy, we recommend the following workflow::

    # Initialization
    opt_level = 'O1'
    model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)

    # Train your model
    ...

    # Save checkpoint
    checkpoint = {
        'model': model.state_dict(),
        'optimizer': optimizer.state_dict(),
        'amp': amp.state_dict()
    }
    torch.save(checkpoint, 'amp_checkpoint.pt')
    ...

    # Restore
    model = ...
    optimizer = ...
    checkpoint = torch.load('amp_checkpoint.pt')

    model, optimizer = amp.initialize(model, optimizer, opt_level=opt_level)
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    amp.load_state_dict(checkpoint['amp'])

    # Continue training
    ...

Note that we recommend restoring the model using the same ``opt_level``.  Also note that we recommend calling the ``load_state_dict`` methods *after* ``amp.initialize``.

Advanced use cases
------------------

The unified Amp API supports gradient accumulation across iterations,
multiple backward passes per iteration, multiple models/optimizers,
custom/user-defined autograd functions, and custom data batch classes.  Gradient clipping and GANs also
require special treatment, but this treatment does not need to change
for different ``opt_level``\ s.  Further details can be found here:

.. toctree::
   :maxdepth: 1

   advanced

Transition guide for old API users
----------------------------------

We strongly encourage moving to the new Amp API, because it's more versatile, easier to use, and future-proof.  The original :class:`FP16_Optimizer` and the old "Amp" API are deprecated, and subject to removal at any time.

For users of the old "Amp" API
******************************

In the new API, ``opt_level="O1"`` performs the same patching of the Torch namespace as the old thing
called "Amp."
However, the new API allows static or dynamic loss scaling, while the old API only allowed dynamic loss scaling.

In the new API, the old call to ``amp_handle = amp.init()``, and the returned ``amp_handle``, are no
longer exposed or necessary.  The new ``amp.initialize()`` does the duty of ``amp.init()`` (and more).
Therefore, any existing calls to ``amp_handle = amp.init()`` should be deleted.

The functions formerly exposed through ``amp_handle`` are now free
functions accessible through the ``amp`` module.

The backward context manager must be changed accordingly::

    # old API
    with amp_handle.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    ->
    # new API
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()

For now, the deprecated "Amp" API documentation can still be found on the Github README:  https://github.com/NVIDIA/apex/tree/master/apex/amp.  The old API calls that `annotate user functions`_ to run
with a particular precision are still honored by the new API.

.. _`annotate user functions`:
    https://github.com/NVIDIA/apex/tree/master/apex/amp#annotating-user-functions

For users of the old FP16_Optimizer
***********************************

``opt_level="O2"`` is equivalent to :class:`FP16_Optimizer` with ``dynamic_loss_scale=True``.
Once again, the backward pass must be changed to the unified version::

    optimizer.backward(loss)
    ->
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()

One annoying aspect of FP16_Optimizer was that the user had to manually convert their model to half
(either by calling ``.half()`` on it, or using a function or module wrapper from
``apex.fp16_utils``), and also manually call ``.half()`` on input data.  **Neither of these is
necessary in the new API.  No matter what opt_level
you choose, you can and should simply build your model and pass input data in the default FP32 format.**
The new Amp API will perform the right conversions during
``model, optimizer = amp.initialize(model, optimizer, opt_level=...)`` based on the ``opt_level``
and any overridden flags.  Floating point input data may be FP32 or FP16, but you may as well just
let it be FP32, because the ``model`` returned by ``amp.initialize`` will have its ``forward``
method patched to cast the input data appropriately.