.. role:: hidden
    :class: hidden-section

apex.fp16_utils
===================================

This submodule contains utilities designed to streamline the mixed precision training recipe
presented by NVIDIA `on Parallel Forall`_ and in the GTC 2018 sessions
`Training Neural Networks with Mixed Precision: Theory and Practice`_ and
`Training Neural Networks with Mixed Precision: Real Examples`_.
For PyTorch users, Real Examples in particular is recommended.
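
The core of that recipe (an fp32 "master" copy of the weights, updated through a loss that was multiplied by a scale factor before backward) can be sketched framework-agnostically. Everything below, including ``mixed_precision_step``, is illustrative plain Python rather than apex's API:

```python
# Framework-agnostic sketch of one mixed-precision training step:
# gradients were computed on (loss * loss_scale), so they are divided
# by loss_scale before updating the fp32 master weights.
# All names here are illustrative, not part of apex.

def mixed_precision_step(master_params, model_grads, lr, loss_scale):
    """Apply one plain-SGD step to fp32 master weights from scaled grads."""
    new_masters = []
    for p, g in zip(master_params, model_grads):
        unscaled_g = g / loss_scale  # undo the loss scaling
        new_masters.append(p - lr * unscaled_g)
    # a real recipe would now copy new_masters back into the fp16 model weights
    return new_masters

# Example: a "true" gradient of 0.5 arrives scaled by 128 from backward.
masters = mixed_precision_step([1.0], [0.5 * 128], lr=0.1, loss_scale=128.0)
```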

Full runnable Python scripts demonstrating ``apex.fp16_utils``
can be found on the GitHub page:

| `Simple FP16_Optimizer demos`_
| `Distributed Mixed Precision Training with imagenet`_
| `Mixed Precision Training with word_language_model`_

.. _`on Parallel Forall`:
    https://devblogs.nvidia.com/mixed-precision-training-deep-neural-networks/
.. _`Training Neural Networks with Mixed Precision: Theory and Practice`:
    http://on-demand.gputechconf.com/gtc/2018/video/S8923/
.. _`Training Neural Networks with Mixed Precision: Real Examples`:
    http://on-demand.gputechconf.com/gtc/2018/video/S81012/
.. _`Simple FP16_Optimizer demos`:
    https://github.com/NVIDIA/apex/tree/master/examples/FP16_Optimizer_simple
.. _`Distributed Mixed Precision Training with imagenet`:
    https://github.com/NVIDIA/apex/tree/master/examples/imagenet
.. _`Mixed Precision Training with word_language_model`:
    https://github.com/NVIDIA/apex/tree/master/examples/word_language_model

.. automodule:: apex.fp16_utils
.. currentmodule:: apex.fp16_utils

Automatic management of master params + loss scaling
----------------------------------------------------

.. autoclass:: FP16_Optimizer
    :members:

.. autoclass:: LossScaler
    :members:

.. autoclass:: DynamicLossScaler
    :members:
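
A dynamic loss scaler adjusts the scale from observed overflows: the scale is cut when an inf/nan gradient appears and grown after a run of overflow-free steps. A minimal pure-Python sketch of that policy (not apex's actual implementation; the class name and defaults here are illustrative):

```python
import math

class ToyDynamicLossScaler:
    """Illustrative dynamic loss-scaling policy (not apex's implementation).

    The scale is divided by scale_factor on overflow and multiplied by
    scale_factor after scale_window consecutive overflow-free steps.
    """

    def __init__(self, init_scale=2.0**15, scale_factor=2.0, scale_window=2000):
        self.loss_scale = init_scale
        self.scale_factor = scale_factor
        self.scale_window = scale_window
        self._good_steps = 0  # consecutive steps without overflow

    def has_overflow(self, grads):
        # gradients are plain floats here; real code inspects tensors
        return any(math.isinf(g) or math.isnan(g) for g in grads)

    def update_scale(self, overflow):
        if overflow:
            self.loss_scale /= self.scale_factor  # back off after overflow
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps % self.scale_window == 0:
                self.loss_scale *= self.scale_factor  # probe a larger scale
```

On an overflow step the optimizer update is skipped entirely, since the gradients are unusable; only the scale is adjusted.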

Manual master parameter management
----------------------------------

.. autofunction:: prep_param_lists

.. autofunction:: master_params_to_model_params

.. autofunction:: model_grads_to_master_grads
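
Together these three functions implement the manual recipe: build fp32 master copies once, push the model's gradients into the masters each iteration, run the optimizer on the masters, then copy the updated masters back into the model. A toy sketch of that data flow with plain floats standing in for fp16/fp32 tensors (the ``*_sketch`` helpers are illustrative, not apex's implementations):

```python
# Toy sketch of the manual master-parameter recipe. Plain floats stand in
# for fp16 model tensors and fp32 master tensors; the *_sketch helpers are
# illustrative analogues, not apex's implementations.

def prep_param_lists_sketch(model_params):
    # create fp32 "master" copies of the (conceptually fp16) model params
    master_params = [float(p) for p in model_params]
    return model_params, master_params

def model_grads_to_master_grads_sketch(model_grads):
    # copy/cast each model gradient into an fp32 master gradient
    return [float(g) for g in model_grads]

def master_params_to_model_params_sketch(model_params, master_params):
    # copy updated master weights back into the model
    # (real code would round fp32 back down to fp16 here)
    for i, m in enumerate(master_params):
        model_params[i] = m

# One iteration of the recipe, using a plain SGD update on the masters.
model_p = [1.0, 2.0]
model_p, master_p = prep_param_lists_sketch(model_p)
master_g = model_grads_to_master_grads_sketch([0.5, -0.5])
master_p = [p - 0.1 * g for p, g in zip(master_p, master_g)]
master_params_to_model_params_sketch(model_p, master_p)
```

The point of the fp32 masters is that small updates (``lr * grad``) that would round to zero when added to an fp16 weight still accumulate correctly in fp32.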