Upload apex-master/examples/simple/distributed/README.md with huggingface_hub
**distributed_data_parallel.py** and **run.sh** show an example using Amp with
[apex.parallel.DistributedDataParallel](https://nvidia.github.io/apex/parallel.html) or
[torch.nn.parallel.DistributedDataParallel](https://pytorch.org/docs/stable/nn.html#distributeddataparallel)
and the PyTorch multiprocess launcher script,
[torch.distributed.launch](https://pytorch.org/docs/master/distributed.html#launch-utility).
Using `Amp` with DistributedDataParallel requires no changes from ordinary
single-process use. The only gotcha is that wrapping your model with `DistributedDataParallel` must
come after the call to `amp.initialize`. Test via
```bash
bash run.sh
```

**This is intended purely as an instructional example, not a performance showcase.**
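The required ordering can be sketched as follows. This is a minimal, single-process illustration (not the repository's actual script): it runs `torch.nn.parallel.DistributedDataParallel` on CPU with the `gloo` backend and `world_size=1` so it works without GPUs or a launcher; in real multi-GPU use, `torch.distributed.launch` sets the rendezvous environment variables and spawns one process per GPU. The comment marks where `amp.initialize` would go, before the DDP wrap, as the README requires.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# Single-process setup for illustration only. Normally torch.distributed.launch
# sets MASTER_ADDR/MASTER_PORT (and RANK/WORLD_SIZE) for each spawned process.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 2)  # stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# With apex.amp, this is where
#   model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
# would go -- BEFORE the DistributedDataParallel wrap below.
ddp_model = DistributedDataParallel(model)

out = ddp_model(torch.randn(8, 4))

dist.destroy_process_group()
```

The only point that matters here is the order of the last two steps: initialize Amp first, then wrap the returned model in `DistributedDataParallel`.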