| <! |
|
|
| Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with |
| the License. You may obtain a copy of the License at |
|
|
| http://www.apache.org/licenses/LICENSE-2.0 |
|
|
| Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on |
| an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the |
specific language governing permissions and limitations under the License.
-->
|
|
# Performing gradient accumulation with 🤗 Accelerate
|
|
Gradient accumulation is a technique that lets you train with a larger effective batch size than
your machine can normally fit into memory. It works by accumulating gradients over
several batches, and only stepping the optimizer after a set number of batches has been processed.
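
For instance, if your hardware can only fit 16 samples per batch but you want each optimizer update to be computed from 64 samples, you can accumulate gradients over 4 batches. A quick sketch of the arithmetic (the variable names here are purely illustrative):

```python
per_device_batch_size = 16       # the largest batch that fits in memory
gradient_accumulation_steps = 4  # number of batches to accumulate over

# The optimizer only steps once every `gradient_accumulation_steps` batches,
# so each update effectively sees this many samples:
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
assert effective_batch_size == 64
```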
|
|
While standard gradient accumulation code would technically work fine in a distributed setup, it is not the most efficient
way to do so and you may experience considerable slowdowns!
|
|
In this tutorial you will see how to quickly set up gradient accumulation and perform it with the utilities provided in 🤗 Accelerate,
which can add up to just one new line of code!
|
|
This example will use a very simple PyTorch training loop that performs gradient accumulation every two batches:
|
|
| ```python |
device = "cuda"
model.to(device)

gradient_accumulation_steps = 2

for index, batch in enumerate(training_dataloader):
    inputs, targets = batch
    inputs = inputs.to(device)
    targets = targets.to(device)
    outputs = model(inputs)
    loss = loss_function(outputs, targets)
    loss = loss / gradient_accumulation_steps
    loss.backward()
    if (index + 1) % gradient_accumulation_steps == 0:
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
| ``` |
|
|
## Converting it to 🤗 Accelerate
|
|
First, the code shown earlier will be converted to use 🤗 Accelerate without the special gradient accumulation helper:
|
|
| ```diff |
+ from accelerate import Accelerator
+ accelerator = Accelerator()

+ model, optimizer, training_dataloader, scheduler = accelerator.prepare(
+     model, optimizer, training_dataloader, scheduler
+ )

  for index, batch in enumerate(training_dataloader):
      inputs, targets = batch
-     inputs = inputs.to(device)
-     targets = targets.to(device)
      outputs = model(inputs)
      loss = loss_function(outputs, targets)
      loss = loss / gradient_accumulation_steps
+     accelerator.backward(loss)
      if (index + 1) % gradient_accumulation_steps == 0:
          optimizer.step()
          scheduler.step()
          optimizer.zero_grad()
| ``` |
|
|
| <Tip warning={true}> |
|
|
| In its current state, this code is not going to perform gradient accumulation efficiently due to a process called gradient synchronization. Read more about that in the [Concepts tutorial](concept_guides/gradient_synchronization)! |
|
|
| </Tip> |
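
To give a rough intuition for where the slowdown comes from: in a distributed setup, each `loss.backward()` on a `DistributedDataParallel` model triggers a gradient all-reduce across processes, even on batches where the optimizer will not step. Plain PyTorch exposes a `no_sync()` context manager to skip that communication on the intermediate batches. The sketch below uses only standard PyTorch APIs (the `ddp_model` name is an assumption for illustration) and is roughly the bookkeeping that 🤗 Accelerate automates for you in the next section:

```python
import contextlib

# `ddp_model` is assumed to be a torch.nn.parallel.DistributedDataParallel instance
for index, batch in enumerate(training_dataloader):
    inputs, targets = batch
    # Only synchronize gradients across processes on the batch that will step the optimizer
    is_step_batch = (index + 1) % gradient_accumulation_steps == 0
    context = contextlib.nullcontext() if is_step_batch else ddp_model.no_sync()
    with context:
        outputs = ddp_model(inputs)
        loss = loss_function(outputs, targets) / gradient_accumulation_steps
        loss.backward()
    if is_step_batch:
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```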
|
|
## Letting 🤗 Accelerate handle gradient accumulation
|
|
All that is left now is to let 🤗 Accelerate handle the gradient accumulation for us. To do so, pass a `gradient_accumulation_steps` parameter to [`Accelerator`], dictating the number
of steps to perform before each call to `step()` and how the loss should be automatically adjusted during the call to [`~Accelerator.backward`]:
|
|
| ```diff |
  from accelerate import Accelerator
| - accelerator = Accelerator() |
| + accelerator = Accelerator(gradient_accumulation_steps=2) |
| ``` |
|
|
From here you can use the [`~Accelerator.accumulate`] context manager from inside your training loop to automatically perform the gradient accumulation for you!
Simply wrap it around the entire training part of your code:
|
|
| ```diff |
- for index, batch in enumerate(training_dataloader):
+ for batch in training_dataloader:
+     with accelerator.accumulate(model):
          inputs, targets = batch
          outputs = model(inputs)
| ``` |
|
|
| You can remove all the special checks for the step number and the loss adjustment: |
|
|
| ```diff |
- loss = loss / gradient_accumulation_steps
  accelerator.backward(loss)
- if (index + 1) % gradient_accumulation_steps == 0:
  optimizer.step()
  scheduler.step()
  optimizer.zero_grad()
| ``` |
|
|
As you can see, the [`Accelerator`] keeps track of the batch number you are on, so it automatically knows whether to step the prepared optimizer and how to adjust the loss.
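
Because the [`Accelerator`] knows which batches are real update steps, you can also check its `sync_gradients` attribute inside the loop when something should only run on those steps, such as gradient clipping or logging. A short sketch (the `max_norm` value of `1.0` is just an illustrative choice):

```python
for batch in training_dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        # True only on the batches where gradients are synchronized and applied
        if accelerator.sync_gradients:
            accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```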
|
|
## The finished code
|
|
Below is the finished implementation for performing gradient accumulation with 🤗 Accelerate:
|
|
| ```python |
for batch in training_dataloader:
    with accelerator.accumulate(model):
        inputs, targets = batch
        outputs = model(inputs)
        loss = loss_function(outputs, targets)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
| ``` |
|
|
| To learn more about what magic this wraps around, read the [Gradient Synchronization concept guide](/concept_guides/gradient_synchronization) |