<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Fully Sharded Data Parallel

To accelerate training huge models on larger batch sizes, we can use a fully sharded data parallel model.
This type of data parallel paradigm enables fitting more data and larger models by sharding the optimizer states, gradients and parameters.
To read more about it and its benefits, check out the [Fully Sharded Data Parallel blog](https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/).
We have integrated the latest PyTorch Fully Sharded Data Parallel (FSDP) training feature.
All you need to do is enable it through the config.
## How it works out of the box

On your machine(s) just run:

```bash
accelerate config
```

and answer the questions asked. This will generate a config file that will be used automatically to properly set the
default options when doing

```bash
accelerate launch my_script.py --args_to_my_script
```
For instance, here is how you would run the NLP example (from the root of the repo) with FSDP enabled. First, the config file generated by `accelerate config`:

```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch_policy: BACKWARD_PRE
  fsdp_offload_params: false
  fsdp_sharding_strategy: 1
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_transformer_layer_cls_to_wrap: GPT2Block
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
use_cpu: false
```
Then, launch the example with this config:

```bash
accelerate launch examples/nlp_example.py
```
Currently, `Accelerate` supports the following config through the CLI:

```bash
`Sharding Strategy`: [1] FULL_SHARD (shards optimizer states, gradients and parameters), [2] SHARD_GRAD_OP (shards optimizer states and gradients), [3] NO_SHARD
`Offload Params`: Decides whether to offload parameters and gradients to CPU
`Auto Wrap Policy`: [1] TRANSFORMER_BASED_WRAP, [2] SIZE_BASED_WRAP, [3] NO_WRAP
`Transformer Layer Class to Wrap`: When using `TRANSFORMER_BASED_WRAP`, user specifies the transformer layer class name (case-sensitive) to wrap, e.g., `BertLayer`, `GPTJBlock`, `T5Block`...
`Min Num Params`: minimum number of parameters for a module to be wrapped when using `SIZE_BASED_WRAP`
`Backward Prefetch`: [1] BACKWARD_PRE, [2] BACKWARD_POST, [3] NO_PREFETCH
`State Dict Type`: [1] FULL_STATE_DICT, [2] LOCAL_STATE_DICT, [3] SHARDED_STATE_DICT
```
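For example, a size-based auto-wrapping setup could be written in the generated config file as in the sketch below. The `fsdp_min_num_params` key is an assumption mirroring the `Min Num Params` CLI option above; verify the key names against the config that `accelerate config` actually generates for your installed version:

```yaml
fsdp_config:
  fsdp_auto_wrap_policy: SIZE_BASED_WRAP
  # assumed key mirroring the `Min Num Params` CLI option:
  # wrap sub-modules with at least 1e8 parameters
  fsdp_min_num_params: 100000000
  fsdp_offload_params: true
  fsdp_sharding_strategy: 2 # SHARD_GRAD_OP
```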
## A few caveats to be aware of

- PyTorch FSDP auto wraps sub-modules, flattens the parameters and shards the parameters in place.
Due to this, any optimizer created before model wrapping gets broken and occupies more memory.
Hence, it is highly recommended and efficient to prepare the model before creating the optimizer.
In the case of a single model, `Accelerate` will automatically wrap the model and create an optimizer for you, with the warning message:

> FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer

However, below is the recommended way to prepare the model and optimizer while using FSDP:
```diff
  model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", return_dict=True)
+ model = accelerator.prepare(model)
  optimizer = torch.optim.AdamW(params=model.parameters(), lr=lr)
- model, optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
-     model, optimizer, train_dataloader, eval_dataloader, lr_scheduler
- )
+ optimizer, train_dataloader, eval_dataloader, lr_scheduler = accelerator.prepare(
+     optimizer, train_dataloader, eval_dataloader, lr_scheduler
+ )
```
- In case of a single model, if you have created the optimizer with multiple parameter groups and called prepare with them together,
then the parameter groups will be lost and the following warning is displayed:

> FSDP Warning: When using FSDP, several parameter groups will be conflated into
> a single one due to nested module wrapping and parameter flattening.

This is because parameter groups created before wrapping will have no meaning post wrapping, due to parameter flattening of nested FSDP modules into 1D arrays (which can consist of many layers).
For instance, below are the named parameters of an FSDP model on GPU 0 when using 2 GPUs: around 55M (110M/2) params are in 1D arrays, as this rank holds the 1st shard of the parameters.
Here, if one had applied no weight decay to the `bias` and `LayerNorm.weight` named parameters of the unwrapped BERT model,
it can't be applied to the below FSDP-wrapped model, as there are no named parameters containing either of those strings and
the parameters of those layers are concatenated with parameters of various other layers.
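These flattened names can be printed with a small sketch like the following, assuming `model` is the FSDP-wrapped model returned by `accelerator.prepare`:

```python
# Print the flattened FSDP parameter names and their shapes (e.g. on rank 0).
print({name: param.shape for name, param in model.named_parameters()})
```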
```
{
  '_fsdp_wrapped_module.flat_param': torch.Size([494209]),
  '_fsdp_wrapped_module._fpw_module.bert.embeddings.word_embeddings._fsdp_wrapped_module.flat_param': torch.Size([11720448]),
  '_fsdp_wrapped_module._fpw_module.bert.encoder._fsdp_wrapped_module.flat_param': torch.Size([42527232])
}
```
- In case of multiple models, it is necessary to prepare the models before creating the optimizers, otherwise an error will be thrown.
Then pass the optimizers to the prepare call in the same order as the corresponding models, otherwise `accelerator.save_state()` and `accelerator.load_state()` will result in wrong/unexpected behaviour; see the sketch after this list.
- This feature is incompatible with `--predict_with_generate` in the `run_translation.py` script of the 🤗 `Transformers` library.
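As referenced above, here is a minimal sketch of the multi-model pattern. It assumes `accelerator`, `model1`, `model2`, and `lr` are already defined:

```python
import torch

# Wrap each model first, then build its optimizer from the *wrapped*
# parameters; finally prepare the optimizers in the same order as the models.
model1 = accelerator.prepare(model1)
optimizer1 = torch.optim.AdamW(model1.parameters(), lr=lr)

model2 = accelerator.prepare(model2)
optimizer2 = torch.optim.AdamW(model2.parameters(), lr=lr)

optimizer1, optimizer2 = accelerator.prepare(optimizer1, optimizer2)
```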
For more control, users can leverage the `FullyShardedDataParallelPlugin`. After creating an instance of this class, users can pass it to the `Accelerator` class instantiation.
For more information on these options, please refer to the PyTorch [FullyShardedDataParallel](https://github.com/pytorch/pytorch/blob/0df2e863fbd5993a7b9e652910792bd21a516ff3/torch/distributed/fsdp/fully_sharded_data_parallel.py#L236) code.
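For example, here is a minimal sketch of passing the plugin with its default settings; the constructor also exposes fields mirroring the CLI options above (sharding strategy, CPU offload, auto wrap policy, ...), so check its signature in your installed version for the exact argument names:

```python
from accelerate import Accelerator, FullyShardedDataParallelPlugin

# Build the plugin; its constructor fields mirror the CLI options above.
fsdp_plugin = FullyShardedDataParallelPlugin()

# Pass the plugin when instantiating the Accelerator instead of relying
# on the FSDP values from `accelerate config`.
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
```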