- sections:
  - local: index
    title: 🤗 Accelerate
  - local: basic_tutorials/install
    title: Installation
  - local: quicktour
    title: Quicktour
  title: Getting started
- sections:
  - local: basic_tutorials/overview
    title: Overview
  - local: basic_tutorials/migration
    title: Migrating to 🤗 Accelerate
  - local: basic_tutorials/launch
    title: Launching distributed code
  - local: basic_tutorials/notebook
    title: Launching distributed training from Jupyter Notebooks
  - local: basic_tutorials/troubleshooting
    title: Troubleshooting guide
  title: Tutorials
- sections:
  - local: usage_guides/explore
    title: Start Here!
  - local: usage_guides/training_zoo
    title: Example Zoo
  - local: usage_guides/big_modeling
    title: How to perform inference on large models with small resources
  - local: usage_guides/model_size_estimator
    title: Knowing how big of a model you can fit into memory
  - local: usage_guides/quantization
    title: How to quantize a model
  - local: usage_guides/distributed_inference
    title: How to perform distributed inference with normal resources
  - local: usage_guides/gradient_accumulation
    title: Performing gradient accumulation
  - local: usage_guides/local_sgd
    title: Accelerating training with local SGD
  - local: usage_guides/checkpoint
    title: Saving and loading training states
  - local: usage_guides/tracking
    title: Using experiment trackers
  - local: usage_guides/mps
    title: How to use Apple Silicon M1 GPUs
  - local: usage_guides/low_precision_training
    title: How to train in low precision (FP8)
  - local: usage_guides/deepspeed
    title: How to use DeepSpeed
  - local: usage_guides/fsdp
    title: How to use Fully Sharded Data Parallelism
  - local: usage_guides/megatron_lm
    title: How to use Megatron-LM
  - local: usage_guides/sagemaker
    title: How to use 🤗 Accelerate with SageMaker
  - local: usage_guides/ipex
    title: How to use 🤗 Accelerate with Intel® Extension for PyTorch for CPU
  title: How-To Guides
- sections:
  - local: concept_guides/internal_mechanism
    title: 🤗 Accelerate's internal mechanism
  - local: concept_guides/big_model_inference
    title: Loading big models into memory
  - local: concept_guides/performance
    title: Comparing performance across distributed setups
  - local: concept_guides/deferring_execution
    title: Executing and deferring jobs
  - local: concept_guides/gradient_synchronization
    title: Gradient synchronization
  - local: concept_guides/low_precision_training
    title: How training in low-precision environments is possible (FP8)
  - local: concept_guides/training_tpu
    title: TPU best practices
  title: Concepts and fundamentals
- sections:
  - local: package_reference/accelerator
    title: Main Accelerator class
  - local: package_reference/state
    title: Stateful configuration classes
  - local: package_reference/cli
    title: The Command Line
  - local: package_reference/torch_wrappers
    title: Torch wrapper classes
  - local: package_reference/tracking
    title: Experiment trackers
  - local: package_reference/launchers
    title: Distributed launchers
  - local: package_reference/deepspeed
    title: DeepSpeed utilities
  - local: package_reference/logging
    title: Logging
  - local: package_reference/big_modeling
    title: Working with large models
  - local: package_reference/kwargs
    title: Kwargs handlers
  - local: package_reference/utilities
    title: Utility functions and classes
  - local: package_reference/megatron_lm
    title: Megatron-LM Utilities
  - local: package_reference/fsdp
    title: Fully Sharded Data Parallelism Utilities
  title: Reference