| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/EditOnGithub.91d95064.js"><!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{"title":"Model training anatomy","local":"model-training-anatomy","sections":[{"title":"Load Model","local":"load-model","sections":[],"depth":2},{"title":"Memory utilization at vanilla training","local":"memory-utilization-at-vanilla-training","sections":[],"depth":2},{"title":"Anatomy of Model’s Operations","local":"anatomy-of-models-operations","sections":[],"depth":2},{"title":"Anatomy of Model’s Memory","local":"anatomy-of-models-memory","sections":[],"depth":2}],"depth":1}"><!-- HEAD_svelte-u9bgzb_END --> <p></p> <h1 class="relative group"><a id="model-training-anatomy" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#model-training-anatomy"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Model training anatomy</span></h1> <p data-svelte-h="svelte-hnkrmx">To understand performance optimization techniques that one can apply to improve efficiency of model training | |
speed and memory utilization, it’s helpful to get familiar with how the GPU is utilized during training, and how compute
intensity varies depending on the operation performed.</p> <p data-svelte-h="svelte-1senzsq">Let’s start by exploring a motivating example of GPU utilization and the training run of a model. For the demonstration,
we’ll need to install a few libraries:</p> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START -->pip install transformers datasets accelerate nvidia-ml-py3<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-12yc6mf">The <code>nvidia-ml-py3</code> library allows us to monitor the memory usage of the models from within Python. You might be familiar
with the <code>nvidia-smi</code> command in the terminal - this library allows us to access the same information directly in Python.</p> <p data-svelte-h="svelte-1jp5ro8">Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier.
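As a quick aside, the raw dummy inputs themselves are tiny, so they won’t explain the memory usage we measure later. A minimal sketch of their size, assuming the token IDs are stored as 64-bit integers (NumPy’s default integer type on most platforms):

```python
# Rough size of the dummy inputs: 512 sequences of 512 tokens each,
# at 8 bytes per token ID (assuming a 64-bit integer dtype).
seq_len, dataset_size = 512, 512
bytes_per_token_id = 8
total_mb = dataset_size * seq_len * bytes_per_token_id / 1024**2
print(f"Raw input_ids: {total_mb:.0f} MB")  # ~2 MB
```

So the inputs account for only a couple of megabytes; the multi-gigabyte numbers below come from the model, optimizer states and activations.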
In total, we get 512 sequences each with length 512 and store them in a <a href="https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset" rel="nofollow">Dataset</a> with PyTorch format.</p> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
<span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> datasets <span class="hljs-keyword">import</span> Dataset
<span class="hljs-meta">&gt;&gt;&gt; </span>seq_len, dataset_size = <span class="hljs-number">512</span>, <span class="hljs-number">512</span>
<span class="hljs-meta">&gt;&gt;&gt; </span>dummy_data = {
<span class="hljs-meta">... </span>    <span class="hljs-string">"input_ids"</span>: np.random.randint(<span class="hljs-number">100</span>, <span class="hljs-number">30000</span>, (dataset_size, seq_len)),
<span class="hljs-meta">... </span>    <span class="hljs-string">"labels"</span>: np.random.randint(<span class="hljs-number">0</span>, <span class="hljs-number">2</span>, (dataset_size)),
<span class="hljs-meta">... </span>}
<span class="hljs-meta">&gt;&gt;&gt; </span>ds = Dataset.from_dict(dummy_data)
<span class="hljs-meta">&gt;&gt;&gt; </span>ds.set_format(<span class="hljs-string">"pt"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1870dpn">To print summary statistics for the GPU utilization and the training run with the <a href="/docs/transformers/main/en/main_classes/trainer#transformers.Trainer">Trainer</a> we define two helper functions:</p> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">from</span> pynvml <span class="hljs-keyword">import</span> *
<span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">print_gpu_utilization</span>():
<span class="hljs-meta">... </span>    nvmlInit()
<span class="hljs-meta">... </span>    handle = nvmlDeviceGetHandleByIndex(<span class="hljs-number">0</span>)
<span class="hljs-meta">... </span>    info = nvmlDeviceGetMemoryInfo(handle)
<span class="hljs-meta">... </span>    <span class="hljs-built_in">print</span>(<span class="hljs-string">f"GPU memory occupied: <span class="hljs-subst">{info.used//<span class="hljs-number">1024</span>**<span class="hljs-number">2</span>}</span> MB."</span>)
<span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">def</span> <span class="hljs-title function_">print_summary</span>(<span class="hljs-params">result</span>):
<span class="hljs-meta">... </span>    <span class="hljs-built_in">print</span>(<span class="hljs-string">f"Time: <span class="hljs-subst">{result.metrics[<span class="hljs-string">'train_runtime'</span>]:<span class="hljs-number">.2</span>f}</span>"</span>)
<span class="hljs-meta">... </span>    <span class="hljs-built_in">print</span>(<span class="hljs-string">f"Samples/second: <span class="hljs-subst">{result.metrics[<span class="hljs-string">'train_samples_per_second'</span>]:<span class="hljs-number">.2</span>f}</span>"</span>)
<span class="hljs-meta">... </span>    print_gpu_utilization()<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-ync61x">Let’s verify that we start with free GPU memory:</p> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span>print_gpu_utilization()
GPU memory occupied: <span class="hljs-number">0</span> MB.<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-5czace">That looks good: the GPU memory is not occupied, as we would expect before we load any models. If that’s not the case on
your machine, make sure to stop all processes that are using GPU memory. Note, however, that not all free GPU memory can be used by
the user. When a model is loaded onto the GPU, the CUDA kernels are also loaded, which can take up 1-2 GB of memory. To see how
much memory this takes, we load a tiny tensor onto the GPU, which triggers the kernels to be loaded as well.</p> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">&gt;&gt;&gt; </span><span class="hljs-keyword">import</span> torch
<span class="hljs-meta">&gt;&gt;&gt; </span>torch.ones((<span class="hljs-number">1</span>, <span class="hljs-number">1</span>)).to(<span class="hljs-string">"cuda"</span>)
<span class="hljs-meta">&gt;&gt;&gt; </span>print_gpu_utilization()
| GPU memory occupied: <span class="hljs-number">1343</span> MB.<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-i7swzw">We see that the kernels alone take up 1.3GB of GPU memory. Now let’s see how much space the model uses.</p> <h2 class="relative group"><a id="load-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load Model</span></h2> <p data-svelte-h="svelte-1gubaoi">First, we load the <code>google-bert/bert-large-uncased</code> model. We load the model weights directly to the GPU so that we can check | |
| how much space just the weights use.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoModelForSequenceClassification | |
<span class="hljs-meta">&gt;&gt;&gt; </span>model = AutoModelForSequenceClassification.from_pretrained(<span class="hljs-string">"google-bert/bert-large-uncased"</span>).to(<span class="hljs-string">"cuda"</span>)
<span class="hljs-meta">&gt;&gt;&gt; </span>print_gpu_utilization()
GPU memory occupied: <span class="hljs-number">2631</span> MB.<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-11jwi8m">We can see that the model weights alone take up 1.3 GB of GPU memory. The exact number depends on the specific
GPU you are using. Note that on newer GPUs a model can sometimes take up more space, since the weights are loaded in an
optimized fashion that speeds up model usage. Now we can also quickly check whether we get the same result
as with the <code>nvidia-smi</code> CLI:</p> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START -->nvidia-smi<!-- HTML_TAG_END --></pre></div> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START -->Tue Jan 11 08:58:05 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:04.0 Off |                    0 |
| N/A   37C    P0    39W / 300W |   2631MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3721      C   ...nvs/codeparrot/bin/python    2629MiB  |
+-----------------------------------------------------------------------------+<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-3jv7wr">We get the same number as before, and you can also see that we are using a V100 GPU with 16 GB of memory. So now we can
| start training the model and see how the GPU memory consumption changes. First, we set up a few standard training | |
arguments:</p> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START -->default_args = {
    <span class="hljs-string">"output_dir"</span>: <span class="hljs-string">"tmp"</span>,
    <span class="hljs-string">"eval_strategy"</span>: <span class="hljs-string">"steps"</span>,
    <span class="hljs-string">"num_train_epochs"</span>: <span class="hljs-number">1</span>,
    <span class="hljs-string">"log_level"</span>: <span class="hljs-string">"error"</span>,
    <span class="hljs-string">"report_to"</span>: <span class="hljs-string">"none"</span>,
| }<!-- HTML_TAG_END --></pre></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1v4397i">If you plan to run multiple experiments, in order to properly clear the memory between experiments, restart the Python | |
| kernel between experiments.</p></div> <h2 class="relative group"><a id="memory-utilization-at-vanilla-training" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#memory-utilization-at-vanilla-training"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Memory utilization at vanilla training</span></h2> <p data-svelte-h="svelte-1y7sauk">Let’s use the <a href="/docs/transformers/main/en/main_classes/trainer#transformers.Trainer">Trainer</a> and train the model without using any GPU performance optimization techniques and a batch size of 4:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" 
transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> TrainingArguments, Trainer, logging | |
<span class="hljs-meta">&gt;&gt;&gt; </span>logging.set_verbosity_error()
<span class="hljs-meta">&gt;&gt;&gt; </span>training_args = TrainingArguments(per_device_train_batch_size=<span class="hljs-number">4</span>, **default_args)
<span class="hljs-meta">&gt;&gt;&gt; </span>trainer = Trainer(model=model, args=training_args, train_dataset=ds)
<span class="hljs-meta">&gt;&gt;&gt; </span>result = trainer.train()
<span class="hljs-meta">&gt;&gt;&gt; </span>print_summary(result)<!-- HTML_TAG_END --></pre></div> <div class="code-block relative"><pre class=""><!-- HTML_TAG_START -->Time: 57.82
Samples/second: 8.86
GPU memory occupied: 14949 MB.<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-17rqaak">We see that even a relatively small batch size almost fills up our GPU’s entire memory. However, a larger batch size
can often result in faster model convergence or better end performance. So ideally we want to tune the batch size to our
model’s needs and not to the GPU limitations. What’s interesting is that we use much more memory than the size of the model.
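To get a feel for where that memory goes, here is a back-of-the-envelope sketch. It uses the per-parameter byte counts for mixed-precision AdamW training discussed later in this document, and assumes <code>bert-large-uncased</code> has roughly 336M parameters (an approximate figure):

```python
# Estimated steady-state training memory for a ~336M-parameter model
# trained in mixed precision with AdamW (activations excluded).
params = 336_000_000           # assumed parameter count for bert-large-uncased
weights = 6 * params           # fp32 master weights + fp16 working copy
gradients = 4 * params         # gradients are kept in fp32
optimizer_states = 8 * params  # two fp32 AdamW states per parameter
total_mb = (weights + gradients + optimizer_states) // 1024**2
print(f"~{total_mb} MB for weights, gradients and optimizer states")
```

Together with the ~1.3 GB of CUDA kernels, this accounts for roughly half of the 14949 MB we observed; most of the remainder is forward activations, whose size grows with batch size and sequence length.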
To understand a bit better why this is the case, let’s have a look at a model’s operations and memory needs.</p> <h2 class="relative group"><a id="anatomy-of-models-operations" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#anatomy-of-models-operations"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Anatomy of Model’s Operations</span></h2> <p data-svelte-h="svelte-1yxwgxq">The Transformer architecture includes three main groups of operations, grouped below by compute intensity.</p> <ol data-svelte-h="svelte-6rh9nr"><li><p><strong>Tensor Contractions</strong></p> <p>Linear layers and components of Multi-Head Attention all do batched <strong>matrix-matrix multiplications</strong>. These operations are the most compute-intensive part of training a transformer.</p></li> <li><p><strong>Statistical Normalizations</strong></p> <p>Softmax and layer normalization are less compute-intensive than tensor contractions, and involve one or more <strong>reduction operations</strong>, the result of which is then applied via a map.</p></li> <li><p><strong>Element-wise Operators</strong></p> <p>These are the remaining operators: <strong>biases, dropout, activations, and residual connections</strong>. These are the least compute-intensive operations.</p></li></ol> <p data-svelte-h="svelte-133lz8w">This knowledge can be helpful when analyzing performance bottlenecks.</p> <p data-svelte-h="svelte-kuotmr">This summary is derived from <a href="https://arxiv.org/abs/2007.00072" rel="nofollow">Data Movement Is All You Need: A Case Study on Optimizing Transformers (2020)</a>.</p> <h2 class="relative group"><a id="anatomy-of-models-memory" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#anatomy-of-models-memory"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Anatomy of Model’s Memory</span></h2> <p
data-svelte-h="svelte-1754txj">We’ve seen that training the model uses much more memory than just putting the model on the GPU. This is because there are many components during training that use GPU memory. The components on GPU memory are the following:</p> <ol data-svelte-h="svelte-sugems"><li>model weights</li> <li>optimizer states</li> <li>gradients</li> <li>forward activations saved for gradient computation</li> <li>temporary buffers</li> <li>functionality-specific memory</li></ol> <p data-svelte-h="svelte-fr1nan">A typical model trained in mixed precision with AdamW requires 18 bytes per model parameter plus activation memory. For inference there are no optimizer states or gradients, so we can subtract those, which leaves us with 6 bytes per model parameter for mixed precision inference, plus activation memory.</p> <p data-svelte-h="svelte-gkt5oe">Let’s look at the details.</p> <p data-svelte-h="svelte-xrqy2v"><strong>Model Weights:</strong></p> <ul data-svelte-h="svelte-onmuxe"><li>4 bytes * number of parameters for fp32 training</li> <li>6 bytes * number of parameters for mixed precision training (maintains a model in fp32 and one in fp16 in memory)</li></ul> <p data-svelte-h="svelte-1lt6jd6"><strong>Optimizer States:</strong></p> <ul data-svelte-h="svelte-1da4la2"><li>8 bytes * number of parameters for normal AdamW (maintains 2 states)</li> <li>2 bytes * number of parameters for 8-bit AdamW optimizers like <a href="https://github.com/TimDettmers/bitsandbytes" rel="nofollow">bitsandbytes</a></li> <li>4 bytes * number of parameters for optimizers like SGD with momentum (maintains only 1 state)</li></ul> <p data-svelte-h="svelte-1ecbalg"><strong>Gradients</strong></p> <ul data-svelte-h="svelte-4g0r46"><li>4 bytes * number of parameters for either fp32 or mixed precision training (gradients are always kept in fp32)</li></ul> <p data-svelte-h="svelte-2pm1hh"><strong>Forward Activations</strong></p> <ul data-svelte-h="svelte-if0ig5"><li>size depends on many factors, the key ones being sequence length, hidden size and batch size.</li></ul> <p data-svelte-h="svelte-1nzmdyx">There are also the inputs and outputs passed and returned by the forward and backward functions, as well as the forward activations saved for gradient computation.</p> <p data-svelte-h="svelte-1wp748v"><strong>Temporary Memory</strong></p> <p data-svelte-h="svelte-1wjdkzj">Additionally, there are all kinds of temporary variables that are released once the calculation is done, but in the moment they can require additional memory and push you to OOM. Therefore, when coding, it’s crucial to think strategically about such temporary variables and sometimes to explicitly free them as soon as they are no longer needed.</p> <p data-svelte-h="svelte-1fhakxc"><strong>Functionality-specific memory</strong></p> <p data-svelte-h="svelte-l8a4kr">Then, your software could have special memory needs. For example, when generating text using beam search, the software needs to maintain multiple copies of inputs and outputs.</p> <p data-svelte-h="svelte-1mll0j9"><strong><code>forward</code> vs <code>backward</code> Execution Speed</strong></p> <p data-svelte-h="svelte-10o41gl">For convolutions and linear layers there are 2x flops in the backward compared to the forward, which generally translates into the backward pass being ~2x slower (sometimes more, because sizes in the backward tend to be more awkward). Activations are usually bandwidth-limited, and it’s typical for an activation to have to read more data in the backward than in the forward (e.g. the activation forward reads once and writes once, while the activation backward reads twice, gradOutput and the output of the forward, and writes once, gradInput).</p> <p data-svelte-h="svelte-a36np0">As you can see, there are potentially a few places where we could save GPU memory or speed up operations. Now that you understand what affects GPU utilization and computation speed, refer to the <a href="perf_train_gpu_one">Methods and tools for efficient training on a single GPU</a> documentation page to learn about performance optimization techniques.</p> <p></p>
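As a quick sanity check on the byte counts above, the per-parameter arithmetic can be sketched as a small estimator. This is only a back-of-the-envelope sketch, and `estimate_memory_gb` is a hypothetical helper, not a Transformers API; it deliberately ignores activation memory, temporary buffers, and CUDA kernel overhead, all of which add to the real footprint.

```python
# Back-of-the-envelope memory estimate for mixed precision training with AdamW,
# using the per-parameter byte counts listed above. Activation memory is left
# out on purpose: it depends on sequence length, hidden size, and batch size.

def estimate_memory_gb(num_params: int, training: bool = True) -> float:
    weights = 6                       # fp32 master copy (4) + fp16 copy (2)
    gradients = 4 if training else 0  # gradients are always kept in fp32
    optimizer = 8 if training else 0  # AdamW keeps 2 fp32 states per parameter
    bytes_per_param = weights + gradients + optimizer  # 18 training, 6 inference
    return num_params * bytes_per_param / 2**30

# Example: a 124M-parameter model (roughly the size of gpt2)
print(f"training:  {estimate_memory_gb(124_000_000):.1f} GB")
print(f"inference: {estimate_memory_gb(124_000_000, training=False):.1f} GB")
```

In practice the number reported by the GPU will be higher, since the CUDA context and loaded kernels alone typically occupy on the order of 1–2 GB before any tensors are allocated.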
<script>
	{
		__sveltekit_1xexzbk = {
			assets: "/docs/transformers/main/en",
			base: "/docs/transformers/main/en",
			env: {}
		};

		const element = document.currentScript.parentElement;

		const data = [null,null];

		Promise.all([
			import("/docs/transformers/main/en/_app/immutable/entry/start.2135b7e6.js"),
			import("/docs/transformers/main/en/_app/immutable/entry/app.24372c84.js")
		]).then(([kit, app]) => {
			kit.start(app, element, {
				node_ids: [0, 357],
				data,
				form: null,
				error: null
			});
		});
	}
</script>