# Custom hardware for training

The hardware you use to run model training and inference can have a significant effect on performance. For a deep dive into GPUs, make sure to check out Tim Dettmers' excellent [blog post](https://timdettmers.com/2020/09/07/which-gpu-for-deep-learning/).

Let's have a look at some practical advice for GPU setups.

## GPU

When you train bigger models you have essentially three options:

- bigger GPUs
- more GPUs
- more CPU and NVMe (via [DeepSpeed-Infinity](main_classes/deepspeed#nvme-support))

Let's start with the case where you have a single GPU.

### Power and Cooling

If you bought an expensive high-end GPU, make sure you give it the correct power and sufficient cooling.

**Power**:

Some high-end consumer GPU cards have 2 and sometimes 3 PCI-E 8-pin power sockets. Make sure you have as many independent 12V PCI-E 8-pin cables plugged into the card as there are sockets. Do not use the 2 splits at one end of the same cable (also known as a pigtail cable). That is, if you have 2 sockets on the GPU, you want 2 PCI-E 8-pin cables going from your PSU to the card, and not one cable that has 2 PCI-E 8-pin connectors at the end! You won't get the full performance out of your card otherwise.

Each PCI-E 8-pin power cable needs to be plugged into a 12V rail on the PSU side and can supply up to 150W of power.

Some other cards may use a PCI-E 12-pin connector, and these can deliver up to 500-600W of power.

Low-end cards may use 6-pin connectors, which supply up to 75W of power.

Additionally, you want a high-end PSU that delivers stable voltage. Some lower-quality PSUs may not give the card the stable voltage it needs to function at its peak.

And of course, the PSU needs to have enough unused wattage to power the card.

**Cooling**:

When a GPU overheats it will start throttling down and will not deliver full performance; if the temperature gets too high, it may shorten the GPU's lifespan.

It's hard to tell the exact best temperature to strive for when a GPU is heavily loaded, but probably anything under +80C is good, and lower is better - perhaps 70-75C is an excellent range to be in. Throttling will most likely start at around 84-90C. But besides throttling, prolonged very high temperatures are likely to shorten the lifespan of a GPU.

Next let's have a look at one of the most important aspects of having multiple GPUs: connectivity.

### Multi-GPU connectivity

If you use multiple GPUs, the way the cards are interconnected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run:

```bash
nvidia-smi topo -m
```

and it will tell you how the GPUs are interconnected. On a machine with dual GPUs connected with NVLink, you will most likely see something like:

```
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      NV2     0-23            N/A
GPU1    NV2      X      0-23            N/A
```

On a different machine without NVLink, we may see:

```
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      PHB     0-11            N/A
GPU1    PHB     X       0-11            N/A
```

The report includes this legend:

```
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
```

So the first report, `NV2`, tells us the GPUs are interconnected with 2 NVLinks, while the second report, `PHB`, shows a typical consumer-level PCIe+Bridge setup.

Check what type of connectivity you have on your setup. Some of these will make the communication between cards faster (e.g. NVLink), others slower (e.g. PHB).

Depending on the type of scalability solution used, the connectivity speed could have a major or a minor impact. If the GPUs rarely need to sync, as in DDP, the impact of a slower connection will be less significant. If the GPUs need to send messages to each other often, as in ZeRO-DP, then faster connectivity becomes very important to achieve faster training.

#### NVlink

[NVLink](https://en.wikipedia.org/wiki/NVLink) is a wire-based serial multi-lane near-range communications link developed by Nvidia.

Each new generation provides faster bandwidth; for example, here is a quote from the [Nvidia Ampere GA102 GPU Architecture](https://www.nvidia.com/content/dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf) whitepaper:

> Third-Generation NVLink®
> GA102 GPUs utilize NVIDIA's third-generation NVLink interface, which includes four x4 links,
> with each link providing 14.0625 GB/sec bandwidth in each direction between two GPUs. Four
> links provide 56.25 GB/sec bandwidth in each direction, and 112.5 GB/sec total bandwidth
> between two GPUs. Two RTX 3090 GPUs can be connected together for SLI using NVLink.
> (Note that 3-Way and 4-Way SLI configurations are not supported.)

So the higher the `X` value in the `NVX` report in the `nvidia-smi topo -m` output, the better. The generation will depend on your GPU architecture.

Let's compare the execution of training a gpt2 language model over a small sample of wikitext.

The results are:

| NVlink | Time |
| ------ | ---: |
| Y      | 101s |
| N      | 131s |

You can see that NVLink completes the training about 23% faster. In the second benchmark, we use `NCCL_P2P_DISABLE=1` to tell the GPUs not to use NVLink.

Here is the full benchmark code and outputs:

```bash
# DDP w/ NVLink

rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 101.9003, 'train_samples_per_second': 1.963, 'epoch': 0.69}

# DDP w/o NVLink

rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 NCCL_P2P_DISABLE=1 torchrun \
--nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py --model_name_or_path openai-community/gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train \
--output_dir /tmp/test-clm --per_device_train_batch_size 4 --max_steps 200

{'train_runtime': 131.4367, 'train_samples_per_second': 1.522, 'epoch': 0.69}
```

Hardware: 2x TITAN RTX 24GB each + NVlink with 2 NVLinks (`NV2` in `nvidia-smi topo -m`)
Software: `pytorch-1.8-to-be` + `cuda-11.0` / `transformers==4.3.0.dev0`
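If you want to check connectivity programmatically, e.g. in a launcher script, the `nvidia-smi topo -m` matrix is easy to post-process. Below is a minimal sketch in plain Python; the `link_type` helper is a name made up here, and the sample string is the dual-GPU NVLink output shown earlier:

```python
# Sample `nvidia-smi topo -m` output for a dual-GPU NVLink machine.
sample = """\
        GPU0    GPU1    CPU Affinity    NUMA Affinity
GPU0     X      NV2     0-23            N/A
GPU1    NV2      X      0-23            N/A
"""

def link_type(topo: str, a: str = "GPU0", b: str = "GPU1") -> str:
    """Return the interconnect code between GPUs `a` and `b` from topo-matrix text."""
    lines = topo.strip().splitlines()
    header = lines[0].split()
    col = header.index(b)          # column position of GPU b in the matrix
    for row in lines[1:]:
        cells = row.split()
        if cells[0] == a:
            return cells[1 + col]  # skip the row-label cell
    raise ValueError(f"{a} not found in topology matrix")

print(link_type(sample))  # NV2 -> a bonded set of 2 NVLinks
```

In a real setup you would feed it the output of `subprocess.run(["nvidia-smi", "topo", "-m"], ...)` instead of a hardcoded string.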
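As a quick sanity check on the headline number, the roughly 23% figure can be reproduced from the two `train_runtime` values reported above:

```python
# train_runtime values from the two benchmark runs above (seconds).
with_nvlink = 101.9003
without_nvlink = 131.4367

# Fraction of training time saved by NVLink:
# (131.4367 - 101.9003) / 131.4367 ~= 0.225, i.e. roughly 23% faster.
saved = (without_nvlink - with_nvlink) / without_nvlink
print(f"NVLink saves {saved:.1%} of the training time")
```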