<meta charset="utf-8" /><meta name="hf:doc:metadata" content="{&quot;title&quot;:&quot;PyTorch training on Apple silicon&quot;,&quot;local&quot;:&quot;PyTorch training on Apple silicon&quot;,&quot;sections&quot;:[],&quot;depth&quot;:1}">
<link href="/docs/transformers/pr_33913/ko/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/entry/start.2d0ec41c.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/scheduler.bdbef820.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/singletons.665fa29a.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/index.8a885b74.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/paths.47fdf6cc.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/entry/app.e5c46936.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/index.33f81d56.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/nodes/0.7e28132d.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/each.e59479a4.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/nodes/109.88dd9b77.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/Tip.34194030.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/CodeBlock.362b34a4.js">
<link rel="modulepreload" href="/docs/transformers/pr_33913/ko/_app/immutable/chunks/EditOnGithub.a9246e21.js"><!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{&quot;title&quot;:&quot;PyTorch training on Apple silicon&quot;,&quot;local&quot;:&quot;PyTorch training on Apple silicon&quot;,&quot;sections&quot;:[],&quot;depth&quot;:1}"><!-- HEAD_svelte-u9bgzb_END --> <p></p> <h1 class="relative group"><a id="PyTorch training on Apple silicon" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#PyTorch training on Apple silicon"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>PyTorch training on Apple silicon</span></h1> <p data-svelte-h="svelte-15xgyng">Previously, training models on a Mac was limited to the CPU only. With the release of PyTorch v1.12, however, you can train models on Apple silicon GPUs for significantly faster performance. This is made possible by integrating Apple's Metal Performance Shaders (MPS) as a backend in PyTorch.
The <a href="https://pytorch.org/docs/stable/notes/mps.html" rel="nofollow">MPS backend</a> implements PyTorch operations as Metal shaders and supports running these modules on the <code>mps</code> device.</p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-p10qai">Some PyTorch operations are not yet supported on MPS and can raise errors. To avoid this, set the environment variable <code>PYTORCH_ENABLE_MPS_FALLBACK=1</code> to fall back to CPU kernels instead (a <code>UserWarning</code> may still be shown).</p> <br> <p data-svelte-h="svelte-i9xmmw">If you run into any other errors, please open an issue in the <a href="https://github.com/pytorch/pytorch/issues" rel="nofollow">PyTorch</a> repository. Currently, <a href="/docs/transformers/pr_33913/ko/main_classes/trainer#transformers.Trainer">Trainer</a> only integrates the MPS backend.</p></div> <p data-svelte-h="svelte-94e36i">The <code>mps</code> device offers the following benefits:</p> <ul data-svelte-h="svelte-12prpyz"><li>Train larger networks or batch sizes locally</li> <li>Reduced data-loading latency because the GPU's unified memory architecture allows direct access to memory</li> <li>Reduced costs because you don't need cloud-based GPUs or additional GPUs</li></ul> <p data-svelte-h="svelte-1il699x">Make sure you have PyTorch installed before you start.
MPS acceleration is supported on macOS 12.3+.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pip install torch torchvision torchaudio<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1qwim8p"><a href="/docs/transformers/pr_33913/ko/main_classes/trainer#transformers.TrainingArguments">TrainingArguments</a> uses the <code>mps</code> device by default if it is available, so you don't need to set the device yourself.
For example, you can run the <a href="https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py" rel="nofollow">run_glue.py</a> script with the MPS backend automatically enabled, without making any changes.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->export TASK_NAME=mrpc
python examples/pytorch/text-classification/run_glue.py \
--model_name_or_path google-bert/bert-base-cased \
--task_name $TASK_NAME \
<span class="hljs-deletion">- --use_mps_device \</span>
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir /tmp/$TASK_NAME/ \
--overwrite_output_dir<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-jq8gcz"><a href="https://pytorch.org/docs/stable/distributed.html#backends" rel="nofollow">Distributed backends</a> such as <code>gloo</code> and <code>nccl</code> are not supported by the <code>mps</code> device, so you can only train on a single GPU with the MPS backend.</p> <p data-svelte-h="svelte-1g0gj3x">You can learn more about accelerated PyTorch training on Mac in the <a href="https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/" rel="nofollow">Introducing Accelerated PyTorch Training on Mac</a> blog post.</p> <a class="!text-gray-400 !no-underline text-sm flex items-center not-prose mt-4" href="https://github.com/huggingface/transformers/blob/main/docs/source/ko/perf_train_special.md" target="_blank"><span data-svelte-h="svelte-1kd6by1">&lt;</span> <span data-svelte-h="svelte-x0xyl0">&gt;</span> <span data-svelte-h="svelte-1dajgef"><span class="underline ml-1.5">Update</span> on GitHub</span></a> <p></p>
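<p>Before launching a full training run, it can help to confirm that your PyTorch build actually exposes the MPS backend and that tensors can be placed on it. The sketch below is illustrative, not part of Transformers: the <code>pick_device</code> helper is a hypothetical name, and it falls back to CUDA or CPU so the same snippet runs on any machine.</p>

```python
import torch

def pick_device() -> torch.device:
    """Prefer Apple's MPS backend when present, else CUDA, else CPU."""
    # torch.backends.mps exists in PyTorch >= 1.12; guard for older builds.
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
# Allocate a tensor directly on the selected device and run a trivial op.
x = torch.ones(2, 3, device=device)
print(f"using {device.type}, tensor sum = {x.sum().item()}")
```

<p>On an Apple silicon Mac with macOS 12.3+ and a recent PyTorch, this prints <code>using mps</code>; elsewhere it degrades gracefully.</p>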
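<p>The CPU-fallback behavior mentioned in the tip earlier is controlled entirely from the environment. A minimal sketch, assuming you export the variable in the same shell session before starting any Python process so the interpreter sees it at start-up:</p>

```shell
# Ask PyTorch to fall back to CPU kernels for any operation the MPS
# backend does not implement yet (a UserWarning may still be emitted).
export PYTORCH_ENABLE_MPS_FALLBACK=1
echo "PYTORCH_ENABLE_MPS_FALLBACK=$PYTORCH_ENABLE_MPS_FALLBACK"
```

<p>Any training command launched afterwards, such as the <code>run_glue.py</code> invocation above, inherits this setting.</p>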
<script>
{
__sveltekit_1wfirp1 = {
assets: "/docs/transformers/pr_33913/ko",
base: "/docs/transformers/pr_33913/ko",
env: {}
};
const element = document.currentScript.parentElement;
const data = [null,null];
Promise.all([
import("/docs/transformers/pr_33913/ko/_app/immutable/entry/start.2d0ec41c.js"),
import("/docs/transformers/pr_33913/ko/_app/immutable/entry/app.e5c46936.js")
]).then(([kit, app]) => {
kit.start(app, element, {
node_ids: [0, 109],
data,
form: null,
error: null
});
});
}
</script>
