<!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{"title":"Bagaimana Transformer Bekerja?","local":"how-do-transformers-work","sections":[{"title":"Sekilas Sejarah Transformer","local":"a-bit-of-transformer-history","sections":[],"depth":2},{"title":"Transformer Adalah Model Bahasa","local":"transformers-are-language-models","sections":[],"depth":2},{"title":"Transformer Adalah Model Besar","local":"transformers-are-big-models","sections":[],"depth":2},{"title":"Transfer Learning","local":"transfer-learning","sections":[],"depth":2},{"title":"Arsitektur Umum Transformer","local":"general-transformer-architecture","sections":[],"depth":2},{"title":"Lapisan Attention","local":"attention-layers","sections":[],"depth":2},{"title":"Arsitektur Asli","local":"the-original-architecture","sections":[],"depth":2},{"title":"Arsitektur vs. Checkpoint","local":"architecture-vs-checkpoints","sections":[],"depth":2}],"depth":1}"><!-- HEAD_svelte-u9bgzb_END --> <h1 class="relative group"><a id="how-do-transformers-work" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#how-do-transformers-work"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 
56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Bagaimana Transformer Bekerja?</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"><a href="https://discuss.huggingface.co/t/chapter-1-questions" target="_blank"><img alt="Ask a Question" class="!m-0" src="https://img.shields.io/badge/Ask%20a%20question-ffcb4c.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgLTEgMTA0IDEwNiI+PGRlZnM+PHN0eWxlPi5jbHMtMXtmaWxsOiMyMzFmMjA7fS5jbHMtMntmaWxsOiNmZmY5YWU7fS5jbHMtM3tmaWxsOiMwMGFlZWY7fS5jbHMtNHtmaWxsOiMwMGE5NGY7fS5jbHMtNXtmaWxsOiNmMTVkMjI7fS5jbHMtNntmaWxsOiNlMzFiMjM7fTwvc3R5bGU+PC9kZWZzPjx0aXRsZT5EaXNjb3Vyc2VfbG9nbzwvdGl0bGU+PGcgaWQ9IkxheWVyXzIiPjxnIGlkPSJMYXllcl8zIj48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik01MS44NywwQzIzLjcxLDAsMCwyMi44MywwLDUxYzAsLjkxLDAsNTIuODEsMCw1Mi44MWw1MS44Ni0uMDVjMjguMTYsMCw1MS0yMy43MSw1MS01MS44N1M4MCwwLDUxLjg3LDBaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNNTIuMzcsMTkuNzRBMzEuNjIsMzEuNjIsMCwwLDAsMjQuNTgsNjYuNDFsLTUuNzIsMTguNEwzOS40LDgwLjE3YTMxLjYxLDMxLjYxLDAsMSwwLDEzLTYwLjQzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTc3LjQ1LDMyLjEyYTMxLjYsMzEuNiwwLDAsMS0zOC4wNSw0OEwxOC44Niw4NC44MmwyMC45MS0yLjQ3QTMxLjYsMzEuNiwwLDAsMCw3Ny40NSwzMi4xMloiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik03MS42MywyNi4yOUEzMS42LDMxLjYsMCwwLDEsMzguOCw3OEwxOC44Niw4NC44MiwzOS40LDgwLjE3QTMxLjYsMzEuNiwwLDAsMCw3MS42MywyNi4yOVoiLz48cGF0aCBjbGFzcz0iY2xzLTUiIGQ9Ik0yNi40Nyw2Ny4xMWEzMS42MSwzMS42MSwwLDAsMSw1MS0zNUEzMS42MSwzMS42MSwwLDAsMCwyNC41OCw2Ni40MWwtNS43MiwxOC40WiIvPjxwYXRoIGNsYXNzPSJjbHMtNiIgZD0iTTI0LjU4LDY2LjQxQTMxLjYxLDMxLjYxLDAsMCwxLDcxLjYzLDI2LjI5YTMxLjYxLDMxLjYxLDAsMCwwLTQ5LDM5LjYzbC0zLjc2LDE4LjlaIi8+PC9nPjwvZz48L3N2Zz4="></a> </div> <p data-svelte-h="svelte-txapp4">Di bagian ini, kita akan mempelajari arsitektur model Transformer dan memahami lebih dalam konsep seperti <em>attention</em>, arsitektur encoder-decoder, dan lainnya.</p> <div 
class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-1aisgur">🚀 Kita mulai naik tingkat di sini. Bagian ini bersifat teknis dan detail, jadi jangan khawatir jika Anda tidak langsung memahaminya sepenuhnya. Kita akan mengulas konsep-konsep ini kembali di bab-bab berikutnya.</p></div> <h2 class="relative group"><a id="a-bit-of-transformer-history" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#a-bit-of-transformer-history"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Sekilas Sejarah Transformer</span></h2> <p data-svelte-h="svelte-mthvjl">Berikut beberapa titik penting dalam (sejarah singkat) model Transformer:</p> <div class="flex justify-center" data-svelte-h="svelte-1nznt49"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_chrono.svg" alt="Kronologi singkat model Transformer."> <img class="hidden 
dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_chrono-dark.svg" alt="Kronologi singkat model Transformer."></div> <p data-svelte-h="svelte-xk9xsn">Arsitektur <a href="https://arxiv.org/abs/1706.03762" rel="nofollow">Transformer</a> diperkenalkan pada Juni 2017, dengan fokus awal pada tugas penerjemahan. Setelah itu, muncul berbagai model penting lainnya, seperti:</p> <ul data-svelte-h="svelte-12w01s2"><li><strong>Juni 2018</strong>: <a href="https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf" rel="nofollow">GPT</a>, model Transformer pra-latih pertama, digunakan untuk <em>fine-tuning</em> berbagai tugas NLP dan mencapai hasil terbaik saat itu</li> <li><strong>Oktober 2018</strong>: <a href="https://arxiv.org/abs/1810.04805" rel="nofollow">BERT</a>, model pra-latih besar lainnya yang dirancang untuk menghasilkan representasi kalimat yang lebih baik</li> <li><strong>Februari 2019</strong>: <a href="https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf" rel="nofollow">GPT-2</a>, versi lebih besar dan lebih baik dari GPT, yang sempat tidak dirilis sepenuhnya karena pertimbangan etis</li> <li><strong>Oktober 2019</strong>: <a href="https://huggingface.co/papers/1910.10683" rel="nofollow">T5</a>, implementasi Transformer sequence-to-sequence untuk multi-tasking</li> <li><strong>Mei 2020</strong>: <a href="https://huggingface.co/papers/2005.14165" rel="nofollow">GPT-3</a>, versi lebih besar dari GPT-2 yang mampu menyelesaikan berbagai tugas tanpa <em>fine-tuning</em> (<em>zero-shot learning</em>)</li> <li><strong>Januari 2022</strong>: <a href="https://huggingface.co/papers/2203.02155" rel="nofollow">InstructGPT</a>, versi GPT-3 yang dilatih agar lebih mampu mengikuti instruksi</li> <li><strong>Februari 2023</strong>: <a href="https://huggingface.co/papers/2302.13971" rel="nofollow">Llama</a>, model bahasa besar yang mampu menghasilkan teks dalam berbagai bahasa</li> <li><strong>September 2023</strong>: <a href="https://huggingface.co/papers/2310.06825" rel="nofollow">Mistral</a>, model 7 miliar parameter yang melampaui Llama 2 13B di semua tolok ukur</li> <li><strong>Juni 2024</strong>: <a href="https://huggingface.co/papers/2408.00118" rel="nofollow">Gemma 2</a>, keluarga model ringan berkinerja tinggi dengan teknik attention baru</li> <li><strong>November 2024</strong>: <a href="https://huggingface.co/papers/2502.02737" rel="nofollow">SmolLM2</a>, model kecil dengan performa tinggi untuk perangkat edge/mobile</li></ul> <p>Daftar ini masih jauh dari lengkap dan hanya menyoroti beberapa jenis model Transformer. Secara garis besar, model-model tersebut dapat dikelompokkan ke dalam tiga kategori:</p> <ul><li>Model tipe GPT (disebut juga model Transformer <em>auto-regressive</em>)</li> <li>Model tipe BERT (disebut juga model Transformer <em>auto-encoding</em>)</li> <li>Model tipe T5 (disebut juga model Transformer <em>sequence-to-sequence</em>)</li></ul> <p data-svelte-h="svelte-9sp20d">Kita akan membahas tiap keluarga ini lebih dalam nanti.</p> <h2 class="relative group"><a id="transformers-are-language-models" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers-are-language-models"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" 
fill="currentColor"></path></svg></span></a> <span>Transformer Adalah Model Bahasa</span></h2> <p data-svelte-h="svelte-1fnajnm">Semua model Transformer yang disebutkan sebelumnya (GPT, BERT, T5, dll.) dilatih sebagai <em>model bahasa</em>. Artinya, mereka dilatih menggunakan sejumlah besar teks mentah dengan pendekatan <em>self-supervised</em>.</p> <p data-svelte-h="svelte-fgb1ly"><em>Self-supervised learning</em> adalah metode pelatihan di mana tujuan pelatihan diturunkan langsung dari data masukan. Artinya, manusia tidak perlu memberikan label secara manual!</p> <p data-svelte-h="svelte-1vxl1b4">Model ini mengembangkan pemahaman statistik terhadap bahasa yang dilatihkan, tetapi kurang bermanfaat untuk tugas praktis tertentu. Oleh karena itu, model pra-latih umum kemudian melalui proses yang disebut <em>transfer learning</em> atau <em>fine-tuning</em>, yaitu pelatihan tambahan secara <em>supervised</em> menggunakan label yang diberikan manusia untuk tugas tertentu.</p> <p data-svelte-h="svelte-iu4nv4">Contoh tugasnya adalah memprediksi kata berikutnya dalam kalimat setelah membaca kata-kata sebelumnya. 
Ini disebut <em>causal language modeling</em>, karena output tergantung pada input masa lalu dan saat ini — tetapi tidak pada masa depan.</p> <div class="flex justify-center" data-svelte-h="svelte-f0jprd"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/causal_modeling.svg" alt="Contoh causal language modeling."> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/causal_modeling-dark.svg" alt="Contoh causal language modeling."></div> <p data-svelte-h="svelte-cjtj6p">Contoh lain adalah <em>masked language modeling</em>, di mana model diminta memprediksi kata yang di-<em>masking</em> dalam sebuah kalimat.</p> <div class="flex justify-center" data-svelte-h="svelte-pgr3nd"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/masked_modeling.svg" alt="Contoh masked language modeling."> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/masked_modeling-dark.svg" alt="Contoh masked language modeling."></div> <h2 class="relative group"><a id="transformers-are-big-models" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transformers-are-big-models"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 
79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Transformer Adalah Model Besar</span></h2> <p data-svelte-h="svelte-6xegnw">Dengan beberapa pengecualian (seperti DistilBERT), strategi umum untuk meningkatkan performa model adalah dengan <strong>memperbesar ukuran model</strong> dan <strong>jumlah data pelatihan</strong>.</p> <div class="flex justify-center" data-svelte-h="svelte-16it7ng"><img src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/model_parameters.png" alt="Jumlah parameter model Transformer modern" width="90%"></div> <p data-svelte-h="svelte-1f64lmp">Sayangnya, melatih model besar memerlukan banyak data, waktu, dan sumber daya komputasi. Ini juga berdampak pada lingkungan, seperti ditunjukkan grafik berikut:</p> <div class="flex justify-center" data-svelte-h="svelte-131xl47"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/carbon_footprint.svg" alt="Jejak karbon model bahasa besar."> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/carbon_footprint-dark.svg" alt="Jejak karbon model bahasa besar."></div> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/ftWlj4FBHTg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <p data-svelte-h="svelte-6je4rs">Bahkan grafik di atas berasal dari proyek yang secara sadar berupaya <strong>mengurangi dampak lingkungan</strong>. 
Bayangkan jika setiap tim peneliti, mahasiswa, atau perusahaan melatih model dari nol — biayanya akan sangat besar!</p> <p data-svelte-h="svelte-yut1s3">Inilah mengapa <strong>berbagi model sangat penting</strong>: dengan berbagi <em>trained weights</em>, kita mengurangi beban komputasi dan jejak karbon secara global.</p> <p data-svelte-h="svelte-1hduia7">Anda bisa menghitung jejak karbon model Anda menggunakan alat seperti <a href="https://mlco2.github.io/impact/" rel="nofollow">ML CO2 Impact</a> atau <a href="https://codecarbon.io/" rel="nofollow">Code Carbon</a>, yang sudah terintegrasi dalam 🤗 Transformers. Baca <a href="https://huggingface.co/blog/carbon-emissions-on-the-hub" rel="nofollow">blog ini</a> dan <a href="https://huggingface.co/docs/hub/model-cards-co2" rel="nofollow">dokumentasi resmi</a> untuk mempelajari lebih lanjut.</p> <h2 class="relative group"><a id="transfer-learning" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#transfer-learning"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Transfer Learning</span></h2> <iframe class="w-full xl:w-4/6 h-80" 
src="https://www.youtube-nocookie.com/embed/BqqfQnyjmgg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <p data-svelte-h="svelte-ck36x4"><em>Pretraining</em> adalah proses melatih model dari nol — bobot awalnya diinisialisasi secara acak dan tidak memiliki pengetahuan awal.</p> <div class="flex justify-center" data-svelte-h="svelte-zgaufz"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/pretraining.svg" alt="Pretraining membutuhkan waktu dan biaya besar."> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/pretraining-dark.svg" alt="Pretraining membutuhkan waktu dan biaya besar."></div> <p data-svelte-h="svelte-16sq3jy">Sebaliknya, <em>fine-tuning</em> dilakukan setelah model sudah dilatih. Kita mengambil model pra-latih dan melatih ulang dengan dataset spesifik untuk tugas kita. 
Kenapa tidak langsung latih model dari awal untuk tugas tersebut?</p> <ul data-svelte-h="svelte-hnnkyc"><li>Model pra-latih sudah memiliki <em>pengetahuan umum</em> dari data besar sebelumnya.</li> <li><em>Fine-tuning</em> hanya membutuhkan data dalam jumlah kecil.</li> <li>Waktu dan sumber daya yang dibutuhkan jauh lebih sedikit.</li> <li>Hasil yang diperoleh umumnya lebih baik dibanding melatih dari nol.</li></ul> <p data-svelte-h="svelte-1a5f89g">Contohnya, Anda bisa menggunakan model bahasa Inggris umum lalu <em>fine-tune</em> dengan korpus artikel arXiv untuk membuat model bahasa ilmiah.</p> <div class="flex justify-center" data-svelte-h="svelte-z7x83j"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/finetuning.svg" alt="Fine-tuning jauh lebih murah daripada pretraining."> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/finetuning-dark.svg" alt="Fine-tuning jauh lebih murah daripada pretraining."></div> <h2 class="relative group"><a id="general-transformer-architecture" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#general-transformer-architecture"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 
56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Arsitektur Umum Transformer</span></h2> <p data-svelte-h="svelte-1ke8hhq">Di bagian ini, kita membahas arsitektur umum model Transformer. Jangan khawatir jika belum paham sepenuhnya — kita akan bahas komponen-komponennya secara terpisah nanti.</p> <iframe class="w-full xl:w-4/6 h-80" src="https://www.youtube-nocookie.com/embed/H39Z_720T5s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> <p data-svelte-h="svelte-16sewm4">Model ini terdiri dari dua bagian utama:</p> <ul data-svelte-h="svelte-1uhrz9f"><li><strong>Encoder (kiri)</strong>: Menerima masukan dan menghasilkan representasi fitur</li> <li><strong>Decoder (kanan)</strong>: Menggunakan representasi dari encoder untuk menghasilkan output</li></ul> <div class="flex justify-center" data-svelte-h="svelte-1kvjzb1"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_blocks.svg" alt="Arsitektur umum Transformer."> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers_blocks-dark.svg" alt="Arsitektur umum Transformer."></div> <p data-svelte-h="svelte-7p6whb">Penggunaan tergantung pada tugasnya:</p> <ul data-svelte-h="svelte-fuprqq"><li><strong>Model encoder-only</strong>: Cocok untuk klasifikasi kalimat, NER</li> <li><strong>Model decoder-only</strong>: Cocok untuk generasi teks</li> <li><strong>Model encoder-decoder</strong> (sequence-to-sequence): Cocok untuk penerjemahan, rangkuman</li></ul> <h2 class="relative group"><a id="attention-layers" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 
with-hover:group-hover:opacity-100 with-hover:right-full" href="#attention-layers"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Lapisan Attention</span></h2> <p data-svelte-h="svelte-1s1s4bc">Fitur utama dari Transformer adalah <strong>lapisan attention (attention layers)</strong>. Bahkan, judul makalah aslinya adalah <a href="https://arxiv.org/abs/1706.03762" rel="nofollow">“Attention Is All You Need”</a>!</p> <p data-svelte-h="svelte-fe44vc">Singkatnya, lapisan ini memungkinkan model fokus hanya pada kata-kata yang relevan saat membentuk representasi kata.</p> <p data-svelte-h="svelte-da9oo9">Contoh: saat menerjemahkan “You like this course” ke dalam bahasa Prancis, model harus memperhatikan subjek “You” saat menerjemahkan “like”, karena bentuk kata kerja tergantung subjek. 
Saat menerjemahkan “this”, model juga harus memperhatikan kata benda “course”, karena jenis kelamin kata dalam bahasa Prancis memengaruhi terjemahan.</p> <p data-svelte-h="svelte-1717rkn">Intinya: makna kata tergantung konteks — dan attention memungkinkan model memanfaatkan konteks tersebut.</p> <h2 class="relative group"><a id="the-original-architecture" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#the-original-architecture"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Arsitektur Asli</span></h2> <p data-svelte-h="svelte-1rq8gk">Arsitektur awal Transformer dirancang untuk penerjemahan.</p> <ul data-svelte-h="svelte-gagjso"><li><strong>Encoder</strong> menerima kalimat sumber (misalnya, bahasa Inggris)</li> <li><strong>Decoder</strong> menghasilkan kalimat target (misalnya, bahasa Prancis)</li></ul> <p data-svelte-h="svelte-1xvfl9w">Selama pelatihan, decoder diberi seluruh kalimat target, tapi <em>dilarang melihat kata-kata masa depan</em>. 
Contohnya: saat memprediksi kata ke-4, decoder hanya melihat kata ke-1 hingga ke-3.</p> <div class="flex justify-center" data-svelte-h="svelte-1oi0l43"><img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers.svg" alt="Arsitektur lengkap Transformer."> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface-course/documentation-images/resolve/main/en/chapter1/transformers-dark.svg" alt="Arsitektur lengkap Transformer."></div> <p data-svelte-h="svelte-1v2nouw">Lapisan attention pertama di decoder melihat semua input sebelumnya, sedangkan attention kedua menggunakan output dari encoder.</p> <p data-svelte-h="svelte-93on8z">Masking juga digunakan untuk menghindari perhatian ke token spesial seperti padding.</p> <h2 class="relative group"><a id="architecture-vs-checkpoints" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#architecture-vs-checkpoints"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Arsitektur vs. 
Checkpoint</span></h2> <p data-svelte-h="svelte-esomq5">Selama mempelajari Transformer, Anda akan menemukan istilah:</p> <ul data-svelte-h="svelte-7l0jgq"><li><strong>Architecture</strong>: Kerangka desain model, yaitu struktur lapisan dan operasinya</li> <li><strong>Checkpoint</strong>: Bobot (<em>weights</em>) hasil pelatihan untuk arsitektur tertentu</li> <li><strong>Model</strong>: Istilah umum yang bisa merujuk ke architecture maupun checkpoint</li></ul> <p data-svelte-h="svelte-nt0g4f">Contoh:</p> <ul data-svelte-h="svelte-1ekxzcv"><li><code>BERT</code> → adalah arsitektur</li> <li><code>bert-base-cased</code> → adalah checkpoint hasil pelatihan oleh Google</li> <li>“Model BERT” → bisa merujuk ke keduanya, tergantung konteks</li></ul>
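<p>Intuisi pada bagian Transfer Learning di atas (fine-tuning dari bobot pra-latih membutuhkan jauh lebih sedikit pelatihan daripada melatih dari nol) dapat diperagakan dengan contoh mainan berikut: regresi satu parameter yang dilatih dengan gradient descent, sekali dari inisialisasi nol dan sekali dari bobot "pra-latih" yang sudah dekat dengan solusi. Semua nama fungsi dan angka di sini murni ilustrasi, bukan kode dari kursus atau pustaka mana pun.</p>

```python
def latih(w, data, lr=0.01, target_loss=1e-4, maks_langkah=10_000):
    """Gradient descent untuk model mainan y = w * x; kembalikan (w, jumlah langkah)."""
    for langkah in range(maks_langkah):
        # Mean squared error pada data tugas.
        loss = sum((w * x - y) ** 2 for x, y in data) / len(data)
        if loss < target_loss:
            return w, langkah
        # Gradien loss terhadap w, lalu satu langkah penurunan gradien.
        grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
        w -= lr * grad
    return w, maks_langkah

data_tugas = [(1.0, 2.1), (2.0, 4.2), (3.0, 6.3)]  # tugas "hilir": y ≈ 2.1x

w_nol, langkah_nol = latih(0.0, data_tugas)  # "dari nol": mulai dari w = 0
w_ft, langkah_ft = latih(2.0, data_tugas)    # "fine-tuning": mulai dari bobot pra-latih w = 2.0

# Inisialisasi dari bobot pra-latih yang sudah dekat dengan solusi
# mencapai loss target dengan lebih sedikit langkah pelatihan.
```

<p>Tentu ini penyederhanaan ekstrem; pada model nyata, keunggulan pra-latih terutama datang dari "pengetahuan umum" dalam bobotnya, bukan sekadar titik awal yang dekat secara numerik.</p>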
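<p>Lapisan attention yang dibahas di atas pada intinya menghitung rata-rata tertimbang: bobot setiap kata diperoleh dari softmax atas skor kemiripan antara query dan key. Berikut sketsa minimal <em>scaled dot-product attention</em> dalam Python murni; semua nama fungsi dan data di bawah ini hanyalah ilustrasi, bukan kode 🤗 Transformers.</p>

```python
import math

def softmax(xs):
    # Softmax yang stabil secara numerik: kurangi nilai maksimum dulu.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    keluaran = []
    for q in Q:
        # Skor kemiripan antara satu query dan semua key, diskalakan sqrt(d_k).
        skor = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        bobot = softmax(skor)  # bobot attention; jumlahnya selalu 1
        # Output = rata-rata tertimbang dari vektor value.
        keluaran.append([sum(w * v[j] for w, v in zip(bobot, V))
                         for j in range(len(V[0]))])
    return keluaran

# Tiga "kata" dengan embedding 2 dimensi (data mainan).
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
hasil = attention(Q, K, V)
```

<p>Transformer sungguhan memakai banyak <em>head</em> attention sekaligus serta proyeksi linear untuk Q, K, dan V; sketsa ini hanya menunjukkan operasi intinya.</p>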
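<p>Aturan pada bagian Arsitektur Asli di atas ("saat memprediksi kata ke-4, decoder hanya melihat kata ke-1 hingga ke-3") dan masking token padding dapat disketsakan sebagai mask boolean sederhana. Kedua fungsi berikut hanyalah ilustrasi, bukan API pustaka tertentu.</p>

```python
def causal_mask(n):
    # mask[i][j] == True artinya posisi i boleh memperhatikan posisi j.
    # Setiap posisi hanya melihat dirinya sendiri dan posisi-posisi sebelumnya.
    return [[j <= i for j in range(n)] for i in range(n)]

def padding_mask(tokens, pad="<pad>"):
    # True untuk token sungguhan, False untuk token padding,
    # agar model tidak memperhatikan padding.
    return [t != pad for t in tokens]

# Representasi pada posisi ke-3 (indeks 2), yang dipakai untuk memprediksi
# kata ke-4, hanya melihat kata ke-1 sampai ke-3.
mask = causal_mask(4)
print(mask[2])  # [True, True, True, False]
```

<p>Dalam praktiknya, mask seperti ini diterapkan pada skor attention (misalnya dengan mengganti posisi terlarang menjadi minus tak hingga sebelum softmax), tetapi bentuk logisnya sama dengan tabel boolean di atas.</p>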
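<p>Perbedaan architecture vs. checkpoint di atas dapat dianalogikan dengan kode Python sederhana berikut. Kelas dan angka di sini murni ilustrasi, bukan API 🤗 Transformers; di pustaka aslinya pola yang sama muncul misalnya pada <code>BertModel.from_pretrained("bert-base-cased")</code>, dengan kelas <code>BertModel</code> sebagai arsitektur dan <code>bert-base-cased</code> sebagai checkpoint.</p>

```python
class ModelLinear:
    """'Arsitektur': mendefinisikan struktur dan operasi, tanpa nilai bobot tertentu."""

    def __init__(self, bobot):
        self.bobot = bobot  # bobot dimuat dari checkpoint

    def prediksi(self, x):
        return self.bobot["w"] * x + self.bobot["b"]

# 'Checkpoint': nilai bobot hasil pelatihan, dibagikan terpisah dari kodenya.
checkpoint = {"w": 2.0, "b": 1.0}

# 'Model': arsitektur yang dimuati sebuah checkpoint, siap dipakai.
model = ModelLinear(checkpoint)
print(model.prediksi(3.0))  # 7.0
```

<p>Arsitektur yang sama bisa dimuati checkpoint berbeda (hasil pelatihan pada data berbeda), persis seperti <code>BERT</code> yang memiliki banyak checkpoint selain <code>bert-base-cased</code>.</p>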