Upload 8 files

- Alpaca_+_Codellama_34b_full_example.ipynb +22 -18
- Alpaca_+_Gemma_7b_full_example.ipynb +0 -0
- Alpaca_+_Llama_7b_full_example.ipynb +9 -3
- Alpaca_+_Mistral_7b_full_example.ipynb +9 -3
- Alpaca_+_TinyLlama_+_RoPE_Scaling_full_example.ipynb +9 -3
- ChatML_+_chat_templates_+_Mistral_7b_full_example.ipynb +9 -3
- DPO_Zephyr_Unsloth_Example.ipynb +8 -4
- Mistral_7b_Text_Completion_Raw_Text_training_full_example.ipynb +9 -3
Alpaca_+_Codellama_34b_full_example.ipynb
CHANGED

@@ -3,13 +3,11 @@
  {
  "cell_type": "markdown",
  "source": [
- "To run this, press \"Runtime
- "\n",
- "**[NOTE]** You might be lucky if an A100 is free!. If not, try our Mistral 7b notebook on a free Tesla T4 [here](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing).\n",
+ "To run this, press \"*Runtime*\" and press \"*Run all*\" on a **free** Tesla T4 Google Colab instance!\n",
  "<div class=\"align-center\">\n",
- " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"
- " <a href=\"https://discord.gg/u54VK8m8tk\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord.png\" width=\"
- " <a href=\"https://
+ " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
+ " <a href=\"https://discord.gg/u54VK8m8tk\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord button.png\" width=\"145\"></a>\n",
+ " <a href=\"https://ko-fi.com/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Kofi button.png\" width=\"145\"></a></a> Join Discord if you need help + support us if you can!\n",
  "</div>\n",
  "\n",
  "To install Unsloth on your own computer, follow the installation instructions on our Github page [here](https://github.com/unslothai/unsloth#installation-instructions---conda).\n",

@@ -31,12 +29,14 @@
  "%%capture\n",
  "import torch\n",
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
+ "# Must install separately since Colab has torch 2.2.1, which breaks packages\n",
+ "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install
+ " !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
- " !pip install
+ " !pip install --no-deps xformers trl peft accelerate bitsandbytes\n",
  "pass"
  ]
  },

@@ -47,7 +47,8 @@
  "* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
  "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
  "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
- "*
+ "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n",
+ "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)"
  ],
  "metadata": {
  "id": "r2v_X2fA0Df5"

@@ -1502,17 +1503,20 @@
  "source": [
  "And we're done! If you have any questions on Unsloth, we have a [Discord](https://discord.gg/u54VK8m8tk) channel! If you find any bugs or want to keep updated with the latest LLM stuff, or need help, join projects etc, feel free to join our Discord!\n",
  "\n",
- "
- "1. Zephyr DPO [free Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing)\n",
- "2. Llama 7b [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
- "3. TinyLlama full Alpaca 52K in
- "4.
- "5.
+ "Some other links:\n",
+ "1. Zephyr DPO 2x faster [free Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing)\n",
+ "2. Llama 7b 2x faster [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
+ "3. TinyLlama 4x faster full Alpaca 52K in 1 hour [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
+ "4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
+ "5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
+ "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
+ "7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
+ "8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
  "\n",
  "<div class=\"align-center\">\n",
- " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"
- " <a href=\"https://discord.gg/u54VK8m8tk\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord.png\" width=\"
- " <a href=\"https://
+ " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
+ " <a href=\"https://discord.gg/u54VK8m8tk\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord.png\" width=\"145\"></a>\n",
+ " <a href=\"https://ko-fi.com/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Kofi button.png\" width=\"145\"></a></a> Support our work if you can! Thanks!\n",
  "</div>"
  ],
  "metadata": {
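The install cell these diffs converge on branches on the GPU's CUDA compute capability: Ampere-or-newer cards (major version >= 8) additionally get `flash-attn` and its build dependencies, while older cards skip it. A minimal sketch of that selection logic, factored into a pure helper so it can be checked without a GPU (the `pick_packages` name is ours, not from the notebooks):

```python
def pick_packages(major_version: int) -> list[str]:
    """Mirror the notebook's branch on torch.cuda.get_device_capability():
    compute capability >= 8 (RTX 30xx/40xx, A100, H100, L40) also gets
    flash-attn plus its build deps; older GPUs (V100, T4, RTX 20xx) skip it."""
    common = ["xformers", "trl", "peft", "accelerate", "bitsandbytes"]
    if major_version >= 8:
        return ["packaging", "ninja", "einops", "flash-attn"] + common
    return common

# An A100 reports capability (8, 0); a free-tier Tesla T4 reports (7, 5).
print(pick_packages(8))
print(pick_packages(7))
```

In the notebooks themselves the two branches are plain `!pip install --no-deps ...` shell lines; the helper above only illustrates why the branch exists.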
Alpaca_+_Gemma_7b_full_example.ipynb
ADDED

The diff for this file is too large to render. See raw diff.
Alpaca_+_Llama_7b_full_example.ipynb
CHANGED

@@ -29,12 +29,14 @@
  "%%capture\n",
  "import torch\n",
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
+ "# Must install separately since Colab has torch 2.2.1, which breaks packages\n",
+ "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install
+ " !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
- " !pip install
+ " !pip install --no-deps xformers trl peft accelerate bitsandbytes\n",
  "pass"
  ]
  },

@@ -45,7 +47,8 @@
  "* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
  "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
  "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
- "*
+ "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n",
+ "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)"
  ],
  "metadata": {
  "id": "r2v_X2fA0Df5"

@@ -283,6 +286,8 @@
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
  " \"unsloth/tinyllama-bnb-4bit\",\n",
+ " \"unsloth/gemma-7b-bnb-4bit\", # New Google 6 trillion tokens model 2.5x faster!\n",
+ " \"unsloth/gemma-2b-bnb-4bit\",\n",
  "] # More models at https://huggingface.co/unsloth\n",
  "\n",
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",

@@ -1348,6 +1353,7 @@
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
  "7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
  "8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
+ "9. Gemma 6 trillion tokens is 2.5x faster! [free Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)\n",
  "\n",
  "<div class=\"align-center\">\n",
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
Alpaca_+_Mistral_7b_full_example.ipynb
CHANGED

@@ -29,12 +29,14 @@
  "%%capture\n",
  "import torch\n",
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
+ "# Must install separately since Colab has torch 2.2.1, which breaks packages\n",
+ "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install
+ " !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
- " !pip install
+ " !pip install --no-deps xformers trl peft accelerate bitsandbytes\n",
  "pass"
  ]
  },

@@ -45,7 +47,8 @@
  "* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
  "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
  "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
- "*
+ "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n",
+ "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)"
  ],
  "metadata": {
  "id": "r2v_X2fA0Df5"

@@ -282,6 +285,8 @@
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
  " \"unsloth/tinyllama-bnb-4bit\",\n",
+ " \"unsloth/gemma-7b-bnb-4bit\", # New Google 6 trillion tokens model 2.5x faster!\n",
+ " \"unsloth/gemma-2b-bnb-4bit\",\n",
  "] # More models at https://huggingface.co/unsloth\n",
  "\n",
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",

@@ -1260,6 +1265,7 @@
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
  "7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
  "8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
+ "9. Gemma 6 trillion tokens is 2.5x faster! [free Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)\n",
  "\n",
  "<div class=\"align-center\">\n",
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
Alpaca_+_TinyLlama_+_RoPE_Scaling_full_example.ipynb
CHANGED

@@ -31,12 +31,14 @@
  "%%capture\n",
  "import torch\n",
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
+ "# Must install separately since Colab has torch 2.2.1, which breaks packages\n",
+ "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install
+ " !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
- " !pip install
+ " !pip install --no-deps xformers trl peft accelerate bitsandbytes\n",
  "pass"
  ]
  },

@@ -50,7 +52,8 @@
  "* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
  "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
  "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
- "*
+ "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n",
+ "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)"
  ]
  },
  {

@@ -282,6 +285,8 @@
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
  " \"unsloth/tinyllama-bnb-4bit\",\n",
+ " \"unsloth/gemma-7b-bnb-4bit\", # New Google 6 trillion tokens model 2.5x faster!\n",
+ " \"unsloth/gemma-2b-bnb-4bit\",\n",
  "] # More models at https://huggingface.co/unsloth\n",
  "\n",
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",

@@ -2528,6 +2533,7 @@
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
  "7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
  "8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
+ "9. Gemma 6 trillion tokens is 2.5x faster! [free Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)\n",
  "\n",
  "<div class=\"align-center\">\n",
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
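The `max_seq_length` bullet repeated in these notebooks refers to linear RoPE scaling (kaiokendev's method): when the requested context is longer than the model's trained context, position indices are compressed by a constant factor so they stay inside the trained range. A minimal sketch of just that scaling step, under the assumption of simple linear interpolation (function names are ours):

```python
def rope_scaling_factor(trained_len: int, target_len: int) -> float:
    """Linear RoPE scaling: only compress positions when the target
    context exceeds the length the model was trained on."""
    return max(1.0, target_len / trained_len)

def scaled_position(pos: int, factor: float) -> float:
    # Every position index is divided by the same factor, so the last
    # position of the extended context maps back inside the trained range.
    return pos / factor

factor = rope_scaling_factor(2048, 4096)  # doubling the context gives factor 2.0
print(scaled_position(4095, factor))
```

The scaled (fractional) positions then feed the usual rotary embedding; the notebooks do this automatically inside `FastLanguageModel`, so this sketch is only the idea, not their implementation.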
ChatML_+_chat_templates_+_Mistral_7b_full_example.ipynb
CHANGED

@@ -31,12 +31,14 @@
  "%%capture\n",
  "import torch\n",
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
+ "# Must install separately since Colab has torch 2.2.1, which breaks packages\n",
+ "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install
+ " !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
- " !pip install
+ " !pip install --no-deps xformers trl peft accelerate bitsandbytes\n",
  "pass"
  ]
  },

@@ -47,7 +49,8 @@
  "* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
  "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
  "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
- "*
+ "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n",
+ "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)"
  ],
  "metadata": {
  "id": "r2v_X2fA0Df5"

@@ -284,6 +287,8 @@
  " \"unsloth/llama-2-13b-bnb-4bit\",\n",
  " \"unsloth/codellama-34b-bnb-4bit\",\n",
  " \"unsloth/tinyllama-bnb-4bit\",\n",
+ " \"unsloth/gemma-7b-bnb-4bit\", # New Google 6 trillion tokens model 2.5x faster!\n",
+ " \"unsloth/gemma-2b-bnb-4bit\",\n",
  "] # More models at https://huggingface.co/unsloth\n",
  "\n",
  "model, tokenizer = FastLanguageModel.from_pretrained(\n",

@@ -1392,6 +1397,7 @@
  "5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
  "6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
  "7. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
+ "9. Gemma 6 trillion tokens is 2.5x faster! [free Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)\n",
  "\n",
  "<div class=\"align-center\">\n",
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
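This notebook's title refers to the ChatML layout that its chat templates target: each ShareGPT-style message is wrapped in `<|im_start|>role ... <|im_end|>` markers. A hedged sketch of that rendering, for illustration only; the notebook's actual templates are applied by the tokenizer, and exact whitespace conventions can differ:

```python
def to_chatml(messages: list[dict]) -> str:
    """Render a list of {'role', 'content'} messages in the ChatML layout:
    <|im_start|>role\\ncontent<|im_end|>\\n per message."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    return "".join(parts)

print(to_chatml([
    {"role": "user", "content": "Hi!"},
    {"role": "assistant", "content": "Hello."},
]))
```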
DPO_Zephyr_Unsloth_Example.ipynb
CHANGED
|
@@ -30,12 +30,14 @@
|
|
| 30 |
"%%capture\n",
|
| 31 |
"import torch\n",
|
| 32 |
"major_version, minor_version = torch.cuda.get_device_capability()\n",
|
|
|
|
|
|
|
| 33 |
"if major_version >= 8:\n",
|
| 34 |
" # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
|
| 35 |
-
" !pip install
|
| 36 |
"else:\n",
|
| 37 |
" # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
|
| 38 |
-
" !pip install
|
| 39 |
"pass"
|
| 40 |
]
|
| 41 |
},
|
|
@@ -49,8 +51,9 @@
|
|
| 49 |
"* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
|
| 50 |
"* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
|
| 51 |
"* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
|
| 52 |
-
"*
|
| 53 |
-
"* DPO requires a model already trained by SFT on a similar dataset that is used for DPO. We use `HuggingFaceH4/mistral-7b-sft-beta` as the SFT model. Use this [notebook](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) first to train a SFT model
|
|
|
|
| 54 |
]
|
| 55 |
},
|
| 56 |
{
|
|
@@ -2564,6 +2567,7 @@
|
|
| 2564 |
"6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
|
| 2565 |
"7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
|
| 2566 |
"8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
|
|
|
|
| 2567 |
"\n",
|
| 2568 |
"<div class=\"align-center\">\n",
|
| 2569 |
" <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
|
"%%capture\n",
"import torch\n",
"major_version, minor_version = torch.cuda.get_device_capability()\n",
+ "# Must install separately since Colab has torch 2.2.1, which breaks packages\n",
+ "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
"if major_version >= 8:\n",
" # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
+ " !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes\n",
"else:\n",
" # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
+ " !pip install --no-deps xformers trl peft accelerate bitsandbytes\n",
"pass"
]
},
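The install cell above branches on `torch.cuda.get_device_capability()`; flash-attn is only installed when the compute capability major version is at least 8 (Ampere or newer). As a plain-Python sketch of that selection logic (the function name is illustrative, package names are taken from the cell):

```python
def pick_install_packages(major_version: int, minor_version: int) -> list:
    """Mirror the notebook's branch: flash-attn is only worth installing on
    GPUs with compute capability >= 8 (Ampere or newer)."""
    common = ["xformers", "trl", "peft", "accelerate", "bitsandbytes"]
    if major_version >= 8:
        # New GPUs: RTX 30xx/40xx, A100, H100, L40
        return ["packaging", "ninja", "einops", "flash-attn"] + common
    # Older GPUs: V100, Tesla T4, RTX 20xx
    return common

# An A100 reports capability (8, 0); a Tesla T4 reports (7, 5)
assert "flash-attn" in pick_install_packages(8, 0)
assert "flash-attn" not in pick_install_packages(7, 5)
```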
"* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
"* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
"* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
+ "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n",
+ "* DPO requires a model already trained by SFT on a similar dataset that is used for DPO. We use `HuggingFaceH4/mistral-7b-sft-beta` as the SFT model. Use this [notebook](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) first to train a SFT model.\n",
+ "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)"
]
},
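The DPO bullet above presupposes an SFT reference model. Per preference pair, the objective DPO optimizes can be sketched in plain Python from sequence log-probabilities as follows (a sketch of the standard DPO loss, not Unsloth's or TRL's actual implementation):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss on one (chosen, rejected) pair of sequence log-probs:
    -log sigmoid(beta * ((pi_c - ref_c) - (pi_r - ref_r)))."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy matches the reference, logits are 0 and the loss is log(2)
assert abs(dpo_loss(-10.0, -12.0, -10.0, -12.0) - math.log(2.0)) < 1e-9
# Preferring the chosen answer more than the reference does lowers the loss
assert dpo_loss(-9.0, -13.0, -10.0, -12.0) < math.log(2.0)
```

This is why the SFT checkpoint matters: it serves as the frozen reference distribution against which the policy's preference margin is measured.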
{
"6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
"7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
"8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
+ "9. Gemma 6 trillion tokens is 2.5x faster! [free Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)\n",
"\n",
"<div class=\"align-center\">\n",
" <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
Mistral_7b_Text_Completion_Raw_Text_training_full_example.ipynb
CHANGED
@@ -41,12 +41,14 @@
"%%capture\n",
"import torch\n",
"major_version, minor_version = torch.cuda.get_device_capability()\n",
"if major_version >= 8:\n",
" # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install
"else:\n",
" # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
- " !pip install
"pass"
]
},
@@ -60,7 +62,8 @@
"* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
"* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
"* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
- "* 
]
},
{
@@ -296,6 +299,8 @@
" \"unsloth/llama-2-13b-bnb-4bit\",\n",
" \"unsloth/codellama-34b-bnb-4bit\",\n",
" \"unsloth/tinyllama-bnb-4bit\",\n",
"] # More models at https://huggingface.co/unsloth\n",
"\n",
"model, tokenizer = FastLanguageModel.from_pretrained(\n",
@@ -1324,6 +1329,7 @@
"5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
"6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
"7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
"\n",
"<div class=\"align-center\">\n",
" <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",
"%%capture\n",
"import torch\n",
"major_version, minor_version = torch.cuda.get_device_capability()\n",
+ "# Must install separately since Colab has torch 2.2.1, which breaks packages\n",
+ "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n",
"if major_version >= 8:\n",
" # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
+ " !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes\n",
"else:\n",
" # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
+ " !pip install --no-deps xformers trl peft accelerate bitsandbytes\n",
"pass"
]
},
"* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n",
"* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n",
"* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n",
+ "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n",
+ "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)"
]
},
{
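The `max_seq_length` bullet above refers to linear RoPE position interpolation (kaiokendev's method): positions are compressed so a sequence longer than the pretrained context still fits. A minimal sketch of the scale it implies (names are illustrative, not Unsloth's internal code):

```python
def rope_scaling_factor(max_seq_length: int, model_max_positions: int) -> float:
    """Linear RoPE scaling: positions are divided by this factor so sequences
    longer than the pretrained window map back into the trained range."""
    return max(1.0, max_seq_length / model_max_positions)

# Llama-2 was pretrained with 4096 positions; asking for 8192 doubles the scale
assert rope_scaling_factor(8192, 4096) == 2.0
# Requests at or below the pretrained window need no scaling
assert rope_scaling_factor(2048, 4096) == 1.0
```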
" \"unsloth/llama-2-13b-bnb-4bit\",\n",
" \"unsloth/codellama-34b-bnb-4bit\",\n",
" \"unsloth/tinyllama-bnb-4bit\",\n",
+ " \"unsloth/gemma-7b-bnb-4bit\", # New Google 6 trillion tokens model 2.5x faster!\n",
+ " \"unsloth/gemma-2b-bnb-4bit\",\n",
"] # More models at https://huggingface.co/unsloth\n",
"\n",
"model, tokenizer = FastLanguageModel.from_pretrained(\n",
"5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
"6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
"7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
+ "8. Gemma 6 trillion tokens is 2.5x faster! [free Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)\n",
"\n",
"<div class=\"align-center\">\n",
" <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"115\"></a>\n",