Update README.md
README.md
CHANGED
@@ -9,8 +9,41 @@ tags:
- transformers
---

## ***See [our collection](https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5) for versions of DeepSeek-R1, including GGUF and 4-bit formats.***

### Instructions to run this model in llama.cpp:

You can also view more detailed instructions here: [unsloth.ai/blog/deepseek-r1](https://unsloth.ai/blog/deepseek-r1)

1. Do not forget about the `<|User|>` and `<|Assistant|>` tokens! Or use a chat template formatter, as sketched below.
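
For the chat-template route, here is one sketch (an assumption on our side: it relies on `llama-cli`'s conversation mode, which recent builds enable unless you pass `-no-cnv`). Conversation mode applies the chat template stored inside the GGUF, so the special tokens are inserted for you on every turn.

```bash
# -cnv turns on conversation mode: llama.cpp applies DeepSeek-R1's bundled
# chat template, wrapping your input in <|User|> / <|Assistant|> itself.
./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-R1-GGUF/DeepSeek-R1-Q4_K_M.gguf \
    --threads 16 \
    -cnv
```
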
2. Obtain the latest `llama.cpp` from https://github.com/ggerganov/llama.cpp - one possible build-and-download sketch follows.
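
A minimal sketch of that step (assumptions: a CUDA machine, CMake, and the `huggingface-cli` tool; build flags and the exact quant filenames vary, so check the upstream build docs and the repo's file list):

```bash
# Clone and build llama.cpp (use -DGGML_CUDA=OFF for a CPU-only build).
git clone https://github.com/ggerganov/llama.cpp
cmake llama.cpp -B llama.cpp/build -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j
# Copy the binaries up one level so the ./llama.cpp/llama-cli paths below work.
cp llama.cpp/build/bin/llama-* llama.cpp/

# Download only the quant you want (the pattern is an assumption; check the repo).
huggingface-cli download unsloth/DeepSeek-R1-GGUF \
    --include "*Q4_K_M*" --local-dir unsloth/DeepSeek-R1-GGUF
```
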
3. Example with the K cache quantized to Q8_0. **Note: `-no-cnv` disables auto conversation mode.**
```bash
./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-R1-GGUF/DeepSeek-R1-Q4_K_M.gguf \
    --cache-type-k q8_0 \
    --threads 16 \
    --prompt '<|User|>What is 1+1?<|Assistant|>' \
    -no-cnv
```

Example output:

```txt
<think>
Okay, so I need to figure out what 1 plus 1 is. Hmm, where do I even start? I remember from school that adding numbers is pretty basic, but I want to make sure I understand it properly.
Let me think, 1 plus 1. So, I have one item and I add another one. Maybe like a apple plus another apple. If I have one apple and someone gives me another, I now have two apples. So, 1 plus 1 should be 2. That makes sense.
Wait, but sometimes math can be tricky. Could it be something else? Like, in a different number system maybe? But I think the question is straightforward, using regular numbers, not like binary or hexadecimal or anything.
I also recall that in arithmetic, addition is combining quantities. So, if you have two quantities of 1, combining them gives you a total of 2. Yeah, that seems right.
Is there a scenario where 1 plus 1 wouldn't be 2? I can't think of any...
```

4. If you have a GPU with 24GB of VRAM (an RTX 4090, for example), you can offload multiple layers to the GPU for faster processing. If you have multiple GPUs, you can probably offload more layers; a rough sizing sketch follows the command.
```bash
./llama.cpp/llama-cli \
    --model unsloth/DeepSeek-R1-GGUF/DeepSeek-R1-Q4_K_M.gguf \
    --cache-type-k q8_0 \
    --threads 16 \
    --prompt '<|User|>What is 1+1?<|Assistant|>' \
    --n-gpu-layers 20 \
    -no-cnv
```
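
There is no single correct value for `--n-gpu-layers`. As a back-of-the-envelope sketch (every number below is an assumption to replace with your own), you can estimate how many layers fit by dividing free VRAM by the approximate per-layer size of the quant on disk:

```bash
# Rough heuristic, not an exact rule:
#   layers_to_offload ~= free_VRAM / (model_size_on_disk / total_layers)
VRAM_GB=24     # free VRAM, e.g. a single RTX 4090
MODEL_GB=212   # hypothetical on-disk size of your chosen quant; check yours
LAYERS=61      # DeepSeek-R1 has 61 transformer layers
echo $(( VRAM_GB * LAYERS / MODEL_GB ))   # prints 6 for these numbers
```
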
# Finetune LLMs 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb