Instructions to use unsloth/DeepSeek-R1-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use unsloth/DeepSeek-R1-GGUF with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="unsloth/DeepSeek-R1-GGUF", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("unsloth/DeepSeek-R1-GGUF", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("unsloth/DeepSeek-R1-GGUF", trust_remote_code=True)
- llama-cpp-python
How to use unsloth/DeepSeek-R1-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/DeepSeek-R1-GGUF",
    filename="DeepSeek-R1-BF16/DeepSeek-R1.BF16-00001-of-00030.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
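create_chat_completion returns an OpenAI-style response dict; a minimal sketch of pulling the reply text out of it (assuming the llm object loaded above, and the 0.6 sampling temperature recommended later in this thread):

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    temperature=0.6,  # temperature discussed for DeepSeek-R1 further down this page
)
# The assistant reply sits under choices[0]["message"]["content"]
print(result["choices"][0]["message"]["content"])
- Notebooks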
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use unsloth/DeepSeek-R1-GGUF with llama.cpp:
Install from Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf unsloth/DeepSeek-R1-GGUF:Q4_K_M
Use Docker
docker model run hf.co/unsloth/DeepSeek-R1-GGUF:Q4_K_M
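Once llama-server is running (it listens on port 8080 by default), any OpenAI-compatible client can talk to it. A minimal sketch using Python's requests; the port and endpoint path are llama-server defaults, so adjust them if you changed the server flags:

import requests

# llama-server exposes an OpenAI-compatible chat endpoint (default port 8080)
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "temperature": 0.6,
    },
)
print(resp.json()["choices"][0]["message"]["content"])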
- LM Studio
- Jan
- vLLM
How to use unsloth/DeepSeek-R1-GGUF with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "unsloth/DeepSeek-R1-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "unsloth/DeepSeek-R1-GGUF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
Use Docker
docker model run hf.co/unsloth/DeepSeek-R1-GGUF:Q4_K_M
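Because vLLM serves the OpenAI API, the official openai Python client works just as well as curl. A minimal sketch, assuming the server from the step above is running on its default port 8000:

from openai import OpenAI

# vLLM accepts any api_key value unless one was set at launch
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="unsloth/DeepSeek-R1-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)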
- SGLang
How to use unsloth/DeepSeek-R1-GGUF with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "unsloth/DeepSeek-R1-GGUF" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "unsloth/DeepSeek-R1-GGUF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "unsloth/DeepSeek-R1-GGUF" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "unsloth/DeepSeek-R1-GGUF",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
- Ollama
How to use unsloth/DeepSeek-R1-GGUF with Ollama:
ollama run hf.co/unsloth/DeepSeek-R1-GGUF:Q4_K_M
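Beyond the interactive CLI, the local Ollama daemon can also be scripted against. A minimal sketch with the ollama Python package (pip install ollama), assuming the daemon is running with its defaults:

import ollama

# The model tag matches the `ollama run` command above
resp = ollama.chat(
    model="hf.co/unsloth/DeepSeek-R1-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp["message"]["content"])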
- Unsloth Studio
How to use unsloth/DeepSeek-R1-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for unsloth/DeepSeek-R1-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for unsloth/DeepSeek-R1-GGUF to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for unsloth/DeepSeek-R1-GGUF to start chatting
- Docker Model Runner
How to use unsloth/DeepSeek-R1-GGUF with Docker Model Runner:
docker model run hf.co/unsloth/DeepSeek-R1-GGUF:Q4_K_M
- Lemonade
How to use unsloth/DeepSeek-R1-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull unsloth/DeepSeek-R1-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.DeepSeek-R1-GGUF-Q4_K_M
List all available models
lemonade list
I tested the dynamic 1.58-bit and 2.22-bit quants. All the thoughts are empty?
Asking "How many R's are in 'strawberry'?" only occasionally produces any actual thinking; most of the responses just contain empty <think> tags.
Make sure you're using the correct chat template for inference
FROM R1-2.22bit.gguf
TEMPLATE """{{- if .System }}{{ .System }}{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1}}
{{- if eq .Role "user" }}<|User|>{{ .Content }}
{{- else if eq .Role "assistant" }}<|Assistant|>{{ .Content }}{{- if not $last }}<|end▁of▁sentence|>{{- end }}
{{- end }}
{{- if and $last (ne .Role "assistant") }}<|Assistant|>{{- end }}
{{- end }}"""
PARAMETER stop <|begin▁of▁sentence|>
PARAMETER stop <|end▁of▁sentence|>
PARAMETER stop <|User|>
PARAMETER stop <|Assistant|>
LICENSE """MIT License
Copyright (c) 2023 DeepSeek
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE."""
I have the same question as you. When I ask the 1.58-bit model something, the tokens in <think> are empty.
Did you guys set temp to 0.6?
Yes, and I know why I got the empty output: add the special tokens at the beginning and the end of the sequence, like "<|User|>" + input_text + "<|Assistant|>".
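To spell that out (a minimal sketch using llama-cpp-python's raw completion API, reusing the llm object from the top of this page; the special-token strings come from the Modelfile template above):

# Wrap the raw text in DeepSeek-R1's special tokens instead of relying on
# a chat template that may be missing or wrong in the GGUF metadata
input_text = "How many R's are in 'strawberry'?"
prompt = "<|User|>" + input_text + "<|Assistant|>"
out = llm(prompt, max_tokens=2048, temperature=0.6, stop=["<|end▁of▁sentence|>"])
print(out["choices"][0]["text"])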
Can you send your configuration file?
Did you guys set temp to 0.6?
yes
Found this out after days of toiling: the only way I found to force it to generate CoT is to format my user prompts like this:
<think>\n (Prompt in here) \n</think>\n
Hopefully it helps someone.
EDIT: the forum originally swallowed the tags in my post; the prompt needs the literal <think> and </think> tags exactly as shown above, with no spaces inside the brackets.
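To make that format concrete (a sketch only; this is one poster's reported workaround, not documented behavior):

# Hypothetical illustration of the workaround described above:
# wrap the user prompt in literal <think> tags with surrounding newlines
user_text = "How many R's are in 'strawberry'?"
prompt = "<think>\n " + user_text + " \n</think>\n"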
Thank you brother, it's very useful. It worked for me on the 1.58-bit.