Instructions for using gghfez/DeepSeek-R1-Zero-IQ3_KS with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use gghfez/DeepSeek-R1-Zero-IQ3_KS with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gghfez/DeepSeek-R1-Zero-IQ3_KS",
    filename="DeepSeek-R1-Zero-IQ3_KS-00001-of-00007.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
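create_chat_completion returns an OpenAI-style response dict, so the reply text can be read from choices[0]. A minimal sketch, reusing the llm instance from above:

```python
# Minimal sketch: extract the assistant reply from the OpenAI-style
# dict returned by create_chat_completion.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response["choices"][0]["message"]["content"])
```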
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use gghfez/DeepSeek-R1-Zero-IQ3_KS with llama.cpp:
Install from Homebrew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf gghfez/DeepSeek-R1-Zero-IQ3_KS

# Run inference directly in the terminal:
llama-cli -hf gghfez/DeepSeek-R1-Zero-IQ3_KS
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf gghfez/DeepSeek-R1-Zero-IQ3_KS

# Run inference directly in the terminal:
llama-cli -hf gghfez/DeepSeek-R1-Zero-IQ3_KS
```
Use a pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf gghfez/DeepSeek-R1-Zero-IQ3_KS

# Run inference directly in the terminal:
./llama-cli -hf gghfez/DeepSeek-R1-Zero-IQ3_KS
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf gghfez/DeepSeek-R1-Zero-IQ3_KS

# Run inference directly in the terminal:
./build/bin/llama-cli -hf gghfez/DeepSeek-R1-Zero-IQ3_KS
```
Use Docker
```bash
docker model run hf.co/gghfez/DeepSeek-R1-Zero-IQ3_KS
```
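Once llama-server is running, it exposes an OpenAI-compatible API, by default on http://localhost:8080. A minimal sketch using the openai Python client; the base URL assumes the default port, and the API key is a dummy value since the local server does not require one:

```python
# Minimal sketch: query a local llama-server instance through its
# OpenAI-compatible API (default host/port assumed).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gghfez/DeepSeek-R1-Zero-IQ3_KS",  # single-model servers ignore this field
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```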
- LM Studio
- Jan
- vLLM
How to use gghfez/DeepSeek-R1-Zero-IQ3_KS with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "gghfez/DeepSeek-R1-Zero-IQ3_KS"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "gghfez/DeepSeek-R1-Zero-IQ3_KS",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
```
Use Docker
```bash
docker model run hf.co/gghfez/DeepSeek-R1-Zero-IQ3_KS
```
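The vLLM server started above can also be called from Python; a minimal sketch using the openai client with streaming enabled (the base URL assumes vLLM's default port 8000 and no API key enforcement):

```python
# Minimal sketch: stream tokens from the vLLM server's
# OpenAI-compatible API (default port 8000 assumed).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="gghfez/DeepSeek-R1-Zero-IQ3_KS",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```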
- Ollama
How to use gghfez/DeepSeek-R1-Zero-IQ3_KS with Ollama:
```bash
ollama run hf.co/gghfez/DeepSeek-R1-Zero-IQ3_KS
```
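The model can also be called programmatically through Ollama's local REST API, which listens on port 11434 by default. A minimal sketch with the requests library; setting "stream": False returns a single JSON response:

```python
# Minimal sketch: chat with the model via Ollama's REST API
# (default endpoint localhost:11434 assumed).
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/gghfez/DeepSeek-R1-Zero-IQ3_KS",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```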
- Unsloth Studio
How to use gghfez/DeepSeek-R1-Zero-IQ3_KS with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for gghfez/DeepSeek-R1-Zero-IQ3_KS to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for gghfez/DeepSeek-R1-Zero-IQ3_KS to start chatting
```
Use Hugging Face Spaces for Unsloth
```bash
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for gghfez/DeepSeek-R1-Zero-IQ3_KS to start chatting
```
- Docker Model Runner
How to use gghfez/DeepSeek-R1-Zero-IQ3_KS with Docker Model Runner:
```bash
docker model run hf.co/gghfez/DeepSeek-R1-Zero-IQ3_KS
```
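Docker Model Runner also exposes an OpenAI-compatible endpoint. A minimal sketch, assuming host TCP access is enabled on the default port 12434 with the /engines/v1 path (both are assumptions based on Docker's documented defaults and may differ in your installation):

```python
# Minimal sketch: query Docker Model Runner's OpenAI-compatible API.
# ASSUMPTION: host TCP access enabled, default port 12434, /engines/v1 path.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="hf.co/gghfez/DeepSeek-R1-Zero-IQ3_KS",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```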
- Lemonade
How to use gghfez/DeepSeek-R1-Zero-IQ3_KS with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull gghfez/DeepSeek-R1-Zero-IQ3_KS
```
Run and chat with the model
```bash
# Note: {{QUANT_TAG}} is an unfilled placeholder from the source instructions.
lemonade run user.DeepSeek-R1-Zero-IQ3_KS-{{QUANT_TAG}}
```
List all available models
```bash
lemonade list
```
ik_llama.cpp imatrix MLA Quantizations of deepseek-ai/DeepSeek-R1-Zero
This is an IQ3_KS quant of deepseek-ai/DeepSeek-R1-Zero using ubergarm's IQ3_KS recipe from ubergarm/DeepSeek-TNG-R1T2-Chimera-GGUF.
This quant collection REQUIRES the ik_llama.cpp fork, which supports advanced non-linear SotA quants and Multi-Head Latent Attention (MLA). Do not download these big files and expect them to run on mainline vanilla llama.cpp, Ollama, LM Studio, KoboldCpp, etc.!
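For anyone proceeding with ik_llama.cpp, the split GGUF shards can be fetched ahead of time with huggingface_hub; a minimal sketch (the local_dir name is an arbitrary choice):

```python
# Minimal sketch: download all GGUF shards of this quant for use
# with ik_llama.cpp. The local_dir is an arbitrary choice.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="gghfez/DeepSeek-R1-Zero-IQ3_KS",
    allow_patterns=["*.gguf"],
    local_dir="DeepSeek-R1-Zero-IQ3_KS",
)
```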
I've uploaded the converted BF16 weights to gghfez/DeepSeek-R1-Zero-256x21B-BF16 in case I, or anyone else, wants to create similar quants in the future.
Note: I may be deleting gghfez/DeepSeek-R1-Zero-256x21B-BF16 shortly due to the new Hugging Face storage limits.
Model tree for gghfez/DeepSeek-R1-Zero-IQ3_KS
- Base model: deepseek-ai/DeepSeek-R1-Zero