Instructions for using VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="VECTORVV1/DeepSeek-R1-Distill-Qwen-32B",
    filename="DeepSeek-R1-Distill-Qwen-32B-Q8_K_P.gguf",
)
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ]
)
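For text-only prompts the image content part is not required. A minimal sketch (not part of the upstream widget) that reuses the llm object created above:

# Plain text chat completion with the same llm object loaded above
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF file is."}]
)
print(response["choices"][0]["message"]["content"])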
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
# Run inference directly in the terminal:
llama-cli -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
# Run inference directly in the terminal:
llama-cli -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
# Run inference directly in the terminal:
./llama-cli -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
# Run inference directly in the terminal:
./build/bin/llama-cli -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Use Docker
docker model run hf.co/VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
- LM Studio
- Jan
- vLLM
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "VECTORVV1/DeepSeek-R1-Distill-Qwen-32B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "VECTORVV1/DeepSeek-R1-Distill-Qwen-32B",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
Use Docker
docker model run hf.co/VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
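Once the vLLM server is running (via pip or Docker), the same OpenAI-compatible endpoint can be called from Python. A minimal sketch, assuming the default port 8000 shown in the curl example above:

# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
response = client.chat.completions.create(
    model="VECTORVV1/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "Summarize this model's purpose in one sentence."}],
)
print(response.choices[0].message.content)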
- Ollama
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with Ollama:
ollama run hf.co/VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
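After the model has been pulled with the command above, it can also be called programmatically. A minimal sketch, assuming the optional ollama Python package is installed and the Ollama daemon is running locally:

# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/VECTORVV1/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])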
- Unsloth Studio
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for VECTORVV1/DeepSeek-R1-Distill-Qwen-32B to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for VECTORVV1/DeepSeek-R1-Distill-Qwen-32B to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for VECTORVV1/DeepSeek-R1-Distill-Qwen-32B to start chatting
- Pi
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {"id": "VECTORVV1/DeepSeek-R1-Distill-Qwen-32B"}
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Run Hermes
hermes
- Docker Model Runner
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with Docker Model Runner:
docker model run hf.co/VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
- Lemonade
How to use VECTORVV1/DeepSeek-R1-Distill-Qwen-32B with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull VECTORVV1/DeepSeek-R1-Distill-Qwen-32B
Run and chat with the model
lemonade run user.DeepSeek-R1-Distill-Qwen-32B-{{QUANT_TAG}}
List all available models
lemonade list
Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive
Join the Discord for updates, roadmaps, projects, or just to chat.
Qwen3.6-35B-A3B uncensored by HauhauCS. 0/465 Refusals.
HuggingFace's "Hardware Compatibility" widget doesn't recognize K_P quants — it may show fewer files than actually exist. Click "View +X variants" or go to Files and versions to see all available downloads.
About
No changes to datasets or capabilities. Fully functional, 100% of what the original authors intended - just without the refusals.
These are meant to be the best lossless uncensored models out there.
Aggressive Variant
Stronger uncensoring — model is fully unlocked and won't refuse prompts. May occasionally append short disclaimers (baked into base model training, not refusals) but full content is always generated.
For a more conservative uncensor that keeps some safety guardrails, check the Balanced variant when it's available.
Downloads
All quants generated with importance matrix (imatrix) for optimal quality preservation on abliterated weights.
What are K_P quants?
K_P ("Perfect") quants are HauhauCS custom quantizations that use model-specific analysis to selectively preserve quality where it matters most. Each model gets its own optimized quantization profile.
A K_P quant effectively bumps quality up by 1-2 quant levels at only ~5-15% larger file size than the base quant. Fully compatible with llama.cpp, LM Studio, and any GGUF-compatible runtime — no special builds needed.
Note: K_P quants may show as "?" in LM Studio's quant column. This is a display issue only — the model loads and runs fine.
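Because the Hardware Compatibility widget may not list K_P files, the most reliable route is to grab a specific quant from Files and versions by name. A minimal sketch (not part of the card) using huggingface_hub and llama-cpp-python; the Q8_K_P filename below is the one used in the library example above, so substitute whichever quant you actually want:

# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="VECTORVV1/DeepSeek-R1-Distill-Qwen-32B",
    filename="DeepSeek-R1-Distill-Qwen-32B-Q8_K_P.gguf",  # pick any quant listed under Files and versions
)
# K_P quants load like any other GGUF; raise n_ctx toward 128K+ if you rely on long thinking traces
llm = Llama(model_path=path, n_ctx=8192)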
Specs
- 35B total parameters, ~3B active per forward pass (MoE)
- 256 experts, 8 routed per token
- Hybrid architecture: linear attention + full softmax attention (3:1 ratio)
- 40 layers
- 262K native context
- Natively multimodal (text, image, video)
- Based on Qwen/Qwen3.6-35B-A3B
Recommended Settings
From the official Qwen authors:
Thinking mode (default):
- General: temperature=1.0, top_p=0.95, top_k=20, min_p=0, presence_penalty=1.5
- Coding/precise tasks: temperature=0.6, top_p=0.95, top_k=20, min_p=0, presence_penalty=0
Non-thinking mode:
- General: temperature=0.7, top_p=0.8, top_k=20, min_p=0, presence_penalty=1.5
- Reasoning tasks: temperature=1.0, top_p=1.0, top_k=40, min_p=0, presence_penalty=2.0
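When the model is served through an OpenAI-compatible server (llama-server, vLLM, etc.), these values can be set per request. A sketch assuming a local server on port 8080; top_k and min_p are not standard OpenAI fields, so the extra_body passthrough only works if the server accepts them:

# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
response = client.chat.completions.create(
    model="VECTORVV1/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "Walk through 17 * 23 step by step."}],
    temperature=1.0,          # thinking mode, general
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={"top_k": 20, "min_p": 0},  # non-standard fields, passed through to the backend
)
print(response.choices[0].message.content)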
Important:
- Keep at least 128K context to preserve thinking capabilities
- Use the --jinja flag with llama.cpp for proper chat template handling
- Vision support requires the mmproj file alongside the main GGUF
Usage
Works with llama.cpp, LM Studio, Jan, koboldcpp, and other GGUF-compatible runtimes.
llama-cli -m Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_P.gguf \
--mmproj mmproj-Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-f16.gguf \
--jinja -c 131072 -ngl 99
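The same setup can be exposed as a local server and queried over the OpenAI-compatible API, including image input via the mmproj file. A sketch under stated assumptions: llama-server accepts the --mmproj flag for this model and listens on its default port 8080; the model name in the request is informational only:

# Assumed server start (mirrors the llama-cli command above):
#   llama-server -m Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_P.gguf \
#     --mmproj mmproj-Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-f16.gguf --jinja -c 131072
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
response = client.chat.completions.create(
    model="Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)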