Instructions to use ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic",
    filename="DeepSeek-R1-Distill-Llama-70B-heretic-Q3_K_M.gguf",
)

llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic with llama.cpp:
Install with Homebrew (macOS, Linux)
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
Use Docker
```shell
docker model run hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
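Once `llama-server` is running, any OpenAI-compatible client can talk to it. A minimal standard-library sketch, assuming the server is on its default port 8080 (adjust the URL to match your `llama-server` flags):

```python
import json
import urllib.request

SERVER_URL = "http://localhost:8080/v1/chat/completions"  # llama-server default port

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def send_chat(prompt: str) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The official `openai` Python package works the same way if you point its `base_url` at the local server.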
- LM Studio
- Jan
- Ollama
How to use ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic with Ollama:
```shell
ollama run hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
- Unsloth Studio
How to use ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic to start chatting.
- Docker Model Runner
How to use ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic with Docker Model Runner:
```shell
docker model run hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
- Lemonade
How to use ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
```
Run and chat with the model
```shell
lemonade run user.DeepSeek-R1-Distill-Llama-70B-heretic-Q4_K_M
```
List all available models
```shell
lemonade list
```
DeepSeek-R1-Distill-Llama-70B-heretic
Abliterated (uncensored) version of deepseek-ai/DeepSeek-R1-Distill-Llama-70B, created using Heretic and converted to GGUF.
Abliteration Quality
| Metric | Value |
|---|---|
| Refusals | 0/100 |
| KL Divergence | 0.0361 |
| Rounds | 1 |
A lower refusal count means fewer harmful prompts were refused; a lower KL divergence means the abliterated model's behavior stays closer to the original model's on harmless prompts.
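For reference, KL divergence between two discrete distributions (e.g. next-token probabilities) can be sketched as follows. This is an illustration of the metric, not the Heretic measurement code:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i) for discrete distributions
    given as equal-length lists of probabilities. Terms with p_i == 0
    contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical distributions diverge by exactly zero:
kl_divergence([0.5, 0.5], [0.5, 0.5])  # → 0.0
```

A value like 0.0361 therefore indicates the abliterated model's output distribution remains very close to the original's.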
Available Quantizations
| Quantization | File | Size |
|---|---|---|
| Q8_0 | DeepSeek-R1-Distill-Llama-70B-heretic-Q8_0.gguf | 69.83 GB |
| Q6_K | DeepSeek-R1-Distill-Llama-70B-heretic-Q6_K.gguf | 53.91 GB |
| Q4_K_M | DeepSeek-R1-Distill-Llama-70B-heretic-Q4_K_M.gguf | 39.60 GB |
| Q3_K_M | DeepSeek-R1-Distill-Llama-70B-heretic-Q3_K_M.gguf | 31.91 GB |
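The file sizes above map to an approximate effective bit width per weight. A rough back-of-the-envelope calculation, assuming ~70.6B parameters (Llama 3.x 70B) and decimal gigabytes (both assumptions, not figures from this repository):

```python
# Estimate effective bits per weight from GGUF file size.
PARAMS = 70.6e9  # assumed parameter count for a Llama 3.x 70B model

sizes_gb = {"Q8_0": 69.83, "Q6_K": 53.91, "Q4_K_M": 39.60, "Q3_K_M": 31.91}

# bits/weight = bytes * 8 / parameter count
bits_per_weight = {q: gb * 1e9 * 8 / PARAMS for q, gb in sizes_gb.items()}

for q, bpw in bits_per_weight.items():
    print(f"{q}: ~{bpw:.2f} bits/weight")
```

The estimates come out slightly above the nominal bit widths because K-quants store per-block scales and keep some tensors at higher precision.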
Usage with Ollama
Important: This model is based on Llama 3.1 architecture but uses DeepSeek's fullwidth Unicode special tokens in the GGUF metadata. These tokens are not correctly handled by Ollama's default tokenizer, causing garbled output. You must use the included Modelfile to get correct output.
```shell
# Download the Modelfile and create the model
wget https://huggingface.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic/resolve/main/Modelfile
ollama create deepseek-r1-70b-heretic -f Modelfile
ollama run deepseek-r1-70b-heretic
```
To use a different quantization, edit the FROM line in the Modelfile:
```
FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q4_K_M
FROM hf.co/ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic:Q3_K_M
```
bf16 Weights
The full bf16 abliterated weights are available in the bf16/ subdirectory of this repository.
Usage with Transformers
The bf16 weights in the bf16/ subdirectory can be loaded directly with Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic"
tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder="bf16")
model = AutoModelForCausalLM.from_pretrained(
    model_id, subfolder="bf16", torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
About
This model was processed by the Apostate automated abliteration pipeline:
- The source model was loaded in bf16
- Heretic's optimization-based abliteration was applied to remove refusal behavior
- The merged model was converted to GGUF format using llama.cpp
- Multiple quantization levels were generated
The abliteration process uses directional ablation to remove the model's refusal directions while minimizing KL divergence from the original model's behavior on harmless prompts.
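The core linear-algebra step of directional ablation can be sketched as a projection: given a unit "refusal direction" r, every weight matrix writing into the residual stream is replaced by W' = (I − r rᵀ) W, so its outputs have no component along r. This is a simplified illustration, not Heretic's actual implementation:

```python
def ablate(W, r):
    """Project the refusal direction r out of weight matrix W.

    W: m x n matrix as a list of rows; r: unit vector of length m.
    Returns W' = W - r (r^T W), whose columns are all orthogonal to r.
    """
    m, n = len(W), len(W[0])
    # coeffs[j] = r . W[:, j], the component of column j along r
    coeffs = [sum(r[i] * W[i][j] for i in range(m)) for j in range(n)]
    return [[W[i][j] - r[i] * coeffs[j] for j in range(n)] for i in range(m)]
```

In practice the direction is estimated from activation differences between harmful and harmless prompts, and Heretic searches over which layers and how strongly to ablate so that the KL divergence on harmless prompts stays small.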
Model tree for ThalisAI/DeepSeek-R1-Distill-Llama-70B-heretic
Base model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B