# Covenant-72B-GGUF

The first quantized GGUF of 1Covenant/Covenant-72B-Chat, a 72.7-billion-parameter model from the Bittensor/Templar network, fine-tuned for chat.

Quantized and published by the LITCOIN team. A tokenizer bug in the upstream model blocks every standard quantization path (details below), so as far as we know this is the only working GGUF of this model.
## Downloads

| File | Quant | Size | BPW | Notes |
|---|---|---|---|---|
| covenant-72b-q3_K_S.gguf | Q3_K_S | 30.06 GB | 3.55 | Best for 32 GB RAM + 12 GB VRAM |
## Quick Start

Download the GGUF, run the server, open the browser. Three steps.

### 1. Get llama.cpp

Download the latest release for your OS from the llama.cpp releases page. You need `llama-server` to run the model (and `llama-quantize` only if you plan to requantize it yourself).
### 2. Download the GGUF

```shell
# Option A: huggingface-cli (recommended for large files)
pip install huggingface_hub
huggingface-cli download litcoin/Covenant-72B-GGUF covenant-72b-q3_K_S.gguf --local-dir .

# Option B: direct download (30 GB)
wget https://huggingface.co/litcoin/Covenant-72B-GGUF/resolve/main/covenant-72b-q3_K_S.gguf
```
### 3. Run

```shell
# Linux / macOS
./llama-server -m covenant-72b-q3_K_S.gguf -ngl 30 -c 1024 -np 1 --host 0.0.0.0 --port 8080

# Windows (PowerShell)
llama-server.exe -m covenant-72b-q3_K_S.gguf -ngl 30 -c 1024 -np 1 --host 0.0.0.0 --port 8080
```

Open http://localhost:8080 in your browser. That's it -- `llama-server` ships with a built-in chat UI.
## Hardware Requirements

- RAM: 32 GB minimum (the model spills from GPU into system memory)
- GPU: any CUDA GPU with 8 GB+ VRAM
- Disk: 35 GB free space
- Software: llama.cpp b8533 or later
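The RAM requirement follows from how llama.cpp splits the model: layers offloaded with `-ngl` live in VRAM and the rest stay in system RAM. A rough back-of-envelope sketch (it assumes the 80 layers are roughly equal in size; actual usage also includes the KV cache and runtime overhead, so treat this as a floor):

```python
# Rough sketch: how the 30 GB q3_K_S file splits between VRAM and system RAM.
# Assumes all 80 layers are equally sized; real usage adds KV cache/overhead.
def memory_split(model_gb=30.06, total_layers=80, gpu_layers=30):
    per_layer_gb = model_gb / total_layers
    vram_gb = gpu_layers * per_layer_gb
    ram_gb = model_gb - vram_gb
    return round(vram_gb, 1), round(ram_gb, 1)

print(memory_split())  # at -ngl 30: ~11.3 GB in VRAM, ~18.8 GB in RAM
```

With `-ngl 30` on a 12 GB card, roughly 19 GB of weights land in system RAM, which is why 16 GB machines can't run this quant.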
## Tested Configurations

| GPU | VRAM | Layers on GPU | Speed | Flags |
|---|---|---|---|---|
| RTX 4070 Ti | 12 GB | 30 | ~1.5 tok/s | `-ngl 30 -c 1024 -np 1` |
| RTX 3090 | 24 GB | 50+ | ~4 tok/s | `-ngl 50 -c 2048 -np 1` |
| RTX 4090 | 24 GB | 50+ | ~5 tok/s | `-ngl 50 -c 4096` |
| 2x RTX 3090 | 48 GB | 80 (all) | ~8 tok/s | `-ngl 80 -c 8192` |
## Tuning Tips

- `-ngl N` controls how many layers run on the GPU. Higher = faster. Start at 30 and increase until you hit an out-of-memory error, then back off by 5.
- `-c N` sets the context window. Lower values free VRAM for more GPU layers. Max is 8192.
- `-np 1` runs a single inference slot instead of the default 4, saving significant VRAM on smaller GPUs.
- More GPU layers is always the priority: CPU-only inference on a 72B model is unusably slow.
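The probe-and-back-off loop above can start from an estimate instead of a blind guess. This helper is hypothetical (not part of llama.cpp), and the `overhead_gb` figure for KV cache plus CUDA runtime is an assumption that grows with `-c`, so treat the result as a starting point for probing, not a guarantee:

```python
# Hypothetical starting point for -ngl. overhead_gb is a guess; the real
# limit still has to be found by probing, since KV-cache size varies with -c.
def estimate_ngl(vram_gb, model_gb=30.06, total_layers=80, overhead_gb=1.5):
    per_layer_gb = model_gb / total_layers
    fit = int((vram_gb - overhead_gb) / per_layer_gb)
    return max(0, min(total_layers, fit))

print(estimate_ngl(12))  # starting guess for a 12 GB card
print(estimate_ngl(24))  # starting guess for a 24 GB card
```

These estimates land close to the tested configurations above (30 layers on 12 GB, 50+ on 24 GB).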
## API Usage

The server exposes an OpenAI-compatible API at http://localhost:8080/v1.

### curl

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Explain quantum computing in simple terms."}],
    "max_tokens": 200
  }'
```
### Python

```python
import requests

r = requests.post("http://localhost:8080/v1/chat/completions", json={
    "messages": [{"role": "user", "content": "Write a Python function to sort a list."}],
    "max_tokens": 300,
})
print(r.json()["choices"][0]["message"]["content"])
```
### OpenAI SDK

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="covenant-72b",
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```
## Chat Template

Covenant-72B uses the Gemma chat format. llama-server auto-detects this from the GGUF metadata.

```
<start_of_turn>user
Hello<end_of_turn>
<start_of_turn>model
Hi there<end_of_turn>
```
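You only need to apply this template yourself if you bypass `/v1/chat/completions` and hit a raw completion endpoint. A minimal sketch of the Gemma format shown above (the function name is ours; it assumes assistant turns map to the `model` role, and ends with an open `model` turn to cue a response):

```python
# Minimal sketch of the Gemma chat format. llama-server applies this
# automatically on /v1/chat/completions; only needed for raw prompts.
def render_gemma(messages):
    role_map = {"user": "user", "assistant": "model"}
    parts = [
        f"<start_of_turn>{role_map[m['role']]}\n{m['content']}<end_of_turn>\n"
        for m in messages
    ]
    parts.append("<start_of_turn>model\n")  # open turn: cue the model to reply
    return "".join(parts)

prompt = render_gemma([{"role": "user", "content": "Hello"}])
```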
## Mining on LITCOIN

This model can mine on the LITCOIN proof-of-research network, solving real scientific and computational problems from sources like Codeforces, Project Euler, Rosalind, HuggingFace, and ARC for token rewards.

```shell
# 1. Start the model server
llama-server -m covenant-72b-q3_K_S.gguf -ngl 30 -c 1024 -np 1 --host 0.0.0.0 --port 8080

# 2. Download and run the LITCOIN miner
# Get the miner from https://litcoiin.xyz/litcoin_miner.py
# Set these in the CONFIG section:
#   BANKR_API_KEY = "bk_YOUR_KEY"
#   AI_BASE_URL   = "http://localhost:8080/v1"
#   AI_API_KEY    = "ollama"
#   MODEL         = "covenant-72b"
python litcoin_miner.py --relay
```

The `--relay` flag also registers you as a compute provider on the LITCOIN network. Other users can route inference requests through your model, burning LITCREDIT while you earn LITCOIN at 2x mining weight.

Get a Bankr wallet at bankr.bot. Learn more at litcoiin.xyz.
## Why This GGUF Exists (The Quantization Story)

The original Covenant-72B-Chat model has a tokenizer bug that blocks every standard quantization path. No one else has published a working GGUF because of this. Here's what's broken and how we fixed it.

### The Problem

The model's tokenizer contains 262,145 tokens (including `<image_soft_token>` at ID 262144), but the embedding matrix (`token_embd.weight`) has only 262,144 rows. This off-by-one mismatch causes llama.cpp to reject the model at load time:

```
check_tensor_dims: tensor 'token_embd.weight' has wrong shape;
expected 8192, 262145, got 8192, 262144, 1, 1
```

Every quantization tool fails: Ollama can't load it, and the standard `convert_hf_to_gguf.py` pipeline crashes or produces a broken GGUF.
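To make the failure concrete, here is a simplified Python model of that check (llama.cpp's real check is C++; this only illustrates the logic): the expected row count comes from the tokenizer array length, while the actual one comes from the tensor on disk.

```python
# Simplified Python model of llama.cpp's load-time shape check. Not the
# actual implementation -- just the logic that makes the load fail.
def check_tensor_dims(n_embd, n_tokenizer_entries, tensor_shape):
    expected = (n_embd, n_tokenizer_entries)  # derived from tokenizer arrays
    if tensor_shape != expected:
        raise ValueError(
            f"tensor 'token_embd.weight' has wrong shape; "
            f"expected {expected}, got {tensor_shape}"
        )

# Covenant-72B's situation: 262,145 tokenizer entries, 262,144 embedding rows.
try:
    check_tensor_dims(8192, 262145, (8192, 262144))
except ValueError as e:
    print(e)
```

Because the expected shape is derived from the tokenizer arrays rather than the metadata, any fix has to shrink those arrays themselves, which is what the attempts below circle in on.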
### What We Tried (And Why It Failed)

**Attempt 1: Edit the tokenizer JSON files.** We wrote fix_vocab.py to remove token 262144 from tokenizer.json, tokenizer_config.json, and added_tokens.json. The JSON came out clean, but `convert_hf_to_gguf.py` reads token data from the sentencepiece model internally, not from the JSON. The GGUF still came out with 262,145 tokenizer entries.

**Attempt 2: Binary-patch the GGUF.** We tried patching the array-length headers directly in the GGUF binary (replacing 262145 with 262144 in the metadata). This changed the array lengths but left orphaned token data in the file, corrupting it for the GGUF parser. The model wouldn't load at all: `key '<image_soft_token>' has invalid GGUF type 21`.

**Attempt 3: The --override-kv flag.** llama.cpp supports metadata overrides at runtime, so we tried `--override-kv llama.vocab_size=int:262144`. This changes the reported vocab size in metadata, but the tokenizer arrays (`tokenizer.ggml.tokens`, `tokenizer.ggml.scores`, `tokenizer.ggml.token_type`) still have 262,145 entries. llama.cpp computes expected tensor dimensions from the actual tokenizer array length, not the metadata field, so the shape mismatch persisted.

**Attempt 4: Remove the token from model.vocab inside tokenizer.json.** The extra token wasn't actually in model.vocab (the max ID there was 262143); it only existed in added_tokens. Worse, rewriting the 33 MB tokenizer.json on Windows hit a cp1252 encoding error that corrupted the file mid-write, and we had to re-download the tokenizer from HuggingFace.
### The Fix That Worked

Patch `convert_hf_to_gguf.py` at line ~1714. The converter has an assert that enforces `len(tokens) == vocab.vocab_size`, and the sentencepiece tokenizer reports 262,145 tokens. Instead of asserting, we truncate the arrays to match the embedding matrix:

```python
# Original (line ~1714):
assert len(tokens) == vocab.vocab_size

# Fixed:
tokens = tokens[:self.hparams.get("vocab_size", len(tokens))]
scores = scores[:len(tokens)]
toktypes = toktypes[:len(tokens)]
```

This caps the tokenizer arrays to 262,144 (matching config.json's `vocab_size` and the embedding matrix) *before* writing to the GGUF. The resulting file has matched dimensions, and llama.cpp loads it cleanly.

Combined with fix_vocab.py (which cleans the JSON files the converter also reads), the full pipeline produces a working GGUF on the first try.
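The truncation logic is easy to sanity-check in isolation. A standalone sketch (`truncate_vocab` is our name, not the converter's; the 262,145/262,144 figures are the ones from this model):

```python
# Standalone version of the converter patch's truncation logic.
def truncate_vocab(tokens, scores, toktypes, vocab_size):
    tokens = tokens[:vocab_size]
    return tokens, scores[:len(tokens)], toktypes[:len(tokens)]

# 262,145 tokenizer entries capped to the 262,144-row embedding matrix:
tokens = [f"tok{i}" for i in range(262145)]
scores = [0.0] * 262145
toktypes = [1] * 262145
t, s, tt = truncate_vocab(tokens, scores, toktypes, 262144)
print(len(t), len(s), len(tt))  # 262144 262144 262144
```

Note that only the tail entry (the phantom `<image_soft_token>`) is dropped; every token with a real embedding row survives unchanged.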
## Full Reproduction Steps

If you want to quantize Covenant-72B yourself (e.g., at a different quant level):

```shell
# 1. Download the model (~145 GB)
huggingface-cli download 1Covenant/Covenant-72B-Chat --local-dir Covenant-72B-Chat

# 2. Fix the tokenizer JSON (removes <image_soft_token> from added_tokens)
python fix_vocab.py Covenant-72B-Chat

# 3. Patch the converter (one-line change at line ~1714)
python fix_converter.py

# 4. Convert to f16 GGUF (~7 minutes, produces a ~145 GB file)
python convert_hf_to_gguf.py Covenant-72B-Chat --outfile covenant-72b-f16.gguf --outtype f16

# 5. Quantize to your preferred level (~6 minutes for q3_K_S)
llama-quantize covenant-72b-f16.gguf covenant-72b-q3_K_S.gguf q3_K_S

# 6. Run
llama-server -m covenant-72b-q3_K_S.gguf -ngl 30 -c 1024 -np 1 --host 0.0.0.0 --port 8080
```
Available quant options (pick one for step 5):
| Quant | Size | Quality | Use case |
|---|---|---|---|
| q2_K | ~26 GB | Lower | Minimum viable, fits in 32 GB RAM |
| q3_K_S | ~30 GB | Good | Best balance for 32 GB RAM + consumer GPU |
| q4_0 | ~39 GB | Better | Needs 48+ GB RAM |
| q4_K_M | ~42 GB | Great | Needs 48+ GB RAM |
| q5_K_M | ~50 GB | Excellent | Needs 64 GB RAM |
## Helper Scripts

fix_vocab.py -- removes the phantom token from the tokenizer JSON files (all three files the converter reads, per Attempt 1 above):

```python
import json, os, sys

model_dir = sys.argv[1] if len(sys.argv) > 1 else "Covenant-72B-Chat"
bad_id = 262144

# Fix tokenizer.json: drop the phantom entry from added_tokens
tp = os.path.join(model_dir, "tokenizer.json")
with open(tp, "r", encoding="utf-8") as f:
    t = json.load(f)
t["added_tokens"] = [tok for tok in t.get("added_tokens", []) if tok.get("id") != bad_id]
with open(tp, "w", encoding="utf-8") as f:
    json.dump(t, f, ensure_ascii=False)

# Fix tokenizer_config.json: drop the matching added_tokens_decoder entry
cp = os.path.join(model_dir, "tokenizer_config.json")
with open(cp, "r", encoding="utf-8") as f:
    c = json.load(f)
if str(bad_id) in c.get("added_tokens_decoder", {}):
    del c["added_tokens_decoder"][str(bad_id)]
with open(cp, "w", encoding="utf-8") as f:
    json.dump(c, f, ensure_ascii=False, indent=2)

# Fix added_tokens.json (maps token strings to IDs), if the file exists
ap = os.path.join(model_dir, "added_tokens.json")
if os.path.exists(ap):
    with open(ap, "r", encoding="utf-8") as f:
        a = json.load(f)
    a = {tok: i for tok, i in a.items() if i != bad_id}
    with open(ap, "w", encoding="utf-8") as f:
        json.dump(a, f, ensure_ascii=False, indent=2)

print("Done. Vocab fixed.")
```
fix_converter.py -- patches the assert in convert_hf_to_gguf.py, preserving the line's original indentation instead of hard-coding it:

```python
with open("convert_hf_to_gguf.py", "r", encoding="utf-8") as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    if "assert len(tokens) == vocab.vocab_size" in line:
        indent = line[:len(line) - len(line.lstrip())]  # keep original indentation
        lines[i] = (
            f'{indent}tokens = tokens[:self.hparams.get("vocab_size", len(tokens))]\n'
            f"{indent}scores = scores[:len(tokens)]\n"
            f"{indent}toktypes = toktypes[:len(tokens)]\n"
        )
        print(f"Patched line {i+1}")
        break
else:
    print("Assert not found -- converter may already be patched.")

with open("convert_hf_to_gguf.py", "w", encoding="utf-8") as f:
    f.writelines(lines)
print("Done. Converter patched.")
```
## Note to the Templar Team

The root cause is that `<image_soft_token>` (ID 262144) exists in the tokenizer but has no corresponding row in the embedding matrix. The config.json correctly says `vocab_size: 262144`, but the sentencepiece model and added_tokens contain 262,145 entries. Removing the phantom token from the HuggingFace upload would fix this for everyone downstream.
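Anyone repackaging a model can catch this class of bug before converting. A hypothetical pre-flight check (the function name and structure are ours; it assumes added-token entries shaped like tokenizer.json's `{"id": ..., "content": ...}`):

```python
# Hypothetical pre-flight check: flag added tokens whose ID falls outside
# the embedding matrix implied by config.json's vocab_size.
def find_phantom_tokens(added_tokens, vocab_size):
    return [t for t in added_tokens if t["id"] >= vocab_size]

# Covenant-72B's situation in miniature (IDs illustrative):
added = [
    {"id": 262143, "content": "<some_in_range_token>"},  # has an embedding row
    {"id": 262144, "content": "<image_soft_token>"},     # phantom: no row
]
print(find_phantom_tokens(added, 262144))
```

If this returns anything, a straight `convert_hf_to_gguf.py` run will produce a GGUF that llama.cpp rejects at load time.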
## License

Apache 2.0 (same as the base model).

## Credits

- Base model: 1Covenant/Covenant-72B-Chat by the Templar/Bittensor team
- Quantization and vocab fix: tekkaadan / LITCOIN