# Phi-4-BitNet-1.58b

**Architecture:** 14.7 Billion Parameters | BitNet 1.58-bit Ternary Quantization
## IMPORTANT: Parameter Count Display

HuggingFace displays a reduced parameter count because it counts packed bytes, not actual parameters. This model has the full 14.7B-parameter Phi-4 architecture. The weights are stored as ternary values ({-1, 0, +1}) packed 4 per byte, which reduces storage to 4.6 GB but preserves all 14.7 billion parameters.
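The storage figure follows from simple arithmetic. A rough sketch (assuming one FP16 scale per group of 64 weights; exact shard accounting will differ slightly):

```python
# Back-of-envelope storage estimate for 14.7B ternary parameters.
# Assumption: one FP16 (2-byte) scale per group of 64 weights.
n_params = 14.7e9
packed_bytes = n_params / 4            # 4 ternary values packed per byte
scale_bytes = (n_params / 64) * 2      # per-group FP16 scales
total_gb = (packed_bytes + scale_bytes) / 1e9
print(f"{total_gb:.2f} GB")            # ~4.13 GB; embeddings etc. account for the rest
```

This lands close to the 4.58 GB SafeTensors size; the remainder comes from tensors kept at higher precision.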
## Overview
This is an experimental BitNet 1.58-bit quantization of Microsoft's Phi-4 model using absmean scaling with group-wise quantization. The model stores weights as ternary values ({-1, 0, +1}) packed 4 values per byte.
This is research/experimental work. Quality and performance have not been formally benchmarked.
## Specifications
| Property | Value |
|---|---|
| Base Model | microsoft/phi-4 |
| Architecture | Phi-3 (Phi3ForCausalLM) |
| Parameters | 14.7B |
| Quantization | BitNet 1.58-bit ternary |
| Bits per Weight | ~1.58 |
| Group Size | 64 |
| Original Size | 29.32 GB (BF16) |
| Quantized Size | 4.58 GB (SafeTensors) |
| GGUF Size | 5.57 GB (TQ2_0) |
| Compression | ~6.4x |
## Formats
| Format | File | Description |
|---|---|---|
| SafeTensors | `model-*.safetensors` | Sharded quantized weights + scales |
| GGUF | `phi4-tq2.gguf` | llama.cpp compatible |
## Quantization Method

### Algorithm
1. Reshape weights into groups of 64.
2. Compute the per-group scale: `scale = mean(|weights|)`.
3. Normalize and round to the nearest ternary value: `q = round(w / scale)`, clamped to {-1, 0, +1}.
4. Map to unsigned: {-1, 0, +1} → {0, 1, 2}.
5. Pack 4 values per byte: `v0 + v1*3 + v2*9 + v3*27`.
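The steps above can be sketched in plain Python. This is illustrative only (the function name is hypothetical, not from the release tooling), and it assumes the weight count divides evenly into groups and that no group is all zeros:

```python
def quantize_pack(weights, group_size=64):
    """Absmean ternary quantization sketch: returns packed bytes and scales.

    Assumes len(weights) is a multiple of group_size (itself a multiple of 4)
    and that no group is entirely zero (which would give a zero scale).
    """
    packed, scales = [], []
    for g in range(0, len(weights), group_size):
        group = weights[g:g + group_size]
        scale = sum(abs(w) for w in group) / len(group)  # absmean scale
        scales.append(scale)
        # Round to nearest integer, clamp to {-1, 0, +1}, map to {0, 1, 2}
        u = [min(1, max(-1, round(w / scale))) + 1 for w in group]
        # Pack 4 ternary digits per byte (base-3, least-significant first)
        for i in range(0, len(u), 4):
            v0, v1, v2, v3 = u[i:i + 4]
            packed.append(v0 + v1 * 3 + v2 * 9 + v3 * 27)
    return packed, scales
```

For example, a 64-weight group of repeated `[1.0, -1.0, 0.0, 1.0]` has absmean scale 0.75 and packs every 4 values into the byte 65 (digits 2, 0, 1, 2 in base 3).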
### Hardware Used
- GPU: NVIDIA RTX 5080 (16GB VRAM)
- Quantization time: ~100 seconds
- Memory: Streaming mode with CPU fallback for large tensors
## Usage

### With Ollama/llama.cpp

```bash
# llama.cpp
./llama-cli -m phi4-tq2.gguf -p "Your prompt here"
```
### Unpacking Weights (Python)

```python
def unpack_ternary(packed_byte):
    """Unpack 4 ternary values from a byte."""
    values = []
    val = packed_byte
    for _ in range(4):
        values.append((val % 3) - 1)  # {0,1,2} → {-1,0,+1}
        val //= 3
    return values
```
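Combining unpacking with the per-group scales gives dequantized weights. A self-contained sketch (the function name and buffer layout are illustrative; the real shard format may differ):

```python
def dequantize_group(packed_bytes, scale):
    """Unpack each byte into 4 ternary digits and rescale by the group scale.

    Assumes base-3 packing with the least-significant digit first,
    matching the packing scheme described above.
    """
    out = []
    for b in packed_bytes:
        for _ in range(4):
            out.append(((b % 3) - 1) * scale)  # {0,1,2} → {-1,0,+1}, then rescale
            b //= 3
    return out
```

For instance, the byte 65 (ternary digits 2, 0, 1, 2) with scale 0.75 dequantizes to `[0.75, -0.75, 0.0, 0.75]`.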
## Limitations

- **Quality not benchmarked**: output quality may degrade significantly versus the original model.
- **Requires custom runtime**: the standard `transformers` library does not support ternary weights.
- **Experimental**: not intended for production use without evaluation.
- The GGUF keeps embeddings and `lm_head` at F16, which is why it is larger than the SafeTensors version.
## License

MIT License (inherited from microsoft/phi-4)
## Citation

```bibtex
@misc{phi4-bitnet-2025,
  title={Phi-4-BitNet-1.58b: Experimental BitNet Quantization of Phi-4},
  author={Tzervas},
  year={2025},
  url={https://huggingface.co/tzervas/phi-4-bitnet-1.58b}
}
```