Upload folder using huggingface_hub

- .gitattributes +1 -0
- README.md +169 -0
- chat_template.jinja +1 -0
- config.json +41 -0
- generation_config.json +12 -0
- model.safetensors +3 -0
- rpt_metadata.json +44 -0
- tokenizer.json +3 -0
- tokenizer_config.json +14 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,169 @@
---
license: mit
language:
- en
base_model: microsoft/bitnet-b1.58-2B-4T-bf16
tags:
- bitnet
- ternary
- pruning
- quantization
- efficient-inference
- rpt
datasets:
- wikitext
pipeline_tag: text-generation
model-index:
- name: rpt-bitnet-2b-pruned
  results:
  - task:
      type: text-generation
    dataset:
      name: WikiText-2
      type: wikitext
    metrics:
    - name: Perplexity
      type: perplexity
      value: 16.39
---

# RPT BitNet 2B Pruned

**Ternary (1.58-bit) language model with 42.6% sparsity, improved via progressive pruning + QAT/STE.**

Based on [microsoft/bitnet-b1.58-2B-4T-bf16](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16), this model was fine-tuned using:

1. **Progressive magnitude pruning** (5%, then 10%)
2. **Quantization-Aware Training with a Straight-Through Estimator (QAT/STE)**: 300 steps per level
3. **Ternary snap** to {-1, 0, +1}

The result is a model that is **better than the baseline** even after pruning and ternary quantization.

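The QAT/STE idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not the training code used for this model: `ternary_quantize` follows the BitNet b1.58 absmean scheme, and `qat_ste_step` is a hypothetical helper name.

```python
import numpy as np

def ternary_quantize(w: np.ndarray) -> np.ndarray:
    """Snap weights to {-1, 0, +1} times an absmean scale (BitNet b1.58 style)."""
    scale = np.mean(np.abs(w)) + 1e-8
    return np.clip(np.round(w / scale), -1, 1) * scale

def qat_ste_step(w_latent: np.ndarray, grad: np.ndarray, lr: float = 5e-4) -> np.ndarray:
    """One QAT/STE update.

    Forward passes use ternary_quantize(w_latent); the straight-through
    estimator treats the quantizer's gradient as identity, so the gradient
    computed w.r.t. the quantized weights updates the latent weights directly.
    """
    return w_latent - lr * grad

w = np.array([0.40, -1.20, 0.05, 0.90])
q = ternary_quantize(w)               # entries are multiples of the absmean scale
w = qat_ste_step(w, grad=np.sign(q))  # gradient flows "straight through" the quantizer
```

After training, a final `ternary_quantize` call is the "ternary snap" of step 3.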
## Results

| Metric | Baseline | This Model | Change |
|--------|----------|------------|--------|
| PPL (WikiText-2) | 25.13 | **16.39** | **-34.8%** |
| Ternary weights | 100% | **100%** | - |
| Sparsity (zeros) | ~33% (natural) | **42.6%** | +9.6 pp |
| GGUF size (I2_S) | ~1.3 GB | **~1.1 GB** | -15% |
| CPU inference | Coherent | **Coherent** | - |

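The perplexity figures in the table are the exponential of the mean per-token negative log-likelihood. A minimal stdlib sketch (the `nlls` list here is illustrative data, not measurements from this model):

```python
import math

def perplexity(nlls):
    """Perplexity = exp of the mean per-token negative log-likelihood."""
    return math.exp(sum(nlls) / len(nlls))

# A model assigning every token probability 1/16.39 would score PPL 16.39.
nlls = [math.log(16.39)] * 1000
print(round(perplexity(nlls), 2))  # -> 16.39
```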
## Key Finding

Removing 10% of weights by magnitude from a ternary model **improves** perplexity by 34.8% after QAT/STE fine-tuning. This is counter-intuitive: pruning typically degrades models. In the ternary regime, low-magnitude weights appear to act as noise that harms performance.

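The magnitude-pruning step behind this finding can be sketched as follows. This is an illustrative NumPy version, not the exact code used for this model; `magnitude_prune` is a hypothetical helper, and real training would re-apply the returned mask after every optimizer step.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, frac: float):
    """Zero the fraction `frac` of weights with the smallest magnitude.

    Returns the pruned tensor and the boolean keep-mask.
    """
    k = int(frac * w.size)
    if k == 0:
        return w.copy(), np.ones(w.shape, dtype=bool)
    # k-th smallest magnitude becomes the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    keep = np.abs(w) > thresh  # ties at the threshold are pruned too
    return w * keep, keep

w = np.array([0.9, -0.1, 0.5, -0.05, 0.7, 0.2, -0.8, 0.3, 0.6, -0.4])
pruned, keep = magnitude_prune(w, 0.10)
print(float((pruned == 0).mean()))  # -> 0.1
```

Progressive pruning simply calls this at increasing `frac` values (here 0.05, then 0.10), fine-tuning between levels.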
## Sample Outputs (GGUF I2_S, bitnet.cpp CPU)

```
Prompt: "The capital of France is"
Output: "Paris. There are also some cities that can be considered as their main cities,
such as the city that has been capital of France since the 17th century."

Prompt: "Water boils at"
Output: "100 degrees Celsius (212 degrees Fahrenheit) at standard atmospheric pressure."

Prompt: "The largest planet in the solar system is"
Output: "Jupiter. It is a gas giant planet that is about 318 Earths in size."
```

## Usage

### With PyTorch (Hugging Face Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "CesarFavero/rpt-bitnet-2b-pruned",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("CesarFavero/rpt-bitnet-2b-pruned")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### With bitnet.cpp (CPU inference)

For CPU inference, use the GGUF I2_S file with [bitnet.cpp](https://github.com/microsoft/BitNet):

```bash
# Clone the BitNet (bitnet.cpp) repository
git clone --recursive https://github.com/microsoft/BitNet.git
cd BitNet

# Set up the environment and build (requires clang, not gcc)
python setup_env.py --hf-repo CesarFavero/rpt-bitnet-2b-pruned-GGUF -q i2_s

# Run inference
python run_inference.py -m models/rpt-bitnet-2b-pruned/ggml-model-i2_s.gguf \
    -p "The capital of France is" -n 50
```

**Important**: The I2_S format requires the BitNet fork of llama.cpp. Standard llama.cpp and llama-cpp-python do NOT support this format.

## Training Details

| Parameter | Value |
|-----------|-------|
| Base model | microsoft/bitnet-b1.58-2B-4T-bf16 |
| Parameters | 2.4B (100% ternary) |
| Optimizer | AdamW (lr=5e-4, wd=0.01) |
| Technique | QAT with STE |
| Pruning | Progressive magnitude (5% -> 10%) |
| Steps/level | 300 |
| Batch size | 8 |
| Seq length | 128 |
| Hardware | NVIDIA A100 (~40 GB) |
| Training time | ~7 GPU minutes |
| Dataset | WikiText-2 |

### Pipeline

```
microsoft/bitnet-b1.58-2B-4T-bf16
  -> Progressive pruning (5%, then 10%, by magnitude)
  -> QAT/STE fine-tune (300 steps per level)
  -> Ternary snap to {-1, 0, +1}
  -> Save in Hugging Face format (this model)
  -> Convert to GGUF I2_S (see the GGUF variant)
  -> Inference via bitnet.cpp (CPU)
```

## Limitations

- **Evaluation limited to WikiText-2**: the PPL improvement needs validation on broader benchmarks (MMLU, HellaSwag, ARC)
- **Short context tested**: only tested with sequences up to 128 tokens during training
- **I2_S format support**: the GGUF variant requires the BitNet fork of llama.cpp (not standard llama.cpp)
- **Language**: primarily tested on English text
- **PPL improvement caveat**: the dramatic PPL improvement after the ternary snap (33.07 -> 16.39) may reflect implicit regularization rather than a genuine capability improvement; broader benchmarks are needed to confirm

## Part of RPT (Redes Preditivas Termodinamicas)

This model was produced as part of the RPT project, which validates physics-inspired principles for neural network efficiency:

- **Landauer's principle**: sparsity (removing information) improves model quality
- **Self-Organized Criticality**: the model naturally operates at the edge of chaos (Lyapunov exponent ~ 0)
- **Predictive Coding**: correction ratios decrease with depth (39.87 -> 0.21)

Full documentation: [RPT Project](https://github.com/CesarFavero/rpt-bitnet-2b-pruned)

## Citation

```bibtex
@misc{rpt2026,
  title={Sparsity Improves Ternary Language Models: Evidence from BitNet b1.58},
  author={Cesar and Claude},
  year={2026},
  note={RPT - Redes Preditivas Termodinamicas}
}
```

## License

MIT (same as the base model)
chat_template.jinja
ADDED
@@ -0,0 +1 @@
{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = message['role'] | capitalize + ': '+ message['content'] | trim + '<|eot_id|>' %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ 'Assistant: ' }}{% endif %}
config.json
ADDED
@@ -0,0 +1,41 @@
{
  "architectures": [
    "BitNetForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "configuration_bitnet.BitNetConfig",
    "AutoModelForCausalLM": "modeling_bitnet.BitNetForCausalLM"
  },
  "bos_token_id": 128000,
  "dtype": "bfloat16",
  "eos_token_id": 128001,
  "hidden_act": "relu2",
  "hidden_size": 2560,
  "initializer_range": 0.02,
  "intermediate_size": 6912,
  "max_position_embeddings": 4096,
  "model_type": "bitnet",
  "num_attention_heads": 20,
  "num_hidden_layers": 30,
  "num_key_value_heads": 5,
  "pad_token_id": null,
  "quantization_config": {
    "linear_class": "autobitlinear",
    "modules_to_not_convert": null,
    "quant_method": "bitnet",
    "quantization_mode": "online",
    "rms_norm_eps": 1e-06,
    "use_rms_norm": false
  },
  "rms_norm_eps": 1e-05,
  "rope_parameters": {
    "rope_theta": 500000.0,
    "rope_type": "default"
  },
  "tie_word_embeddings": true,
  "transformers_version": "5.2.0.dev0",
  "use_cache": true,
  "vocab_size": 128256
}
generation_config.json
ADDED
@@ -0,0 +1,12 @@
{
  "bos_token_id": 128000,
  "do_sample": true,
  "eos_token_id": [
    128001,
    128009
  ],
  "max_length": 4096,
  "temperature": 0.6,
  "top_p": 0.9,
  "transformers_version": "5.2.0.dev0"
}
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:48ef687a1f2c7fc98f53241fc1d9952b2565bd62c10b1f04e4c298a019f19f9f
size 4825679400
rpt_metadata.json
ADDED
@@ -0,0 +1,44 @@
{
  "base_model": "microsoft/bitnet-b1.58-2B-4T-bf16",
  "experiment": "rpt_deploy_pipeline",
  "target_sparsity": 10,
  "actual_sparsity": 15.228370618520293,
  "progressive": true,
  "mask_during_training": true,
  "ft_steps_per_level": 300,
  "ft_lr": 0.0005,
  "batch_size": 32,
  "seq_len": 128,
  "baseline_ppl": 25.089412689208984,
  "final_ppl": 15.43481159210205,
  "ppl_improvement": "-38.5%",
  "results": [
    {
      "sparsity": 0,
      "actual_sparsity": 0.0,
      "ppl_before_ft": 25.089412689208984,
      "ppl_after_ft": 25.089412689208984,
      "ft_loss": 0,
      "sample": "The capital of France is Paris. Paris is a city in the north of France, and it is the most important city in the country. It is also the most popular tourist",
      "time_sec": 0
    },
    {
      "sparsity": 5,
      "actual_sparsity": 6.366207722597902,
      "ppl_before_ft": 24.959842681884766,
      "ppl_after_ft": 15.323016166687012,
      "ft_loss": 2.9444207525253296,
      "sample": "The capital of France is Paris. The city is located on the Seine River in the north of France. It is the largest city in France and the second largest in Europe",
      "time_sec": 147.99678254127502
    },
    {
      "sparsity": 10,
      "actual_sparsity": 15.228370618520293,
      "ppl_before_ft": 20.304969787597656,
      "ppl_after_ft": 15.43481159210205,
      "ft_loss": 2.934646291732788,
      "sample": "The capital of France is Paris. It is the most populous city in France and the most visited city in the world. It is the site of the world's largest public",
      "time_sec": 144.7917971611023
    }
  ]
}
tokenizer.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3c5cf44023714fb39b05e71e425f8d7b92805ff73f7988b083b8c87f0bf87393
size 17209961
tokenizer_config.json
ADDED
@@ -0,0 +1,14 @@
{
  "backend": "tokenizers",
  "bos_token": "<|begin_of_text|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|eot_id|>",
  "is_local": false,
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<|eot_id|>",
  "tokenizer_class": "TokenizersBackend"
}