---
license: mit
tags:
- bitnet
- lora
- ternary
- trillim
- cpu-inference
base_model: microsoft/bitnet-b1.58-2B-4T-bf16
---

# BitNet-GenZ-LoRA-TRNQ

Ternary-quantized LoRA adapter for [Trillim/BitNet-TRNQ](https://huggingface.co/Trillim/BitNet-TRNQ) that restyles the model's responses in GenZ slang, packaged for the [Trillim DarkNet](https://huggingface.co/Trillim) inference engine.

This adapter runs entirely on CPU; no GPU is required.

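As a refresher on what a LoRA adapter does, here is a minimal sketch of how a low-rank update modifies a frozen base weight. The matrix names `A`/`B` and the `alpha` scaling are the standard LoRA convention, not the actual layout of this repo's `qmodel.lora` file:

```python
import numpy as np

# Hypothetical LoRA sketch: the effective weight is W + (alpha / r) * B @ A,
# where A (r x d_in) and B (d_out x r) are the low-rank adapter matrices.
# The base weight W stays frozen; only A and B are trained and shipped.
def lora_forward(x, W, A, B, alpha=16):
    r = A.shape[0]  # LoRA rank
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
W = rng.normal(size=(8, 8))   # frozen base weight
A = rng.normal(size=(4, 8))   # rank-4 down-projection
B = np.zeros((8, 4))          # B initialized to zero -> adapter starts as a no-op
y = lora_forward(x, W, A, B)
```

With `B` at its zero initialization, the adapter contributes nothing and the output equals the base model's; training moves `A` and `B` away from that point to produce the style shift.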
## Adapter Details

| | |
|---|---|
| **Type** | LoRA adapter |
| **Style** | GenZ slang |
| **Architecture** | BitNet (BitNetForCausalLM) |
| **Quantization** | Ternary ({-1, 0, 1}) |
| **Platforms** | x86_64, aarch64 |
| **Base model** | [Trillim/BitNet-TRNQ](https://huggingface.co/Trillim/BitNet-TRNQ) |
| **Source model** | [microsoft/bitnet-b1.58-2B-4T-bf16](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16) |
| **License** | MIT |

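To illustrate what "Ternary ({-1, 0, 1})" means, here is a minimal sketch of BitNet b1.58-style absmean quantization, where each weight is scaled by the tensor's mean absolute value and rounded to a ternary code. This is an illustration of the published BitNet b1.58 scheme, not the actual quantizer used to produce `qmodel.lora`:

```python
import numpy as np

# Hypothetical absmean ternary quantization sketch (BitNet b1.58 style):
# scale by mean(|W|), round, and clip each weight to {-1, 0, 1}.
def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    scale = np.abs(w).mean() + eps           # per-tensor absmean scale
    q = np.clip(np.round(w / scale), -1, 1)  # ternary codes in {-1, 0, 1}
    return q.astype(np.int8), scale          # dequantize as q * scale

w = np.array([[0.4, -0.05, -0.9],
              [0.02, 0.7, -0.3]])
q, s = ternary_quantize(w)
# q is [[1, 0, -1], [0, 1, -1]]; s is ~0.395
```

Storing only a ternary code plus one scale per tensor is what makes the adapter small and fast to apply with integer arithmetic on CPU.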
## Usage

```bash
pip install trillim
trillim pull Trillim/BitNet-TRNQ
trillim pull Trillim/BitNet-GenZ-LoRA-TRNQ
trillim chat Trillim/BitNet-TRNQ --lora Trillim/BitNet-GenZ-LoRA-TRNQ
```

This starts an interactive CLI chat with the adapter applied.

## What's in this repo

| File | Description |
|---|---|
| `qmodel.lora` | Ternary-quantized LoRA weights in Trillim format |
| `lora_tokenizer.json` | Tokenizer |
| `lora_tokenizer_config.json` | Tokenizer configuration |
| `lora_chat_template.jinja` | Chat template |
| `trillim_config.json` | Trillim metadata |

## License

This adapter is released under the [MIT License](https://opensource.org/licenses/MIT), following the license of the source model.