---
license: mit
tags:
  - bitnet
  - lora
  - ternary
  - trillim
  - cpu-inference
base_model: microsoft/bitnet-b1.58-2B-4T-bf16
---

# BitNet-GenZ-LoRA-TRNQ

Ternary-quantized LoRA adapter for [Trillim/BitNet-TRNQ](https://huggingface.co/Trillim/BitNet-TRNQ) that restyles the model's responses in Gen Z slang, packaged for the [Trillim DarkNet](https://huggingface.co/Trillim) inference engine.

This adapter runs entirely on CPU — no GPU required.

## Adapter Details

| | |
|---|---|
| **Type** | LoRA adapter |
| **Style** | Gen Z slang |
| **Architecture** | BitNet (BitNetForCausalLM) |
| **Quantization** | Ternary ({-1, 0, 1}) |
| **Platforms** | x86_64, aarch64 |
| **Base model** | [Trillim/BitNet-TRNQ](https://huggingface.co/Trillim/BitNet-TRNQ) |
| **Source model** | [microsoft/bitnet-b1.58-2B-4T-bf16](https://huggingface.co/microsoft/bitnet-b1.58-2B-4T-bf16) |
| **License** | MIT |
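
To give an intuition for the ternary quantization above, here is a minimal sketch of the absmean scheme described in the BitNet b1.58 paper: each weight is divided by the tensor's mean absolute value, rounded, and clipped to {-1, 0, 1}, with the scale kept for dequantization. This is illustrative only; it is not the actual Trillim packing format, and `qmodel.lora` may store weights differently.

```python
def ternary_quantize(weights):
    """Quantize a list of weights to {-1, 0, 1} plus one scale factor.

    Follows the absmean scheme from the BitNet b1.58 paper:
    divide by the mean absolute value, round, clip to [-1, 1].
    Illustrative only -- not the actual Trillim on-disk format.
    """
    # Per-tensor scale: mean absolute value (guard against all-zero input).
    scale = sum(abs(x) for x in weights) / len(weights) or 1e-8
    # Round to nearest integer, then clip into the ternary set.
    q = [max(-1, min(1, round(x / scale))) for x in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate full-precision weights."""
    return [x * scale for x in q]

weights = [0.4, -0.05, -1.2, 0.9, 0.0, -0.3]
q, s = ternary_quantize(weights)
# q contains only -1, 0, and 1; s is the shared scale.
```

Because every quantized value is one of three states, each weight needs under two bits of storage and matrix multiplies reduce to additions and subtractions, which is what makes CPU-only inference practical for this adapter.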

## Usage

```bash
pip install trillim
trillim pull Trillim/BitNet-TRNQ
trillim pull Trillim/BitNet-GenZ-LoRA-TRNQ
trillim chat Trillim/BitNet-TRNQ --lora Trillim/BitNet-GenZ-LoRA-TRNQ
```

This starts an interactive CLI chat.

## What's in this repo

| File | Description |
|---|---|
| `qmodel.lora` | Ternary-quantized LoRA weights in Trillim format |
| `lora_tokenizer.json` | Tokenizer |
| `lora_tokenizer_config.json` | Tokenizer configuration |
| `lora_chat_template.jinja` | Chat template |
| `trillim_config.json` | Trillim metadata |

## License

This adapter is released under the [MIT License](https://opensource.org/licenses/MIT), following the license of the source model.