Commit 26fdb70 (verified) by dgavriloff, parent 198e91a: Upload README.md with huggingface_hub
---
license: mit
tags:
- bitnet
- 1-bit
- gguf
- ios
- apple-silicon
- arm64-neon
---

# BitNet iOS Models

Pre-converted GGUF models for use with [BitNet-iOS](https://github.com/dgavriloff/BitNet-iOS) — native 1-bit LLM inference on Apple Silicon using ARM64 NEON TL1 kernels.

These GGUFs were quantized to the BitNet.cpp `i2_s` format with a locally built `llama-quantize` from the [microsoft/BitNet](https://github.com/microsoft/BitNet) repo. **GGUFs from other sources may produce incorrect output** due to differences in `i2_s` packing between `llama-quantize` versions.
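The conversion flow roughly follows the microsoft/BitNet setup instructions. A sketch, assuming that repo's `setup_env.py` entry point and flag names (verify against `python setup_env.py --help` in your checkout):

```shell
# Build the same toolchain used for the BitNet-iOS XCFramework
git clone --recursive https://github.com/microsoft/BitNet.git
cd BitNet
pip install -r requirements.txt

# Download the source model and quantize it to i2_s; setup_env.py
# drives the locally built llama-quantize for the conversion step
python setup_env.py --hf-repo tiiuae/Falcon3-1B-Instruct-1.58bit -q i2_s
```

The resulting `.gguf` is what should be copied to the device or passed to the CLI below.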
17
+
18
+ ## Models
19
+
20
+ | File | Original Model | Type | Size | License |
21
+ |------|---------------|------|------|---------|
22
+ | `Falcon3-1B-Instruct-i2s.gguf` | [tiiuae/Falcon3-1B-Instruct-1.58bit](https://huggingface.co/tiiuae/Falcon3-1B-Instruct-1.58bit) | Instruct (chat) | 1.36 GB | [TII Falcon License 2.0](https://falconllm.tii.ae/falcon-terms-and-conditions.html) |
23
+ | `bitnet-b1.58-large-i2s.gguf` | [microsoft/bitnet_b1_58-large](https://huggingface.co/1bitLLM/bitnet_b1_58-large) | Base (completion) | 270 MB | MIT |
24
+

## Usage

These models are designed for the BitNet-iOS demo app, which downloads them automatically from this repo. They can also be used with the BitNet-iOS CLI:

```bash
# Instruct model (chat)
.build/debug/BitNetCLI /path/to/Falcon3-1B-Instruct-i2s.gguf --chat

# Base model (completion)
.build/debug/BitNetCLI /path/to/bitnet-b1.58-large-i2s.gguf "Once upon a time"
```

## Why self-hosted GGUFs?

The BitNet TL1 kernels are sensitive to the exact `i2_s` quantization format. GGUFs from the original model repos (e.g., tiiuae's Falcon3 GGUF) were quantized with a different version of `llama-quantize` and differ by ~224 bytes in header metadata. This causes the ARM64 NEON kernels to silently produce garbage output. These GGUFs were converted with the same toolchain used to build the BitNet-iOS XCFramework, ensuring compatibility.
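A quick structural sanity check on a downloaded file can be done by reading the fixed GGUF header. A minimal Python sketch, with the field layout taken from the GGUF (v3) spec — note this only confirms the file is a valid GGUF container; it cannot detect the `i2_s` packing differences that break the TL1 kernels:

```python
import struct

def gguf_header(path):
    """Parse the fixed-size GGUF header: magic, version,
    tensor count, and metadata key/value count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        version, = struct.unpack("<I", f.read(4))    # uint32, little-endian
        n_tensors, = struct.unpack("<Q", f.read(8))  # uint64
        n_kv, = struct.unpack("<Q", f.read(8))       # uint64
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}
```

Two files that both pass this check can still differ in the metadata key/value section that follows the header, which is exactly where the ~224-byte delta between `llama-quantize` versions lives.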

## Attribution

- **Falcon3-1B-Instruct** by [Technology Innovation Institute (TII)](https://www.tii.ae/) — [TII Falcon License 2.0](https://falconllm.tii.ae/falcon-terms-and-conditions.html)
- **BitNet b1.58 Large** by [Microsoft Research](https://github.com/microsoft/BitNet) — MIT License
- Quantization via [microsoft/BitNet](https://github.com/microsoft/BitNet) (MIT License)