# BitNet iOS Models

Pre-converted GGUF models for use with BitNet-iOS: native 1-bit LLM inference on Apple Silicon using ARM64 NEON TL1 kernels.

These GGUFs were quantized to the BitNet.cpp `i2_s` format with a locally built `llama-quantize` from the microsoft/BitNet repo. GGUFs from other sources may produce incorrect output due to differences in `i2_s` packing between `llama-quantize` versions.

## Models

| File | Original Model | Type | Size | License |
|------|----------------|------|------|---------|
| `Falcon3-1B-Instruct-i2s.gguf` | tiiuae/Falcon3-1B-Instruct-1.58bit | Instruct (chat) | 1.36 GB | TII Falcon License 2.0 |
| `bitnet-b1.58-large-i2s.gguf` | microsoft/bitnet_b1_58-large | Base (completion) | 270 MB | MIT |

## Usage

These models are designed for the BitNet-iOS demo app, which downloads them automatically from this repo. They can also be used with the BitNet-iOS CLI:

```sh
# Instruct model (chat)
.build/debug/BitNetCLI /path/to/Falcon3-1B-Instruct-i2s.gguf --chat

# Base model (completion)
.build/debug/BitNetCLI /path/to/bitnet-b1.58-large-i2s.gguf "Once upon a time"
```
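If you want to fetch a model manually rather than through the app, Hugging Face serves repo files through its standard `resolve` endpoint. A minimal sketch, assuming a placeholder repo id (`your-username/bitnet-ios-models` is hypothetical; substitute the actual repo hosting these files):

```sh
# Build the direct download URL for a model file on the Hugging Face Hub.
REPO="your-username/bitnet-ios-models"    # hypothetical placeholder repo id
FILE="bitnet-b1.58-large-i2s.gguf"
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
echo "$URL"
# curl -L -o "$FILE" "$URL"   # uncomment to actually download
```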

## Why self-hosted GGUFs?

The BitNet TL1 kernels are sensitive to the exact `i2_s` quantization format. GGUFs from the original model repos (e.g., tiiuae's Falcon3 GGUF) were quantized with a different version of `llama-quantize` and differ by ~224 bytes in header metadata. This causes the ARM64 NEON kernels to silently produce garbage output. These GGUFs were converted with the same toolchain used to build the BitNet-iOS XCFramework, ensuring compatibility.
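Format mismatches like this fail silently, so a quick sanity check before loading is cheap insurance. A minimal sketch: every valid GGUF file begins with the 4-byte ASCII magic `GGUF`, which can be checked from the shell (the `/tmp/demo.gguf` stand-in below is a fabricated header for illustration only, not a real model):

```sh
# Check that a file at least carries the GGUF magic bytes before loading it.
check_gguf() {
  magic=$(head -c 4 "$1")
  if [ "$magic" = "GGUF" ]; then
    echo "ok: $1 looks like a GGUF file"
  else
    echo "error: $1 is not a GGUF file" >&2
    return 1
  fi
}

# Demo against a stand-in file containing only the magic bytes:
printf 'GGUF' > /tmp/demo.gguf
check_gguf /tmp/demo.gguf
```

Note this only catches wholesale corruption or wrong file types; it cannot detect the subtler header-metadata differences between `llama-quantize` versions described above.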

## Attribution
