Bonsai-4B — Unpacked FP16 Safetensors

FP16 safetensors (HuggingFace format) of the 1-bit Bonsai-4B model. This repo exists for users who want to run Bonsai with stock HuggingFace tooling or frameworks that don't yet support 1-bit weights natively. The 1-bit kernels are currently in our forks of MLX and llama.cpp β€” once they land upstream, this unpacked version will no longer be needed.
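A minimal loading sketch with stock `transformers`. The repo id `prism-ml/Bonsai-4B-unpacked` is taken from this page; the prompt and generation settings are placeholders, and the heavy imports are kept inside the function so the snippet is importable without the libraries installed:

```python
model_id = "prism-ml/Bonsai-4B-unpacked"  # this repo


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Run the unpacked FP16 checkpoint with plain HuggingFace tooling."""
    # Imported lazily so the module loads even without transformers/torch.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # FP16 weights: ~2 bytes per parameter, so expect a full-size download.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Move the model to a GPU with `model.to("cuda")` before generating if you have the memory for the full FP16 weights.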

We strongly recommend using the native 1-bit models instead. The 1-bit format is where all the benefits of Bonsai come from β€” up to 14x memory reduction, 4x faster inference, and lower energy per token. This unpacked FP16 version is full-size and does not provide any of those advantages.
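A rough back-of-envelope comparison (using 1 GB = 1e9 bytes) shows why the unpacked version forfeits the memory savings. The ideal weights-only ratio works out to 16x; the quoted "up to 14x" presumably reflects overhead such as scales or layers kept in higher precision, which is an assumption on our part:

```python
params = 4e9                  # 4B parameters
fp16_gb = params * 2 / 1e9    # FP16: 2 bytes per parameter
onebit_gb = params / 8 / 1e9  # 1-bit: 1 bit per weight, ignoring overhead

print(f"FP16: {fp16_gb:.1f} GB, 1-bit (ideal): {onebit_gb:.1f} GB")
# FP16: 8.0 GB, 1-bit (ideal): 0.5 GB
```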

For the optimized 1-bit release models (recommended):

