BitMamba-2-255M


Mirror repository of Zhayr1/BitMamba-2-0.25B, maintained by Aquantic Research for the GPU-to-CPU/ARM neural network transposition programme.

BitMamba-2-255M is the ultra-efficient baseline model of the BitMamba-2 family. It integrates 1.58-bit ternary quantization (BitNet b1.58) into the Mamba-2 architecture. Despite its small size, it converges stably and posts solid reasoning-benchmark scores, serving as the proof of concept for scaling ternary State Space Models.
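The ternary scheme can be sketched in a few lines. This is an illustration of the absmean quantization described in the BitNet b1.58 paper, not the actual training code used for this model:

```python
import numpy as np

def absmean_ternary(w: np.ndarray, eps: float = 1e-6):
    """Quantize a weight tensor to {-1, 0, 1} plus a per-tensor scale.

    Sketch of BitNet b1.58's absmean scheme: scale by the mean
    absolute value, then round and clip to the ternary set.
    """
    gamma = float(np.abs(w).mean()) + eps              # per-tensor scale
    w_q = np.clip(np.round(w / gamma), -1, 1).astype(np.int8)
    return w_q, gamma                                  # dequantize as gamma * w_q

w = np.random.default_rng(0).standard_normal((4, 4))
w_q, gamma = absmean_ternary(w)
```

Every weight collapses to one of three values, so each can be stored in well under two bits once packed, which is where the "1.58-bit" figure (log2 3 ≈ 1.58) comes from.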


ARM NEON Port: Cross-Platform CPU Inference

An ARM NEON port of the BitMamba-2 inference engine has been developed by Aquantic Research, enabling native inference on Apple Silicon (M1/M2/M3/M4) and ARM-based processors.

| Model | Hardware | Speed | Latency/token | RAM |
|---|---|---|---|---|
| BitMamba-2 255M | Apple M1 (ARM NEON) | 82.5 tok/s | 12.1 ms | 252 MB |
| BitMamba-2 255M | Intel Core i3-12100F (AVX2) | ~146 tok/s | n/a | 252 MB |

Key finding: measured throughput stays constant as sequence length grows (50, 200, or more tokens). This experimentally supports the O(1)-memory property of SSM architectures: the recurrent state has a fixed size, whereas a Transformer's KV cache grows linearly with sequence length.
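The constant-memory behaviour follows from the SSM recurrence itself: each token updates a fixed-size state, so memory does not depend on how many tokens have been processed. A toy diagonal-SSM sketch (illustrative names and shapes, not the BitMamba-2 code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_model = 16, 8   # fixed widths, independent of sequence length

# Toy diagonal SSM: h_t = a * h_{t-1} + B @ x_t,  y_t = C @ h_t
a = rng.uniform(0.5, 0.99, d_state)          # diagonal state decay
B = rng.standard_normal((d_state, d_model))
C = rng.standard_normal((d_model, d_state))

def state_bytes_after(seq_len: int) -> int:
    """Run the recurrence over seq_len tokens; return the state's byte size."""
    h = np.zeros(d_state)
    for _ in range(seq_len):
        x = rng.standard_normal(d_model)
        h = a * h + B @ x                    # O(1) work and memory per token
        _y = C @ h                           # per-token output
    return h.nbytes
```

The state buffer is the same size after 50 tokens as after 200, which is exactly what the benchmark above observes at the full-model scale.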

ARM NEON Port Resources

  • Code: rasata/bitmamba.cpp, an ARM NEON fork with cross-platform dispatch (x86 AVX2 + ARM NEON)
  • Preprint: "State Space Models as CPU-Native Neural Network Architectures: Experimental Evidence from ARM NEON Inference with 1.58-bit Quantized Mamba", Gabriel Zo-Hasina Rasatavohary, Aquantic Research, March 2026. To be published on engrXiv (DOI pending).
  • Research programme: GPU-to-CPU/ARM Neural Network Transposition

Quick Start (ARM)

```bash
# Clone the ARM NEON fork
git clone https://github.com/rasata/bitmamba.cpp
cd bitmamba.cpp

# Build (macOS Apple Silicon)
brew install libomp
cmake -B build && cmake --build build

# Download weights from this repo
wget https://huggingface.co/rasatavohary/BitMamba-2-0.25B/resolve/main/bitmamba_cpp/bitmamba_255m.bin

# Run inference
cd build && cp ../tokenizer.bin .
./bitmamba ../bitmamba_255m.bin "The future of AI is" tokenizer 0.7 1.1 0.05 0.9 40 200
```

⚡ Key Features

  • Architecture: Mamba-2 SSM + BitNet b1.58 (ternary weights).
  • Parameters: 255M.
  • Precision: 1.58-bit (weights in {-1, 0, 1}).
  • Training data: FineWeb-Edu, Cosmopedia, Stack-Dedup.
  • Hardware: trained on Google Cloud TPU v6e.
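The ternary format is also what makes the CPU SIMD kernels cheap: a matrix–vector product reduces to masked additions and subtractions, with no weight multiplies. A minimal NumPy sketch of the idea (illustrative only, not the engine's AVX2/NEON kernel):

```python
import numpy as np

rng = np.random.default_rng(1)
w_q = rng.integers(-1, 2, size=(8, 16)).astype(np.int8)  # ternary weights
gamma = 0.07                                             # per-tensor scale
x = rng.standard_normal(16)

# (gamma * w_q) @ x without any weight multiplications:
# +1 entries add x[j], -1 entries subtract x[j], zeros are skipped.
plus = (w_q == 1).astype(np.float64) @ x
minus = (w_q == -1).astype(np.float64) @ x
y_fast = gamma * (plus - minus)

y_ref = (gamma * w_q.astype(np.float64)) @ x             # reference matvec
```

Vectorized add/subtract over packed ternary rows is the pattern that AVX2 and NEON both accelerate well, which is why the engine dispatches per-architecture kernels over the same weight format.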

📊 Benchmark Results

This model serves as the baseline for our scaling laws analysis.

| Benchmark | Metric | BitMamba-2-255M |
|---|---|---|
| ARC-Easy | Accuracy | 55.51% |
| PIQA | Accuracy | 64.42% |
| BoolQ | Accuracy | 59.30% |
| HellaSwag | Acc. (norm) | 35.22% |
| WikiText-2 | Perplexity | 51.69 |

As shown in the scaling analysis below, the 255M model (blue line) establishes a stable learning trajectory, which is significantly improved upon by the 1B model (red line).

(Figure: scaling-laws curves for the 255M and 1B models.)

🚀 Usage (Inference)

This model is optimized for extreme edge deployment (IoT, Mobile, Legacy Hardware) using our custom C++ inference engine.

1. Download the Quantized Model

Download bitmamba_255m.bin from the Files tab of this repository.

2. Run with C++ (x86)

Go to the original GitHub Repository for x86 AVX2 inference, or rasata/bitmamba.cpp for cross-platform (x86 + ARM NEON) inference.

```bash
# Example usage after compiling bitmamba.cpp
./bitmamba bitmamba_255m.bin "Hello, I am" tokenizer 0.7 1.1 0.05 0.9 40 200
```

3. JAX/Flax Usage

The bitmamba_255m.msgpack file contains the raw JAX weights for research use. You can load them with the source code provided in src/ on GitHub.
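A minimal loading sketch, assuming Flax is installed and the checkpoint was written with flax.serialization; `load_bitmamba_params` is a hypothetical helper name, and the restored pytree layout must match what the reference code in src/ expects:

```python
def load_bitmamba_params(path: str):
    """Restore the raw JAX weight pytree from a Flax msgpack checkpoint."""
    from flax import serialization  # deferred; requires `pip install flax`
    with open(path, "rb") as f:
        return serialization.msgpack_restore(f.read())

# params = load_bitmamba_params("bitmamba_255m.msgpack")
```

`msgpack_restore` returns a nested dict of arrays, which you can then feed to the model definition from src/.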

πŸ› οΈ Efficient Deployment

| Platform | Hardware | RAM Usage | Speed |
|---|---|---|---|
| x86 (original) | Intel Core i3-12100F (AVX2) | 252 MB | ~146 tok/s |
| ARM (NEON port) | Apple M1 | 252 MB | 82.5 tok/s |

📜 Citations

Original model

@misc{salazar2026bitmamba2,
  author       = {Salazar, Jesus},
  title        = {{BitMamba}-2: Efficient Scaling of 1.58-bit State Space Models},
  year         = {2026},
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.18394665},
  url          = {https://doi.org/10.5281/zenodo.18394665}
}

ARM NEON port and CPU-native research

@misc{rasatavohary2026ssm,
  author       = {Rasatavohary, Gabriel Zo-Hasina},
  title        = {State Space Models as {CPU}-Native Neural Network Architectures:
                   Experimental Evidence from {ARM NEON} Inference with 1.58-bit
                   Quantized {Mamba}},
  year         = {2026},
  howpublished = {engrXiv preprint (DOI pending)},
  note         = {Aquantic Research. First ARM NEON port of BitMamba-2.
                   Code: \url{https://github.com/rasata/bitmamba.cpp}},
}
