MERaLiON-2-3B-RotorQuant-MLX-8bit

MLX 8-bit RotorQuant quantization of aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-3B for Apple Silicon inference.

RotorQuant rotates weight matrices to decorrelate them before quantization, spreading outlier magnitudes more evenly across channels and improving accuracy at low bit-widths.
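The intuition can be demonstrated with a toy NumPy sketch: a random orthogonal rotation stands in for RotorQuant's learned rotations, and per-tensor int8 quantization stands in for the real per-group scheme. A few outlier columns blow up the quantization scale of the raw matrix; after rotation their energy is spread across all channels, so the same 8-bit grid wastes far less precision. All names here are illustrative, not the actual RotorQuant implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: scale by max-abs, round, dequantize."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127)
    return q * scale  # dequantized reconstruction

# Weight matrix with a few large outlier columns (common in LLM weights).
W = rng.normal(size=(256, 256))
W[:, :4] *= 50.0  # outlier channels dominate the max-abs scale

# Random orthogonal rotation (QR of a Gaussian matrix), a stand-in
# for RotorQuant's learned rotations.
Q, _ = np.linalg.qr(rng.normal(size=(256, 256)))

direct_err = np.linalg.norm(W - quantize_int8(W))
# Rotate, quantize, rotate back: Q is orthogonal, so (W Q) Qᵀ = W exactly
# in full precision; only the quantization step introduces error.
rotated_err = np.linalg.norm(W - quantize_int8(W @ Q) @ Q.T)

print(f"direct int8 error:  {direct_err:.2f}")
print(f"rotated int8 error: {rotated_err:.2f}")  # noticeably smaller
```

Because the rotation mixes the outlier columns into every channel, the post-rotation max-abs value drops sharply, shrinking the quantization step for the bulk of the weights.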

Model Specifications

Property       Value
Base Model     aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-3B
Parameters     ~3B
Architecture   Whisper-large-v3 encoder + Gemma-2-2B-IT decoder
Quantization   RotorQuant 8-bit (MLX)
Disk Size      ~3 GB
Peak RAM       ~4 GB
License        Apache 2.0
Task           Automatic Speech Recognition / Speech-to-Text
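The disk-size figure follows directly from the parameter count: at 8 bits per weight, ~3B parameters occupy about 3 GB, before the small overhead for quantization scales and any unquantized layers. A quick back-of-envelope check:

```python
params = 3e9            # ~3B parameters (approximate)
bits_per_weight = 8     # RotorQuant 8-bit
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB")  # ~3.0 GB, before scales/metadata overhead
```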

Quickstart

Installation

pip install mlx-lm mlx-whisper

Inference

from mlx_lm import load, generate
from mlx_lm.cache import IsoQuantCache

model, tokenizer = load("majentik/MERaLiON-2-3B-RotorQuant-MLX-8bit")

# Create IsoQuantCache for RotorQuant models
cache = IsoQuantCache(model)

# Text-only prompt shown for illustration; this snippet does not
# attach audio input to the Whisper encoder.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Transcribe the following audio."}],
    tokenize=False,
    add_generation_prompt=True,
)

response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
    cache=cache,
)
print(response)

Quantization Details

RotorQuant is a rotation-based quantization strategy that:

  • Applies learned rotation matrices to decorrelate weight channels before quantization
  • Reduces the impact of outlier weights that typically degrade quantized model quality
  • Provides more uniform weight distributions, leading to better accuracy retention
  • Pairs with IsoQuantCache for consistent KV-cache quantization during inference

This 8-bit variant provides the highest quality among the 3B quantized variants, closely matching the full-precision model.
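The source does not document IsoQuantCache's internals. As a general illustration of what quantized KV caching buys, here is a minimal toy sketch that stores keys/values as int8 with a per-tensor scale and dequantizes on read, halving cache memory versus 16-bit storage. The class name and layout are assumptions, not the mlx_lm implementation.

```python
import numpy as np

class Int8KVCache:
    """Toy quantized KV cache: each entry is an int8 tensor plus a float scale."""

    def __init__(self):
        self.entries = []  # list of (int8 array, scale) pairs

    def append(self, kv):
        # Symmetric per-tensor scale; floor avoids division by zero.
        scale = max(np.abs(kv).max() / 127.0, 1e-12)
        q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
        self.entries.append((q, scale))

    def read(self, i):
        q, scale = self.entries[i]
        return q.astype(np.float32) * scale

cache = Int8KVCache()
kv = np.random.default_rng(1).normal(size=(8, 64)).astype(np.float32)
cache.append(kv)
recovered = cache.read(0)
print(np.abs(kv - recovered).max())  # small, bounded by half a quantization step
```

The reconstruction error is bounded by half the quantization step (scale / 2), which is why 8-bit KV caching typically costs little accuracy while cutting cache memory in half.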

Supported Languages

MERaLiON-2 supports speech recognition in Southeast Asian languages including English, Mandarin Chinese, Malay, Tamil, and Indonesian.

Memory Estimates

Device                       Feasibility
MacBook Air M1 (8 GB)        Comfortable
MacBook Pro M1/M2 (16 GB)    Ideal
MacBook Pro M2/M3 (32 GB)    Overkill for this variant
iPad Pro M1/M2               Feasible
