MERaLiON-2-10B-RotorQuant-MLX-8bit

MLX 8-bit RotorQuant quantization of aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-10B for Apple Silicon inference.

RotorQuant applies rotation-based quantization that decorrelates weight matrices before quantization, distributing outlier magnitudes more evenly across channels for improved accuracy at low bit-widths.

Model Specifications

| Property | Value |
|---|---|
| Base Model | aisingapore/MERaLiON-AudioLLM-Whisper-SEA-LION-V3-10B |
| Parameters | ~10B |
| Architecture | Whisper encoder + Gemma-2-9B-IT decoder |
| Quantization | RotorQuant 8-bit (MLX) |
| Disk Size | ~10 GB |
| Peak RAM | ~11 GB |
| License | Apache 2.0 |
| Task | Automatic Speech Recognition / Speech-to-Text |

Quickstart

Installation

```shell
pip install mlx-lm mlx-whisper
```

Inference

```python
from mlx_lm import load, generate
from mlx_lm.cache import IsoQuantCache

# Download the quantized weights and tokenizer from the Hub
model, tokenizer = load("majentik/MERaLiON-2-10B-RotorQuant-MLX-8bit")

# RotorQuant models pair with IsoQuantCache for consistent KV-cache
# quantization during inference
cache = IsoQuantCache(model)

# Build a chat-formatted prompt. Audio input is supplied through the
# model's audio front end (see the base model card); this text-only
# call demonstrates the generation loop.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Transcribe the following audio."}],
    tokenize=False,
    add_generation_prompt=True,
)

response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
    cache=cache,
)
print(response)
```

Quantization Details

RotorQuant is a rotation-based quantization strategy that:

  • Applies learned rotation matrices to decorrelate weight channels before quantization
  • Reduces the impact of outlier weights that typically degrade quantized model quality
  • Provides more uniform weight distributions, leading to better accuracy retention
  • Pairs with IsoQuantCache for consistent KV-cache quantization during inference

Among the quantized releases, this 8-bit variant offers the highest quality, closely matching the full-precision model.
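The benefit of rotating before quantizing can be illustrated with a small NumPy sketch. This is not the RotorQuant algorithm itself (which learns its rotations); it uses a random orthogonal matrix and plain symmetric per-tensor quantization to show why decorrelating an outlier channel shrinks quantization error:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits=8):
    """Symmetric per-tensor round-to-nearest quantization."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

# Toy weight matrix with one outlier channel, as often seen in LLM layers
w = rng.normal(size=(256, 256))
w[:, 0] *= 50.0  # the outlier column inflates the quantization scale

# Direct quantization: the outlier forces a coarse scale on every channel
err_direct = np.abs(quantize(w) - w).mean()

# Rotate, quantize, rotate back: an orthogonal rotation spreads the
# outlier's energy across all channels, so the scale can be much finer
q_mat, _ = np.linalg.qr(rng.normal(size=(256, 256)))
err_rotated = np.abs(quantize(w @ q_mat) @ q_mat.T - w).mean()

print(f"direct:  {err_direct:.4f}")
print(f"rotated: {err_rotated:.4f}")  # substantially lower
```

Because the rotation is orthogonal, rotating back recovers the original weights exactly up to the quantization error, so the comparison isolates the effect of the finer scale.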

Supported Languages

MERaLiON-2 supports speech recognition in Southeast Asian languages including English, Mandarin Chinese, Malay, Tamil, and Indonesian.

Memory Estimates

| Device | Feasibility |
|---|---|
| MacBook Air M1 (8 GB) | Not recommended |
| MacBook Pro M1/M2 (16 GB) | Feasible with limited headroom |
| MacBook Pro M2/M3 (32 GB) | Comfortable |
| Mac Studio M2 Ultra (64 GB+) | Recommended for production |
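The figures above follow directly from the parameter count: at 8 bits, the weights take roughly one byte per parameter, and peak RAM adds KV cache and activation overhead on top. A quick back-of-the-envelope check (the ~1 GB overhead is inferred from the table, not measured here):

```python
# Back-of-the-envelope memory estimate for the 8-bit quantized weights
params = 10e9        # ~10B parameters (from the model card)
bits_per_param = 8   # RotorQuant 8-bit

weights_gb = params * bits_per_param / 8 / 1e9
print(f"weights: ~{weights_gb:.0f} GB")  # matches the ~10 GB disk size

# Peak RAM (~11 GB) adds KV cache and activations on top of the weights
```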
