# Whisper Small - MLX Q8 Quantized

8-bit quantized version of OpenAI's Whisper Small, optimized for Apple Silicon with MLX.

## Model Details

| Property | Value |
|---|---|
| Original Model | `openai/whisper-small` |
| Parameters | ~244M |
| Quantization | INT8 (Q8) |
| Size | ~300MB |
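The ~300MB figure can be sanity-checked with back-of-envelope arithmetic. This is a sketch, not an exact accounting: it assumes MLX's default group quantization (group size 64, with a float16 scale and bias per group), and real checkpoints keep some layers (e.g. embeddings and layer norms) in float16, which adds to the total.

```python
# Rough on-disk size estimate for an 8-bit quantized 244M-parameter model.
params = 244_000_000

# Each quantized weight is stored as 1 byte (INT8).
weights_bytes = params * 1

# Assumed MLX group quantization: one fp16 scale + one fp16 bias
# per group of 64 weights, i.e. ~4 extra bytes per 64 weights.
overhead_bytes = (params // 64) * 4

total_mb = (weights_bytes + overhead_bytes) / 1_000_000
print(f"~{total_mb:.0f} MB")  # prints "~259 MB"
```

The remaining gap to the listed ~300MB is plausibly the layers left unquantized plus file-format overhead.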

## Other Whisper Models

| Model | Size | Link |
|---|---|---|
| small (this) | ~300MB | - |
| medium | ~800MB | `LibraxisAI/whisper-medium-mlx-q8` |
| large-v3 | ~1.6GB | `LibraxisAI/whisper-large-v3-mlx-q8` |
| large-v3-turbo | ~900MB | `LibraxisAI/whisper-large-v3-turbo-mlx-q8` |

## Usage

```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "audio.wav",
    path_or_hf_repo="LibraxisAI/whisper-small-mlx-q8",
)
print(result["text"])
```

## Hardware Requirements

- Apple Silicon Mac (M1/M2/M3/M4)
- Minimum 8GB RAM

## License

MIT, inherited from OpenAI Whisper.


*Converted by LibraxisAI using mlx-whisper*
