# Whisper Medium - MLX Q8 Quantized
8-bit quantized version of OpenAI's Whisper Medium, optimized for Apple Silicon with MLX.
## Model Details
| Property | Value |
|---|---|
| Original Model | openai/whisper-medium |
| Parameters | ~769M |
| Quantization | INT8 (Q8) |
| Size | ~800MB |
## Other Whisper Models
| Model | Size | Link |
|---|---|---|
| small | ~300MB | LibraxisAI/whisper-small-mlx-q8 |
| medium (this) | ~800MB | - |
| large-v3 | ~1.6GB | LibraxisAI/whisper-large-v3-mlx-q8 |
| large-v3-turbo | ~900MB | LibraxisAI/whisper-large-v3-turbo-mlx-q8 |
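As a rough illustration of the table above, you could pick the largest quantized variant that fits a given memory budget. The helper below is a sketch, not part of any library; the sizes are the approximate figures from the table, and raw download size is only a lower bound on the RAM actually needed at inference time:

```python
# Approximate on-disk sizes (MB) of the LibraxisAI Q8 Whisper repos,
# taken from the table above, ordered smallest to largest.
MODELS = [
    ("LibraxisAI/whisper-small-mlx-q8", 300),
    ("LibraxisAI/whisper-medium-mlx-q8", 800),
    ("LibraxisAI/whisper-large-v3-turbo-mlx-q8", 900),
    ("LibraxisAI/whisper-large-v3-mlx-q8", 1600),
]


def pick_model(budget_mb: int) -> str:
    """Return the largest model whose approximate size fits the budget.

    Hypothetical helper for illustration only; inference needs extra RAM
    beyond the model weights themselves.
    """
    fitting = [name for name, size in MODELS if size <= budget_mb]
    if not fitting:
        raise ValueError(f"no model fits within {budget_mb} MB")
    return fitting[-1]


print(pick_model(1000))  # largest variant under ~1 GB
```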
## Usage

```python
import mlx_whisper

result = mlx_whisper.transcribe(
    "audio.wav",
    path_or_hf_repo="LibraxisAI/whisper-medium-mlx-q8",
)
print(result["text"])
```
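Besides `"text"`, the `result` dict also carries `"segments"` with per-segment start/end times (the same shape OpenAI's Whisper returns). A small helper, sketched here and not part of mlx-whisper, can turn those segments into SRT subtitles:

```python
def fmt_ts(seconds: float) -> str:
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def to_srt(segments) -> str:
    """Render a list of {'start', 'end', 'text'} segments as an SRT string."""
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(str(i))
        lines.append(f"{fmt_ts(seg['start'])} --> {fmt_ts(seg['end'])}")
        lines.append(seg["text"].strip())
        lines.append("")  # blank line separates SRT cues
    return "\n".join(lines)
```

For example, `to_srt(result["segments"])` after the transcription above yields text you can write straight to an `.srt` file.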
## Hardware Requirements
- Apple Silicon Mac (M1/M2/M3/M4)
- Minimum 8GB RAM
## License

MIT, inherited from OpenAI Whisper.

Converted by LibraxisAI using `mlx-whisper`.