# whisper-large-v3-q4
whisper-large-v3-q4 is an MLX-ready Whisper speech-to-text checkpoint derived from openai/whisper-large-v3 for local transcription on Apple Silicon.
## Intended use
- Local speech-to-text transcription on Apple Silicon
- Batch or interactive audio transcription experiments
- Multilingual ASR workflows when supported by the upstream Whisper checkpoint
## Out of scope
- Safety-critical decisions without domain expert review
- Claims of benchmark superiority not backed by published evaluation data
- Non-MLX runtime guarantees; this card documents the shipped HF checkpoint, not every possible serving stack
- Speaker diarization, clinical interpretation, or audio enhancement
## Training and conversion metadata
| Parameter | Value |
|---|---|
| Repository | LibraxisAI/whisper-large-v3-q4 |
| Base model | openai/whisper-large-v3 |
| Task | automatic-speech-recognition |
| Library | transformers |
| Format | MLX / Apple Silicon checkpoint |
| Quantization | Q4 |
| Architecture | Not declared in config |
| Model files | 1 |
| Config model_type | whisper |
This card only reports metadata present in the Hugging Face repository, existing card frontmatter, or public config files. Missing benchmark, dataset, or training-run details are left explicit rather than reconstructed.
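If you want to check these values locally rather than take the table on trust, the config shipped in the repository can be read directly. The snippet below is a minimal sketch using huggingface_hub (a tooling choice, not a requirement of this checkpoint); which keys appear beyond `model_type` depends on what the conversion tool wrote.

```python
# Sketch: download and inspect the repository's config.json.
# Assumes the huggingface_hub package is installed.
import json

from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="LibraxisAI/whisper-large-v3-q4",
    filename="config.json",
)
with open(config_path) as f:
    config = json.load(f)

print(config.get("model_type"))    # expected: "whisper"
print(config.get("quantization"))  # quantization block, if the converter wrote one
```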
## Tested inference path
**Inference for this checkpoint has been tested with LibraxisAI/mlx-batch-server.**
This is the recommended and tested path for operator-controlled local inference on Apple Silicon.
| Aspect | Status |
|---|---|
| Tested runtime | LibraxisAI/mlx-batch-server |
| Target hardware | Apple Silicon |
| Inference mode | Local / self-hosted |
| Hugging Face Hosted Inference | Disabled for this repository (inference: false) |
This does not claim compatibility with every possible serving stack. It documents the path that has been exercised for this published checkpoint.
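mlx-batch-server's HTTP interface is not documented in this card, so the following is only an illustrative sketch: it assumes the server runs locally and exposes an OpenAI-style `/v1/audio/transcriptions` route, which may not match the real API. Consult the LibraxisAI/mlx-batch-server repository for the actual endpoints and parameters.

```python
# Hypothetical sketch only: the port, route, and field names are assumptions,
# not the documented mlx-batch-server API.
import requests

with open("audio.wav", "rb") as f:
    response = requests.post(
        "http://localhost:8000/v1/audio/transcriptions",  # assumed route
        files={"file": f},
        data={"model": "LibraxisAI/whisper-large-v3-q4"},
        timeout=300,
    )
response.raise_for_status()
print(response.json())
```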
## Usage

```python
import mlx_whisper

# Transcribe a local audio file with the quantized MLX checkpoint
result = mlx_whisper.transcribe(
    "audio.wav",
    path_or_hf_repo="LibraxisAI/whisper-large-v3-q4",
)
print(result["text"])
```
### Notes
- Use local audio files in formats supported by mlx_whisper.
- For long recordings, split the audio into manageable chunks before transcription (see the sketch below).
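One practical way to chunk a long recording is to let ffmpeg cut it into fixed-length segments and transcribe each piece in order. The sketch below is not part of the checkpoint: the ffmpeg dependency, file names, and 10-minute segment length are all assumptions to adapt to your setup.

```python
# Sketch: split a long recording into ~10-minute chunks with ffmpeg, then
# transcribe each chunk and join the text. Assumes ffmpeg is on PATH.
import glob
import subprocess

import mlx_whisper

subprocess.run(
    [
        "ffmpeg", "-i", "long_recording.wav",
        "-f", "segment", "-segment_time", "600",  # 600 s per chunk (assumption)
        "-c", "copy", "chunk_%03d.wav",
    ],
    check=True,
)

texts = []
for chunk in sorted(glob.glob("chunk_*.wav")):
    result = mlx_whisper.transcribe(
        chunk,
        path_or_hf_repo="LibraxisAI/whisper-large-v3-q4",
    )
    texts.append(result["text"].strip())

print(" ".join(texts))
```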
## Example output
No public sample output is currently declared for this checkpoint.
## Quantization notes
| Aspect | Original/base checkpoint | This checkpoint |
|---|---|---|
| Lineage | openai/whisper-large-v3 | LibraxisAI/whisper-large-v3-q4 |
| Runtime target | Upstream runtime format | MLX on Apple Silicon |
| Quantization | Base precision or upstream-declared format | Q4 |
| Published quality delta | Not declared in public metadata | Not declared in public metadata |
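No quality delta is published, so the only figure that can be sketched from public information is the approximate weight footprint. The arithmetic below assumes the widely cited ~1.55 B parameter count for whisper-large-v3 and a typical group-wise 4-bit scheme whose scales add a small overhead; the authoritative number is simply the size of the shipped model files.

```python
# Back-of-envelope weight-size estimate; both inputs are assumptions.
params = 1.55e9                     # approx. whisper-large-v3 parameter count
fp16_gb = params * 2 / 1e9          # 2 bytes per weight at float16 -> ~3.1 GB
q4_bits = 4.5                       # 4-bit weights plus per-group scales
q4_gb = params * q4_bits / 8 / 1e9  # -> roughly 0.9 GB

print(f"fp16 weights: ~{fp16_gb:.1f} GB")
print(f"Q4 weights:   ~{q4_gb:.1f} GB")
```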
## Limitations
- No public benchmarks for this checkpoint are declared in the model metadata.
- No public benchmark claims are made by this card unless listed in the frontmatter.
- Validate outputs on your own domain data before relying on this checkpoint (a scoring sketch follows this list).
- Memory use and speed depend heavily on Apple Silicon generation, unified-memory size, audio duration, and language complexity.
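A straightforward way to run that validation is to transcribe a small labelled set you already have and compute word error rate against the reference transcripts. The sketch below uses the jiwer package for WER and assumes a `sample.wav` / `sample.txt` file layout; both are illustrative choices, not requirements of this checkpoint.

```python
# Sketch: score the checkpoint on your own labelled audio with word error rate.
# Assumes eval/*.wav files, each paired with a same-named .txt reference.
import glob
import pathlib

import jiwer
import mlx_whisper

references, hypotheses = [], []
for wav in sorted(glob.glob("eval/*.wav")):
    references.append(pathlib.Path(wav).with_suffix(".txt").read_text().strip())
    result = mlx_whisper.transcribe(
        wav,
        path_or_hf_repo="LibraxisAI/whisper-large-v3-q4",
    )
    hypotheses.append(result["text"].strip())

print(f"WER: {jiwer.wer(references, hypotheses):.3f}")
```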
## License

MIT. When a base model is declared, check the upstream/base model license as well.
## Citation

```bibtex
@misc{libraxisai-whisper-large-v3-q4,
  title        = {whisper-large-v3-q4},
  author       = {LibraxisAI},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/LibraxisAI/whisper-large-v3-q4}},
  note         = {MLX checkpoint published by LibraxisAI}
}
```
Developed with AI Agents by VetCoders (c) 2024-2026 LibraxisAI