# mlx-community/whisper-large-v3-asr-4bit
This model was converted to MLX format from [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) using mlx-audio version 0.3.0.
Refer to the [original model card](https://huggingface.co/openai/whisper-large-v3) for more details on the model.
## Use with mlx-audio

```bash
pip install -U mlx-audio
```
### CLI example

```bash
python -m mlx_audio.stt.generate --model mlx-community/whisper-large-v3-asr-4bit --audio "audio.wav"
```
### Python example

```python
from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

model = load_model("mlx-community/whisper-large-v3-asr-4bit")

transcription = generate_transcription(
    model=model,
    audio_path="path_to_audio.wav",
    output_path="path_to_output.txt",
    format="txt",
    verbose=True,
)

print(transcription.text)
```
**Model size:** 0.2B params · **Tensor types:** F16, U32 · **Quantization:** 4-bit