---
language:
- en
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- mlx
- speech-to-text
- speech-to-speech
- speech
- speech generation
- stt
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
license: apache-2.0
library_name: mlx-audio
model-index:
- name: whisper-medium.en
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - type: wer
      value: 4.120542365210176
      name: Test WER
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - type: wer
      value: 7.431640255663553
      name: Test WER
---
# mlx-community/whisper-medium.en-asr-4bit

This model was converted to MLX format from [`openai/whisper-medium.en`](https://huggingface.co/openai/whisper-medium.en) using mlx-audio version 0.2.10.

Refer to the [original model card](https://huggingface.co/openai/whisper-medium.en) for more details on the model.
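The model-index metadata above reports word error rate (WER) on the LibriSpeech test sets. As background, WER is the word-level edit distance (substitutions, insertions, deletions) between the reference and hypothesis transcripts, divided by the reference word count; a minimal self-contained sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance over reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j - 1] + cost,  # match / substitution
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
            )
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of 6 reference words, ~0.167
```

Evaluation toolkits typically also normalize text (casing, punctuation) before scoring, so exact figures depend on the normalization used.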
## Use with mlx-audio

```bash
pip install -U mlx-audio
```

### CLI Example

```bash
python -m mlx_audio.stt.generate --model mlx-community/whisper-medium.en-asr-4bit --audio "audio.wav"
```
### Python Example

```python
from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

# Load the 4-bit quantized Whisper model from the Hugging Face Hub.
model = load_model("mlx-community/whisper-medium.en-asr-4bit")

# Transcribe an audio file and write the result to a text file.
transcription = generate_transcription(
    model=model,
    audio_path="path_to_audio.wav",
    output_path="path_to_output.txt",
    format="txt",
    verbose=True,
)

print(transcription.text)
```