# whisper-large-v3-turbo
This model was converted to MLX format from [`openai/whisper-large-v3-turbo`](https://huggingface.co/openai/whisper-large-v3-turbo).
## Use with mlx
```bash
pip install mlx-whisper
```

```python
import mlx_whisper

# Transcribe an audio file; replace FILE_NAME with the path to your audio.
result = mlx_whisper.transcribe(
    "FILE_NAME",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo",  # repo id must be a string
)
print(result["text"])
```
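
As a brief sketch of a common follow-up, `transcribe` mirrors the upstream Whisper options, so word-level timestamps should be available by passing `word_timestamps=True`; the `segments`/`words` structure below follows the upstream Whisper result format and is an assumption about this package.

```python
import mlx_whisper

# Sketch: word-level timestamps (word_timestamps and the result layout
# are assumed to match upstream Whisper's transcribe()).
result = mlx_whisper.transcribe(
    "FILE_NAME",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo",
    word_timestamps=True,
)
for segment in result["segments"]:
    for word in segment["words"]:
        print(f"{word['start']:.2f}-{word['end']:.2f}: {word['word']}")
```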