# mlx-community/Voxtral-Mini-4B-Realtime-6bit
This model (`mlx-community/Voxtral-Mini-4B-Realtime-6bit`) was converted to MLX format from `mistralai/Voxtral-Mini-4B-Realtime-2602` using voxmlx.
## Use with voxmlx

```bash
pip install voxmlx
```
```python
from voxmlx import transcribe

text = transcribe("audio.flac", model_path="mlx-community/Voxtral-Mini-4B-Realtime-6bit")
print(text)
```
- Model size: 1.0B params
- Tensor types: BF16 · U32
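As a rough sketch of what 6-bit quantization buys over the BF16 source weights (an illustration only: the 4B parameter count is taken from the model name, and quantization metadata such as per-group scales and biases is ignored):

```python
def approx_weight_bytes(n_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in gigabytes: params * bits / 8 bits-per-byte."""
    return n_params * bits_per_param / 8 / 1e9

# Hypothetical figures: ~4B parameters, BF16 (16 bits) vs. 6-bit quantized
bf16_gb = approx_weight_bytes(4e9, 16)  # ~8 GB at BF16
q6_gb = approx_weight_bytes(4e9, 6)     # ~3 GB at 6 bits per weight
print(bf16_gb, q6_gb)
```

The real on-disk size is somewhat larger than the 6-bit estimate because quantized MLX weights also store group scales and biases (the U32 tensors above hold the packed quantized values).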
## Model tree for mlx-community/Voxtral-Mini-4B-Realtime-6bit

- Base model: `mistralai/Ministral-3-3B-Base-2512`
- Finetuned: `mistralai/Voxtral-Mini-4B-Realtime-2602`
- This model: quantized (6-bit) from the finetuned checkpoint