Update README.md

README.md CHANGED

````diff
@@ -25,7 +25,7 @@ tags:
 - realtime
 ---
 
-# Voxtral Mini 4B Realtime
+# Voxtral Mini 4B Realtime
 
 This is a **4-bit quantized** [MLX](https://github.com/ml-explore/mlx) conversion of [mistralai/Voxtral-Mini-4B-Realtime-2602](https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602), Mistral AI's streaming speech-to-text model.
 
@@ -52,7 +52,7 @@ pip install mlx-audio[stt]
 ```python
 from mlx_audio.stt.utils import load
 
-model = load("mlx-community/Voxtral-Mini-4B-Realtime-2602-
+model = load("mlx-community/Voxtral-Mini-4B-Realtime-2602-4bit")
 
 # Transcribe audio
 result = model.generate("audio.wav")
````
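The card describes the weights as **4-bit quantized**. As a rough, back-of-the-envelope sketch (illustrative arithmetic only, ignoring quantization scales/biases and activation memory — not measured numbers for this model):

```python
# Approximate weight-storage footprint of a ~4B-parameter model
# at different precisions. Purely illustrative arithmetic.
PARAMS = 4_000_000_000  # assumed parameter count from the "4B" name

def weights_gib(bits_per_param: int) -> float:
    """Size of the raw weights in GiB at the given precision."""
    return PARAMS * bits_per_param / 8 / 1024**3

fp16 = weights_gib(16)  # ~7.45 GiB at 16-bit
q4 = weights_gib(4)     # ~1.86 GiB at 4-bit
print(f"fp16: {fp16:.2f} GiB, 4-bit: {q4:.2f} GiB")
```

This 4x reduction in raw weight size is the main reason the quantized conversion fits comfortably on consumer Apple-silicon machines that MLX targets.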