# mlx-community/Qwen3-ASR-0.6B-4bit
This model was converted to MLX format from [Qwen/Qwen3-ASR-0.6B](https://huggingface.co/Qwen/Qwen3-ASR-0.6B) using mlx-audio version 0.3.1.

Refer to the original model card for more details on the model.
## Use with mlx-audio

```bash
pip install -U mlx-audio
```
CLI Example:

```bash
python -m mlx_audio.stt.generate --model mlx-community/Qwen3-ASR-0.6B-4bit --audio "audio.wav"
```
Python Example:

```python
from mlx_audio.stt.utils import load_model
from mlx_audio.stt.generate import generate_transcription

# Load the 4-bit quantized model from the Hugging Face Hub
model = load_model("mlx-community/Qwen3-ASR-0.6B-4bit")

# Transcribe an audio file and write the transcript to a text file
transcription = generate_transcription(
    model=model,
    audio_path="path_to_audio.wav",
    output_path="path_to_output.txt",
    format="txt",
    verbose=True,
)

print(transcription.text)
```
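If you have several recordings, the loaded model can be reused across calls rather than reloaded each time. A minimal sketch, assuming only the `load_model` and `generate_transcription` API shown above; the file list and output naming are illustrative:

```python
from pathlib import Path

from mlx_audio.stt.generate import generate_transcription
from mlx_audio.stt.utils import load_model

# Load the model once and reuse it for every file
model = load_model("mlx-community/Qwen3-ASR-0.6B-4bit")

# Hypothetical list of input recordings
audio_files = ["interview.wav", "lecture.wav", "memo.wav"]

for audio_file in audio_files:
    # Write each transcript next to its source file, e.g. interview.txt
    out_path = str(Path(audio_file).with_suffix(".txt"))
    transcription = generate_transcription(
        model=model,
        audio_path=audio_file,
        output_path=out_path,
        format="txt",
        verbose=False,
    )
    print(f"{audio_file}: {transcription.text}")
```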