# Qwen3 ASR 0.6B — MLX 4-bit
An MLX 4-bit quantized conversion of Qwen/Qwen3-ASR-0.6B for on-device inference on Apple Silicon.
## Usage
Used by the `Qwen3ASR` module of qwen3-asr-swift:

```swift
let model = try await Qwen3ASRModel.fromPretrained()
let text = model.transcribe(audio: samples, sampleRate: 16000)
```

Or from the command line:

```shell
audio transcribe audio.wav
```
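If you want the weights on disk ahead of time (e.g. for offline use), the repository can also be fetched directly with the standard Hugging Face CLI. This is a hedged sketch, assuming `huggingface-cli` (from the `huggingface_hub` package) is installed; the repo id is the one this card describes:

```shell
# Pull the 4-bit MLX weights into the local Hugging Face cache.
# Requires: pip install huggingface_hub
huggingface-cli download aitytech/Qwen3-ASR-0.6B-MLX-4bit
```

Once cached, subsequent `fromPretrained()` calls resolve the files locally instead of re-downloading.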
## Model Details
- Architecture: Qwen3-ASR encoder-decoder (Whisper-style audio encoder + Qwen3 text decoder)
- Parameters: 0.6B
- Quantization: 4-bit (MLX)
- Size: ~680 MB
- Languages: Multilingual (EN, ZH, JA, KO, FR, DE, ES, and more)
## Model Tree

Base model: Qwen/Qwen3-ASR-0.6B