How to use mlx-community/Qwen-Image-2512-3bit with MLX:

```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Qwen-Image-2512-3bit mlx-community/Qwen-Image-2512-3bit
```
Qwen-Image-2512-3bit-MLX
MLX-optimized 3-bit quantized version of Qwen-Image-2512 for Apple Silicon.
Quick Start
```shell
pip install mflux

mflux-generate-qwen \
  --model mlx-community/Qwen-Image-2512-3bit \
  --prompt "A photorealistic cat wearing a tiny top hat" \
  --steps 20
```
Model Details
- Base Model: Qwen/Qwen-Image-2512 (Dec 31, 2025)
- Quantization: 3-bit
- Framework: MLX (Apple Silicon optimized)
- Converted with: mflux 0.14.0
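To see why 3-bit quantization matters on Apple Silicon, here is a rough back-of-the-envelope sketch of the weight storage required at different bit-widths. The 20B parameter count below is a hypothetical figure for illustration only; the actual parameter count is not stated in this card.

```python
# Rough weight-storage estimate for a quantized model.
# ASSUMPTION: the parameter count is illustrative, not taken from this model card.

def weight_size_gib(num_params: float, bits_per_weight: float) -> float:
    """GiB needed to store the weights alone (no activations or runtime buffers)."""
    return num_params * bits_per_weight / 8 / 1024**3

params = 20e9  # hypothetical 20B parameters
print(f"bf16 : {weight_size_gib(params, 16):.1f} GiB")
print(f"3-bit: {weight_size_gib(params, 3):.1f} GiB")
```

Note that real quantization schemes also store per-group scales (and sometimes zero points), so the effective bits per weight are slightly higher than the nominal 3; the sketch still shows the order-of-magnitude reduction that makes the model fit in unified memory on consumer Macs.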
License
Apache 2.0 (same as base model)