Qwen Image Edit 2511: Pre-quantized for mflux
Pre-quantized weights for Qwen/Qwen-Image-Edit-2511, ready to use with mflux on Apple Silicon.
Available quantizations
| Variant | Folder | Size |
|---|---|---|
| 3-bit | Qwen-Image-Edit-2511-q3 | ~23 GB |
| 4-bit | Qwen-Image-Edit-2511-q4 | ~25 GB |
| 5-bit | Qwen-Image-Edit-2511-q5 | ~28 GB |
| 6-bit | Qwen-Image-Edit-2511-q6 | ~30 GB |
| 8-bit | Qwen-Image-Edit-2511-q8 | ~35 GB |
Full (unquantized) weights: ~54 GB from the original repo.
Usage
Download a specific quantization (the `huggingface-cli` tool ships with `huggingface_hub`; install it with `pip install "huggingface_hub[hf_xet]"`):
huggingface-cli download fcreait/Qwen-Image-Edit-mflux Qwen-Image-Edit-2511-q4 --local-dir ./qwen-edit-q4
Generate with mflux:
mflux-generate-qwen-edit \
--model ./qwen-edit-q4/Qwen-Image-Edit-2511-q4 \
--base-model qwen \
--image-paths input.jpg \
--prompt "your editing prompt" \
--output output.png
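For batch editing, the same invocation can be assembled programmatically. A small sketch (the `edit_args` helper is hypothetical; it simply reproduces the flags shown above in a form `subprocess.run` accepts):

```python
import subprocess  # used by the commented-out invocation below

def edit_args(model_dir: str, image: str, prompt: str,
              output: str = "output.png") -> list[str]:
    # Build the mflux-generate-qwen-edit argument list shown above.
    return [
        "mflux-generate-qwen-edit",
        "--model", model_dir,
        "--base-model", "qwen",
        "--image-paths", image,
        "--prompt", prompt,
        "--output", output,
    ]

# Example (requires mflux to be installed and the weights downloaded):
# subprocess.run(edit_args("./qwen-edit-q4/Qwen-Image-Edit-2511-q4",
#                          "input.jpg", "your editing prompt"))
```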
Details
- Quantized with mflux v0.17.4 using `mflux-save`
- Source model: Qwen/Qwen-Image-Edit-2511
- Format: MLX safetensors with mflux metadata
- Includes VisionTransformer weights (required for image editing)
Model tree for fcreait/Qwen-Image-Edit-mflux
- Base model: Qwen/Qwen-Image-Edit-2511