---
pipeline_tag: any-to-any
library_name: transformers
tags:
- text-to-image
- image-editing
- image-understanding
- vision-language
- multimodal
- unified-model
license: mit
---
# UniPic3-DMD-Model (Distribution Matching Distillation)

## Introduction
UniPic3-DMD-Model is a few-step image editing and multi-image composition model trained using Distribution Matching Distillation (DMD).
The model directly matches the output distribution of a high-quality teacher model, enabling sharp, visually detailed generations in very few inference steps.
It is designed to maximize perceptual quality and realism, closely imitating strong proprietary or large teacher models. The model is initialized from a consistency-trained checkpoint and further refined via distribution-level distillation.
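As a rough illustration of the DMD idea (this is a toy sketch, not the repository's training code), the student is updated by following the difference between the score of its own output distribution and the score of the teacher's distribution, which pulls generated samples toward the teacher. The 1-D Gaussian setup below, with all names invented for the example, shows that gradient in closed form:

```python
import numpy as np

# Toy 1-D Gaussian example (illustrative only): the "teacher" distribution
# is N(0, 1) and the student's current output distribution is N(mu, 1).
# The DMD gradient on a generated sample x is score_fake(x) - score_real(x);
# descending it moves the student's samples toward the teacher distribution.

def score_real(x):
    """grad log p_real(x) for N(0, 1)."""
    return -x

def score_fake(x, mu):
    """grad log p_fake(x) for N(mu, 1)."""
    return -(x - mu)

rng = np.random.default_rng(0)
mu = 3.0          # student starts far from the teacher mean (0.0)
lr = 0.1

for _ in range(200):
    x = mu + rng.standard_normal(64)             # samples from the student
    grad_x = score_fake(x, mu) - score_real(x)   # DMD gradient per sample
    # dx/dmu = 1, so the gradient w.r.t. mu is the mean over samples
    mu -= lr * grad_x.mean()

print(round(mu, 2))  # mu has converged to the teacher mean 0.0
```

In this Gaussian case the score difference is exactly `mu`, so the update contracts the student's mean geometrically toward the teacher's; the full method applies the same principle to diffusion score networks instead of closed-form scores.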
## Benchmarks

## Usage
### 1. Clone the Repository

```bash
git clone https://github.com/SkyworkAI/UniPic
cd UniPic-3
```
### 2. Set Up the Environment

```bash
conda create -n unipic python=3.10
conda activate unipic
pip install -r requirements.txt
```
### 3. Batch Inference

```bash
transformer_path="Skywork/Unipic3-DMD/ema_transformer"
python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 --use_env \
  qwen_image_edit_fast/batch_inference.py \
  --jsonl_path data/val.jsonl \
  --output_dir work_dirs/output \
  --distributed \
  --num_inference_steps 8 \
  --true_cfg_scale 4.0 \
  --transformer "$transformer_path" \
  --skip_existing
```
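The file passed via `--jsonl_path` holds one editing request per line. The authoritative schema is defined by `batch_inference.py` in the repository; the field names below (`image`, `prompt`) are an assumption for illustration only, so verify them against the script before use:

```python
import json
from pathlib import Path

# Hypothetical schema (assumed, not confirmed by the repo): one JSON object
# per line with a source image path and an edit instruction.
records = [
    {"image": "inputs/cat.png", "prompt": "make the cat wear a red hat"},
    {"image": "inputs/room.png", "prompt": "change the walls to light blue"},
]

out = Path("data/val.jsonl")
out.parent.mkdir(parents=True, exist_ok=True)
with out.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

print(f"wrote {len(records)} records to {out}")
```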
## License
This model is released under the MIT License.
## Citation
If you use Skywork UniPic 3.0 in your research, please cite:
```bibtex
@article{wei2026skywork,
  title={Skywork UniPic 3.0: Unified Multi-Image Composition via Sequence Modeling},
  author={Wei, Hongyang and Liu, Hongbo and Wang, Zidong and Peng, Yi and Xu, Baixin and Wu, Size and Zhang, Xuying and He, Xianglong and Liu, Zexiang and Wang, Peiyu and others},
  journal={arXiv preprint arXiv:2601.15664},
  year={2026}
}
```