|
|
--- |
|
|
pipeline_tag: any-to-any |
|
|
library_name: transformers |
|
|
tags: |
|
|
- text-to-image |
|
|
- image-editing |
|
|
- image-understanding |
|
|
- vision-language |
|
|
- multimodal |
|
|
- unified-model |
|
|
license: mit |
|
|
--- |
|
|
|
|
|
## UniPic3-DMD-Model (Distribution Matching Distillation)
|
|
<div align="center"> |
|
|
<img src="skywork-logo.png" alt="Skywork Logo" width="500"> |
|
|
</div> |
|
|
|
|
|
<p align="center"> |
|
|
<a href="https://github.com/SkyworkAI/UniPic"> |
|
|
<img src="https://img.shields.io/badge/GitHub-UniPic-blue?logo=github" alt="GitHub Repo"> |
|
|
</a> |
|
|
<a href="https://github.com/SkyworkAI/UniPic/stargazers"> |
|
|
<img src="https://img.shields.io/github/stars/SkyworkAI/UniPic?style=social" alt="GitHub Stars"> |
|
|
</a> |
|
|
<a href="https://github.com/SkyworkAI/UniPic/network/members"> |
|
|
<img src="https://img.shields.io/github/forks/SkyworkAI/UniPic?style=social" alt="GitHub Forks"> |
|
|
</a> |
|
|
</p> |
|
|
|
|
|
## Introduction
|
|
<div align="center"> <img src="unipic3.png" alt="Model Teaser" width="720"> </div> |
|
|
|
|
|
**UniPic3-DMD-Model** is a few-step image editing and multi-image composition model trained using **Distribution Matching Distillation (DMD)**. |
|
|
The model directly matches the **output distribution of a high-quality teacher model**, enabling sharp, visually detailed generations in very few inference steps. |
|
|
It is designed to maximize **perceptual quality and realism**, closely imitating strong proprietary or large teacher models. This model is initialized from a consistency-trained checkpoint and further refined via distribution-level distillation. |
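To make the distillation idea concrete, here is a minimal, illustrative sketch of the DMD update direction in one dimension. It is not the UniPic3 training code: the Gaussian "teacher" and "student" distributions, their means, and the closed-form scores are all stand-ins chosen so the mechanics are visible. The core idea carries over, though: the generator's samples are pushed along the difference between the score of the distribution tracking its own outputs ("fake") and the score of the teacher's distribution ("real").

```python
import numpy as np

def score_gaussian(x, mu, sigma):
    """Score (gradient of log-density) of N(mu, sigma^2) at x."""
    return -(x - mu) / sigma**2

def dmd_gradient(x, mu_real, mu_fake, sigma=1.0):
    """DMD update direction at the student's samples x:
    fake score minus real score. Descending this direction
    moves the student's output distribution toward the teacher's."""
    return score_gaussian(x, mu_fake, sigma) - score_gaussian(x, mu_real, sigma)

# Student samples centered at 2.0; teacher distribution centered at 0.0.
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=1000)
g = dmd_gradient(x, mu_real=0.0, mu_fake=2.0)

# One gradient-descent step pulls the sample mean toward the teacher mean.
x_new = x - 0.5 * g
print(f"mean before: {x.mean():.2f}, after: {x_new.mean():.2f}")
```

In the real model both scores come from learned diffusion networks evaluated on noised generator outputs, and the update flows back through the few-step generator rather than moving samples directly.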
|
|
|
|
|
## Benchmarks
|
|
<div align="center"> <img src="unipic3_eval.png" alt="Benchmark Results" width="720"> </div>
|
|
|
|
|
|
|
|
## Usage
|
|
|
|
|
### 1. Clone the Repository |
|
|
```bash |
|
|
git clone https://github.com/SkyworkAI/UniPic |
|
|
cd UniPic-3 |
|
|
``` |
|
|
|
|
|
### 2. Set Up the Environment |
|
|
```bash |
|
|
conda create -n unipic3 python=3.10

conda activate unipic3
|
|
pip install -r requirements.txt |
|
|
``` |
|
|
|
|
|
|
|
|
### 3. Batch Inference
|
|
```bash |
|
|
transformer_path="Skywork/Unipic3-DMD/ema_transformer"
|
|
|
|
|
python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 --use_env \ |
|
|
qwen_image_edit_fast/batch_inference.py \ |
|
|
--jsonl_path data/val.jsonl \ |
|
|
--output_dir work_dirs/output \ |
|
|
--distributed \ |
|
|
--num_inference_steps 4 \ |
|
|
--true_cfg_scale 4.0 \ |
|
|
--transformer "$transformer_path" \
|
|
--skip_existing |
|
|
``` |
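The script above reads its editing tasks from the file passed via `--jsonl_path`, one JSON object per line. The exact field names expected by `batch_inference.py` are not documented here, so the `image` and `prompt` keys below are assumptions; check the UniPic repository for the actual schema. This sketch just shows how to produce a well-formed JSONL file:

```python
import json

# Hypothetical records for data/val.jsonl. The "image" and "prompt"
# field names are assumed -- verify them against the UniPic repo.
records = [
    {"image": "inputs/cat.png", "prompt": "make the cat wear a red hat"},
    {"image": "inputs/room.png", "prompt": "turn the walls light blue"},
]

with open("val.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

With `--skip_existing` set, outputs already present in `--output_dir` are not regenerated, so the same JSONL can be re-run safely after an interruption.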
|
|
|
|
|
## License
|
|
This model is released under the MIT License. |
|
|
|
|
|
## Citation |
|
|
If you use Skywork-UniPic in your research, please cite: |
|
|
```bibtex
|
|
@misc{wang2025skyworkunipicunifiedautoregressive, |
|
|
title={Skywork UniPic: Unified Autoregressive Modeling for Visual Understanding and Generation}, |
|
|
author={Peiyu Wang and Yi Peng and Yimeng Gan and Liang Hu and Tianyidan Xie and Xiaokun Wang and Yichen Wei and Chuanxin Tang and Bo Zhu and Changshi Li and Hongyang Wei and Eric Li and Xuchen Song and Yang Liu and Yahui Zhou}, |
|
|
year={2025}, |
|
|
eprint={2508.03320}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CV}, |
|
|
url={https://arxiv.org/abs/2508.03320}, |
|
|
} |
|
|
``` |
|
|
|
|
|
```bibtex
|
|
@misc{wei2025skyworkunipic20building, |
|
|
title={Skywork UniPic 2.0: Building Kontext Model with Online RL for Unified Multimodal Model}, |
|
|
author={Hongyang Wei and Baixin Xu and Hongbo Liu and Cyrus Wu and Jie Liu and Yi Peng and Peiyu Wang and Zexiang Liu and Jingwen He and Yidan Xietian and Chuanxin Tang and Zidong Wang and Yichen Wei and Liang Hu and Boyi Jiang and William Li and Ying He and Yang Liu and Xuchen Song and Eric Li and Yahui Zhou}, |
|
|
year={2025}, |
|
|
eprint={2509.04548}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CV}, |
|
|
url={https://arxiv.org/abs/2509.04548}, |
|
|
} |
|
|
``` |
|
|
|
|
|
|
|
|
|