---
pipeline_tag: any-to-any
library_name: transformers
tags:
- text-to-image
- image-editing
- image-understanding
- vision-language
- multimodal
- unified-model
license: mit
---
## 🌌 Unipic3-DMD Model (Distribution Matching Distillation)
<div align="center">
<img src="skywork-logo.png" alt="Skywork Logo" width="500">
</div>
<p align="center">
<a href="https://github.com/SkyworkAI/UniPic">
<img src="https://img.shields.io/badge/GitHub-UniPic-blue?logo=github" alt="GitHub Repo">
</a>
<a href="https://github.com/SkyworkAI/UniPic/stargazers">
<img src="https://img.shields.io/github/stars/SkyworkAI/UniPic?style=social" alt="GitHub Stars">
</a>
<a href="https://github.com/SkyworkAI/UniPic/network/members">
<img src="https://img.shields.io/github/forks/SkyworkAI/UniPic?style=social" alt="GitHub Forks">
</a>
</p>
## πŸ“– Introduction
<div align="center"> <img src="unipic3.png" alt="Model Teaser" width="720"> </div>
**UniPic3-DMD-Model** is a few-step image editing and multi-image composition model trained using **Distribution Matching Distillation (DMD)**.
The model directly matches the **output distribution of a high-quality teacher model**, enabling sharp, visually detailed generations in very few inference steps.
It is designed to maximize **perceptual quality and realism**, closely imitating strong proprietary or large teacher models. This model is initialized from a consistency-trained checkpoint and further refined via distribution-level distillation.
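The core DMD idea can be illustrated with a toy sketch (this is a conceptual illustration, not this model's training code): the student's samples are nudged by the difference between the fake-distribution score and the real (teacher) distribution score, which pushes the student's output distribution toward the teacher's. Here both distributions are 1-D Gaussians so the scores are analytic.

```python
import numpy as np

def score(x, mu, sigma):
    # Score (gradient of the log-density) of a 1-D Gaussian.
    return -(x - mu) / sigma**2

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=10_000)  # "student" samples
mu_real, sigma_real = 3.0, 1.0         # "teacher" distribution

for _ in range(200):
    # DMD-style update: difference of fake and real scores.
    # The fake score uses the current empirical mean/std of the samples.
    s_fake = score(x, x.mean(), x.std())
    s_real = score(x, mu_real, sigma_real)
    x -= 0.05 * (s_fake - s_real)

print(round(x.mean(), 2))  # sample mean drifts to the teacher mean, ~3.0
```

In the real model the analytic Gaussian scores are replaced by learned diffusion score estimates, but the distribution-level matching principle is the same.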
## πŸ“Š Benchmarks
<div align="center"> <img src="unipic3_eval.png" alt="Benchmark Results" width="720"> </div>
## 🧠 Usage
### 1. Clone the Repository
```bash
git clone https://github.com/SkyworkAI/UniPic
cd UniPic/UniPic-3
```
### 2. Set Up the Environment
```bash
conda create -n unipic3 python=3.10
conda activate unipic3
pip install -r requirements.txt
```
### 3. Batch Inference
```bash
transformer_path="Skywork/Unipic3-DMD/ema_transformer"
python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 --use_env \
  qwen_image_edit_fast/batch_inference.py \
  --jsonl_path data/val.jsonl \
  --output_dir work_dirs/output \
  --distributed \
  --num_inference_steps 8 \
  --true_cfg_scale 4.0 \
  --transformer "$transformer_path" \
  --skip_existing
```
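The batch-inference script reads its inputs from a JSONL file (one JSON object per line). The exact schema is defined by `qwen_image_edit_fast/batch_inference.py` in the repository; the field names below (`image`, `prompt`) are assumptions for illustration only, so check the script before use. A minimal sketch for building `data/val.jsonl`:

```python
import json
from pathlib import Path

# Hypothetical records; the field names expected by
# qwen_image_edit_fast/batch_inference.py may differ -- check the repo.
records = [
    {"image": "inputs/cat.png", "prompt": "Replace the background with a beach at sunset"},
    {"image": "inputs/room.png", "prompt": "Add a bookshelf along the left wall"},
]

out = Path("data/val.jsonl")
out.parent.mkdir(parents=True, exist_ok=True)
with out.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

print(sum(1 for _ in out.open()))  # -> 2 (one line per example)
```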
## πŸ“„ License
This model is released under the MIT License.
## Citation
If you use Skywork UniPic 3.0 in your research, please cite:
```
@article{wei2026skywork,
  title={Skywork UniPic 3.0: Unified Multi-Image Composition via Sequence Modeling},
  author={Wei, Hongyang and Liu, Hongbo and Wang, Zidong and Peng, Yi and Xu, Baixin and Wu, Size and Zhang, Xuying and He, Xianglong and Liu, Zexiang and Wang, Peiyu and others},
  journal={arXiv preprint arXiv:2601.15664},
  year={2026}
}
```