---
pipeline_tag: any-to-any
library_name: transformers
tags:
- text-to-image
- image-editing
- image-understanding
- vision-language
- multimodal
- unified-model
license: mit
---

## 🌌 UniPic3-DMD Model (Distribution Matching Distillation)
<div align="center">
  <img src="skywork-logo.png" alt="Skywork Logo" width="500">
</div>

<p align="center">
  <a href="https://github.com/SkyworkAI/UniPic">
    <img src="https://img.shields.io/badge/GitHub-UniPic-blue?logo=github" alt="GitHub Repo">
  </a>
  <a href="https://github.com/SkyworkAI/UniPic/stargazers">
    <img src="https://img.shields.io/github/stars/SkyworkAI/UniPic?style=social" alt="GitHub Stars">
  </a>
  <a href="https://github.com/SkyworkAI/UniPic/network/members">
    <img src="https://img.shields.io/github/forks/SkyworkAI/UniPic?style=social" alt="GitHub Forks">
  </a>
</p>

## 📖 Introduction
<div align="center"> <img src="unipic3.png" alt="Model Teaser" width="720"> </div>

**UniPic3-DMD-Model** is a few-step image editing and multi-image composition model trained with **Distribution Matching Distillation (DMD)**. The model directly matches the **output distribution of a high-quality teacher model**, enabling sharp, visually detailed generations in very few inference steps. It is designed to maximize **perceptual quality and realism** by closely imitating strong proprietary or large teacher models; it is initialized from a consistency-trained checkpoint and further refined via distribution-level distillation.
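
At a high level, DMD trains the few-step generator so that its output distribution matches the teacher's, using the difference of two score estimates as the gradient signal. The sketch below follows the general DMD literature, not code or equations from this repository, and the notation (generator $G_\theta$, "fake" and "real" score models) is assumed for illustration:

```latex
% Sketch of the DMD gradient (following the DMD literature; notation assumed, not from this repo):
% the KL between the generator's output distribution and the teacher's is differentiated via
% the gap between a score model of the generator's samples and the teacher's score model.
\nabla_\theta \, D_{\mathrm{KL}}\!\left( p_{\mathrm{fake}} \,\|\, p_{\mathrm{real}} \right)
\approx \mathbb{E}_{z,\,t}\!\left[
    \big( s_{\mathrm{fake}}(x_t, t) - s_{\mathrm{real}}(x_t, t) \big)\,
    \frac{\partial G_\theta(z)}{\partial \theta}
\right]
```

Intuitively, wherever the generator's sample distribution diverges from the teacher's, the score gap pushes the generator's parameters back toward the teacher's distribution, which is why few inference steps can still yield sharp results.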

## 📊 Benchmarks
<div align="center"> <img src="unipic3_eval.png" alt="Benchmark Results" width="720"> </div>

## 🧠 Usage

### 1. Clone the Repository
```bash
git clone https://github.com/SkyworkAI/UniPic
cd UniPic-3
```

### 2. Set Up the Environment
```bash
conda create -n unipic3 python=3.10
conda activate unipic3
pip install -r requirements.txt
```

### 3. Batch Inference
```bash
transformer_path="Skywork/Unipic3-DMD/ema_transformer"

python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 --use_env \
  qwen_image_edit_fast/batch_inference.py \
  --jsonl_path data/val.jsonl \
  --output_dir work_dirs/output \
  --distributed \
  --num_inference_steps 4 \
  --true_cfg_scale 4.0 \
  --transformer "$transformer_path" \
  --skip_existing
```
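
The batch script reads edit requests from `data/val.jsonl`. The exact record schema is defined by `qwen_image_edit_fast/batch_inference.py` in the repository; the snippet below is only a minimal sketch of preparing such a JSONL file, with the field names (`image`, `prompt`) assumed for illustration rather than taken from the repo.

```python
import json
from pathlib import Path

# Hypothetical records: the real field names are defined by
# qwen_image_edit_fast/batch_inference.py, not by this sketch.
records = [
    {"image": "inputs/cat.png", "prompt": "make the cat wear a red scarf"},
    {"image": "inputs/room.png", "prompt": "remove the lamp on the left"},
]

path = Path("data/val.jsonl")
path.parent.mkdir(parents=True, exist_ok=True)

# JSONL format: one JSON object per line.
with path.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Round-trip check: each line parses back to one record.
loaded = [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines()]
print(len(loaded))  # one parsed record per input line
```

JSONL is line-delimited, so the script can stream and shard the file across ranks without loading it all at once, which is why `--distributed` batch runners commonly use it.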

## 📄 License
This model is released under the MIT License.

## Citation
If you use Skywork-UniPic in your research, please cite:
```bibtex
@misc{wang2025skyworkunipicunifiedautoregressive,
  title={Skywork UniPic: Unified Autoregressive Modeling for Visual Understanding and Generation},
  author={Peiyu Wang and Yi Peng and Yimeng Gan and Liang Hu and Tianyidan Xie and Xiaokun Wang and Yichen Wei and Chuanxin Tang and Bo Zhu and Changshi Li and Hongyang Wei and Eric Li and Xuchen Song and Yang Liu and Yahui Zhou},
  year={2025},
  eprint={2508.03320},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2508.03320},
}

@misc{wei2025skyworkunipic20building,
  title={Skywork UniPic 2.0: Building Kontext Model with Online RL for Unified Multimodal Model},
  author={Hongyang Wei and Baixin Xu and Hongbo Liu and Cyrus Wu and Jie Liu and Yi Peng and Peiyu Wang and Zexiang Liu and Jingwen He and Yidan Xietian and Chuanxin Tang and Zidong Wang and Yichen Wei and Liang Hu and Boyi Jiang and William Li and Ying He and Yang Liu and Xuchen Song and Eric Li and Yahui Zhou},
  year={2025},
  eprint={2509.04548},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.04548},
}
```