---
pipeline_tag: any-to-any
library_name: transformers
tags:
- text-to-image
- image-editing
- image-understanding
- vision-language
- multimodal
- unified-model
- teacher-model
- diffusion
license: mit
---

## 🌌 UniPic3-Teacher-Model
<div align="center">
  <img src="skywork-logo.png" alt="Skywork Logo" width="500">
</div>

<p align="center">
  <a href="https://github.com/SkyworkAI/UniPic">
    <img src="https://img.shields.io/badge/GitHub-UniPic-blue?logo=github" alt="GitHub Repo">
  </a>
  <a href="https://github.com/SkyworkAI/UniPic/stargazers">
    <img src="https://img.shields.io/github/stars/SkyworkAI/UniPic?style=social" alt="GitHub Stars">
  </a>
  <a href="https://github.com/SkyworkAI/UniPic/network/members">
    <img src="https://img.shields.io/github/forks/SkyworkAI/UniPic?style=social" alt="GitHub Forks">
  </a>
</p>

## 📖 Introduction
<div align="center"> <img src="unipic3.png" alt="Model Teaser" width="720"> </div>

**UniPic3-Teacher-Model** is the **high-quality teacher diffusion model** used in the UniPic 3.0 framework.
It is trained with **full multi-step diffusion sampling** and optimized for **maximum perceptual quality, semantic consistency, and realism**.

This model serves as the **teacher backbone** for:
- **Distribution Matching Distillation (DMD)**
- **Consistency / trajectory distillation**
- **Few-step student model training**

Rather than being optimized for fast inference, the teacher model prioritizes **generation fidelity and stability**, providing a strong and reliable supervision signal for downstream distilled models.
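
The teacher-supervises-student relationship can be sketched with a toy, purely illustrative example (pure NumPy; the linear one-dimensional "denoisers" and every name here are invented for illustration and are not the UniPic3 training code): the teacher's full multi-step trajectory produces the target that a few-step student is regressed toward.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_denoise(x, steps=8):
    """Toy multi-step 'teacher': repeatedly pull samples toward the data mean (0)."""
    for _ in range(steps):
        x = x - 0.25 * x  # one small denoising step
    return x

def student_denoise(x, w):
    """Toy one-step 'student': a single learned linear step x -> w * x."""
    return w * x

# Noisy 1-D samples standing in for latents
x_noisy = rng.normal(size=1024)

# High-fidelity target from the teacher's full multi-step sampling
target = teacher_denoise(x_noisy, steps=8)

# Fit the student's single step to match the teacher's multi-step output
# (closed-form least squares for this linear toy: w * x ≈ target)
w = float(np.dot(x_noisy, target) / np.dot(x_noisy, x_noisy))
loss = float(np.mean((student_denoise(x_noisy, w) - target) ** 2))

print(round(w, 4), round(loss, 6))  # → 0.1001 0.0
```

In this toy the student recovers the teacher's eight steps exactly in one step; real DMD/consistency training replaces the closed-form fit with gradient-based matching of score or trajectory distributions.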

---

## 🧠 Model Characteristics

- **Role**: Teacher model (not a distilled student)
- **Sampling**: Multi-step diffusion (high-fidelity)
- **Architecture**: Unified UniPic3 Transformer
- **Tasks Supported**:
  - Single-image editing
  - Multi-image composition (2–6 images)
  - Human–Object Interaction (HOI)
- **Resolution**: Flexible, within pixel budget constraints
- **Training Objective**:
  - Flow Matching / Diffusion loss
  - Used as teacher for DMD & consistency training
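
"Flexible resolution within a pixel budget" typically means snapping a requested aspect ratio to the largest width/height pair, in multiples of the model's spatial stride, whose product stays under a pixel cap. A minimal sketch of that idea — the 1024×1024-pixel budget and the stride of 64 are assumptions for illustration, not UniPic3's actual constants:

```python
import math

def fit_to_pixel_budget(aspect_w, aspect_h, budget=1024 * 1024, stride=64):
    """Return (width, height) close to the requested aspect ratio, rounded
    down to multiples of `stride`, with width * height <= budget."""
    # Ideal continuous dimensions for this aspect ratio at the full budget
    scale = math.sqrt(budget / (aspect_w * aspect_h))
    w = max(stride, int(aspect_w * scale) // stride * stride)
    h = max(stride, int(aspect_h * scale) // stride * stride)
    # Shrink the larger side if rounding ever pushed us over budget
    while w * h > budget:
        if w >= h:
            w -= stride
        else:
            h -= stride
    return w, h

print(fit_to_pixel_budget(16, 9))  # 16:9 under a 1 MP budget → (1344, 768)
print(fit_to_pixel_budget(1, 1))   # square → (1024, 1024)
```

Rounding to the stride keeps dimensions compatible with patch-based transformer backbones; the budget bounds memory regardless of aspect ratio.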

---

## 📊 Benchmarks
<div align="center"> <img src="unipic3_eval.png" alt="Benchmark Results" width="720"> </div>

This teacher model achieves **state-of-the-art performance** on:
- Image editing benchmarks
- Multi-image composition benchmarks

It provides **high-quality supervision targets** for distilled UniPic3 student models.

---

## ⚠️ Important Note

> **This repository hosts the teacher model.**
> It is **not optimized for few-step inference**.

If you are looking for:
- ⚡ **4–8 step fast inference**
- 🚀 **Deployment-friendly distilled models**

please refer to the **UniPic3-DMD / distilled checkpoints** instead.

---

## 🧠 Usage (Teacher Model)

### 1. Clone the Repository
```bash
git clone https://github.com/SkyworkAI/UniPic
cd UniPic/UniPic-3
```

### 2. Set Up the Environment
```bash
conda create -n unipic3 python=3.10
conda activate unipic3
pip install -r requirements.txt
```

### 3. Batch Inference
```bash
transformer_path="Skywork/Unipic3-DMD/ema_transformer"

python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 --use_env \
  qwen_image_edit_fast/batch_inference.py \
  --jsonl_path data/val.jsonl \
  --output_dir work_dirs/output \
  --distributed \
  --num_inference_steps 4 \
  --true_cfg_scale 4.0 \
  --transformer "$transformer_path" \
  --skip_existing
```
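
The schema of `data/val.jsonl` is not documented in this card. As a purely hypothetical illustration of building such a file — the field names `prompt`, `image`, and `images` are assumptions, so check the repository's data format before use — each line is one JSON record:

```python
import json

# Hypothetical records: the actual field names expected by
# batch_inference.py may differ — consult the UniPic repository.
records = [
    {"prompt": "Replace the sky with a sunset", "image": "imgs/001.png"},
    {"prompt": "Compose the two subjects into one scene",
     "images": ["imgs/002_a.png", "imgs/002_b.png"]},
]

with open("val.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

with open("val.jsonl", encoding="utf-8") as f:
    print(sum(1 for _ in f))  # → 2
```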

## 📄 License
This model is released under the MIT License.

## Citation
If you use Skywork-UniPic in your research, please cite:
```bibtex
@misc{wang2025skyworkunipicunifiedautoregressive,
      title={Skywork UniPic: Unified Autoregressive Modeling for Visual Understanding and Generation},
      author={Peiyu Wang and Yi Peng and Yimeng Gan and Liang Hu and Tianyidan Xie and Xiaokun Wang and Yichen Wei and Chuanxin Tang and Bo Zhu and Changshi Li and Hongyang Wei and Eric Li and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2508.03320},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.03320},
}

@misc{wei2025skyworkunipic20building,
      title={Skywork UniPic 2.0: Building Kontext Model with Online RL for Unified Multimodal Model},
      author={Hongyang Wei and Baixin Xu and Hongbo Liu and Cyrus Wu and Jie Liu and Yi Peng and Peiyu Wang and Zexiang Liu and Jingwen He and Yidan Xietian and Chuanxin Tang and Zidong Wang and Yichen Wei and Liang Hu and Boyi Jiang and William Li and Ying He and Yang Liu and Xuchen Song and Eric Li and Yahui Zhou},
      year={2025},
      eprint={2509.04548},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.04548},
}
```