Improve model card: Add pipeline tag, library name, paper, project, and code links

#1 opened by nielsr (HF Staff)
Files changed (1)
README.md +55 -6
README.md CHANGED
@@ -1,15 +1,26 @@
  ---
+ base_model: Qwen/Qwen-Image
+ license: apache-2.0
  tags:
  - text-to-image
  - lora
  - diffusers
  - template:diffusion-lora
- base_model: Qwen/Qwen-Image
  instance_prompt: Qwen-Image, distillation
- license: apache-2.0
+ pipeline_tag: text-to-image
+ library_name: diffusers
  ---
+
+ # Glance: Accelerating Diffusion Models with 1 Sample
+
+ This model was presented in the paper [Glance: Accelerating Diffusion Models with 1 Sample](https://huggingface.co/papers/2512.02899).
+ Project Page: https://zhuobaidong.github.io/Glance/
+ Code: https://github.com/CSU-JPG/Glance
+
+ ## About
+ Glance accelerates diffusion models by intelligently speeding up the denoising phases. Instead of costly retraining, it equips base models with lightweight Slow-LoRA and Fast-LoRA adapters. This method achieves up to 5x acceleration over the base model while maintaining comparable visual quality and strong generalization on unseen prompts. Notably, the LoRA experts are trained with only 1 sample within an hour.
+
  # 🧪 Usage
- ---

  ## 🎨 Inference

@@ -64,6 +75,44 @@ image.save("output.png")

  ![Sample Output](./assets/qwen.png)

- ---
- license: apache-2.0
- ---
+ ## 🚀 Training
+
+ ### Glance_Qwen Training
+
+ To start training with your configuration file, simply run:
+
+ ```bash
+ accelerate launch train_Glance_qwen.py --config ./train_configs/Glance_qwen.yaml
+ ```
+
+ > Note: All the training code is primarily based on [flymyai-lora-trainer](https://github.com/FlyMyAI/flymyai-lora-trainer).
+
+ Ensure that `Glance_qwen.yaml` is properly configured with your dataset paths, model settings, output directory, and other hyperparameters. You can also explicitly specify whether to train the **Slow-LoRA** or **Fast-LoRA** variant directly within the configuration file.
+
+ If you want to train on a **single GPU** (requires **less than 24 GB** of VRAM), run:
+
+ ```bash
+ python train_Glance_qwen.py --config ./train_configs/Glance_qwen.yaml
+ ```
+
+ ### Glance_FLUX Training
+
+ To launch training for the FLUX variant, run:
+
+ ```bash
+ accelerate launch train_Glance_flux.py --config ./train_configs/Glance_flux.yaml
+ ```
+
+ ## Citation
+ ```
+ @misc{dong2025glanceacceleratingdiffusionmodels,
+   title={Glance: Accelerating Diffusion Models with 1 Sample},
+   author={Zhuobai Dong and Rui Zhao and Songjie Wu and Junchao Yi and Linjie Li and Zhengyuan Yang and Lijuan Wang and Alex Jinpeng Wang},
+   year={2025},
+   eprint={2512.02899},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2512.02899},
+ }
+ ```
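
The training section requires `Glance_qwen.yaml` to carry dataset paths, model settings, an output directory, and the Slow-/Fast-LoRA choice. A hypothetical sketch of such a config — every key name here is an assumption; consult the repo's `train_configs/` for the real schema:

```yaml
# Hypothetical Glance_qwen.yaml layout (key names are illustrative only).
pretrained_model: Qwen/Qwen-Image   # base model to adapt
lora_variant: slow                  # "slow" (Slow-LoRA) or "fast" (Fast-LoRA)
lora_rank: 16
train_data_dir: ./data/one_sample   # the single training sample
output_dir: ./outputs/glance_qwen
learning_rate: 1.0e-4
max_train_steps: 1000
mixed_precision: bf16
```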
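
A note on the Slow-LoRA/Fast-LoRA setup this PR describes: it maps naturally onto the standard diffusers LoRA APIs (`load_lora_weights`, `set_adapters`). The sketch below is only an illustration — the adapter file names and the helper function are assumptions, not taken from the repo:

```python
def load_glance_pipeline(base="Qwen/Qwen-Image",
                         slow_lora="slow_lora.safetensors",
                         fast_lora="fast_lora.safetensors"):
    """Load the base pipeline and attach Slow/Fast LoRA adapters.

    The adapter file names are placeholders; check the model repo for the
    actual weight files. Requires `diffusers`, `torch`, and a GPU.
    """
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(base, torch_dtype=torch.bfloat16)
    # `load_lora_weights` / `set_adapters` are the standard diffusers
    # multi-adapter LoRA entry points.
    pipe.load_lora_weights(slow_lora, adapter_name="slow")
    pipe.load_lora_weights(fast_lora, adapter_name="fast")
    pipe.set_adapters(["slow"])  # switch to "fast" for the fast denoising phase
    return pipe
```

Switching adapters between denoising phases is the part that is Glance-specific; consult the project code for how the slow-to-fast handoff is actually scheduled.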