Update README.md
README.md CHANGED
@@ -5,4 +5,27 @@ base_model:
 new_version: black-forest-labs/FLUX.1-dev
 pipeline_tag: text-to-image
 library_name: adapter-transformers
----
+---
+# TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps
+
+<p align="center">
+📃 <a href="https://arxiv.org/html/2406.05768v5" target="_blank">Paper</a> •
+</p>
+
+<!-- **TLCM: Training-efficient Latent Consistency Model for Image Generation with 2-8 Steps** -->
+
+<!-- Our method accelerates LDMs via data-free multistep latent consistency distillation (MLCD), and data-free latent consistency distillation is proposed to efficiently guarantee the inter-segment consistency in MLCD.
+
+Furthermore, we introduce bags of techniques, e.g., distribution matching, adversarial learning, and preference learning, to enhance TLCM’s performance at few-step inference without any real data.
+
+TLCM demonstrates a high level of flexibility by enabling adjustment of sampling steps within the range of 2 to 8 while still producing competitive outputs compared
+to full-step approaches. -->
+We propose an innovative two-stage data-free consistency distillation (TDCD) approach to accelerate latent consistency models. The first stage improves the consistency constraint via data-free sub-segment consistency distillation (DSCD). The second stage enforces
+global consistency across inter-segments through data-free consistency distillation (DCD). In addition, we explore various
+techniques to promote TLCM’s performance in a data-free manner, forming the Training-efficient Latent Consistency
+Model (TLCM) with 2-8-step inference.
+
+TLCM demonstrates a high level of flexibility by enabling adjustment of sampling steps within the range of 2 to 8 while still producing competitive outputs compared
+to full-step approaches.
+
+## This is the Flux-based LoRA.
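The card above describes a LoRA adapter meant to be applied on top of `black-forest-labs/FLUX.1-dev` for 2-8 step text-to-image sampling. Below is a minimal usage sketch with diffusers' `FluxPipeline`; it assumes the adapter ships as standard LoRA weights, and the repository id and weight filename are placeholders rather than confirmed names.

```python
# Minimal sketch (assumptions): load FLUX.1-dev, attach the TLCM LoRA, and sample
# with a reduced step count. "<tlcm-repo-id>" and "<tlcm_flux_lora.safetensors>"
# are placeholders for this repository's actual id and weight file.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("<tlcm-repo-id>", weight_name="<tlcm_flux_lora.safetensors>")
pipe.to("cuda")

# TLCM targets 2-8 sampling steps; 4 steps is used here as a middle ground.
image = pipe(
    prompt="a cinematic photo of a red fox in a snowy forest",
    num_inference_steps=4,
    guidance_scale=3.5,
    height=1024,
    width=1024,
).images[0]
image.save("tlcm_flux_4step.png")
```

Depending on how the adapter was distilled, a specific scheduler or LoRA scale may be required; check the files and any example script in this repository before relying on the defaults above.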
|