license: openrail++
inference: false
---

# Segmind-VegaRT - Latent Consistency Model (LCM) LoRA of Segmind-Vega
Segmind-VegaRT is a distilled consistency adapter for [`segmind/Segmind-Vega`](https://huggingface.co/segmind/Segmind_Vega) that allows
the number of inference steps to be reduced to only **2 - 8 steps**.
Latent Consistency Model (LCM) LoRA was proposed in [LCM-LoRA: A universal Stable-Diffusion Acceleration Module](https://arxiv.org/abs/2311.05556)
by *Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu et al.*
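
As a minimal usage sketch with the `diffusers` library (assumptions: a CUDA GPU, and `segmind/Segmind-VegaRT` as the repo id for this adapter's weights — check the repo name before running), the adapter is loaded as a LoRA on top of the base model and paired with the LCM scheduler:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load the base model named in this card.
pipe = DiffusionPipeline.from_pretrained(
    "segmind/Segmind-Vega", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler, which the consistency adapter is distilled for.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Load the LCM-LoRA adapter weights (repo id assumed here).
pipe.load_lora_weights("segmind/Segmind-VegaRT")

# With the adapter, 2 - 8 steps suffice; classifier-free guidance is
# typically disabled (guidance_scale=0) for LCM inference.
image = pipe(
    "a close-up photo of a majestic lion",
    num_inference_steps=4,
    guidance_scale=0,
).images[0]
image.save("lion.png")
```

Lower step counts trade a little detail for speed; 4 steps is a common starting point.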
# Image comparison (Segmind-VegaRT vs SDXL-Turbo)
