---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-to-image
library_name: diffusers
---

# LiteVision-v1

LiteVision-v1 is a lightweight, fast Stable Diffusion 1.5-based model optimized for low-step inference using the LCM scheduler.

Built for speed. Minimal steps. Clean results.

---

## 🚀 Key Features

- 🔹 Based on the Stable Diffusion 1.5 architecture
- 🔹 Optimized for LCM (Latent Consistency Models)
- 🔹 High-quality output in **4–8 inference steps**
- 🔹 Works with the standard `StableDiffusionPipeline`
- 🔹 No custom pipeline required
- 🔹 Fully compatible with Diffusers 0.36.0

---

## ⚡ Recommended Settings

For best results:

```python
num_inference_steps = 6
guidance_scale = 1.5
scheduler = LCMScheduler
```

Lower guidance gives cleaner results; higher guidance may introduce instability due to LCM behavior.
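
Applying these settings end to end looks like the sketch below. It assumes a CUDA device; `LCMScheduler.from_config` swaps in the recommended scheduler while reusing the shipped scheduler config, and the prompt is only an illustrative placeholder:

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "HyHorX/LiteVision-v1",
    torch_dtype=torch.float16,
).to("cuda")

# Swap in the recommended LCM scheduler, keeping the existing config.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a lighthouse on a rocky coast at dusk",  # illustrative prompt
    num_inference_steps=6,
    guidance_scale=1.5,
).images[0]
```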

---

## 🧠 Why LiteVision?

Traditional SD 1.5 models require 20–30 steps for good quality. LiteVision-v1 is tuned for:

- Faster sampling
- Lower compute
- Minimal VRAM usage
- Rapid prototyping

Perfect for:

- Real-time applications
- Low-power GPUs
- Quick generation workflows

---

## 🛠 Usage

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "HyHorX/LiteVision-v1",
    torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "cinematic cyberpunk city, ultra detailed",
    num_inference_steps=6,
    guidance_scale=1.5,
    height=512,
    width=512
).images[0]

image.save("litevision_output.png")
```

---

## 📦 Architecture

- UNet: SD 1.5 compatible
- VAE: AutoencoderKL
- Text Encoder: CLIP
- Scheduler: LCMScheduler (recommended)
- Safety Checker: StableDiffusionSafetyChecker

---

## 🧪 Benchmark (LCM)

| Model         | Steps | Time | Quality |
|---------------|-------|------|---------|
| SD 1.5        | 20    | Slow | High    |
| LiteVision-v1 | 6     | Fast | High    |

(Tested on an RTX-class GPU)
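
Since per-image latency is dominated by UNet evaluations, the step counts above give a back-of-the-envelope speedup estimate (this ignores fixed costs such as text encoding and VAE decoding):

```python
# Rough speedup estimate from step counts alone.
baseline_steps = 20    # typical SD 1.5 sampling
litevision_steps = 6   # recommended for LiteVision-v1

speedup = baseline_steps / litevision_steps
print(f"~{speedup:.1f}x fewer UNet evaluations per image")  # → ~3.3x
```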

---

## ⚠ Notes

- Designed specifically for the LCM scheduler.
- Not tuned for traditional DDIM/PNDM high-step sampling.
- Use FP16 for optimal performance.
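
On memory-constrained GPUs, FP16 can be combined with the standard Diffusers memory helpers. A minimal sketch — it assumes `accelerate` is installed for CPU offload, and actual savings depend on hardware:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "HyHorX/LiteVision-v1",
    torch_dtype=torch.float16,  # FP16, as recommended above
)
pipe.enable_attention_slicing()  # compute attention in slices to cut peak VRAM
pipe.enable_model_cpu_offload()  # keep submodules on CPU until they are needed
```

With `enable_model_cpu_offload()`, the pipeline moves submodules to the GPU on demand, so no explicit `.to("cuda")` call is needed.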

---

## 👤 Author

HyHorX

## License

Same as Stable Diffusion 1.5 (CreativeML OpenRAIL-M).