HARRY07979 committed on
Commit b261654 · verified · 1 Parent(s): 552170d

Update README.md

Files changed (1)
  1. README.md +5 -18
README.md CHANGED
@@ -31,36 +31,21 @@ Clean results.
 
 For best results:
 
-```python
 num_inference_steps = 6
 guidance_scale = 1.5
 scheduler = LCMScheduler
+
 Lower guidance gives cleaner results.
 Higher guidance may introduce instability due to LCM behavior.
 
 🧠 Why LiteVision?
-Traditional SD 1.5 models require 20–30 steps for good quality.
-
-LiteVision-v1 is tuned for:
-
-Faster sampling
-
-Lower compute
-
-Minimal VRAM usage
+Traditional SD 1.5 models require 20–30 steps for good quality. LiteVision ONLY requires about 6 steps.
 
-Rapid prototyping
 
-Perfect for:
-
-Real-time applications
-
-Low-power GPUs
-
-Quick generation workflows
 
 🛠 Usage
 python
+```
 Copy code
 import torch
 from diffusers import DiffusionPipeline
@@ -79,6 +64,8 @@ image = pipe(
 ).images[0]
 
 image.save("litevision_output.png")
+```
+
 📦 Architecture
 UNet: SD 1.5 compatible