Danrisi committed (verified) · Commit 8a6d9aa · Parent(s): d60dca8

Update README.md
Files changed (1): README.md (+32 −0)
tags:
  - realistic
  - lora
---
# Lenovo ChromaRadiance

## 🛠️ Usage Instructions

To use LoRAs with Chroma Radiance, you need to replace two of your ComfyUI files with the modified versions provided in this repository:

1. **Replace `lora.py` (Main):**
   Take the file from the `comfy` folder in this repo and replace the file at:
   `ComfyUI/comfy/lora.py`

2. **Replace `lora.py` (Weight Adapter):**
   Take the file from the `weight_adapter` folder in this repo and replace the file at:
   `ComfyUI/comfy/weight_adapter/lora.py`

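The two replacement steps above can be sketched with Python's standard library. The directory names below are placeholders, and the snippet runs inside a `scratch` directory with mock files so it is safe to try as-is; with your real paths, only the final copy loop is needed.

```python
# A minimal sketch of the two file replacements above, assuming this
# repository and ComfyUI sit side by side. "repo" and "ComfyUI" here
# are mock stand-ins created under a scratch directory.
import shutil
from pathlib import Path

root = Path("scratch")
repo = root / "repo"        # placeholder: a clone of this repository
comfy = root / "ComfyUI"    # placeholder: your ComfyUI install

# Build a mock layout standing in for the real directories.
for f in [repo / "comfy" / "lora.py",
          repo / "weight_adapter" / "lora.py",
          comfy / "comfy" / "lora.py",
          comfy / "comfy" / "weight_adapter" / "lora.py"]:
    f.parent.mkdir(parents=True, exist_ok=True)
    f.write_text("patched\n" if repo in f.parents else "stock\n")

# Step 1 and Step 2: back up each stock file, then overwrite it with
# the modified version from this repository.
for src, dst in [
    (repo / "comfy" / "lora.py",
     comfy / "comfy" / "lora.py"),
    (repo / "weight_adapter" / "lora.py",
     comfy / "comfy" / "weight_adapter" / "lora.py"),
]:
    shutil.copy2(dst, dst.with_name(dst.name + ".bak"))  # keep a backup
    shutil.copy2(src, dst)
```

Keeping a `.bak` copy of each stock file makes it easy to roll the changes back if a future ComfyUI update conflicts with the patched files.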
### Workflow & Notes

Please use the workflow provided in this repository for the best results.

> **Observation:** In my personal testing, GGUF generation takes a bit longer, but the quality appears slightly better. It is hard to say for sure, though, as I haven't had time for extensive testing yet.

### 💡 Generation Tips

1. **Samplers & Settings:**
   * **Best Quality:** I get the best results with `fully_implicit` samplers (such as `radau_iia_2s` and `gauss-legendre_2s`). For these, **20-30 steps** are sufficient, and you should keep **bong_math** disabled.
   * **Alternative (Faster):** `dpmpp3m` + `beta` also works well with **50 steps** and **bong_math** enabled. It generates faster, but the output tends to be slightly noisier.

2. **LoRA Strength:**
   * Recommended strength: **0.85 - 1.0**.

3. **Combinations:**
   * This model can be combined with my **NiceGirls** LoRA.
   * *Suggested weights:* NiceGirls at **0.6** + Lenovo at **0.8**.
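For quick reference, the recommendations above can be collected as plain data. The key names are this README's own shorthand for comparison, not actual ComfyUI node parameters.

```python
# The generation tips above as plain data. Key names are informal
# shorthand, not real ComfyUI node parameters.
PRESETS = {
    "best_quality": {
        "samplers": ["radau_iia_2s", "gauss-legendre_2s"],  # fully_implicit
        "steps": (20, 30),       # 20-30 steps are sufficient
        "bong_math": False,      # keep disabled
    },
    "faster": {
        "samplers": ["dpmpp3m"],
        "scheduler": "beta",
        "steps": (50, 50),       # 50 steps
        "bong_math": True,       # faster, slightly noisier output
    },
}

LORA_STRENGTH = (0.85, 1.0)                  # recommended range
COMBO = {"NiceGirls": 0.6, "Lenovo": 0.8}    # suggested pairing
```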

---

## Example Generations