Update README.md

My node allows you to use one or two images with the Fun Inp model, and it also …
Additionally, it supports using one or two prompts to mix the two images.

NEW TRAINING LORAs:
I've added a new LoRA that I created: dg_wan2_1_v1_3b_lora_extra_noise_detail_motion.safetensors.
It was trained on over 10,000 images, but not to reproduce them graphically; instead, it's trained to replicate their initial noise patterns.
This LoRA adds a bit more realism and detail, and it also introduces motion. It should be used at relatively low strength.
At strengths between 0.01 and 0.35, it works very well with the T2V model.
I haven’t had time to test it with other models yet.
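The strength setting scales how strongly a LoRA's low-rank update is merged into the base model's weights, which is why a small value like 0.01–0.35 gives only a light touch of extra noise detail and motion. A minimal sketch of that scaling, with a made-up layer size, rank, and `apply_lora` helper (not this repo's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical layer width and LoRA rank

W = rng.normal(size=(d, d))          # base model weight
A = rng.normal(size=(r, d)) * 0.01   # LoRA down-projection
B = rng.normal(size=(d, r)) * 0.01   # LoRA up-projection

def apply_lora(W, A, B, strength):
    # W' = W + strength * (B @ A): strength linearly scales the LoRA delta
    return W + strength * (B @ A)

low = apply_lora(W, A, B, 0.2)   # inside the recommended 0.01-0.35 range
high = apply_lora(W, A, B, 1.0)  # full strength, five times the perturbation
```

At strength 0.2 the merged weights deviate from the base exactly one fifth as much as at 1.0, which is the sense in which "low strength" keeps the effect subtle.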
I’ve converted Wan2.1-Fun-1.3B-InP-HPS2.1_lora.safetensors & Wan2.1-Fun-1.3B-InP-MPS_lora_new.safetensors to make it compatible with the Fun model and ComfyUI.
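Converting a LoRA checkpoint for a different loader typically comes down to renaming the tensor keys into the prefix layout the target code expects. A sketch of that idea, where the prefix strings are hypothetical placeholders and not the real key names in these files:

```python
# Sketch of a LoRA key-conversion step. The prefixes below are
# illustrative only; real Wan/ComfyUI checkpoints use their own layouts.
def convert_keys(state_dict, old_prefix="lora_unet.", new_prefix="diffusion_model."):
    """Return a copy of state_dict with old_prefix swapped for new_prefix."""
    converted = {}
    for key, tensor in state_dict.items():
        if key.startswith(old_prefix):
            key = new_prefix + key[len(old_prefix):]
        converted[key] = tensor
    return converted

# dummy stand-ins for tensors loaded from a .safetensors file
dummy = {"lora_unet.blocks.0.lora_down": [0.0], "alpha": [1.0]}
renamed = convert_keys(dummy)
```

Keys without the expected prefix (like `alpha` here) pass through unchanged, so a partial conversion is easy to spot by inspecting the output keys.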
I'm working on a Fun-model version of my boost method, but with this model the effect isn't exactly the same as with the T2V model.
The boost does help, but its impact is noticeably smaller than with the T2V model.