Commit 9fe6e15 (parent f090627): Update d-adaptation/notes.md
Over 1000 has not shown much improvement.

## 2.X models
LoRA training so far on wd1.5, replicant, and subtly has shown poor performance when the trained LoRA is used on another model. See sample in amber.
Notably, replicant is highly stylized, and a LoRA trained on replicant, when used on replicant itself, deviates sharply from replicant's art style. This suggests the LoRA learned many style-related concepts, the opposite of what we want for a character LoRA.
The initial set of trained LoRAs was more usable at lower strengths, which is motivating continued research into training for longer at lower learning rates.
It was noted that v-prediction finetuning required lower learning rates; that might apply to LoRA training as well.
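The lower-strength observation above has a simple numerical reading: merging a LoRA at strength `s` scales the entire learned low-rank delta, so `s < 1` uniformly dampens everything the LoRA picked up, style leakage included. A minimal numpy sketch (the function name and shapes are illustrative, not the actual merge code of any trainer):

```python
import numpy as np

def apply_lora(w_base, lora_down, lora_up, alpha, strength=1.0):
    """Merge a LoRA delta into a base weight at a given strength.

    The learned update is the low-rank product up @ down, scaled by
    alpha / rank; `strength` scales the whole delta, so strength < 1.0
    uniformly dampens whatever the LoRA learned.
    """
    rank = lora_down.shape[0]
    delta = (lora_up @ lora_down) * (alpha / rank)
    return w_base + strength * delta

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
down = rng.normal(size=(2, 8))   # rank-2 "down" projection
up = rng.normal(size=(8, 2))     # rank-2 "up" projection

w_half = apply_lora(w, down, up, alpha=2.0, strength=0.5)
w_full = apply_lora(w, down, up, alpha=2.0, strength=1.0)

# Strength 0.5 lands exactly halfway between the base and the full merge.
assert np.allclose(w_half, (w + w_full) / 2)
```

Because the scaling is linear, lowering strength is a blunt instrument: it cannot separate the character concepts we want from the style concepts we don't, which is why longer training at lower learning rates is the more promising direction.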

We can see that the 1.X-based models are much more similar to one another, which allows LoRAs to transfer well between them.
Similarity between models, measured with JosephCheung's tool. Thanks to qromaiko for running it and bringing this up.
```
99.95% - Anything\Anything-V3.0-ema-pruned.safetensors [2ea31c17]
98.25% - Anything\Anything-V3.0-pruned.safetensors [2ea31c17]
97.57% - 7th\7th_anime_v3_C-fp16-fix.safetensors [db1dd94e]
95.41% - Elysium\Elysium_Anime_V3.safetensors [1a97f4ef]
95.36% - Orange\AOM3A1.safetensors [9600da17]
94.79% - Orange\AOM2_Hard-fp16-fix.safetensors [05e43f1e]
94.74% - Orange\AOM2_sfw-fp16.safetensors [9600da17]
94.70% - Orange\AOM3.safetensors [9600da17]

100.00% - wd15-beta1-fp16.safetensors [0b910e4b]
99.90% - Aikimi_dV3.0.safetensors [0b910e4b]
93.14% - subtly-fp16.safetensors [a2fa5a65]
82.11% - Replicant-V1.0_fp16.safetensors [18007027]
```
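The exact metric the tool uses isn't documented in these notes; a plausible stand-in, assuming it compares raw checkpoint weights, is an averaged cosine similarity over matching tensors (everything below is an illustrative sketch, not the tool's actual code):

```python
import numpy as np

def model_similarity(state_a, state_b):
    """Average cosine similarity between two checkpoints' shared tensors.

    Flattens each pair of tensors with the same key, takes their cosine
    similarity, and averages, reported as a percentage. Identical models
    score 100%; unrelated weights score near 0%.
    """
    sims = []
    for key in state_a.keys() & state_b.keys():
        a = state_a[key].ravel()
        b = state_b[key].ravel()
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 100.0 * float(np.mean(sims))

# Toy "checkpoints": a base, a light finetune, and unrelated weights.
rng = np.random.default_rng(0)
base = {"w1": rng.normal(size=(4, 4)), "w2": rng.normal(size=(4,))}
close = {k: v + 0.01 * rng.normal(size=v.shape) for k, v in base.items()}
far = {k: rng.normal(size=v.shape) for k, v in base.items()}

assert model_similarity(base, base) > 99.99
assert model_similarity(base, close) > model_similarity(base, far)
```

Under this reading, the 82.11% for Replicant against wd1.5 means its weights have drifted far from the base, consistent with its LoRAs transferring poorly.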

## Noise offset
toyxyz has noted that using a high noise offset (above roughly 0.2, it seems) with d-adaptation produces unusable results. It starts out looking better than lower learning rates, but even unet 0.5, text 0.25 with a noise offset of 0.75 does not look usable.
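For reference, the common noise-offset trick adds a per-(batch, channel) constant to the training noise, broadcast across the spatial dimensions, which shifts the mean of what the model learns to predict. A rough numpy sketch of that construction (names are illustrative):

```python
import numpy as np

def offset_noise(shape, noise_offset, rng):
    """Gaussian training noise plus a per-(batch, channel) constant offset.

    The offset term is noise_offset * N(0, 1) drawn once per batch item and
    channel, then broadcast over H and W. Larger noise_offset values shift
    the per-channel mean of the noise further from zero, which is the likely
    source of the instability noted above when combined with d-adaptation's
    aggressive learning rates.
    """
    b, c, h, w = shape
    noise = rng.normal(size=shape)
    offset = noise_offset * rng.normal(size=(b, c, 1, 1))
    return noise + offset

rng = np.random.default_rng(0)
n = offset_noise((2, 4, 8, 8), noise_offset=0.75, rng=rng)
assert n.shape == (2, 4, 8, 8)
```

With noise_offset=0 this reduces to plain Gaussian noise, so the parameter interpolates between standard training and the mean-shifted variant.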