This tells me that the training DID impact it in a way that regularized it, and
There was a model I once did many trains on - a flux variant named Flux 1D 2 - which was essentially a primed variation of Flux 1D said to have increased LoRA fidelity.
Well, it didn't really provide any additional context-finetuning capability overall - however, it did look a lot like how this model looks when inferenced. F1D2's primary failpoint was my attempting to create additional finetunes from the original, which meant each new train was essentially required to relearn the same patterns from scratch - so I was using it incorrectly.
I get the distinct feeling that if I train this model based on what I learned using F1D2, it will respond directly to DreamBooth and relearn those broken early-timestep zones with teacher/student regularization.
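The teacher/student idea could be sketched as a loss that mixes the usual DreamBooth-style noise-prediction objective with a distillation term pulling the student toward a frozen teacher, weighted more heavily at the early timesteps assumed to be broken. Everything here - the function name, the linear weighting schedule, the timestep convention - is a hypothetical illustration, not the actual training code:

```python
import numpy as np

def distill_loss(student_pred, teacher_pred, target, t, t_max=1000, alpha=0.5):
    """Hypothetical combined DreamBooth + teacher/student loss (sketch).

    student_pred / teacher_pred: noise predictions at diffusion timestep t
    target: ground-truth noise for the finetuning objective
    alpha: base weight of the distillation term; it is ramped up at
           early timesteps (small t), where the checkpoint is assumed broken.
    """
    # Standard MSE against the true noise (the finetuning task itself).
    task = np.mean((student_pred - target) ** 2)
    # Regularize toward the frozen teacher, more strongly at early timesteps.
    w = alpha * (1.0 - t / t_max)
    distill = np.mean((student_pred - teacher_pred) ** 2)
    return task + w * distill
```

The point of the schedule is that the teacher only constrains the zones believed to be damaged, leaving later timesteps free to adapt to the new concept.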
It's a bit of a long shot, but it's already pattern-recognizing through training, so it's very possible that it could at least show some promise.