Update README.md
README.md
The timestep bias hurt the model, yes, but it still inferences using the flow-pat…
This tells me that the training DID impact it in a way that regularized it, and the system isn't dead yet.
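For context, "timestep bias" in flow-matching training usually means sampling the training timestep t non-uniformly rather than uniformly over (0, 1). A minimal sketch of that idea, where the logit-normal choice and all function names are illustrative assumptions, not taken from this commit:

```python
import math
import random

def sample_timestep(biased: bool = True) -> float:
    """Sample a training timestep t in (0, 1).

    A uniform draw treats all noise levels equally; a biased draw
    (here logit-normal, one common choice) concentrates training on
    mid-range timesteps, which can act like a regularizer.
    """
    if not biased:
        return random.random()
    # Logit-normal: push a Gaussian sample through the sigmoid.
    z = random.gauss(0.0, 1.0)
    return 1.0 / (1.0 + math.exp(-z))

def flow_matching_pair(x0: float, x1: float, t: float):
    """Linear flow path: interpolate between noise x0 and data x1,
    and return the constant velocity target v = x1 - x0 that the
    model regresses at x_t."""
    xt = (1.0 - t) * x0 + t * x1
    v = x1 - x0
    return xt, v
```

Even if the bias skews which timesteps get gradient signal, inference still just integrates along the same linear flow path, which is consistent with the model still inferencing correctly.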
A model I once performed multiple subsequent trains on was a Flux variant named Flux 1D 2 - essentially a primed variation of Flux 1D, said to have increased LoRA fidelity.
Well, it didn't really provide any additional context-finetuning capability overall - however, when inferenced it did look a lot like how this model looks. f1d2's primary failpoint was my attempting to create additional finetunes from the original, which meant each subsequent train essentially had to relearn the same patterns - I was using it incorrectly.
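One way to read that failpoint: treating each finetune as the starting point for the next chains every later train on top of the earlier ones, so each run must cope with (or relearn) what the previous runs changed. A toy sketch of the two workflows - function names and the placeholder train() are assumptions for illustration:

```python
def chained_finetunes(base, datasets, train):
    """The failure mode described above: each train starts from the
    previous finetune, so every run sits on top of (and can disturb)
    the patterns the earlier runs learned."""
    model, runs = base, []
    for d in datasets:
        model = train(model, d)
        runs.append(model)
    return runs

def independent_finetunes(base, datasets, train):
    """The alternative: every train restarts from the same base
    checkpoint, so no run has to relearn another run's patterns."""
    return [train(base, d) for d in datasets]
```

With a trivial stand-in like `train = lambda m, d: m + [d]`, the chained version accumulates every prior dataset into each result while the independent version keeps each run isolated - which is the usage difference being described.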