library_name: diffusers
pipeline_tag: text-to-image
---
# Update: The Geometric Blotter

sd15-flow-sol is taking on a new purpose, as established by the tinyflux-lailah structure's internal expert alignment system.

Simply put: this model is an expert. It is not the sort of expert produced by direct training; instead, it is an expert that represents preserved geometric structure. This structure SURVIVED almost complete obliteration because the internals were directly aligned with David's opinions, which destroyed fidelity and quality while simultaneously preserving the underlying geometric structure applied by the DDPM noise diffusion process. The outcomes were disappointing, because I had expected the model to preserve its fidelity as well; it did not. However, what the model DID retain was structural sequential awareness, throughout the entire pretrain process.

This process preserved a form of underlying nth-order geometric structure. At the time, I did not realize how important this would become.

The regularization is a combination of Cayley-Menger and geometric k-simplex losses for sequential representation.
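The actual loss used here is not published, so the weighting and the collapse penalty below are assumptions; this is only a minimal sketch of what a Cayley-Menger style regularizer can look like. The Cayley-Menger determinant gives the squared volume of a k-simplex purely from pairwise distances, so penalizing near-zero volumes discourages the represented geometry from collapsing:

```python
import math
import numpy as np

def cayley_menger_volume_sq(points: np.ndarray) -> float:
    """Squared volume of the simplex spanned by `points` ((k+1) x d array),
    computed from pairwise squared distances via the Cayley-Menger determinant."""
    k = points.shape[0] - 1
    # Pairwise squared distances between vertices.
    d2 = np.square(points[:, None, :] - points[None, :, :]).sum(-1)
    # Bordered (k+2) x (k+2) matrix: zero corner, border of ones, distances inside.
    m = np.ones((k + 2, k + 2))
    m[0, 0] = 0.0
    m[1:, 1:] = d2
    coeff = (-1) ** (k + 1) / (2 ** k * math.factorial(k) ** 2)
    return coeff * np.linalg.det(m)

def simplex_collapse_loss(points: np.ndarray, eps: float = 1e-8) -> float:
    """Hypothetical regularizer (an assumption, not the published loss):
    penalize degenerate, near-zero-volume simplices."""
    vol_sq = max(cayley_menger_volume_sq(points), 0.0)
    return -math.log(vol_sq + eps)
```

For a unit right triangle the squared area comes out as 0.25, and a nearly collinear triangle produces a much larger collapse penalty, which is the behavior a structure-preserving regularizer wants.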

Meaning, this structure is built specifically to DIRECTLY align with geometry, while I was attempting to use timestep and pattern classification to reduce the training and computation overhead of converting SD15 into a flow-matching variant with SHIFT-style timesteps.
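For readers unfamiliar with SHIFT-style timesteps: rectified-flow models such as SD3 warp a uniform time grid so that more steps land at high noise. The exact schedule used for the sd15-flow conversion is not published, so the formula and the `shift=3.0` default below are assumptions for illustration:

```python
import numpy as np

def shifted_timesteps(n: int, shift: float = 3.0) -> np.ndarray:
    """Hypothetical SHIFT-style schedule for flow matching: warp uniform
    t in (0, 1] via t' = shift * t / (1 + (shift - 1) * t), concentrating
    steps toward high noise. `shift=3.0` is an assumed value."""
    t = np.linspace(1.0, 1.0 / n, n)  # uniform, descending
    return shift * t / (1.0 + (shift - 1.0) * t)
```

With `shift=1.0` the schedule is unchanged; with `shift > 1` mid-range times are pushed upward (e.g. t = 0.5 maps to 0.75 at shift 3), which is what keeps a flow-matching model spending effort where structure is decided.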

Multiple iterations went through, and sd15-flow-lune was finally established as a functional variant. There were multiple finetunes of this sd15-flow-lune variant, each specifically aligned with the compartmentalization provided by the timestep/pattern classification system during training.

## David is a geometric projector

The structure of David IS projection through geometric preservation. That is the entire purpose, and it is why the David collectives can capture so much information with minimal parameters and minimal requirements.

The substructure of David is gated, and the entire gating system IS geometric gating.
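David's gating implementation is not given here, so the following is only a toy sketch of what "geometric gating" could mean: instead of a learned linear logit, each hidden vector is routed to experts by its distance to per-expert anchor points. The anchor-distance formulation, the temperature, and the function names are all assumptions:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def geometric_gate(h: np.ndarray, anchors: np.ndarray, temp: float = 1.0) -> np.ndarray:
    """Toy 'geometric gate' (hypothetical, not David's actual gating):
    routing weights from squared distance to expert anchors.
    h: (batch, dim), anchors: (n_experts, dim) -> (batch, n_experts)."""
    d2 = np.square(h[:, None, :] - anchors[None, :, :]).sum(-1)
    return softmax(-d2 / temp, axis=-1)
```

The design point is that the routing decision is a function of position in representation space, so experts end up owning geometric regions rather than arbitrary logit directions.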

Larger models like SD15 are built with very specific rules to preserve their structure through LARGE IMAGE SET TRAINING. Meaning, they need to survive being fed multiple billions of images and still have a fair baseline form to finetune into something directly usable.

David finds these patterns. The entire purpose is to regularize along these unknown patterns and allow the David Diffusion structure to compartmentalize them into TIMESTEP BUCKETS and PATTERN BUCKETS.
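A minimal sketch of what that compartmentalization could look like, under stated assumptions: the bucket count, uniform timestep boundaries, and nearest-centroid pattern assignment are all illustrative choices, not the published training code:

```python
import numpy as np

def timestep_bucket(t: np.ndarray, n_buckets: int = 8) -> np.ndarray:
    """Map continuous flow times t in [0, 1] to discrete timestep buckets.
    Uniform boundaries and n_buckets=8 are assumptions."""
    return np.minimum((t * n_buckets).astype(int), n_buckets - 1)

def pattern_bucket(feats: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Assign each feature vector to the nearest pattern centroid.
    feats: (batch, dim), centroids: (n_patterns, dim) -> (batch,) indices."""
    d2 = np.square(feats[:, None, :] - centroids[None, :, :]).sum(-1)
    return d2.argmin(-1)
```

Each training sample then carries a (timestep bucket, pattern bucket) pair, and per-bucket losses can be balanced so no single bucket dominates the shared space.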

The buckets are arbitrary; however, the losses bias this behavior to ensure no single set overwhelms the others, even though they all share the same space.

# David did not break this system

David taught a very specific subset of utility that is present in sd15-flow-lune as well; but flow-lune's fidelity and detail produce INACCURATE representations of that information at higher timesteps.

Meaning, this is our tool: the first distilled geometric structure. I will be writing an article and paper on this topic to ensure everyone interested understands the mathematics of what made the concept work, the faults in the experiment that produce the behavior, and the happy accident that turned out to be one of the most important finds on my list.

# flow lune is ready for toying with
https://huggingface.co/AbstractPhil/sd15-flow-lune

It's a bit of a long shot, but it's already recognizing patterns through training, so it's very possible that it could at least show some promise.

I believe it's worth a shot. This model is Sol, and I don't know if it can be salvaged. Lune, however, showed much more response to DDPM, so I'm going to attempt that version first.

I refuse to yield just yet, not while I still have ideas and tools to work with.