---
license: mit
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- diffusion
- distillation
- flow-matching
- geometric-deep-learning
- research
library_name: diffusers
pipeline_tag: text-to-image
---
# Update; The Geometric Blotter 1/27/2026
sd15-flow-sol is taking on a new purpose, as established by the tinyflux-lailah structure's internal expert alignment system.
Simply put: this model is an expert. Not the sort of expert produced by direct training, but an expert that represents preserved geometric
structure. This structure SURVIVED almost complete obliteration: the internals were directly aligned with David's opinions, destroying fidelity and quality
while simultaneously preserving the geometric underlying structure applied by the DDPM noise diffusion process. The outcomes were disappointing, because I had expected the
model to preserve its fidelity as well, which it did not. However, what the model DID retain was structural sequential awareness - throughout the entire pretrain
process.
This process preserved a form of nth-order geometric underlying structure. At the time, I did not realize how important this would be later on.
The regularization is a combination of Cayley-Menger and geometric k-simplex losses for sequential representation.
Meaning, this structure is built specifically to DIRECTLY align with geometry, while I was using timestep and pattern classification
to reduce training and computation overhead for converting sd15 into an sd15 flow-matching variant with SHIFT-style timesteps.
Multiple iterations went by, and finally sd15-flow-lune was established as a functional variant. There were multiple finetunes from the sd15-flow-lune variant,
each specifically aligned with the compartmentalization provided by the timestep/pattern classification system during training.
## David is a geometric projector
The structure of David IS projection through geometric preservation. That's the entire purpose, and why the David collectives can capture so much information with
minimal parameters and minimal requirements.
The substructure OF DAVID is gated, and the entire gating system IS geometric gating.
Larger models like SD15 are built with very specific rules to preserve their structures for LARGE IMAGE SET TRAINING. Meaning, they need to survive being fed
multiple billions of images and still retain a fair baseline form to finetune into something directly usable.
DAVID finds these patterns. The entire purpose is to regularize along these unknown patterns and allow the David Diffusion structure to compartmentalize them into
TIMESTEP BUCKETS and PATTERN BUCKETS.
They are arbitrary; however, the losses bias this behavior to ensure neither bucket set overwhelms the other - and yet they all share the same space.
# David did not break this system
David taught a very specific subset of utility that is present in sd15-flow-lune as well, but flow-lune's fidelity and detail produce INACCURATE representations of that information at higher timesteps.
Meaning, this is our tool: the first distilled geometric structure. I will be writing a very important article and paper on this topic to ENSURE everyone interested understands the
mathematics of what made the concept work, the faults in the experiment that produced the behavior, WHY I THINK this happened, and the happy bush that turned out to be one of the most important accidental finds on my list.
# Older updates: Oct 2025, Nov 2025, etc.
# Flow-Lune is ready for toying with
https://huggingface.co/AbstractPhil/sd15-flow-lune
The pretraining was fairly successful, and the instructions for use are tied to that repo. The model functions as a proof of concept.
# The plan for restoration
The timestep bias hurt the model, yes, but it still inferences using flow-pattern matching when utilizing the teacher's timesteps and DDIM.
This tells me the training DID impact it in a way that regularized it, and the system isn't dead yet.
There was a model I once performed multiple subsequent trains on - a Flux variant named Flux 1D 2 - essentially a primed variation of Flux 1D, said to have increased LoRA fidelity.
Overall it didn't really provide any additional finetuning capability - however, it did look a lot like how this model looks when inferenced. F1D2's primary failure point was my attempting to create additional finetunes from the original, which meant each train essentially had to relearn the same patterns; I was using it incorrectly.
I get the distinct feeling that if I train this model based on what I learned from F1D2, it will respond directly to dreambooth and relearn those broken early-timestep zones with teacher/student regularization.
It's a bit of a long shot, but it's already pattern-recognizing through training, so it's very possible it could at least show some promise.
I believe it's worth a shot. This model is Sol, and I don't know if it can be salvaged. Lune, however, showed much more response to DDPM, so I'm going to attempt that version first.
I refuse to yield just yet, not while I still have ideas and tools to work with.
The sister model is here:
https://huggingface.co/AbstractPhil/sd15-flow-matching-try2
# This model is burned, but not dead.
Sun and Moon are hereby deemed burned. My next target is to run the train entirely without David.
This was a high-risk possibility, and I gambled. The results were the most likely deterministic outcome, and yet I still ran the experiment. Now, the post-mortem.

In the coming days I'll be preparing a new timestep + pattern distillation system. I expect it to be far more accurate and much more compact.
I've learned from the probes that David's timestep bias pushed the flow-state of the model toward the higher timesteps.
I should have been wiser about this earlier on and weighted it more heavily, but I doubt it would have mattered in the long run.
David simply wasn't ready to teach, and the model is too bulky to learn much more.
The only answer is a smaller and more concise model specific to the task.
The next David will require a much higher % timestep accuracy with direct pattern association.
Instead of attempting to perform multiple tasks, the prototype Zephyr will be given a new form of attention that I've been
prototyping and planning, one that enables much longer sequences with less VRAM for the attention controllers.
# I've decided to name this model
* This model is dubbed SD15 Flow-Matching Sol - twin sister to the alternative Try2 who is named SD15 - Lune.
Sun and Moon.
# Plan Update: 11/1/2025

I'm sticking to the positive side here, knowing that 6 million samples isn't enough to converge sd15.
I believe it will take around 10 million to start SEEING correct shapes with texture other than flat or blob, but I've been wrong before - and we will make happy little bushes out of this if I am.
Our flow-match troopers are trying their best, but the outlook isn't particularly good yet: blobs all the way to epoch 30.
That's roughly 200,000 samples * 30, about 6 million images' worth. Not enough to fully saturate the system, but more than what I used for SDXL v-pred conversions.
There may need to be a refined process with synthetic dreambooth-styled images devoted to top prio, mid prio, and low prio classes.
When the distillation concludes, there will be additional finetuning after with direct images generated from sd15 using class-based specifics in any case.
So, it'll be an interesting outcome for both the baseline starter and the v2 trained version.
I have high hopes either way and I will have the class-based dreambooth-style selector ready to immediately begin after epoch 50.
# Earlier updates
This is the layout of the PT checkpoints; you want the `student` entry unless you want to continue training.
I KNOW, I KNOW - I'll get it worked out. For now, every epoch-9+ PT for this particular model looks like this:
```python
checkpoint = {
    "cfg": asdict(self.cfg),               # trainer configuration
    "student": self.student.state_dict(),  # distilled UNet weights (use this)
    "opt": self.opt.state_dict(),          # optimizer state (resume only)
    "sched": self.sched.state_dict(),      # LR scheduler state (resume only)
    "gstep": gstep,                        # global step counter
}
```
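As a sketch of how to pull usable weights out of one of these PTs - `load_student_state` is an illustrative helper of mine, not a repo function, and the key layout assumed is the dict shown above:

```python
import torch

def load_student_state(path):
    """Load a trainer PT checkpoint and return only the student UNet
    state dict; "opt" and "sched" are only needed to resume training."""
    ckpt = torch.load(path, map_location="cpu")
    return ckpt["student"]

# Usage (filename is hypothetical; real names follow the repo's epoch naming):
# state = load_student_state("checkpoint_epoch_009.pt")
# unet.load_state_dict(state)
```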
I started a second run; both are running simultaneously. I want to see whether the new trainer produces a better epoch 10 than this one.
As of epoch 11, the blobs are reforming back into shapes, and the shapes are cohering in fairly usable ways for the end product - but for the time being, they are still blobs.

This is a v-prediction flow-matching model that can be directly inferenced with Euler discrete flow matching through diffusers, and I would advise doing so for testing purposes.
# E11+ new expectations
It's training directly with timestep-awareness using shift and timestep association.
The least accurate timestep buckets have their opinions removed from the classifier weighting, since the classifier cannot help if it cannot classify what the teacher itself is saying. This lines up roughly with the 90/10 rule David seems to cap at - about 90% accuracy, 10% incorrect - so about 10% of timestep buckets are inactive.
Individual block losses have been correctly reintroduced and will HOPEFULLY train the timesteps and patterns correctly.
```
# Timestep Weighting (David-guided adaptive sampling)
use_timestep_weighting: bool = True
use_david_weights: bool = True
timestep_shift: float = 3.0 # SD3-style shift (higher = bias toward clean)
base_jitter: int = 5 # Base ±jitter around bin center
adaptive_chaos: bool = True # Scale jitter by pattern difficulty
profile_samples: int = 2500 # Samples to profile David's difficulty
reliability_threshold: float = 0.15 # Minimum accuracy to trust David's guidance
```
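For context, here is a minimal sketch of the SD3-style shift that `timestep_shift` refers to, assuming the standard resolution-shift formula (the trainer's exact implementation may differ):

```python
def shift_timestep(t, shift=3.0):
    """SD3-style timestep shift on t in [0, 1]:
    t' = shift * t / (1 + (shift - 1) * t).
    shift > 1 pushes sampled timesteps toward 1.0, biasing
    training toward one end of the schedule."""
    return shift * t / (1.0 + (shift - 1.0) * t)

# With shift = 3.0, a uniform t = 0.5 maps to 0.75.
```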
# Most original checkpoints are default sd15 after testing
For those who downloaded the models that either exhibit blobs or don't use flow-matching noise - my sincerest apologies; they are defective. Blobs are expected, standard noise is not.
The CURRENT e8 has no CLIP or VAE, so it's just sitting there standalone. It is the newest valid checkpoint, and it functions as expected - by making blobs, due to early pretraining.
I removed the faulty checkpoints; the correct checkpoints - early-training, and not yet capable of proper inference - are the only ones remaining.
# Updated information
It's basically just blobs as expected, so don't expect much yet. It has a long way to go.
The current one is showing response to shift as it should. Convergence will most likely require an additional 40 or so epochs,
and I will be uploading every epoch as a PT from this point forward to guarantee cohesion and transparency.
So: about 4 days, give or take. Not too bad, all things considered. So let's hope it actually works out, huh?
If not, I'll just train it directly using a different technique without David.
My sincerest apologies for all of the blunders and the problems. I didn't expect so many problems but I did expect some.
I ended up having to use the debugger to salvage epoch 8 so I wouldn't have to restart. The metrics appear corrupted as well.
The safetensors outputs were saving the original sd15 with silently mismatched keys, thanks to the diffusers script not operating as intended. Additionally, the subsystems I implemented never tripped the flags needed to ensure backups, so the system was culling the PTs.
Between a rock and a hard place, I figured out how to salvage it, and here we are - thanks to a combination of Gemini's information and Claude's code debugging and problem solving, the training can continue.
# More faults more problems still managed to salvage the real one
It's absurd how difficult anything SD15 has been to debug.
Okay - I am now correctly converting the evaluation and can properly test the UNet for diffusion testing.
# CKPT Bumbles
Apologies - the safetensors ARE ComfyUI-formatted... CKPT. :V
Rename the extension to .ckpt, since the current one is clearly incorrect. I'll convert them ASAP; my apologies for not micro-managing more closely.
I renamed epoch 4 in the repo.
# Training continues.
The trainer (trainer.py) has been updated to handle checkpoint saving and loading using the correct ComfyUI script.
It will automatically load and run in Colab, and it's prepared to continue training from the most recent checkpoint here. You can point it at your own repo to load and save.
# SD1.5 Flow-Matching Distillation with Geometric Guidance (EXPERIMENTAL)
The day disappeared on me, and the trainer stayed down most of the day because I was working on other tasks.
Colab randomly died last night after epoch 7, and I found out it wasn't uploading, so I'll have to restart from epoch 3 - the version I manually uploaded before I went to bed. The checkpoints were supposed to be pooled in a private repo, but they weren't.
## ⚠️ Experimental Research
**Status:** Training in progress | No guarantees of convergence or quality
This is an experimental approach to distilling Stable Diffusion 1.5 using flow matching with geometric guidance from [GeoDavidCollective](https://huggingface.co/AbstractPhil/geo-david-collective-sd15-base-e40). Results are not yet validated.
## Overview
This trainer attempts to distill Stable Diffusion 1.5 using **v-prediction flow matching** with **adaptive per-block weighting** based on geometric quality assessment. Unlike traditional distillation that treats all UNet blocks equally, this approach uses a pre-trained geometric model (David) to evaluate student features and dynamically adjust training emphasis per block.
**Hypothesis:** Geometric guidance may help the student learn SD1.5's internal structure more effectively by:
- Identifying which blocks are learning poorly
- Applying stronger supervision where needed
- Maintaining geometric stability during training
**Status:** Hypothesis untested. Requires ablation study comparing David-guided vs. vanilla flow matching.
## Architecture
### Three-Component System
```
Teacher (SD1.5 UNet, frozen, FP16)
↓ provides ε* → v* targets + features
Student (Trainable UNet, FP16)
↓ predicts v̂ + features
Flow Matching Loss: MSE(v̂, v*)
+
David Assessor (GeoDavidCollective, frozen, 872M params)
↓ evaluates student features per block
↓ outputs: e_t (timestep error), e_p (pattern entropy), coh (coherence)
Fusion System: λ_b = w_b · (1 + α·e_t + β·e_p + δ·(1-coh))
↓ converts metrics to per-block penalties
Block Losses: Σ λ_b · (KD loss per block)
Total: L_flow + block_weight · L_blocks
```
### Components
**Teacher**: SD1.5 UNet (frozen, FP16)
- Provides ground truth for flow matching
- Extracts spatial features per block
**Student**: Trainable UNet (FP16)
- Initialized from teacher weights
- Learns v-prediction objective
- Features assessed by David
**David**: GeoDavidCollective (frozen)
- Pre-trained geometric model
- Evaluates feature quality per block
- Provides adaptive weighting signals
**Fusion**: Dynamic penalty calculator
- `λ_b = w_b · (1 + α·e_t + β·e_p + δ·(1-coh))`
- Bounded: `[0.5, 3.0]`
- Higher λ = more training emphasis
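The fusion formula above is simple enough to sketch directly. This is a minimal illustration (the function name and defaults are mine, taken from the coefficients listed below; the trainer's actual implementation may differ):

```python
def fusion_lambda(w_b, e_t, e_p, coh,
                  alpha=0.5, beta=0.25, delta=0.25,
                  lam_min=0.5, lam_max=3.0):
    """Per-block penalty multiplier:
    lambda_b = w_b * (1 + alpha*e_t + beta*e_p + delta*(1 - coh)),
    clamped to [lam_min, lam_max]. Higher lambda = more emphasis."""
    lam = w_b * (1.0 + alpha * e_t + beta * e_p + delta * (1.0 - coh))
    return max(lam_min, min(lam_max, lam))

# A perfectly coherent, error-free block keeps its base weight:
# fusion_lambda(1.0, 0.0, 0.0, 1.0) -> 1.0
# A struggling mid block saturates at the upper bound:
# fusion_lambda(1.2, 2.0, 1.0, 0.0) -> 3.0
```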
## Training Configuration
### Dataset
```yaml
Source: SymbolicPromptDataset (synthetic prompts)
Samples: 200,000
Batch Size: 64
Epochs: 10
Workers: 2
```
### Optimization
```yaml
Optimizer: AdamW
Learning Rate: 1e-4
Weight Decay: 1e-3
Scheduler: CosineAnnealingLR
Gradient Clipping: 1.0
Mixed Precision: Enabled (FP16)
```
### Loss Weights
```yaml
Global Flow Weight: 1.0
Block Penalty Weight: 0.05 # Critical hyperparameter!
KD Weight: 0.25 (cosine similarity on pooled features)
Local Flow Heads: Disabled
```
### David Fusion
```yaml
Base Block Weights:
  down_0: 0.7, down_1: 0.9, down_2: 1.0, down_3: 1.1
  mid: 1.2, up_0: 1.1, up_1: 1.0, up_2: 0.9, up_3: 0.7
Fusion Coefficients:
  alpha (timestep): 0.5
  beta (pattern): 0.25
  delta (incoherence): 0.25
Lambda Bounds: [0.5, 3.0]
```
## Training Progress (Epoch 1/10)
### Current Metrics
```
L_total: 0.24
L_flow: 0.23
L_blocks: 0.07
Speed: ~1.5 it/s (A100)
```
**Interpretation:**
- Block losses balanced after fixing `block_penalty_weight`
- Flow loss converging as expected
- No evidence of collapse or divergence yet
### Expected Timeline (Unvalidated)
```
Epoch 1-2: Loss stabilization
Epoch 3-5: Feature structure learning (images may be blurry)
Epoch 8-10: Potential convergence (quality unknown)
```
**Note:** No baseline comparison yet. Cannot claim faster/better convergence without ablation study.
## Model Files
Training saves checkpoints as:
```
checkpoints/
├── checkpoint_epoch_002.safetensors
├── checkpoint_epoch_004.safetensors
└── final.safetensors
```
Each checkpoint contains student UNet weights only.
## Inference
Model can be sampled using standard diffusion samplers (DDPM, DDIM) with v-prediction:
```python
# Pseudocode - implementation details TBD
x_t = noise
for t in reversed(timesteps):
    v = student_unet(x_t, t, text_embeddings)
    x_t = step(x_t, v, t)  # v-prediction update
image = vae.decode(x_t)
```
Requires SD1.5 VAE and text encoder (not included in checkpoint).
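For illustration, here is a minimal scalar sketch of what the `step` in the pseudocode above can look like as a deterministic DDIM update under v-prediction. This is a generic v-pred step, not the repo's exact sampler; `alpha`/`sigma` come from the noise schedule with alpha² + sigma² = 1:

```python
def v_pred_ddim_step(x_t, v, alpha_t, sigma_t, alpha_prev, sigma_prev):
    """One deterministic DDIM step under v-prediction.
    From v = alpha*eps - sigma*x0 and x_t = alpha*x0 + sigma*eps:
      x0_hat  = alpha_t * x_t - sigma_t * v
      eps_hat = sigma_t * x_t + alpha_t * v
    Then re-noise x0_hat to the previous (less noisy) level."""
    x0_hat = alpha_t * x_t - sigma_t * v
    eps_hat = sigma_t * x_t + alpha_t * v
    return alpha_prev * x0_hat + sigma_prev * eps_hat
```

In practice, diffusers' schedulers with `prediction_type="v_prediction"` perform this conversion internally; the sketch only spells out the arithmetic.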
## Known Issues
- ❓ No proof this approach works better than vanilla distillation
- ❓ Optimal `block_penalty_weight` unknown (currently 0.05)
- ❓ May require tuning lambda bounds for different datasets
- ❓ Inference quality unvalidated
## Future Work
### Required Validation
1. **Ablation Study**: Train identical model WITHOUT David guidance
2. **Quality Metrics**: FID, CLIP score vs. SD1.5 baseline
3. **Convergence Analysis**: Compare learning curves
4. **Inference Testing**: Visual quality assessment
### Potential Improvements
- Adaptive `block_penalty_weight` scheduling
- Per-block learning rates
- David warmup strategy
- Better fusion formulas
## Experimental Design
### Hypothesis
Geometric guidance from David will improve distillation by:
1. Identifying poorly-learning blocks
2. Applying adaptive supervision
3. Maintaining feature geometry
### Test Plan
```
Control: SD1.5 flow matching (no David)
Treatment: SD1.5 flow matching + David guidance
Metrics: Loss curves, FID, CLIP score, visual quality
```
### Success Criteria
- Faster convergence (fewer epochs to target loss)
- Better final quality (lower FID)
- More stable training (less variance)
**Status:** Experiment in progress, no results yet.
## Technical Details
### David Assessment
Per block, David outputs:
- `e_t`: Cross-entropy on timestep classification (proxy for temporal understanding)
- `e_p`: Entropy on pattern classification (proxy for feature diversity)
- `coh`: Cantor alpha (geometric coherence metric)
These convert to penalty multipliers via fusion formula.
### Flow Matching
v-prediction objective:
```
v* = α · ε - σ · x₀ (target)
v̂ = student(x_t, t) (prediction)
L_flow = MSE(v̂, v*)
```
Where α, σ from noise schedule.
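Spelled out as a minimal scalar sketch (the real trainer works on latent tensors and averages over the batch, but the arithmetic per element is the same; function names are mine):

```python
def v_target(x0, eps, alpha_t, sigma_t):
    """Flow-matching regression target: v* = alpha_t * eps - sigma_t * x0."""
    return alpha_t * eps - sigma_t * x0

def flow_loss(v_hat, v_star):
    """Per-element squared error; L_flow is the mean of this over the batch."""
    return (v_hat - v_star) ** 2
```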
### Per-Block KD
Cosine similarity on spatial-pooled features:
```
L_kd = 1 - cosine_sim(
    student_features.mean(spatial),
    teacher_features.mean(spatial)
)
```
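In torch terms, this could look like the sketch below, assuming (B, C, H, W) feature maps (the function name is illustrative, not from the repo):

```python
import torch
import torch.nn.functional as F

def block_kd_loss(student_feat, teacher_feat):
    """Cosine-similarity KD on spatially pooled features:
    pool each (B, C, H, W) map over H and W, then compare
    the resulting (B, C) channel vectors."""
    s = student_feat.mean(dim=(2, 3))  # (B, C)
    t = teacher_feat.mean(dim=(2, 3))  # (B, C)
    return (1.0 - F.cosine_similarity(s, t, dim=1)).mean()
```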
## Dependencies
```
torch >= 2.0
diffusers >= 0.21
transformers >= 4.30
safetensors >= 0.3
huggingface_hub >= 0.16
```
Plus custom repo: `geovocab2` (for David model and data synthesis)
## Hardware Requirements
- **Training**: A100 40GB (FP16 mixed precision)
- **Inference**: RTX 3090 / A6000 (24GB)
- **Storage**: ~10GB for checkpoints + logs
## Reproducibility
Training is deterministic with fixed seed (42), but:
- Depends on David checkpoint version
- May be sensitive to hardware (GPU type)
- Synthetic data generation has randomness
## Limitations
1. **Untested**: No validation that this works
2. **SD1.5 Only**: Hardcoded for SD1.5 architecture
3. **David Dependency**: Requires specific pre-trained model
4. **Synthetic Data**: Trained on generated prompts, not real captions
5. **No Safety**: Inherits SD1.5 biases, no content filtering
## Ethical Considerations
- Inherits biases from SD1.5 training data
- No additional safety measures implemented
- Should not be deployed without content filtering
- Research purposes only
## Citation
```bibtex
@software{sd15flowmatch2025,
  author = {AbstractPhil},
  title  = {SD1.5 Flow-Matching with Geometric Guidance (Experimental)},
  year   = {2025},
  url    = {https://huggingface.co/AbstractPhil/sd15-flow-matching},
  note   = {Experimental distillation approach, results unvalidated}
}
```
## License
MIT License
## Related Work
- [GeoDavidCollective](https://huggingface.co/AbstractPhil/geo-david-collective-sd15-base-e40): Geometric assessor model
- [Stable Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5): Teacher model
- Flow Matching: Progressive distillation technique
---
**Current Status:** 🧪 Experimental training in progress
**Do not use for production** - validation pending