---
license: apache-2.0
tags:
- flow-matching
- diffusion
- geometric-deep-learning
- constellation
- geolip
- cifar10
- geometric-lookup
---

# GeoLIP Spherical Diffusion Prototype

**Flow matching diffusion through a constellation bottleneck on S^15.**

Four progressive experiments showing that geometric triangulation on the unit hypersphere is a viable information bottleneck for diffusion models — and that the binding constant 0.29154 emerges from velocity matching through geometric lookup.

## Experiments

### v1 — Regulator (baseline)

Constellation as a side-channel regulator on feature maps. The gate stayed at 6%; the constellation was decorative.

- Loss: 0.1900 | Params: 6.1M | Near 0.29: 0%

### v2 — Skip Bypass (the sneaky test)

A 268M-parameter `Linear(16384, 16384)` skip projection placed alongside the constellation bottleneck. The model was given every reason to bypass the constellation. **It chose the constellation** — gate at 11.8%, routing 88% of the signal through 768 triangulation dimensions.

- Loss: 0.1757 | Params: 287M | Near 0.29: 9%

### v3 — Pure Constellation Bottleneck

Skip projection removed; everything flows through S^15 with zero bypass. Beat the 268M skip version with 8× fewer bottleneck parameters. Reconstruction cos_sim ≈ 0 — the bottleneck acts as a geometric lookup table, not an autoencoder.

- Loss: 0.1749 | Params: 36.6M | Near 0.29: 30%

### v4 — Geometric Lookup Flow Matching (GLFM)

Three-stage pipeline: Address → Condition → Generate, with multi-scale addressing (coarse + fine). 46% of anchors converged within ±0.05 of the binding constant 0.29154.

- Loss: 0.1754 | Params: 35.2M | Near 0.29: 46%

## The 0.29154 Binding Constant

Anchor drift from the home position converges toward 0.29154 radians across all experiments.
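The drift metric can be read as a geodesic (angular) distance on the unit hypersphere. Below is a minimal sketch of that interpretation, assuming anchors and home positions are stored as 16-d vectors; `angular_drift` is a hypothetical helper for illustration, not part of this repo:

```python
import numpy as np

def angular_drift(home: np.ndarray, anchor: np.ndarray) -> float:
    """Geodesic distance on S^15 between an anchor's home position
    and its current (drifted) position, in radians."""
    home = home / np.linalg.norm(home)
    anchor = anchor / np.linalg.norm(anchor)
    cos = float(np.clip(home @ anchor, -1.0, 1.0))  # clamp for arccos domain
    return float(np.arccos(cos))

# Rotate a home vector by exactly 0.29154 rad in a 2-plane
# to verify the metric recovers the binding constant.
theta = 0.29154
home = np.zeros(16); home[0] = 1.0
drifted = np.zeros(16); drifted[0] = np.cos(theta); drifted[1] = np.sin(theta)
print(round(angular_drift(home, drifted), 5))  # → 0.29154
```

The `np.clip` guards against dot products that land marginally outside [-1, 1] from floating-point error, which would otherwise make `arccos` return NaN.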
This constant has now appeared in:

| Domain | Phenomenon | Training objective |
|---|---|---|
| MinimalShunts | Binding/separation phase boundary | Contrastive |
| CLIP projections | Geometric transition | Contrastive |
| T5 generation | Alpha convergence | Language modeling |
| CaptionBERT | Phase boundary | Contrastive |
| **Flow matching** | **Max anchor drift** | **Velocity matching** |

The constant marks the boundary where anchors transition from geometric frame holders to task-specific encoders.

## Key Empirical Results

| Finding | Result |
|---|---|
| CV ≈ 0.20 is the geometry of S^15 | Precision-invariant, 1-bit to fp64 |
| Constellation relay preserves 99.4% cos_to_orig at depth 16 | vs 7.4% for attention |
| Model prefers constellation over 268M skip bypass | 88/12 split |
| 768 triangulation dims match 16384 unconstrained dims for velocity | cos 0.949 |
| Bottleneck doesn't reconstruct — it's a lookup table | cos_sim ≈ 0 to input |
| Anchors self-organize: structural (<0.29) vs semantic (>0.29) | Confirmed across 4 versions |

## Architecture — GLFM (v4)

```
Stage 1 — ADDRESS
  encoder(x_t) → (B, 256, 8, 8)
  coarse: pool → proj → S^15 → triangulate (768d)
  fine:   per-pixel → proj → S^15 → triangulate → aggregate (768d)
  address = concat(coarse, fine) = 1536d

Stage 2 — CONDITION
  fuse(address + time_emb + class_emb + noise_emb) → 1024d

Stage 3 — GENERATE
  4× ResBlock(1024d) → proj(16384d) → reshape(256, 8, 8) → decoder
```

## Files

### HuggingFace Integration

- `configuration_flow_match.py` — PretrainedConfig
- `modeling_flow_match.py` — PreTrainedModel (AutoModel compatible)

### Checkpoints (if present)

- `checkpoints/` — best checkpoints from each training run

### Samples (if present)

- `samples/` — v1 regulator samples
- `samples_bn/` — v2/v3 bottleneck samples
- `samples_cd/` — v3 pure constellation samples
- `samples_glfm/` — v4 GLFM samples

### Analysis Outputs (if present)

- `analysis/` — v1 analysis images
- `analysis_bn/` — v2 analysis images
- `analysis_cd/` — v3 analysis images
- `analysis_glfm/` — v4 analysis images

## Part of the GeoLIP Ecosystem

- [geolip-constellation-core](https://huggingface.co/AbstractPhil/geolip-constellation-core)
- [geolip-diffusion-proto](https://huggingface.co/AbstractPhil/geolip-diffusion-proto) (v1/v2 regulator)
- [geolip package](https://pypi.org/project/geolip/)
- [glip-autoencoder](https://github.com/AbstractEyes/glip-autoencoder)
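The Stage-1 ADDRESS step in the GLFM architecture (project to S^15, then triangulate against the anchor constellation) can be sketched as below. This is a minimal NumPy illustration, not the repo's implementation: the assumption that the 768-d address is the vector of angular distances to 768 fixed unit anchors, and the `triangulate` helper itself, are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constellation: 768 fixed unit anchors on S^15 (16-d vectors).
anchors = rng.normal(size=(768, 16))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)

def triangulate(feat: np.ndarray) -> np.ndarray:
    """Project a 16-d feature onto S^15, then address it by its
    angular distance to every anchor (one reading of 'triangulation')."""
    z = feat / np.linalg.norm(feat)
    cos = np.clip(anchors @ z, -1.0, 1.0)  # clamp for arccos domain
    return np.arccos(cos)                  # (768,) address vector

coarse = triangulate(rng.normal(size=16))  # pooled-feature path
fine = triangulate(rng.normal(size=16))    # per-pixel path (aggregated)
address = np.concatenate([coarse, fine])   # 1536-d address, as in Stage 1
print(address.shape)  # (1536,)
```

Note the address is not a reconstruction target: it only locates the feature relative to the constellation, which is consistent with the observed reconstruction cos_sim ≈ 0.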