---
license: mit
tags:
- geometric-deep-learning
- diffusion
- stable-diffusion
- projective-geometry
- multi-expert
- classification
library_name: pytorch
---

# GeoDavidCollective Enhanced - ProjectiveHead Architecture

**Another training run of the same GeoFractalDavid with more condensed dims**

Curves look really good. I might put this one back in for another 20 epochs at 50k prompts to see how it fares. If that looks good, it may be worth another 60 epochs at 100k prompts, or feeding it LAION flavors directly. This variant is condensed with smaller scale dims, producing a much more compact feature.

Roughly 600,000 samples for this run: 10k synthetic prompts per epoch at complexity 1-5 for epochs 0-10, and 50k synthetic prompts per epoch at reduced complexity 1-4 for epochs 11-20. While this one cooked, I was prototyping a series of Cantor-driven layers to test sparsity inclusion and omission.

## 🎯 Model Overview

GeoDavidCollective Enhanced is a multi-expert geometric classification system that learns from Stable Diffusion 1.5's internal representations. Using a ProjectiveHead architecture with Cayley-Menger geometry, it achieves efficient pattern recognition across timestep and semantic spaces.
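As background for the Cayley-Menger geometry mentioned above: the classical Cayley-Menger determinant computes the volume of a simplex from pairwise distances alone, which is what makes it usable as a differentiable structural signal over 5-point configurations (pentachora). The sketch below is illustrative only — the function name is hypothetical and this is not the model's actual loss implementation.

```python
from math import factorial

import numpy as np


def cayley_menger_volume(points: np.ndarray) -> float:
    """Volume of the simplex spanned by (n+1) points, via the
    Cayley-Menger determinant. `points` has shape (n+1, d), d >= n."""
    m = points.shape[0]  # n + 1 vertices
    n = m - 1            # simplex dimension (n = 4 for a pentachoron)

    # Squared pairwise distance matrix, shape (m, m)
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff * diff, axis=-1)

    # Bordered (m+1)x(m+1) Cayley-Menger matrix:
    # zero corner, a border of ones, squared distances inside.
    cm = np.ones((m + 1, m + 1))
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2

    # vol^2 = (-1)^(n+1) / (2^n (n!)^2) * det(CM)
    coeff = (-1) ** (n + 1) / (2 ** n * factorial(n) ** 2)
    vol_sq = coeff * np.linalg.det(cm)
    return float(np.sqrt(max(vol_sq, 0.0)))
```

For example, the five standard basis vectors of R^5 span a regular 4-simplex with edge length √2 and volume √5/24; a degenerate configuration (a repeated vertex) collapses to zero volume, which is the kind of structure a Cayley-Menger term can penalize or reward.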
### Key Features

- **ProjectiveHead Multi-Expert Architecture**: auto-configured expert systems per block
- **Geometric Loss Functions**: Rose, Cayley-Menger, and Cantor coherence losses
- **9-Block Processing**: full SD1.5 UNet feature extraction (down, mid, up)
- **Compact Yet Powerful**: 690,925,542 parameters
- **100 Timestep Bins** × **10 Patterns** = 1,000 semantic-temporal classes

## 📊 Model Statistics

- **Parameters**: 690,925,542
- **Trained Epochs**: 20
- **Base Model**: Stable Diffusion 1.5
- **Dataset Size**: 10,000 synthetic prompts per epoch
- **Training Date**: 2025-10-28

## 🏗️ Architecture Details

### Block Configuration

```
Down Blocks:
  down_0: 320  → 64  (3 experts, 3 gates)
  down_1: 640  → 96  (3 experts, 3 gates)
  down_2: 1280 → 128 (3 experts, 3 gates)
  down_3: 1280 → 128 (3 experts, 3 gates)

Mid Block (highest capacity):
  mid:    1280 → 256 (4 experts, 4 gates)

Up Blocks:
  up_0:   1280 → 128 (3 experts, 3 gates)
  up_1:   1280 → 128 (3 experts, 3 gates)
  up_2:   640  → 96  (3 experts, 3 gates)
  up_3:   320  → 64  (3 experts, 3 gates)
```

### Loss Components

| Component | Weight | Purpose |
|-----------|--------|---------|
| Feature Similarity | 0.40 | Alignment with SD1.5 features |
| Rose Loss | 0.25 | Geometric pattern emergence |
| Cross-Entropy | 0.15 | Classification accuracy |
| Cayley-Menger | 0.10 | 5D geometric structure |
| Pattern Diversity | 0.05 | Prevent mode collapse |
| Cantor Coherence | 0.05 | Temporal consistency |

## 💻 Usage

```python
import torch
from safetensors.torch import load_file

from geovocab2.train.model.core.geo_david_collective import GeoDavidCollective

# Load trained weights
state_dict = load_file("model.safetensors")
collective = GeoDavidCollective(
    block_configs={...},  # see config.json for the full per-block layout
    num_timestep_bins=100,
    num_patterns_per_bin=10,
)
collective.load_state_dict(state_dict)
collective.eval()

# features_dict: per-block SD1.5 UNet activations; timesteps: matching tensor
with torch.no_grad():
    results = collective(features_dict, timesteps)
    predictions = results["predictions"]  # combined timestep + pattern class
```
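Since the head predicts over 100 timestep bins × 10 patterns = 1,000 combined classes, a predicted class id has to be split back into its (timestep bin, pattern) pair. A minimal sketch, assuming the common row-major layout `class_id = bin * patterns_per_bin + pattern` — the function name is hypothetical, and the actual convention should be confirmed against `config.json` and the model code:

```python
NUM_TIMESTEP_BINS = 100
NUM_PATTERNS_PER_BIN = 10  # 100 x 10 = 1,000 combined classes


def decode_prediction(class_id: int) -> tuple[int, int]:
    """Split a combined class id into (timestep_bin, pattern).

    Assumes class_id = timestep_bin * NUM_PATTERNS_PER_BIN + pattern;
    verify the ordering against the repository before relying on it.
    """
    timestep_bin, pattern = divmod(class_id, NUM_PATTERNS_PER_BIN)
    return timestep_bin, pattern
```

Under that assumption, class 0 maps to bin 0 / pattern 0 and class 999 to bin 99 / pattern 9, covering the full 1,000-class space.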
## 🔬 Training Details

- **Optimizer**: AdamW (lr=1e-3, weight_decay=0.001)
- **Batch Size**: 16
- **Data**: symbolic prompt synthesis (complexity 1-5)
- **Feature Extraction**: SD1.5 UNet blocks (spatial, not pooled)
- **Pool Mode**: mean spatial pooling

## 📈 Training Metrics

Final metrics from epoch 20:

- Cayley Loss: 0.1018
- Timestep Accuracy: 30.83%
- Pattern Accuracy: 33.74%
- Full Accuracy: 16.87%

## 🎓 Research Context

This model is part of geometric deep learning research exploring:

- 5D simplex-based neural representations (pentachora)
- Geometric alternatives to traditional transformers
- Consciousness-informed AI architectures
- Universal mathematical principles in neural networks

## 📦 Files Included

- `model.safetensors` - model weights (3.3 GB)
- `config.json` - complete architecture configuration
- `training_history.json` - full training metrics
- `prompts_enhanced.jsonl` - all training prompts with metadata
- `tensorboard/` - TensorBoard logs (optional)

## 🔗 Related Work

- [Geometric Vocabulary System](https://huggingface.co/datasets/AbstractPhil/geometric-vocab-frozen-v1)
- [PentachoraViT](https://huggingface.co/AbstractPhil/pentachora-vit-cifar100)
- [Crystal-Beeper Language Models](https://huggingface.co/AbstractPhil)

## 📜 License

MIT License - free for research and commercial use.

## 🙏 Acknowledgments

Built with:

- PyTorch & Diffusers
- Stable Diffusion 1.5 (Runway ML)
- Geometric algebra principles from the 1800s
- Dream-inspired mathematical insights

## 👤 Author

**AbstractPhil** - AI researcher specializing in geometric deep learning

*"Working with universal mathematical principles, not against them"*

---

For questions, issues, or collaborations: [GitHub](https://github.com/AbstractEyes) | [HuggingFace](https://huggingface.co/AbstractPhil)