---
license: apache-2.0
language:
- en
tags:
- nexus-worldmodel
- world-model
- cognitive-architecture
- earcp
- lpol
- gqa
- neurogenesis
- pytorch
- reinforcement-learning
pipeline_tag: reinforcement-learning
library_name: pytorch
---

# NEXUS-WorldModel v2.0

**"Learning to Simulate Reality with Full Cognitive Architecture"**

## 🧠 Architecture

| Component | Description |
|-----------|-------------|
| **EARCP Module** | Sparse compression + gated integration |
| **LPOL Memory** | 9 domains with GQA |
| **GQA** | 8 heads, 2 KV groups (75% KV-cache savings) |
| **EARCP Layers** | 8 layers × 6 experts |
| **Neurogenesis** | Dynamic growth (32-256 neurons) |
| **Physics Prior** | MDN with 8 components |

## 📊 Training Results

| Metric | Value |
|--------|-------|
| **Epochs** | 6 |
| **Final Loss** | 0.0172 |
| **Coherence** | ~0.42 |
| **Neurogenesis Events** | 0 |
| **Parameters** | 227,991,690 |

## 🚀 Quick Start

```python
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint from the Hub and load it
model_path = hf_hub_download(
    repo_id="amewebstudio/nexus-worldmodel-v2",
    filename="nexus_worldmodel_v2.pt",
)
checkpoint = torch.load(model_path, map_location="cuda")  # use "cpu" if no GPU

config = checkpoint['config']
state_dict = checkpoint['model']
print(f"Epochs: {checkpoint['epochs']}")
print(f"Loss: {checkpoint['loss']:.4f}")
```

## 📁 Files

| File | Description |
|------|-------------|
| `nexus_worldmodel_v2.pt` | Full checkpoint |
| `pytorch_model.bin` | Weights only |
| `config.json` | Model configuration |
| `cognitive_state.json` | Dynamic cognitive state |
| `configuration_nexus_worldmodel.py` | Config class |
| `model_index.json` | Component index |

## ⚠️ Dynamic Architecture

This model uses a **cognitive-dynamic** architecture where:

- Expert count per layer can **grow** during training
- Neuron count can **change** (neurogenesis)
- Memory states are **persistent**

When loading, use `strict=False` to handle potential size mismatches:

```python
model.load_state_dict(state_dict, strict=False)
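
# A self-contained toy illustration of what strict=False tolerates
# (an nn.Linear stands in here for the real NEXUS model, which must
# already be instantiated before loading):
import torch
import torch.nn as nn

toy = nn.Linear(4, 4)
partial = {"weight": torch.zeros(4, 4), "new_expert.weight": torch.zeros(2)}
result = toy.load_state_dict(partial, strict=False)
print(result.missing_keys)     # params absent from the dict keep their init values
print(result.unexpected_keys)  # extra keys (e.g. grown experts) are ignored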
```

## 📖 Configuration

```json
{
  "d_model": 512,
  "n_layers": 8,
  "latent_dim": 256,
  "use_gqa": true,
  "gqa_num_kv_groups": 2,
  "neurogenesis_enabled": true
}
```

## 👤 Author

**Mike Amega (Logo)** - Ame Web Studio

## 📄 License

Apache 2.0
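
As a quick sanity check, the GQA savings figure in the architecture table can be reproduced from the configuration above (a minimal sketch: the head count is taken from the table, and the field names follow the `config.json` excerpt):

```python
# Values from the config.json excerpt above (assumed layout)
config = {"use_gqa": True, "gqa_num_kv_groups": 2}
n_heads = 8  # from the architecture table

# GQA keeps 2 shared KV groups instead of 8 per-head KV pairs,
# so the KV cache shrinks to 2/8 of its full multi-head size.
kv_savings = 1 - config["gqa_num_kv_groups"] / n_heads
print(f"KV-cache savings: {kv_savings:.0%}")  # -> 75%
```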