# NEXUS-WorldModel v2.0

*Learning to Simulate Reality with Full Cognitive Architecture*
## Architecture
| Component | Description |
|---|---|
| EARCP Module | Sparse Compression + Gated Integration |
| LPOL Memory | 9 domains with GQA |
| GQA | 8 heads, 2 KV groups (75% savings) |
| EARCP Layers | 8 layers × 6 experts |
| Neurogenesis | Dynamic growth (32-256 neurons) |
| Physics Prior | MDN with 8 components |
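The 75% KV-cache saving quoted for GQA follows directly from the head counts: 8 query heads share only 2 KV groups, so just 2 of 8 KV head slots are cached. A minimal arithmetic sketch (only `n_heads=8` and `n_kv_groups=2` come from the table; `head_dim` is an assumed illustrative value):

```python
# Grouped-query attention (GQA) KV-cache saving, per token.
# n_heads and n_kv_groups are from the model card; head_dim is assumed.
n_heads, n_kv_groups, head_dim = 8, 2, 64

# Cache entries with standard multi-head attention vs. GQA
mha_kv = 2 * n_heads * head_dim      # one K and one V per head
gqa_kv = 2 * n_kv_groups * head_dim  # K and V shared within each group

savings = 1 - gqa_kv / mha_kv
print(f"KV-cache saving: {savings:.0%}")  # 75%
```

The saving depends only on the ratio of KV groups to query heads, so it holds for any head dimension.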
## Training Results
| Metric | Value |
|---|---|
| Epochs | 6 |
| Final Loss | 0.0172 |
| Coherence | ~0.42 |
| Neurogenesis Events | 0 |
| Parameters | 227,991,690 |
## Quick Start
```python
import torch
from huggingface_hub import hf_hub_download

# Download the full checkpoint from the Hub
model_path = hf_hub_download(
    repo_id="amewebstudio/nexus-worldmodel-v2",
    filename="nexus_worldmodel_v2.pt",
)

# Load onto GPU if available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
checkpoint = torch.load(model_path, map_location=device)

config = checkpoint["config"]
state_dict = checkpoint["model"]
print(f"Epochs: {checkpoint['epochs']}")
print(f"Loss: {checkpoint['loss']:.4f}")
```
## Files

| File | Description |
|---|---|
| `nexus_worldmodel_v2.pt` | Full checkpoint |
| `pytorch_model.bin` | Weights only |
| `config.json` | Model configuration |
| `cognitive_state.json` | Dynamic cognitive state |
| `configuration_nexus_worldmodel.py` | Config class |
| `model_index.json` | Component index |
## Dynamic Architecture
This model uses a cognitive-dynamic architecture where:
- Expert count per layer can grow during training
- Neuron count can change (neurogenesis)
- Memory states are persistent
When loading, use `strict=False` to handle potential size mismatches:

```python
model.load_state_dict(state_dict, strict=False)
```
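Because the architecture can grow between save and load, it is worth inspecting what `strict=False` actually skipped. A self-contained sketch using a stand-in module (the card does not show the real model class, so `nn.Linear` and the extra key below are purely illustrative):

```python
import torch
import torch.nn as nn

# Stand-in model for illustration; in practice this would be the model
# built from config.json (the real class name is not shown on this card).
model = nn.Linear(512, 512)

# A checkpoint that only partially matches the module, simulating
# neurogenesis-driven drift between save time and load time.
state_dict = {
    "weight": torch.zeros(512, 512),   # matches model.weight
    "extra.neurons": torch.zeros(64),  # no corresponding parameter
}

# With strict=False, unmatched keys are returned instead of raising
result = model.load_state_dict(state_dict, strict=False)
print("missing:", result.missing_keys)        # params the dict did not cover
print("unexpected:", result.unexpected_keys)  # dict entries with no matching param
```

Logging `missing_keys` and `unexpected_keys` makes silent partial loads visible, which matters more than usual for a model whose parameter set is not fixed.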
## Configuration
```json
{
  "d_model": 512,
  "n_layers": 8,
  "latent_dim": 256,
  "use_gqa": true,
  "gqa_num_kv_groups": 2,
  "neurogenesis_enabled": true
}
```
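The configuration can be parsed on its own before instantiating anything, e.g. to check that GQA and neurogenesis were enabled at save time. A small sketch (the JSON is inlined here; in practice you would read the `config.json` downloaded from the repo):

```python
import json

# Same configuration as shown above, inlined for a self-contained example
config = json.loads("""
{
  "d_model": 512,
  "n_layers": 8,
  "latent_dim": 256,
  "use_gqa": true,
  "gqa_num_kv_groups": 2,
  "neurogenesis_enabled": true
}
""")

assert config["use_gqa"] and config["neurogenesis_enabled"]
print(f"{config['n_layers']} layers, d_model={config['d_model']}")
```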
## Author

Mike Amega (Logo) - Ame Web Studio
## License

Apache 2.0