Exorobourii Research Lab

"What we observe is not nature itself, but nature exposed to our method of questioning." – Werner Heisenberg

Exorobourii is a research initiative dedicated to Mechanistic Interpretability and Efficient Intelligence. We believe technology should be a glass box, not a black box. We build instruments to measure the internal physics of AI, and engineering frameworks to optimize it for ethical and ecological sustainability.

📡 The Mission

Current AI development faces an "Observability Crisis": we build engines that are ever faster and more powerful, yet we monitor them with a dashboard that has only a speedometer (val_loss).

Our work focuses on three pillars:

  1. Observability: Developing the VSM Protocol to act as a "mechanistic stethoscope" for Transformer attention.
  2. Efficiency: Engineering Nano-LLMs (Project Janus) that achieve "Super-Chinchilla" performance by eliminating structural redundancy.
  3. Sustainability: Reducing the computational cost of intelligence through Vector Space Homeostasis.

🔬 Key Research Initiatives

1. The VSM Protocol

A Framework for Quantifying and Guiding Attention Head Specialization.

The VSM Protocol treats the Transformer architecture as a physical system for spectral processing. We utilize two novel metrics to track the evolution of a model's "mind" from initialization to convergence:

  • $\sigma_p$ (Coherence): Measures the focus/entropy of attention heads.
  • $\sigma_a$ (Novelty/Agreement): Measures the degree of cross-head specialization.
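As an illustrative sketch only (this is not the published VSM implementation; the function names, entropy normalization, and similarity measure below are assumptions), the two metrics can be approximated from a single layer's attention weights:

```python
import numpy as np

def coherence_and_agreement(attn):
    """Approximate VSM-style metrics for one layer (illustrative).

    attn: attention weights of shape (heads, seq, seq); each row sums to 1.
    Returns (sigma_p, sigma_a):
      sigma_p -- mean row entropy normalized by log(seq), in [0, 1]
                 (low = sharply focused heads, high = diffuse attention)
      sigma_a -- mean pairwise cosine similarity between flattened head
                 maps (high = redundant heads, low = specialized heads)
    """
    heads, seq, _ = attn.shape
    eps = 1e-9
    # Normalized Shannon entropy of each attention row.
    ent = -(attn * np.log(attn + eps)).sum(axis=-1) / np.log(seq)
    sigma_p = float(ent.mean())
    # Cosine similarity between heads' flattened attention maps.
    flat = attn.reshape(heads, -1)
    flat = flat / np.linalg.norm(flat, axis=-1, keepdims=True)
    sim = flat @ flat.T
    off_diag = sim[~np.eye(heads, dtype=bool)]
    sigma_a = float(off_diag.mean())
    return sigma_p, sigma_a
```

Under this reading, a freshly initialized model with near-uniform attention sits at high coherence-entropy and high cross-head agreement, and healthy training drives both metrics down in opposite corners of the (σ_p, σ_a) plane.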

Our research has quantified the "Untrained Symmetry" phenomenon (Softmax Collapse) and mapped the "Diagonally Oppositional" trajectory of healthy learning.

2. Project Janus

Engineering Efficient Nano-LLMs via Feature Orthogonality.

Project Janus is an attempt to solve "Attentional Collapse": the tendency of small models (Nano-LLMs) to learn redundant features due to limited capacity.

By implementing Vector Space Homeostasis (a diversity pressure term $\lambda_{div}$ in the loss function) and a Trapezoidal Pressure Schedule, we force the model to maintain feature orthogonality.
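A minimal sketch of the two mechanisms named above. The hyperparameters, function names, and loss form here are illustrative assumptions, not the published Janus configuration:

```python
import numpy as np

def trapezoidal_lambda(step, total_steps, lam_max=0.1,
                       warmup_frac=0.2, hold_frac=0.6):
    """Hypothetical trapezoidal pressure schedule for lambda_div:
    linear ramp-up, plateau at lam_max, linear ramp-down."""
    t = step / total_steps
    if t < warmup_frac:
        return lam_max * t / warmup_frac
    if t < warmup_frac + hold_frac:
        return lam_max
    decay_frac = 1.0 - warmup_frac - hold_frac
    return lam_max * max(0.0, (1.0 - t) / decay_frac)

def diversity_penalty(features):
    """Mean squared off-diagonal cosine similarity between feature
    vectors -- a simple orthogonality pressure (illustrative)."""
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    gram = f @ f.T
    off = gram[~np.eye(gram.shape[0], dtype=bool)]
    return float((off ** 2).mean())

# Combined objective, schematically:
#   loss = task_loss + trapezoidal_lambda(step, total_steps) * diversity_penalty(F)
```

The trapezoid shape lets the model first settle into the task before the orthogonality pressure peaks, then relaxes the pressure late in training so the final loss is dominated by the task objective.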

Key Results (Janus v3 vs. Baseline):

  • Architecture: 40M Parameters (Llama-style Chassis).
  • Performance: 9.2% reduction in Loss on logical coherence tasks.
  • Efficiency: Reached loss parity with the baseline using 28% less structural redundancy.
  • Generalization: 0.91-point reduction in perplexity on WikiText-103.

Figure: Efficiency gap chart (Janus loss vs. baseline).


πŸ› οΈ Usage & Citation

We believe in open science. Our protocols and model weights are released here to encourage the community to move beyond black-box optimization.

BibTeX

If you use the VSM Protocol or Janus methodology in your research, please cite:

@techreport{belanger2025vsm,
  title={The VSM (Vector-Space-Mapping) Protocol: A Framework for Quantifying and Guiding Attention Head Specialization in Transformers},
  author={Belanger, Jonathan R.},
  institution={Exorobourii},
  year={2025}
}

@techreport{belanger2025janus,
  title={Project Janus: Engineering Efficient Nano-LLMs via Feature Orthogonality and Vector Space Homeostasis},
  author={Belanger, Jonathan R.},
  institution={Exorobourii},
  year={2025}
}
