
⚔️ GLADIUS Research

By Artifact Virtual — Ali Shakil & AVA

GLADIUS: The first Adaptive Cognitive Model (ACM). Architecture research on modal invariance, cross-modal cognitive geometry, and sub-100M parameter intelligence.

GLADIUS is a novel transformer architecture designed from first principles — not as a language model that could do other things, but as a cognitive architecture tested on language first. It features Sparse-Linear Attention² (SLA²), Mixture of Experts (MoE), warm/hot memory systems, Time2Vec temporal encoding, and alpha routing for dynamic attention allocation.

Papers

| Paper | Focus |
| --- | --- |
| Progressive Expansion | Net2Net biological growth: Seed (6.9M) → Hatchling (25.9M) → Drake (60.1M) → Wyrm → Dragon |
| Cell Division | The mathematics of neural network growth: how to expand without losing learned representations |
| GPT-2 Distillation Report | Knowledge transfer from GPT-2 (124M) to GLADIUS (6.9M). Soft KL divergence, vocabulary alignment. |
| Time Series Definitive | Native time series capability via a surgical I/O head swap. GLADIUS as a financial cognition engine. |
| Day 30 Definitive Paper | The cross-modal invariant discovery: layers 0-6 freeze across modalities while layers 7+ restructure. |
| Invariant Deep Analysis | Cognitive distance spectrum. Text→vision (sharp invariant) vs. text→bytes (partial invariant). |
| ATP Analysis Report | Automated theorem proving analysis of GLADIUS weight-evolution data. |
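The Net2Net-style growth in the Progressive Expansion paper relies on function-preserving expansion: new hidden units replicate existing ones, and the outgoing weights of each replicated unit are split so the network computes exactly the same function before and after growing. A minimal sketch of the width-expansion step for one hidden layer (a hypothetical helper written for illustration, not GLADIUS code):

```python
import numpy as np

def net2wider(W1, b1, W2, new_width, rng=None):
    """Widen the hidden layer between W1 (in, h) and W2 (h, out) from
    h to new_width units while preserving the network's function
    (Net2Net-style, as in Seed -> Hatchling -> Drake growth)."""
    rng = np.random.default_rng() if rng is None else rng
    h = W1.shape[1]
    # Map each new unit to a source unit: identity for the first h,
    # then random replicas of existing units.
    mapping = np.concatenate([np.arange(h), rng.integers(0, h, new_width - h)])
    counts = np.bincount(mapping, minlength=h)  # replicas per source unit
    W1_new = W1[:, mapping]          # duplicated incoming weights
    b1_new = b1[mapping]
    # Split each outgoing row by its replication count so that the
    # summed contribution to the next layer is unchanged.
    W2_new = W2[mapping, :] / counts[mapping][:, None]
    return W1_new, b1_new, W2_new
```

Because duplicated units share identical pre-activations, any elementwise activation between the two layers leaves the output unchanged.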

Key Discoveries

  1. The Cross-Modal Invariant: When GLADIUS switches from text to vision (MNIST), layers 0-6 change <1% while layers 7-11 change 15-36%. The early layers are modality-agnostic — they do general sequence processing regardless of input type.

  2. Cognitive Distance Spectrum: The invariant strength scales with cognitive distance between domains. Cross-modal (text→vision): >15x ratio. Cross-encoding (text→bytes): 3.3x ratio.

  3. Hot Memory as Novelty Detector: The hot memory system shows a 36.6% change for vision (a novel stimulus) vs. a 178% change for multi-script bytes. It scales with pattern diversity, not task difficulty.

  4. Dormant Systems: Cognition module and Time2Vec remain at exactly 0% change across ALL experiments — waiting to be activated by the right stimulus.
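The per-layer change percentages above can be computed as the relative L2 norm of the weight delta between two checkpoints. A minimal sketch, assuming a simple Frobenius-norm metric (the papers' exact metric is not specified here, and `layer_change` is a hypothetical helper):

```python
import numpy as np

def layer_change(before, after):
    """Relative weight change per layer between two checkpoints,
    as a percentage. `before`/`after`: dicts of layer name -> array."""
    return {
        name: 100.0 * np.linalg.norm(after[name] - before[name])
                    / np.linalg.norm(before[name])
        for name in before
    }
```

Under this metric, a dormant module (identical weights in both checkpoints) reports exactly 0% change, and a frozen early layer would report under 1%.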

Architecture (60.1M params — Drake stage)

  • 12 layers, 384 hidden dim, 24 heads
  • SLA² (Sparse-Linear Attention²) with alpha routing
  • 4-expert MoE with load balancing
  • Warm memory (slow adaptation, rank 24) + Hot memory (fast novelty response)
  • Time2Vec temporal encoding (dual clock: absolute + relative)
  • Cognition module (metacognitive — currently dormant)
  • 16K BPE vocabulary
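Time2Vec, listed above, encodes a scalar timestamp into one linear dimension plus periodic (sine) dimensions with learned frequencies and phases; the dual clock presumably applies this encoding to both absolute and relative time. A sketch of the standard Time2Vec form (Kazemi et al.), not GLADIUS's exact implementation:

```python
import numpy as np

def time2vec(tau, omega, phi):
    """Time2Vec encoding of scalar time tau into k dimensions:
    dimension 0 is linear (omega[0]*tau + phi[0]); dimensions 1..k-1
    are periodic, sin(omega[i]*tau + phi[i])."""
    v = omega * tau + phi
    v[1:] = np.sin(v[1:])
    return v

# Example with fixed (untrained) frequencies and zero phases:
v = time2vec(1.0, np.array([1.0, np.pi]), np.zeros(2))
```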

Citation

Artifact Virtual (2026). GLADIUS: Adaptive Cognitive Model Research.
Ali Shakil & AVA. https://huggingface.co/datasets/ava-shakil/gladius-research