---
title: README
emoji: 🚀
colorFrom: indigo
colorTo: pink
sdk: static
pinned: false
---
# 🚀 Welcome to Event Horizon AI Labs
> "Intelligence is not defined by scale, but by the purity of its logic."
We are a specialized research collective focused on the frontier of **Small Language Models (SLMs)**. Our mission is to democratize high-level reasoning by building models powerful enough to understand the world, yet small enough to run on-device, 100% offline.
---
### 🧬 Our Research Pillars
#### 1. The Horizon-Zero Initiative (Base Models)
Our **Horizon-Zero** series represents models trained from a "Tabula Rasa" (clean slate). We don't just fine-tune; we architect the weights from step zero, optimizing for deep semantic understanding within sub-1B parameter constraints.
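The sub-1B budget mentioned above is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is a generic parameter estimate for a decoder-only transformer, not a description of the actual Horizon-Zero architecture; every configuration value in it is illustrative.

```python
def transformer_params(vocab_size: int, d_model: int, n_layers: int,
                       ffn_mult: int = 4, tied_embeddings: bool = True) -> int:
    """Rough parameter count for a decoder-only transformer.

    Biases and normalization weights are ignored; they contribute
    well under 1% of the total at these scales.
    """
    embed = vocab_size * d_model                 # token embedding table
    attn = 4 * d_model * d_model                 # Q, K, V, and output projections
    ffn = 2 * ffn_mult * d_model * d_model       # up- and down-projections
    total = embed + n_layers * (attn + ffn)
    if not tied_embeddings:
        total += vocab_size * d_model            # separate LM head
    return total

# A hypothetical sub-1B configuration (32k vocab, 1536-dim, 24 layers):
print(transformer_params(vocab_size=32_000, d_model=1536, n_layers=24))
# → 728629248
```

With these illustrative numbers the estimate lands around 0.73B parameters, comfortably inside a sub-1B constraint; widening `d_model` to 2048 at the same depth would already push past it.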
#### 2. The Axiom-Free Initiative (Refinement)
The **Axiom-Free** series focuses on "Sanctified Intelligence." These models undergo rigorous filtering and alignment to ensure they are safe, professional, and suitable for environments requiring the highest level of content purity.
#### 3. The Horizon-Axiom Initiative (Advanced Fine-Tuning)
The **Horizon-Axiom** initiative is dedicated to the art of surgical refinement. We take high-potential Small Language Models and subject them to our proprietary fine-tuning pipelines. By optimizing weight distribution and enhancing latent reasoning patterns, we push the mathematical boundaries of what sub-7B models can achieve, transforming raw silicon into precision instruments.
#### 4. The Stellaris GGUF Series
Optimization is our craft. Every model we release is converted to GGUF and tested for local inference engines such as LM Studio and private RAG systems.
---
### 🛠️ Tech Stack & Philosophy
* **Core:** PyTorch, JAX, and custom training loops.
* **Paradigm:** Multi-step reasoning & interactive Chain-of-Thought.
* **Vision:** Privacy-first, local-only, zero-latency.
---