---
title: README
emoji: 🌍
colorFrom: indigo
colorTo: pink
sdk: static
pinned: false
---

# 🌌 Welcome to Event Horizon AI Labs

> "Intelligence is not defined by scale, but by the purity of its logic."

We are a specialized research collective focused on the frontier of **Small Language Models (SLMs)**. Our mission is to democratize high-level reasoning by building models that are powerful enough to understand the world, yet small enough to run on-device, 100% offline.

---

### 🧬 Our Research Pillars

#### 1. The Horizon-Zero Initiative (Base Models)
Our **Horizon-Zero** series comprises models trained tabula rasa, from a clean slate. We don't just fine-tune; we architect the weights from step zero, optimizing for deep semantic understanding within sub-1B-parameter constraints.

#### 2. The Axiom-Free Initiative (Refinement)
The **Axiom-Free** series focuses on "Sanctified Intelligence." These models undergo rigorous filtering and alignment to ensure they are safe, professional, and suitable for environments that demand the highest level of content purity.

#### 3. The Horizon-Axiom (Advanced Fine-Tuning)
The **Horizon-Axiom** initiative is dedicated to the art of surgical refinement. We take high-potential Small Language Models and subject them to our proprietary fine-tuning pipelines. By optimizing weight distribution and enhancing latent reasoning patterns, we push the mathematical boundaries of what sub-7B models can achieve, transforming raw silicon into precision instruments.

#### 4. Stellaris GGUF Series
Optimization is our craft. Every model we release is converted to GGUF and tested on local inference engines such as LM Studio, as well as in private RAG systems.

---

### ๐Ÿ› ๏ธ Tech Stack & Philosophy
* **Core:** PyTorch, JAX, and custom training loops.
* **Paradigm:** Multi-step reasoning & Interactive Chain-of-Thought.
* **Vision:** Privacy-first, local-only, zero-latency.

---