KitsuVp committed
Commit 033bc2c · verified · 1 Parent(s): 24bf54c

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +169 -32
README.md CHANGED
@@ -1,52 +1,189 @@
  ---
- library_name: transformers
  tags:
- - generated_from_trainer
- model-index:
- - name: NeoLLM
-   results: []
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
  # NeoLLM
-
- This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0006
- - train_batch_size: 64
- - eval_batch_size: 64
- - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 0.1
- - num_epochs: 1
-
- ### Training results
-
- ### Framework versions
-
- - Transformers 5.3.0
- - Pytorch 2.10.0+cu130
- - Datasets 4.8.4
- - Tokenizers 0.22.2
  ---
+ language: en
+ license: apache-2.0
  tags:
+ - causal-lm
+ - research
+ - fp8
+ - attention
+ - normalization
+ - neollm
+ datasets:
+ - HuggingFaceFW/fineweb-edu
  ---

  # NeoLLM
+
+ NeoLLM is a **132 M parameter** decoder-only language model trained from scratch on
+ [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) in **FP8**
+ precision, completing training in approximately **6 hours** on a single NVIDIA RTX 5090.
+ It integrates a collection of recently published attention and normalization techniques
+ into a single architecture, with the goal of studying how they interact during
+ pretraining. The model is under active development, and the current checkpoint represents
+ an intermediate training state.
+
+ > **Author / contact:** [@Kyokopom](https://x.com/Kyokopom) on X
+ > **Repository:** [KitsuVp/NeoLLM](https://huggingface.co/KitsuVp/NeoLLM)
+
+ ---
+ ## Architecture
+
+ NeoLLM is a decoder-only transformer with the following configuration:
+
+ | Parameter | Value |
+ |---|---|
+ | Hidden size | 512 |
+ | Layers | 12 |
+ | Attention heads | 8 |
+ | KV heads (GQA) | 2 |
+ | Head dim | 64 |
+ | Intermediate size | 1536 |
+ | Vocabulary | Qwen3 tokenizer (151,665 tokens) |
+ | Context length | 512 tokens |
+
+ ### Parameter breakdown
+
+ | Parameter bucket | Count |
+ |---|---|
+ | **Total parameters** | 132.09M (132,091,872) |
+ | **Embedding parameters** (tied) | 77.65M (77,652,480) |
+ | **Non-embedding parameters** | 54.44M (54,439,392) |
+ | **Effective trainable parameters** | 132.09M (132,091,872) |
+
+ > Weight tying is **enabled**: the input embedding matrix and the language-model head
+ > share the same parameters, so the head adds no parameters of its own and the
+ > non-embedding budget is `total − embed = 54.44M`.
+
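The bookkeeping above can be checked in a few lines (a sketch: the embedding count follows from the config, while the non-embedding count is taken from the table, since it folds in the per-technique extra parameters):

```python
# Sanity-check of the parameter table.
vocab_size = 151_665
hidden_size = 512

embed = vocab_size * hidden_size   # input embedding, reused as lm_head (tied)
non_embed = 54_439_392             # per the breakdown table
total = embed + non_embed          # tying means the head adds nothing

print(f"{embed:,}")   # 77,652,480
print(f"{total:,}")   # 132,091,872
```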
+ ### Integrated techniques
+
+ Each layer combines the following mechanisms simultaneously.
+
+ **Normalization and residual stream**
+
+ - **SeeDNorm** ([arXiv:2510.22777](https://arxiv.org/abs/2510.22777)) — Applied to Q and K
+ projections. Dynamically rescales the normalization based on the input's own statistics,
+ making the attention geometry more stable across varying input distributions.
+ - **PolyNorm** ([arXiv:2602.04902](https://arxiv.org/abs/2602.04902)) — Replaces the standard
+ MLP activation with three branches: linear (x), quadratic (x²), and cubic (x³) — each
+ normalized and combined with learned weights. This allows the MLP to express both linear
+ and non-linear relationships simultaneously.
+ - **GPAS** ([arXiv:2506.22049](https://arxiv.org/abs/2506.22049)) — Gradient-Preserving
+ Activation Scaling. Applied to residual connections between sublayers; helps gradients
+ flow more cleanly during training without distorting the residual stream.
+ - **LayerNorm Scaling / LNS** ([arXiv:2502.05795](https://arxiv.org/abs/2502.05795)) — Each
+ layer's output is scaled by 1/√ℓ, where ℓ is the layer index. Directly addresses the
+ "Curse of Depth" in Pre-LN transformers.
+
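As a concrete illustration of the PolyNorm branch structure described above, a minimal sketch (the branch weights here are placeholders, not the trained values):

```python
import math

def rms_norm(v, eps=1e-8):
    # RMS-normalize a vector.
    rms = math.sqrt(sum(x * x for x in v) / len(v) + eps)
    return [x / rms for x in v]

def polynorm(v, weights=(0.4, 0.3, 0.3), bias=0.0):
    """PolyNorm sketch: learned-weighted sum of normalized x, x^2, x^3 branches."""
    branches = [rms_norm([x ** p for x in v]) for p in (1, 2, 3)]
    return [
        weights[0] * a + weights[1] * b + weights[2] * c + bias
        for a, b, c in zip(*branches)
    ]
```

With `weights=(1, 0, 0)` this reduces to plain RMS normalization; the quadratic and cubic branches add the non-linear terms.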
+ **Attention mechanisms**
+
+ - **FAN** ([arXiv:2502.21309](https://arxiv.org/abs/2502.21309)) — Fourier Analysis Networks.
+ A portion of the input projection channels is dedicated to representing periodic patterns
+ (cosine/sine pairs), while the remainder handles standard linear content.
+ - **MEA** ([arXiv:2601.19611](https://arxiv.org/abs/2601.19611)) — Explicit Multi-head
+ Attention. Adds small learnable interaction matrices between attention heads for K and V.
+ - **LUCID** ([arXiv:2602.10410](https://arxiv.org/abs/2602.10410)) — Applies a learned
+ lower-triangular preconditioner to V before attention, decorrelating value representations
+ across positions.
+ - **Affine-Scaled Attention** ([arXiv:2602.23057](https://arxiv.org/abs/2602.23057)) — Adds
+ two learnable per-head scalars (α and β) to the softmax weights:
+ `[α·softmax(QKᵀ) + β]·V`.
+ - **XSA** ([arXiv:2603.09078](https://arxiv.org/abs/2603.09078)) — Exclusive Self Attention.
+ After computing attention, removes the component of the output aligned with the token's
+ own value vector.
+ - **Directional Routing** ([arXiv:2603.14923](https://arxiv.org/abs/2603.14923)) — Each head
+ learns K=4 directions in the output space; a learned router suppresses the attention output
+ along each direction per input.
+ - **Gated Attention** ([arXiv:2505.06708](https://arxiv.org/abs/2505.06708)) — A sigmoid gate
+ is applied to the attention output before the output projection, introducing non-linearity
+ and preventing attention sinks.
+ - **Momentum Attention** ([arXiv:2411.03884](https://arxiv.org/abs/2411.03884)) — Modifies Q
+ and K by subtracting a fraction of the previous position's Q and K values (causal
+ first-difference).
+
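The affine-scaled attention formula above can be sketched for a single head as follows (a minimal illustration without causal masking; α and β would be learned per head):

```python
import math

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def affine_scaled_attention(Q, K, V, alpha=1.0, beta=0.0):
    """Single-head sketch of [alpha * softmax(Q K^T / sqrt(d)) + beta] V."""
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K]
              for qr in Q]
    weights = [[alpha * w + beta for w in softmax(row)] for row in scores]
    # Weighted sum over the value rows.
    return [[sum(w * v for w, v in zip(wr, col)) for col in zip(*V)]
            for wr in weights]
```

With `alpha=1` and `beta=0` this reduces to standard scaled dot-product attention.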
+ **MLP**
+
+ - **Learnable Multipliers** ([arXiv:2601.04890](https://arxiv.org/abs/2601.04890)) — Adds
+ per-row and per-column learnable scalar parameters to each linear layer.
+ - **SimpleGPT** ([arXiv:2602.01212](https://arxiv.org/abs/2602.01212)) — A normalization
+ strategy derived from second-order geometry analysis, applied inside MLP projections to
+ improve optimization stability.
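A minimal sketch of the per-row/per-column multipliers on a linear layer (the scale values here are illustrative, not trained):

```python
def scaled_linear(x, W, row_scale, col_scale):
    """Linear layer whose effective weight is row_scale[i] * W[i][j] * col_scale[j].

    The scalars let the optimizer rescale whole rows/columns of the weight
    matrix with O(n) extra parameters instead of touching every entry of W.
    """
    return [
        row_scale[i] * sum(W[i][j] * col_scale[j] * x[j] for j in range(len(x)))
        for i in range(len(W))
    ]
```

With all scales set to 1 this is an ordinary (bias-free) linear layer.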
+
+ ---
+
+ ## Training
+
+ | Setting | Value |
+ |---|---|
+ | Dataset | FineWeb-Edu (sample-10BT) |
+ | Tokens seen | ~5.1M (157 steps × batch 64 × length 512) |
+ | Precision | FP8 native (E4M3 weights/activations, E5M2 gradients) + BF16 fallback |
+ | Optimizer | Conda (Column-Normalized Adam) + GPA |
+ | Learning rate | 6e-04 with linear warmup (10 % of steps) |
+ | Weight decay | 0.1 |
+ | Training time | ~0h 25m |
+ | Hardware | NVIDIA RTX 5090 (single GPU) |
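The E4M3/E5M2 split above follows the usual FP8 recipe: E4M3 keeps more mantissa precision for weights and activations, while E5M2's extra exponent bit gives gradients the wider dynamic range they need. The maximum finite values follow from the standard FP8 bit layouts:

```python
# Max finite value = (largest mantissa fraction) * 2**(max exponent).
# E4M3 (4 exp bits, 3 mantissa bits): 1.75 * 2**8  = 448
# E5M2 (5 exp bits, 2 mantissa bits): 1.75 * 2**15 = 57344
e4m3_max = (1 + 6 / 8) * 2 ** 8    # mantissa 110 (111 is reserved for NaN)
e5m2_max = (1 + 3 / 4) * 2 ** 15   # mantissa 11; E5M2 keeps IEEE inf/NaN
print(e4m3_max, e5m2_max)  # 448.0 57344.0
```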
+
+ ---
+
+ ## Limitations
+
+ - **Token budget** — ~1.5 B tokens seen; below the estimated optimum. Knowledge-intensive
+ tasks should improve with more training.
+ - **Gradient spike at step 40k** — A spike reorganized the attention pattern in layer 9 that
+ previously captured long-range token correlations. A checkpoint from ~step 38k is expected
+ to have better aggregate benchmark scores.
+ - **PolyNorm redundancy** — The quadratic branch has become partially redundant with the
+ linear branch. This will be corrected in the next training run.
+ - **Base model only** — Not instruction-tuned or aligned; purely a next-token-prediction
+ base model.
+
+ ---
+ ## References
+
+ All papers whose techniques are integrated into NeoLLM's architecture:
+
+ | Technique | Paper title | arXiv |
+ |---|---|---|
+ | SeeDNorm | Self-Rescaled Dynamic Normalization | [2510.22777](https://arxiv.org/abs/2510.22777) |
+ | MEA | Explicit Multi-head Attention | [2601.19611](https://arxiv.org/abs/2601.19611) |
+ | Learnable Multipliers | Freeing the Scale of Language Model Matrix Layers | [2601.04890](https://arxiv.org/abs/2601.04890) |
+ | Directional Routing | Directional Routing in Transformers | [2603.14923](https://arxiv.org/abs/2603.14923) |
+ | XSA | Exclusive Self Attention | [2603.09078](https://arxiv.org/abs/2603.09078) |
+ | Gated Attention | Gated Attention for LLMs | [2505.06708](https://arxiv.org/abs/2505.06708) |
+ | Affine-Scaled Attention | Affine-Scaled Attention | [2602.23057](https://arxiv.org/abs/2602.23057) |
+ | LNS | The Curse of Depth in LLMs | [2502.05795](https://arxiv.org/abs/2502.05795) |
+ | LUCID | Attention with Preconditioned Representations | [2602.10410](https://arxiv.org/abs/2602.10410) |
+ | FAN | Fourier Analysis Networks | [2502.21309](https://arxiv.org/abs/2502.21309) |
+ | SimpleGPT | SimpleGPT | [2602.01212](https://arxiv.org/abs/2602.01212) |
+ | GPAS | Gradient-Preserving Activation Scaling | [2506.22049](https://arxiv.org/abs/2506.22049) |
+ | PolyNorm | PolyNorm / PolyCom | [2602.04902](https://arxiv.org/abs/2602.04902) |
+ | Momentum Attention | Momentum Attention | [2411.03884](https://arxiv.org/abs/2411.03884) |
+ | TWEO (analysis ref.) | Transformers Without Extreme Outliers | [2511.23225](https://arxiv.org/abs/2511.23225) |
+ ---
+
+ ## Citation
+
+ ```bibtex
+ @misc{neollm2026,
+   title  = {NeoLLM: A Research Language Model Integrating Recent Attention and Normalization Techniques},
+   author = {KitsuVp},
+   year   = {2026},
+   url    = {https://huggingface.co/KitsuVp/NeoLLM}
+ }
+ ```
+
+ ---
+
+ ## Author
+
+ [@Kyokopom](https://x.com/Kyokopom) on X
+
+ ---
+
+ ## License
+
+ Apache 2.0