edeneldith committed
Commit 85777f5 · verified · 1 Parent(s): 9d04da4

They do be wigglin'

Files changed (1): README.md (+128, -3)

README.md CHANGED

---
license: gpl-3.0
tags:
- pytorch
- gpt2
- transformer
- oscillating-activation
- bio-inspired
- language-model
language:
- en
datasets:
- openwebtext
- HuggingFaceTB/smoltalk
pipeline_tag: text-generation
---

# WiggleGPT

A 124M-parameter transformer that challenges a 56-year-old assumption in neural network design.

![WiggleGPT Architecture](model_architecture.png)

## What Makes It Different?

Since Minsky and Papert's *Perceptrons* (1969), neural networks have relied on **monotonic activation functions** (Sigmoid, ReLU, GELU), which force a network to use multiple hidden layers to solve non-linearly separable problems such as XOR.

WiggleGPT replaces monotonic activations with **learnable oscillating functions**, enabling a single neuron to carve multiple decision boundaries:

```
f(x) = sin(ωx + φ) · tanh(x) + baseline
```

where ω (frequency) and φ (phase) are **learnable per-neuron parameters**.
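
As a concrete sketch (not the repository's exact code: the module name, parameter initialization, and shapes below are assumptions, and `model_bio.py` in the GitHub repo is authoritative), this activation can be written as a small PyTorch module with one learnable frequency, phase, and baseline per neuron:

```python
import torch
import torch.nn as nn

class OscillatingActivation(nn.Module):
    """f(x) = sin(omega*x + phi) * tanh(x) + baseline, per neuron.

    Hypothetical sketch; the init mirrors the omega range [0.8, 1.2]
    reported in the Results section below.
    """
    def __init__(self, num_neurons: int):
        super().__init__()
        self.omega = nn.Parameter(torch.empty(num_neurons).uniform_(0.8, 1.2))
        self.phi = nn.Parameter(torch.zeros(num_neurons))
        self.baseline = nn.Parameter(torch.zeros(num_neurons))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., num_neurons); parameters broadcast over leading dims
        return torch.sin(self.omega * x + self.phi) * torch.tanh(x) + self.baseline

# Single-neuron XOR check: with weights (1, 1) and bias 0, the
# pre-activation s = x1 + x2 is 0, 1, 2 for XOR classes 0, 1, 0.
# Fixing omega = pi/2, phi = 0 puts only the middle case above 0.5,
# which no monotonic activation can do with a single neuron.
act = OscillatingActivation(1)
with torch.no_grad():
    act.omega.fill_(torch.pi / 2)
    s = torch.tensor([[0.0], [1.0], [2.0]])  # x1 + x2 for (0,0), (0,1)/(1,0), (1,1)
    print(act(s).squeeze() > 0.5)            # tensor([False, True, False])
```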

## Results

| Model | Parameters | Val Loss | Notes |
|-------|------------|----------|-------|
| **WiggleGPT** | 124M | **3.1621** | Oscillating activation |
| GPT-2 | 124M | ~3.12 | Standard GELU baseline |

**Within about 1.3% of the GPT-2 baseline's validation loss**, indicating that oscillating activations can serve as a functional drop-in replacement at scale.

### The Model Actually Learned to Oscillate

| Parameter | Init | After Training | Change |
|-----------|------|----------------|--------|
| ω mean | 1.0 | 1.096 | +9.6% |
| ω std | 0.1 | **0.602** | **6× increase** |
| ω range | [0.8, 1.2] | [-0.19, 5.17] | Massive expansion |

- **95% of neurons retained active oscillation** (ω > 0.1)
- Some neurons learned frequencies as high as ω = 5.17 (the sine's argument advances more than five radians per unit input)
- Full phase coverage [-π, +π] after training
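
These statistics can be re-derived from the released checkpoint. A minimal sketch, assuming the frequency parameters carry "omega" in their state-dict names (an assumption; check `model_bio.py` for the actual naming):

```python
import torch

# Collect every learned frequency from the pretrained checkpoint
state = torch.load('ckpt_pretrain.pt', map_location='cpu')['model']
omegas = torch.cat([v.flatten() for k, v in state.items() if 'omega' in k])

print(f"mean={omegas.mean():.3f}  std={omegas.std():.3f}  "
      f"range=[{omegas.min():.2f}, {omegas.max():.2f}]")
print(f"active oscillators (omega > 0.1): {(omegas > 0.1).float().mean():.1%}")
```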

## Checkpoints

| File | Description |
|------|-------------|
| `ckpt_pretrain.pt` | Base model pretrained on OpenWebText (~600k iterations) |
| `ckpt_finetune.pt` | Fine-tuned on SmolTalk2 (instruction following) |

## Architecture

| Component | Specification |
|-----------|---------------|
| Parameters | 123,697,920 |
| Layers | 12 |
| Attention heads | 12 |
| Embedding dimension | 768 |
| Oscillating neurons | 36,864 (each with learnable ω, φ, and baseline) |
| Normalization | RMSNorm |
| Position encoding | RoPE (rotary) |
| Attention | Flash Attention (when available) |
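
The oscillating-neuron count follows from the standard GPT-2 shape: each of the 12 blocks applies the activation across an MLP hidden width of 4 × 768 = 3,072 units. A quick arithmetic check (plain Python, no repository code assumed):

```python
n_layer, n_embd = 12, 768
mlp_hidden = 4 * n_embd       # GPT-2-style MLP expansion: 3,072 units per block
print(n_layer * mlp_hidden)   # 36,864 oscillating neurons
```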

## Usage

See the [GitHub repository](https://github.com/Eden-Eldith/WiggleGPT) for full training, inference, and chat scripts.

```python
# Quick inference example
import torch
from model_bio import GPT, GPTConfig

# Load the pretrained checkpoint and rebuild the model from its stored config
checkpoint = torch.load('ckpt_pretrain.pt', map_location='cuda')
config = GPTConfig(**checkpoint['config'])
model = GPT(config)
model.load_state_dict(checkpoint['model'])
model.to('cuda')  # move parameters onto the GPU before generating
model.eval()

# Generate text (see sample_bio.py for the full implementation)
```
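
A follow-on generation call might look like the sketch below. It assumes `GPT.generate` keeps a nanoGPT-style `generate(idx, max_new_tokens, temperature, top_k)` signature and that the model uses GPT-2 BPE via `tiktoken`; both are assumptions to verify against `sample_bio.py`:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")  # assumed tokenizer
prompt = "The wiggling neurons"
idx = torch.tensor([enc.encode(prompt)], dtype=torch.long, device='cuda')

with torch.no_grad():
    # signature assumed from nanoGPT-style models
    out = model.generate(idx, max_new_tokens=100, temperature=0.8, top_k=200)

print(enc.decode(out[0].tolist()))
```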

## Training Details

**Pretraining:**
- Dataset: OpenWebText (~9B tokens)
- Iterations: 600,000
- Hardware: RTX 3070 (steps 0–354k) → RTX 5060 Ti 16GB (steps 354k–600k)
- Time: roughly 20 days total (~15 days on the 3070, ~5 days on the 5060 Ti)

**Fine-tuning:**
- Dataset: SmolTalk2 (406K examples)
- Oscillation parameters (ω, φ) remained stable: 0.0% of neurons shifted by more than 0.1

## Citation

```bibtex
@software{wigglegpt2025,
  author = {O'Brien, Phillip C.},
  title  = {WiggleGPT: Revisiting the Monotonicity Assumption in Neural Networks via Oscillating Activation Functions},
  year   = {2025},
  url    = {https://github.com/Eden-Eldith/WiggleGPT}
}
```

## Author

**Eden (Phillip C. O'Brien)**
Independent AI Researcher | ORCID: [0009-0007-3961-1182](https://orcid.org/0009-0007-3961-1182)

Built in a garage lab in Gosport, UK. No academic affiliation, no institutional funding; just curiosity and an RTX 3070.

## License

GPL-3.0. If you build on this, keep it open source.