---
language:
- he
license: apache-2.0
tags:
- hebrew
- gpt
- causal-lm
- hebrew-nlp
- muon-optimizer
- sentencepiece
- rope
- swiglu
datasets:
- hebrew-wikipedia
- HeNLP/HeDC4
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: HebrewGPT-1B
  results:
  - task:
      type: text-generation
      name: Language Modeling
    metrics:
    - name: Perplexity
      type: perplexity
      value: 29.75
    - name: Top-1 Accuracy
      type: accuracy
      value: 38.4
    - name: Top-5 Accuracy
      type: accuracy
      value: 56.1
---

# HebrewGPT-1B 🇮🇱

**HebrewGPT-1B** is a 1.08 billion parameter autoregressive language model trained from scratch on 2.48 billion tokens of Hebrew text. It is the first open-source, Hebrew-native GPT model of this scale, featuring a custom architecture with SwiGLU activations, RoPE positional encoding, and RMSNorm, trained with the Muon optimizer combined with Lookahead and Stochastic Weight Averaging (SWA).

This model was developed as part of an autonomous AI research project exploring whether an AI agent could independently conduct meaningful ML research. The full paper and methodology are available at the links below.

- 📄 **Paper**: [Hebrew Language Model Research via Agentic AI](https://d11k83yu06biio.cloudfront.net/paper/hebrew-autoresearch.html)
- 💻 **GitHub**: [AgenticResearcher](https://github.com/fatherRonnen/AgenticResearcher)
- 🔬 **Ablation model**: [HebrewGPT-1B-AdamW](https://huggingface.co/Slasky/HebrewGPT-1B-AdamW) (AdamW baseline)
- 🧪 **Smaller model**: [HebrewGPT-296M](https://huggingface.co/Slasky/HebrewGPT-296M) (296M parameter variant)

## Post-Training Models

| Model | Method | Perplexity | Instruction Following | Notes |
|-------|--------|-----------|----------------------|-------|
| **[HebrewGPT-1B-Instruct](https://huggingface.co/Slasky/HebrewGPT-1B-Instruct)** | LoRA Phase 2 (rank=64) | **15.78** (↓47%) | **97.3%** | Best instruct variant: 65K curriculum distillation, ~$12 training cost |

> 💡 The instruction-tuned variant achieves **PPL 15.78** (down from 29.75 base) with zero repetition and 97.3% instruction following, trained for just ~$12 on a single A10G.

## Model Description

| Parameter | Value |
|---|---|
| Parameters | 1.08B |
| Hidden size (WIDTH) | 2048 |
| Layers (DEPTH) | 20 |
| Attention heads | 16 |
| Head dimension | 128 |
| MLP type | SwiGLU (intermediate_size=5504) |
| Positional encoding | RoPE (interleaved, θ=10000) |
| Normalization | RMSNorm |
| Vocabulary | 32,000 (Hebrew-native SentencePiece BPE) |
| Context length | 2,048 tokens |
| Weight tying | Yes (embedding ↔ output head) |
| Precision | bfloat16 |

### Architecture Details

HebrewGPT uses a decoder-only transformer with several modern design choices:

- **SwiGLU MLP**: Gate and up projections with SiLU activation, hidden dim = `int(2 × width × 4/3)` rounded up to a multiple of 64 = 5504
- **RoPE**: Rotary Position Embeddings with an interleaved pattern (`x[..., ::2]`, `x[..., 1::2]`); a minimal sketch of these two blocks follows this list
- **RMSNorm**: Pre-norm architecture with RMSNorm before attention and MLP
- **Weight tying**: Output projection shares weights with token embeddings
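
The two bullets above translate directly into a few lines of PyTorch. The following is a minimal, self-contained sketch using the stated dimensions; the class and function names (`SwiGLUMLP`, `apply_rope`) are illustrative and do not necessarily match the repository's actual code in `generate.py`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """SwiGLU feed-forward block: down(silu(gate(x)) * up(x))."""
    def __init__(self, width: int = 2048, hidden: int = 5504):
        # hidden = 5504 = int(2 * 2048 * 4/3) rounded up to a multiple of 64
        super().__init__()
        self.gate_proj = nn.Linear(width, hidden, bias=False)
        self.up_proj = nn.Linear(width, hidden, bias=False)
        self.down_proj = nn.Linear(hidden, width, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

def apply_rope(x: torch.Tensor, theta: float = 10000.0) -> torch.Tensor:
    """Interleaved RoPE: rotates the (x[..., ::2], x[..., 1::2]) channel pairs."""
    seq_len, head_dim = x.shape[-2], x.shape[-1]   # x: (batch, heads, seq, head_dim)
    inv_freq = 1.0 / (theta ** (torch.arange(0, head_dim, 2, device=x.device).float() / head_dim))
    angles = torch.arange(seq_len, device=x.device).float()[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()          # each (seq, head_dim / 2)
    x_even, x_odd = x[..., ::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., ::2] = x_even * cos - x_odd * sin
    out[..., 1::2] = x_even * sin + x_odd * cos
    return out
```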

## Training Details

### Optimizer
- **Muon** optimizer + **Lookahead** (k=5, α=0.6) + **Stochastic Weight Averaging (SWA)**; a setup sketch follows this list
- 4 cosine annealing cycles with warm restarts
- Dropout: 0.1
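
Muon and Lookahead are not part of PyTorch core, so the sketch below uses `AdamW` purely as a stand-in for the wrapped Muon+Lookahead optimizer; only the SWA averaging and the warm-restart cosine schedule are illustrated, and the learning rate and restart period are assumptions rather than reported settings.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts
from torch.optim.swa_utils import AveragedModel

model = torch.nn.Linear(2048, 2048)        # placeholder module standing in for HebrewGPT
inner_opt = torch.optim.AdamW(model.parameters(), lr=3e-4)  # stand-in for Muon + Lookahead(k=5, alpha=0.6)

# 4 cosine cycles with warm restarts over ~18,672 steps -> ~4,668 steps per cycle (assumed split)
scheduler = CosineAnnealingWarmRestarts(inner_opt, T_0=4668, T_mult=1)

swa_model = AveragedModel(model)           # running weight average; the released checkpoint is an SWA average

for step in range(100):                    # skeleton training loop with a dummy loss
    loss = model(torch.randn(8, 2048)).pow(2).mean()
    inner_opt.zero_grad()
    loss.backward()
    inner_opt.step()
    scheduler.step()
    if step > 50:                          # begin averaging late in training
        swa_model.update_parameters(model)
```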

### Data
2.48 billion tokens drawn from 12 Hebrew datasets, summarized here by source group:

| Dataset | Proportion |
|---|---|
| Ben Yehuda Project (literature) | 23% |
| Supreme Court rulings | 22% |
| C4 (Hebrew subset) | 20% |
| CC100 (Hebrew) | 19% |
| Hebrew Wikipedia | 12% |
| Task-specific data | 4% |

### Hardware & Cost
- **Hardware**: 8× NVIDIA H100 80GB GPUs
- **Training time**: ~8 hours
- **Steps**: ~18,672
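
As a back-of-envelope estimate (an assumption, not a reported configuration): roughly one pass over the 2.48B-token corpus in ~18,672 steps works out to about 133K tokens per optimizer step, i.e. a global batch of roughly 65 sequences of 2,048 tokens, or about 8 sequences per GPU.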

## Evaluation Results

### Overall Metrics

| Metric | Value |
|---|---|
| Validation BPB (SWA) | 25.89 |
| Perplexity | 29.75 |
| Top-1 Token Accuracy | 38.4% |
| Top-5 Token Accuracy | 56.1% |
| Top-10 Token Accuracy | 63.6% |
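
For reference, these token-level metrics can be computed from the model's logits on held-out text along the lines of the sketch below; the function name and batching are illustrative, not the project's actual evaluation harness.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def eval_batch(model, input_ids: torch.Tensor):
    """Next-token perplexity and top-1/top-5 accuracy for one batch of token ids."""
    logits = model(input_ids)                            # (batch, seq, vocab)
    shift_logits = logits[:, :-1, :]                     # predict token t+1 from prefix up to t
    shift_labels = input_ids[:, 1:]
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )
    ppl = loss.exp().item()
    top5 = shift_logits.topk(5, dim=-1).indices          # (batch, seq-1, 5)
    top1_acc = (top5[..., 0] == shift_labels).float().mean().item()
    top5_acc = (top5 == shift_labels.unsqueeze(-1)).any(dim=-1).float().mean().item()
    return ppl, top1_acc, top5_acc
```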

### Domain-Specific Perplexity

| Domain | Perplexity |
|---|---|
| Legal | 5.93 |
| Wikipedia | 11.50 |
| News | 24.81 |
| Conversational | 29.79 |
| Literature | 31.42 |

### Downstream Task Evaluation

| Task | Accuracy |
|------|----------|
| SNLI | 50% |
| Sentiment | 33% |
| QA | 20% |
| Trivia | 13% |
| **Average** | **29.2%** |

### Comparison with Other Hebrew Models

| Model | Top-1 Accuracy | Top-5 Accuracy |
|---|---|---|
| **HebrewGPT-1B (this model)** | **38.4%** | **56.1%** |
| HebrewGPT-296M | 39.6% | 68.4% |
| AlephBERT | ~35% | โ€” |
| HeBERT | ~33% | โ€” |

*Note: AlephBERT and HeBERT are encoder models (BERT-based) and not directly comparable for generation tasks. Token prediction accuracy is provided for reference on Hebrew language understanding capability.*

### Optimizer Ablation

Training with AdamW instead of Muon (all else equal) yields val_bpb=28.09, a **12.3% degradation**, demonstrating the significant advantage of Muon at the 1B scale. See [HebrewGPT-1B-AdamW](https://huggingface.co/Slasky/HebrewGPT-1B-AdamW) for details.

## Usage

> ⚠️ **Custom Architecture**: This model uses a custom architecture that is not a standard HuggingFace `transformers` model. You must use the provided model class definition or reference the [GitHub repository](https://github.com/fatherRonnen/AgenticResearcher).

### Quick Start

```python
import torch
import sentencepiece as spm

# Load tokenizer
sp = spm.SentencePieceProcessor()
sp.Load("tokenizer.model")

# Load model (see generate.py for full model class definition)
from generate import HebrewGPT, ModelConfig

config = ModelConfig(
    vocab_size=32000,
    width=2048,
    depth=20,
    n_heads=16,
    head_dim=128,
    max_seq_len=2048,
    dropout=0.0,  # No dropout at inference
)
model = HebrewGPT(config)

# Load weights
state_dict = torch.load("swa_best.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval().to("cuda" if torch.cuda.is_available() else "cpu")

# Generate
prompt = "בראשית ברא אלוהים את"  # "In the beginning God created..."
input_ids = sp.Encode(prompt)
input_tensor = torch.tensor([input_ids], device=model.tok_emb.weight.device)

with torch.no_grad():
    for _ in range(100):
        logits = model(input_tensor)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_tensor = torch.cat([input_tensor, next_token], dim=1)
        if input_tensor.shape[1] > 2048:
            break

generated = sp.Decode(input_tensor[0].tolist())
print(generated)
```
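
The loop above decodes greedily (argmax), which tends to repeat. A sampling variant with temperature and top-k, reusing the `model`, `sp`, and `input_tensor` objects from the quick start, might look like the following; the `temperature` and `top_k` values are illustrative, not tuned defaults.

```python
# Temperature + top-k sampling (values are illustrative)
temperature, top_k = 0.8, 40

with torch.no_grad():
    for _ in range(100):
        logits = model(input_tensor)[:, -1, :] / temperature
        topk_vals, topk_idx = logits.topk(top_k, dim=-1)
        probs = torch.softmax(topk_vals, dim=-1)
        next_token = topk_idx.gather(-1, torch.multinomial(probs, num_samples=1))
        input_tensor = torch.cat([input_tensor, next_token], dim=1)
        if input_tensor.shape[1] >= 2048:
            break

print(sp.Decode(input_tensor[0].tolist()))
```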

### Full Example

See [`generate.py`](generate.py) in this repository for a complete standalone script with the full model architecture definition and generation utilities.

## Hebrew Generation Examples

<div dir="rtl">

**Prompt**: בראשית ברא אלוהים את

**Generated**: בראשית ברא אלוהים את השמים ואת הארץ. והארץ היתה תוהו ובוהו וחושך על פני תהום...

*(English: "In the beginning God created the heavens and the earth. And the earth was formless and void, and darkness was upon the face of the deep...")*

---

**Prompt**: בית המשפט העליון פסק כי

**Generated**: בית המשפט העליון פסק כי יש לקבל את הערעור ולהחזיר את התיק לדיון מחדש בפני בית המשפט המחוזי...

*(English: "The Supreme Court ruled that the appeal should be granted and the case remanded for a new hearing before the District Court...")*

---

**Prompt**: הטכנולוגיה המודרנית משנה את

**Generated**: הטכנולוגיה המודרנית משנה את האופן שבו אנו חיים, עובדים ומתקשרים זה עם זה...

*(English: "Modern technology is changing the way we live, work, and communicate with one another...")*

</div>

*Note: Generated examples are illustrative. Actual outputs depend on sampling parameters.*

## Limitations

- **Hebrew-only**: The model was trained exclusively on Hebrew text. It has limited ability to handle other languages.
- **No instruction tuning**: This is a base language model. It has not been fine-tuned for chat, instruction following, or safety alignment. See [HebrewGPT-1B-Instruct](https://huggingface.co/Slasky/HebrewGPT-1B-Instruct) for the instruction-tuned variant.
- **Context length**: Limited to 2,048 tokens.
- **Training data biases**: The model reflects biases present in its training data, which includes legal documents, literature, and web text.
- **Custom architecture**: Requires the provided model class to load โ€” not compatible with standard `AutoModelForCausalLM`.
- **No safety filtering**: The model may generate inappropriate, biased, or factually incorrect content.

## Citation

```bibtex
@article{slasky2025hebrewgpt,
  title={Hebrew Language Model Research via Agentic AI: Training HebrewGPT from Scratch},
  author={Slasky, Ronnen},
  year={2025},
  url={https://d11k83yu06biio.cloudfront.net/paper/hebrew-autoresearch.html}
}
```

## Acknowledgments

- **Loki**: AI research assistant (Amazon Bedrock on OpenClaw) who assisted throughout the research process
- **Andrej Karpathy**: for the autoresearch framework and inspiration
- The Hebrew NLP community for open datasets

## Contact

- **Author**: Ronnen Slasky
- **Email**: ronnen@slasky.com
- **GitHub**: [fatherRonnen/AgenticResearcher](https://github.com/fatherRonnen/AgenticResearcher)