---
language:
  - he
license: apache-2.0
tags:
  - hebrew
  - gpt
  - causal-lm
  - hebrew-nlp
  - muon-optimizer
  - sentencepiece
  - rope
  - swiglu
datasets:
  - hebrew-wikipedia
  - HeNLP/HeDC4
library_name: transformers
pipeline_tag: text-generation
model-index:
  - name: HebrewGPT-1B
    results:
      - task:
          type: text-generation
          name: Language Modeling
        metrics:
          - name: Perplexity
            type: perplexity
            value: 29.75
          - name: Top-1 Accuracy
            type: accuracy
            value: 38.4
          - name: Top-5 Accuracy
            type: accuracy
            value: 56.1
---

# HebrewGPT-1B 🇮🇱

**HebrewGPT-1B** is a 1.08-billion-parameter autoregressive language model trained from scratch on 2.48 billion tokens of Hebrew text. It is the first open-source, Hebrew-native GPT model of this scale, featuring a custom architecture with SwiGLU activations, RoPE positional encoding, and RMSNorm, trained with the Muon optimizer combined with Lookahead and Stochastic Weight Averaging (SWA).

This model was developed as part of an autonomous AI research project exploring whether an AI agent could independently conduct meaningful ML research. The full paper and methodology are available at the links below.

## Post-Training Models

| Model | Method | Perplexity | Instruction Following | Notes |
|---|---|---|---|---|
| HebrewGPT-1B-Instruct | LoRA Phase 2 (rank=64) | 15.78 (↓47%) | 97.3% | Best instruct variant: 65K curriculum distillation, ~$12 training cost |

> 💡 The instruction-tuned variant achieves PPL 15.78 (down from 29.75 base) with zero repetition and 97.3% instruction following, trained for just ~$12 on a single A10G.

## Model Description

| Parameter | Value |
|---|---|
| Parameters | 1.08B |
| Hidden size (WIDTH) | 2048 |
| Layers (DEPTH) | 20 |
| Attention heads | 16 |
| Head dimension | 128 |
| MLP type | SwiGLU (intermediate_size=5504) |
| Positional encoding | RoPE (interleaved, θ=10000) |
| Normalization | RMSNorm |
| Vocabulary | 32,000 (Hebrew-native SentencePiece BPE) |
| Context length | 2,048 tokens |
| Weight tying | Yes (embedding ↔ output head) |
| Precision | bfloat16 |

### Architecture Details

HebrewGPT uses a decoder-only transformer with several modern design choices, sketched in code after this list:

- **SwiGLU MLP**: Gate and up projections with SiLU activation; hidden dim = int(2 × width × 4/3) rounded up to a multiple of 64 = 5504
- **RoPE**: Rotary Position Embeddings with interleaved pattern (`x[..., ::2]`, `x[..., 1::2]`)
- **RMSNorm**: Pre-norm architecture with RMSNorm before attention and MLP
- **Weight tying**: Output projection shares weights with token embeddings
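
For concreteness, here is a minimal PyTorch sketch of these components. It is illustrative only, not the repository's implementation (the actual classes live in `generate.py`), and the names `RMSNorm`, `SwiGLUMLP`, and `apply_rope_interleaved` are chosen here for clarity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square normalization: scale-only, no mean subtraction."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

class SwiGLUMLP(nn.Module):
    """SwiGLU feed-forward: down(SiLU(gate(x)) * up(x))."""
    def __init__(self, width: int = 2048):
        super().__init__()
        hidden = int(2 * width * 4 / 3)          # 5461 for width=2048
        hidden = ((hidden + 63) // 64) * 64      # round up to multiple of 64 -> 5504
        self.gate_proj = nn.Linear(width, hidden, bias=False)
        self.up_proj = nn.Linear(width, hidden, bias=False)
        self.down_proj = nn.Linear(hidden, width, bias=False)

    def forward(self, x):
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

def apply_rope_interleaved(x: torch.Tensor, theta: float = 10000.0) -> torch.Tensor:
    """Rotate interleaved even/odd feature pairs; x is (batch, heads, seq, head_dim)."""
    _, _, seq_len, dim = x.shape
    pos = torch.arange(seq_len, device=x.device, dtype=torch.float32)
    freqs = theta ** (-torch.arange(0, dim, 2, device=x.device, dtype=torch.float32) / dim)
    angles = pos[:, None] * freqs[None, :]       # (seq, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., ::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., ::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```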

## Training Details

### Optimizer

- Muon optimizer + Lookahead (k=5, α=0.6) + Stochastic Weight Averaging (SWA); a schematic training loop is sketched after this list
- 4 cosine annealing cycles with warm restarts
- Dropout: 0.1
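
Muon and Lookahead are not part of core PyTorch, so the sketch below uses `AdamW` as a stand-in and shows only the warm-restart scheduling and SWA scaffolding with standard `torch` utilities. Treat it as a schematic of the setup, not the actual training script; the learning rate, batch shape, and SWA update cadence are assumptions:

```python
import torch
import torch.nn.functional as F
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts
from torch.optim.swa_utils import AveragedModel

from generate import HebrewGPT, ModelConfig  # model definition from this repo

def synthetic_batches(num_steps, batch=8, seq=512, vocab=32000, device="cuda"):
    # Stand-in data pipeline: random tokens with next-token targets.
    for _ in range(num_steps):
        ids = torch.randint(vocab, (batch, seq + 1), device=device)
        yield ids[:, :-1], ids[:, 1:]

config = ModelConfig(vocab_size=32000, width=2048, depth=20, n_heads=16,
                     head_dim=128, max_seq_len=2048, dropout=0.1)
model = HebrewGPT(config).to("cuda")

# AdamW stands in for the actual Muon + Lookahead (k=5, alpha=0.6) stack.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)  # lr is an assumption

# 4 cosine cycles over ~18,672 steps -> roughly 4,668 steps per cycle.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=4668)

swa_model = AveragedModel(model)  # maintains the running weight average (SWA)

for step, (input_ids, targets) in enumerate(synthetic_batches(18672)):
    logits = model(input_ids)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
    if step % 100 == 0:  # SWA update cadence here is an assumption
        swa_model.update_parameters(model)
```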

### Data

2.48 billion tokens from 12 Hebrew datasets (a mixture-sampling sketch follows the table):

| Dataset | Proportion |
|---|---|
| Ben Yehuda Project (literature) | 23% |
| Supreme Court rulings | 22% |
| C4 (Hebrew subset) | 20% |
| CC100 (Hebrew) | 19% |
| Hebrew Wikipedia | 12% |
| Task-specific data | 4% |
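
As a rough illustration of consuming such a mixture, here is a hypothetical weight table mirroring the proportions above, with proportional source sampling; the actual source names and pipeline in the training code may differ:

```python
import random

# Hypothetical source names; proportions copied from the table above.
MIXTURE = {
    "ben_yehuda": 0.23,
    "supreme_court": 0.22,
    "c4_hebrew": 0.20,
    "cc100_hebrew": 0.19,
    "hebrew_wikipedia": 0.12,
    "task_specific": 0.04,
}

def sample_source(rng: random.Random) -> str:
    """Pick a data source with probability proportional to its mixture weight."""
    names, weights = zip(*MIXTURE.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
print([sample_source(rng) for _ in range(5)])
```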

### Hardware & Cost

- Hardware: 8× NVIDIA H100 80GB GPUs
- Training time: ~8 hours
- Steps: ~18,672

## Evaluation Results

### Overall Metrics

| Metric | Value |
|---|---|
| Validation BPB (SWA) | 25.89 |
| Perplexity | 29.75 |
| Top-1 Token Accuracy | 38.4% |
| Top-5 Token Accuracy | 56.1% |
| Top-10 Token Accuracy | 63.6% |
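
For reference, token-level metrics of this kind can be computed along the following lines; this is a generic sketch (perplexity as the exponential of the mean next-token negative log-likelihood, top-k accuracy over the same shifted targets), not the project's evaluation script:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def lm_metrics(model, input_ids: torch.Tensor, k: int = 5):
    """Perplexity and top-k next-token accuracy on a batch of token ids."""
    logits = model(input_ids)[:, :-1, :]          # predict token t+1 from its prefix
    targets = input_ids[:, 1:]
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    perplexity = loss.exp().item()                # ppl = exp(mean NLL)
    topk = logits.topk(k, dim=-1).indices         # (batch, seq-1, k)
    topk_acc = (topk == targets.unsqueeze(-1)).any(dim=-1).float().mean().item()
    return perplexity, topk_acc
```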

### Domain-Specific Perplexity

| Domain | Perplexity |
|---|---|
| Legal | 5.93 |
| Wikipedia | 11.50 |
| News | 24.81 |
| Conversational | 29.79 |
| Literature | 31.42 |

### Downstream Task Evaluation

| Task | Accuracy |
|---|---|
| SNLI | 50% |
| Sentiment | 33% |
| QA | 20% |
| Trivia | 13% |
| **Average** | **29.2%** |

### Comparison with Other Hebrew Models

| Model | Top-1 Accuracy | Top-5 Accuracy |
|---|---|---|
| **HebrewGPT-1B (this model)** | 38.4% | 56.1% |
| HebrewGPT-296M | 39.6% | 68.4% |
| AlephBERT | ~35% | n/a |
| HeBERT | ~33% | n/a |

*Note: AlephBERT and HeBERT are encoder-only (BERT-based) models and are not directly comparable on generation tasks; their token-prediction accuracy is shown only as a rough reference for Hebrew language understanding.*

### Optimizer Ablation

Training with AdamW instead of Muon (all else equal) yields val_bpb=28.09, a 12.3% degradation, demonstrating the significant advantage of Muon at the 1B scale. See HebrewGPT-1B-AdamW for details.

## Usage

โš ๏ธ Custom Architecture: This model uses a custom architecture that is not a standard HuggingFace transformers model. You must use the provided model class definition or reference the GitHub repository.

### Quick Start

```python
import torch
import sentencepiece as spm

# Load tokenizer
sp = spm.SentencePieceProcessor()
sp.Load("tokenizer.model")

# Load model (see generate.py for the full model class definition)
from generate import HebrewGPT, ModelConfig

config = ModelConfig(
    vocab_size=32000,
    width=2048,
    depth=20,
    n_heads=16,
    head_dim=128,
    max_seq_len=2048,
    dropout=0.0,  # no dropout at inference
)
model = HebrewGPT(config)

# Load the SWA-averaged weights
state_dict = torch.load("swa_best.pt", map_location="cpu")
model.load_state_dict(state_dict)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.eval().to(device)

# Generate (greedy decoding)
prompt = "בראשית ברא אלהים את"
input_ids = sp.Encode(prompt)
input_tensor = torch.tensor([input_ids], device=device)

with torch.no_grad():
    for _ in range(100):
        if input_tensor.shape[1] >= 2048:  # stay within the context window
            break
        logits = model(input_tensor)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_tensor = torch.cat([input_tensor, next_token], dim=1)

generated = sp.Decode(input_tensor[0].tolist())
print(generated)
```
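
The loop above decodes greedily, so its output is deterministic. For more varied generations (see the note under the examples below), you can swap the logits/argmax lines for temperature plus top-k sampling; the temperature 0.8 and k=50 here are arbitrary starting points, not tuned values:

```python
# Inside the generation loop, replace the logits/argmax lines with:
logits = model(input_tensor)[:, -1, :] / 0.8       # temperature 0.8
topk_vals, topk_idx = logits.topk(50, dim=-1)      # restrict to the top-50 tokens
probs = torch.softmax(topk_vals, dim=-1)
next_token = topk_idx.gather(-1, torch.multinomial(probs, num_samples=1))
```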

### Full Example

See `generate.py` in this repository for a complete standalone script with the full model architecture definition and generation utilities.

## Hebrew Generation Examples

**Prompt:** בראשית ברא אלהים את

**Generated:** בראשית ברא אלהים את השמים ואת הארץ. והארץ היתה תוהו ובוהו וחושך על פני תהום...

*(English: "In the beginning God created the heavens and the earth. And the earth was without form and void, and darkness was over the face of the deep...")*

**Prompt:** בית המשפט העליון פסק כי

**Generated:** בית המשפט העליון פסק כי יש לקבל את הערעור ולהחזיר את התיק לדיון מחדש בפני בית המשפט המחוזי...

*(English: "The Supreme Court ruled that the appeal should be granted and the case remanded for a new hearing before the District Court...")*

**Prompt:** הטכנולוגיה המודרנית משנה את

**Generated:** הטכנולוגיה המודרנית משנה את האופן שבו אנו חיים, עובדים ומתקשרים זה עם זה...

*(English: "Modern technology is changing the way we live, work, and communicate with one another...")*

Note: Generated examples are illustrative. Actual outputs depend on sampling parameters.

## Limitations

- **Hebrew-only**: The model was trained exclusively on Hebrew text. It has limited ability to handle other languages.
- **No instruction tuning**: This is a base language model. It has not been fine-tuned for chat, instruction following, or safety alignment. See HebrewGPT-1B-Instruct for the instruction-tuned variant.
- **Context length**: Limited to 2,048 tokens.
- **Training data biases**: The model reflects biases present in its training data, which includes legal documents, literature, and web text.
- **Custom architecture**: Requires the provided model class to load; not compatible with the standard `AutoModelForCausalLM`.
- **No safety filtering**: The model may generate inappropriate, biased, or factually incorrect content.

## Citation

```bibtex
@article{slasky2025hebrewgpt,
  title={Hebrew Language Model Research via Agentic AI: Training HebrewGPT from Scratch},
  author={Slasky, Ronnen},
  year={2025},
  url={https://d11k83yu06biio.cloudfront.net/paper/hebrew-autoresearch.html}
}
```

## Acknowledgments

- **Loki**, the AI research assistant (Amazon Bedrock on OpenClaw) who assisted throughout the research process
- **Andrej Karpathy**, for the autoresearch framework and inspiration
- The Hebrew NLP community, for open datasets

## Contact