---
library_name: transformers
license: other
license_name: "openai-gpt2-license"
license_link: "https://github.com/openai/gpt-2/blob/master/LICENSE"
base_model:
- openai-community/gpt2-medium
tags: []
---
# Model Card for `schaeff/gpt2-medium_vanilla500`
Associated publication: *Transformers Don’t Need LayerNorm at Inference Time: Scaling LayerNorm Removal to GPT-2 XL and the Implications for Mechanistic Interpretability* ([arXiv:2507.02559](https://arxiv.org/abs/2507.02559))
Associated GitHub: [removing-layer-norm](https://github.com/submarat/removing-layer-norm)
This model is based on *openai-community/gpt2-medium* and was fine-tuned on OpenWebText for 500 iterations with 0.5M tokens per iteration. It has the same architecture as the corresponding GPT-2 model and is made available for reproducibility of the results reported in the associated publication.
## Usage
You can load the model with `transformers`:
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("schaeff/gpt2-medium_vanilla500")
```
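For example, a minimal generation sketch. Since this checkpoint keeps the standard GPT-2 architecture, the stock `gpt2` tokenizer should apply, though that pairing is an assumption here, not something stated in the repo:

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumption: the checkpoint uses the standard GPT-2 vocabulary/tokenizer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("schaeff/gpt2-medium_vanilla500")
model.eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```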
## Model Collection
This model is part of a collection of LayerNorm-free models. The table below provides links and details.
### Evaluation results of LN-free, vanilla fine-tuned, and original GPT-2 models
*Reported values are mean cross-entropy losses, computed over 10.2M tokens for The Pile and The Pile-filtered and over 4.5M tokens for the OpenWebText (OWT) validation set. For each model size and dataset, the lowest loss is highlighted in **bold**, and the loss difference between the LN-free model and the best-performing model is shown in brackets.*
| Model | FT steps | [OWT (val)](https://huggingface.co/datasets/Skylion007/openwebtext) | [The Pile](https://huggingface.co/datasets/apollo-research/monology-pile-uncopyrighted-tokenizer-gpt2) | [The Pile-filtered](https://huggingface.co/datasets/lucabaroni/apollo-pile-filtered-10k) |
|-------|----------|-----------|----------|-------------------|
| OpenAI [GPT-2 Small original](https://huggingface.co/openai-community/gpt2) | 0 | 3.1006 | **2.8450** | **2.7899** |
| schaeff [GPT-2 Small vanilla](https://huggingface.co/schaeff/gpt2-small_vanilla300) | 300 | **3.0126** | 2.8511 | 2.8112 |
| schaeff [GPT-2 Small LN-free](https://huggingface.co/schaeff/gpt2-small_LNFree300) | 300 | 3.0797 [+0.0671] | 2.8852 [+0.0402] | 2.8757 [+0.0858] |
||||||
| OpenAI [GPT-2 Medium original](https://huggingface.co/openai-community/gpt2-medium) | 0 | 2.8145 | **2.5163** | **2.5390** |
| schaeff [GPT-2 Medium vanilla](https://huggingface.co/schaeff/gpt2-medium_vanilla500) | 500 | **2.7390** | 2.5752 | 2.5724 |
| schaeff [GPT-2 Medium LN-free](https://huggingface.co/schaeff/gpt2-medium_LNFree500) | 500 | 2.7642 [+0.0252] | 2.6579 [+0.1416] | 2.6352 [+0.0962] |
||||||
| OpenAI [GPT-2 Large original](https://huggingface.co/openai-community/gpt2-large) | 0 | 2.6623 | **2.5320** | **2.4347** |
| schaeff [GPT-2 Large vanilla](https://huggingface.co/schaeff/gpt2-large_vanilla600) | 600 | **2.6240** | 2.6233 | 2.5074 |
| schaeff [GPT-2 Large LN-free](https://huggingface.co/schaeff/gpt2-large_LNFree600) | 600 | 2.6384 [+0.0144] | 2.7504 [+0.2184] | 2.5159 [+0.0812] |
||||||
| OpenAI [GPT-2 XL original](https://huggingface.co/openai-community/gpt2-xl) | 0 | 2.5567 | **2.4436**¹ | **2.3739** |
| schaeff [GPT-2 XL vanilla](https://huggingface.co/schaeff/gpt2-xl_vanilla800) | 800 | **2.4799** | 2.4673 | 2.3821 |
| schaeff [GPT-2 XL LN-free](https://huggingface.co/schaeff/gpt2-xl_LNFree800) | 800 | 2.5052 [+0.0253] | 130.2197² | 2.3992 [+0.0253] |
#### Footnotes
1. GPT-2 XL original on The Pile (per-token loss distribution): median 1.0103, 95% percentile range [0.0005, 10.6193], 99.9% percentile range [≈0.0000, 43.0064].
2. GPT-2 XL LN-free on The Pile (per-token loss distribution): median 1.0937, 95% percentile range [0.0004, 10.7548], 99.9% percentile range [≈0.0000, 48.6459]. The similar medians suggest the large mean loss is driven by a small number of very-high-loss outlier tokens rather than uniform degradation.
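The table's mean cross-entropy numbers can be approximated along these lines. The sketch below is illustrative, not the authors' evaluation script: it streams OpenWebText (which has only a `train` split, so the held-out portion used for validation is an assumption), uses a 1024-token context, and averages the loss per predicted token:

```python
import torch
from datasets import load_dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative sketch, not the exact evaluation pipeline from the paper.
# Assumptions: "text" field, 1024-token windows, per-token loss averaging.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("schaeff/gpt2-medium_vanilla500").eval()

ds = load_dataset("Skylion007/openwebtext", split="train", streaming=True)

total_loss, total_tokens = 0.0, 0
with torch.no_grad():
    for example in ds.take(100):  # small sample for illustration only
        ids = tokenizer(example["text"], return_tensors="pt",
                        truncation=True, max_length=1024).input_ids
        if ids.shape[1] < 2:
            continue
        out = model(ids, labels=ids)  # HF shifts labels internally
        n = ids.shape[1] - 1          # number of predicted tokens
        total_loss += out.loss.item() * n
        total_tokens += n

print(f"mean cross-entropy: {total_loss / total_tokens:.4f}")
```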
## Citation
If you found our work useful, please cite:
```bibtex
@misc{gpt2layernorm2025,
  author        = {Baroni, Luca and Khara, Galvin and Schaeffer, Joachim and Subkhankulov, Marat and Heimersheim, Stefan},
  title         = {Transformers Don't Need LayerNorm at Inference Time: Scaling LayerNorm Removal to GPT-2 XL and the Implications for Mechanistic Interpretability},
  year          = {2025},
  eprint        = {2507.02559},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2507.02559v1}
}
```