Scaling Laws for Neural Language Models

Alice Martin (Hugging Face), Bob Chen (Hugging Face, MIT), Carol Wu (MIT)

April 13, 2026

Welcome to the Collaborative Editor

This demo article showcases every content type available in the editor. Feel free to edit, delete, or rewrite anything — the AI assistant in the left panel can help you.

Text formatting

You can make text bold, italic, or strikethrough. Combine them for bold italic. Use inline code for technical terms, and add links to external resources.

Lists

Unordered list

  • First item with some context

  • Second item — supports rich formatting inside

  • Third item with inline code

Ordered list

  1. Prepare the dataset

  2. Fine-tune the model

  3. Evaluate on the test split

  4. Deploy to production

Blockquote

"The best way to predict the future is to invent it." — Alan Kay

Code block

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the weights in bfloat16 and let Accelerate place layers across devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "The future of AI is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Table

| Model | Parameters | MMLU | HellaSwag |
| --- | --- | --- | --- |
| LLaMA 3 8B | 8B | 66.6 | 82.0 |
| LLaMA 3 70B | 70B | 79.5 | 87.3 |
| GPT-4 | ~1.8T | 86.4 | 95.3 |
| Mixtral 8x7B | 46.7B | 70.6 | 84.4 |

Mathematics

Inline math works with double dollars: the quadratic formula is $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$ and Euler's identity is $e^{i\pi} + 1 = 0$.

Block equations use triple dollars. Here is a standard autoregressive loss:

$$\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})$$
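As a quick illustration, here is a minimal PyTorch sketch of this loss; the batch shape and vocabulary size are arbitrary placeholders, not values from this article:

```python
import torch
import torch.nn.functional as F

# Dummy model outputs: (batch, seq_len, vocab_size) logits and matching token ids.
logits = torch.randn(2, 10, 32000)
tokens = torch.randint(0, 32000, (2, 10))

# Shift by one so the logits at position t predict the token at position t+1,
# then average the negative log-likelihood over all predicted positions.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())
```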

And the full variational trajectory decomposition:

$$\mathbb{E}_q\!\left[\, D_{\mathrm{KL}}\big(q(x_T \mid x_0)\,\|\,p(x_T)\big) + \sum_{t=2}^{T} D_{\mathrm{KL}}\big(q(x_{t-1} \mid x_t, x_0)\,\|\,p_\theta(x_{t-1} \mid x_t)\big) - \log p_\theta(x_0 \mid x_1) \,\right]$$

Scientific references

The editor supports academic citations. The Transformer architecture [vaswani2017] revolutionized natural language processing by introducing the self-attention mechanism. Large-scale pretraining [devlin2019] demonstrated that unsupervised objectives on massive corpora produce powerful general-purpose representations.

More recently, scaling laws [kaplan2020] have shown predictable relationships between model size, dataset size, and performance.
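To make this concrete, here is a toy evaluation of the parameter-count power law from [kaplan2020], $L(N) = (N_c / N)^{\alpha_N}$; the constants below are the fits reported in that paper and are used purely for illustration, not as results from this article:

```python
# Toy sketch of the Kaplan et al. power law L(N) = (N_c / N) ** alpha_N.
# N_C and ALPHA_N are the fits reported in kaplan2020, included here only
# to illustrate the functional form.
N_C = 8.8e13      # reference (non-embedding) parameter count
ALPHA_N = 0.076   # parameter-count scaling exponent

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy loss (nats per token) at n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss ~ {predicted_loss(n):.3f}")
```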


Horizontal rule

Use a horizontal rule to visually separate sections:

---


Nested content

Lists can contain multiple levels of formatting. Blockquotes can hold structured content:

Note: This editor supports real-time collaboration. Open this page in another tab to see cursors sync live.

A deeper heading level

Heading levels go from H1 down to H3, giving you a clear document hierarchy. The floating menu on the left lets you insert any block type, and the bubble toolbar appears when you select text.

Custom components

The editor supports rich custom components from the research article template. Use the / slash menu to insert them.

Details

The model was fine-tuned using LoRA adapters with rank 16 on 4× A100 GPUs. Training took approximately 12 hours on the full dataset. We used a cosine learning rate schedule with warm-up over the first 10% of steps.

All experiments were conducted with mixed-precision (bf16) training. Results may vary slightly with different random seeds.
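A rough sketch of this setup, assuming the Hugging Face peft and transformers libraries; the rank, schedule, warm-up fraction, and precision come from the description above, while the target modules, alpha, and learning rate are illustrative placeholders:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Hypothetical reconstruction of the described setup, not the exact training script.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

lora_config = LoraConfig(
    r=16,                                 # rank 16, as stated above
    lora_alpha=32,                        # assumption: common 2x-rank scaling
    target_modules=["q_proj", "v_proj"],  # assumption: attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

args = TrainingArguments(
    output_dir="lora-run",
    bf16=True,                   # mixed-precision bf16, as stated above
    lr_scheduler_type="cosine",  # cosine learning rate schedule
    warmup_ratio=0.1,            # warm-up over the first 10% of steps
    learning_rate=2e-4,          # assumption: typical LoRA learning rate
)
```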

If you have a very large neural network and you train it on a very large dataset, you get very good results. It really is that simple.

Do not use these scaling estimates for production capacity planning without accounting for inference overhead and memory constraints.

This content stretches beyond the normal column width, useful for wide tables, figures or visualizations that need extra horizontal space.

This content spans the entire page width. It is ideal for large figures, panoramic images, or full-bleed data visualizations.

Scaling laws were first systematically studied in the context of neural machine translation before being generalized to language models.

[ Figure placeholder — insert an image or chart here ]

The concept of scaling laws has become central to modern AI research. Early work* suggested these relationships hold across modalities.

Column A

Multi-column layouts let you place content side by side. Each column is fully editable.

Column B

Use the layout selector in the header to switch between 2, 3, or 4 columns.

What's next?

Try asking the AI assistant to:

  • Rewrite a paragraph in a different tone

  • Expand a section with more detail

  • Fix grammar or spelling errors

  • Translate content to another language

  • Add a new section on a specific topic

Select some text and use the quick actions, or type a message in the chat panel. All AI edits can be undone with Cmd+Z.

References

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT, 4171–4186. https://doi.org/10.18653/v1/N19-1423
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020). Scaling laws for neural language models. arXiv Preprint arXiv:2001.08361. https://doi.org/10.48550/arXiv.2001.08361
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30. https://doi.org/10.48550/arXiv.1706.03762