arxiv:2601.22146

FineInstructions: Scaling Synthetic Instructions to Pre-Training Scale

Published on Jan 29 · Submitted by Stefan Schweter on Jan 30
Abstract

Large language models can be pre-trained from scratch using synthetic instruction-response pairs generated from unstructured text corpora, outperforming traditional methods on benchmarks measuring response quality.

AI-generated summary

Due to limited supervised training data, large language models (LLMs) are typically pre-trained via a self-supervised "predict the next word" objective on a vast amount of unstructured text data. To make the resulting model useful to users, it is further trained on a far smaller amount of "instruction-tuning" data comprising supervised training examples of instructions and responses. To overcome the limited amount of supervised data, we propose a procedure that can transform the knowledge in internet-scale pre-training documents into billions of synthetic instruction and answer training pairs. The resulting dataset, called FineInstructions, uses ~18M instruction templates created from real user-written queries and prompts. These instruction templates are matched to and instantiated with human-written source documents from unstructured pre-training corpora. With "supervised" synthetic training data generated at this scale, an LLM can be pre-trained from scratch solely with the instruction-tuning objective, which is far more in-distribution with the expected downstream usage of LLMs (responding to user prompts). We conduct controlled token-for-token training experiments and find that pre-training on FineInstructions outperforms standard pre-training and other proposed synthetic pre-training techniques on standard benchmarks measuring free-form response quality. Our resources can be found at https://huggingface.co/fineinstructions.
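The summary describes matching instruction templates to source documents and instantiating them to produce instruction-answer pairs. A minimal sketch of that idea is below; the function names, the keyword-overlap matcher, and the placeholder-filling scheme are illustrative assumptions, not the paper's actual pipeline (which operates at the scale of ~18M templates and internet-scale corpora).

```python
# Hypothetical sketch of template matching and instantiation.
# The scoring rule (keyword overlap) and template format are assumptions
# for illustration; the real system would use a learned matcher and an
# LLM to generate the response from the source document.

def score(template, document):
    """Score a template by keyword overlap with the document."""
    words = set(document.lower().split())
    return len(template["keywords"] & words)

def match_template(templates, document):
    """Pick the template whose keywords best match the document."""
    return max(templates, key=lambda t: score(t, document))

def instantiate(template, document):
    """Fill the template's placeholder with content from the document."""
    topic = next(iter(template["keywords"] & set(document.lower().split())))
    instruction = template["text"].format(topic=topic)
    # In the actual pipeline, the response would be generated from the
    # document; here we just keep the source text alongside the instruction.
    return {"instruction": instruction, "response_source": document}

templates = [
    {"text": "Summarize what this passage says about {topic}.",
     "keywords": {"photosynthesis", "chlorophyll"}},
    {"text": "Explain the historical context of {topic}.",
     "keywords": {"revolution", "treaty"}},
]

doc = "photosynthesis converts light energy into chemical energy in plants"
pair = instantiate(match_template(templates, doc), doc)
print(pair["instruction"])
```

At scale, each matched pair becomes one synthetic supervised example, so pre-training can use the instruction-tuning objective from the start.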

Community

@AjayP13 and @craffel, really interesting work and approach! Do you plan to add support for multilingual instructions? 🤔


Thanks @stefan-it. At the moment, no, but this pipeline could certainly be extended to documents in different languages and in other modalities (code, images, etc.).

Great work guys 😍


Models citing this paper 4

Datasets citing this paper 3

Spaces citing this paper 0


Collections including this paper 0
