---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
- instruction-pretrain/general-instruction-augmented-corpora
language:
- en
---
# Instruction Pre-Training: Language Models are Supervised Multitask Learners (EMNLP 2024)
This repo contains the **general models pre-trained from scratch** (on 100B tokens) in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-Training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>
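
To make the synthesis step concrete, here is a minimal sketch, assuming the synthesizer loads as a standard 🤗 causal LM (with `accelerate` installed for `device_map="auto"`). The prompt wrapping is illustrative only; the exact input template is documented on the [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) page.

```python
# A minimal synthesis sketch (illustrative, not the exact template).
# Assumes the synthesizer loads as a standard causal LM via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

synthesizer_id = "instruction-pretrain/instruction-synthesizer"
tokenizer = AutoTokenizer.from_pretrained(synthesizer_id)
model = AutoModelForCausalLM.from_pretrained(synthesizer_id, device_map="auto")

raw_text = "Tides are the rise and fall of sea levels caused by the gravitational pull of the moon and the sun."
# Hypothetical prompt: the real template wraps the raw text with special markers
# (see the instruction-synthesizer model card for the exact format).
inputs = tokenizer(raw_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(completion)  # synthesized instruction-response pairs grounded in raw_text
```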

**************************** **Updates** ****************************
* 2024/9/20: Our paper has been accepted to the EMNLP 2024 main conference 🎉
* 2024/9/11: Updated [FAQ on continual pre-training from Llama3](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
* 2024/8/29: Updated [guidelines](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B) on evaluating any 🤗Huggingface model on domain-specific tasks
* 2024/7/31: Updated pre-training suggestions in the `Advanced Usage` section of [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
* 2024/7/15: We scaled up the pre-trained tokens from 100B to 250B, with the number of synthesized instruction-response pairs reaching 500M. The performance trend on downstream tasks throughout the pre-training process:
<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/0okCfRkC6uALTfuNxt0Fa.png" width="500">
</p>
* 2024/6/21: Released the [paper](https://huggingface.co/papers/2406.14491), [code](https://github.com/microsoft/LMOps), and [resources](https://huggingface.co/instruction-pretrain)

## Resources
**🤗 We share our data and models with example usages; feel free to open any discussion at [this page](https://huggingface.co/papers/2406.14491)! 🤗**

- Thanks to the demo [davanstrien/instruction-synthesizer](https://huggingface.co/spaces/davanstrien/instruction-synthesizer) for implementing our approach
- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch (on 100B tokens):
  - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
  - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
  - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
- General Instruction-Augmented Corpora: [general-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/general-instruction-augmented-corpora) (see the loading sketch after this list)
- Domain-Specific Instruction-Augmented Corpora (no finance data to avoid ethical issues): [medicine-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/medicine-instruction-augmented-corpora)
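
For a quick look at what the instruction-augmented corpora contain, the sketch below streams a few records from the general corpus with 🤗 `datasets`. The record schema is not asserted here; printing the keys of a record reveals the actual field names.

```python
# A minimal inspection sketch; the record schema is not asserted here --
# print the keys of a record to discover the actual field names.
from datasets import load_dataset

ds = load_dataset(
    "instruction-pretrain/general-instruction-augmented-corpora",
    split="train",
    streaming=True,  # stream instead of downloading the full corpus
)

for i, record in enumerate(ds):
    print(record.keys())
    print(record)
    if i >= 2:  # show the first three records only
        break
```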

## General Pre-Training From Scratch
We augment the [RefinedWeb corpus](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch. A minimal loading-and-generation sketch follows.
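
As a quick sanity check before the full evaluation below, here is a minimal generation sketch, assuming InstructLM-500M loads through the standard `AutoModelForCausalLM`/`AutoTokenizer` interface (typical for Hub causal LMs):

```python
# A minimal generation sketch; greedy decoding keeps the output deterministic.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instruction-pretrain/InstructLM-500M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# The model expects a leading BOS token (see the add_bos_token note below);
# Llama/Mistral-style tokenizers usually prepend it by default.
inputs = tokenizer("Question: What causes tides?\nAnswer:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```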

To evaluate our general base models with the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):

1. Set up dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-500M
add_bos_token=True # lm-eval-harness sets add_bos_token to False by default, but our models require it to be True

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks piqa,hellaswag,winogrande \
    --batch_size auto \
    --num_fewshot 0

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
    --batch_size auto \
    --num_fewshot 5
```
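
If you evaluate with your own code instead of the harness, it is worth verifying that a BOS token actually leads each encoded sequence. A small check, assuming the tokenizer exposes a `bos_token_id` (true for Llama/Mistral-style tokenizers):

```python
# Verify that encoding prepends BOS, mirroring add_bos_token=True above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/InstructLM-500M")
ids = tokenizer("Hello world").input_ids
assert ids[0] == tokenizer.bos_token_id, (
    "BOS missing -- load the tokenizer with add_bos_token=True"
)
```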

## Citation
If you find our work helpful, please cite us:

[Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
```bibtex
@article{cheng2024instruction,
    title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
    author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
    journal={arXiv preprint arXiv:2406.14491},
    year={2024}
}
```

[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{
    cheng2024adapting,
    title={Adapting Large Language Models via Reading Comprehension},
    author={Daixuan Cheng and Shaohan Huang and Furu Wei},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=y886UXPEZ0}
}
```