Update README.md
README.md (CHANGED)
@@ -5,7 +5,7 @@ language:
 datasets:
 - instruction-pretrain/ft-instruction-synthesizer-collection
 ---
-# Instruction Pre-Training: Language Models are Supervised Multitask Learners
+# Instruction Pre-Training: Language Models are Supervised Multitask Learners (EMNLP 2024)
 This repo contains the **context-based instruction synthesizer** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).
 
 We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. ***Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training.** In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning. In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.
@@ -15,6 +15,7 @@ We explore supervised multitask pre-training by proposing ***Instruction Pre-Tra
 </p>
 
 **************************** **Updates** ****************************
+* 2024/9/20: Our paper has been accepted by the EMNLP 2024 main conference 🎉
 * 2024/8/29: Updated [guidelines](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B) on evaluating any 🤗Huggingface models on the domain-specific tasks
 * 2024/7/31: Updated pre-training suggestions in the `Advanced Usage` section of [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
 * 2024/7/15: We scaled up the pre-trained tokens from 100B to 250B, with the number of synthesized instruction-response pairs reaching 500M. The performance trend on downstream tasks throughout the pre-training process:
@@ -223,7 +224,7 @@ text_ids = tokenizer(text, add_special_tokens=False, **kwargs).input_ids
 ## Citation
 If you find our work helpful, please cite us:
 
-Instruction Pre-Training
+Instruction Pre-Training (EMNLP 2024)
 ```bibtex
 @article{cheng2024instruction,
 title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
@@ -233,7 +234,7 @@ Instruction Pre-Training
 }
 ```
 
-[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530)
+[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
 ```bibtex
 @inproceedings{
 cheng2024adapting,