---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
---
# Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **general models pre-trained from scratch** in our paper **Instruction Pre-Training: Language Models are Supervised Multitask Learners**.

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>
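
To make the idea concrete, the sketch below shows how a raw document and a few synthesizer-generated instruction-response pairs can be concatenated into one instruction-augmented pre-training example. The helper and the question/answer template are illustrative placeholders, not the exact format used in the paper.

```python
# Illustrative sketch only: the helper, delimiters, and example text are hypothetical
# placeholders, not the exact template used by Instruction Pre-Training.
def augment_document(raw_text, qa_pairs):
    """Append synthesizer-generated instruction-response pairs to a raw document."""
    parts = [raw_text.strip()]
    for instruction, response in qa_pairs:
        parts.append(f"Question: {instruction}\nAnswer: {response}")
    # The concatenated string is what would be tokenized as one pre-training example.
    return "\n\n".join(parts)

# One raw document plus two synthesized pairs becomes a single instruction-augmented example.
print(augment_document(
    "Photosynthesis converts light energy into chemical energy that plants store as sugars.",
    [
        ("What does photosynthesis convert light energy into?", "Chemical energy stored as sugars."),
        ("Which organisms perform the process described in the text?", "Plants."),
    ],
))
```
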

## Resources
**🤗 We share our data and models with example usages; feel free to open any issues or discussions! 🤗**

- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch (a minimal usage sketch follows this list):
  - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
  - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
  - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
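
The general base models above load like any other causal language model on the Hub. The snippet below is a minimal illustration (with an arbitrary prompt, not taken from the paper) that runs greedy decoding with InstructLM-1.3B.

```python
# Minimal usage sketch: load a general base model and run greedy decoding.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "instruction-pretrain/InstructLM-1.3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Note: these models expect a BOS token at the start of the input
# (see the add_bos_token flag in the evaluation section below).
prompt = "Explain what instruction pre-training is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
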
## General Pre-Training From Scratch
We augment the [RefinedWeb corpus](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.

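
For reference, the raw corpus is hosted on the Hub and can be streamed with the `datasets` library; the sketch below only shows how to access the raw documents, not the augmentation step itself.

```python
# Minimal sketch: stream a few raw RefinedWeb documents with the datasets library.
# This only illustrates access to the raw corpus, not the augmentation pipeline.
from datasets import load_dataset

raw_corpus = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
for i, example in enumerate(raw_corpus):
    print(example["content"][:200])  # RefinedWeb keeps the document text in the "content" field
    if i == 2:
        break
```
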
To evaluate our general base model using the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):

1. Set up dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-1.3B
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our models require add_bos_token to be True

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token} \
    ...
```

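
Because these models require a BOS token, it can also be useful to check the tokenizer's behavior when using them outside lm-eval-harness; the snippet below is a minimal sketch of such a check.

```python
# Minimal sketch: check whether the tokenizer prepends a BOS token by default.
# These models require a BOS token, which is why add_bos_token=True is passed above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/InstructLM-1.3B")
input_ids = tokenizer("Hello world").input_ids
# If the first id is not the BOS id, prepend it manually before scoring or generation.
print(tokenizer.bos_token_id, input_ids[:3])
```
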
## Citation
If you find our work helpful, please cite us:

[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{
cheng2024adapting,