---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
language:
- en
base_model: instruction-pretrain/InstructLM-1.3B
pipeline_tag: text-generation
---

# QuantFactory/InstructLM-1.3B-GGUF
This is a quantized version of [instruction-pretrain/InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B), created using llama.cpp.

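The GGUF files in this repo can be loaded with llama.cpp or any llama.cpp-compatible runtime. Below is a minimal sketch for downloading and running a single quantization; the filename is an assumption, so replace it with an actual file from this repo's file list:

```bash
# Download one quantization from this repo
# (the filename is an example; check the repo's file list for the actual names)
huggingface-cli download QuantFactory/InstructLM-1.3B-GGUF InstructLM-1.3B.Q4_K_M.gguf --local-dir .

# Run a short generation with llama.cpp's CLI (assumes llama.cpp is already built)
./llama-cli -m InstructLM-1.3B.Q4_K_M.gguf -p "Explain instruction pre-training in one sentence." -n 128
```
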
# Model Description
## Instruction Pre-Training: Language Models are Supervised Multitask Learners
This repo contains the **general models pre-trained from scratch** in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>

## Resources
**🤗 We share our data and models with example usages; feel free to open any issues or discussions! 🤗**

- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch:
  - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
  - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
  - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)

## General Pre-Training From Scratch
We augment the [RefinedWeb corpora](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.

To evaluate our general base model, use the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):

1. Set up dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-1.3B
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our models require add_bos_token to be True

# 0-shot evaluation
accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks piqa,hellaswag,winogrande \
    --batch_size auto \
    --num_fewshot 0

# 5-shot evaluation
accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
    --batch_size auto \
    --num_fewshot 5
```

## Model Citation
If you find our work helpful, please cite us:

[AdaptLLM](https://huggingface.co/papers/2309.09530)
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```