Update README.md
**************************** **Updates** ****************************
* 2024/7/15: We scaled up the pre-trained tokens from 100B to 250B, with the number of synthesized instruction-response pairs reaching 500M! Below, we show the performance trend on downstream tasks throughout the pre-training process:
<p align='center'>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/0okCfRkC6uALTfuNxt0Fa.png" width="700">
</p>

* 2024/6/21: Released the [paper](https://huggingface.co/papers/2406.14491), [code](https://github.com/microsoft/LMOps), and [resources](https://huggingface.co/instruction-pretrain)