Update README.md
<em>[Paper][Code][🤗] (to be released soon)</em>
</p>

Infinity-Instruct-7M-0729-Mistral-7B is an open-source supervised instruction-tuned model trained without reinforcement learning from human feedback (RLHF). It is fine-tuned on [Infinity-Instruct-7M and Infinity-Instruct-0729](https://huggingface.co/datasets/BAAI/Infinity-Instruct) and shows favorable results on AlpacaEval 2.0 compared with Mixtral 8x22B v0.1, Gemini Pro, and GPT-4.

## **News**

- 🔥🔥🔥[2024/08/02] We release the model weights of [InfInstruct-Llama3.1-70B 0729](https://huggingface.co/BAAI/Infinity-Instruct-7M-0729-Llama3_1-70B), [InfInstruct-Llama3.1-8B 0729](https://huggingface.co/BAAI/Infinity-Instruct-7M-0729-Llama3_1-8B), and [InfInstruct-Mistral-7B 0729](https://huggingface.co/BAAI/Infinity-Instruct-7M-0729-Mistral-7B).

- 🔥🔥🔥[2024/08/02] We release the 7M foundational dataset [Infinity-Instruct-7M](https://huggingface.co/datasets/BAAI/Infinity-Instruct).

- 🔥🔥🔥[2024/07/09] We release the model weights of [InfInstruct-Mistral-7B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Mistral-7B), [InfInstruct-Qwen2-7B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Qwen2-7B), [InfInstruct-Llama3-8B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Llama3-8B), [InfInstruct-Llama3-70B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Llama3-70B), and [InfInstruct-Yi-1.5-9B 0625](https://huggingface.co/BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B).

<img src="fig/trainingflow.png">
</p>

Infinity-Instruct-7M-0729-Mistral-7B is tuned on the million-scale instruction dataset [Infinity-Instruct](https://huggingface.co/datasets/BAAI/Infinity-Instruct). First, we apply the foundational dataset Infinity-Instruct-7M to improve the foundational abilities (math & code) of Mistral-7B-v0.1, obtaining the foundational instruct model Infinity-Instruct-7M-Mistral-7B. We then fine-tune Infinity-Instruct-7M-Mistral-7B to obtain the stronger chat model Infinity-Instruct-7M-0729-Mistral-7B. Here are the training hyperparameters:

```bash
epoch: 3
```

## **Benchmark**

| **Model**                           | **MT-Bench** | **AlpacaEval2.0** | **Arena-hard** |
|:-----------------------------------:|:------------:|:-----------------:|:--------------:|
| GPT-4-0314                          | 9.0          | 35.3              | 50.0           |
| GPT-4-0613                          | 9.2          | 30.2              | 37.9           |
| GPT-4-1106                          | 9.3          | 30.2              | --             |
| Gemini Pro                          | --           | 24.4              | 17.8           |
| Mixtral 8x7B v0.1                   | 8.3          | 23.7              | 23.4           |
| Mistral-7B-Instruct-v0.2            | 7.6          | 17.1              | --             |
| InfInstruct-3M-0613-Mistral-7B*     | 8.1          | 25.5              | --             |
| InfInstruct-3M-0625-Mistral-7B*     | 8.1          | 31.4              | --             |
| **InfInstruct-7M-0729-Mistral-7B**\* | **8.1**     | **40.0**          | **26.9**       |

\*denotes that the model is fine-tuned without reinforcement learning from human feedback (RLHF).

## **How to use**

Infinity-Instruct-7M-0729-Mistral-7B adopts the same chat template as [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B):

```bash
<|im_start|>system
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList
import torch
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("BAAI/Infinity-Instruct-7M-0729-Mistral-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("BAAI/Infinity-Instruct-7M-0729-Mistral-7B")

# This template is copied from OpenHermes-2.5-Mistral-7B (https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
prompt = "Give me a short introduction to large language model."
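# A minimal, hand-rolled sketch of the next step (an illustration, not the model
# card's own code): the tokenizer's apply_chat_template would wrap the prompt in
# the ChatML format shown above automatically; doing it by hand makes the
# template explicit.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language model."},
]
chatml_text = "".join(
    f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
) + "<|im_start|>assistant\n"
# chatml_text wraps each turn in <|im_start|>/<|im_end|> markers and ends with
# the assistant header, ready to tokenize and pass to model.generate.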