---
license: cc-by-nc-nd-3.0
---

# SFR-Iterative-DPO-LLaMA-3-8B-R

## Introduction
We release a state-of-the-art instruct model of its class, **SFR-Iterative-DPO-LLaMA-3-8B-R**.
On all three widely used instruct-model benchmarks (**Alpaca-Eval-V2**, **MT-Bench**, and **Chat-Arena-Hard**), our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-source models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained on open-source datasets without any additional human or GPT-4 labeling.

## Model Releases
- SFT model
- Reward model
- RLHF model

## Dataset Releases
- Preference data mix
- Prompt collection for RLHF training

## Training Methods
The key to our training is iterative (online) RLHF: rather than a single offline round of preference optimization, as in vanilla DPO, we run DPO over multiple rounds, collecting fresh preference data from the current policy at each round (the "Iterative DPO" rows in the benchmarks below).

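As a rough illustration only (this is not the released training code), the sketch below shows the general shape of such a loop in PyTorch: `dpo_loss` follows the standard DPO formulation, while `sample_responses` and `rank_with_reward_model` are hypothetical placeholder helpers for the on-policy data-collection step.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over summed per-response log-probabilities.

    Pushes the policy to widen the chosen-vs-rejected margin relative
    to a frozen reference model; beta controls the strength.
    """
    chosen_rewards = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_rewards = beta * (policy_rejected_logp - ref_rejected_logp)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Hypothetical outer loop (placeholder helpers, not the released pipeline):
#
# for round_idx in range(num_rounds):
#     candidates = sample_responses(policy, prompts)            # generate on-policy
#     pairs = rank_with_reward_model(reward_model, candidates)  # label chosen/rejected
#     for batch in pairs:                                       # one DPO round
#         loss = dpo_loss(*batch)
#         loss.backward()
#         optimizer.step()
#         optimizer.zero_grad()
```
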
## Chat Benchmarks

| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Source Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
| **Large Open-Source Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |

LC Alpaca-Eval-V2 is a length-controlled win rate (%), MT-Bench is a GPT-4-judged score on a 1-10 scale, and Chat-Arena-Hard is a win rate (%) against a GPT-4 baseline; "-" means not reported. \*Total parameter count of the mixture-of-experts models.

## Academic Benchmarks

| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|------------------------|----------|---------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 76.7 | - | 61.0 | - | - | 63.5 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | - | 63.4 | - | - | 60.3 |
| Ours (Online RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
```

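The snippet above only loads the weights. Below is a minimal generation sketch (not from the original card), assuming the tokenizer ships with the standard Llama-3 chat template; the prompt is an arbitrary example:

```python
import torch

# Hypothetical example conversation; any chat-style message list works.
messages = [{"role": "user", "content": "Explain iterative DPO in two sentences."}]

# apply_chat_template formats the conversation and appends the
# assistant header so generation starts a fresh reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
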
## Limitations
SFR-Iterative-DPO-LLaMA-3-8B-R is a research model produced as part of our RLHF research at Salesforce.

## Citation
Please cite our technical report if you find our model useful for your research or product.
```
@article{}
```