---
license: llama3
pipeline_tag: text-generation
base_model: Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R
---
# Llama-3-8B-SFR-Iterative-DPO-R-GGUF
This is a quantized version of [Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R](https://huggingface.co/Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R) created using llama.cpp.

## Model Description

We release **Llama-3-8B-SFR-Iterative-DPO-R**, a state-of-the-art instruct model of its class.
On all three widely used instruct-model benchmarks (**Alpaca-Eval-V2**, **MT-Bench**, and **Chat-Arena-Hard**), our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-source models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained on open-source datasets without any additional human or GPT-4 labeling.

## Model Releases
- [SFT model](https://huggingface.co/Salesforce/SFR-SFT-LLaMA-3-8B-R)
- [Reward model](https://huggingface.co/Salesforce/SFR-RM-LLaMA-3-8B-R)
- [RLHF model](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R)

## Training Methods

We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based and thus much cheaper and simpler to train and tune than PPO-based approaches.
Unlike the widely used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization.
For a detailed exposition, please refer to our accompanying technical report.

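For intuition only, below is a minimal sketch of the pairwise DPO objective that each online iteration optimizes. This is not the authors' code: the function name, the `beta` default, and the surrounding loop described in the comments are illustrative assumptions based on the description above.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over a batch of (chosen, rejected) response pairs.

    All inputs are 1-D tensors of per-sequence log-probabilities; `beta`
    controls the strength of the implicit KL constraint to the reference model.
    """
    # Implicit rewards: log-probability ratios of the policy vs. the frozen reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that widens the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# In the online (iterative) setting, each iteration roughly:
#   1. samples several responses per prompt from the *current* policy,
#   2. scores them with the reward model and keeps the best/worst as
#      (chosen, rejected) pairs,
#   3. minimizes dpo_loss on these fresh pairs, then repeats.
```
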
## Chat Benchmarks

| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Sourced Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **31.3** | **8.46** | **29.1** |
| **Large Open-Sourced Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |

## Academic Benchmarks

| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

model = AutoModelForCausalLM.from_pretrained("Salesforce/Llama-3-8B-SFR-Iterative-DPO-R")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/Llama-3-8B-SFR-Iterative-DPO-R")

messages = [
    {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]

# Build the prompt with the model's chat template and append the assistant header
# so generation starts from the assistant turn.
model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = model_inputs.to(device)
model.to(device)

output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
model_outputs = tokenizer.batch_decode(output_tokens)
print(model_outputs[0])
```
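
Since this repository hosts GGUF quantizations, the weights can also be run without transformers. The snippet below is an illustrative sketch using the llama-cpp-python bindings; the GGUF filename is a placeholder for whichever quantization file you download from this repo.

```python
# Illustrative sketch (pip install llama-cpp-python); the filename is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-8B-SFR-Iterative-DPO-R.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```
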
## Limitations
Llama-3-8B-SFR-Iterative-DPO-R is a research model developed as part of our RLHF initiative at Salesforce.
While safety and ethical considerations are integral to our alignment process,
there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
We are committed to continuously improving our models to minimize such risks, and we encourage responsible usage.

## Original Model Citation
Please cite our papers if you find our models useful.

```bibtex
@misc{dong2024rlhf,
      title={RLHF Workflow: From Reward Modeling to Online RLHF},
      author={Hanze Dong* and Wei Xiong* and Bo Pang* and Haoxiang Wang* and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
      year={2024},
      eprint={2405.07863},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@misc{xiong2024iterative,
      title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
      author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
      year={2024},
      eprint={2312.11456},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```