Add missing metadata, link to paper and project page #3
by nielsr (HF Staff) - opened

README.md CHANGED
````diff
@@ -1,14 +1,21 @@
 ---
+base_model: google/gemma-2-9b-it
 datasets:
 - BAAI/Infinity-Instruct
 language:
 - en
-
+library_name: transformers
+pipeline_tag: text-generation
+license: cc-by-nc-4.0
 ---
 
 ## Overview
 Gemma2-9B-IT-Simpo-Infinity-Preference is based on [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) and fine-tuned on [Infinity-Preference](https://huggingface.co/datasets/BAAI/Infinity-Preference) with [SimPO](https://github.com/princeton-nlp/SimPO). It achieves a 73.4% length-controlled (LC) win rate on AlpacaEval 2.0 and a 58.1% win rate on Arena-Hard against GPT-4.
 
+This model is based on the paper [Infinity Instruct: Scaling Instruction Selection and Synthesis to Enhance Language Models](https://huggingface.co/papers/2506.11116).
+
+Project page: https://huggingface.co/datasets/BAAI/Infinity-Instruct
+
 ## Training hyperparameters
 
 ```yaml
@@ -82,4 +89,4 @@ print(response)
 
 ## Disclaimer
 
-The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of Infinity-Preference is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
+The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of Infinity-Preference is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
````
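Since the diff adds `library_name: transformers` and `pipeline_tag: text-generation`, the card's usage section (the `print(response)` context line visible in the second hunk) presumably follows the standard chat-template generation flow. Below is a minimal sketch of that flow, assuming the repo id `BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference` (taken from the model name in the Overview); the actual repo id and usage block in the README may differ.

```python
# Minimal sketch of the text-generation usage the new metadata advertises.
# Assumption: the repo id below is inferred from the model name in the
# Overview and may not match the actual Hub repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # gemma-2 checkpoints are commonly run in bf16
    device_map="auto",
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Explain preference optimization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```

On the Hub side, the added `base_model` field is what links this card into gemma-2-9b-it's model tree, and `pipeline_tag` is what surfaces the model under the text-generation task filter, which is the practical payoff of this PR.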