Improve model card: Add pipeline tag, library_name, and link to code
#2
opened by nielsr (HF Staff)
README.md CHANGED
@@ -1,10 +1,13 @@
 ---
-license: apache-2.0
 datasets:
 - BAAI/Infinity-Instruct
 language:
 - en
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
 ---
+
 # Infinity Instruct
 
 <p align="center">
@@ -15,6 +18,10 @@ language:
 <em>[Paper][Code][🤗] (will be released soon)</em>
 </p>
 
+This repository contains the model described in the paper [Infinity Instruct: Scaling Instruction Selection and Synthesis to Enhance Language Models](https://huggingface.co/papers/2506.11116).
+
+Code: this https URL
+
 Infinity-Instruct-3M-0613-Llama3-70B is an open-source supervised instruction-tuned model trained without reinforcement learning from human feedback (RLHF). The model is fine-tuned only on [Infinity-Instruct-3M and Infinity-Instruct-0613](https://huggingface.co/datasets/BAAI/Infinity-Instruct) and shows favorable results on AlpacaEval 2.0 compared to GPT4-0613.
 
 ## **News**
@@ -77,6 +84,7 @@ How are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 ```
 
 To apply this model and template in conversation scenarios, you can refer to the following code:
+
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList
 import torch
@@ -121,8 +129,6 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 print(response)
 ```
 
-
-
 ## **Disclaimer**
 
 The resources, including code, data, and model weights, associated with this project are restricted for academic research purposes only and cannot be used for commercial purposes. The content produced by any version of Infinity Instruct is influenced by uncontrollable variables such as randomness, and therefore, the accuracy of the output cannot be guaranteed by this project. This project does not accept any legal liability for the content of the model output, nor does it assume responsibility for any losses incurred due to the use of associated resources and output results.
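The `pipeline_tag: text-generation` and `library_name: transformers` fields added in the first hunk tell the Hub which inference widget and auto-generated usage snippet to show for this model. As a minimal sketch, the metadata maps to a `transformers` pipeline call like the following, assuming the repo id `BAAI/Infinity-Instruct-3M-0613-Llama3-70B` (inferred from the model name; not stated in the diff):

```python
from transformers import pipeline

# "text-generation" matches the pipeline_tag added in this diff; the repo id
# below is an assumption inferred from the model name in the card.
generator = pipeline(
    "text-generation",
    model="BAAI/Infinity-Instruct-3M-0613-Llama3-70B",
    torch_dtype="auto",
    device_map="auto",  # a 70B checkpoint needs multiple GPUs or offloading
)

print(generator("How are you?", max_new_tokens=64)[0]["generated_text"])
```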
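The last two hunks show only the edges of the card's conversation example (its imports and its final `print(response)`). Below is a minimal sketch of the flow those lines imply, using the tokenizer's chat template to build the Llama-3-style prompt quoted in the hunk header; the repo id is the same assumption as above, and the card's full snippet (which also imports `LogitsProcessorList`) may differ in details:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BAAI/Infinity-Instruct-3M-0613-Llama3-70B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# apply_chat_template renders the <|start_header_id|> ... <|eot_id|> prompt
# format shown in the card.
messages = [{"role": "user", "content": "How are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens before printing, mirroring the
# card's closing lines.
response = tokenizer.batch_decode(
    generated_ids[:, input_ids.shape[-1]:], skip_special_tokens=True
)[0]
print(response)
```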