Update README.md

README.md
# Model Card for FIRST

<!-- Provide a quick summary of what the model is/does. -->

FIRST is a language model trained specifically for listwise reranking tasks, leveraging the output logits of the first generated identifier to directly produce a ranked ordering of candidates. Built on the Zephyr-7B-β model, FIRST undergoes single-stage fine-tuning on a converted alphabetic version of the RankZephyr dataset, which includes RankGPT-4 reorderings of OpenAI's Ada2 outputs for 5k queries.
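The first-token scoring described above can be sketched in a few lines. This is a toy illustration of the idea, not the model's actual vocabulary or prompt format: the logits, token ids, and identifier mapping below are made-up assumptions.

```python
# Toy sketch of FIRST-style reranking: after one forward pass over a
# listwise prompt, candidates are ranked by the logit that the FIRST
# generated token assigns to each candidate's identifier. The logits
# and token ids here are invented for illustration.

def rank_by_first_logits(first_token_logits, identifier_token_ids):
    """Return candidate indices sorted by descending identifier logit."""
    scores = [first_token_logits[tid] for tid in identifier_token_ids]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Pretend vocabulary where identifiers 'A', 'B', 'C' map to token ids 3, 5, 7.
logits = [0.1, 0.0, 0.2, 1.5, 0.0, 2.3, 0.0, 0.9, 0.0, 0.0]
print(rank_by_first_logits(logits, [3, 5, 7]))  # candidate order B, A, C -> [1, 0, 2]
```

Because only the logits of the first generated token are needed, a single forward pass yields the full candidate ordering, rather than generating an entire ranked list token by token.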

### Model Description

- **Model type:** A 7B-parameter GPT-like model based on the Zephyr-7B-β model, further fine-tuned on task-specific listwise reranking data
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
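The "converted alphabetic version" of the RankZephyr data mentioned above can be illustrated with a toy conversion. The `[1]`-style numeric identifiers and the `>`-separated ranking string are assumptions about the RankZephyr format, not details taken from this card:

```python
import re
import string

# Toy sketch: map RankZephyr-style numeric identifiers ([1], [2], ...)
# to alphabetic ones (A, B, ...), so each candidate is denoted by a
# letter identifier whose first-token logit can be compared directly.

def numeric_to_alpha(ranking: str) -> str:
    """Rewrite a ranking like '[2] > [1] > [3]' as 'B > A > C'."""
    return re.sub(
        r"\[(\d+)\]",
        lambda m: string.ascii_uppercase[int(m.group(1)) - 1],
        ranking,
    )

print(numeric_to_alpha("[2] > [1] > [3]"))  # -> B > A > C
```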

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/gangiswag/llm-reranker](https://github.com/gangiswag/llm-reranker)
- **Paper:** [https://arxiv.org/abs/2406.15657](https://arxiv.org/abs/2406.15657)

### Evaluations

More details can be found in the paper.

FIRST is trained specifically on monolingual English data; effectiveness on multilingual sets is not guaranteed.

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->