Improve model card: Add pipeline tag, paper link, code repository and license
#1 by nielsr (HF Staff) · opened

README.md CHANGED
|
```diff
@@ -1,13 +1,19 @@
 ---
 base_model: meta-llama/Meta-Llama-3-8B-Instruct
 library_name: peft
+pipeline_tag: question-answering
+license: mit
 ---
 
 # Model Card for Model ID
 
-
+This is a LoRA adapter trained on top of Meta-Llama-3-8B-Instruct, as described in the paper [RankCoT: Refining Knowledge for Retrieval-Augmented Generation through Ranking Chain-of-Thoughts](https://huggingface.co/papers/2502.17888).
+
+Project page: https://yanqval.github.io/PAE/.
 
+Code: https://github.com/NEUIR/RankCoT
 
+<!-- Provide a quick summary of what the model is/does. -->
 
 ## Model Details
 
@@ -15,14 +21,12 @@ library_name: peft
 
 <!-- Provide a longer summary of what this model is. -->
 
-
-
 - **Developed by:** [More Information Needed]
 - **Funded by [optional]:** [More Information Needed]
 - **Shared by [optional]:** [More Information Needed]
 - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** [More Information Needed]
-- **License:**
+- **License:** MIT
 - **Finetuned from model [optional]:** [More Information Needed]
 
 ### Model Sources [optional]
@@ -89,7 +93,6 @@ Use the code below to get started with the model.
 
 [More Information Needed]
 
-
 #### Training Hyperparameters
 
 - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
@@ -130,8 +133,6 @@ Use the code below to get started with the model.
 
 #### Summary
 
-
-
 ## Model Examination [optional]
 
 <!-- Relevant interpretability work for the model goes here -->
```