Improve Model Card with Paper Information and Usage Details
#1
by nielsr (HF Staff) - opened

README.md CHANGED

@@ -1,156 +1,110 @@
---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]

-### Model Sources

-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]

## Uses

-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

-[More Information Needed]

-### Downstream Use

-[More Information Needed]

### Out-of-Scope Use

-[More Information Needed]

## Bias, Risks, and Limitations

-[More Information Needed]

### Recommendations

-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

-Use the code below to get started with the model.

## Training Details

### Training Data

-[More Information Needed]

### Training Procedure

-#### Preprocessing [optional]

-[More Information Needed]

#### Training Hyperparameters

-#### Speeds, Sizes, Times [optional]

-<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

-[More Information Needed]

## Evaluation

-<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

-[More Information Needed]

-#### Factors

-<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

-[More Information Needed]

#### Metrics

-[More Information Needed]

### Results

-#### Summary

-## Model Examination [optional]

-<!-- Relevant interpretability work for the model goes here -->

-[More Information Needed]

## Environmental Impact

-<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

-## Technical Specifications

### Model Architecture and Objective

@@ -168,32 +122,16 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

[More Information Needed]

-## Citation

-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

-**BibTeX:**

-[More Information Needed]

-**APA:**

-[More Information Needed]

-## Glossary [optional]

-<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

-[More Information Needed]

-## More Information [optional]

-[More Information Needed]

-## Model Card Authors [optional]

-[More Information Needed]

-## Model Card Contact

---
library_name: transformers
tags: []
+license: cc-by-4.0
+pipeline_tag: feature-extraction
---

# Model Card for Model ID

+This model identifies and relabels false negatives in IR training datasets, as described in the paper [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967). It is based on the e5-base model.

## Model Details

+- **Developed by:** [More Information Needed]
+- **Model type:** BertModel
+- **Language(s) (NLP):** en
+- **License:** cc-by-4.0
+- **Finetuned from model:** models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default

+### Model Sources

+- **Repository:** Automatically Generated
+- **Paper:** [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967)
+- **Code:** https://github.com/studio-name/rlhn

## Uses

### Direct Use

+This model is designed for identifying and relabeling hard negatives in information retrieval training datasets. It can be used to improve the quality of training data for retrieval and reranker models.
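
The cascade that performs this relabeling is described in the paper rather than shipped with this checkpoint. Below is a minimal sketch of the idea, assuming the LLM judges are plain callables; the helper and field names are hypothetical, and the real pipeline lives in the code repository linked above:

```python
def is_relevant(llm, query: str, positive: str, candidate: str) -> bool:
    """Ask an LLM judge whether `candidate` also answers `query`.
    `llm` is assumed to be a callable returning the model's text reply."""
    prompt = (
        f"Query: {query}\n"
        f"Known relevant passage: {positive}\n"
        f"Candidate passage: {candidate}\n"
        "Is the candidate passage also relevant to the query? Answer yes or no."
    )
    return llm(prompt).strip().lower().startswith("yes")


def relabel_example(example: dict, cheap_llm, strong_llm) -> dict:
    """Cascade: a cheap LLM screens every hard negative, and only candidates
    it flags as potential false negatives are escalated to a stronger LLM."""
    promoted, kept = [], []
    for neg in example["hard_negatives"]:
        if is_relevant(cheap_llm, example["query"], example["positive"], neg) and \
                is_relevant(strong_llm, example["query"], example["positive"], neg):
            promoted.append(neg)  # false negative: relabel as a positive
        else:
            kept.append(neg)      # still a genuine hard negative
    return {"query": example["query"],
            "positives": [example["positive"], *promoted],
            "hard_negatives": kept}
```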

+### Downstream Use

+Fine-tuning retrieval and reranker models on the relabeled data can lead to significant improvements in retrieval effectiveness, especially on out-of-distribution datasets.
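
For downstream fine-tuning, the relabeled examples slot into a standard contrastive objective. The following is a minimal InfoNCE sketch, assuming precomputed embeddings and an illustrative temperature value, not the paper's exact training code:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb, positive_emb, negative_embs, temperature=0.05):
    """query_emb: (d,), positive_emb: (d,), negative_embs: (n, d).
    The relabeled positive sits at index 0; kept hard negatives follow."""
    scores = torch.cat([
        (query_emb @ positive_emb).unsqueeze(0),  # similarity to the positive
        negative_embs @ query_emb,                # similarities to hard negatives
    ]) / temperature
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([0]))

# Toy check with random unit vectors
q = F.normalize(torch.randn(768), dim=0)
p = F.normalize(torch.randn(768), dim=0)
negs = F.normalize(torch.randn(8, 768), dim=1)
print(info_nce_loss(q, p, negs))
```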

### Out-of-Scope Use

+This model is not intended for applications that require real-time, low-latency relabeling, as the relabeling process involves computationally intensive LLM inference.

## Bias, Risks, and Limitations

+The effectiveness of this model depends on the quality and diversity of the LLMs used for relabeling. Biases in those LLMs may lead to biased relabeling and affect the performance of downstream models.

### Recommendations

+Users should be aware of the potential biases and limitations of the LLMs used for relabeling and carefully evaluate the impact of the relabeled data on the performance of downstream models.

## How to Get Started with the Model

+Use the model with the transformers library:

+```python
+from transformers import AutoModel, AutoTokenizer
+
+model_name = "models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default"
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModel.from_pretrained(model_name)
+
+# Example usage: encode one sentence and inspect the token-level states
+text = "This is an example sentence."
+inputs = tokenizer(text, return_tensors="pt")
+outputs = model(**inputs)
+embeddings = outputs.last_hidden_state
+print(embeddings.shape)  # (batch_size, sequence_length, hidden_size)
+```
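
The block above returns token-level hidden states. For sentence embeddings, E5-family encoders are conventionally used with mean pooling over the attention mask and with "query: "/"passage: " prefixes; a sketch, assuming this checkpoint follows the usual e5-base recipe:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def embed(texts):
    # Mean-pool token states over the attention mask, then L2-normalize.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, p=2, dim=1)

# E5 convention: "query: " for queries, "passage: " for documents.
queries = embed(["query: how are false negatives relabeled?"])
passages = embed(["passage: Relabeling false negatives improves training data."])
print(queries @ passages.T)  # cosine similarity, since embeddings are unit-norm
```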

## Training Details

### Training Data

+The model used here was trained on a subset of the BGE collection and has a vocab size of 30522.

### Training Procedure

+The model was fine-tuned using a semi-supervised approach with LLMs to relabel hard negatives.

#### Training Hyperparameters

+- **Training regime:** bfloat16 mixed precision
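
Since training used bfloat16, the checkpoint can also be loaded in bfloat16 for lighter-weight inference; a minimal sketch (bf16 inference is an assumption, not something the card specifies):

```python
import torch
from transformers import AutoModel

# Load weights in bfloat16 to roughly halve memory, matching the training regime.
model = AutoModel.from_pretrained(
    "models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default",
    torch_dtype=torch.bfloat16,
)
```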

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

+BEIR and AIR-Bench

#### Metrics

+nDCG@10
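
nDCG@10 discounts each relevant document by the log of its rank and normalizes by the ideal ordering. A minimal reference implementation for intuition; official numbers come from standard scorers such as pytrec_eval, which compute the ideal DCG over all judged documents:

```python
import math

def ndcg_at_k(ranked_rels, k=10):
    """ranked_rels: graded relevance labels of retrieved docs, in rank order.
    Caveat: the ideal DCG here uses only the retrieved list, whereas official
    scorers rank all judged documents for the query."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_k([1, 0, 1, 0, 0, 1, 0, 0, 0, 0]))  # ≈ 0.87
```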

### Results

+Relabeling false negatives as true positives improves both E5 (base) and Qwen2.5-7B retrieval models by 0.7-1.4 nDCG@10 points on BEIR and by 1.7-1.8 nDCG@10 points on zero-shot AIR-Bench evaluation. Similar gains are observed for rerankers fine-tuned on the relabeled data, such as Qwen2.5-3B on BEIR.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+- **Hardware Type:** [More Information Needed]
+- **Hours used:** [More Information Needed]
+- **Cloud Provider:** [More Information Needed]
+- **Compute Region:** [More Information Needed]
+- **Carbon Emitted:** [More Information Needed]

+## Technical Specifications

### Model Architecture and Objective

[More Information Needed]

+## Citation

+```bibtex
+@misc{thakur2025fixing,
+      title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
+      author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
+      year={2025},
+      eprint={2505.16967},
+      archivePrefix={arXiv},
+      primaryClass={cs.IR},
+      url={https://arxiv.org/abs/2505.16967},
+}
+```