Improve model card with paper information and usage details
#1 opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,165 +1,125 @@

---
library_name: transformers
---

# Model Card for Model ID

## Model Details

- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing

[More Information Needed]

#### Training Hyperparameters

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

## Technical Specifications

### Model Architecture and Objective

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

@@ -168,32 +128,24 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]

[More Information Needed]

## Citation

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

---
library_name: transformers
license: cc-by-4.0
pipeline_tag: feature-extraction
---

# Model Card for Model ID

This is a BERT-based text embedding model. It was fine-tuned to improve the robustness of information retrieval systems by identifying and relabeling false negatives in the training data, using cascading LLM prompts as described in [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967).

## Model Details

* **Developed by:** [More Information Needed]
* **Shared by:** [More Information Needed]
* **Model type:** BERT
* **Language(s) (NLP):** en
* **License:** cc-by-4.0
* **Finetuned from model:** e5-base-unsupervised-bge-retrieval-7-datasets-680K

### Model Sources

* **Repository:** This repository.
* **Paper:** [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967)
* **GitHub:** https://github.com/luojunyu/rlhn

## Uses

### Direct Use

This model is primarily used to generate text embeddings, which can then be used for various downstream tasks, especially information retrieval.

### Downstream Use

The model can be used as a drop-in replacement for the original e5-base model, producing embeddings that improve retrieval performance.
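
The snippet below sketches that retrieval use: embed a query and a few passages, then rank the passages by cosine similarity. It assumes the hypothetical repo ID and the CLS-token pooling shown in the "How to Get Started" snippet further down; the `query:` / `passage:` prefixes follow the usual E5 convention and should be checked against this model's own instructions.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "ThisHuggingFaceRepoID"  # hypothetical ID; replace with the actual repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

def embed(texts):
    # CLS-token embedding, as in the getting-started snippet; switch to mean
    # pooling if the base model expects it.
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
    with torch.no_grad():
        vecs = model(**batch).last_hidden_state[:, 0, :]
    return F.normalize(vecs, dim=-1)

query = embed(["query: how do rainbows form?"])
passages = embed([
    "passage: Rainbows appear when sunlight is refracted and reflected by water droplets.",
    "passage: The stock market closed higher today.",
])
scores = (query @ passages.T).squeeze(0)   # cosine similarities (embeddings are normalized)
print(scores.argsort(descending=True))     # passage indices, best match first
```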

### Out-of-Scope Use

This model is not intended for use cases outside of information retrieval, such as content generation.

## Bias, Risks, and Limitations

The model's performance is tied to the quality of the relabeling process. The prompts used during relabeling can introduce biases.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModel, AutoTokenizer
import torch

model_name = "ThisHuggingFaceRepoID"  # Replace with the actual model name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).to('cuda')  # use 'cpu' here and below if no GPU is available

text = "This is an example sentence."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512).to('cuda')
with torch.no_grad():
    # CLS-token embedding; adjust the pooling if the base model expects e.g. mean pooling
    embeddings = model(**inputs).last_hidden_state[:, 0, :]
print(embeddings.shape)
```

## Training Details

### Training Data

The model was fine-tuned using a semi-supervised approach, leveraging both labeled and unlabeled data. The unlabeled data was relabeled using a cascading LLM prompting strategy to correct false negatives.
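
As a rough illustration of that relabeling step, the sketch below cascades a cheap LLM judge and a stronger one over mined hard negatives and flips the label of any passage both judges consider relevant. The prompt wording, the two-stage cascade, and the `cheap_judge` / `strong_judge` callables are assumptions for illustration, not the exact pipeline from the paper.

```python
def relabel_hard_negatives(query, hard_negatives, cheap_judge, strong_judge):
    """cheap_judge / strong_judge: callables mapping a prompt to a 'YES'/'NO' answer
    (assumed wrappers around two LLMs of different cost)."""
    relabeled = []
    for passage in hard_negatives:
        prompt = (
            f"Query: {query}\nPassage: {passage}\n"
            "Does this passage answer the query? Answer YES or NO."
        )
        verdict = cheap_judge(prompt)                    # stage 1: screen every candidate cheaply
        if verdict.strip().upper().startswith("YES"):
            verdict = strong_judge(prompt)               # stage 2: confirm with the stronger model
        if verdict.strip().upper().startswith("YES"):
            relabeled.append((passage, "positive"))      # false negative -> relabeled as positive
        else:
            relabeled.append((passage, "negative"))      # stays a hard negative
    return relabeled
```

Escalating only the candidates flagged by the cheap judge keeps relabeling cost manageable while still letting the stronger model make the final call.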

### Training Procedure

The model utilizes supervised fine-tuning (SFT) and a bi-level knowledge propagation and selection mechanism (SemiEvol) to enhance its performance.

#### Preprocessing

[More Information Needed]

#### Training Hyperparameters

* **Training regime:** bf16 mixed precision

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on several benchmarks, including BEIR and AIR-Bench.

#### Factors

The model's performance was analyzed across different benchmark datasets to assess its generalization capability.

#### Metrics

The primary evaluation metric is nDCG@10, which measures the ranking quality of the top 10 retrieved passages.
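
For reference, nDCG@10 can be computed from per-rank relevance labels as below; the ranking and labels are made up for illustration.

```python
import math

def ndcg_at_k(ranked_relevances, k=10):
    """nDCG@k for one query; ranked_relevances[i] is the relevance of the passage at rank i."""
    def dcg(rels):
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(rels))
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True)[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: the single relevant passage sits at rank 3, so nDCG@10 = 0.5.
print(ndcg_at_k([0, 0, 1, 0, 0]))
```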

### Results

Relabeling false negatives as true positives improves both E5 (base) and Qwen2.5-7B retrieval models by 0.7-1.4 nDCG@10 on BEIR and by 1.7-1.8 nDCG@10 on zero-shot AIR-Bench evaluation. Similar gains are observed for rerankers fine-tuned on the relabeled data, such as Qwen2.5-3B on BEIR.

#### Summary

The model achieves significant improvements on the BEIR and AIR-Bench benchmarks, indicating that the relabeling strategy improves retrieval performance.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

* **Hardware Type:** [More Information Needed]
* **Hours used:** [More Information Needed]
* **Cloud Provider:** [More Information Needed]
* **Compute Region:** [More Information Needed]
* **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture and Objective

The model uses a BERT-based architecture fine-tuned for generating text embeddings. The objective is to improve retrieval and reranking performance by relabeling hard negatives in the training data.
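
The exact training loss is not stated in this card. For reference, embedding models in this family are typically trained with an InfoNCE-style contrastive objective over a query, its positive passage, and its (relabeled) hard negatives; the sketch below is that generic recipe, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(query_vec, pos_vec, neg_vecs, temperature=0.05):
    """InfoNCE over one positive and a set of hard negatives.
    query_vec, pos_vec: (d,); neg_vecs: (n, d); all L2-normalized embeddings."""
    candidates = torch.cat([pos_vec.unsqueeze(0), neg_vecs], dim=0)  # (1 + n, d)
    logits = candidates @ query_vec / temperature                    # similarity to the query
    target = torch.zeros(1, dtype=torch.long)                        # the positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), target)
```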

### Compute Infrastructure

#### Hardware

[More Information Needed]

[More Information Needed]

## Citation

```bibtex
@misc{luo2024semievol,
      title={SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation},
      author={Junyu Luo and Xiao Luo and Xiusi Chen and Zhiping Xiao and Wei Ju and Ming Zhang},
      year={2024},
      eprint={2410.14745},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.14745},
}
```

## Model Card Authors

[More Information Needed]

## Model Card Contact

[More Information Needed]