Improve model card with details from paper and repo
This PR updates the model card for this retrieval model by providing more relevant information:
- Added `pipeline_tag`: feature-extraction
- Added a model description and a link to the GitHub repository
- Added license information
README.md (changed)
The previous README was the default, unfilled model-card template: YAML front matter containing only `library_name: transformers`, followed by the standard section headings with every field marked "[More Information Needed]". The updated card is shown below.
---
library_name: transformers
license: cc-by-4.0
language:
- en
pipeline_tag: feature-extraction
architectures:
- BertModel
tags:
- embedding
- retrieval
---

# Model Card for Model ID

This model is a BERT model fine-tuned for feature extraction, designed for information retrieval tasks. It is intended to be used as an encoder that generates passage embeddings, which can then improve the recall or re-ranking stages of an information retrieval system. The model was introduced in the paper [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967).
## Model Details

- **Developed by:** Junyu Luo et al.
- **Model type:** BERT
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0
- **Finetuned from model:** E5 Small Unsupervised BGE All Datasets HF

### Model Sources

- **Repository:** This repository
- **Paper:** [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967)
- **Code:** The relabeling code can be found at https://github.com/luojunyu/rlhn
## Uses

This model is designed to generate embeddings for passages in information retrieval systems. It can be used directly for passage retrieval or fine-tuned for specific tasks.

### Direct Use

Generate passage embeddings for retrieval or re-ranking.

### Downstream Use

This model can be fine-tuned for specific retrieval tasks or plugged into a larger information retrieval system to improve its performance.

### Out-of-Scope Use

This model is not intended for generating text or for any tasks other than feature extraction for information retrieval.
## Bias, Risks, and Limitations

The model's performance depends on the quality of the training data. It may exhibit biases present in the original training data or in the relabeled data used for fine-tuning.

## How to Get Started with the Model

Please refer to the code in the original paper repository for how to compute passage embeddings.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = 'WhereIsAI/e5-small-unsupervised-bge-all-datasets-hf'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).to('cuda')

# Tokenize a passage (the tokenizer returns a dict of tensors, not just input IDs)
text = "This is a sample passage."
inputs = tokenizer([text], return_tensors="pt", max_length=512, truncation=True, padding='max_length').to('cuda')

with torch.no_grad():
    output = model(**inputs)

# Use the [CLS] token representation as the passage embedding
embeddings = output.last_hidden_state[:, 0, :]
```
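Once query and passage embeddings are computed, retrieval reduces to cosine-similarity ranking over the corpus. The sketch below is a hypothetical illustration that uses random stand-in vectors in place of real encoder outputs; the 384-dimensional size and the variable names are assumptions, not part of the model's documented API:

```python
import torch
import torch.nn.functional as F

# Stand-in embeddings; in practice these come from the encoder above.
# One query vs. four corpus passages, 384-dim (assumed hidden size).
torch.manual_seed(0)
query_emb = torch.randn(1, 384)
passage_embs = torch.randn(4, 384)

# L2-normalize so the dot product equals cosine similarity.
query_emb = F.normalize(query_emb, dim=-1)
passage_embs = F.normalize(passage_embs, dim=-1)

scores = query_emb @ passage_embs.T                # shape: (1, 4)
ranking = scores.argsort(dim=-1, descending=True)  # passage indices, best first
```

For a large corpus, the same dot-product scoring is typically served from an approximate nearest-neighbor index rather than a dense matrix product.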
## Training Details

### Training Data

The model was fine-tuned using a semi-supervised approach on a mix of labeled and unlabeled data. See the paper for more details.

### Training Procedure

[More Information Needed]

#### Preprocessing

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture and Objective

[More Information Needed]
## Citation

```
@misc{luo2025fixing,
      title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
      author={Luo, Junyu and others},
      year={2025},
      eprint={2505.16967},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2505.16967},
}
```

## Model Card Authors

Niels Drost (Hugging Face)
|