---
library_name: transformers
tags: []
license: cc-by-4.0
pipeline_tag: feature-extraction
---

# Model Card for Model ID

This model identifies and relabels false negatives in IR training datasets as described in the paper [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967). It is based on the e5-base model.

## Model Details

-   **Developed by:** [More Information Needed]
-   **Model type:** BertModel
-   **Language(s) (NLP):** en
-   **License:** cc-by-4.0
-   **Finetuned from model:** models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default

### Model Sources

-   **Repository:** Automatically Generated
-   **Paper:** [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967)
-   **Code:** https://github.com/studio-name/rlhn

## Uses

### Direct Use

This model is designed for identifying and relabeling hard negatives in information retrieval training datasets. It can be used to improve the quality of training data for retrieval and reranker models.
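One way embeddings from such a model can surface candidate false negatives is by cosine-scoring each hard negative against the query and flagging those that score nearly as high as the labeled positive. The sketch below is illustrative only: the function names and the `margin` threshold are assumptions, not values from the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors given as lists of floats.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def flag_candidate_false_negatives(query, positive, negatives, margin=0.05):
    # Hard negatives that score within `margin` of the labeled positive are
    # candidates for relabeling rather than for use as training negatives.
    pos_score = cosine(query, positive)
    return [i for i, neg in enumerate(negatives)
            if cosine(query, neg) >= pos_score - margin]
```

In practice the flagged candidates would then be passed to the LLM cascade for a final relevance judgment, rather than being relabeled on similarity alone.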

### Downstream Use

Fine-tuning retrieval and reranker models using the relabeled data can lead to significant improvements in retrieval effectiveness, especially on out-of-distribution datasets.

### Out-of-Scope Use

This model is not intended for use in applications that require real-time or low-latency performance, as the relabeling process involves computationally intensive LLM inference.

## Bias, Risks, and Limitations

The effectiveness of this model depends on the quality and diversity of the LLMs used for relabeling. Biases in the LLMs may lead to biased relabeling and affect the performance of downstream models.

### Recommendations

Users should be aware of the potential biases and limitations of the LLMs used for relabeling and carefully evaluate the impact of the relabeled data on the performance of downstream models.

## How to Get Started with the Model

Use the model with the transformers library:

```python
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# E5-family models expect a "query: " or "passage: " prefix on the input text.
text = "query: This is an example sentence."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# Mean-pool the token embeddings over the attention mask, then L2-normalize,
# to obtain a single sentence embedding.
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embedding = F.normalize(embedding, dim=-1)
print(embedding.shape)  # (1, hidden_size)
```

## Training Details

### Training Data

The model was trained on a subset of the BGE training collection and uses a vocabulary of 30,522 tokens (the standard BERT uncased vocabulary).

### Training Procedure

The model was fine-tuned using a semi-supervised approach with LLMs to relabel hard negatives.
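A common objective for the retriever fine-tuning step is an InfoNCE-style contrastive loss over the positive and the (relabeled) hard negatives. The sketch below is a minimal reference under that assumption; the function name and temperature are illustrative, not values from the paper.

```python
import math

def infonce_loss(sims, temperature=0.05):
    # sims: similarity scores [positive, neg_1, ..., neg_k] for one query.
    # InfoNCE is the negative log-softmax of the positive over all candidates;
    # the max is subtracted before exponentiating for numerical stability.
    logits = [s / temperature for s in sims]
    m = max(logits)
    log_denom = math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - m - log_denom)
```

The loss is minimized when the positive's similarity dominates the negatives', which is why mislabeled false negatives are harmful: they push genuinely relevant passages away from the query.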

#### Training Hyperparameters

-   **Training regime:** bfloat16 mixed precision

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

BEIR and AIR-Bench

#### Metrics

nDCG@10
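nDCG@10 discounts each relevance gain by its rank position and normalizes by the ideal (descending-relevance) ordering of the same items. A minimal reference implementation:

```python
import math

def dcg_at_k(rels, k=10):
    # Discounted cumulative gain: gain at rank i (0-based) is rel / log2(i + 2).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(ranked_rels, k=10):
    # Normalize by the DCG of the ideal (descending) ordering, so a perfect
    # ranking scores 1.0.
    idcg = dcg_at_k(sorted(ranked_rels, reverse=True), k)
    return dcg_at_k(ranked_rels, k) / idcg if idcg > 0 else 0.0
```

For example, `ndcg_at_k([1, 0, 1, 0])` ≈ 0.92: the relevant document at rank 3 contributes only half the gain it would at rank 2.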

### Results

Relabeling false negatives with true positives improves both E5 (base) and Qwen2.5-7B retrieval models by 0.7-1.4 nDCG@10 on BEIR and by 1.7-1.8 nDCG@10 on zero-shot AIR-Bench evaluation. Similar gains are observed for rerankers fine-tuned on the relabeled data, such as Qwen2.5-3B on BEIR.

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

-   **Hardware Type:** [More Information Needed]
-   **Hours used:** [More Information Needed]
-   **Cloud Provider:** [More Information Needed]
-   **Compute Region:** [More Information Needed]
-   **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation

```
@misc{thakur2025rlhn,
    title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
    author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
    year={2025},
    eprint={2505.16967},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2505.16967},
}
```