Datasets

Modalities: Text · Formats: parquet · Languages: English
Libraries: Datasets, Dask

nielsr (HF Staff) committed
Commit 6de737d · verified · 1 Parent(s): 8244ba0

Correct task category to text-ranking

This PR corrects the `task_categories` metadata from `question-answering` to `text-ranking` for better discoverability of the dataset. The dataset is designed for improving retrieval and reranking models.

Files changed (1)
  1. README.md +111 -112

README.md CHANGED
@@ -1,138 +1,137 @@
  ---
- dataset_info:
-   features:
-   - name: query_id
-     dtype: string
-   - name: query
-     dtype: string
-   - name: positive_passages
-     list:
-     - name: docid
-       dtype: string
-     - name: text
-       dtype: string
-     - name: title
-       dtype: string
-   - name: negative_passages
-     list:
-     - name: docid
-       dtype: string
-     - name: text
-       dtype: string
-     - name: title
-       dtype: string
-   - name: subset
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 1238537194
-     num_examples: 61026
-   download_size: 713689370
-   dataset_size: 1238537194
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- license: cc-by-sa-4.0
- task_categories:
- - question-answering
- language:
- - en
- pretty_name: Remove 100K
- size_categories:
- - 10K<n<100K
  ---

- # Dataset Card for Remove 100K

- ## Dataset Description
- [Repository](https://github.com/castorini/rlhn) |
- [Paper](https://huggingface.co/papers/2505.16967) |
- [ArXiv](https://arxiv.org/abs/2505.16967)

- RLHN is a cascading LLM framework designed to accurately relabel hard negatives in existing IR/RAG training datasets, such as MS MARCO and HotpotQA.

- This Tevatron dataset (100K training pairs) contains the original queries, positives, and hard negatives, after dropping every training pair that contains at least one false negative.

- This repository contains the training pairs that can be used to fine-tune embedding, ColBERT or multi-vector, and reranker models.

- The original dataset (lower quality; containing false negatives) can be found at [rlhn/default-100K](https://huggingface.co/datasets/rlhn/default-100K/).

- > Note: RLHN datasets are not **new** training datasets, but rather existing BGE collection training datasets with their hard negatives cleaned!
 
- ## Dataset Structure

- To access the data using HuggingFace `datasets`:
- ```python
- import datasets
-
- rlhn = datasets.load_dataset('rlhn/remove-100K')
-
- # training set:
- for data in rlhn['train']:
-     query_id = data["query_id"]  # md5 hash of the query text
-     query = data["query"]  # query text
-     subset = data["subset"]  # training dataset, e.g., fiqa or msmarco_passage
-
-     # positive passages
-     for positive_passage in data["positive_passages"]:
-         doc_id = positive_passage["docid"]
-         title = positive_passage["title"]  # title is usually empty; merged into text
-         text = positive_passage["text"]  # contains both the title & text
-
-     # hard negative passages
-     for negative_passage in data["negative_passages"]:
-         doc_id = negative_passage["docid"]
-         title = negative_passage["title"]  # title is usually empty; merged into text
-         text = negative_passage["text"]  # contains both the title & text
- ```

- ## Original Dataset Statistics
- The following table contains the number of training pairs for each training dataset included in RLHN. These numbers are for the default setting.

- | Dataset           | 100K splits | 250K splits | 400K splits | 680K splits |
- |-------------------|-------------|-------------|-------------|-------------|
- | arguana           | 4,065       | 4,065       | 4,065       | 4,065       |
- | fever             | 28,755      | 28,755      | 28,755      | 28,755      |
- | fiqa              | 5,500       | 5,500       | 5,500       | 5,500       |
- | hotpotqa          | 10,250      | 30,000      | 84,516      | 84,516      |
- | msmarco_passage   | 49,571      | 145,000     | 210,000     | 485,823     |
- | nq                | 6,110       | 30,000      | 58,568      | 58,568      |
- | scidocsrr         | 12,654      | 12,654      | 12,654      | 12,654      |
- | **total**         | **96,167**  | **255,974** | **404,058** | **679,881** |

- ## License
- The RLHN dataset is made available with the CC-BY-SA 4.0 license.

- ## Hashing & IDs

- We generate the md5 hash as the unique identifier (ID) for both queries & documents, using the code below:

  ```python
- import hashlib
-
- def get_md5_hash(text):
-     """Calculates the MD5 hash of a given string.
-
-     Args:
-         text: The string to hash.
-
-     Returns:
-         The MD5 hash of the string as a hexadecimal string.
-     """
-     text_bytes = text.encode('utf-8')  # Encode the string to bytes
-     md5_hash = hashlib.md5(text_bytes).hexdigest()
-     return md5_hash
  ```
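
As a quick, standalone check of the hashing scheme above (the string "hello" is an arbitrary example, not a dataset query):

```python
import hashlib

def get_md5_hash(text):
    # Same scheme as the card describes: UTF-8 encode, then hex MD5 digest.
    return hashlib.md5(text.encode("utf-8")).hexdigest()

# IDs are 32-character hex strings, stable across runs and machines.
print(get_md5_hash("hello"))  # 5d41402abc4b2a76b9719d911017c592
```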

  ## Citation

  ```
- @misc{thakur2025relabel,
-   title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
-   author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
-   year={2025},
-   eprint={2505.16967},
-   archivePrefix={arXiv},
-   primaryClass={cs.IR},
-   url={https://arxiv.org/abs/2505.16967},
  }
  ```
 
  ---
+ library_name: transformers
+ tags: []
+ license: cc-by-4.0
+ pipeline_tag: feature-extraction
  ---

+ # Model Card for Model ID

+ This model identifies and relabels false negatives in IR training datasets, as described in the paper [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967). It is based on the e5-base model.

+ ## Model Details

+ - **Developed by:** [More Information Needed]
+ - **Model type:** BertModel
+ - **Language(s) (NLP):** en
+ - **License:** cc-by-4.0
+ - **Finetuned from model:** models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default

+ ### Model Sources

+ - **Repository:** Automatically Generated
+ - **Paper:** [Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval](https://huggingface.co/papers/2505.16967)
+ - **Code:** https://github.com/studio-name/rlhn

+ ## Uses

+ ### Direct Use

+ This model is designed for identifying and relabeling hard negatives in information retrieval training datasets. It can be used to improve the quality of training data for retrieval and reranker models.

+ ### Downstream Use

+ Fine-tuning retrieval and reranker models on the relabeled data can lead to significant improvements in retrieval effectiveness, especially on out-of-distribution datasets.

+ ### Out-of-Scope Use

+ This model is not intended for applications that require real-time or low-latency performance, as the relabeling process involves computationally intensive LLM inference.

+ ## Bias, Risks, and Limitations

+ The effectiveness of this model depends on the quality and diversity of the LLMs used for relabeling. Biases in those LLMs may lead to biased relabeling and affect the performance of downstream models.

+ ### Recommendations

+ Users should be aware of the potential biases and limitations of the LLMs used for relabeling, and should carefully evaluate the impact of the relabeled data on downstream model performance.

+ ## How to Get Started with the Model

+ Use the model with the `transformers` library:

  ```python
+ from transformers import AutoModel, AutoTokenizer
+
+ model_name = "models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModel.from_pretrained(model_name)
+
+ # Example usage
+ text = "This is an example sentence."
+ inputs = tokenizer(text, return_tensors="pt")
+ outputs = model(**inputs)
+ embeddings = outputs.last_hidden_state
+ print(embeddings.shape)
  ```
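
The snippet above prints per-token hidden states; to get one embedding per sentence, E5-style encoders are commonly mean-pooled over the attention mask. A standalone numpy sketch with stand-in tensors (the pooling recipe is an assumption from common E5 usage, not stated in this card):

```python
import numpy as np

# Stand-ins for model outputs: batch=1, seq_len=4, hidden=3. The last
# position is padding, so the attention mask zeroes it out.
last_hidden_state = np.arange(12, dtype=np.float32).reshape(1, 4, 3)
attention_mask = np.array([[1, 1, 1, 0]], dtype=np.float32)

# Attention-masked mean pooling: average the real token vectors only.
mask = attention_mask[:, :, None]
sentence_embedding = (last_hidden_state * mask).sum(axis=1) / mask.sum(axis=1)
print(sentence_embedding)  # mean of the first three token vectors
```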

+ ## Training Details

+ ### Training Data

+ The model used here was trained on a subset of the BGE collection and has a vocab size of 30522.

+ ### Training Procedure

+ The model was fine-tuned using a semi-supervised approach, with LLMs used to relabel hard negatives.

+ #### Training Hyperparameters

+ - **Training regime:** bfloat16 mixed precision

+ ## Evaluation

+ ### Testing Data, Factors & Metrics

+ #### Testing Data

+ BEIR and AIR-Bench

+ #### Metrics

+ nDCG@10

+ ### Results

+ Relabeling false negatives as true positives improves both E5 (base) and Qwen2.5-7B retrieval models by 0.7-1.4 nDCG@10 on BEIR and by 1.7-1.8 nDCG@10 on zero-shot AIR-Bench evaluation. Similar gains are observed for rerankers fine-tuned on the relabeled data, such as Qwen2.5-3B on BEIR.
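
For reference, the nDCG@10 metric used above can be sketched with the standard DCG formula (a minimal illustration, not the evaluation harness used in the paper):

```python
import math

def dcg_at_k(rels, k=10):
    # rels: graded relevance of ranked results, best hit first
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k=10):
    # Normalize by the DCG of the ideal (relevance-sorted) ranking.
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

# Toy example: relevant documents retrieved at ranks 1 and 3 out of 4.
print(round(ndcg_at_k([1, 0, 1, 0]), 4))  # 0.9197
```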

+ ## Environmental Impact

+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]

+ ## Technical Specifications

+ ### Model Architecture and Objective

+ [More Information Needed]

+ ### Compute Infrastructure

+ [More Information Needed]

+ #### Hardware

+ [More Information Needed]

+ #### Software

+ [More Information Needed]
  ## Citation

  ```
+ @misc{thakur2025relabel,
+   title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
+   author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
+   year={2025},
+   eprint={2505.16967},
+   archivePrefix={arXiv},
+   primaryClass={cs.IR},
+   url={https://arxiv.org/abs/2505.16967},
  }
  ```