Update task category to text-ranking and add link to paper

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +104 -24
README.md CHANGED
@@ -1,4 +1,12 @@
  ---
  dataset_info:
  features:
  - name: query_id
@@ -25,34 +33,106 @@ dataset_info:
  dtype: string
  splits:
  - name: train
- num_bytes: 43144012642.700035
- num_examples: 680223
- download_size: 6653360727
- dataset_size: 43144012642.700035
  configs:
  - config_name: default
  data_files:
  - split: train
  path: data/train-*
- task_categories:
- - sentence-similarity
- language:
- - en
- pretty_name: BGE Retrieval Dataset (7 Datasets)
- size_categories:
- - 100K<n<1M
  ---

- ## BGE-Retrieval-Data (7 Datasets)
- This filtered version of `nthakur/bge-full-data` includes only 7 datasets of ~678K training pairs.
- The distribution of each subset is shown below:
-
- | Dataset | Training Pairs | Pos | Hard Negs |
- | -------: | -----------: | -----: | ------: |
- | msmarco_passage | 485823 | 1.1 | 25.0 |
- | hotpotqa | 84516 | 2.0 | 20.0 |
- | nq | 58568 | 1.0 | 98.5 |
- | fever | 29096 | 1.3 | 20.0 |
- | scidocsrr | 12655 | 1.6 | 19.7 |
- | fiqa | 5500 | 2.6 | 15.0 |
- | arguana | 4065 | 1.0 | 13.6 |

  ---
+ language:
+ - en
+ license: cc-by-sa-4.0
+ size_categories:
+ - 100K<n<1M
+ task_categories:
+ - text-ranking
+ pretty_name: Remove 680K
  dataset_info:
  features:
  - name: query_id

  dtype: string
  splits:
  - name: train
+ num_bytes: 6242007530
+ num_examples: 323622
+ download_size: 3696876038
+ dataset_size: 6242007530
  configs:
  - config_name: default
  data_files:
  - split: train
  path: data/train-*
  ---

+ # Dataset Card for Remove 680K
+
+ ## Dataset Description
+ [Repository](https://github.com/castorini/rlhn) |
+ [Paper](https://huggingface.co/papers/2505.16967) |
+ [ArXiv](https://arxiv.org/abs/2505.16967)
+
+ RLHN is a cascading LLM framework designed to accurately relabel hard negatives in existing IR/RAG training datasets, such as MS MARCO and HotpotQA.
+
+ This Tevatron-format dataset is derived from the 680K training-pair collection and contains the original queries, positives, and hard negatives, after dropping every training pair that contains at least one false negative.
+
+ This repository contains training pairs that can be used to fine-tune embedding, multi-vector (e.g., ColBERT), and reranker models.
+
+ The original, uncleaned dataset (still containing false negatives) can be found at [rlhn/default-680K](https://huggingface.co/datasets/rlhn/default-680K/).
+
+ > Note: RLHN datasets are not **new** training datasets, but rather existing BGE-collection training datasets with their hard negatives cleaned!
+
+ ## Dataset Structure
+
+ To access the data using HuggingFace `datasets`:
+ ```python
+ import datasets
+
+ rlhn = datasets.load_dataset('rlhn/remove-680K')
+
+ # training set:
+ for data in rlhn['train']:
+     query_id = data["query_id"]  # md5 hash of the query text
+     query = data["query"]        # query text
+     subset = data["subset"]      # source training dataset, e.g., fiqa or msmarco_passage
+
+     # positive passages
+     for positive_passage in data["positive_passages"]:
+         doc_id = positive_passage["docid"]
+         title = positive_passage["title"]  # usually empty; merged into text when present
+         text = positive_passage["text"]    # contains both the title & text
+
+     # hard negative passages
+     for negative_passage in data["negative_passages"]:
+         doc_id = negative_passage["docid"]
+         title = negative_passage["title"]  # usually empty; merged into text when present
+         text = negative_passage["text"]    # contains both the title & text
+ ```
+
+ ## Original Dataset Statistics
+ The following table lists the number of training pairs contributed by each source dataset in each RLHN split (default setting).
+
+ | Dataset | 100K split | 250K split | 400K split | 680K split |
+ |-------------------|-------------|-------------|-------------|-------------|
+ | arguana | 4,065 | 4,065 | 4,065 | 4,065 |
+ | fever | 28,755 | 28,755 | 28,755 | 28,755 |
+ | fiqa | 5,500 | 5,500 | 5,500 | 5,500 |
+ | hotpotqa | 10,250 | 30,000 | 84,516 | 84,516 |
+ | msmarco_passage | 49,571 | 145,000 | 210,000 | 485,823 |
+ | nq | 6,110 | 30,000 | 58,568 | 58,568 |
+ | scidocsrr | 12,654 | 12,654 | 12,654 | 12,654 |
+ | **total** | **96,167** | **255,974** | **404,058** | **679,881** |
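The 680K column's total can be sanity-checked from the per-dataset counts above:

```python
# Per-dataset training-pair counts for the 680K split, transcribed from the table above.
counts_680k = {
    "arguana": 4_065,
    "fever": 28_755,
    "fiqa": 5_500,
    "hotpotqa": 84_516,
    "msmarco_passage": 485_823,
    "nq": 58_568,
    "scidocsrr": 12_654,
}

print(sum(counts_680k.values()))  # 679881
```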
+
+ ## License
+ The RLHN dataset is made available under the CC-BY-SA 4.0 license.
+
+ ## Hashing & IDs
+
+ We generate an md5 hash as the unique identifier (ID) for both queries and documents, using the code below:
+
+ ```python
+ import hashlib
+
+ def get_md5_hash(text):
+     """Calculates the MD5 hash of a given string.
+
+     Args:
+         text: The string to hash.
+
+     Returns:
+         The MD5 hash of the string as a hexadecimal string.
+     """
+     text_bytes = text.encode('utf-8')  # Encode the string to bytes
+     md5_hash = hashlib.md5(text_bytes).hexdigest()
+     return md5_hash
+ ```
+
+ ## Citation
+ ```
+ @misc{thakur2025relabel,
+   title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
+   author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
+   year={2025},
+   eprint={2505.16967},
+   archivePrefix={arXiv},
+   primaryClass={cs.IR},
+   url={https://arxiv.org/abs/2505.16967},
+ }
+ ```