IrvinTopi committed
Commit 79d4290 · verified · 1 Parent(s): 16e33a8

Update README.md

Files changed (1):
  1. README.md +113 -93

README.md CHANGED
@@ -1,99 +1,119 @@
 ---
-configs:
-- config_name: ara
-  data_files:
-  - split: train
-    path: data/ara/train_marginmse.jsonl
-- config_name: dan
-  data_files:
-  - split: train
-    path: data/dan/train_marginmse.jsonl
-- config_name: deu
-  data_files:
-  - split: train
-    path: data/deu/train_marginmse.jsonl
-- config_name: eng
-  data_files:
-  - split: train
-    path: data/eng/train_marginmse.jsonl
-- config_name: fas
-  data_files:
-  - split: train
-    path: data/fas/train_marginmse.jsonl
-- config_name: fra
-  data_files:
-  - split: train
-    path: data/fra/train_marginmse.jsonl
-- config_name: hin
-  data_files:
-  - split: train
-    path: data/hin/train_marginmse.jsonl
-- config_name: ind
-  data_files:
-  - split: train
-    path: data/ind/train_marginmse.jsonl
-- config_name: ita
-  data_files:
-  - split: train
-    path: data/ita/train_marginmse.jsonl
-- config_name: jpn
-  data_files:
-  - split: train
-    path: data/jpn/train_marginmse.jsonl
-- config_name: kor
-  data_files:
-  - split: train
-    path: data/kor/train_marginmse.jsonl
-- config_name: nld
-  data_files:
-  - split: train
-    path: data/nld/train_marginmse.jsonl
-- config_name: pol
-  data_files:
-  - split: train
-    path: data/pol/train_marginmse.jsonl
-- config_name: por
-  data_files:
-  - split: train
-    path: data/por/train_marginmse.jsonl
-- config_name: rus
-  data_files:
-  - split: train
-    path: data/rus/train_marginmse.jsonl
-- config_name: spa
-  data_files:
-  - split: train
-    path: data/spa/train_marginmse.jsonl
-- config_name: swe
-  data_files:
-  - split: train
-    path: data/swe/train_marginmse.jsonl
-- config_name: tur
-  data_files:
-  - split: train
-    path: data/tur/train_marginmse.jsonl
-- config_name: vie
-  data_files:
-  - split: train
-    path: data/vie/train_marginmse.jsonl
-- config_name: zho
-  data_files:
-  - split: train
-    path: data/zho/train_marginmse.jsonl
-
 ---

-# WebFAQ Distilled (MarginMSE)
-
-This dataset contains **Hard Negatives** and **Teacher Scores** (from BGE-M3) for the WebFAQ dataset.
-It is designed for training dense retrievers using **MarginMSE Loss**.
-
-## Structure
-- **query**: The user question.
-- **positive**: The correct answer.
-- **positive_score**: The teacher's score for the positive pair.
-- **negatives**: A list of hard negatives mined via BM25.
-- **negative_scores**: The teacher's scores for each negative.
-
-## Languages
-20 languages included.
 ---
+language:
+- ara
+- dan
+- deu
+- eng
+- fas
+- fra
+- hin
+- ind
+- ita
+- jpn
+- kor
+- nld
+- pol
+- por
+- rus
+- spa
+- swe
+- tur
+- vie
+- zho
+multilingual: true
+tags:
+- dense-retrieval
+- hard-negatives
+- knowledge-distillation
+- webfaq
+license: cc-by-4.0
+task_categories:
+- sentence-similarity
+- text-retrieval
+size_categories:
+- 1M<n<10M
 ---

+# WebFAQ 2.0: Multilingual Hard Negatives
+
+This dataset contains **mined hard negatives** derived from the **WebFAQ 2.0** corpus. It includes approximately **1.3 million** samples across **20 languages**.
+
+The dataset is designed to support robust training of dense retrieval models, specifically enabling:
+1. **Contrastive Learning:** Using strict hard negatives to improve discrimination.
+2. **Knowledge Distillation:** Using the provided cross-encoder scores to train with soft labels (e.g., MarginMSE).
+
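The two objectives can be sketched with plain similarity scores standing in for a real student model. This is an illustrative sketch only: the function names, the InfoNCE formulation of the contrastive objective, and all numbers are assumptions, not the dataset's training code.

```python
import math

def infonce_loss(pos_sim, neg_sims, temperature=0.05):
    """Contrastive (InfoNCE) loss: the positive competes against the hard negatives."""
    logits = [pos_sim / temperature] + [s / temperature for s in neg_sims]
    log_denom = math.log(sum(math.exp(z) for z in logits))
    return log_denom - logits[0]  # negative log-softmax of the positive

def marginmse_loss(student_margins, teacher_margins):
    """Distillation (MarginMSE) loss: student score margins regress onto teacher margins."""
    return sum((s - t) ** 2 for s, t in zip(student_margins, teacher_margins)) / len(teacher_margins)

# A well-separated positive yields a near-zero contrastive loss ...
print(infonce_loss(0.95, [0.30, 0.25]))
# ... and margins that already match the teacher yield a zero distillation loss.
print(marginmse_loss([0.58, 0.43], [0.58, 0.43]))  # -> 0.0
```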
+## Dataset Creation & Mining Process
+
+To ensure high-quality training signals, we employed a multi-stage mining pipeline (retrieval, reranking, and filtering) that balances difficulty with correctness.
+
+### 1. Lexical Retrieval (Recall)
+For every query in WebFAQ, we first retrieved the **top-200 candidate answers** from the monolingual corpus using **BM25**.
+* **Goal:** Identify candidates with high lexical overlap (shared keywords) that are likely to be "hard" for a dense retriever to distinguish.
+
+### 2. Semantic Reranking (Precision)
+We reranked the top-200 candidates using the state-of-the-art cross-encoder model **[BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)**.
+* **Goal:** Assess the true semantic relevance of each candidate.
+
+### 3. Filtering & Scoring
+We applied a rigorous filtering strategy to curate the final dataset:
+* **False Negative Removal:** Candidates with extremely high cross-encoder scores (likely valid answers) were discarded to prevent "poisoning" the training data with correct answers labeled as negatives.
+* **Easy Negative Removal:** Candidates with very low scores were discarded to keep training efficient.
+* **Score Retention:** We retained the BGE-M3 relevance score for every negative, enabling knowledge-distillation workflows.
+
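The filtering step above amounts to keeping only candidates whose teacher score falls in a "hard" band. A minimal sketch, assuming illustrative threshold values (the actual cut-offs used for this dataset are not stated on this card):

```python
# Assumed thresholds for illustration only -- not the dataset's actual cut-offs.
FALSE_NEG_THRESHOLD = 0.9  # above this: likely a valid answer (false negative)
EASY_NEG_THRESHOLD = 0.1   # below this: trivially irrelevant (easy negative)

def filter_candidates(scored):
    """Keep reranked candidates whose teacher score falls in the 'hard' band."""
    return [
        (text, score)
        for text, score in scored
        if EASY_NEG_THRESHOLD <= score <= FALSE_NEG_THRESHOLD
    ]

candidates = [
    ("a valid paraphrase of the answer", 0.97),  # discarded: false negative
    ("topically similar but incorrect", 0.55),   # kept: hard negative
    ("unrelated boilerplate text", 0.02),        # discarded: easy negative
]
print(filter_candidates(candidates))  # only the hard negative remains
```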
+## Dataset Structure
+
+The data is stored in a **grouped format** (JSONL), where each line represents a single query paired with its positive answer and a list of mined hard negatives.
+
+Each sample contains:
+
+| Field | Type | Description |
+| :--- | :--- | :--- |
+| `query` | String | The user question. |
+| `positive` | String | The ground-truth correct answer. |
+| `positive_score` | Float | The **BGE-M3** relevance score for the positive answer. |
+| `negatives` | List[String] | A list of mined hard negatives (non-relevant but similar). |
+| `negative_scores` | List[Float] | The **BGE-M3** relevance scores corresponding to each negative. |
+
+**Note:** This format is optimized for:
+* **Contrastive Learning:** You can instantly sample 1 positive and *N* negatives.
+* **MarginMSE:** You have all the teacher scores (positive and negative) required to compute the margin loss.
+
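For MarginMSE, the teacher margins can be read straight out of one grouped sample. A minimal sketch, where the helper name and the example values are invented for illustration (only the field names come from the schema above):

```python
def margin_targets(sample):
    """Teacher margins: positive_score minus each negative_score.

    A student model is trained so that its own margin,
    student(query, positive) - student(query, negative),
    matches these values under mean squared error (MarginMSE).
    """
    return [sample["positive_score"] - s for s in sample["negative_scores"]]

# Invented example following the documented schema.
sample = {
    "query": "What is the capital of France?",
    "positive": "Paris is the capital of France.",
    "positive_score": 0.98,
    "negatives": ["Lyon is a large city in France.", "Paris is a city in Texas."],
    "negative_scores": [0.40, 0.55],
}

print([round(m, 2) for m in margin_targets(sample)])  # -> [0.58, 0.43]
```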
+## Languages & Distribution
+
+The dataset covers **20 languages** with the following sample counts:
+
+| ISO Code | Language | Samples |
+| :--- | :--- | :--- |
+| `ara` | Arabic | 32,000 |
+| `dan` | Danish | 32,000 |
+| `deu` | German | 96,000 |
+| `eng` | English | 128,000 |
+| `fas` | Persian | 64,000 |
+| `fra` | French | 96,000 |
+| `hin` | Hindi | 32,000 |
+| `ind` | Indonesian | 32,000 |
+| `ita` | Italian | 96,000 |
+| `jpn` | Japanese | 96,000 |
+| `kor` | Korean | 32,000 |
+| `nld` | Dutch | 96,000 |
+| `pol` | Polish | 64,000 |
+| `por` | Portuguese | 64,000 |
+| `rus` | Russian | 96,000 |
+| `spa` | Spanish | 96,000 |
+| `swe` | Swedish | 32,000 |
+| `tur` | Turkish | 32,000 |
+| `vie` | Vietnamese | 32,000 |
+| `zho` | Chinese | 32,000 |
+| **Total** | **All** | **~1,280,000** |
+
+## Citation
+
+If you use this dataset, please cite the WebFAQ paper:
+
+```bibtex
+@inproceedings{dinzinger2025webfaq,
+  title={WebFAQ: A Multilingual Collection of Natural QA Datasets for Dense Retrieval},
+  author={Dinzinger, Michael and Caspari, Laura and Dastidar, Kanishka Ghosh and Mitrović, Jelena and Granitzer, Michael},
+  booktitle={Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval},
+  year={2025}
+}
+```