---
language:
- ara
- dan
- deu
- eng
- fas
- fra
- hin
- ind
- ita
- jpn
- kor
- nld
- pol
- por
- rus
- spa
- swe
- tur
- vie
- zho
multilingual: true
tags:
- dense-retrieval
- hard-negatives
- knowledge-distillation
- webfaq
license: cc-by-4.0
task_categories:
- sentence-similarity
- text-retrieval
size_categories:
- 1M<n<10M
---

# WebFAQ 2.0: Multilingual Hard Negatives

This dataset contains **mined hard negatives** derived from the **WebFAQ 2.0** corpus. It includes approximately **1.3 million** samples across **20 languages**.

The dataset is designed to support robust training of dense retrieval models, specifically enabling:
1.  **Contrastive Learning:** Using strict hard negatives to improve discrimination.
2.  **Knowledge Distillation:** Using the provided cross-encoder scores to train with soft labels (e.g., MarginMSE).
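
For the distillation objective in item 2, here is a minimal PyTorch sketch of MarginMSE (a sketch, not the training code used by the authors; the teacher scores stand in for the BGE-M3 scores shipped with this dataset, the student scores come from whatever bi-encoder is being trained):

```python
import torch
import torch.nn.functional as F

def margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg):
    """MarginMSE: push the student's positive-negative margin
    towards the teacher's (cross-encoder) margin."""
    return F.mse_loss(student_pos - student_neg, teacher_pos - teacher_neg)

# Toy batch of 2; in practice the teacher scores come from
# `positive_score` and `negative_scores` in this dataset.
s_pos = torch.tensor([12.1, 10.4])  # student score(query, positive)
s_neg = torch.tensor([9.8, 11.0])   # student score(query, negative)
t_pos = torch.tensor([0.92, 0.88])  # teacher positive scores
t_neg = torch.tensor([0.35, 0.41])  # teacher negative scores
print(margin_mse_loss(s_pos, s_neg, t_pos, t_neg))
```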

## Dataset Creation & Mining Process

To ensure high-quality training signals, we employed a **two-stage mining pipeline**. The full mining script is available in this repository: [mining_script.py](./mining_script.py).

### 1. Lexical Retrieval (Recall)
We first retrieved the **top-200 candidate answers** for each query using **BM25** (via Pyserini).
* **Goal:** Identify candidates with high lexical overlap (shared keywords) that are likely to be "hard" for a dense retriever to distinguish.
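
As a rough sketch, stage 1 with Pyserini looks like the following (the index path is a hypothetical placeholder; the exact setup lives in the mining script linked above):

```python
from pyserini.search.lucene import LuceneSearcher

# Hypothetical Lucene index built over the WebFAQ answer corpus
searcher = LuceneSearcher("indexes/webfaq-answers-eng")

query = "How do I reset my password?"
hits = searcher.search(query, k=200)  # top-200 lexical candidates

candidates = [(hit.docid, hit.score) for hit in hits]
```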

### 2. Semantic Reranking (Precision)
We reranked the top-200 candidates using the state-of-the-art cross-encoder model: **[BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)**.
* **Goal:** Assess the true semantic relevance of each candidate.
* **Filtering:** We removed likely false negatives (candidates whose semantic score was high enough that they may in fact be relevant) and easy negatives (candidates with scores too low to provide a useful training signal).
* **Scoring:** We retained the BGE-M3 relevance scores for every negative to enable knowledge distillation (MarginMSE).
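
As a sketch, stage 2 with the FlagEmbedding package could look like the following; the score-combination mode and the filter thresholds here are illustrative placeholders, not the values used to build this dataset:

```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

query = "How do I reset my password?"
candidates = ["Go to Settings > Account > Reset password.",
              "Our office is open Monday to Friday."]  # top-200 in practice

# Score every (query, candidate) pair with BGE-M3
scores = model.compute_score([[query, c] for c in candidates])["colbert+sparse+dense"]

# Illustrative filtering: drop likely false negatives (score too high)
# and easy negatives (score too low); thresholds are placeholders.
MIN_SCORE, MAX_SCORE = 0.3, 0.8
hard_negatives = [(c, s) for c, s in zip(candidates, scores)
                  if MIN_SCORE <= s <= MAX_SCORE]
```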

### Code & Reproduction
You can reproduce the mining process using the provided script:

```bash
python mining_hardnegatives_bge3.py \
  --repo-id "PaDaS-Lab/webfaq-retrieval" \
  --output-dir "./data/distilled_data" \
  --k-negatives 200
```

## Dataset Structure

The data is stored in a **grouped format** (JSONL), where each line represents a single query paired with its positive answer and a list of mined hard negatives.

Each sample contains:

| Field | Type | Description |
| :--- | :--- | :--- |
| `query` | String | The user question. |
| `positive` | String | The ground-truth correct answer. |
| `positive_score` | Float | The **BGE-M3** relevance score for the positive answer. |
| `negatives` | List[String] | A list of mined hard negatives (non-relevant but similar). |
| `negative_scores` | List[Float] | The **BGE-M3** relevance scores corresponding to each negative. |

**Note:** This format is optimized for:
* **Contrastive Learning:** You can instantly sample 1 positive and $N$ negatives.
* **MarginMSE:** You have all the teacher scores (positive and negative) required to compute the margin loss.
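
For instance, a minimal loading sketch (the file name is a placeholder for whichever split you download):

```python
import json
import random

# Placeholder path: any JSONL split of this dataset
with open("webfaq_hard_negatives.jsonl", encoding="utf-8") as f:
    for line in f:
        sample = json.loads(line)

        # Contrastive learning: 1 positive + N sampled hard negatives
        n = min(4, len(sample["negatives"]))
        idx = random.sample(range(len(sample["negatives"])), n)
        negatives = [sample["negatives"][i] for i in idx]

        # MarginMSE: teacher margins from the stored BGE-M3 scores
        margins = [sample["positive_score"] - sample["negative_scores"][i]
                   for i in idx]
        break  # demo: inspect only the first sample
```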

## Languages & Distribution

The dataset covers **20 languages** with the following sample counts:

| ISO Code | Language | Samples |
| :--- | :--- | :--- |
| `ara` | Arabic | 32,000 |
| `dan` | Danish | 32,000 |
| `deu` | German | 96,000 |
| `eng` | English | 128,000 |
| `fas` | Persian | 64,000 |
| `fra` | French | 96,000 |
| `hin` | Hindi | 32,000 |
| `ind` | Indonesian | 32,000 |
| `ita` | Italian | 96,000 |
| `jpn` | Japanese | 96,000 |
| `kor` | Korean | 32,000 |
| `nld` | Dutch | 96,000 |
| `pol` | Polish | 64,000 |
| `por` | Portuguese | 64,000 |
| `rus` | Russian | 96,000 |
| `spa` | Spanish | 96,000 |
| `swe` | Swedish | 32,000 |
| `tur` | Turkish | 32,000 |
| `vie` | Vietnamese | 32,000 |
| `zho` | Chinese | 32,000 |
| **Total** | **All** | **~1,280,000** |

## Citation