Update README.md
README.md
CHANGED
@@ -44,21 +44,27 @@ The dataset is designed to support robust training of dense retrieval models, sp
 
 ## Dataset Creation & Mining Process
 
-To ensure high-quality training signals, we employed a **two-stage mining pipeline
+To ensure high-quality training signals, we employed a **two-stage mining pipeline**. The full mining script is available in this repository: [mining_hardnegatives_bge3.py](./mining_hardnegatives_bge3.py).
 
 ### 1. Lexical Retrieval (Recall)
-
+We first retrieved the **top-200 candidate answers** for each query using **BM25** (via Pyserini).
 * **Goal:** Identify candidates with high lexical overlap (shared keywords) that are likely to be "hard" for a dense retriever to distinguish.
 
 ### 2. Semantic Reranking (Precision)
 We reranked the top-200 candidates using the state-of-the-art cross-encoder model: **[BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)**.
 * **Goal:** Assess the true semantic relevance of each candidate.
+* **Filtering:** We applied a rigorous filtering strategy to remove False Negatives (high semantic scores) and Easy Negatives (low scores).
+* **Scoring:** We retained the BGE-M3 relevance scores for every negative to enable knowledge distillation (MarginMSE).
 
-###
-
-
-
-
+### Code & Reproduction
+You can reproduce the mining process using the provided script:
+
+```bash
+python mining_hardnegatives_bge3.py \
+    --repo-id "PaDaS-Lab/webfaq-retrieval" \
+    --output-dir "./data/distilled_data" \
+    --k-negatives 200
+```
 
 ## Dataset Structure
 
@@ -107,13 +113,3 @@ The dataset covers **20 languages** with the following sample counts:
 | **Total** | **All** | **~1,280,000** |
 
 ## Citation
-
-If you use this dataset, please cite the WebFAQ 2.0 paper:
-
-```bibtex
-@inproceedings{dinzinger2025webfaq,
-  title={WebFAQ: A Multilingual Collection of Natural QA Datasets for Dense Retrieval},
-  author={Dinzinger, Michael and Caspari, Laura and Dastidar, Kanishka Ghosh and Mitrović, Jelena and Granitzer, Michael},
-  booktitle={Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval},
-  year={2025}
-}
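To make the two stages of the added pipeline description concrete, here is a minimal sketch of Stage 1 (BM25 recall) with Pyserini, which the README names as the retrieval backend. The index path, query, and BM25 parameters are illustrative assumptions; the authoritative configuration is whatever the mining script ships with.

```python
# Minimal sketch of Stage 1: BM25 recall with Pyserini.
# Assumes a Lucene index has already been built at ./indexes/webfaq
# (a hypothetical path; the real index location is set in the mining script).
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("./indexes/webfaq")
searcher.set_bm25(k1=0.9, b=0.4)  # Pyserini's default BM25 parameters

query = "How do I reset my password?"  # illustrative query
hits = searcher.search(query, k=200)   # top-200 lexical candidates, as in the README

# Each hit carries the document id and its BM25 score.
for hit in hits[:5]:
    print(f"{hit.docid}\t{hit.score:.3f}")
```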
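Stage 2 can be sketched similarly. The README scores candidates with BAAI/bge-m3; the snippet below uses the model's dense embeddings via the FlagEmbedding package (BGE-M3 also produces sparse and multi-vector signals, which the actual script may combine) and then applies the score-band filtering described in the **Filtering** bullet. Both cutoff values are assumptions for illustration, not the dataset's real thresholds.

```python
# Minimal sketch of Stage 2: scoring BM25 candidates with BGE-M3 and
# keeping only the "hard" band of negatives. Cutoffs are illustrative.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

query = "How do I reset my password?"
candidates = [
    "Click 'Forgot password' on the login page.",       # likely false negative
    "Our office is open from 9am to 5pm.",              # easy negative
    "You can update billing details in your account.",  # plausible hard negative
]

# Dense vectors are L2-normalised, so a dot product is cosine similarity.
q_vec = model.encode([query])["dense_vecs"]      # shape (1, 1024)
c_vecs = model.encode(candidates)["dense_vecs"]  # shape (3, 1024)
scores = (q_vec @ c_vecs.T)[0]

FALSE_NEG_CUTOFF = 0.85  # assumed: above this, treat as an unlabeled positive
EASY_NEG_CUTOFF = 0.30   # assumed: below this, the negative teaches nothing

# Retain (text, score) pairs; the scores are kept for distillation later.
hard_negatives = [
    (text, float(score))
    for text, score in zip(candidates, scores)
    if EASY_NEG_CUTOFF <= score < FALSE_NEG_CUTOFF
]
print(hard_negatives)
```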
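Finally, the **Scoring** bullet is what enables MarginMSE distillation downstream: the student retriever is trained to reproduce the teacher's positive-negative score *margin*, not its absolute scores. A toy PyTorch sketch, with made-up numbers standing in for real model outputs:

```python
# Toy sketch of MarginMSE knowledge distillation. The student's margin
# (positive score minus negative score) is regressed onto the teacher's
# margin computed from the retained BGE-M3 scores. All numbers are made up.
import torch
import torch.nn.functional as F

def margin_mse_loss(student_pos, student_neg, teacher_pos, teacher_neg):
    """MSE between the student's and the teacher's pos-neg score margins."""
    return F.mse_loss(student_pos - student_neg, teacher_pos - teacher_neg)

# One batch of two (query, positive, hard negative) triples.
s_pos = torch.tensor([12.1, 10.4])  # student scores for the gold answers
s_neg = torch.tensor([11.5, 9.0])   # student scores for mined hard negatives
t_pos = torch.tensor([0.92, 0.88])  # teacher (BGE-M3) scores for the gold answers
t_neg = torch.tensor([0.55, 0.41])  # teacher scores retained in this dataset

print(margin_mse_loss(s_pos, s_neg, t_pos, t_neg))  # scalar training loss
```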