IrvinTopi committed
Commit a67f76c · verified · 1 Parent(s): 79d4290

Update README.md

Files changed (1):
  1. README.md +12 -17

README.md CHANGED
@@ -44,21 +44,26 @@ The dataset is designed to support robust training of dense retrieval models, sp
 
 ## Dataset Creation & Mining Process
 
-To ensure high-quality training signals, we employed a **two-stage mining pipeline** that balances difficulty with correctness.
+To ensure high-quality training signals, we employed a **two-stage mining pipeline**. The full mining script is available in this repository: [mining_script.py](./mining_script.py).
 
 ### 1. Lexical Retrieval (Recall)
-For every query in WebFAQ, we first retrieved the **top-200 candidate answers** from the monolingual corpus using **BM25**.
+We first retrieved the **top-200 candidate answers** for each query using **BM25** (via Pyserini).
 * **Goal:** Identify candidates with high lexical overlap (shared keywords) that are likely to be "hard" for a dense retriever to distinguish.
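As a rough illustration of this recall stage, a Pyserini BM25 lookup is only a few lines. A minimal sketch, assuming a prebuilt monolingual Lucene index (the index path, language code, query, and helper name are illustrative, not taken from the mining script):

```python
# Minimal sketch of the lexical recall stage with Pyserini.
# The index path and language code below are illustrative assumptions;
# the repository's own indexing setup is not shown here.
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("indexes/webfaq-de")  # hypothetical monolingual index
searcher.set_language("de")                     # match the corpus language analyzer

def bm25_candidates(query: str, k: int = 200) -> list[tuple[str, float]]:
    """Return the top-k (docid, BM25 score) candidates for one query."""
    hits = searcher.search(query, k=k)
    return [(hit.docid, hit.score) for hit in hits]

candidates = bm25_candidates("Wie setze ich mein Passwort zurück?")
```

Capping recall at the lexical top-200 keeps this stage cheap while bounding the cost of the reranking stage that follows.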
 
 ### 2. Semantic Reranking (Precision)
 We reranked the top-200 candidates using the state-of-the-art cross-encoder model: **[BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3)**.
 * **Goal:** Assess the true semantic relevance of each candidate.
-
-### 3. Filtering & Scoring
-We applied a rigorous filtering strategy to curate the final dataset:
-* **False Negative Removal:** Candidates with extremely high cross-encoder scores (semantic matches) were discarded to prevent "poisoning" the training data with valid answers labeled as negatives.
-* **Easy Negative Removal:** Candidates with very low scores were discarded to ensure training efficiency.
-* **Score Retention:** We retained the BGE-M3 relevance scores for every negative, enabling knowledge distillation workflows.
+* **Filtering:** We applied a rigorous filtering strategy to remove false negatives (candidates whose very high semantic scores mark them as likely valid answers) and easy negatives (candidates with very low scores).
+* **Scoring:** We retained the BGE-M3 relevance score for every negative to enable knowledge distillation (e.g., MarginMSE).
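A minimal sketch of this rerank-and-filter step, assuming the FlagEmbedding package that ships BGE-M3; the combined-score key and both thresholds are illustrative assumptions, not values from the mining script:

```python
# Minimal sketch of the rerank-and-filter stage. The thresholds and the
# choice of the combined "colbert+sparse+dense" score are illustrative
# assumptions; the repository's mining script may use different values.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

FALSE_NEG_CUTOFF = 0.9  # above this, the candidate is likely a valid answer
EASY_NEG_CUTOFF = 0.3   # below this, the candidate is too easy to be useful

def mine_hard_negatives(query: str, candidates: list[str]) -> list[tuple[str, float]]:
    """Score (query, candidate) pairs and keep only informative negatives,
    retaining each score for later distillation."""
    scores = model.compute_score([[query, c] for c in candidates])["colbert+sparse+dense"]
    return [
        (c, s)
        for c, s in zip(candidates, scores)
        if EASY_NEG_CUTOFF <= s <= FALSE_NEG_CUTOFF
    ]
```

Keeping the score alongside each surviving negative is what makes the dataset usable for distillation rather than only for contrastive training.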
 
 
 
+
+### Code & Reproduction
+You can reproduce the mining process using the provided script:
+
+```bash
+python mining_hardnegatives_bge3.py \
+    --repo-id "PaDaS-Lab/webfaq-retrieval" \
+    --output-dir "./data/distilled_data" \
+    --k-negatives 200
+```
 
 ## Dataset Structure
 
@@ -107,13 +112,3 @@ The dataset covers **20 languages** with the following sample counts:
 | **Total** | **All** | **~1,280,000** |
 
 ## Citation
-
-If you use this dataset, please cite the WebFAQ 2.0 paper:
-
-```bibtex
-@inproceedings{dinzinger2025webfaq,
-  title={WebFAQ: A Multilingual Collection of Natural QA Datasets for Dense Retrieval},
-  author={Dinzinger, Michael and Caspari, Laura and Dastidar, Kanishka Ghosh and Mitrović, Jelena and Granitzer, Michael},
-  booktitle={Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval},
-  year={2025}
-}
-```
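Because the new **Scoring** bullet in the diff calls out MarginMSE, here is a minimal sketch of how the retained BGE-M3 teacher scores would feed knowledge distillation with sentence-transformers; the record field names and the student checkpoint are illustrative assumptions, not the dataset's actual schema:

```python
# Minimal MarginMSE distillation sketch, assuming each record carries a
# query, one positive, one mined negative, and the retained BGE-M3 scores.
# Field names and the student checkpoint are illustrative assumptions.
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

rows = [  # stand-in for records streamed from the dataset
    {
        "query": "How do I reset my password?",
        "positive": "Click 'Forgot password' on the login page ...",
        "negative": "Passwords must contain at least eight characters ...",
        "positive_score": 0.93,  # retained BGE-M3 score for the positive
        "negative_score": 0.41,  # retained BGE-M3 score for the negative
    }
]

student = SentenceTransformer("distilbert-base-multilingual-cased")

# MarginMSE regresses the student's positive-negative margin onto the
# teacher's margin: label = teacher(q, pos) - teacher(q, neg).
examples = [
    InputExample(
        texts=[r["query"], r["positive"], r["negative"]],
        label=r["positive_score"] - r["negative_score"],
    )
    for r in rows
]
loader = DataLoader(examples, shuffle=True, batch_size=16)
student.fit(train_objectives=[(loader, losses.MarginMSELoss(student))], epochs=1)
```

Because MarginMSE trains on score differences, the relative ordering of the retained BGE-M3 scores matters more than their absolute calibration.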