---
dataset_info:
- config_name: corpus
  features:
  - name: corpus_id
    dtype: string
  - name: title
    dtype: string
  - name: passage
    dtype: string
  splits:
  - name: corpus
    num_bytes: 48646896
    num_examples: 123972
  download_size: 27095806
  dataset_size: 48646896
- config_name: qrels
  features:
  - name: query_id
    dtype: string
  - name: corpus_id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 2088686
    num_examples: 79700
  - name: validation
    num_bytes: 447673
    num_examples: 17078
  - name: test
    num_bytes: 447559
    num_examples: 17079
  download_size: 1717341
  dataset_size: 2983918
- config_name: queries
  features:
  - name: query_id
    dtype: string
  - name: query_vi
    dtype: string
  - name: query_ede
    dtype: string
  splits:
  - name: train
    num_bytes: 11623909
    num_examples: 79700
  - name: validation
    num_bytes: 2486448
    num_examples: 17078
  - name: test
    num_bytes: 2497903
    num_examples: 17079
  download_size: 9864513
  dataset_size: 16608260
configs:
- config_name: corpus
  data_files:
  - split: corpus
    path: corpus/corpus-*
- config_name: qrels
  data_files:
  - split: train
    path: qrels/train-*
  - split: validation
    path: qrels/validation-*
  - split: test
    path: qrels/test-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
  - split: validation
    path: queries/validation-*
  - split: test
    path: queries/test-*
license: cc-by-4.0
multilinguality: cross-lingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- information-retrieval
- cross-lingual-retrieval
- low-resource
- ede
- vietnamese
- webfaq
- beir
- mteb
size_categories:
- 100K<n<1M
source_datasets:
- PaDaS-Lab/webfaq-retrieval
---
# EViRAL: Ede-Vietnamese Retrieval Across Languages
EViRAL is a cross-lingual information retrieval benchmark for Ede (ISO 639-3: `rad`, Glottocode: `rade1241`), a low-resource Austronesian language spoken primarily in the Central Highlands of Vietnam. The dataset pairs Ede queries with Vietnamese passages sourced from WebFAQ, forming the first IR benchmark for this language.
Queries are derived from the Vietnamese subset of WebFAQ and translated into Ede using the NIRVLab/ViEde machine translation model (Vietnamese-to-Ede). The corpus and relevance judgments (qrels) are retained from the original WebFAQ Vietnamese retrieval subset without modification.
## Dataset Structure
The dataset contains three subsets following the standard BEIR/MTEB format:
**queries** — questions in both Ede and Vietnamese, split into train, validation, and test.
| Column | Type | Description |
|---|---|---|
| query_id | string | Unique query identifier |
| query_vi | string | Original query in Vietnamese |
| query_ede | string | Machine-translated query in Ede |
**corpus** — Vietnamese FAQ passages.
| Column | Type | Description |
|---|---|---|
| corpus_id | string | Unique passage identifier |
| title | string | Passage title (may be empty) |
| passage | string | Passage text in Vietnamese |
**qrels** — relevance judgments mapping queries to relevant passages.
| Column | Type | Description |
|---|---|---|
| query_id | string | Query identifier |
| corpus_id | string | Relevant passage identifier |
| score | float | Relevance score (1 = relevant) |
### Splits
| Split | Queries | Qrels |
|---|---|---|
| train | 79,700 | 79,700 |
| validation | 17,078 | 17,078 |
| test | 17,079 | 17,079 |
| corpus | 123,972 passages | — |
The split was produced using a stratified query-disjoint strategy: each query appears in exactly one split, and stratification is performed by query character-length bucket (short / medium / long) to preserve distributional balance across splits. This design prevents data leakage through shared queries between train and evaluation sets. Split ratio: 70 / 15 / 15.
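As an illustration, the same strategy can be reproduced with scikit-learn's `train_test_split`. This is a minimal sketch; the length-bucket thresholds and random seed are assumptions for the example, not the exact values used to build EViRAL.
```python
from sklearn.model_selection import train_test_split

def length_bucket(query: str) -> str:
    # Assumed character-length thresholds for the short / medium / long buckets.
    n = len(query)
    return "short" if n < 40 else "medium" if n < 100 else "long"

def split_query_ids(query_ids, query_text_by_id, seed=42):
    buckets = [length_bucket(query_text_by_id[qid]) for qid in query_ids]
    # 70% train, 30% held out, stratified by length bucket; each query ID lands
    # in exactly one split, so train and evaluation queries are disjoint.
    train_ids, rest_ids, _, rest_buckets = train_test_split(
        query_ids, buckets, test_size=0.30, stratify=buckets, random_state=seed)
    # Split the held-out 30% evenly into validation and test (15% / 15% overall).
    val_ids, test_ids = train_test_split(
        rest_ids, test_size=0.50, stratify=rest_buckets, random_state=seed)
    return train_ids, val_ids, test_ids
```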
## Loading the Dataset
```python
from datasets import load_dataset
# Load queries (all splits)
queries = load_dataset("NIRVLab/EViRAL", "queries")
# Load corpus
corpus = load_dataset("NIRVLab/EViRAL", "corpus", split="corpus")
# Load qrels (all splits)
qrels = load_dataset("NIRVLab/EViRAL", "qrels")
# Example: access test split
test_queries = queries["test"]
test_qrels = qrels["test"]
```
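The three configs join on their ID columns. Continuing from the snippet above, cross-lingual (Ede query, Vietnamese passage) training pairs can be assembled like this:
```python
# Join the train qrels with the queries and the corpus by ID.
passage_by_id = {row["corpus_id"]: row["passage"] for row in corpus}
ede_query_by_id = {row["query_id"]: row["query_ede"] for row in queries["train"]}

train_pairs = [
    (ede_query_by_id[row["query_id"]], passage_by_id[row["corpus_id"]])
    for row in qrels["train"]
]
print(len(train_pairs), train_pairs[0])
```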
### Evaluation with BEIR
```python
from datasets import load_dataset

# Convert the test split into the plain-dict format BEIR expects:
#   corpus:  {corpus_id: {"title": ..., "text": ...}}
#   queries: {query_id: query_ede_text}
#   qrels:   {query_id: {corpus_id: score}}
corpus = {r["corpus_id"]: {"title": r["title"] or "", "text": r["passage"]}
          for r in load_dataset("NIRVLab/EViRAL", "corpus", split="corpus")}
queries = {r["query_id"]: r["query_ede"]
           for r in load_dataset("NIRVLab/EViRAL", "queries", split="test")}
qrels = {}
for r in load_dataset("NIRVLab/EViRAL", "qrels", split="test"):
    qrels.setdefault(r["query_id"], {})[r["corpus_id"]] = int(r["score"])
```
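With those dicts in place, a dense retriever can be scored using BEIR's exact-search wrapper. This is a minimal sketch following the standard BEIR quickstart; the embedding model named below is only an example, not an official baseline.
```python
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Example multilingual encoder; substitute any SentenceTransformers checkpoint.
encoder = models.SentenceBERT("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
retriever = EvaluateRetrieval(DRES(encoder, batch_size=128), score_function="cos_sim")

results = retriever.retrieve(corpus, queries)  # {query_id: {corpus_id: similarity}}
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(ndcg)
```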
### Evaluation with MTEB
```python
import mteb

# EViRAL follows the standard BEIR retrieval task format. To evaluate a model,
# wrap the corpus, queries, and qrels in a custom retrieval task (a subclass of
# AbsTaskRetrieval; see the MTEB documentation) and run it:
model = mteb.get_model("your-model-name")  # placeholder model name
# evaluation = mteb.MTEB(tasks=[EViRALRetrievalTask()])  # hypothetical custom task class
# evaluation.run(model)
```
## Construction Pipeline
1. **Source data.** Vietnamese queries, corpus, and qrels were loaded from `PaDaS-Lab/webfaq-retrieval` (vie-queries, vie-corpus, vie-qrels subsets).
2. **Quality filtering.** Queries shorter than 5 characters and exact/near-duplicate queries (by normalized MD5 hash) were removed.
3. **Translation.** Surviving queries were translated from Vietnamese to Ede using `NIRVLab/ViEde`, a fine-tuned mBART-based model (BLEU 22.8, ChrF++ 46.2 on held-out evaluation). Translation used greedy decoding (num_beams=1), a 64-token limit on source and target, and float16 precision on GPU; a sketch of this step follows the list.
4. **Stratified split.** Query IDs were divided into train / validation / test (70/15/15) using stratified shuffle split on query length bucket, ensuring no query appears in more than one split.
5. **Integrity check.** All qrels were verified to reference only valid query IDs and corpus IDs present in their respective subsets.
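For reference, here is a minimal sketch of the translation step under the settings described above. It assumes `NIRVLab/ViEde` loads with the standard `transformers` seq2seq Auto classes; mBART-family models may additionally require source/target language codes (for example via `forced_bos_token_id`), so consult the model card before reusing this verbatim.
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "NIRVLab/ViEde"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16).to("cuda").eval()

def translate_vi_to_ede(queries_vi, batch_size=64):
    """Greedy-decode Vietnamese queries into Ede with a 64-token source/target limit."""
    translations = []
    for i in range(0, len(queries_vi), batch_size):
        batch = tokenizer(queries_vi[i:i + batch_size], return_tensors="pt",
                          padding=True, truncation=True, max_length=64).to("cuda")
        with torch.no_grad():
            generated = model.generate(**batch, num_beams=1, max_length=64)
        translations.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))
    return translations
```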
## Languages
| Language | ISO 639-3 | Glottocode | Role in dataset |
|---|---|---|---|
| Vietnamese | `vie` | `viet1252` | Corpus passages, parallel queries |
| Ede (Rade) | `rad` | `rade1241` | Primary query language |
Ede (also written Êđê, Rhade, or Rade) is an Austronesian language of the Malayo-Polynesian branch spoken by approximately 400,000 people in the Central Highlands of Vietnam, primarily in Đắk Lắk and Phú Yên provinces.
## Intended Use
EViRAL is intended for:
- Benchmarking cross-lingual dense and sparse retrieval models on a low-resource language pair.
- Training and evaluating multilingual text embedding models for underrepresented languages.
- Studying the effectiveness of machine-translated queries in cross-lingual IR evaluation.
- Supporting NLP research for Ede and other Central Highlands minority languages of Vietnam.
## Limitations
**Machine-translated queries.** Ede queries were produced by `NIRVLab/ViEde` without human post-editing. Translation quality (BLEU 22.8, ChrF++ 46.2) is moderate; errors may propagate to retrieval evaluation scores. Results should be interpreted with this in mind and compared against human-translated or natively authored baselines when available.
**Single relevant passage per query.** Each query is paired with exactly one relevant passage (the answer from the originating FAQ page). Additional relevant passages on other websites are not labeled. This sparse relevance structure is shared with the original WebFAQ retrieval subsets.
**No guarantee of factual accuracy.** Passages reflect content from public FAQ pages at crawl time (Common Crawl 2022-2024). They may contain outdated, biased, or incorrect information.
**Language detection noise.** A small proportion of Vietnamese FAQ pairs may contain mixed-language content or brand names that escaped language detection in the original WebFAQ pipeline.
**Translationese effects.** Cross-lingual IR benchmarks built via machine translation may exhibit translationese artifacts, which can introduce bias toward translation-aware retrieval methods. See Artetxe et al. (2020) and related work for discussion.
## Ethical Considerations
**Data provenance.** The corpus is derived from public FAQ pages collected by Common Crawl and processed by the Web Data Commons project. Content was published by website owners with the intent of public access. Downstream users should verify compliance with the terms of use of source websites and Common Crawl.
**Low-resource language representation.** Ede is a seriously under-resourced language in NLP. This dataset is intended to contribute to, not substitute for, community-led efforts to develop language resources for Ede speakers. The authors acknowledge that machine-translated queries do not reflect native speaker usage and encourage future work involving native Ede speakers in data creation and evaluation.
**No personally identifiable information.** The dataset does not contain personally identifiable information. All content is sourced from publicly accessible FAQ pages.
**License compliance.** The source dataset `PaDaS-Lab/webfaq-retrieval` is released under CC BY 4.0. EViRAL inherits this license. Users are required to attribute the original WebFAQ dataset and this dataset when publishing results.
## Citation
If you use EViRAL in your research, please cite this dataset and the following works:
**WebFAQ (source dataset)**
```bibtex
@inproceedings{dinzinger2025webfaq,
title={WebFAQ: A Multilingual Collection of Natural Q\&A Datasets for Dense Retrieval},
author={Dinzinger, Michael and Caspari, Laura and Ghosh Dastidar, Kanishka and Mitrovi{\'c}, Jelena and Granitzer, Michael},
booktitle={Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages={3802--3811},
year={2025}
}
```
**BEIR benchmark (evaluation framework)**
```bibtex
@inproceedings{thakur2021beir,
title = {{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author = {Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhijit Bhole and Iryna Gurevych},
booktitle = {Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year = {2021},
url = {https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
## License
This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license, consistent with the source WebFAQ retrieval dataset.
## Acknowledgments
The corpus and relevance judgments are derived from the WebFAQ project (Dinzinger et al., 2025), which collected FAQ data from Common Crawl and Web Data Commons snapshots. Ede query translation was performed using the NIRVLab/ViEde model developed by the Network for Intelligent Research Vietnam (NIRVLab). The split methodology follows practices established by BEIR (Thakur et al., 2021) and MTEB (Muennighoff et al., 2022). |