---
license: cc-by-4.0
annotations_creators:
- NIST
task_categories:
- text-retrieval
- text-ranking
language:
- en
- zh
- fa
- ru
multilinguality:
- multilingual
pretty_name: NeuCLIRBench
size_categories:
- n<1K
task_ids:
- document-retrieval
configs:
- config_name: queries
default: true
data_files:
- split: eng
path: data/news.eng.tsv
- split: fas
path: data/news.fas.tsv
- split: rus
path: data/news.rus.tsv
- split: zho
path: data/news.zho.tsv
format: csv
sep: "\t"
header: null
names: ["id", "query"]
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- config_name: qrels
default: false
data_files:
- split: mlir
path: data/qrels.mlir.gains.txt
- split: fas
path: data/qrels.fas.gains.txt
- split: rus
path: data/qrels.rus.gains.txt
- split: zho
path: data/qrels.zho.gains.txt
format: csv
sep: " "
header: null
names: ["id", "ignore", "docid", "relevance"]
dataset_info:
features:
- name: id
dtype: string
- name: ignore
dtype: string
- name: docid
dtype: string
- name: relevance
      dtype: int64
---
# NeuCLIRBench Topics and Queries
NeuCLIRBench is an evaluation benchmark for monolingual, cross-language, and multilingual ad hoc retrieval.
The document collection can be found at [neuclir/neuclir1](https://huggingface.co/datasets/neuclir/neuclir1).
## Supporting Tasks and Corresponding Data
NeuCLIRBench supports three types of tasks: monolingual, cross-language, and multilingual ad hoc retrieval.
The following specifies the documents, queries, and qrels (labels) that should be used for each task.
Please report nDCG@20 for all tasks.
*We use `:` to indicate the different subsets of the dataset.*
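Because the qrels carry graded relevance gains, nDCG@20 can be computed directly. Below is a minimal sketch of the standard nDCG formulation for a single query; in practice, tools such as `ir_measures` or `pytrec_eval` are commonly used instead. The doc ids and gains in the example are hypothetical.

```python
import math

def ndcg_at_k(ranked_docids, gains, k=20):
    """Compute nDCG@k for one query.

    ranked_docids: doc ids in system-ranked order.
    gains: dict mapping docid -> graded relevance gain (from the qrels).
    """
    # DCG: gain discounted by log2(rank + 1), ranks starting at 1
    dcg = sum(
        gains.get(d, 0) / math.log2(i + 2)
        for i, d in enumerate(ranked_docids[:k])
    )
    # Ideal DCG: the same discount applied to gains sorted descending
    ideal = sorted(gains.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: "d1" is highly relevant (gain 3), "d2" marginally (gain 1)
score = ndcg_at_k(["d1", "d3", "d2"], {"d1": 3, "d2": 1})
```

Per-query scores are averaged over all queries in a split to obtain the reported metric.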
### Monolingual Retrieval (`mono`)
| Language | Documents | Queries | Qrels |
| --- | --- | --- | --- |
| English | All splits under `neuclir/neuclir1:mt_docs` | `eng` split of `neuclir/bench:queries` | `mlir` split of `neuclir/bench:qrels` |
| Persian | `fas` split of `neuclir/neuclir1:default` | `fas` split of `neuclir/bench:queries` | `fas` split of `neuclir/bench:qrels` |
| Russian | `rus` split of `neuclir/neuclir1:default` | `rus` split of `neuclir/bench:queries` | `rus` split of `neuclir/bench:qrels` |
| Chinese | `zho` split of `neuclir/neuclir1:default` | `zho` split of `neuclir/bench:queries` | `zho` split of `neuclir/bench:qrels` |
### Cross-Language Retrieval (`clir`)
| Language | Documents | Queries | Qrels |
| --- | --- | --- | --- |
| Persian | `fas` split of `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `fas` split of `neuclir/bench:qrels` |
| Russian | `rus` split of `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `rus` split of `neuclir/bench:qrels` |
| Chinese | `zho` split of `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `zho` split of `neuclir/bench:qrels` |
### Multilingual Retrieval (`mlir`)
| Language | Documents | Queries | Qrels |
| --- | --- | --- | --- |
| English | All splits under `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `mlir` split of `neuclir/bench:qrels` |
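Both file formats are simple to parse locally: queries are headerless tab-separated `id<TAB>query` lines, and qrels are headerless space-separated `id ignore docid relevance` lines, matching the schemas declared in the YAML header above. The sample ids and text below are hypothetical placeholders, not real entries from the dataset.

```python
import csv
import io

# Hypothetical sample lines in the two declared formats
queries_tsv = "200\texample query text\n"
qrels_txt = "200 0 doc123 3\n"

# Queries: tab-separated, no header -> {query id: query text}
queries = {
    row[0]: row[1]
    for row in csv.reader(io.StringIO(queries_tsv), delimiter="\t")
}

# Qrels: space-separated, no header -> {query id: {docid: gain}}
qrels = {}
for line in qrels_txt.splitlines():
    qid, _ignore, docid, gain = line.split()
    qrels.setdefault(qid, {})[docid] = int(gain)
```

The same splits can of course also be loaded through the `datasets` library using the `queries` and `qrels` configs defined in the header.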
## Baseline Retrieval Results and Run Files
We also provide run files for all baseline retrieval results reported in the NeuCLIRBench paper.
Please refer to the paper for the detailed descriptions of each model.


### Run Names
Please refer to the `./runs` directory in this dataset to find all the runs.
Files follow the naming scheme `{run_handle}_{task}_{lang}.trec`, where `task` is one of `mono`, `clir`, or `mlir`; see the task section above for details.
| Run Handle | Model Type | Model Name |
|:------------------|:-------------------|:----------------------|
| bm25 | Lexical | BM25 |
| bm25dt | Lexical | BM25 w/ DT |
| bm25qt | Lexical | BM25 w/ QT |
| milco | Bi-Encoder | MILCO |
| plaidx | Bi-Encoder | PLAID-X |
| qwen8b | Bi-Encoder | Qwen3 8B Embed |
| qwen4b | Bi-Encoder | Qwen3 4B Embed |
| qwen600m | Bi-Encoder | Qwen3 0.6B Embed |
| arctic | Bi-Encoder | Arctic-Embed Large v2 |
| splade | Bi-Encoder | SPLADEv3 |
| fusion3 | Bi-Encoder | Fusion |
| repllama | Bi-Encoder | RepLlama |
| me5large | Bi-Encoder | e5 Large |
| jinav3 | Bi-Encoder | JinaV3 |
| bgem3sparse | Bi-Encoder | BGE-M3 Sparse |
| mt5 | Pointwise Reranker | Mono-mT5XXL |
| qwen3-0.6b-rerank | Pointwise Reranker | Qwen3 0.6B Rerank |
| qwen3-4b-rerank | Pointwise Reranker | Qwen3 4B Rerank |
| qwen3-8b-rerank | Pointwise Reranker | Qwen3 8B Rerank |
| jina-rerank | Pointwise Reranker | Jina Reranker |
| searcher-rerank | Pointwise Reranker | SEARCHER Reranker |
| rank1 | Pointwise Reranker | Rank1 |
| qwq | Listwise Reranker | Rank-K (QwQ) |
| rankzephyr | Listwise Reranker | RankZephyr 7B |
| firstqwen | Listwise Reranker | FIRST Qwen3 8B |
| rq32b | Listwise Reranker | RankQwen-32B |
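Assuming the `.trec` files use the standard six-column TREC run layout (`qid Q0 docid rank score tag`), they can be read with a short helper like the sketch below; the run lines in the example are hypothetical.

```python
from collections import defaultdict

def read_trec_run(lines):
    """Parse TREC run lines ('qid Q0 docid rank score tag') into
    a dict of query id -> doc ids sorted by descending score."""
    scores = defaultdict(dict)
    for line in lines:
        qid, _q0, docid, _rank, score, _tag = line.split()
        scores[qid][docid] = float(score)
    return {
        qid: [d for d, _ in sorted(docs.items(), key=lambda kv: -kv[1])]
        for qid, docs in scores.items()
    }

# Hypothetical run lines following the naming scheme above
run = read_trec_run([
    "200 Q0 docA 1 9.5 bm25_mono_eng",
    "200 Q0 docB 2 7.1 bm25_mono_eng",
])
```

The resulting rankings can be scored against the corresponding qrels split with any standard nDCG implementation.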
## Citation
```bibtex
@article{neuclirbench,
  title={NeuCLIRBench: A Modern Evaluation Collection for Monolingual, Cross-Language, and Multilingual Information Retrieval},
  author={Dawn Lawrie and James Mayfield and Eugene Yang and Andrew Yates and Sean MacAvaney and Ronak Pradeep and Scott Miller and Paul McNamee and Luca Soldaini},
  year={2025},
  eprint={2511.14758},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2511.14758},
}
```