---
license: cc-by-4.0
annotations_creators:
- NIST
task_categories:
- text-retrieval
- text-ranking
language:
- en
- zh
multilinguality:
- multilingual
pretty_name: NeuCLIRTech
size_categories:
- n<1K
task_ids:
- document-retrieval
configs:
- config_name: queries
  default: true
  data_files:
  - split: eng
    path: data/eng.tsv
  - split: zho
    path: data/zho.tsv
  format: csv
  sep: "\t"
  header: null
  names: ["id", "query"]
  dataset_info:
    features:
    - name: id
      dtype: string
    - name: query
      dtype: string
- config_name: qrels
  default: false
  data_files: data/qrels.gains.txt
  format: csv
  sep: " "
  header: null
  names: ["id", "ignore", "docid", "relevance"]
  dataset_info:
    features:
    - name: id
      dtype: string
    - name: ignore
      dtype: string
    - name: docid
      dtype: string
    - name: relevance
      dtype: int64
---
# NeuCLIRTech Topics and Queries
NeuCLIRTech is an evaluation benchmark for monolingual and cross-language retrieval over technical documents.
The document collection can be found at [neuclir/csl](https://huggingface.co/datasets/neuclir/csl).
## Supporting Tasks and Corresponding Data
NeuCLIRTech supports two tasks: Chinese monolingual retrieval and English-Chinese cross-language retrieval.
The following specifies the documents, queries, and qrels (labels) that should be used for each task.
Please report nDCG@20 for all tasks.
*We use `:` to indicate a subset within a dataset.*
| Task | Documents | Queries | Qrels |
| --- | --- | --- | --- |
| Monolingual | `neuclir/csl:csl` | `zho` split of `neuclir/tech:queries` | `neuclir/tech:qrels` |
| Cross-Language | `neuclir/csl:csl` | `eng` split of `neuclir/tech:queries` | `neuclir/tech:qrels` |
## Baseline Retrieval Results and Run Files
We also provide the run files for all baseline retrieval results reported in the NeuCLIRBench paper.
Please refer to the paper for the detailed descriptions of each model.
![image](https://cdn-uploads.huggingface.co/production/uploads/63a0c07a3c8841cfe2cd1e70/ywTDl59DIIs_9S2pSGdeU.png)
### Run Names
Please refer to the `./runs` directory in this dataset to find all the runs.
Files follow the naming scheme `{run_handle}_{task:mono/clir/mlir}_{lang}.trec`; see the task section above for details.
| Run Handle | Model Type | Model Name |
|:------------------|:-------------------|:----------------------|
| bm25 | Lexical | BM25 |
| bm25dt | Lexical | BM25 w/ DT |
| bm25qt | Lexical | BM25 w/ QT |
| milco | Bi-Encoder | MILCO |
| plaidx | Bi-Encoder | PLAID-X |
| qwen8b | Bi-Encoder | Qwen3 8B Embed |
| qwen4b | Bi-Encoder | Qwen3 4B Embed |
| qwen600m | Bi-Encoder | Qwen3 0.6B Embed |
| arctic | Bi-Encoder | Arctic-Embed Large v2 |
| splade | Bi-Encoder | SPLADEv3 |
| fusion3 | Bi-Encoder | Fusion |
| repllama | Bi-Encoder | RepLlama |
| me5large | Bi-Encoder | e5 Large |
| jinav3 | Bi-Encoder | JinaV3 |
| bgem3sparse | Bi-Encoder | BGE-M3 Sparse |
| mt5 | Pointwise Reranker | Mono-mT5XXL |
| qwen3-0.6b-rerank | Pointwise Reranker | Qwen3 0.6B Rerank |
| qwen3-4b-rerank | Pointwise Reranker | Qwen3 4B Rerank |
| qwen3-8b-rerank | Pointwise Reranker | Qwen3 8B Rerank |
| jina-rerank | Pointwise Reranker | Jina Reranker |
| searcher-rerank | Pointwise Reranker | SEARCHER Reranker |
| rank1 | Pointwise Reranker | Rank1 |
| qwq | Listwise Reranker | Rank-K (QwQ) |
| rankzephyr | Listwise Reranker | RankZephyr 7B |
| firstqwen | Listwise Reranker | FIRST Qwen3 8B |
| rq32b | Listwise Reranker | RankQwen-32B |
## Citation
TBA