---
annotations_creators:
- human-annotated
language:
- eng
license: mit
multilinguality: monolingual
task_categories:
- text-ranking
task_ids:
- document-retrieval
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: test
    num_bytes: 3754590
    num_examples: 5112
  download_size: 893410
  dataset_size: 3754590
- config_name: qrels
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: test
    num_bytes: 96592
    num_examples: 2766
  download_size: 31056
  dataset_size: 96592
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 233567
    num_examples: 2766
  download_size: 105762
  dataset_size: 233567
- config_name: top_ranked
  features:
  - name: query-id
    dtype: string
  - name: corpus-ids
    sequence: string
  splits:
  - name: test
    num_bytes: 125526
    num_examples: 2766
  download_size: 34006
  dataset_size: 125526
configs:
- config_name: corpus
  data_files:
  - split: test
    path: corpus/test-*
- config_name: qrels
  data_files:
  - split: test
    path: qrels/test-*
- config_name: queries
  data_files:
  - split: test
    path: queries/test-*
- config_name: top_ranked
  data_files:
  - split: test
    path: top_ranked/test-*
tags:
- mteb
- text
---
Paired evaluation of real-world negation in retrieval, with questions and passages. Because models tend to consistently prefer one passage of a pair over the other, each pair contains two questions that the model must both rank correctly for the negation to count as understood (hence the paired_accuracy metric; a sketch of the pairing logic follows the table below).
| Task category | t2t                             |
|---------------|---------------------------------|
| Domains       | Web                             |
| Reference     | https://github.com/orionw/NevIR |
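
The pairing logic behind paired_accuracy can be made concrete with a short sketch. This is an illustrative reimplementation rather than mteb's actual scoring code: for each pair, the model is credited only if it ranks the correct passage first for *both* of the pair's queries.

```python
import numpy as np

def paired_accuracy(sim: np.ndarray) -> float:
    """Illustrative paired accuracy for NevIR-style pairs (not mteb's code).

    sim has shape (n_pairs, 2, 2): sim[i, q, d] is the similarity of
    query q to document d within pair i, where query q's relevant
    document is document q.
    """
    # Query 0 must score document 0 higher; query 1 must score document 1 higher.
    q0_correct = sim[:, 0, 0] > sim[:, 0, 1]
    q1_correct = sim[:, 1, 1] > sim[:, 1, 0]
    # A pair counts only when both rankings are correct.
    return float(np.mean(q0_correct & q1_correct))

# Example: one pair where both queries rank their own passage first.
sim = np.array([[[0.9, 0.2],
                 [0.1, 0.8]]])
print(paired_accuracy(sim))  # 1.0
```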
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

# Fetch the task and wrap it in an evaluator.
tasks = mteb.get_tasks(tasks=["NevIR"])
evaluator = mteb.MTEB(tasks=tasks)

# YOUR_MODEL is a placeholder for a model name or model instance.
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
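
For instance, a concrete run might look like this (the model id below is only an illustration; any embedding model supported by `mteb.get_model` works):

```python
import mteb

# Illustrative model id, not a recommendation of a particular model.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
tasks = mteb.get_tasks(tasks=["NevIR"])
evaluator = mteb.MTEB(tasks=tasks)
results = evaluator.run(model, output_folder="results")
```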
To learn more about how to run models on MTEB tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
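
If you only want to inspect the raw data rather than run a full evaluation, each configuration can be loaded directly with the `datasets` library. A minimal sketch, assuming the dataset is hosted under the `mteb/NevIR` repository id (replace it with the actual id if it differs):

```python
from datasets import load_dataset

# "mteb/NevIR" is an assumed repository id; adjust if needed.
corpus = load_dataset("mteb/NevIR", "corpus", split="test")    # _id, text, title
queries = load_dataset("mteb/NevIR", "queries", split="test")  # _id, text
qrels = load_dataset("mteb/NevIR", "qrels", split="test")      # query-id, corpus-id, score

print(corpus[0]["title"], "->", queries[0]["text"])
```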
## Citation
If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.
```bibtex
@inproceedings{Weller2023NevIRNI,
  author = {Orion Weller and Dawn J Lawrie and Benjamin Van Durme},
  booktitle = {Conference of the European Chapter of the Association for Computational Linguistics},
  title = {NevIR: Negation in Neural Information Retrieval},
  url = {https://api.semanticscholar.org/CorpusID:258676146},
  year = {2023},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```
## Dataset Statistics
The following JSON contains the descriptive statistics for the task. They can also be obtained programmatically:
```python
import mteb

task = mteb.get_task("NevIR")
desc_stats = task.metadata.descriptive_stats
```
```json
{
  "test": {
    "num_samples": 7878,
    "number_of_characters": 3829988,
    "num_documents": 5112,
    "min_document_length": 95,
    "average_document_length": 712.460289514867,
    "max_document_length": 1317,
    "unique_documents": 5112,
    "num_queries": 2766,
    "min_query_length": 19,
    "average_query_length": 67.9287780187997,
    "max_query_length": 168,
    "unique_queries": 2766,
    "none_queries": 0,
    "num_relevant_docs": 2766,
    "min_relevant_docs_per_query": 1,
    "average_relevant_docs_per_query": 1.0,
    "max_relevant_docs_per_query": 1,
    "unique_relevant_docs": 2766,
    "num_instructions": null,
    "min_instruction_length": null,
    "average_instruction_length": null,
    "max_instruction_length": null,
    "unique_instructions": null,
    "num_top_ranked": null,
    "min_top_ranked_per_query": null,
    "average_top_ranked_per_query": null,
    "max_top_ranked_per_query": null
  }
}
```
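
As a rough sanity check, the corpus-level numbers above can be recomputed from the raw files. A sketch, again assuming the `mteb/NevIR` repository id; note that the exact length definition (e.g. whether title and text are concatenated) may differ from what mteb uses, so values may not match exactly:

```python
from datasets import load_dataset

# Assumed repository id; see the loading example above.
corpus = load_dataset("mteb/NevIR", "corpus", split="test")

# Document length measured here as title + text; mteb's definition may differ.
lengths = [len(f"{doc['title']} {doc['text']}") for doc in corpus]

print(len(lengths))                 # 5112 documents expected
print(min(lengths), max(lengths))   # roughly 95 and 1317
print(sum(lengths) / len(lengths))  # roughly 712.5
```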
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb).*