# Dataset Card for CSFD movie reviews (Czech)
## Dataset Description
The dataset contains user reviews from the Czech and Slovak movie database website <https://csfd.cz>.
Each review contains text, rating, date, and basic information about the movie (or TV series).
The dataset contains 30,000 reviews in total (train+validation+test). The data is balanced: each rating has approximately the same frequency.
## Dataset Features
Each sample contains:
- `review_id`: unique string identifier of the review.
- `rating_str`: string representation of the rating (from "0/5" to "5/5")
- `rating_int`: integer representation of the rating (from 0 to 5)
- `date`: date of publishing the review (date only; no time or timezone)
- `comment_language`: language of the review (always "cs")
- `comment`: the string of the review
- `item_title`: title of the reviewed item
- `item_year`: publishing year of the item (string, can also be a range)
- `item_kind`: kind of the item - either "film" or "seriál"
- `item_genres`: list of genres of the item
- `item_directors`: list of director names of the item
- `item_screenwriters`: list of screenwriter names of the item
- `item_cast`: list of actors and actresses in the item
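The `rating_str` and `rating_int` fields encode the same value; a minimal sanity-check sketch (the record below is hypothetical, shaped like the fields above, not an actual review from the dataset):

```python
def rating_from_str(rating_str: str) -> int:
    """Parse the numerator of a rating like "4/5" into an int."""
    return int(rating_str.split("/")[0])

# Hypothetical record shaped like the fields above.
sample = {
    "review_id": "rev-000001",
    "rating_str": "4/5",
    "rating_int": 4,
    "comment_language": "cs",
    "item_kind": "film",
}
assert rating_from_str(sample["rating_str"]) == sample["rating_int"]
```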
## Dataset Source
The data was mined and sampled from the <https://csfd.cz> website.
Make sure to comply with the terms and conditions of the website operator when using the data.
# Dataset Card for "Hansel"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Splits](#data-splits)
- [Citation](#citation)
## Dataset Description
- **Homepage:** https://github.com/HITsz-TMG/Hansel
- **Paper:** https://arxiv.org/abs/2207.13005
Hansel is a high-quality human-annotated Chinese entity linking (EL) dataset, focusing on tail entities and emerging entities:
- The test set contains few-shot (FS) and zero-shot (ZS) slices, has 10K examples, and uses Wikidata as the corresponding knowledge base.
- The training and validation sets are from Wikipedia hyperlinks, useful for large-scale pretraining of Chinese EL systems.
Please see our [WSDM 2023](https://www.wsdm-conference.org/2023/) paper [**"Hansel: A Chinese Few-Shot and Zero-Shot Entity Linking Benchmark"**](https://dl.acm.org/doi/10.1145/3539597.3570418) to learn more about our dataset.
For models in the paper and our processed knowledge base, please see our [Github repository](https://github.com/HITsz-TMG/Hansel).
## Dataset Structure
### Data Instances
```
{"id": "hansel-eval-zs-1463",
 "text": "1905电影网讯 已经筹备了十余年的吉尔莫·德尔·托罗的《匹诺曹》,在上个月顺利被网飞公司买下,成为了流媒体巨头旗下的新片。近日,这部备受关注的影片确定了自己的档期:2021年。虽然具体时间未定,但影片却已经实实在在地向前迈出了一步。",
 "start": 29,
 "end": 32,
 "mention": "匹诺曹",
 "gold_id": "Q73895818",
 "source": "https://www.1905.com/news/20181107/1325389.shtml",
 "domain": "news"
}
```
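The `start`/`end` offsets are character indices into `text` that recover the `mention`; this can be checked directly on the instance above (a sketch, assuming 0-based, end-exclusive offsets; the text is shortened here):

```python
example = {
    # Shortened version of the instance text above.
    "text": "1905电影网讯 已经筹备了十余年的吉尔莫·德尔·托罗的《匹诺曹》,在上个月顺利被网飞公司买下,成为了流媒体巨头旗下的新片。",
    "start": 29,
    "end": 32,
    "mention": "匹诺曹",
}
# The mention is recovered by slicing the text with the given offsets.
assert example["text"][example["start"]:example["end"]] == example["mention"]
```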
### Data Splits
| | # Mentions | # Entities | Domain |
| ---- | ---- | ---- | ---- |
| Train | 9,879,813 | 541,058 | Wikipedia |
| Validation | 9,674 | 6,320 | Wikipedia |
| Hansel-FS | 5,260 | 2,720 | News, Social Media |
| Hansel-ZS | 4,715 | 4,046 | News, Social Media, E-books, etc.|
## Citation
If you find our dataset useful, please cite us.
```bibtex
@inproceedings{xu2022hansel,
author = {Xu, Zhenran and Shan, Zifei and Li, Yuxin and Hu, Baotian and Qin, Bing},
title = {Hansel: A Chinese Few-Shot and Zero-Shot Entity Linking Benchmark},
year = {2023},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3539597.3570418},
booktitle = {Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining},
pages = {832–840}
}
```
# Dataset Card for IMDB Kurdish
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://ai.stanford.edu/~amaas/data/sentiment/](http://ai.stanford.edu/~amaas/data/sentiment/)
- **Repository:** [https://github.com/Hrazhan/IMDB_Kurdish/](https://github.com/Hrazhan/IMDB_Kurdish/)
- **Point of Contact:** [Razhan Hameed](https://twitter.com/RazhanHameed)
- **Paper:**
- **Leaderboard:**
### Dataset Summary
Central Kurdish translation of the famous IMDB movie reviews dataset.
The dataset contains 50K highly polar movie reviews, divided into two equal classes of positive and negative reviews. We can perform binary sentiment classification using this dataset.
The availability of datasets in Kurdish, such as the IMDB movie reviews dataset, can help researchers and developers train and evaluate machine learning models for Kurdish language processing.
However, it is important to note that machine learning algorithms can only be as accurate as the data they are trained on (in this case the quality of the translation), so the quality and relevance of the dataset will affect the performance of the resulting model.
For more information about the original dataset, see: http://ai.stanford.edu/~amaas/data/sentiment/
Note: this dataset was machine-translated with Google Translate.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Central Kurdish
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
"label": 0,
"text": "فیلمێکی زۆر باش، کە سەرنج دەخاتە سەر پرسێکی زۆر گرنگ. نەخۆشی کحولی کۆرپەلە کەموکوڕییەکی زۆر جددی لە لەدایکبوونە کە بە تەواوی دەتوانرێت ڕێگری لێبکرێت. ئەگەر خێزانە زیاترەکان ئەم فیلمە ببینن، ڕەنگە منداڵی زیاتر وەک ئادەم کۆتاییان نەهاتبێت. جیمی سمیس لە یەکێک لە باشترین ڕۆڵەکانیدا نمایش دەکات تا ئێستا. ئەمە فیلمێکی نایاب و باشە کە خێزانێکی زۆر تایبەت لەبەرچاو دەگرێت و پێویستییەکی زۆر گرنگی هەیە. ئەمەش جیاواز نییە لە هەزاران خێزان کە ئەمڕۆ لە ئەمریکا هەن. منداڵان هەن کە لەگەڵ ئەم جیهانەدا خەبات دەکەن. بەڕاستی خاڵە گرنگەکە لێرەدا ئەوەیە کە دەکرا ڕێگری لە هەموو شتێک بکرێت. خەڵکی زیاتر دەبێ ئەم فیلمە ببینن و ئەوەی کە هەیەتی بە جددی وەریبگرێت. بە باشی ئەنجام دراوە، بە پەیامی گرنگ، بە شێوەیەکی بەڕێزانە مامەڵەی لەگەڵ دەکرێت."
}
```
### Data Fields
#### plain_text
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0) and `pos` (1).
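A minimal sketch of decoding the integer label into its class name (the mapping follows the field description above; the record itself is hypothetical):

```python
# Label encoding as described in the Data Fields section.
ID2LABEL = {0: "neg", 1: "pos"}

record = {"label": 0, "text": "..."}  # hypothetical record
assert ID2LABEL[record["label"]] == "neg"
```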
### Data Splits
| name |train|test|
|----------|----:|----:|
|plain_text|24903|24692|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
### Contributions
Thanks to [Razhan Hameed](https://twitter.com/RazhanHameed) for adding this dataset.
# Dataset Card for "lmqg/qa_squadshifts_synthetic"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), made for question answering-based evaluation (QAE) of question generation models, as proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The test split is the original validation set of [`lmqg/qa_squadshifts`](https://huggingface.co/datasets/lmqg/qa_squadshifts), on which the model should be evaluated.
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
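The `answers` field is a JSON feature; assuming it follows the SQuAD layout of the parent dataset (`text` plus `answer_start` lists, an assumption not stated on this card), the answer span can be verified against the context:

```python
# Hypothetical example; the SQuAD-style answers layout is an assumption.
example = {
    "id": "q-0001",
    "title": "France",
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answers": {"text": ["Paris"], "answer_start": [0]},
}
start = example["answers"]["answer_start"][0]
answer = example["answers"]["text"][0]
# The answer text should match the context at the given character offset.
assert example["context"][start:start + len(answer)] == answer
```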
### Data Splits
TBA
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
# Dataset Card for ScandiReddit
## Dataset Description
- **Repository:** <https://github.com/alexandrainst/ScandiReddit>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
### Dataset Summary
ScandiReddit is a filtered and post-processed corpus consisting of comments from [Reddit](https://reddit.com/).
All Reddit comments from December 2005 up until October 2022 were downloaded through [PushShift](https://files.pushshift.io/reddit/comments/), after which they were filtered using the FastText language detection model. Any comment that was classified as Danish (`da`), Norwegian (`no`), Swedish (`sv`) or Icelandic (`is`) with a confidence score above 70% was kept.
The resulting comments were then deduplicated, removing roughly 438,000 comments. 5,000 comments written by Reddit bots were removed, and roughly 189,000 comments belonging to inappropriate subreddits (explicit and drug-related) were also removed.
Lastly, roughly 40,000 near-duplicate comments were removed from the resulting corpus, where near-duplicate means that two comments share more than 80% of their word 5-grams.
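The near-duplicate criterion above can be sketched as follows; the exact overlap measure used by the authors is not specified, so taking the ratio over the smaller 5-gram set is an assumption:

```python
def word_ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams of a text."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """True if the two texts share more than `threshold` of their word 5-grams
    (overlap measured against the smaller set -- an assumption)."""
    ga, gb = word_ngrams(a), word_ngrams(b)
    if not ga or not gb:
        return False
    return len(ga & gb) / min(len(ga), len(gb)) > threshold

assert near_duplicate("a b c d e f g", "a b c d e f g")
assert not near_duplicate("a b c d e f g", "h i j k l m n")
```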
### Supported Tasks and Leaderboards
Training language models is the intended task for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian (`no`) and Icelandic (`is`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 2341 MB
- **Size of the generated dataset:** 3594 MB
- **Total amount of disk used:** 5935 MB
An example from the dataset looks as follows.
```
{
'doc': 'Bergen er ødelagt. Det er ikke moro mer.',
'subreddit': 'Norway',
'language': 'da',
'language_confidence': 0.7472341656684875
}
```
### Data Fields
The data fields are the same among all splits.
- `doc`: a `string` feature.
- `subreddit`: a `string` feature.
- `language`: a `string` feature.
- `language_confidence`: a `float64` feature.
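A sketch of the language/confidence filter described in the summary (threshold and language set taken from this card; the helper itself is illustrative, not the authors' code):

```python
# Languages kept by the corpus, per the dataset summary.
ALLOWED = {"da", "no", "sv", "is"}

def keep(record: dict, threshold: float = 0.70) -> bool:
    """Keep a comment classified as a Scandinavian language with
    confidence strictly above the threshold."""
    return record["language"] in ALLOWED and record["language_confidence"] > threshold

assert keep({"language": "da", "language_confidence": 0.7472341656684875})
assert not keep({"language": "en", "language_confidence": 0.99})
```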
### Language Distribution
| name | count |
|----------|---------:|
| sv | 6,967,420 |
| da | 4,965,195 |
| no | 1,340,470 |
| is | 206,689 |
| total | 13,479,774 |
### Top-50 Subreddit Distribution
| name | count |
|----------|--------:|
|sweden |4,881,483|
|Denmark |3,579,178|
|norge |1,281,655|
|svenskpolitik | 771,960|
|InfluencergossipDK | 649,910|
|swedishproblems | 339,683|
|Iceland | 183,488|
|dkfinance | 113,860|
|unket | 81,077|
|DanishEnts | 69,055|
|dankmark | 62,928|
|swedents | 58,576|
|scandinavia | 57,136|
|Allsvenskan | 56,006|
|Gothenburg | 54,395|
|stockholm | 51,016|
|ISKbets | 47,944|
|Sverige | 39,552|
|SWARJE | 34,691|
|GossipDK | 29,332|
|NorskFotball | 28,571|
|Superligaen | 23,641|
|Aarhus | 22,516|
|Svenska | 20,561|
|newsdk | 19,893|
|AskReddit | 16,672|
|copenhagen | 16,668|
|okpolarncp | 16,583|
|SwedditUniversalis | 15,990|
|Sveriges_politik | 15,058|
|intresseklubben | 13,246|
|Aktiemarknaden | 13,202|
|soccer | 12,637|
|teenagers | 10,845|
|Norway | 10,680|
|europe | 10,247|
|Matinbum | 9,792|
|oslo | 9,650|
|iksdagen | 9,232|
|Asksweddit | 8,851|
|Forsvaret | 8,641|
|Sverigesforsvarsmakt | 8,469|
|memes | 8,299|
|Danish | 8,268|
|DANMAG | 8,214|
|PewdiepieSubmissions | 7,800|
|sweddpolitik | 7,646|
|pinsamt | 7,318|
|arbetarrorelsen | 7,317|
|Ishockey | 6,824|
## Dataset Creation
### Curation Rationale
The Scandinavian languages do not have many open source social media datasets.
### Source Data
The raw Reddit data was collected through [PushShift](https://files.pushshift.io/reddit/comments/).
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY 4.0
license](https://creativecommons.org/licenses/by/4.0/).
# Dataset Card for Czech Simple Question Answering Dataset 2.0
This is a processed and filtered adaptation of an existing dataset. For the raw, larger dataset, see the `Dataset Source` section.
## Dataset Description
The data contains questions and answers based on Czech Wikipedia articles.
Each question has one or more answers and a selected part of the context as the evidence.
A majority of the answers are extractive, i.e. present in the context in the exact form. The remaining cases are:
- yes/no questions
- answers that appear in the context in almost the exact form, but with word forms changed to suit the question (declension, ...)
- answers phrased in the annotators' own words (intended to be rare, but in practice not)
All questions in the dataset are answerable from the context. A small minority of questions have multiple answers.
Sometimes this means that any of them is correct (e.g. both "Pacifik" and "Tichý oceán" are correct terms for the Pacific Ocean),
and sometimes it means that all of them together form the correct answer (e.g., Who was Leonardo da Vinci? ["painter", "engineer"]).
The total number of examples is approximately:
- 6,250 in train
- 570 in validation
- 850 in test.
## Dataset Features
Each example contains:
- `item_id`: unique string identifier of the example
- `context`: "reasonably" big chunk (string) of wikipedia article that contains the answer
- `question`: string
- `answers`: list of all answers (strings); mostly a list of length 1
- `evidence_text`: substring of context (typically one sentence) that is sufficient to answer the question
- `evidence_start`: index in context, such that `context[evidence_start:evidence_end] == evidence_text`
- `evidence_end`: index in context
- `occurences`:
  list of dictionaries with occurrences of the answer(s) in the evidence.
  Each answer was searched for in the evidence case-sensitively and with word boundaries (`\b` in regex).
  If nothing was found, the search was retried case-insensitively;
  then case-sensitively without word boundaries;
  and finally case-insensitively without word boundaries.
  This process should suppress "false positive" occurrences of the answer in the evidence.
  Each occurrence contains:
  - `start`: index in context
  - `end`: index in context
  - `text`: the answer searched for
- `url`: link to the wikipedia article
- `original_article`: original parsed wikipedia article from which the context is taken
- `question_type`: type of the question, one of: ['ABBREVIATION', 'DATETIME', 'DENOTATION', 'ENTITY', 'LOCATION', 'NUMERIC', 'ORGANIZATION', 'OTHER', 'PERSON', 'YES_NO']
- `answer_type`: type of the answer, one of: ['ABBREVIATION', 'ADJ_PHRASE', 'CLAUSE', 'DATETIME', 'ENTITY', 'LOCATION', 'NUMERIC', 'OTHER', 'PERSON', 'VERB_PHRASE']
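The escalating search used to locate the `occurences` can be reconstructed from the description above (a sketch, not the authors' code; offsets here are relative to the string being searched):

```python
import re

def find_occurrences(answer: str, evidence: str) -> list:
    """Search for `answer` in `evidence`, relaxing constraints step by step:
    word-boundary + case-sensitive, word-boundary + case-insensitive,
    no boundary + case-sensitive, no boundary + case-insensitive."""
    attempts = [
        (rf"\b{re.escape(answer)}\b", 0),
        (rf"\b{re.escape(answer)}\b", re.IGNORECASE),
        (re.escape(answer), 0),
        (re.escape(answer), re.IGNORECASE),
    ]
    for pattern, flags in attempts:
        matches = [
            {"start": m.start(), "end": m.end(), "text": answer}
            for m in re.finditer(pattern, evidence, flags)
        ]
        if matches:
            return matches
    return []

# Exact word-boundary match wins first:
assert find_occurrences("kočka", "Ta kočka spí.")[0]["start"] == 3
# Falls back to case-insensitive matching when the casing differs:
assert find_occurrences("Kočka", "ta kočka spí")[0]["start"] == 3
```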
## Dataset Source
The dataset is a preprocessed adaptation of the existing SQAD 3.0 dataset ([link to data](https://lindat.cz/repository/xmlui/handle/11234/1-3069)).
This adaptation contains (almost) the same data, converted to a more convenient format.
The data was also filtered to remove a statistical bias: in around 50% of the original dataset the answer was
contained in the first sentence of the article, likely an artifact of the data collection process.
## Citation
Cite authors of the [original dataset](https://lindat.cz/repository/xmlui/handle/11234/1-3069):
```bibtex
@misc{11234/1-3069,
title = {sqad 3.0},
author = {Medve{\v d}, Marek and Hor{\'a}k, Ale{\v s}},
url = {http://hdl.handle.net/11234/1-3069},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {{GNU} Library or "Lesser" General Public License 3.0 ({LGPL}-3.0)},
year = {2019}
}
```
# Dataset Card for BasqueGLUE
## Table of Contents
* [Table of Contents](#table-of-contents)
* [Dataset Description](#dataset-description)
* [Dataset Summary](#dataset-summary)
* [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
* [Languages](#languages)
* [Dataset Structure](#dataset-structure)
* [Data Instances](#data-instances)
* [Data Fields](#data-fields)
* [Data Splits](#data-splits)
* [Dataset Creation](#dataset-creation)
* [Curation Rationale](#curation-rationale)
* [Source Data](#source-data)
* [Annotations](#annotations)
* [Personal and Sensitive Information](#personal-and-sensitive-information)
* [Considerations for Using the Data](#considerations-for-using-the-data)
* [Social Impact of Dataset](#social-impact-of-dataset)
* [Discussion of Biases](#discussion-of-biases)
* [Other Known Limitations](#other-known-limitations)
* [Additional Information](#additional-information)
* [Dataset Curators](#dataset-curators)
* [Licensing Information](#licensing-information)
* [Citation Information](#citation-information)
* [Contributions](#contributions)
## Dataset Description
* **Repository:** <https://github.com/orai-nlp/BasqueGLUE>
* **Paper:** [BasqueGLUE: A Natural Language Understanding Benchmark for Basque](http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.172.pdf)
* **Point of Contact:** [Contact Information](https://github.com/orai-nlp/BasqueGLUE#contact-information)
### Dataset Summary
Natural Language Understanding (NLU) technology has improved significantly over the last few years, and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages.
We present BasqueGLUE, the first NLU benchmark for Basque, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. BasqueGLUE is freely available under an open license.
| Dataset | \|Train\| | \|Val\| | \|Test\| | Task | Metric | Domain |
|----------------|----------:|--------:|---------:|------------------------|:------:|-----------------|
| NERCid | 51,539 | 12,936 | 35,855 | NERC | F1 | News |
| NERCood | 64,475 | 14,945 | 14,462 | NERC | F1 | News, Wikipedia |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 | Intent classification | F1 | Dialog system |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 | Slot filling | F1 | Dialog system |
| BHTCv2 | 8,585 | 1,857 | 1,854 | Topic classification | F1 | News |
| BEC2016eu | 6,078 | 1,302 | 1,302 | Sentiment analysis | F1 | Twitter |
| VaxxStance | 864 | 206 | 312 | Stance detection | MF1* | Twitter |
| QNLIeu | 1,764 | 230 | 238 | QA/NLI | Acc | Wikipedia |
| WiCeu | 408,559 | 600 | 1,400 | WSD | Acc | Wordnet |
| EpecKorrefBin | 986 | 320 | 587 | Coreference resolution | Acc | News |
### Supported Tasks and Leaderboards
This benchmark comprises the following tasks:
#### NERCid
This dataset contains sentences from the news domain with manually annotated named entities. The data is the merge of EIEC (a dataset of a collection of news wire articles from Euskaldunon Egunkaria newspaper, (Alegria et al. 2004)), and newly annotated data from naiz.eus. The data is annotated following the BIO annotation scheme over four categories: person, organization, location, and miscellaneous.
#### NERCood
This dataset contains sentences with manually annotated named entities. The training data is the merge of EIEC (a dataset of a collection of news wire articles from Euskaldunon Egunkaria newspaper, (Alegria et al. 2004)), and newly annotated data from naiz.eus. The data is annotated following the BIO annotation scheme over four categories: person, organization, location, and miscellaneous. For validation and test sets, sentences from Wikipedia were annotated following the same annotation guidelines.
#### FMTODeu_intent
This dataset contains utterance texts and intent annotations drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The examples are annotated with one of 12 different intent classes corresponding to alarm, reminder or weather related actions.
#### FMTODeu_slot
This dataset contains utterance texts and sequence intent argument annotations designed for slot filling tasks, drawn from the manually-annotated Facebook Multilingual Task Oriented Dataset (FMTOD) (Schuster et al. 2019). Basque translated data was drawn from the datasets created for Building a Task-oriented Dialog System for languages with no training data: the Case for Basque (de Lacalle et al. 2020). The task is a sequence labelling task similar to NERC, following BIO annotation scheme over 11 categories.
#### BHTCv2
The corpus contains 12,296 news headlines (brief article descriptions) from the Basque weekly newspaper [Argia](https://www.argia.eus). Topics are classified uniquely according to twelve thematic categories.
#### BEC2016eu
The Basque Election Campaign 2016 Opinion Dataset (BEC2016eu) is a new dataset for the task of sentiment analysis, a sequence classification task, which contains tweets about the campaign for the Basque elections from 2016. The crawling was carried out during the election campaign period (2016/09/09-2016/09/23), by monitoring the main parties and their respective candidates. The tweets were manually annotated as positive, negative or neutral.
#### VaxxStance
The VaxxStance (Agerri et al., 2021) dataset originally provides texts and stance annotations for social media texts around the anti-vaccine movement. Texts are given a label indicating whether they express an AGAINST, FAVOR or NEUTRAL stance towards the topic.
#### QNLIeu
This task includes the QA dataset ElkarHizketak (Otegi et al. 2020), a low-resource conversational Question Answering (QA) dataset for Basque created by native speaker volunteers. The dataset is built on top of Wikipedia sections about popular people and organizations, and it contains around 400 dialogues and 1600 question and answer pairs. The task was adapted into a sentence-pair binary classification task, following the design of QNLI for English (Wang et al. 2019). Each question and answer pair is given a label indicating whether the answer is entailed by the question.
#### WiCeu
Word in Context or WiC (Pilehvar and Camacho-Collados 2019) is a word sense disambiguation (WSD) task, designed as a particular form of sentence pair binary classification. Given two text snippets and a polysemous word that appears in both of them (the span of the word is marked in both snippets), the task is to determine whether the word has the same sense in both sentences. This dataset is based on the EPEC-EuSemcor (Pociello et al. 2011) sense-tagged corpus.
#### EpecKorrefBin
EPEC-KORREF-Bin is a dataset derived from EPEC-KORREF (Soraluze et al. 2012), a corpus of Basque news documents with manually annotated mentions and coreference chains, which has been converted into a binary classification task. In this task, the model has to predict whether two mentions from a text, which can be pronouns, nouns or noun phrases, refer to the same entity.
#### Leaderboard
Results obtained for two BERT base models as a baseline for the Benchmark.
| | AVG | NERC | F_intent | F_slot | BHTC | BEC | Vaxx | QNLI | WiC | coref |
|------------------------------------------------------------|:-----:|:-----:|:---------:|:-------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| Model | | F1 | F1 | F1 | F1 | F1 | MF1 | acc | acc | acc |
|[BERTeus](https://huggingface.co/ixa-ehu/berteus-base-cased)| 73.23 | 81.92 | 82.52 | 74.34 | 78.26 | 69.43 | 59.30 | 74.26 | 70.71 | 68.31 |
|[ElhBERTeu](https://huggingface.co/elh-eus/ElhBERTeu) | 73.71 | 82.30 | 82.24 | 75.64 | 78.05 | 69.89 | 63.81 | 73.84 | 71.71 | 65.93 |
The results obtained on NERC are the average of in-domain and out-of-domain NERC.
### Languages
Data are available in Basque (BCP-47 `eu`)
## Dataset Structure
### Data Instances
#### NERCid/NERCood
An example of 'train' looks as follows:
```
{
"idx": 0,
"tags": ["O", "O", "O", "O", "B-ORG", "O", ...],
"tokens": ["Greba", "orokorrera", "deitu", "du", "EHk", "27rako", ...]
}
```
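A minimal BIO decoder sketch for instances like the one above (illustrative, not part of the benchmark code):

```python
def bio_spans(tokens, tags):
    """Decode BIO tags into (label, entity_text) pairs."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])  # start a new entity
        elif tag.startswith("I-") and current:
            current[1].append(token)      # continue the current entity
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]

tokens = ["Greba", "orokorrera", "deitu", "du", "EHk", "27rako"]
tags = ["O", "O", "O", "O", "B-ORG", "O"]
assert bio_spans(tokens, tags) == [("ORG", "EHk")]
```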
#### FMTODeu_intent
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "alarm/modify_alarm",
"text": "aldatu alarma 7am-tik 7pm-ra , mesedez"
}
```
#### FMTODeu_slot
An example of 'train' looks as follows:
```
{
"idx": 923,
"tags": ["O", "B-reminder/todo", "I-datetime", "I-datetime", "B-reminder/todo"],
"tokens": ["gogoratu", "zaborra", "gaur", "gauean", "ateratzea"]
}
```
#### BHTCv2
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "Gizartea",
"text": "Genero berdintasunaz, hezkuntzaz eta klase gizarteaz hamar liburu baino gehiago..."
}
```
#### BEC2016eu
An example of 'test' looks as follows:
```
{
"idx": 0,
"label": "NEU",
"text": '"Emandako hitza bete egingo dut" Urkullu\nBa galdeketa enegarrenez daramazue programan (ta zuen AHTa...)\n#I25debatea #URL"'
}
```
#### VaxxStance
An example of 'train' looks as follows:
```
{
"idx": 0,
"label": "FAVOR",
"text": "\"#COVID19 Oraingo datuak, izurriaren dinamika, txertoaren eragina eta birusaren..."
}
```
#### QNLIeu
An example of 'train' looks as follows:
```
{
"idx": 1,
"label": "not_entailment",
"question": "Zein posiziotan jokatzen du Busquets-ek?",
"sentence": "Busquets 23 partidatan izan zen konbokatua eta 2 gol sartu zituen."
}
```
#### WiCeu
An example of 'test' looks as follows:
```
{
"idx": 16,
"label": false,
"word": "udal",
"sentence1": "1a . Lekeitioko udal mugarteko Alde Historikoa Birgaitzeko Plan Berezia behin...",
"sentence2": "Diezek kritikatu egin zuen EAJk zenbait udaletan EH gobernu taldeetatik at utzi...",
"start1": 16,
"start2": 40,
"end1": 21,
"end2": 49
}
```
#### EpecKorrefBin
An example of 'train' looks as follows:
```
{
"idx": 6,
"label": false,
"text": "Isuntza da faborito nagusia Elantxobeko banderan . ISUNTZA trainerua da faborito nagusia bihar Elantxoben jokatuko den bandera irabazteko .",
"span1_text": "Elantxobeko banderan",
"span2_text": "ISUNTZA trainerua",
"span1_index": 4,
"span2_index": 8
}
```
### Data Fields
#### NERCid
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### NERCood
* `tokens`: a list of `string` features
* `tags`: a list of entity labels, with possible values including `person` (PER), `location` (LOC), `organization` (ORG), `miscellaneous` (MISC)
* `idx`: an `int32` feature
#### FMTODeu_intent
* `text`: a `string` feature
* `label`: an intent label, with possible values including:
* `alarm/cancel_alarm`
* `alarm/modify_alarm`
* `alarm/set_alarm`
* `alarm/show_alarms`
* `alarm/snooze_alarm`
* `alarm/time_left_on_alarm`
* `reminder/cancel_reminder`
* `reminder/set_reminder`
* `reminder/show_reminders`
* `weather/checkSunrise`
* `weather/checkSunset`
* `weather/find`
* `idx`: an `int32` feature
#### FMTODeu_slot
* `tokens`: a list of `string` features
* `tags`: a list of intent labels, with possible values including:
* `datetime`
* `location`
* `negation`
* `alarm/alarm_modifier`
* `alarm/recurring_period`
* `reminder/noun`
* `reminder/todo`
* `reminder/reference`
* `reminder/recurring_period`
* `weather/attribute`
* `weather/noun`
* `idx`: an `int32` feature
#### BHTCv2
* `text`: a `string` feature
* `label`: a topic label, with possible values including:
  * `Ekonomia`
  * `Euskal Herria`
  * `Euskara`
  * `Gizartea`
  * `Historia`
  * `Ingurumena`
  * `Iritzia`
  * `Komunikazioa`
  * `Kultura`
  * `Nazioartea`
  * `Politika`
  * `Zientzia`
* `idx`: an `int32` feature
#### BEC2016eu
* `text`: a `string` feature
* `label`: a polarity label, with possible values including `neutral` (NEU), `negative` (N), `positive` (P)
* `idx`: an `int32` feature
#### VaxxStance
* `text`: a `string` feature
* `label`: a stance label, with possible values including `AGAINST`, `FAVOR`, `NONE`
* `idx`: an `int32` feature
#### QNLIeu
* `question`: a `string` feature
* `sentence`: a `string` feature
* `label`: an entailment label, with possible values including `entailment`, `not_entailment`
* `idx`: an `int32` feature
#### WiCeu
* `word`: a `string` feature
* `sentence1`: a `string` feature
* `sentence2`: a `string` feature
* `label`: a `boolean` label indicating sense agreement, with possible values including `true`, `false`
* `start1`: an `int` feature indicating the character position where the word occurrence begins in the first sentence
* `start2`: an `int` feature indicating the character position where the word occurrence begins in the second sentence
* `end1`: an `int` feature indicating the character position where the word occurrence ends in the first sentence
* `end2`: an `int` feature indicating the character position where the word occurrence ends in the second sentence
* `idx`: an `int32` feature
#### EpecKorrefBin
* `text`: a `string` feature.
* `label`: a `boolean` coreference label, with possible values including `true`, `false`.
* `span1_text`: a `string` feature
* `span2_text`: a `string` feature
* `span1_index`: an `int` feature indicating token index where `span1_text` feature occurs in `text`
* `span2_index`: an `int` feature indicating token index where `span2_text` feature occurs in `text`
* `idx`: an `int32` feature
### Data Splits
| Dataset | \|Train\| | \|Val\| | \|Test\| |
|---------|--------:|------:|-------:|
| NERCid | 51,539 | 12,936 | 35,855 |
| NERCood | 64,475 | 14,945 | 14,462 |
| FMTODeu_intent | 3,418 | 1,904 | 1,087 |
| FMTODeu_slot | 19,652 | 10,791 | 5,633 |
| BHTCv2 | 8,585 | 1,857 | 1,854 |
| BEC2016eu | 6,078 | 1,302 | 1,302 |
| VaxxStance | 864 | 206 | 312 |
| QNLIeu | 1,764 | 230 | 238 |
| WiCeu | 408,559 | 600 | 1,400 |
| EpecKorrefBin | 986 | 320 | 587 |
## Dataset Creation
### Curation Rationale
We believe that BasqueGLUE is a significant contribution towards developing NLU tools in Basque, one that will facilitate technological advances for the Basque language. In order to create BasqueGLUE we took the GLUE and SuperGLUE frameworks as a reference. When possible, we re-used existing datasets for Basque, adapting them to the corresponding task formats if necessary. Additionally, BasqueGLUE also includes six new datasets that have not been published before. In total, BasqueGLUE consists of nine Basque NLU tasks and covers a wide range of tasks with different difficulties across several domains. As with the original GLUE benchmark, the training data for the tasks vary in size, which makes it possible to measure how well models transfer knowledge across tasks.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Gorka Urbizu [1], Iñaki San Vicente [1], Xabier Saralegi [1], Rodrigo Agerri [2] and Aitor Soroa [2]
Affiliation of the authors:
[1] orai NLP Technologies
[2] HiTZ Center - Ixa, University of the Basque Country UPV/EHU
### Licensing Information
Each dataset of the BasqueGLUE benchmark has its own license (since most of them are, or are derived from, pre-existing datasets). See their respective README files for details.
Here we provide a brief summary of their licenses:
| Dataset | License |
|---------|---------|
| NERCid | CC BY-NC-SA 4.0 |
| NERCood | CC BY-NC-SA 4.0 |
| FMTODeu_intent | CC BY-NC-SA 4.0 |
| FMTODeu_slot | CC BY-NC-SA 4.0 |
| BHTCv2 | CC BY-NC-SA 4.0 |
| BEC2016eu | Twitter's license + CC BY-NC-SA 4.0 |
| VaxxStance | Twitter's license + CC BY 4.0 |
| QNLIeu | CC BY-SA 4.0 |
| WiCeu | CC BY-NC-SA 4.0 |
| EpecKorrefBin | CC BY-NC-SA 4.0 |
For the rest of the files of the benchmark, including the loading and evaluation scripts, the following license applies:
Copyright (C) by Orai NLP Technologies.
This benchmark and its evaluation scripts are licensed under the Creative Commons Attribution-ShareAlike 4.0
International License (CC BY-SA 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/.
### Citation Information
```
@InProceedings{urbizu2022basqueglue,
author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},
title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {1603--1612},
abstract = {Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.},
url = {https://aclanthology.org/2022.lrec-1.172}
}
```
### Contributions
Thanks to [@richplant](https://github.com/richplant) for adding this dataset to Hugging Face.
|
true |
# Dataset Card for Czech Facebook comments
## Dataset Description
The dataset contains user comments from Facebook. Each comment contains text and a sentiment label (positive/negative/neutral).
The dataset has in total (train+validation+test) 6,600 comments. The data is balanced.
## Dataset Features
Each sample contains:
- `comment_id`: unique string identifier of the comment.
- `sentiment_str`: string representation of the sentiment - "pozitivní" / "neutrální" / "negativní"
- `sentiment_int`: integer representation of the sentiment (1=positive, 0=neutral, -1=negative)
- `comment`: the string of the comment
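As a quick sketch, the correspondence between the two sentiment fields (assumed from the descriptions above) can be expressed as:

```python
# Assumed mapping between the string and integer sentiment representations,
# based on the field descriptions above.
STR_TO_INT = {"pozitivní": 1, "neutrální": 0, "negativní": -1}
INT_TO_STR = {v: k for k, v in STR_TO_INT.items()}

def sentiment_int(sentiment_str: str) -> int:
    """Convert the Czech sentiment label to its integer representation."""
    return STR_TO_INT[sentiment_str]
```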
## Dataset Source
The data is a processed adaptation of [Facebook CZ Corpus](https://liks.fav.zcu.cz/sentiment/).
This adaptation is label-balanced.
|
false | |
false | # AutoTrain Dataset for project: feetfoot
## Dataset Description
This dataset has been automatically processed by AutoTrain for project feetfoot.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<180x320 RGB PIL image>",
"target": 0
},
{
"image": "<78x320 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['gettyimagefeet'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 97 |
| valid | 25 |
|
false | # AutoTrain Dataset for project: feets
## Dataset Description
This dataset has been automatically processed by AutoTrain for project feets.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<206x320 RGB PIL image>",
"target": 0
},
{
"image": "<173x320 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['gettyimagefeet'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 122 |
| valid | 122 |
|
false |
# Dataset Card for tox21_srp53
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://moleculenet.org/
- **Repository:** https://github.com/deepchem/deepchem/tree/master
- **Paper:** https://arxiv.org/abs/1703.00564
### Dataset Summary
`tox21_srp53` is a dataset included in [MoleculeNet](https://moleculenet.org/). It is the p53 stress-response pathway activation (SR-p53) task from Tox21.
## Dataset Structure
### Data Fields
Each split contains
* `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule
* `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule
* `target`: clinical trial toxicity (or absence of toxicity)
### Data Splits
The dataset is split into an 80/10/10 train/valid/test split using a scaffold split.
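For intuition, a scaffold split assigns whole groups of molecules sharing a scaffold to the same split, so that structurally similar molecules never straddle the train/test boundary. A simplified, self-contained sketch (the `scaffold_key` function is a placeholder for real Murcko scaffold computation, which would require a cheminformatics library such as RDKit):

```python
from collections import defaultdict

def scaffold_split(smiles, scaffold_key, frac_train=0.8, frac_valid=0.1):
    """Greedy group-aware split: whole scaffold groups are assigned to
    train, then valid, then test, so no scaffold spans two splits."""
    groups = defaultdict(list)
    for s in smiles:
        groups[scaffold_key(s)].append(s)
    # Process scaffold sets from largest to smallest, as scaffold splitters commonly do.
    ordered = sorted(groups.values(), key=len, reverse=True)
    n = len(smiles)
    train, valid, test = [], [], []
    for group in ordered:
        if len(train) + len(group) <= frac_train * n:
            train.extend(group)
        elif len(valid) + len(group) <= frac_valid * n:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test

# Toy demonstration with the first SMILES character as a stand-in scaffold key.
mols = ["CCO", "CCN", "CCC", "c1ccccc1", "OCC", "NCC"]
train, valid, test = scaffold_split(mols, scaffold_key=lambda s: s[0])
```

This is only an illustration of the group-aware idea, not the exact splitter used to produce the released splits.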
### Source Data
#### Initial Data Collection and Normalization
The data was originally generated by the Pande Group at Stanford.
### Licensing Information
This dataset was originally released under an MIT license
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.1703.00564,
doi = {10.48550/ARXIV.1703.00564},
url = {https://arxiv.org/abs/1703.00564},
author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay},
keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences},
title = {MoleculeNet: A Benchmark for Molecular Machine Learning},
publisher = {arXiv},
year = {2017},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
### Contributions
Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset. |
false | Russian dataset for ELQ (Entity Linking for Questions) model (https://github.com/facebookresearch/BLINK/tree/main/elq) |
false | New Russian dataset for ELQ (Entity Linking for Questions) model (https://github.com/facebookresearch/BLINK/tree/main/elq)
|
true |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- jampatoisnli.github.io
- **Repository:**
- https://github.com/ruth-ann/jampatoisnli
- **Paper:**
- https://arxiv.org/abs/2212.03419
- **Point of Contact:**
- Ruth-Ann Armstrong: armstrongruthanna@gmail.com
### Dataset Summary
JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois.
Many of the most-spoken low-resource languages are creoles. These languages commonly have a lexicon derived from
a major world language and a distinctive grammar reflecting the languages of the original speakers and the process
of language birth by creolization. This gives them a distinctive place in exploring the effectiveness of transfer
from large monolingual or multilingual pretrained models.
### Supported Tasks and Leaderboards
Natural language inference
### Languages
Jamaican Patois
### Data Fields
premise, hypothesis, label
### Data Splits
Train: 250
Val: 200
Test: 200
### Data set creation + Annotations
Premise collection:
97% of examples from Twitter; remaining pulled from literature and online cultural website
Hypothesis construction:
For each premise, hypothesis written by native speaker (our first author) so that pair’s classification would be E, N or C
Label validation:
Random sample of 100 sentence pairs double annotated by fluent speakers
### Social Impact of Dataset
JamPatoisNLI is a low-resource language dataset in an English-based Creole spoken in the Caribbean,
Jamaican Patois. The creation of the dataset contributes to expanding the scope of NLP research
to under-explored languages across the world.
### Dataset Curators
[@ruth-ann](https://github.com/ruth-ann)
### Citation Information
@misc{https://doi.org/10.48550/arxiv.2212.03419,
doi = {10.48550/ARXIV.2212.03419},
url = {https://arxiv.org/abs/2212.03419},
author = {Armstrong, Ruth-Ann and Hewitt, John and Manning, Christopher},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7},
title = {JamPatoisNLI: A Jamaican Patois Natural Language Inference Dataset},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
### Contributions
Thanks to Prof. Christopher Manning and John Hewitt for their contributions, guidance, facilitation and support related to the creation of this dataset.
|
false |
# Dataset Card for [opus]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
**Disclaimer.** Loading this dataset is slow, so it may not be feasible at scale. Consider using the other OPUS datasets on Hugging Face, each of which loads a specific corpus.
Loads [OPUS](https://opus.nlpl.eu/) as HuggingFace dataset. OPUS is an open parallel corpus covering 700+ languages and 1100+ datasets.
Given a `src` and `tgt` language, this repository can load *all* available parallel corpora. To my knowledge, the other OPUS datasets on Hugging Face each load only a specific corpus.
**Requirements**.
```
pip install pandas
# install my fork of `opustools` from source
git clone https://github.com/larrylawl/OpusTools.git
pip install -e OpusTools/opustools_pkg
```
**Example Usage**.
```
# args follow `opustools`: https://pypi.org/project/opustools/
from datasets import load_dataset

src = "en"
tgt = "id"
download_dir = "data"  # dir to save downloaded files
corpus = "bible-uedin"  # corpus name; leave as `None` to download all available corpora for the src-tgt pair
dataset = load_dataset("larrylawl/opus",
                       src=src,
                       tgt=tgt,
                       download_dir=download_dir,
                       corpus=corpus)
```
**Disclaimer.**
This repository is still in active development. Please open a PR if there are any issues!
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Available languages can be viewed on the [OPUS API](https://opus.nlpl.eu/opusapi/?languages=True)
## Dataset Structure
### Data Instances
```
{'src': 'In the beginning God created the heavens and the earth .',
'tgt': 'Pada mulanya , waktu Allah mulai menciptakan alam semesta'}
```
### Data Fields
```
features = {
"src": datasets.Value("string"),
"tgt": datasets.Value("string"),
}
```
### Data Splits
All data has been merged into a single train split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@larrylawl](https://larrylawl.github.io/) for adding this dataset. |
true |
# Dataset Card for reddit_one_ups_2014
## Dataset Description
- **Homepage:** https://github.com/Georeactor/reddit-one-ups
### Dataset Summary
Reddit 'one-ups' or 'clapbacks' - replies which scored higher than the original comments. This task makes one-ups easier by focusing on a set of common, often meme-like replies (e.g. 'yes', 'nope', '(͡°͜ʖ͡°)').
For commentary on predictions with a previous version of the dataset, see https://blog.goodaudience.com/can-deepclapback-learn-when-to-lol-e4a2092a8f2c
For unique / non-meme seq2seq version of this dataset, see https://huggingface.co/datasets/georeactor/reddit_one_ups_seq2seq_2014
Replies were selected from PushShift's archive of posts from 2014.
### Supported Tasks
Text classification task: finding the common reply (out of ~37) to match the parent comment text.
Text prediction task: estimating the vote score, or parent:reply ratio, of a meme response, as a measure of relevancy/cleverness of reply.
### Languages
Primarily English - includes some emoticons such as ┬─┬ノ(ಠ_ಠノ)
## Dataset Structure
### Data Instances
29,375 rows
### Data Fields
- id: the Reddit alphanumeric ID for the reply
- body: the content of the original reply
- score: the net vote score of the original reply
- parent_id: the Reddit alphanumeric ID for the parent
- author: the Reddit username of the reply
- subreddit: the Reddit community where the discussion occurred
- parent_score: the net vote score of the parent comment
- cleantext: the simplified reply (one of 37 classes)
- tstamp: the timestamp of the reply
- parent_body: the content of the original parent
## Dataset Creation
### Source Data
Reddit comments collected through PushShift.io archives for 2014.
#### Initial Data Collection and Normalization
- Removed deleted or empty comments.
- Selected only replies which scored 1.5x higher than a parent comment, where both have a positive score.
- Found the top/repeating phrases common to these one-ups/clapback comments.
- Selected only replies which had one of these top/repeating phrases.
- Made rows in PostgreSQL and output as CSV.
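The selection criteria above can be sketched as a simple predicate (a hedged reconstruction of the filter; the actual processing was done in PostgreSQL, and whether the 1.5x threshold was inclusive is an assumption):

```python
def is_one_up(parent_score: int, reply_score: int) -> bool:
    """True when both comments scored positively and the reply scored
    at least 1.5x higher than its parent (threshold assumed inclusive)."""
    return parent_score > 0 and reply_score > 0 and reply_score >= 1.5 * parent_score
```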
## Considerations for Using the Data
Comments and responses in the Reddit archives and output datasets all include NSFW and otherwise toxic language and links!
- You can use the subreddit and score columns to filter content.
- Imbalanced dataset: replies 'yes' and 'no' are more common than others.
- Overlap of labels: replies such as 'yes', 'yep', and 'yup' serve similar purposes; in other cases 'no' vs. 'nope' may be interesting.
- Timestamps: the given timestamp may help identify trends in meme replies
- Usernames: a username was included to identify the 'username checks out' meme, but this was not common enough in 2014, and the included username is from the reply.
Reddit comments are properties of Reddit and comment owners using their Terms of Service. |
true |
# SICK_PL - Sentences Involving Compositional Knowledge (Polish)
### Dataset Summary
This dataset is a manually translated version of the popular English natural language inference (NLI) corpus SICK, consisting of 10,000 sentence pairs. NLI is the task of determining whether one statement (the premise) semantically entails another statement (the hypothesis). This relation can be classified as entailment (the first sentence entails the second), neutral (the first statement does not determine the truth value of the second), or contradiction (if the first sentence is true, the second is false). Additionally, the original SICK dataset contains semantic relatedness scores for the sentence pairs as real numbers ranging from 1 to 5. When translating the corpus to Polish, we tried to stay as close as possible to the original meaning. In some cases, however, two different English sentences had an identical translation in Polish. Such instances were slightly modified in order to preserve both the meaning and the syntactic differences within the sentence pair.
### Data Instances
Example instance:
```
{
"pair_ID": "122",
"sentence_A": "Pięcioro dzieci stoi blisko siebie , a jedno dziecko ma pistolet",
"sentence_B": "Pięcioro dzieci stoi blisko siebie i żadne z nich nie ma pistoletu",
"relatedness_score": 3.7,
"entailment_judgment": "CONTRADICTION"
}
```
### Data Fields
- pair_ID: sentence pair ID
- sentence_A: sentence A
- sentence_B: sentence B
- entailment_judgment: textual entailment gold label: entailment (0), neutral (1) or contradiction (2)
- relatedness_score: semantic relatedness gold score (on a 1-5 continuous scale)
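A minimal sketch of the label encoding implied by the field description above (the integer codes are taken from the parenthesized values):

```python
# Entailment label codes as listed in the field description above.
ENTAILMENT_LABELS = {"ENTAILMENT": 0, "NEUTRAL": 1, "CONTRADICTION": 2}

def encode_judgment(judgment: str) -> int:
    """Map a textual entailment judgment to its integer class id."""
    return ENTAILMENT_LABELS[judgment.upper()]
```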
### Citation Information
```
@inproceedings{dadas-etal-2020-evaluation,
title = "Evaluation of Sentence Representations in {P}olish",
author = "Dadas, Slawomir and Pere{\l}kiewicz, Micha{\l} and Po{\'s}wiata, Rafa{\l}",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.207",
pages = "1674--1680",
language = "English",
ISBN = "979-10-95546-34-4",
}
``` |
true |
# PPC - Polish Paraphrase Corpus
### Dataset Summary
Polish Paraphrase Corpus contains 7000 manually labeled sentence pairs. The dataset was divided into training, validation and test splits. The training part includes 5000 examples, while the other parts contain 1000 examples each. The main purpose of creating such a dataset was to verify how machine learning models perform in the challenging problem of paraphrase identification, where most records contain semantically overlapping parts. Technically, this is a three-class classification task, where each record can be assigned to one of the following categories:
- Exact paraphrases - Sentence pairs that convey exactly the same information. We are interested only in the semantic meaning of the sentence, therefore this category also includes sentences that are semantically identical but, for example, have different emotional emphasis.
- Close paraphrases - Sentence pairs with similar semantic meaning. In this category we include all pairs which contain the same information, but in addition to it there may be other semantically non-overlapping parts. This category also contains context-dependent paraphrases - sentence pairs that may have the same meaning in some contexts but are different in others.
- Non-paraphrases - All other cases, including contradictory sentences and semantically unrelated sentences.
The corpus contains 2911, 1297, and 2792 examples for the above three categories, respectively. The process of annotating the dataset was preceded by an automated generation of candidate pairs, which were then manually labeled. We experimented with two popular techniques of generating possible paraphrases: backtranslation with a set of neural machine translation models and paraphrase mining using a pre-trained multilingual sentence encoder. The extracted sentence pairs are drawn from different data sources: Tatoeba, Polish news articles, Wikipedia and the Polish version of the SICK dataset. Since most of the sentence pairs obtained in this way fell into the first two categories, in order to balance the dataset, some of the examples were manually modified to convey different information. In this way, even negative examples often have high semantic overlap, making this problem difficult for machine learning models.
### Data Instances
Example instance:
```
{
"sentence_A": "Libia: lotnisko w w Trypolisie ostrzelane rakietami.",
"sentence_B": "Jedyne lotnisko w stolicy Libii - Trypolisie zostało w nocy z wtorku na środę ostrzelane rakietami.",
"label": "2"
}
```
### Data Fields
- sentence_A: first sentence text
- sentence_B: second sentence text
- label: label identifier corresponding to one of three categories
### Citation Information
```
@inproceedings{9945218,
author={Dadas, S{\l}awomir},
booktitle={2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC)},
title={Training Effective Neural Sentence Encoders from Automatically Mined Paraphrases},
year={2022},
volume={},
number={},
pages={371-378},
doi={10.1109/SMC53654.2022.9945218}
}
``` |
true |
# 8TAGS
### Dataset Summary
A Polish topic classification dataset consisting of headlines from social media posts. It contains about 50,000 sentences annotated with 8 topic labels: film, history, food, medicine, motorization, work, sport and technology. This dataset was created automatically by extracting sentences from headlines and short descriptions of articles posted on the Polish social networking site **wykop.pl**. The service allows users to annotate articles with one or more tags (categories). The dataset represents a selection of article sentences from 8 popular categories. The resulting corpus contains cleaned, tokenized, unambiguous sentences (tagged with exactly one of the selected categories) that are longer than 30 characters.
### Data Instances
Example instance:
```
{
"sentence": "Kierowca był nieco zdziwiony że podróżując sporo ponad 200 km / h zatrzymali go policjanci.",
"label": "4"
}
```
### Data Fields
- sentence: sentence text
- label: label identifier corresponding to one of 8 topics
### Citation Information
```
@inproceedings{dadas-etal-2020-evaluation,
title = "Evaluation of Sentence Representations in {P}olish",
author = "Dadas, Slawomir and Pere{\l}kiewicz, Micha{\l} and Po{\'s}wiata, Rafa{\l}",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.207",
pages = "1674--1680",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
|
false |
## Dataset Description
- **Homepage:** https://mirror.xyz/bitkevin.eth
- **Repository:** https://colab.research.google.com/drive/1EnqpDiKOVYhR0c6f4CgmDg2zqcbYZJpB#scrollTo=c1ef3d21-6e0e-46c9-a459-8a2ab856a5ca
- **Point of Contact:** Kevin Leffew – kleffew94@gmail.com
### Dataset Summary: golf-course
This dataset (bethecloud/golf-courses) includes 21 unique images of golf courses pulled from Unsplash.
The dataset is a collection of photographs taken at various golf courses around the world. The images depict a variety of scenes, including fairways, greens, bunkers, water hazards, and clubhouse facilities. The images are high resolution and have been carefully selected to provide a diverse range of visual content for fine-tuning a machine learning model.
The dataset is intended to be used in the context of the Hugging Face Dream Booth hackathon, a competition that challenges participants to build innovative applications using the Hugging Face transformers library. The submission is for the category of landscape.
Overall, this dataset provides a rich source of visual data for machine learning models looking to understand and classify elements of golf courses. Its diverse range of images and high-quality resolution make it well-suited for use in fine-tuning models for tasks such as image classification, object detection, and image segmentation.
By using the golf course images as part of their training data, participants can fine-tune their models to recognize and classify specific features and elements commonly found on golf courses. The ultimate goal after the hackathon is to pull this dataset from decentralized cloud storage (like Storj DCS), increasing its accessibility, performance, and resilience by distributing across an edge of over 17,000 uncorrelated participants.
## Example Output
![golf-acropolis.jpg](https://link.storjshare.io/juid5vc27dbajh6zyzplf4fah5xq/golf-course-output%2Fgolf-acropolis.png)
## Usage
The golf-courses dataset can be used by modifying the `instance_prompt`: `a photo of golf course`
### Languages
The language data in golf-courses is in English (BCP-47 en)
## Dataset Structure
The complete dataset consists of 21 objects.
### Parallelized download using Decentralized Object Storage (Storj DCS)
A direct download for the dataset is located at https://link.storjshare.io/juo7ynuvpe5svxj3hh454v6fnhba/golf-courses.
In the future, Storj DCS will be used to download large datasets (exceeding 1TB) in a highly parallel, highly performant, and highly economical manner (by utilizing a network of over 17,000 diverse and economically incentivized datacenter node endpoints).
### Curation Rationale
This model was created as a sample by Kevin Leffew as part of the DreamBooth Hackathon.
### Source Data
The source data for the dataset is simply pulled from Unsplash
### Licensing Information
MIT License
## Thanks to John Whitaker and Lewis Tunstall
Thanks to [John Whitaker](https://github.com/johnowhitaker) and [Lewis Tunstall](https://github.com/lewtun) for writing out and describing the initial hackathon parameters at https://huggingface.co/dreambooth-hackathon.
## Example Training Data














|
false |
<div align="center">
<img width="640" alt="keremberke/clash-of-clans-object-detection" src="https://huggingface.co/datasets/keremberke/clash-of-clans-object-detection/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['ad', 'airsweeper', 'bombtower', 'canon', 'clancastle', 'eagle', 'inferno', 'kingpad', 'mortar', 'queenpad', 'rcpad', 'scattershot', 'th13', 'wardenpad', 'wizztower', 'xbow']
```
### Number of Images
```json
{'train': 88, 'test': 13, 'valid': 24}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/clash-of-clans-object-detection", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y/dataset/5](https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y/dataset/5?ref=roboflow2huggingface)
### Citation
```
@misc{ clash-of-clans-vop4y_dataset,
title = { Clash of Clans Dataset },
type = { Open Source Dataset },
author = { Find This Base },
howpublished = { \\url{ https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y } },
url = { https://universe.roboflow.com/find-this-base/clash-of-clans-vop4y },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { feb },
note = { visited on 2023-01-18 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 30, 2022 at 4:31 PM GMT
It includes 125 images.
Clash of Clans objects are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 1920x1920 (Fit (black edges))
No image augmentation techniques were applied.
|
false | |
false | # Dataset Card for "microsoft-fluentui-emoji-512-whitebg"
[svg and their file names were converted to images and text from Microsoft's fluentui-emoji repo](https://github.com/microsoft/fluentui-emoji) |
false | # AutoTrain Dataset for project: fine_tune_table_tm2
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fine_tune_table_tm2.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "List all PO headers with a valid vendor record in database",
"target": "select * from RETAILBUYER_POHEADER P inner join RETAILBUYER_VENDOR V\non P.VENDOR_ID = V.VENDOR_ID"
},
{
"text": "List all details of PO headers which have a vendor in vendor table",
"target": "select * from RETAILBUYER_POHEADER P inner join RETAILBUYER_VENDOR V\non P.VENDOR_ID = V.VENDOR_ID"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 32 |
| valid | 17 |
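These `text`/`target` pairs can be shaped into inputs for a sequence-to-sequence model. A minimal preprocessing sketch — the prompt prefix and output field names here are illustrative choices, not part of the AutoTrain export:

```python
# Hypothetical preprocessing for text-to-SQL fine-tuning; the prefix
# "translate to SQL: " is an assumption of this sketch, not prescribed.
def to_seq2seq(sample: dict) -> dict:
    return {
        "input_text": "translate to SQL: " + sample["text"],
        "target_text": sample["target"],
    }

example = {
    "text": "List all PO headers with a valid vendor record in database",
    "target": "select * from RETAILBUYER_POHEADER P inner join "
              "RETAILBUYER_VENDOR V on P.VENDOR_ID = V.VENDOR_ID",
}
pair = to_seq2seq(example)
```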
|
false | Dataset with Prolog code / query pairs and execution results. |
false |
<h1>Afriqa Prebuilt Indices</h1>
Prebuilt Lucene Inverted Indices for preprocessed Afriqa Wikipedia Passages |
false | # AutoTrain Dataset for project: exact_data
## Dataset Description
This dataset has been automatically processed by AutoTrain for project exact_data.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "What is the maximum vendor id of vendor present in vendor table who has been issued a PO in 2021",
"target": "select max(t1.vendor_id) from RETAILBUYER_POHEADER as t2 inner join RETAILBUYER_VENDOR as t1 on t2.vendor_id = t1.vendor_id where YEAR(t2.po_issuedt) = 2021"
},
{
"text": "What are the product ids, descriptions and sum of quantities ordered for the products in purchase order line items",
"target": "select L.product_id, t2.product_desc, sum(t1.quantity) from RETAILBUYER_PRODUCT_SOURCE as t2 INNER JOIN RETAILBUYER_POLINEITEM as t1 ON t2.PRODUCT_ID = t1.PRODUCT_ID GROUP BY t1.PRODUCT_ID, t2.product_desc"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 25 |
| valid | 7 |
|
false | # AutoTrain Dataset for project: tm3_model
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tm3_model.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "List all PO headers with a valid vendor record in database",
"target": "select * from RETAILBUYER_POHEADER as t2 inner join RETAILBUYER_VENDOR as t1 on t2.VENDOR_ID = t1.VENDOR_ID"
},
{
"text": "List all details of PO headers which have a vendor in vendor table",
"target": "select * from RETAILBUYER_POHEADER as t2 inner join RETAILBUYER_VENDOR as t1 on t2.VENDOR_ID = t1.VENDOR_ID"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 49 |
| valid | 17 |
|
false |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hacker News posts up to 2015, with comments. Collected from the Google BigQuery open dataset. We didn't do any pre-processing except removing HTML tags.
### Supported Tasks and Leaderboards
Comment Generation; News analysis with comments; Other comment-based NLP tasks.
### Languages
English
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
false |
## starcraft-remastered-melee-maps
This is a dataset containing 1,815 StarCraft: Remastered melee maps, categorized by tileset.
The dataset is used to train this model: https://huggingface.co/wdcqc/starcraft-platform-terrain-32x32
The dataset is manually downloaded from Battle.net, bounding.net (scmscx.com) and broodwarmaps.com over a long period of time.
To use this dataset, extract the `staredit\\scenario.chk` file from each map file using StormLib, then refer to the [Scenario.chk format](http://www.staredit.net/wiki/index.php/Scenario.chk) to read data such as text, terrain, or resource placement from the map.
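Once extracted, `scenario.chk` itself is a flat sequence of chunks, each consisting of a 4-byte ASCII name, a little-endian `uint32` payload size, and the payload. A minimal parser sketch (the function name and the overwrite-duplicates behavior are choices of this example):

```python
import struct

def parse_chk(data: bytes) -> dict:
    """Split a scenario.chk blob into {chunk_name: payload}.

    Each record is: 4-byte ASCII name, uint32 little-endian payload size,
    then the payload. Duplicate chunk names overwrite earlier ones here
    for simplicity; real maps may contain intentional duplicates.
    """
    chunks = {}
    offset = 0
    while offset + 8 <= len(data):
        name = data[offset:offset + 4].decode("latin-1")
        (size,) = struct.unpack_from("<I", data, offset + 4)
        chunks[name] = data[offset + 8:offset + 8 + size]
        offset += 8 + size
    return chunks
```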
Alternatively, download the dataset and put it in `<My Documents>\StarCraft\Maps` to play the maps with your friends. |
true | |
true |
# Dataset Card for MDK
This dataset was created as part of the [Bertelsmann Foundation's](https://www.bertelsmann-stiftung.de/de/startseite)
[Musterdatenkatalog (MDK)](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) project. The MDK provides an overview of Open Data in German municipalities. It is intended to help municipalities, as well as data analysts and journalists, get an overview of the topics and the extent to which cities have already published data sets.
## Dataset Description
### Dataset Summary
The dataset is an annotated corpus of 1258 records based on the metadata of the datasets from [GOVDATA](https://www.govdata.de/). GovData is a data portal that aims to make cities' data available in a standardized way.
The annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through this assignment, the names of the data sets can be normalized and grouped. In total, the taxonomy consists of 250 categories. Each category is divided into two levels:
- Level 1: "Thema" (topic)

- Level 2: "Bezeichnung" (label).
The first dash divides the levels. For example:

You can find an interactive view of the taxonomy with all labels [here](https://huggingface.co/spaces/and-effect/Musterdatenkatalog).
The repository contains a small and a large version of the data. The small version is for testing purposes only. The large dataset contains all 1258 entries. Both versions are split into a training and a testing dataset. In addition, the large dataset folder contains a validation dataset that was annotated separately. We created this additional dataset for the evaluation of the algorithm; it also consists of data from GOVDATA and has the same structure as the test and training datasets.
### Languages
The language of the data is German.
## Dataset Structure
### Data Fields
| dataset | size |
|-----|-----|
| small/train | 18.96 KB |
| small/test | 6.13 KB |
| large/train | 517.77 KB |
| large/test | 118.66 KB |
An example looks as follows:
```json
{
"doc_id": "a063d3b7-4c09-421e-9849-073dc8939e76",
"title": "Dienstleistungen Alphabetisch sortiert April 2019",
"description": "CSV-Datei mit allen Dienstleistungen der Kreisverwaltung Kleve. Sortiert nach AlphabetStand 01.04.2019",
"labels_name": "Sonstiges - Sonstiges",
"labels": 166
}
```
The data fields are the same among all splits:
- doc_id (uuid): identifier for each document
- title (str): dataset title from GOVDATA
- description (str): description of the dataset
- labels_name (str): annotation with labels from taxonomy
- labels (int): labels indexed from 0 to 250
### Data Splits
| dataset_name | dataset_splits | train_size | test_size | validation_size
|-----|-----|-----|-----|-----|
| dataset_large | train, test, validation | 1009 | 249 | 101
| dataset_small | train, test | 37 | 13 | None
## Dataset Creation
The dataset was created through multiple manual annotation rounds.
### Source Data
The data comes from [GOVDATA](https://www.govdata.de/), an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments. Their aim is to make data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance, to international topics, health, education and science and technology. [GOVDATA](https://www.govdata.de/) offers a [CKAN API](https://ckan.govdata.de/) to make requests and provides metadata for each data entry.
#### Initial Data Collection and Normalization
Several sources were used for the annotation process. A sample was collected from [GOVDATA](https://www.govdata.de/) with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) that contain older data from [GOVDATA](https://www.govdata.de/). Some of the datasets from the old [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.
### Annotations
#### Annotation process
The data was annotated in four rounds plus an additional test round. In each round, a percentage of the data was allocated to all annotators in order to calculate the inter-annotator agreement using Cohen's Kappa.
The following table shows the results of the annotations:
| | **Cohens Kappa** | **Number of Annotators** | **Number of Documents** |
| ------------------ | :--------------: | ------------------------ | ----------------------- |
| **Test Round** | .77 | 6 | 50 |
| **Round 1** | .41 | 2 | 120 |
| **Round 2** | .76 | 4 | 480 |
| **Round 3** | .71 | 3 | 420 |
| **Round 4** | .87 | 2 | 416 |
| **Validation set** | - | 1 | 177 |
In addition, a validation set was generated by the dataset curators.
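For reference, Cohen's Kappa for two annotators can be computed from their label sequences over the shared documents. A minimal self-contained sketch (not the project's actual evaluation code):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's Kappa for two annotators over the same documents."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of documents with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's marginal label frequencies.
    categories = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```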
#### Who are the annotators?
Annotators are all employees from [&effect data solutions GmbH](https://www.and-effect.com/). The taxonomy as well as rules and problems in the assignment of datasets were discussed and debated, in advance of the development of the taxonomy and the annotation, in two workshops with experts and representatives of the open data community and local governments, as well as with the project members of the [Musterdatenkatalog](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) from the Bertelsmann Foundation. On this basis, the [&effect](https://www.and-effect.com/) employees were instructed in the annotation by the curators of the datasets.
## Considerations for Using the Data
The dataset for the annotation process was generated by sampling from [GOVDATA](https://www.govdata.de/) and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.
### Social Impact of Dataset
Since 2017, the German government has been promoting systematic and free access to public administration data, starting with first laws on open data in municipalities. This is intended to contribute to the development of a [knowledge society](https://www.verwaltung-innovativ.de/DE/Startseite/startseite_node.html). Categorizing the open data of cities in a standardized and detailed taxonomy supports this process of making municipal data accessible in a free, open, and structured way.
### Discussion of Biases (non-ethical)
The data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled, there is still some imbalance in the data. For example, entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Although manual selection of data was also used, no data entries were found for some concepts. However, for 95% of the concepts at least one data entry is available.
## Additional Information
### Dataset Curators
Friederike Bauer
Rahkakavee Baskaran
### Licensing Information
CC BY 4.0 |
false |
# Dataset Card for `antique`
The `antique` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=403,666
This dataset is used by: [`antique_test`](https://huggingface.co/datasets/irds/antique_test), [`antique_test_non-offensive`](https://huggingface.co/datasets/irds/antique_test_non-offensive), [`antique_train`](https://huggingface.co/datasets/irds/antique_train), [`antique_train_split200-train`](https://huggingface.co/datasets/irds/antique_train_split200-train), [`antique_train_split200-valid`](https://huggingface.co/datasets/irds/antique_train_split200-valid)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/antique', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
|
false |
# Dataset Card for `antique/test`
The `antique/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=200
- `qrels`: (relevance assessments); count=6,589
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
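Once loaded, the qrels records can be reshaped into a nested lookup, the form most IR evaluation code expects. A small self-contained sketch (the sample records below are illustrative, not taken from the dataset):

```python
from collections import defaultdict

# Illustrative qrels records in the shape shown above.
qrels_records = [
    {"query_id": "q1", "doc_id": "d1", "relevance": 3, "iteration": "0"},
    {"query_id": "q1", "doc_id": "d2", "relevance": 1, "iteration": "0"},
    {"query_id": "q2", "doc_id": "d1", "relevance": 2, "iteration": "0"},
]

# query_id -> {doc_id: relevance}
qrels_by_query = defaultdict(dict)
for record in qrels_records:
    qrels_by_query[record["query_id"]][record["doc_id"]] = record["relevance"]
```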
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
|
false |
# Dataset Card for `antique/test/non-offensive`
The `antique/test/non-offensive` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/test/non-offensive).
# Data
This dataset provides:
- `queries` (i.e., topics); count=176
- `qrels`: (relevance assessments); count=5,752
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_test_non-offensive', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_test_non-offensive', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
|
false |
# Dataset Card for `antique/train`
The `antique/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=2,426
- `qrels`: (relevance assessments); count=27,422
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
|
false |
# Dataset Card for `antique/train/split200-train`
The `antique/train/split200-train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train/split200-train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=2,226
- `qrels`: (relevance assessments); count=25,229
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_train_split200-train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_train_split200-train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
|
false |
# Dataset Card for `antique/train/split200-valid`
The `antique/train/split200-valid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train/split200-valid).
# Data
This dataset provides:
- `queries` (i.e., topics); count=200
- `qrels`: (relevance assessments); count=2,193
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_train_split200-valid', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_train_split200-valid', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
|
false |
# Dataset Card for `aquaint`
The `aquaint` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/aquaint#aquaint).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,033,461
This dataset is used by: [`aquaint_trec-robust-2005`](https://huggingface.co/datasets/irds/aquaint_trec-robust-2005)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/aquaint', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@misc{Graff2002Aquaint,
title={The AQUAINT Corpus of English News Text},
author={David Graff},
year={2002},
url={https://catalog.ldc.upenn.edu/LDC2002T31},
publisher={Linguistic Data Consortium}
}
```
|
false |
# Dataset Card for `aquaint/trec-robust-2005`
The `aquaint/trec-robust-2005` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/aquaint#aquaint/trec-robust-2005).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=37,798
- For `docs`, use [`irds/aquaint`](https://huggingface.co/datasets/irds/aquaint)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/aquaint_trec-robust-2005', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/aquaint_trec-robust-2005', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Voorhees2005Robust,
title={Overview of the TREC 2005 Robust Retrieval Track},
author={Ellen M. Voorhees},
booktitle={TREC},
year={2005}
}
@misc{Graff2002Aquaint,
title={The AQUAINT Corpus of English News Text},
author={David Graff},
year={2002},
url={https://catalog.ldc.upenn.edu/LDC2002T31},
publisher={Linguistic Data Consortium}
}
```
|
false |
# Dataset Card for `beir/arguana`
The `beir/arguana` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/arguana).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,674
- `queries` (i.e., topics); count=1,406
- `qrels`: (relevance assessments); count=1,406
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_arguana', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ...}
queries = load_dataset('irds/beir_arguana', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_arguana', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Wachsmuth2018Arguana,
author = "Wachsmuth, Henning and Syed, Shahbaz and Stein, Benno",
title = "Retrieval of the Best Counterargument without Prior Topic Knowledge",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Melbourne, Australia",
pages = "241--251",
url = "http://aclweb.org/anthology/P18-1023"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/climate-fever`
The `beir/climate-fever` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/climate-fever).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,416,593
- `queries` (i.e., topics); count=1,535
- `qrels`: (relevance assessments); count=4,681
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_climate-fever', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ...}
queries = load_dataset('irds/beir_climate-fever', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_climate-fever', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Diggelmann2020CLIMATEFEVERAD,
title={CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims},
author={T. Diggelmann and Jordan L. Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold},
journal={ArXiv},
year={2020},
volume={abs/2012.00614}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/dbpedia-entity`
The `beir/dbpedia-entity` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=4,635,922
- `queries` (i.e., topics); count=467
This dataset is used by: [`beir_dbpedia-entity_dev`](https://huggingface.co/datasets/irds/beir_dbpedia-entity_dev), [`beir_dbpedia-entity_test`](https://huggingface.co/datasets/irds/beir_dbpedia-entity_test)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_dbpedia-entity', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...}
queries = load_dataset('irds/beir_dbpedia-entity', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Hasibi2017DBpediaEntityVA,
title={DBpedia-Entity v2: A Test Collection for Entity Search},
author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan},
journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
year={2017}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/dbpedia-entity/dev`
The `beir/dbpedia-entity/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=67
- `qrels`: (relevance assessments); count=5,673
- For `docs`, use [`irds/beir_dbpedia-entity`](https://huggingface.co/datasets/irds/beir_dbpedia-entity)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_dbpedia-entity_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_dbpedia-entity_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Hasibi2017DBpediaEntityVA,
title={DBpedia-Entity v2: A Test Collection for Entity Search},
author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan},
journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
year={2017}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/dbpedia-entity/test`
The `beir/dbpedia-entity/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=400
- `qrels`: (relevance assessments); count=43,515
- For `docs`, use [`irds/beir_dbpedia-entity`](https://huggingface.co/datasets/irds/beir_dbpedia-entity)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_dbpedia-entity_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_dbpedia-entity_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Hasibi2017DBpediaEntityVA,
title={DBpedia-Entity v2: A Test Collection for Entity Search},
author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan},
journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
year={2017}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fever`
The `beir/fever` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,416,568
- `queries` (i.e., topics); count=123,142
This dataset is used by: [`beir_fever_dev`](https://huggingface.co/datasets/irds/beir_fever_dev), [`beir_fever_test`](https://huggingface.co/datasets/irds/beir_fever_test), [`beir_fever_train`](https://huggingface.co/datasets/irds/beir_fever_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_fever', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ...}
queries = load_dataset('irds/beir_fever', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
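Since the docs here carry both `title` and `text` fields, BEIR-style retrieval pipelines commonly index their concatenation as a single passage. A minimal sketch of that preprocessing step, assuming the field names shown in the docs schema above (the sample record is illustrative, not drawn from the corpus):

```python
def doc_to_passage(doc):
    """Concatenate title and body text, a common BEIR-style document representation."""
    title = (doc.get('title') or '').strip()
    text = (doc.get('text') or '').strip()
    return f"{title} {text}".strip() if title else text

# Illustrative record following the docs schema shown above.
sample_doc = {
    'doc_id': 'Fox_Broadcasting_Company',
    'title': 'Fox Broadcasting Company',
    'text': 'The Fox Broadcasting Company is an American television network.',
}
print(doc_to_passage(sample_doc))
```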
## Citation Information
```
@inproceedings{Thorne2018Fever,
title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
author = "Thorne, James and
Vlachos, Andreas and
Christodoulopoulos, Christos and
Mittal, Arpit",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1074",
doi = "10.18653/v1/N18-1074",
pages = "809--819"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fever/dev`
The `beir/fever/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=6,666
- `qrels`: (relevance assessments); count=8,079
- For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fever_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fever_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Thorne2018Fever,
title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
author = "Thorne, James and
Vlachos, Andreas and
Christodoulopoulos, Christos and
Mittal, Arpit",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1074",
doi = "10.18653/v1/N18-1074",
pages = "809--819"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fever/test`
The `beir/fever/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=6,666
- `qrels`: (relevance assessments); count=7,937
- For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fever_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fever_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Thorne2018Fever,
title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
author = "Thorne, James and
Vlachos, Andreas and
Christodoulopoulos, Christos and
Mittal, Arpit",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1074",
doi = "10.18653/v1/N18-1074",
pages = "809--819"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fever/train`
The `beir/fever/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=109,810
- `qrels`: (relevance assessments); count=140,085
- For `docs`, use [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fever_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fever_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Thorne2018Fever,
title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
author = "Thorne, James and
Vlachos, Andreas and
Christodoulopoulos, Christos and
Mittal, Arpit",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1074",
doi = "10.18653/v1/N18-1074",
pages = "809--819"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fiqa`
The `beir/fiqa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=57,638
- `queries` (i.e., topics); count=6,648
This dataset is used by: [`beir_fiqa_dev`](https://huggingface.co/datasets/irds/beir_fiqa_dev), [`beir_fiqa_test`](https://huggingface.co/datasets/irds/beir_fiqa_test), [`beir_fiqa_train`](https://huggingface.co/datasets/irds/beir_fiqa_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_fiqa', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
queries = load_dataset('irds/beir_fiqa', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Maia2018Fiqa,
title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
journal={Companion Proceedings of the The Web Conference 2018},
year={2018}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fiqa/dev`
The `beir/fiqa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=500
- `qrels`: (relevance assessments); count=1,238
- For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fiqa_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fiqa_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Maia2018Fiqa,
title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
journal={Companion Proceedings of the The Web Conference 2018},
year={2018}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fiqa/test`
The `beir/fiqa/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=648
- `qrels`: (relevance assessments); count=1,706
- For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fiqa_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fiqa_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Maia2018Fiqa,
title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
journal={Companion Proceedings of the The Web Conference 2018},
year={2018}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/fiqa/train`
The `beir/fiqa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fiqa/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=5,500
- `qrels`: (relevance assessments); count=14,166
- For `docs`, use [`irds/beir_fiqa`](https://huggingface.co/datasets/irds/beir_fiqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_fiqa_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_fiqa_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Maia2018Fiqa,
title={WWW'18 Open Challenge: Financial Opinion Mining and Question Answering},
author={Macedo Maia and S. Handschuh and A. Freitas and Brian Davis and R. McDermott and M. Zarrouk and A. Balahur},
journal={Companion Proceedings of the The Web Conference 2018},
year={2018}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/hotpotqa`
The `beir/hotpotqa` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,233,329
- `queries` (i.e., topics); count=97,852
This dataset is used by: [`beir_hotpotqa_dev`](https://huggingface.co/datasets/irds/beir_hotpotqa_dev), [`beir_hotpotqa_test`](https://huggingface.co/datasets/irds/beir_hotpotqa_test), [`beir_hotpotqa_train`](https://huggingface.co/datasets/irds/beir_hotpotqa_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_hotpotqa', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...}
queries = load_dataset('irds/beir_hotpotqa', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Yang2018Hotpotqa,
title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
author = "Yang, Zhilin and
Qi, Peng and
Zhang, Saizheng and
Bengio, Yoshua and
Cohen, William and
Salakhutdinov, Ruslan and
Manning, Christopher D.",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1259",
doi = "10.18653/v1/D18-1259",
pages = "2369--2380"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/hotpotqa/dev`
The `beir/hotpotqa/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=5,447
- `qrels`: (relevance assessments); count=10,894
- For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_hotpotqa_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_hotpotqa_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Yang2018Hotpotqa,
title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
author = "Yang, Zhilin and
Qi, Peng and
Zhang, Saizheng and
Bengio, Yoshua and
Cohen, William and
Salakhutdinov, Ruslan and
Manning, Christopher D.",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1259",
doi = "10.18653/v1/D18-1259",
pages = "2369--2380"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/hotpotqa/test`
The `beir/hotpotqa/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=7,405
- `qrels`: (relevance assessments); count=14,810
- For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_hotpotqa_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_hotpotqa_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Yang2018Hotpotqa,
title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
author = "Yang, Zhilin and
Qi, Peng and
Zhang, Saizheng and
Bengio, Yoshua and
Cohen, William and
Salakhutdinov, Ruslan and
Manning, Christopher D.",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1259",
doi = "10.18653/v1/D18-1259",
pages = "2369--2380"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/hotpotqa/train`
The `beir/hotpotqa/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/hotpotqa/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=85,000
- `qrels`: (relevance assessments); count=170,000
- For `docs`, use [`irds/beir_hotpotqa`](https://huggingface.co/datasets/irds/beir_hotpotqa)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_hotpotqa_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_hotpotqa_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Yang2018Hotpotqa,
title = "{H}otpot{QA}: A Dataset for Diverse, Explainable Multi-hop Question Answering",
author = "Yang, Zhilin and
Qi, Peng and
Zhang, Saizheng and
Bengio, Yoshua and
Cohen, William and
Salakhutdinov, Ruslan and
Manning, Christopher D.",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1259",
doi = "10.18653/v1/D18-1259",
pages = "2369--2380"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/msmarco`
The `beir/msmarco` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,841,823
- `queries` (i.e., topics); count=509,962
This dataset is used by: [`beir_msmarco_dev`](https://huggingface.co/datasets/irds/beir_msmarco_dev), [`beir_msmarco_test`](https://huggingface.co/datasets/irds/beir_msmarco_test), [`beir_msmarco_train`](https://huggingface.co/datasets/irds/beir_msmarco_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_msmarco', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
queries = load_dataset('irds/beir_msmarco', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
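For recall-oriented metrics it is common to reduce these qrels to the set of positively judged documents per query. A sketch under the qrels schema shown above, with an adjustable relevance cut-off (the sample records are illustrative):

```python
from collections import defaultdict

def judged_docs(qrels_records, min_rel=1):
    """Map each query_id to the set of doc_ids judged at or above min_rel."""
    judged = defaultdict(set)
    for rec in qrels_records:
        if rec['relevance'] >= min_rel:
            judged[rec['query_id']].add(rec['doc_id'])
    return dict(judged)

# Illustrative records following the qrels schema shown above.
sample = [
    {'query_id': '1185869', 'doc_id': '0', 'relevance': 1, 'iteration': '0'},
    {'query_id': '1185869', 'doc_id': '16', 'relevance': 0, 'iteration': '0'},
]
print(judged_docs(sample))
```

Queries with no document at or above the cut-off simply do not appear in the result.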
## Citation Information
```
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/msmarco/dev`
The `beir/msmarco/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=6,980
- `qrels`: (relevance assessments); count=7,437
- For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_msmarco_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_msmarco_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/msmarco/test`
The `beir/msmarco/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=43
- `qrels`: (relevance assessments); count=9,260
- For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_msmarco_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_msmarco_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Craswell2019TrecDl,
title={Overview of the TREC 2019 deep learning track},
author={Nick Craswell and Bhaskar Mitra and Emine Yilmaz and Daniel Campos and Ellen Voorhees},
booktitle={TREC 2019},
year={2019}
}
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/msmarco/train`
The `beir/msmarco/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/msmarco/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=502,939
- `qrels`: (relevance assessments); count=532,751
- For `docs`, use [`irds/beir_msmarco`](https://huggingface.co/datasets/irds/beir_msmarco)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_msmarco_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_msmarco_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Bajaj2016Msmarco,
title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Payal Bajaj and Daniel Campos and Nick Craswell and Li Deng and Jianfeng Gao and Xiaodong Liu and Rangan Majumder and Andrew McNamara and Bhaskar Mitra and Tri Nguyen and Mir Rosenberg and Xia Song and Alina Stoica and Saurabh Tiwary and Tong Wang},
booktitle={InCoCo@NIPS},
year={2016}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/nfcorpus/dev`
The `beir/nfcorpus/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=324
- `qrels`: (relevance assessments); count=11,385
- For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_nfcorpus_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_nfcorpus_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Boteva2016Nfcorpus,
  title = "A Full-Text Learning to Rank Dataset for Medical Information Retrieval",
author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler",
booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})",
location = "Padova, Italy",
publisher = "Springer",
year = 2016
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/nfcorpus/test`
The `beir/nfcorpus/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=323
- `qrels`: (relevance assessments); count=12,334
- For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_nfcorpus_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_nfcorpus_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Boteva2016Nfcorpus,
title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval",
author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler",
booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})",
location = "Padova, Italy",
publisher = "Springer",
year = 2016
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/nfcorpus/train`
The `beir/nfcorpus/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nfcorpus/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=2,590
- `qrels`: (relevance assessments); count=110,575
- For `docs`, use [`irds/beir_nfcorpus`](https://huggingface.co/datasets/irds/beir_nfcorpus)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_nfcorpus_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_nfcorpus_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Boteva2016Nfcorpus,
title="A Full-Text Learning to Rank Dataset for Medical Information Retrieval",
author = "Vera Boteva and Demian Gholipour and Artem Sokolov and Stefan Riezler",
booktitle = "Proceedings of the European Conference on Information Retrieval ({ECIR})",
location = "Padova, Italy",
publisher = "Springer",
year = 2016
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/nq`
The `beir/nq` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/nq).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=2,681,468
- `queries` (i.e., topics); count=3,452
- `qrels`: (relevance assessments); count=4,201
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_nq', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ...}
queries = load_dataset('irds/beir_nq', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_nq', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
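As a minimal sketch of working with the `qrels` records (shown here with hypothetical in-memory samples rather than the downloaded data, so the shapes are only assumptions), the judgment stream can be grouped into a per-query lookup for evaluation:

```python
from collections import defaultdict

# Hypothetical sample records in the shape yielded by the 'qrels' config;
# real records come from load_dataset('irds/beir_nq', 'qrels').
sample_qrels = [
    {'query_id': 'q1', 'doc_id': 'd1', 'relevance': 1, 'iteration': '0'},
    {'query_id': 'q1', 'doc_id': 'd2', 'relevance': 0, 'iteration': '0'},
    {'query_id': 'q2', 'doc_id': 'd3', 'relevance': 1, 'iteration': '0'},
]

def qrels_to_lookup(records):
    """Group qrels into {query_id: {doc_id: relevance}} for fast access."""
    lookup = defaultdict(dict)
    for rec in records:
        lookup[rec['query_id']][rec['doc_id']] = rec['relevance']
    return dict(lookup)

qrels = qrels_to_lookup(sample_qrels)
```

The same grouping works for any of the ir-datasets qrels configs on this page, since they all share the `query_id`/`doc_id`/`relevance` fields.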
## Citation Information
```
@article{Kwiatkowski2019Nq,
title = {Natural Questions: a Benchmark for Question Answering Research},
author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
year = {2019},
journal = {TACL}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/quora`
The `beir/quora` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=522,931
- `queries` (i.e., topics); count=15,000
This dataset is used by: [`beir_quora_dev`](https://huggingface.co/datasets/irds/beir_quora_dev), [`beir_quora_test`](https://huggingface.co/datasets/irds/beir_quora_test)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_quora', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
queries = load_dataset('irds/beir_quora', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/quora/dev`
The `beir/quora/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=5,000
- `qrels`: (relevance assessments); count=7,626
- For `docs`, use [`irds/beir_quora`](https://huggingface.co/datasets/irds/beir_quora)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_quora_dev', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_quora_dev', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/quora/test`
The `beir/quora/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/quora/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=10,000
- `qrels`: (relevance assessments); count=15,675
- For `docs`, use [`irds/beir_quora`](https://huggingface.co/datasets/irds/beir_quora)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_quora_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_quora_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/scifact`
The `beir/scifact` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,183
- `queries` (i.e., topics); count=1,109
This dataset is used by: [`beir_scifact_test`](https://huggingface.co/datasets/irds/beir_scifact_test), [`beir_scifact_train`](https://huggingface.co/datasets/irds/beir_scifact_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_scifact', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ...}
queries = load_dataset('irds/beir_scifact', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Wadden2020Scifact,
title = "Fact or Fiction: Verifying Scientific Claims",
author = "Wadden, David and
Lin, Shanchuan and
Lo, Kyle and
Wang, Lucy Lu and
van Zuylen, Madeleine and
Cohan, Arman and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.609",
doi = "10.18653/v1/2020.emnlp-main.609",
pages = "7534--7550"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/scifact/test`
The `beir/scifact/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=300
- `qrels`: (relevance assessments); count=339
- For `docs`, use [`irds/beir_scifact`](https://huggingface.co/datasets/irds/beir_scifact)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_scifact_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_scifact_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Wadden2020Scifact,
title = "Fact or Fiction: Verifying Scientific Claims",
author = "Wadden, David and
Lin, Shanchuan and
Lo, Kyle and
Wang, Lucy Lu and
van Zuylen, Madeleine and
Cohan, Arman and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.609",
doi = "10.18653/v1/2020.emnlp-main.609",
pages = "7534--7550"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/scifact/train`
The `beir/scifact/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/scifact/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=809
- `qrels`: (relevance assessments); count=919
- For `docs`, use [`irds/beir_scifact`](https://huggingface.co/datasets/irds/beir_scifact)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_scifact_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/beir_scifact_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Wadden2020Scifact,
title = "Fact or Fiction: Verifying Scientific Claims",
author = "Wadden, David and
Lin, Shanchuan and
Lo, Kyle and
Wang, Lucy Lu and
van Zuylen, Madeleine and
Cohan, Arman and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.609",
doi = "10.18653/v1/2020.emnlp-main.609",
pages = "7534--7550"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/trec-covid`
The `beir/trec-covid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/trec-covid).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=171,332
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=66,336
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_trec-covid', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ..., 'pubmed_id': ...}
queries = load_dataset('irds/beir_trec-covid', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ..., 'query': ..., 'narrative': ...}
qrels = load_dataset('irds/beir_trec-covid', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
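Since TREC-COVID ships graded relevance judgments, a common next step is scoring a ranked run against them. A minimal sketch (the ranking and judged documents below are hypothetical; real grades come from the `qrels` config above):

```python
def precision_at_k(ranking, qrels_for_query, k=10):
    """Fraction of the top-k ranked doc_ids judged relevant (relevance > 0)."""
    top = ranking[:k]
    hits = sum(1 for doc_id in top if qrels_for_query.get(doc_id, 0) > 0)
    return hits / k

# Hypothetical judged docs for one topic, mapping doc_id -> relevance grade.
judged = {'doc_a': 2, 'doc_b': 0, 'doc_c': 1}
p = precision_at_k(['doc_a', 'doc_b', 'doc_c', 'doc_x'], judged, k=4)
```

Here `p` is 0.5: two of the four ranked documents (`doc_a`, `doc_c`) carry a positive grade, `doc_b` is judged non-relevant, and the unjudged `doc_x` counts as non-relevant.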
## Citation Information
```
@article{Wang2020Cord19,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
@article{Voorhees2020TrecCovid,
title={TREC-COVID: Constructing a Pandemic Information Retrieval Test Collection},
author={E. Voorhees and Tasmeer Alam and Steven Bedrick and Dina Demner-Fushman and W. Hersh and Kyle Lo and Kirk Roberts and I. Soboroff and Lucy Lu Wang},
journal={ArXiv},
year={2020},
volume={abs/2005.04474}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/webis-touche2020`
The `beir/webis-touche2020` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/webis-touche2020).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=382,545
- `queries` (i.e., topics); count=49
- `qrels`: (relevance assessments); count=2,962
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_webis-touche2020', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ..., 'stance': ..., 'url': ...}
queries = load_dataset('irds/beir_webis-touche2020', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/beir_webis-touche2020', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Bondarenko2020Touche,
title={Overview of Touch{\'e} 2020: Argument Retrieval},
author={Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Christian Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
booktitle={CLEF},
year={2020}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `beir/webis-touche2020/v2`
The `beir/webis-touche2020/v2` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/webis-touche2020/v2).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=382,545
- `queries` (i.e., topics); count=49
- `qrels`: (relevance assessments); count=2,214
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_webis-touche2020_v2', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'title': ..., 'stance': ..., 'url': ...}
queries = load_dataset('irds/beir_webis-touche2020_v2', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ...}
qrels = load_dataset('irds/beir_webis-touche2020_v2', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Bondarenko2020Touche,
title={Overview of Touch{\'e} 2020: Argument Retrieval},
author={Alexander Bondarenko and Maik Fr{\"o}be and Meriem Beloucif and Lukas Gienapp and Yamen Ajjour and Alexander Panchenko and Christian Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
booktitle={CLEF},
year={2020}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
|
false |
# Dataset Card for `c4/en-noclean-tr`
The `c4/en-noclean-tr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/c4#c4/en-noclean-tr).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,063,805,381
This dataset is used by: [`c4_en-noclean-tr_trec-misinfo-2021`](https://huggingface.co/datasets/irds/c4_en-noclean-tr_trec-misinfo-2021)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/c4_en-noclean-tr', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'url': ..., 'timestamp': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `c4/en-noclean-tr/trec-misinfo-2021`
The `c4/en-noclean-tr/trec-misinfo-2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/c4#c4/en-noclean-tr/trec-misinfo-2021).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- For `docs`, use [`irds/c4_en-noclean-tr`](https://huggingface.co/datasets/irds/c4_en-noclean-tr)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/c4_en-noclean-tr_trec-misinfo-2021', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ..., 'description': ..., 'narrative': ..., 'disclaimer': ..., 'stance': ..., 'evidence': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `car/v1.5`
The `car/v1.5` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=29,678,367
This dataset is used by: [`car_v1.5_trec-y1_auto`](https://huggingface.co/datasets/irds/car_v1.5_trec-y1_auto), [`car_v1.5_trec-y1_manual`](https://huggingface.co/datasets/irds/car_v1.5_trec-y1_manual)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/car_v1.5', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Dietz2017Car,
title={{TREC CAR}: A Data Set for Complex Answer Retrieval},
author={Laura Dietz and Ben Gamari},
year={2017},
note={Version 1.5},
url={http://trec-car.cs.unh.edu}
}
```
|
false |
# Dataset Card for `car/v1.5/trec-y1/auto`
The `car/v1.5/trec-y1/auto` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5/trec-y1/auto).
# Data
This dataset provides:
- `qrels`: (relevance assessments); count=5,820
- For `docs`, use [`irds/car_v1.5`](https://huggingface.co/datasets/irds/car_v1.5)
## Usage
```python
from datasets import load_dataset
qrels = load_dataset('irds/car_v1.5_trec-y1_auto', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dietz2017TrecCar,
title={TREC Complex Answer Retrieval Overview.},
author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick},
booktitle={TREC},
year={2017}
}
@article{Dietz2017Car,
title={{TREC CAR}: A Data Set for Complex Answer Retrieval},
author={Laura Dietz and Ben Gamari},
year={2017},
note={Version 1.5},
url={http://trec-car.cs.unh.edu}
}
```
|
false |
# Dataset Card for `car/v1.5/trec-y1/manual`
The `car/v1.5/trec-y1/manual` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v1.5/trec-y1/manual).
# Data
This dataset provides:
- `qrels`: (relevance assessments); count=29,571
- For `docs`, use [`irds/car_v1.5`](https://huggingface.co/datasets/irds/car_v1.5)
## Usage
```python
from datasets import load_dataset
qrels = load_dataset('irds/car_v1.5_trec-y1_manual', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Dietz2017TrecCar,
title={TREC Complex Answer Retrieval Overview.},
author={Dietz, Laura and Verma, Manisha and Radlinski, Filip and Craswell, Nick},
booktitle={TREC},
year={2017}
}
@article{Dietz2017Car,
title={{TREC CAR}: A Data Set for Complex Answer Retrieval},
author={Laura Dietz and Ben Gamari},
year={2017},
note={Version 1.5},
url={http://trec-car.cs.unh.edu}
}
```
|
false |
# Dataset Card for `car/v2.0`
The `car/v2.0` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/car#car/v2.0).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=29,794,697
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/car_v2.0', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Dietz2017Car,
title={{TREC CAR}: A Data Set for Complex Answer Retrieval},
author={Laura Dietz and Ben Gamari},
year={2017},
note={Version 1.5},
url={http://trec-car.cs.unh.edu}
}
```
|
false |
# Dataset Card for `highwire/trec-genomics-2006`
The `highwire/trec-genomics-2006` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/highwire#highwire/trec-genomics-2006).
# Data
This dataset provides:
- `queries` (i.e., topics); count=28
- `qrels`: (relevance assessments); count=27,999
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/highwire_trec-genomics-2006', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/highwire_trec-genomics-2006', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'start': ..., 'length': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
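Unlike most qrels on this page, the TREC Genomics judgments carry `start` and `length` offsets, because relevance was assessed at the passage level. A minimal sketch of slicing the judged passage out of a document (the document text and offsets below are hypothetical; real values come from the corpus files and the `qrels` config):

```python
def extract_passage(doc_text, start, length):
    """Slice the judged passage out of the document text using the
    offsets stored in a qrels record."""
    return doc_text[start:start + length]

# Hypothetical document text and offsets for illustration only.
doc = "Gene expression profiling reveals regulatory pathways."
passage = extract_passage(doc, 5, 10)
```

Note that the real offsets index into the original source files, so they should be applied to the exact document representation they were computed against.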
## Citation Information
```
@inproceedings{Hersh2006TrecGenomics,
title={TREC 2006 Genomics Track Overview},
author={William Hersh and Aaron M. Cohen and Phoebe Roberts and Hari Krishna Rekapalli},
booktitle={TREC},
year={2006}
}
```
|
false |
# Dataset Card for `highwire/trec-genomics-2007`
The `highwire/trec-genomics-2007` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/highwire#highwire/trec-genomics-2007).
# Data
This dataset provides:
- `queries` (i.e., topics); count=36
- `qrels`: (relevance assessments); count=35,996
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/highwire_trec-genomics-2007', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/highwire_trec-genomics-2007', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'start': ..., 'length': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hersh2007TrecGenomics,
title={TREC 2007 Genomics Track Overview},
author={William Hersh and Aaron Cohen and Lynn Ruslen and Phoebe Roberts},
booktitle={TREC},
year={2007}
}
```
|
false |
# Dataset Card for `medline/2004`
The `medline/2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=3,672,808
This dataset is used by: [`medline_2004_trec-genomics-2004`](https://huggingface.co/datasets/irds/medline_2004_trec-genomics-2004), [`medline_2004_trec-genomics-2005`](https://huggingface.co/datasets/irds/medline_2004_trec-genomics-2005)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/medline_2004', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'abstract': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `medline/2004/trec-genomics-2004`
The `medline/2004/trec-genomics-2004` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004/trec-genomics-2004).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=8,268
- For `docs`, use [`irds/medline_2004`](https://huggingface.co/datasets/irds/medline_2004)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/medline_2004_trec-genomics-2004', 'queries')
for record in queries:
record # {'query_id': ..., 'title': ..., 'need': ..., 'context': ...}
qrels = load_dataset('irds/medline_2004_trec-genomics-2004', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
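Once loaded, the `qrels` records are often easier to use when grouped into the `{query_id: {doc_id: relevance}}` dict-of-dicts layout that evaluation tools such as `pytrec_eval` expect. A minimal sketch, shown with illustrative in-memory records in the same shape as the `qrels` config above (so it runs without downloading anything):

```python
from collections import defaultdict

# Illustrative records in the same shape as the `qrels` config above
# (the ids and relevance values here are made up, not real assessments).
qrels_records = [
    {'query_id': '1', 'doc_id': 'd1', 'relevance': 2, 'iteration': '0'},
    {'query_id': '1', 'doc_id': 'd2', 'relevance': 0, 'iteration': '0'},
    {'query_id': '2', 'doc_id': 'd3', 'relevance': 1, 'iteration': '0'},
]

# Group into {query_id: {doc_id: relevance}}.
qrels = defaultdict(dict)
for rec in qrels_records:
    qrels[rec['query_id']][rec['doc_id']] = rec['relevance']

print(dict(qrels))
```

The same loop works unchanged on the records yielded by `load_dataset` above.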
## Citation Information
```
@inproceedings{Hersh2004TrecGenomics,
title={TREC 2004 Genomics Track Overview},
  author={William R. Hersh and Ravi Teja Bhupatiraju and Laura Ross and Phoebe Johnson and Aaron M. Cohen and Dale F. Kraemer},
booktitle={TREC},
year={2004}
}
```
|
false |
# Dataset Card for `medline/2004/trec-genomics-2005`
The `medline/2004/trec-genomics-2005` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2004/trec-genomics-2005).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=39,958
- For `docs`, use [`irds/medline_2004`](https://huggingface.co/datasets/irds/medline_2004)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/medline_2004_trec-genomics-2005', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/medline_2004_trec-genomics-2005', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hersh2005TrecGenomics,
title={TREC 2005 Genomics Track Overview},
author={William Hersh and Aaron Cohen and Jianji Yang and Ravi Teja Bhupatiraju and Phoebe Roberts and Marti Hearst},
booktitle={TREC},
  year={2005}
}
```
|
false |
# Dataset Card for `medline/2017`
The `medline/2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=26,740,025
This dataset is used by: [`medline_2017_trec-pm-2017`](https://huggingface.co/datasets/irds/medline_2017_trec-pm-2017), [`medline_2017_trec-pm-2018`](https://huggingface.co/datasets/irds/medline_2017_trec-pm-2018)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/medline_2017', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'abstract': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `medline/2017/trec-pm-2017`
The `medline/2017/trec-pm-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017/trec-pm-2017).
# Data
This dataset provides:
- `queries` (i.e., topics); count=30
- `qrels`: (relevance assessments); count=22,642
- For `docs`, use [`irds/medline_2017`](https://huggingface.co/datasets/irds/medline_2017)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/medline_2017_trec-pm-2017', 'queries')
for record in queries:
record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ..., 'other': ...}
qrels = load_dataset('irds/medline_2017_trec-pm-2017', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
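Unlike plain-text topics, the Precision Medicine queries are structured records (`disease`, `gene`, `demographic`, `other`). A common first step is to concatenate the fields into a single free-text query for a standard retrieval pipeline. A minimal sketch with an illustrative record in the same shape as the `queries` config above (the values are made up for demonstration):

```python
# Illustrative record in the same shape as the `queries` config above
# (field values are made up for demonstration, not a real topic).
topic = {
    'query_id': '1',
    'disease': 'Liposarcoma',
    'gene': 'CDK4 Amplification',
    'demographic': '38-year-old male',
    'other': 'GERD',
}

# Concatenate the structured fields into one free-text query string,
# skipping the query_id and any empty fields.
fields = ['disease', 'gene', 'demographic', 'other']
query_text = ' '.join(topic[f] for f in fields if topic.get(f))
print(query_text)
```

Whether to include the `demographic` and `other` fields is a design choice; many TREC PM participants indexed and searched on the `disease` and `gene` fields only.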
## Citation Information
```
@inproceedings{Roberts2017TrecPm,
title={Overview of the TREC 2017 Precision Medicine Track},
author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant},
booktitle={TREC},
year={2017}
}
```
|
false |
# Dataset Card for `medline/2017/trec-pm-2018`
The `medline/2017/trec-pm-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/medline#medline/2017/trec-pm-2018).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=22,429
- For `docs`, use [`irds/medline_2017`](https://huggingface.co/datasets/irds/medline_2017)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/medline_2017_trec-pm-2018', 'queries')
for record in queries:
record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...}
qrels = load_dataset('irds/medline_2017_trec-pm-2018', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Roberts2018TrecPm,
title={Overview of the TREC 2018 Precision Medicine Track},
author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar},
booktitle={TREC},
year={2018}
}
```
|
false |
# Dataset Card for `clinicaltrials/2017/trec-pm-2017`
The `clinicaltrials/2017/trec-pm-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017/trec-pm-2017).
# Data
This dataset provides:
- `queries` (i.e., topics); count=30
- `qrels`: (relevance assessments); count=13,019
- For `docs`, use [`irds/clinicaltrials_2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/clinicaltrials_2017_trec-pm-2017', 'queries')
for record in queries:
record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ..., 'other': ...}
qrels = load_dataset('irds/clinicaltrials_2017_trec-pm-2017', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Roberts2017TrecPm,
title={Overview of the TREC 2017 Precision Medicine Track},
author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant},
booktitle={TREC},
year={2017}
}
```
|
false |
# Dataset Card for `clinicaltrials/2017/trec-pm-2018`
The `clinicaltrials/2017/trec-pm-2018` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2017/trec-pm-2018).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=14,188
- For `docs`, use [`irds/clinicaltrials_2017`](https://huggingface.co/datasets/irds/clinicaltrials_2017)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/clinicaltrials_2017_trec-pm-2018', 'queries')
for record in queries:
record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...}
qrels = load_dataset('irds/clinicaltrials_2017_trec-pm-2018', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Roberts2018TrecPm,
title={Overview of the TREC 2018 Precision Medicine Track},
author={Kirk Roberts and Dina Demner-Fushman and Ellen M. Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar},
booktitle={TREC},
year={2018}
}
```
|
false |
# Dataset Card for `clinicaltrials/2019`
The `clinicaltrials/2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2019).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=306,238
This dataset is used by: [`clinicaltrials_2019_trec-pm-2019`](https://huggingface.co/datasets/irds/clinicaltrials_2019_trec-pm-2019)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clinicaltrials_2019', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `clinicaltrials/2019/trec-pm-2019`
The `clinicaltrials/2019/trec-pm-2019` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2019/trec-pm-2019).
# Data
This dataset provides:
- `queries` (i.e., topics); count=40
- `qrels`: (relevance assessments); count=12,996
- For `docs`, use [`irds/clinicaltrials_2019`](https://huggingface.co/datasets/irds/clinicaltrials_2019)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'queries')
for record in queries:
record # {'query_id': ..., 'disease': ..., 'gene': ..., 'demographic': ...}
qrels = load_dataset('irds/clinicaltrials_2019_trec-pm-2019', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Roberts2019TrecPm,
title={Overview of the TREC 2019 Precision Medicine Track},
author={Kirk Roberts and Dina Demner-Fushman and Ellen Voorhees and William R. Hersh and Steven Bedrick and Alexander J. Lazar and Shubham Pant and Funda Meric-Bernstam},
booktitle={TREC},
year={2019}
}
```
|
false |
# Dataset Card for `clinicaltrials/2021`
The `clinicaltrials/2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=375,580
This dataset is used by: [`clinicaltrials_2021_trec-ct-2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021_trec-ct-2021), [`clinicaltrials_2021_trec-ct-2022`](https://huggingface.co/datasets/irds/clinicaltrials_2021_trec-ct-2022)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clinicaltrials_2021', 'docs')
for record in docs:
record # {'doc_id': ..., 'title': ..., 'condition': ..., 'summary': ..., 'detailed_description': ..., 'eligibility': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `clinicaltrials/2021/trec-ct-2021`
The `clinicaltrials/2021/trec-ct-2021` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2021).
# Data
This dataset provides:
- `queries` (i.e., topics); count=75
- `qrels`: (relevance assessments); count=35,832
- For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2021', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/clinicaltrials_2021_trec-ct-2021', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `clinicaltrials/2021/trec-ct-2022`
The `clinicaltrials/2021/trec-ct-2022` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clinicaltrials#clinicaltrials/2021/trec-ct-2022).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- For `docs`, use [`irds/clinicaltrials_2021`](https://huggingface.co/datasets/irds/clinicaltrials_2021)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/clinicaltrials_2021_trec-ct-2022', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `clueweb09`
The `clueweb09` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,040,859,705
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb09', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
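With over a billion documents, you will usually want to stream this corpus rather than materialize it (e.g. by passing `streaming=True` to `load_dataset`) and take a bounded slice while developing. The slicing pattern looks like this, shown with a stand-in generator so it runs without downloading anything:

```python
from itertools import islice

# Stand-in for `load_dataset('irds/clueweb09', 'docs', streaming=True)`;
# in practice that call yields record dicts shaped like the comment above.
# The doc_id format here is illustrative only.
def fake_docs():
    for i in range(1000):
        yield {'doc_id': f'clueweb09-en0000-00-{i:05d}', 'body': '...'}

# Inspect just the first few records instead of iterating the full corpus.
first_five = list(islice(fake_docs(), 5))
print([d['doc_id'] for d in first_five])
```

The same `islice` pattern applies to the real streamed dataset; iterating it exhaustively would download the entire corpus.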
|
false |
# Dataset Card for `clueweb09/ar`
The `clueweb09/ar` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/ar).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=29,192,662
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb09_ar', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `clueweb09/catb`
The `clueweb09/catb` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/catb).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=50,220,423
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb09_catb', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|
false |
# Dataset Card for `clueweb09/de`
The `clueweb09/de` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/de).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=49,814,309
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb09_de', 'docs')
for record in docs:
record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
|