id: string (length 2 to 115)
private: bool (1 class)
tags: list
description: string (length 0 to 5.93k)
downloads: int64 (0 to 1.14M)
likes: int64 (0 to 1.79k)
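The six-column schema above could be modeled as a simple record type. This is an illustrative sketch, not part of the dump itself: the field names follow the column labels, and the example values are taken from the `openai_humaneval` row further down (its tags list is abbreviated and its description paraphrased here).

```python
from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    """One row of the listing: a dataset's id, visibility, tags, and stats."""
    id: str                                        # 2 to 115 characters in this dump
    private: bool                                  # always False here (1 class)
    tags: list[str] = field(default_factory=list)  # task/language/license labels
    description: str = ""                          # 0 to 5.93k characters
    downloads: int = 0                             # up to ~1.14M
    likes: int = 0                                 # up to ~1.79k


# Example row, abbreviated from the openai_humaneval entry below.
row = DatasetRecord(
    id="openai_humaneval",
    private=False,
    tags=["task_categories:text2text-generation", "license:mit"],
    description="164 handcrafted programming challenges with unit tests.",
    downloads=38002,
    likes=35,
)
print(row.id, row.downloads, row.likes)
```

A record type like this makes range checks against the header (e.g. id length, non-negative counts) straightforward to express.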
nell
false
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "task_ids:fact-checking-retrieval", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100M<n<1B", "size_categories:10M<n<100M", "size_categories:1M<n<10M",...
This dataset provides version 1115 of the beliefs extracted by CMU's Never Ending Language Learner (NELL) and version 1110 of the candidate beliefs extracted by NELL. See http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information extraction system that attempts to read the ClueWeb09 corpus of 500 million web pages (http:/...
665
2
neural_code_search
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "arxiv:...
Neural-Code-Search-Evaluation-Dataset presents an evaluation dataset consisting of natural language query and code snippet pairs and a search corpus consisting of code snippets collected from the most popular Android repositories on GitHub.
1,394
4
news_commentary
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "language:cs", "language:de", "language:en", "language:es", "language:fr", "language:it", "langua...
A parallel corpus of News Commentaries provided by WMT for training SMT. The source is taken from CASMACAT: http://www.casmacat.eu/corpus/news-commentary.html 12 languages, 63 bitexts total number of files: 61,928 total number of tokens: 49.66M total number of sentence fragments: 1.93M
8,609
9
newsgroup
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text cluster...
12,001
4
newsph
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:fil",...
Large-scale dataset of Filipino news articles. Sourced for the NewsPH-NLI Project (Cruz et al., 2020).
291
1
newsph_nli
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:tl", "license:unknown", "arxiv:2010.11574" ]
First benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed by exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing.
269
0
newspop
false
[ "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "social-media-shares-prediction", "arxiv:18...
This is a large data set of news items and their respective social feedback on multiple platforms: Facebook, Google+ and LinkedIn. The collected data relates to a period of 8 months, between November 2015 and July 2016, accounting for about 100,000 news items on four different topics: economy, microsoft, obama and pale...
963
2
newsqa
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit" ]
NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.
748
2
newsroom
false
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:other" ]
NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications. Dataset features include: - text: Input news text. - summary: Summary for the news. And additional features: - t...
384
4
nkjp-ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:gpl-3.0" ]
The NKJP-NER is based on a human-annotated part of the National Corpus of Polish (NKJP). We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.
271
0
nli_tr
false
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:expert-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:...
The Natural Language Inference in Turkish (NLI-TR) is a set of two large-scale datasets obtained by translating the foundational NLI corpora (SNLI and MNLI) using Amazon Translate.
488
4
nlu_evaluation_data
false
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "li...
Raw part of the NLU Evaluation Data. It contains 25,715 non-empty examples (the original dataset has 25,716) from 68 unique intents belonging to 18 scenarios.
877
4
norec
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:nb", "language:nn", "language:no", "license:cc-by-nc-...
NoReC was created as part of the SANT project (Sentiment Analysis for Norwegian Text), a collaboration between the Language Technology Group (LTG) at the Department of Informatics at the University of Oslo, the Norwegian Broadcasting Corporation (NRK), Schibsted Media Group and Aller Media. This first release of the co...
137
0
norne
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:no", "license:other", "arxiv:1911.12146" ]
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or...
153
1
norwegian_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:no", "license:unknown" ]
Named Entity Recognition dataset for Norwegian. It is a version of the Universal Dependency (UD) Treebank for both Bokmål and Nynorsk (UDN) in which all proper nouns have been tagged with their type according to the NER tagging scheme. UDN is a version of the Norwegian Dependency Treebank converted into the UD scheme.
219
0
nq_open
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|natural_questions", "language:en", "license:cc-by-sa-3.0" ]
The NQ-Open task, introduced by Lee et al., 2019, is an open-domain question answering benchmark that is derived from Natural Questions. The goal is to predict an English answer string for an input English question. All questions can be answered using the contents of English Wikipedia.
3,335
0
nsmc
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ko", "license:cc-by-2.0" ]
This is a movie review dataset in the Korean language. Reviews were scraped from Naver movies. The dataset construction is based on the method noted in Large movie review dataset from Maas et al., 2011.
2,176
3
numer_sense
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:slot-filling", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other", "language:en", "license:mit", "arxiv:...
NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. We propose to study whether numerical commonsense knowledge can be induced from pre-trained language models like BERT, and to what extent this access to knowledge is robust agains...
1,012
1
numeric_fused_head
false
[ "task_categories:token-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1K<n<10K", "source_datasets:original...
Fused Head constructions are noun phrases in which the head noun is missing and is said to be "fused" with its dependent modifier. This missing information is implicit and is important for sentence understanding. The missing heads are easily filled in by humans, but pose a challenge for computational models. For examp...
401
1
oclar
false
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language...
The OCLAR researchers (Marwan et al., 2019) gathered Arabic customer reviews from Google Reviews and the Zomato website (https://www.zomato.com/lebanon) across a wide range of domains, including restaurants, hotels, hospitals, local shops, etc. The corpus contains 3,916 reviews on a 5-point rating scale. For this research ...
269
1
offcombr
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:unknown", "hate-speech-detection" ]
OffComBR: an annotated dataset for hate speech detection in Portuguese, composed of news comments on the Brazilian Web.
400
2
offenseval2020_tr
false
[ "task_categories:text-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:tr", "license:cc-by-2.0", "offensive-language-classification" ]
OffensEval-TR 2020 is a Turkish offensive language corpus. The corpus consists of randomly sampled tweets, annotated in a similar way to OffensEval and GermEval.
464
3
offenseval_dravidian
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:kn", "language:ml", "language:ta", "l...
Offensive language identification dataset for Dravidian languages. The goal of this task is to identify offensive language content in the code-mixed dataset of comments/posts in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English) collected from social media.
533
2
ofis_publik
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:br", "language:fr", "license:unknown" ]
Texts from the Ofis Publik ar Brezhoneg (Breton Language Board) provided by Francis Tyers 2 languages, total number of files: 278 total number of tokens: 2.12M total number of sentence fragments: 0.13M
268
0
ohsumed
false
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nc-4.0" ]
The OHSUMED test collection is a set of 348,566 references from MEDLINE, the on-line medical information database, consisting of titles and/or abstracts from 270 medical journals over a five-year period (1987-1991). The available fields are title, abstract, MeSH indexing terms, author, source, and publication type.
518
0
ollie
false
[ "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:other", "relation-extraction", "text-to-structured" ]
The Ollie dataset includes two configs for the data used to train the Ollie information extraction algorithm, for 18M sentences and 3M sentences respectively. This data is for academic use only. From the authors: Ollie is a program that automatically identifies and extracts binary relationships from English sentenc...
406
0
omp
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "license:cc-by-nc-sa-4.0" ]
The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language). DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website, there is a discussion section below each news article where readers engage in online disc...
533
1
onestop_english
false
[ "task_categories:text2text-generation", "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:text-simplification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "languag...
This dataset is a compilation of the OneStopEnglish corpus of texts written at three reading levels into one file. Text documents are classified into three reading levels - ele, int, adv (Elementary, Intermediate and Advanced). This dataset demonstrates its usefulness through two applications - automatic readabili...
1,528
12
onestop_qa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "source_datasets:extended|onestop_english", "language:en", "lic...
OneStopQA is a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme. The reading materials are Guardian articles taken from the [OneStopEnglish corpus](https://github.com/nishkalavallabhi/OneStopEnglishCorpus). Each article comes in thr...
282
3
open_subtitles
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:original", "language:af", "language:ar", "language:bg", "language:bn", "l...
This is a new collection of translated movie subtitles from http://www.opensubtitles.org/. IMPORTANT: If you use the OpenSubtitles corpus, please add a link to http://www.opensubtitles.org/ on your website and in your reports and publications produced with the data! This is a slightly cleaner version of the subtitle ...
1,484
15
openai_humaneval
false
[ "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:mit", "code-generation", "arxiv:2107.03374" ]
The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unit tests to verify the viability of a proposed solution.
38,002
35
openbookqa
false
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknow...
OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additio...
15,855
6
openslr
false
[ "task_categories:automatic-speech-recognition", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:af", "language:bn", "language:ca", "language:en", "language:es", "language:eu", "language...
OpenSLR is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. We intend to be a convenient place for anyone to put resources that they have created, so that they can be downloaded publicly.
4,092
7
openwebtext
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", ...
An open-source replication of the WebText dataset from OpenAI.
301,689
111
opinosis
false
[ "task_categories:summarization", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:apache-2.0", "abstractive-summarization" ]
The Opinosis Opinion Dataset consists of sentences extracted from reviews for 51 topics. Topics and opinions are obtained from Tripadvisor, Edmunds.com and Amazon.com.
287
1
opus100
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:1M<n<10M", "size_categories:n<1K", "source_datasets:extended", "l...
OPUS-100 is English-centric, meaning that all training pairs include English on either the source or target side. The corpus covers 100 languages (including English). OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 ha...
19,200
19
opus_books
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ca", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:fi", "language...
This is a collection of copyright free books aligned by Andras Farkas, which are available from http://www.farkastranslations.com/bilingual_books.php Note that the texts are rather dated due to copyright issues and that some of them are manually reviewed (check the meta-data at the top of the corpus files in XML). The ...
10,303
4
opus_dgt
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de",...
A collection of translation memories provided by the JRC. Source: https://ec.europa.eu/jrc/en/language-technologies/dgt-translation-memory 25 languages, 299 bitexts total number of files: 817,410 total number of tokens: 2.13G total number of sentence fragments: 113.52M
1,456
1
opus_dogc
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "language:es", "license:cc0-1.0" ]
This is a collection of documents from the Official Journal of the Government of Catalonia, in Catalan and Spanish languages, provided by Antoni Oliver Gonzalez from the Universitat Oberta de Catalunya.
268
0
opus_elhuyar
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:es", "language:eu", "license:unknown" ]
Dataset provided by the Elhuyar foundation, containing parallel data from Spanish to Basque.
267
0
opus_euconst
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", "language:et", "langua...
A parallel corpus collected from the European Constitution for 21 languages.
27,880
2
opus_finlex
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown" ]
The Finlex Data Base is a comprehensive collection of legislative and other judicial information of Finland, available in Finnish, Swedish and partially in English. This corpus is taken from the Semantic Finlex service that provides the Finnish and Swedish data as linked open data and also as raw XML files.
267
0
opus_fiskmo
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:original", "language:fi", "language:sv", "license:unknown" ]
fiskmo is a massive parallel corpus for Finnish and Swedish.
267
0
opus_gnome
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:af", "language:am", "language:an", "language:ang", "...
A parallel corpus of GNOME localization files. Source: https://l10n.gnome.org 187 languages, 12,822 bitexts total number of files: 113,344 total number of tokens: 267.27M total number of sentence fragments: 58.12M
1,458
0
opus_infopankki
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "language:en", "language:es", "language:et", "language:fa", "language:fi", "language:fr", "langua...
A parallel corpus of 12 languages, 66 bitexts.
8,839
1
opus_memat
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "language:xh", "license:unknown" ]
Xhosa-English parallel corpora. Funded by EPSRC, the Medical Machine Translation project worked on machine translation between isiXhosa and English, with a focus on the medical domain.
266
1
opus_montenegrinsubs
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:cnr", "language:en", "license:unknown" ]
Opus MontenegrinSubs dataset for the machine translation task, for the en-me language pair: English and Montenegrin.
267
0
opus_openoffice
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:ja", "language:ru", "language:sv", "langua...
A collection of documents from http://www.openoffice.org/.
3,814
1
opus_paracrawl
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:bg", "language:ca", "language:cs", "language:da",...
Parallel corpora from Web Crawls collected in the ParaCrawl project. 42 languages, 43 bitexts total number of files: 59,996 total number of tokens: 56.11G total number of sentence fragments: 3.13G
1,469
3
opus_rf
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:sv", "license:unknown" ]
RF is a tiny parallel corpus of the Declarations of the Swedish Government and its translations.
1,449
0
opus_tedtalks
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:hr", "license:unknown" ]
This is a Croatian-English parallel corpus of transcribed and translated TED talks, originally extracted from https://wit3.fbk.eu. The corpus is compiled by Željko Agić and is taken from http://lt.ffzg.hr/zagic provided under the CC-BY-NC-SA license. 2 languages, total number of files: 2 total number of tokens: 2.81M t...
270
0
opus_ubuntu
false
[ "task_categories:translation", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:ace", "l...
A parallel corpus of Ubuntu localization files. Source: https://translations.launchpad.net 244 languages, 23,988 bitexts total number of files: 30,959 total number of tokens: 29.84M total number of sentence fragments: 7.73M
1,448
0
opus_wikipedia
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:ar", "language:bg", "language:cs", "language:de", "language:el", "language:...
This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek. Please cite the following publication if you use the data: Krzysztof Wołk and Krzysztof Marasek: Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs., Procedia Technology, 18,...
842
1
opus_xhosanavy
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:xh", "license:unknown" ]
This dataset is designed for machine translation from English to Xhosa.
268
2
orange_sum
false
[ "task_categories:summarization", "task_ids:news-articles-headline-generation", "task_ids:news-articles-summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:fr", "license:unknown",...
The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation, with 266M customers worldwide. Scraped pages cover almost a decade from Feb 2011 to Sep 2020. They belong to ...
672
2
oscar
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:100M<n<1B", "size_catego...
The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
55,034
78
para_crawl
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:10M<n<100M", "source_datasets:original", "language:bg", "language:cs", "language:da", "language:de", "language:el", "language:en", "language:es", ...
null
3,319
5
para_pat
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:translation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:10K<n<100K...
ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts This dataset contains the developed parallel corpus from the open access Google Patents dataset in 74 language pairs, comprising more than 68 million sentences and 800 million tokens. Sentences were automatically aligned using the Hunalign algor...
2,782
5
parsinlu_reading_comprehension
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia|google", "language:fa", "license:cc-by-nc-sa-4.0", "arxiv:20...
A Persian reading comprehension task (generating an answer, given a question and a context paragraph). The questions are mined using Google auto-complete; their answers and the corresponding evidence documents are manually annotated by native speakers.
269
0
pass
false
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:extended|yffc100M", "language:en", "license:cc-by-4.0", "image-self-supervised pre...
PASS (Pictures without humAns for Self-Supervision) is a large-scale dataset of 1,440,191 images that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns. The PASS images are sourced from the YFCC-100M dataset.
268
1
paws-x
false
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "task_ids:multi-input-text-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:exper...
PAWS-X, a multilingual version of PAWS (Paraphrase Adversaries from Word Scrambling) for six languages. This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. Engl...
17,688
9
paws
false
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "task_ids:multi-input-text-classification", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:machi...
PAWS: Paraphrase Adversaries from Word Scrambling This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that feature the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the o...
21,905
13
pec
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-retrieval", "task_ids:dialogue-modeling", "task_ids:utterance-retrieval", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:orig...
A dataset of around 350K persona-based empathetic conversations. Each speaker is associated with a persona, which comprises multiple persona sentences. The response of each conversation is empathetic.
533
2
allenai/peer_read
false
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "acceptability-classification", "arxiv:1804.09635" ]
PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a sub...
426
2
peoples_daily_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:zh", "license:unknown" ]
People's Daily NER Dataset is a commonly used dataset for Chinese NER, with text from People's Daily (人民日报), the largest official newspaper. The dataset is in BIO scheme. Entity types are: PER (person), ORG (organization) and LOC (location).
663
4
per_sent
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|other-MPQA-KBP Challenge-MediaRank", "language:en", "license:unknown", "a...
Person SenTiment (PerSenT) is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotation for 5.3k documents and 38k paragraphs covering 3.2k unique entities. The dataset consists of sentiment annotations on news articles about people. Fo...
269
0
persian_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fa", "license:cc-by-4.0" ]
The dataset includes 250,015 tokens and 7,682 Persian sentences in total. It is available in 3 folds to be used in turn as training and test sets. The NER tags are in IOB format.
553
0
pg19
false
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1911.05507" ]
This repository contains the PG-19 language modeling benchmark. It includes a set of books extracted from the Project Gutenberg books library that were published before 1919. It also contains metadata of book titles and publication dates. PG-19 is over double the size of the Billion Word benchmark and contains docume...
402
7
php
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:cs", "language:de", "language:en", "language:es", "language:fi", "language:fr", "language:he", "langua...
A parallel corpus originally extracted from http://se.php.net/download-docs.php. The original documents are written in English and have been partly translated into 21 languages. The original manuals contain about 500,000 words. The amount of actually translated texts varies for different languages between 50,000 and 38...
796
0
etalab-ia/piaf
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fr", "license:mit" ]
Piaf is a reading comprehension dataset. This version, published in February 2020, contains 3835 questions on French Wikipedia.
279
6
pib
false
[ "task_categories:translation", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:translation", "size_categories:100K<n<1M", "size_categ...
Sentence-aligned parallel corpus between 11 Indian languages, crawled and extracted from the Press Information Bureau website.
7,392
3
piqa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv...
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning and a corresponding benchmark dataset P...
595,457
14
pn_summary
false
[ "task_categories:summarization", "task_categories:text-classification", "task_ids:news-articles-summarization", "task_ids:news-articles-headline-generation", "task_ids:text-simplification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:mon...
A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abstractive/Extractive tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification. It is imperative to consider that t...
293
3
poem_sentiment
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2011.02686" ]
Poem Sentiment is a sentiment dataset of poem verses from Project Gutenberg. This dataset can be used for tasks such as sentiment classification or style transfer for poems.
2,774
7
polemo2
false
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:bsd-3-clause" ]
The PolEmo2.0 is a set of online reviews from medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.
399
0
poleval2019_cyberbullying
false
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:unknown" ]
In Task 6-1, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and related phenomena. In Task 6-2, the participants shall distinguish between three classes of twee...
443
0
poleval2019_mt
false
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:expert-generated", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:pl", "language:ru", "license:unknown" ]
PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. Submitted solutions compete against one another within certain tasks selected by organizers, using available data, and are evaluated according to pre-established procedures. One of the tasks in PolEval-2019 was Machine Tran...
668
0
polsum
false
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:pl", "license:cc-by-3.0" ]
Polish Summaries Corpus: the corpus of Polish news summaries.
272
0
polyglot_ner
false
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:unknown", "source_datasets:original", "language:ar", "language:bg", "language:ca", "language:cs", "...
Polyglot-NER A training dataset automatically generated from Wikipedia and Freebase for the task of named entity recognition. The dataset contains the basic Wikipedia-based training data for the 40 languages we have (with coreference resolution) for the task of named entity recognition. The details of the procedure of generati...
6,761
7
prachathai67k
false
[ "task_categories:text-classification", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
`prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags fro...
288
1
pragmeval
false
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "lice...
Evaluation of language understanding with an 11-dataset benchmark focusing on discourse and pragmatics
3,101
3
proto_qa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", ...
This dataset is for studying computational models trained to reason about prototypical situations. Using deterministic filtering, a sample was drawn from a larger set of all transcriptions. It contains 9,789 instances, each representing a survey question from the Family Feud game. Each instance is exactly a que...
533
1
psc
false
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:cc-by-sa-3.0" ]
The Polish Summaries Corpus contains news articles and their summaries. We used summaries of the same article as positive pairs and sampled the most similar summaries of different articles as negatives.
272
1
ptb_text_only
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:...
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material. This corpus has been annotated for part-of-speech (POS) information. In addition, over half of it has been annotated for skeletal syntactic structure.
16,135
6
pubmed
false
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:text-scoring", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", ...
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
498
18
pubmed_qa
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<...
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts. PubMedQA has 1k expe...
11,727
22
py_ast
false
[ "task_categories:text2text-generation", "task_categories:text-generation", "task_categories:fill-mask", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:code", "license:bsd-2-claus...
Dataset consisting of parsed ASTs that were used to train and evaluate the DeepSyn tool. The Python programs are collected from GitHub repositories by removing duplicate files, removing project forks (copies of another existing repository), keeping only programs that parse and have at most 30,000 nodes in the AST and we ...
275
3
qa4mre
false
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:other", "language_creators:found", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "language:bg", "language:de", "language:en", "language:es", "langu...
QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in question answering and reading comprehension. The dataset contains a supporting passage and a set of questions corresponding to the passage. Multiple options for answers are provided for each question, of which only one is correc...
3,542
2
qa_srl
false
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "task_ids:open-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown" ]
The dataset contains question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, What kind, etc.) and contain a verb predicate in the sentence; the answers are phrases in the sentence. There were 2 datasets used in the paper, newswire and wikipedia. Unfortunately t...
1,285
1
qa_zre
false
[ "task_categories:question-answering", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:unknown", "zero-shot-relation-extraction" ]
A dataset reducing relation extraction to simple reading comprehension questions
844
1
qangaroo
false
[ "language:en" ]
We have created two new Reading Comprehension datasets focusing on multi-hop (alias multi-step) inference. Several pieces of information often jointly imply another fact. In multi-hop inference, a new fact is derived by combining facts via a chain of multiple steps. Our aim is to build Reading Comprehension method...
677
0
qanta
false
[ "task_categories:question-answering", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "quizbowl", "arxiv:1904.04792" ]
The Qanta dataset is a question answering dataset based on the academic trivia game Quizbowl.
670
1
qasc
false
[ "task_categories:question-answering", "task_categories:multiple-choice", "task_ids:extractive-qa", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", ...
QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
11,678
1
allenai/qasper
false
[ "task_categories:question-answering", "task_ids:closed-domain-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|s2orc", "language:en", "license:cc-by-4.0", "arxiv:2105.03011" ]
A dataset containing 1585 papers with 5049 information-seeking questions asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.
396
21
qed
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|natural_questions", "language:en", "license:unknown", "explanations-in-question-a...
QED is a linguistically informed, extensible framework for explanations in question answering. A QED explanation specifies the relationship between a question and answer according to formal semantic notions such as referential equality, sentencehood, and entailment. It is an expert-annotated dataset of QED explanations...
1,148
1
qed_amara
false
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:aa", "language:ab", "language:ae", "language:aeb", "language:af", "language:ak", "language:am", "langua...
The QCRI Educational Domain Corpus (formerly QCRI AMARA Corpus) is an open multilingual collection of subtitles for educational videos and lectures collaboratively transcribed and translated over the AMARA web-based platform. Developed by: Qatar Computing Research Institute, Arabic Language Technologies Group The QED C...
794
2
quac
false
[ "task_categories:question-answering", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categ...
Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2)...
1,072
4
quail
false
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0" ]
QuAIL is a reading comprehension dataset. QuAIL contains 15K multiple-choice questions over texts 300-350 tokens long from 4 domains (news, user stories, fiction, blogs). QuAIL is balanced and annotated for question types.
14,397
1
quarel
false
[ "language:en" ]
QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms.
8,054
0
quartz
false
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0" ]
QuaRTz is a crowdsourced dataset of 3864 multiple-choice questions about open domain qualitative relationships. Each question is paired with one of 405 different background sentences (sometimes short paragraphs). The QuaRTz dataset V1 contains 3864 questions about open domain qualitative relationships. Each question is...
11,616
2