dataset_name | description | prompt |
|---|---|---|
MIND | MIcrosoft News Dataset (MIND) is a large-scale dataset for news recommendation research. It was collected from anonymized behavior logs of the Microsoft News website. The mission of MIND is to serve as a benchmark dataset for news recommendation and to facilitate research in the news recommendation and recommender systems area.
MIND contains about 160k English news articles and more than 15 million impression logs generated by 1 million users. Every news article contains rich textual content including title, abstract, body, category and entities. Each impression log contains the click events, non-clicked events and the historical news click behaviors of the user before this impression. To protect user privacy, each user was de-linked from the production system by being securely hashed into an anonymized ID. | Provide a detailed description of the following dataset: MIND |
MLQA | MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance. MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages - English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between 4 different languages on average. | Provide a detailed description of the following dataset: MLQA |
KdConv | KdConv is a Chinese multi-domain Knowledge-driven Conversation dataset, grounding the topics in multi-turn conversations to knowledge graphs. KdConv contains 4.5K conversations from three domains (film, music, and travel), and 86K utterances with an average turn number of 19.0. These conversations contain in-depth discussions on related topics and natural transitions between multiple topics, and the corpus can also be used for exploring transfer learning and domain adaptation. | Provide a detailed description of the following dataset: KdConv |
STACKEX | STACKEX is a keyphrase generation dataset built from StackExchange posts; it expands keyphrase generation beyond academic writing, previously the only genre covered by such datasets. | Provide a detailed description of the following dataset: STACKEX |
SciREX | SCIREX is a document-level IE dataset that encompasses multiple IE tasks, including salient entity identification and document-level N-ary relation identification from scientific articles. The dataset is annotated by integrating automatic and human annotations, leveraging existing scientific knowledge resources. | Provide a detailed description of the following dataset: SciREX |
CH-SIMS | CH-SIMS is a Chinese single- and multimodal sentiment analysis dataset which contains 2,281 refined video segments in the wild with both multimodal and independent unimodal annotations. It allows researchers to study the interaction between modalities or use independent unimodal annotations for unimodal sentiment analysis. | Provide a detailed description of the following dataset: CH-SIMS |
WCEP | The WCEP dataset for multi-document summarization (MDS) consists of short, human-written summaries about news events, obtained from the Wikipedia Current Events Portal (WCEP), each paired with a cluster of news articles associated with an event. These articles consist of sources cited by editors on WCEP, and are extended with articles automatically obtained from the Common Crawl News dataset. | Provide a detailed description of the following dataset: WCEP |
MATINF | Maternal and Infant (MATINF) Dataset is a large-scale dataset jointly labeled for classification, question answering and summarization in the domain of maternity and baby caring in Chinese. An entry in the dataset includes four fields: question (Q), description (D), class (C) and answer (A).
Nearly two million question-answer pairs are collected with fine-grained human-labeled classes from a large Chinese maternity and baby caring QA site. The authors conduct both automatic and manual data cleansing and remove: (1) classes with insufficient samples; (2) entries in which the length of the description field is less than the length of the question field; (3) data with any field longer than 256 characters; (4) human-spotted ill-formed data. After the data cleansing, MATINF is constructed from the remaining 1.07 million entries. | Provide a detailed description of the following dataset: MATINF |
FOBIE | The Focused Open Biology Information Extraction (FOBIE) dataset aims to support IE from Computer-Aided Biomimetics. The dataset contains ~1,500 sentences from scientific biological texts. These sentences are annotated with TRADE-OFFS and syntactically similar relations between unbounded arguments, as well as argument-modifiers.
The FOBIE dataset has been used to explore Semi-Open Relation Extraction (SORE). The code and instructions for this can be found in the SORE folder's Readme.md, or in the ReadTheDocs documentation. | Provide a detailed description of the following dataset: FOBIE |
CODA-19 | CODA-19 is a human-annotated dataset that labels the Background, Purpose, Method, Finding/Contribution, and Other segments of 10,966 English abstracts in the COVID-19 Open Research Dataset.
CODA-19 was created by 248 crowd workers from Amazon Mechanical Turk collectively within ten days. Each abstract was annotated by nine different workers, and the final labels were obtained by majority voting.
CODA-19's labels have an accuracy of 82% and an inter-annotator agreement (Cohen's kappa) of 0.74 when compared against expert labels on 129 abstracts. | Provide a detailed description of the following dataset: CODA-19 |
RWWD | Real World Worry Dataset (RWWD) captures the emotional responses of UK residents to COVID-19 at a point in time when the COVID-19 situation affected the lives of all individuals in the UK. The data were collected on the 6th and 7th of April 2020, a time at which the UK was under lockdown (news, 2020) and death tolls were increasing. On April 6, 5,373 people in the UK had died of the virus, and 51,608 had tested positive. On the day before data collection, the Queen addressed the nation via a television broadcast. Furthermore, it was also announced that Prime Minister Boris Johnson had been admitted to intensive care in a hospital for COVID-19 symptoms.
The RWWD is a ground-truth dataset that used a direct survey method and obtained written accounts from people alongside data on the emotions they felt while writing. As such, the dataset does not rely on third-person annotation but draws on direct self-reported emotions. Two versions of RWWD are presented, each consisting of 2,500 English texts representing the participants' genuine emotional responses to the COVID-19 situation in the UK: the Long RWWD consists of texts that were open-ended in length and asked the participants to express their feelings as they wished. The Short RWWD asked the same people to also express their feelings in Tweet-sized texts. The latter was chosen to facilitate the use of this dataset for Twitter data research. | Provide a detailed description of the following dataset: RWWD |
COVID-Q | COVID-Q consists of COVID-19 questions which have been annotated into a broad category (e.g. Transmission, Prevention) and a more specific class such that questions in the same class are all asking the same thing. | Provide a detailed description of the following dataset: COVID-Q |
WT-WT | Will-They-Won't-They (WT-WT) is a large dataset of English tweets targeted at stance detection for the rumor verification task. The dataset is constructed based on tweets that discuss five recent merger and acquisition (M&A) operations of US companies, mainly from the healthcare sector.
All the annotations are carried out by domain experts; therefore, the dataset constitutes a high-quality and reliable benchmark for future research in stance detection. | Provide a detailed description of the following dataset: WT-WT |
iSarcasm | iSarcasm is a dataset of tweets, each labelled as either sarcastic or non_sarcastic. Each sarcastic tweet is further labelled for one of the following types of ironic speech:
- sarcasm: tweets that contradict the state of affairs and are critical towards an addressee;
- irony: tweets that contradict the state of affairs but are not obviously critical towards an addressee;
- satire: tweets that appear to support an addressee, but contain underlying disagreement and mocking;
- understatement: tweets that undermine the importance of the state of affairs they refer to;
- overstatement: tweets that describe the state of affairs in obviously exaggerated terms;
- rhetorical question: tweets that include a question whose invited inference (implicature) is obviously contradicting the state of affairs.
For each sarcastic tweet, there's also:
- an explanation, in English sentences, as to why it is sarcastic, and
- a rephrase that conveys the same meaning non-sarcastically. Both have been provided by the author of the tweet.
iSarcasm contains 4,484 tweets, of which 777 are labelled as sarcastic and 3,707 as non-sarcastic. There are two files, isarcasm_train.csv and isarcasm_test.csv, containing 80% and 20% of the examples respectively, chosen at random. Each line in a file has the format tweet_id,sarcasm_label,sarcasm_type, where sarcasm_type is only defined for sarcastic tweets, as specified above. | Provide a detailed description of the following dataset: iSarcasm |
KLEJ | The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.
Key benchmark features:
- It contains a diverse set of tasks from different domains and with different objectives.
- Most tasks are created from existing datasets, but the authors also released a new sentiment analysis dataset from the e-commerce domain.
- It includes tasks which have relatively small datasets and require extensive external knowledge to solve. This promotes the use of transfer learning instead of training separate models from scratch.
The name KLEJ (English: GLUE) is an abbreviation for Kompleksowa Lista Ewaluacji Językowych (English: Comprehensive List of Language Evaluations) and refers to the [GLUE benchmark](/dataset/glue). | Provide a detailed description of the following dataset: KLEJ |
XQuAD | XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. Consequently, the dataset is entirely parallel across 11 languages. | Provide a detailed description of the following dataset: XQuAD |
Microsoft Research Multimodal Aligned Recipe Corpus | To construct the MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS the authors first extract a large number of text and video recipes from the web. The goal is to find joint alignments between multiple text recipes and multiple video recipes for the same dish. The task is challenging, as different recipes vary in their order of instructions and use of ingredients. Moreover, video instructions can be noisy, and text and video instructions include different levels of specificity in their descriptions. | Provide a detailed description of the following dataset: Microsoft Research Multimodal Aligned Recipe Corpus |
ClarQ | ClarQ consists of ~2M examples distributed across 173 domains of StackExchange. This dataset is meant for the training and evaluation of clarification question generation systems.
| Provide a detailed description of the following dataset: ClarQ |
TechQA | TECHQA is a domain-adaptation question answering dataset for the technical support domain. The TECHQA corpus highlights two real-world issues from the automated customer support domain. First, it contains actual questions posed by users on a technical forum, rather than questions generated specifically for a competition or a task. Second, it has a real-world size – 600 training, 310 dev, and 490 evaluation question/answer pairs – thus reflecting the cost of creating large labeled datasets with actual data. Consequently, TECHQA is meant to stimulate research in domain adaptation rather than being a resource to build QA systems from scratch. The dataset was obtained by crawling the IBM Developer and IBM DeveloperWorks forums for questions with accepted answers that appear in a published IBM Technote—a technical document that addresses a specific technical issue. | Provide a detailed description of the following dataset: TechQA |
Refer360° | Refer360° is a novel large-scale referring expression recognition dataset consisting of 17,137 instruction sequences and ground-truth actions for completing these instructions in 360° scenes. | Provide a detailed description of the following dataset: Refer360° |
MUStARD | We release the MUStARD dataset which is a multimodal video corpus for research in automated sarcasm discovery. The dataset is compiled from popular TV shows including Friends, The Golden Girls, The Big Bang Theory, and Sarcasmaholics Anonymous. MUStARD consists of audiovisual utterances annotated with sarcasm labels. Each utterance is accompanied by its context, which provides additional information on the scenario where the utterance occurs. | Provide a detailed description of the following dataset: MUStARD |
ChID | ChID is a large-scale Chinese IDiom dataset for cloze tests. ChID contains 581K passages and 729K blanks, and covers multiple domains. In ChID, the idioms in a passage are replaced with blank symbols. For each blank, a list of candidate idioms including the golden idiom is provided as choices. | Provide a detailed description of the following dataset: ChID |
XQA | XQA is a dataset consisting of a total of 90k question-answer pairs in nine languages for cross-lingual open-domain question answering. | Provide a detailed description of the following dataset: XQA |
TalkSumm | The **TalkSumm** dataset contains 1705 automatically-generated summaries of scientific papers from ACL, NAACL, EMNLP, SIGDIAL (2015-2018), and ICML (2017-2018).
The dataset is provided as a list of titles and URLs and the corresponding summaries. | Provide a detailed description of the following dataset: TalkSumm |
CONAN | COunter NArratives through Nichesourcing (CONAN) is a dataset consisting of 4,078 hate-speech/counter-narrative pairs over three languages (English, French, and Italian). Additionally, three types of metadata are provided: expert demographics, hate speech sub-topic and counter-narrative type. The dataset is augmented through translation (from Italian/French to English) and paraphrasing, which brought the total number of pairs to 14,988. | Provide a detailed description of the following dataset: CONAN |
VIST-Edit | The dataset, VIST-Edit, includes 14,905 human-edited versions of 2,981 machine-generated visual stories. The stories were generated by two state-of-the-art visual storytelling models, each aligned to 5 human-edited versions. | Provide a detailed description of the following dataset: VIST-Edit |
OQGend | Dataset OQRanD and OQGenD for the paper "Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums" by Zi Chai, Xinyu Xing, Xiaojun Wan and Bo Huang, accepted at ACL 2019.
The OQGenD dataset can be viewed in "OQGenD.xml". Each entry (an NQ pair) contains a piece of news with multiple related open-answered questions.
The OQRanD dataset can be viewed in "OQRanD.xml". Each entry (a question pair) contains two questions, where Q2 has more answers than Q1. | Provide a detailed description of the following dataset: OQGend |
OQRanD | Dataset OQRanD and OQGenD for the paper "Asking the Crowd: Question Analysis, Evaluation and Generation for Open Discussion on Online Forums" by Zi Chai, Xinyu Xing, Xiaojun Wan and Bo Huang, accepted at ACL 2019.
The OQGenD dataset can be viewed in "OQGenD.xml". Each entry (an NQ pair) contains a piece of news with multiple related open-answered questions.
The OQRanD dataset can be viewed in "OQRanD.xml". Each entry (a question pair) contains two questions, where Q2 has more answers than Q1. | Provide a detailed description of the following dataset: OQRanD |
PAWS | Paraphrase Adversaries from Word Scrambling (PAWS) is a dataset containing 108,463 human-labeled and 656k noisily labeled pairs that highlight the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other based on the Quora Question Pairs (QQP) dataset. | Provide a detailed description of the following dataset: PAWS |
LitBank | LitBank is an annotated dataset of 100 works of English-language fiction to support tasks in natural language processing and the computational humanities, described in more detail in the following publications:
- David Bamman, Sejal Popat and Sheng Shen (2019), "An Annotated Dataset of Literary Entities," NAACL 2019.
- Matthew Sims, Jong Ho Park and David Bamman (2019), "Literary Event Detection," ACL 2019.
- David Bamman, Olivia Lewke and Anya Mansoor (2020), "An Annotated Dataset of Coreference in English Literature", LREC.
LitBank currently contains annotations for entities, events, entity coreference, and quotation attribution in a sample of ~2,000 words from each of those texts, totaling 210,532 tokens.
LitBank is licensed under a Creative Commons Attribution 4.0 International License. | Provide a detailed description of the following dataset: LitBank |
Discovery Dataset | The *Discovery* dataset consists of adjacent sentence pairs (s1,s2) with a discourse marker (y) that occurred at the beginning of s2. They were extracted from the depcc web corpus.
Marker prediction can be used to train sentence encoders. Discourse markers can be considered as noisy labels for various semantic tasks, such as entailment (y=therefore), subjectivity analysis (y=personally), sentiment analysis (y=sadly), similarity (y=similarly), typicality (y=curiously), ...
The specificity of this dataset is the diversity of the markers, since previously used data covered only ~10 imbalanced classes. The authors of the dataset provide:
- a list of the 174 discourse markers
- a Base version of the dataset with 1.74 million pairs (10k examples per marker)
- a Big version with 3.4 million pairs
- a Hard version with 1.74 million pairs where the connective couldn't be predicted with a fastText linear model | Provide a detailed description of the following dataset: Discovery Dataset |
CLEVR-Dialog | CLEVR-Dialog is a large diagnostic dataset for studying multi-round reasoning in visual dialog. Specifically, the authors construct a dialog grammar that is grounded in the scene graphs of the images from the CLEVR dataset. This combination results in a dataset where all aspects of the visual dialog are fully annotated. In total, CLEVR-Dialog contains 5 instances of 10-round dialogs for about 85k CLEVR images, totaling 4.25M question-answer pairs.
CLEVR-Dialog is used to benchmark the performance of standard visual dialog models, in particular on visual coreference resolution (as a function of the coreference distance). This is the first analysis of its kind for visual dialog models, and it was not possible without this dataset.
CLEVR-Dialog aims to help inform the development of future models for visual dialog. | Provide a detailed description of the following dataset: CLEVR-Dialog |
MultiSense | MultiSense is a dataset of 9,504 images annotated with an English verb and its translation in Spanish and German. | Provide a detailed description of the following dataset: MultiSense |
SciQ | The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided. | Provide a detailed description of the following dataset: SciQ |
MedHop | With the same format as WikiHop, the MedHop dataset is based on research paper abstracts from PubMed, and the queries are about interactions between pairs of drugs. The correct answer has to be inferred by combining information from a chain of reactions of drugs and proteins. | Provide a detailed description of the following dataset: MedHop |
NEWSROOM | CORNELL NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications. The summaries are obtained from search and social metadata between 1998 and 2017 and use a variety of summarization strategies combining extraction and abstraction. | Provide a detailed description of the following dataset: NEWSROOM |
ListOps | The ListOps examples are composed of summary operations on lists of single-digit integers, written in prefix notation. The full sequence has a corresponding solution which is also a single-digit integer, thus making it a ten-way balanced classification problem. For example, [MAX 2 9 [MIN 4 7 ] 0 ] has the solution 9. Each operation has a corresponding closing square bracket that defines the list of numbers for the operation. In this example, MIN operates on {4, 7}, while MAX operates on {2, 9, 4, 0}. | Provide a detailed description of the following dataset: ListOps |
DuoRC | DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie.
**Why another RC dataset?**
DuoRC pushes the NLP community to address challenges on incorporating knowledge and reasoning in neural architectures for reading comprehension. It poses several interesting challenges such as:
- DuoRC using parallel plots is especially designed to contain a large number of questions with low lexical overlap between questions and their corresponding passages
- It requires models to go beyond the content of the given passage itself and incorporate world-knowledge, background knowledge, and common-sense knowledge to arrive at the answer
- It revolves around narrative passages from movie plots describing complex events and therefore naturally requires complex reasoning (e.g. temporal reasoning, entailment, long-distance anaphora, etc.) across multiple sentences to infer the answer to questions
- Several of the questions in DuoRC, while seeming relevant, cannot actually be answered from the given passage. This requires the model to detect the unanswerability of questions, an ability that is particularly important for machines in industrial settings | Provide a detailed description of the following dataset: DuoRC |
PAWS-X | PAWS-X contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. All translated pairs are sourced from examples in PAWS-Wiki. | Provide a detailed description of the following dataset: PAWS-X |
KnowledgeNet | KnowledgeNet is a benchmark dataset for the task of automatically populating a knowledge base (Wikidata) with facts expressed in natural language text on the web. KnowledgeNet provides text exhaustively annotated with facts, thus enabling the holistic end-to-end evaluation of knowledge base population systems as a whole, unlike previous benchmarks that are more suitable for the evaluation of individual subcomponents (e.g., entity linking, relation extraction).
For instance, the dataset contains text expressing the fact (Gennaro Basile; RESIDENCE; Moravia), in the passage: "Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brünn, in Moravia, and lived about 1756..." | Provide a detailed description of the following dataset: KnowledgeNet |
CLINC150 | This dataset is for evaluating the performance of intent classification systems in the presence of "out-of-scope" queries, i.e., queries that do not fall into any of the system-supported intent classes. The dataset includes both in-scope and out-of-scope data. | Provide a detailed description of the following dataset: CLINC150 |
WikiCREM | An unsupervised dataset for coreference resolution. Presented in the publication: Kocijan et al., "WikiCREM: A Large Unsupervised Corpus for Coreference Resolution", EMNLP 2019. | Provide a detailed description of the following dataset: WikiCREM |
BiPaR | **BiPaR** is a manually annotated bilingual parallel novel-style machine reading comprehension (MRC) dataset, developed to support monolingual, multilingual and cross-lingual reading comprehension on novels. The biggest difference between BiPaR and existing reading comprehension datasets is that each triple (Passage, Question, Answer) in BiPaR is written in parallel in two languages. BiPaR is diverse in prefixes of questions, answer types and relationships between questions and passages. Answering the questions requires reading comprehension skills of coreference resolution, multi-sentence reasoning, and understanding of implicit causality. | Provide a detailed description of the following dataset: BiPaR |
PASTEL | PASTEL is a parallelly annotated stylistic language dataset. The dataset consists of ~41K parallel sentences and 8.3K parallel stories annotated across different personas. | Provide a detailed description of the following dataset: PASTEL |
PubMedQA | The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary artery bypass grafting?) using the corresponding abstracts.
PubMedQA has 1k expert labeled, 61.2k unlabeled and 211.3k artificially generated QA instances. | Provide a detailed description of the following dataset: PubMedQA |
JuICe | JuICe is a corpus of 1.5 million examples with a curated test set of 3.7K instances based on online programming assignments. Compared with existing contextual code generation datasets, JuICe provides refined human-curated data, open-domain code, and an order of magnitude more training data. | Provide a detailed description of the following dataset: JuICe |
VisPro | VisPro dataset contains coreference annotation of 29,722 pronouns from 5,000 dialogues. | Provide a detailed description of the following dataset: VisPro |
RUN | The RUN dataset is based on OpenStreetMap (OSM). The map contains rich layers and an abundance of entities of different types. Each entity is complex and can contain (at least) four labels: name, type, is building=y/n, and house number. An entity can spread over several tiles. As the maps do not overlap, only very few entities are shared among them. The RUN dataset aligns NL navigation instructions to coordinates of their corresponding route on the OSM map. | Provide a detailed description of the following dataset: RUN |
CrossWOZ | **CrossWOZ** is the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances for 5 domains, including hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotation of dialogue states and dialogue acts at both user and system sides. | Provide a detailed description of the following dataset: CrossWOZ |
TyDi QA | TyDi QA is a question answering dataset covering 11 typologically diverse languages with 200K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology — the set of linguistic features that each language expresses — such that the authors expect models performing well on this set to generalize across a large number of the languages in the world. | Provide a detailed description of the following dataset: TyDi QA |
BLiMP | BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars. Aggregate human agreement with the labels is 96.4%. | Provide a detailed description of the following dataset: BLiMP |
BREAK | Break is a question understanding dataset, aimed at training models to reason over complex questions. It features 83,978 natural language questions, annotated with a new meaning representation, Question Decomposition Meaning Representation (QDMR). Each example has the natural question along with its QDMR representation. Break contains human composed questions, sampled from 10 leading question-answering benchmarks over text, images and databases. This dataset was created by a team of NLP researchers at Tel Aviv University and Allen Institute for AI. | Provide a detailed description of the following dataset: BREAK |
OLPBENCH | OLPBENCH is a large Open Link Prediction benchmark, which was derived from the state-of-the-art Open Information Extraction corpus OPIEC (Gashteovski et al., 2019). OLPBENCH contains 30M open triples, 1M distinct open relations and 2.5M distinct mentions of approximately 800K entities.
Open Link Prediction is defined as follows: Given an Open Knowledge Graph and a question consisting of an entity mention and an open relation, predict mentions as answers. A predicted mention is correct if it is a mention of the correct answer entity. For example, given the question (“NBC-TV”, “has office in”, ?), correct answers include “NYC” and “New York”. | Provide a detailed description of the following dataset: OLPBENCH |
LIAR | LIAR is a publicly available dataset for fake news detection. A decade-long collection of 12.8K manually labeled short statements was gathered in various contexts from POLITIFACT.COM, which provides a detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this dataset is an order of magnitude larger than the previously largest public fake news datasets of a similar type. The LIAR dataset includes 12.8K human-labeled short statements from POLITIFACT.COM's API, and each statement is evaluated by a POLITIFACT.COM editor for its truthfulness. | Provide a detailed description of the following dataset: LIAR |
STAIR Captions | STAIR Captions is a large-scale dataset containing 820,310 Japanese captions.
This dataset can be used for caption generation, multimodal retrieval, and image generation. | Provide a detailed description of the following dataset: STAIR Captions |
BillSum | BillSum is the first dataset for summarization of US Congressional and California state bills.
The BillSum dataset consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the Govinfo service provided by the United States Government Publishing Office (GPO). The corpus consists of bills from the 103rd-115th (1993-2018) sessions of Congress. The data was split into 18,949 train bills and 3,269 test bills. For California, bills from the 2015-2016 session were scraped directly from the legislature’s website; the summaries were written by their Legislative Counsel.
The BillSum corpus focuses on mid-length legislation from 5,000 to 20,000 characters in length. The authors chose to measure the text length in characters, instead of words or sentences, because the texts have a complex structure that makes it difficult to consistently measure words. The range was chosen because, on one side, short bills introduce minor changes and do not require summaries; while the CRS produces summaries for them, they often contain most of the text of the bill. On the other side, very long legislation is often composed of several large sections. | Provide a detailed description of the following dataset: BillSum |
Business Scene Dialogue | The Japanese-English business conversation corpus, namely **Business Scene Dialogue** corpus, was constructed in 3 steps:
1. selecting business scenes,
2. writing monolingual conversation scenarios according to the selected scenes, and
3. translating the scenarios into the other language.
Half of the monolingual scenarios were written in Japanese and the other half were written in English. The whole construction process was supervised by a person who satisfies the following conditions, to guarantee that the conversations are natural:
- has the experience of being engaged in language learning programs, especially for business conversations
- is able to smoothly communicate with others in various business scenes both in Japanese and English
- has the experience of being involved in business
The BSD corpus is split into balanced training, development and evaluation sets. The documents in these sets are balanced in terms of scenes and original languages. In this repository we publicly share the full development and evaluation sets and a part of the training data set. | Provide a detailed description of the following dataset: Business Scene Dialogue |
X-WikiRE | X-WikiRE is a new, large-scale multilingual relation extraction dataset in which relation extraction is framed as a problem of reading comprehension to allow for generalization to unseen relations. | Provide a detailed description of the following dataset: X-WikiRE |
ProofWriter | The ProofWriter dataset contains many small rulebases of facts and rules, expressed in English. Each rulebase also has a set of questions (English statements) which can either be proven true or false using proofs of various depths, or the answer is “Unknown” (in open-world setting, OWA) or assumed negative (in closed-world setting, CWA).
The dataset includes full proofs with intermediate conclusions, which models can try to reproduce.
The dataset supports various tasks:
- Given rulebase + question, what is answer + proof (w/intermediates)?
- Given rulebase, what are all the provable implications?
- Given rulebase + question without proof, what single fact can be added to make the question true? | Provide a detailed description of the following dataset: ProofWriter |
Open PI | **Open PI** is the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. The dataset comprises 29,928 state changes over 4,050 sentences from 810 procedural real-world paragraphs from WikiHow.com.
The state tracking task assumes a new formulation in which just the text is provided, from which a set of state changes (entity, attribute, before, after) is generated for each step, where the entity, attribute, and values must all be predicted from an open vocabulary. | Provide a detailed description of the following dataset: Open PI |
hasPart KB | This dataset is a new knowledge-base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage of common terms (approximated as within a 10 year old’s vocabulary), as well as having several times more hasPart entries than in the popular ontologies ConceptNet and WordNet. In addition, it contains information about quantifiers, argument modifiers, and links the entities to appropriate concepts in Wikipedia and WordNet. | Provide a detailed description of the following dataset: hasPart KB |
SciDocs | The SciDocs evaluation framework consists of a suite of evaluation tasks designed for document-level representations of scientific papers. | Provide a detailed description of the following dataset: SciDocs |
GenericsKB | The **GenericsKB** contains 3.4M+ generic sentences about the world, i.e., sentences expressing general truths such as "Dogs bark," and "Trees remove carbon dioxide from the atmosphere." Generics are potentially useful as a knowledge source for AI systems requiring general world knowledge. The GenericsKB is the first large-scale resource containing naturally occurring generic sentences (as opposed to extracted or crowdsourced triples), and is rich in high-quality, general, semantically complete statements. Generics were primarily extracted from three large text sources, namely the Waterloo Corpus, selected parts of Simple Wikipedia, and the ARC Corpus. A filtered, high-quality subset is also available in GenericsKB-Best, containing 1,020,868 sentences. | Provide a detailed description of the following dataset: GenericsKB |
CORD-19 | CORD-19 is a free resource of tens of thousands of scholarly articles about COVID-19, SARS-CoV-2, and related coronaviruses for use by the global research community. | Provide a detailed description of the following dataset: CORD-19 |
Quoref | Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems. In this span-selection benchmark containing 24K questions over 4.7K paragraphs from Wikipedia, a system must resolve hard coreferences before selecting the appropriate span(s) in the paragraphs for answering questions. | Provide a detailed description of the following dataset: Quoref |
ROPES | ROPES is a QA dataset which tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented a background passage containing a causal or qualitative relation(s), a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the back-ground passage in the context of the situation. | Provide a detailed description of the following dataset: ROPES |
QASC | QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences. | Provide a detailed description of the following dataset: QASC |
QuaRTz | QuaRTz (V1) is a crowdsourced dataset of 3,864 multiple-choice questions about open-domain qualitative relationships. Each question is paired with one of 405 different background sentences (sometimes short paragraphs).
The dataset is split into train (2696), dev (384) and test (784). A background sentence will only appear in a single split.
Each line in a dataset file is a question specified as a JSON object. | Provide a detailed description of the following dataset: QuaRTz |
WIQA | The WIQA dataset V1 has 39705 questions containing a perturbation and a possible effect in the context of a paragraph. The dataset is split into 29808 train questions, 6894 dev questions and 3003 test questions. | Provide a detailed description of the following dataset: WIQA |
QuaRel | QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms. | Provide a detailed description of the following dataset: QuaRel |
ProPara | The **ProPara** dataset is designed to train and test comprehension of simple paragraphs describing processes (e.g., photosynthesis), with the task of predicting, tracking, and answering questions about how entities change during the process.
ProPara aims to promote the research in natural language understanding in the context of procedural text. This requires identifying the actions described in the paragraph and tracking state changes happening to the entities involved. The comprehension task is treated as that of predicting, tracking, and answering questions about how entities change during the procedure. The dataset contains 488 paragraphs and 3,300 sentences. Each paragraph is richly annotated with the existence and locations of all the main entities (the “participants”) at every time step (sentence) throughout the procedure (~81,000 annotations).
ProPara paragraphs are natural (authored by crowdsourcing) rather than synthetic (e.g., in bAbI). Workers were given a prompt (e.g., “What happens during photosynthesis?”) and then asked to author a series of sentences describing the sequence of events in the procedure. From these sentences, participant entities and their existence and locations were identified. The goal of the challenge is to predict the existence and location of each participant, based on sentences in the paragraph. | Provide a detailed description of the following dataset: ProPara |
ComplexWebQuestions | ComplexWebQuestions is a dataset for answering complex questions that require reasoning over multiple web snippets. It contains a large set of complex questions in natural language, and can be used in multiple ways:
1. By interacting with a search engine;
2. As a reading comprehension task: the authors release 12,725,989 web snippets that are relevant for the questions, and were collected during the development of their model;
3. As a semantic parsing task: each question is paired with a SPARQL query that can be executed against Freebase to retrieve the answer. | Provide a detailed description of the following dataset: ComplexWebQuestions |
ScienceExamCER | ScienceExamCER is a collection of resources for studying explanation-centered inference, including explanation graphs for 1,680 questions, with 4,950 tablestore rows, and other analyses of the knowledge required to answer elementary and middle-school science questions. | Provide a detailed description of the following dataset: ScienceExamCER |
TupleInf Open IE Dataset | The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in “Answering Complex Questions Using Open Information Extraction” (referred as Tuple KB, T). These sentences were collected from a large Web corpus using training questions from 4th and 8th grade as queries. This dataset contains 156K sentences collected for 4th grade questions and 107K sentences for 8th grade questions. Each sentence is followed by the Open IE v4 tuples using their simple format. | Provide a detailed description of the following dataset: TupleInf Open IE Dataset |
TQA | The Textbook Question Answering (TQA) dataset is drawn from middle school science curricula. It consists of 1,076 lessons from Life Science, Earth Science and Physical Science textbooks. This includes 26,260 questions, including 12,567 that have an accompanying diagram.
The TQA dataset encourages work on the task of Multi-Modal Machine Comprehension (M3C) task. The M3C task builds on the popular Visual Question Answering (VQA) and Machine Comprehension (MC) paradigms by framing question answering as a machine comprehension task, where the context needed to answer questions is provided and composed of both text and images. The dataset constructed to showcase this task has been built from a middle school science curriculum that pairs a given question to a limited span of knowledge needed to answer it. | Provide a detailed description of the following dataset: TQA |
Countix | Countix is a real-world dataset of repetition videos collected in the wild (i.e. YouTube), covering a wide range of semantic settings with significant challenges such as camera and object motion, a diverse set of periods and counts, and changes in the speed of repeated actions. Countix includes videos of repeated workout activities (squats, pull ups, battle rope training, exercising arm), dance moves (pirouetting, pumping fist), playing instruments (playing ukulele), using tools repeatedly (hammer hitting objects, chainsaw cutting wood, slicing onion), artistic performances (hula hooping, juggling soccer ball), sports (playing ping pong and tennis) and many others. | Provide a detailed description of the following dataset: Countix |
RL Unplugged | RL Unplugged is a suite of benchmarks for offline reinforcement learning. RL Unplugged is designed around the following considerations: to facilitate ease of use, the datasets are provided with a unified API which makes it easy for the practitioner to work with all data in the suite once a general pipeline has been established. This is a dataset accompanying the paper RL Unplugged: Benchmarks for Offline Reinforcement Learning.
In this suite of benchmarks, the authors try to focus on the following problems:
- High-dimensional action spaces: for example, the humanoid locomotion domains have 56-dimensional actions.
- High dimensional observations.
- Partial observability, observations have egocentric vision.
- Difficulty of exploration, using state-of-the-art algorithms and imitation to generate data for difficult environments.
- Real world challenges. | Provide a detailed description of the following dataset: RL Unplugged |
MineRL | **MineRL** is an imitation learning dataset with over 60 million frames of recorded human player data. The dataset includes a set of tasks which highlight many of the hardest problems in modern-day reinforcement learning: sparse rewards and hierarchical policies. | Provide a detailed description of the following dataset: MineRL |
Mathematics Dataset | This dataset code generates mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This is designed to test the mathematical learning and algebraic reasoning skills of learning models. | Provide a detailed description of the following dataset: Mathematics Dataset |
PGM | PGM dataset serves as a tool for studying both abstract reasoning and generalisation in models. Generalisation is a multi-faceted phenomenon; there is no single, objective way in which models can or should generalise beyond their experience. The PGM dataset provides a means to measure the generalization ability of models in different ways, each of which may be more or less interesting to researchers depending on their intended training setup and applications. | Provide a detailed description of the following dataset: PGM |
Slim | This dataset consists of virtual scenes rendered in MuJoCo with multiple views, each presented in multiple modalities: image, and synthetic or natural language descriptions. Each scene consists of two or three objects placed in a square walled room, and for each of the 10 camera viewpoints the authors rendered a 3D view of the scene as seen from that viewpoint, as well as a synthetically generated description of the scene. | Provide a detailed description of the following dataset: Slim |
TableBank | To address the need for a standard open-domain table benchmark dataset, the authors propose a novel weak supervision approach to automatically create TableBank, which is orders of magnitude larger than existing human-labeled datasets for table analysis. Distinct from traditional weakly supervised training sets, this approach can obtain not only large-scale but also high-quality training data.
Nowadays, there are a great number of electronic documents on the web, such as Microsoft Word (.docx) and LaTeX (.tex) files. These online documents contain mark-up tags for tables in their source code by nature. Intuitively, one can manipulate this source code by adding bounding boxes using the mark-up language within each document. For Word documents, the internal Office XML code can be modified so that the borderline of each table is identified. For LaTeX documents, the tex code can also be modified so that bounding boxes of tables are recognized. In this way, high-quality labeled data is created for a variety of domains such as business documents, official filings, research papers etc., which is tremendously beneficial for large-scale table analysis tasks.
The TableBank dataset consists of 417,234 high-quality labeled tables in total, as well as their original documents in a variety of domains. | Provide a detailed description of the following dataset: TableBank |
GitHub Typo Corpus | Are you the kind of person who makes a lot of typos when writing code? Or are you the one who fixes them by making "fix typo" commits? Either way, thank you—you contributed to the state-of-the-art in the NLP field.
GitHub Typo Corpus is a large-scale dataset of misspellings and grammatical errors along with their corrections harvested from GitHub. It contains more than 350k edits and 65M characters in more than 15 languages, making it the largest dataset of misspellings to date. | Provide a detailed description of the following dataset: GitHub Typo Corpus |
word2word | word2word contains easy-to-use word translations for 3,564 language pairs.
- A large collection of freely & publicly available bilingual lexicons for 3,564 language pairs across 62 unique languages.
- Easy-to-use Python interface for accessing top-k word translations and for building a new bilingual lexicon from a custom parallel corpus.
- Constructed using a simple approach that yields bilingual lexicons with high coverage and competitive translation quality. | Provide a detailed description of the following dataset: word2word |
Dakshina | The Dakshina dataset is a collection of text in both Latin and native scripts for 12 South Asian languages. For each language, the dataset includes a large collection of native script Wikipedia text, a romanization lexicon which consists of words in the native script with attested romanizations, and some full sentence parallel data in both a native script of the language and the basic Latin alphabet. | Provide a detailed description of the following dataset: Dakshina |
Dataset of Legal Documents | The Dataset of Legal Documents consists of court decisions from 2017 and 2018 that were published online by the Federal Ministry of Justice and Consumer Protection. The documents originate from seven federal courts: Federal Labour Court (BAG), Federal Fiscal Court (BFH), Federal Court of Justice (BGH), Federal Patent Court (BPatG), Federal Social Court (BSG), Federal Constitutional Court (BVerfG) and Federal Administrative Court (BVerwG).
The dataset consists of 66,723 sentences with 2,157,048 tokens. The sizes of the seven court-specific datasets vary between 5,858 and 12,791 sentences, and 177,835 to 404,041 tokens. The distribution of annotations on a per-token basis corresponds to approx. 19-23%. | Provide a detailed description of the following dataset: Dataset of Legal Documents |
ChrEn | Cherokee-English Parallel Dataset is a low-resource dataset of 14,151 pairs of sentences with around 313K English tokens and 206K Cherokee tokens. The parallel corpus is accompanied by a monolingual Cherokee dataset of 5,120 sentences. Both datasets are mostly derived from Cherokee monolingual books. | Provide a detailed description of the following dataset: ChrEn |
C4 | **C4** is a colossal, cleaned version of Common Crawl's web crawl corpus. It was based on Common Crawl dataset: https://commoncrawl.org. It was used to train the T5 text-to-text Transformer models.
The dataset can be downloaded in a pre-processed form from [allennlp](https://github.com/allenai/allennlp/discussions/5056). | Provide a detailed description of the following dataset: C4 |
Image网 | **Image网** (pronounced Imagewang; 网 means "net" in Chinese) is an image classification dataset combined from [Imagenette](/dataset/imagenette) and [Imagewoof](/dataset/imagewoof) datasets in a way to make it into a semi-supervised unbalanced classification problem:
* the validation set is the same as the validation set of Imagewoof; there are no Imagenette images in the validation set (they're all in the training set),
* only 10% of Imagewoof images are in the training set. The remaining images are in the "unsupervised" split. | Provide a detailed description of the following dataset: Image网 |
CCAligned | **CCAligned** consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web documents and ensuring that corresponding language codes appeared in the URLs of the web documents. This pattern-matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, it is possible to join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). | Provide a detailed description of the following dataset: CCAligned |
WikiTableT | WikiTableT contains Wikipedia article sections and their corresponding tabular data and various metadata. WikiTableT contains millions of instances while covering a broad range of topics and a variety of kinds of generation tasks. | Provide a detailed description of the following dataset: WikiTableT |
AutoWeakS | The dataset collects all the courses from XuetangX, one of the largest MOOC platforms in China, resulting in 1,951 courses. The collected courses involve seven areas: computer science, economics, engineering, foreign language, math, physics, and social science. Each course description contains 131 words on average. The dataset also contains 706 job postings from the recruiting website operated by JD.com (JD) and 2,456 job postings from the website owned by Tencent corporation (Tencent). The collected job postings involve six areas: technical post, financial post, product post, design post, market post, and supply chain and engineering post. | Provide a detailed description of the following dataset: AutoWeakS |
MMDB | Multimodal Dyadic Behavior (MMDB) dataset is a unique collection of multimodal (video, audio, and physiological) recordings of the social and communicative behavior of toddlers. The MMDB contains 160 sessions of 3-5 minute semi-structured play interaction between a trained adult examiner and a child between the age of 15 and 30 months. The MMDB dataset supports a novel problem domain for activity recognition, which consists of the decoding of dyadic social interactions between adults and children in a developmental context. | Provide a detailed description of the following dataset: MMDB |
GazeFollow | GazeFollow is a large-scale dataset annotated with the location of where people in images are looking. It uses several major datasets that contain people as a source of images: 1,548 images from SUN, 33,790 images from MS COCO, 9,135 images from Actions 40, 7,791 images from PASCAL, 508 images from the ImageNet detection challenge and 198,097 images from the Places dataset. This concatenation results in a challenging and large image collection of people performing diverse activities in many everyday scenarios. | Provide a detailed description of the following dataset: GazeFollow |
4DFAB | 4DFAB is a large scale database of dynamic high-resolution 3D faces which consists of recordings of 180 subjects captured in four different sessions spanning over a five-year period (2012 - 2017), resulting in a total of over 1,800,000 3D meshes. It contains 4D videos of subjects displaying both spontaneous and posed facial behaviours. The database can be used for both face and facial expression recognition, as well as behavioural biometrics. It can also be used to learn very powerful blendshapes for parametrising facial behaviour. | Provide a detailed description of the following dataset: 4DFAB |
iQIYI-VID-2019 | The iQIYI-VID-2019 dataset is the first video dataset for multi-modal person identification. This dataset aims to encourage research on multi-modal person identification. To get close to real applications, video clips are extracted from real online videos of extensive types. All the clips are labeled by human annotators, and automatic algorithms are used to accelerate the collection and labeling process. The iQIYI-VID-2019 dataset is more challenging compared to the iQIYI-VID-2018 dataset, since most hard examples are selected from iQIYI-VID-2018 while more person IDs are added. The dataset contains 100K~200K video clips, divided into three parts: 40% for training, 30% for validation, and 30% for test. The dataset contains about 10,000 identities, 5,000 of which come from the iQIYI celebrity database and are mainly extracted from iQIYI-VID-2018. | Provide a detailed description of the following dataset: iQIYI-VID-2019 |
iQIYI-VID | The iQIYI-VID dataset comprises video clips from iQIYI variety shows, films, and television dramas. The whole dataset contains 500,000 video clips of 5,000 celebrities. The length of each video is 1~30 seconds. | Provide a detailed description of the following dataset: iQIYI-VID |
ELFW | Extended Labeled Faces in-the-Wild (ELFW) is a dataset that supplements the semantic labels originally released with the vastly used Labeled Faces in-the-Wild (LFW) dataset with additional face-related categories, and also additional faces. Additionally, two object-based data augmentation techniques are deployed to synthetically enrich under-represented categories; benchmarking experiments reveal that not only does segmentation of the augmented categories improve, but the remaining categories benefit as well. | Provide a detailed description of the following dataset: ELFW |
KANFace | KANFace consists of 40K still images and 44K sequences (14.5M video frames in total) captured in unconstrained, real-world conditions from 1,045 subjects. The dataset is manually annotated in terms of identity, exact age, gender and kinship. | Provide a detailed description of the following dataset: KANFace |
BAVL | Blind Audio-Visual Localization (BAVL) Dataset consists of 20 audio-visual recordings of sound sources, which can be talking faces or musical instruments. Most of the audio-visual recordings (19) are videos from YouTube, except V8, which is from [1]. In addition, the video V7 was also used in [2][3], and V16 was used in [3]. All 20 videos were annotated by the authors in a uniform manner.
The videos in the dataset have an average duration of 10 seconds, and they are all recorded by one camera and one microphone. The audio files (.wav) were sampled at 16 kHz for V7, V8 and V16, and at 44.1 kHz for the rest. The video frames contain the sound-making object (sound source) and distracting objects (e.g. pedestrians on the street), while the audio signals consist of the sound produced by the sound source (human speech or instrumental music), environmental noise and sometimes other sounds. The distracting objects and other irrelevant noise/sounds are not present in all videos. The primary usage of the dataset is to evaluate the performance of sound source localization methods in the presence of distracting motions and noise.
[1] Kidron, Einat, Yoav Y. Schechner, and Michael Elad. "Pixels that sound." Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. Vol. 1. IEEE, 2005.
[2] Izadinia, Hamid, Imran Saleemi, and Mubarak Shah. "Multimodal analysis for identification and segmentation of moving-sounding objects." IEEE Transactions on Multimedia 15.2 (2013): 378-390.
[3] Li, Kai, Jun Ye, and Kien A. Hua. "What's making that sound?" Proceedings of the 22nd ACM International Conference on Multimedia. ACM, 2014. | Provide a detailed description of the following dataset: BAVL |