Columns: dataset_name (string, 2-128 chars), description (string, 1-9.7k chars), prompt (string, 59-185 chars)
CalMS21
The Caltech Mouse Social Interactions (CalMS21) dataset is a multi-agent dataset from behavioral neuroscience. The dataset consists of trajectory data of social interactions, recorded from videos of freely behaving mice in a standard resident-intruder assay. The CalMS21 dataset is part of the Multi-Agent Behavior Challenge 2021. To help accelerate behavioral studies, the CalMS21 dataset provides a benchmark to evaluate the performance of automated behavior classification methods in three settings: (1) for training on large behavioral datasets all annotated by a single annotator, (2) for style transfer to learn inter-annotator differences in behavior definitions, and (3) for learning of new behaviors of interest given limited training data. The dataset consists of 6 million frames of unlabelled tracked poses of interacting mice, as well as over 1 million frames with tracked poses and corresponding frame-level behavior annotations. The challenge of the dataset is to classify behaviors accurately using both labelled and unlabelled tracking data, and to generalize to new annotators and behaviors.
Provide a detailed description of the following dataset: CalMS21
HumAID
Social networks are widely used for information consumption and dissemination, especially during time-critical events such as natural disasters. Despite its significantly large volume, social media content is often too noisy for direct use in any application. Therefore, it is important to filter, categorize, and concisely summarize the available content to facilitate effective consumption and decision-making. To address these issues, automatic classification systems have been developed using supervised modeling approaches, thanks to earlier efforts on creating labeled datasets. However, existing datasets are limited in different aspects (e.g., size, presence of duplicates) and are less suitable to support more advanced and data-hungry deep learning models. HumAID is a large-scale dataset for crisis informatics research with ~77K human-labeled tweets, sampled from a pool of ~24 million tweets across 19 disaster events that happened between 2016 and 2019. The annotations in the provided dataset consist of the following humanitarian categories. The dataset consists only of English tweets and is the largest dataset for crisis informatics so far. Humanitarian categories: * Caution and advice * Displaced people and evacuations * Don't know can't judge * Infrastructure and utility damage * Injured or dead people * Missing or found people * Not humanitarian * Other relevant information * Requests or urgent needs * Rescue volunteering or donation effort * Sympathy and support
Provide a detailed description of the following dataset: HumAID
PlasticineLab
PlasticineLab is a differentiable physics benchmark, which includes a diverse collection of soft body manipulation tasks. In each task, the agent uses manipulators to deform the plasticine into the desired configuration. The underlying physics engine supports differentiable elastic and plastic deformation using the DiffTaichi system, posing many under-explored challenges to robotic agents.
Provide a detailed description of the following dataset: PlasticineLab
DFUC2021
The Diabetic Foot Ulcers dataset (DFUC2021) is a dataset for analysis of pathology, focusing on infection and ischaemia. The final release of DFUC2021 consists of 15,683 DFU patches, with 5,955 for training, 5,734 for testing, and 3,994 unlabeled DFU patches. The ground truth labels are four classes, i.e. control, infection, ischaemia and both conditions.
Provide a detailed description of the following dataset: DFUC2021
UAV-Human
UAV-Human is a large dataset for human behavior understanding with UAVs. It contains 67,428 multi-modal video sequences and 119 subjects for action recognition, 22,476 frames for pose estimation, 41,290 frames and 1,144 identities for person re-identification, and 22,263 frames for attribute recognition. The dataset was collected by a flying UAV in multiple urban and rural districts in both daytime and nighttime over three months, hence covering extensive diversity w.r.t. subjects, backgrounds, illuminations, weather conditions, occlusions, camera motions, and UAV flying attitudes. This dataset can be used for UAV-based human behavior understanding, including action recognition, pose estimation, re-identification, and attribute recognition.
Provide a detailed description of the following dataset: UAV-Human
Criteo Attribution Modeling Dataset
Content of this dataset: this dataset includes the following files: README.md; criteo_attribution_dataset.tsv.gz: the dataset itself (623M compressed); Experiments.ipynb: an IPython notebook with code and utilities to reproduce the results in the paper, which can also be used as a starting point for further research on this data (it requires Python 3 and standard scientific libraries such as pandas, numpy and sklearn). Data description: this dataset represents a sample of 30 days of Criteo live traffic data. Each line corresponds to one impression (a banner) that was displayed to a user. For each banner we have detailed information about the context, whether it was clicked, whether it led to a conversion, and whether that conversion was attributed to Criteo or not. Data has been sub-sampled and anonymized so as not to disclose proprietary elements. Here is a detailed description of the fields (they are tab-separated in the file): * timestamp: timestamp of the impression (starting from 0 for the first impression); the dataset is sorted according to timestamp * uid: a unique user identifier * campaign: a unique identifier for the campaign * conversion: 1 if there was a conversion in the 30 days after the impression (independently of whether this impression was last click or not) * conversion_timestamp: the timestamp of the conversion, or -1 if no conversion was observed * conversion_id: a unique identifier for each conversion (so that timelines can be reconstructed if needed); -1 if there was no conversion * attribution: 1 if the conversion was attributed to Criteo, 0 otherwise * click: 1 if the impression was clicked, 0 otherwise * click_pos: the position of the click before a conversion (0 for first click) * click_nb: number of clicks; more than 1 if there were several clicks before a conversion * cost: the price paid by Criteo for this display (disclaimer: not the real price, only a transformed version of it) * cpo: the cost-per-order in case of an attributed conversion (disclaimer: not the real price, only a transformed version of it) * time_since_last_click: the time since the last click (in seconds) for the given impression * cat[1-9]: contextual features associated with the display that can be used to learn the click/conversion models; we do not disclose the meaning of these features but it is not relevant for this study; each column is a categorical variable, and in the experiments they are mapped to a fixed-dimensionality space using the hashing trick (see paper for reference). Key figures: 2.4 GB uncompressed, 16.5M impressions, 45K conversions, 700 campaigns. Tasks: this dataset can be used in a large scope of applications related to real-time bidding, including but not limited to attribution modeling (rule-based, model-based, etc.), conversion modeling in display advertising (the data includes cost and value used for computing utility metrics), and offline metrics for real-time bidding.
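The field list above maps directly onto a tabular loading step; a minimal, hedged sketch in Python (pandas, which the description itself mentions) is shown below. Whether the TSV ships with a header row is not stated here, so passing the column names explicitly is an assumption.

```python
# Hedged loading sketch for criteo_attribution_dataset.tsv.gz.
# Assumption: the file has no header row, so the field names from the
# description are passed via `names=`; drop `names=`/`header=` if it does.
import pandas as pd

FIELDS = [
    "timestamp", "uid", "campaign", "conversion", "conversion_timestamp",
    "conversion_id", "attribution", "click", "click_pos", "click_nb",
    "cost", "cpo", "time_since_last_click",
] + [f"cat{i}" for i in range(1, 10)]  # cat1..cat9: anonymized categorical features

df = pd.read_csv(
    "criteo_attribution_dataset.tsv.gz",  # 623M compressed, tab-separated
    sep="\t",
    names=FIELDS,
    header=None,
    compression="gzip",
)

# Quick sanity checks on the binary fields described above.
print("click-through rate:", df["click"].mean())
print("share of conversions attributed to Criteo:",
      df.loc[df["conversion"] == 1, "attribution"].mean())
```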
Provide a detailed description of the following dataset: Criteo Attribution Modeling Dataset
BSTC
BSTC (Baidu Speech Translation Corpus) is a large-scale dataset for automatic simultaneous interpretation. BSTC version 1.0 contains 50 hours of real speeches, including three parts: the audio files, the transcripts, and the translations. The corpus can be used to build automatic simultaneous interpretation systems. The corpus is collected from Mandarin Chinese talks and reports covering science, technology, culture, economy, etc. The utterances in the talks and reports are carefully transcribed into Chinese text and further translated into English text. The sentence boundary is determined by the English text instead of the Chinese text, which is analogous to previous related corpora (TED and the Translation Augmented LibriSpeech Corpus). The corpus is divided into training/development/test datasets. In each dataset, there are three types of files: 1. Acoustic signal files, named baidu_XX.wav, where XX is an identifying code. All signal files are encoded in Waveform Audio File Format (WAVE) from a mono recording, with a sample rate of 16 kHz and a bit resolution of 16 bits (2 bytes). 2. Description files, encoded in JSON format for each utterance, including the corresponding description information for each acoustic signal file, such as translation, transcript, duration, offset and so on.
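As a rough illustration of the file layout described above, here is a hedged Python sketch for reading one recording and its JSON description file. The exact JSON structure (a list of per-utterance records with "transcript", "translation", "offset" and "duration" keys) is an assumption based on the field names mentioned, and "baidu_01" is a hypothetical stand-in for the XX identifier.

```python
# Hedged sketch of reading one BSTC recording and its JSON description file.
# Assumptions: the JSON file shares the stem of the WAV file and holds a list
# of per-utterance records; the released layout may differ.
import json
import wave

with wave.open("baidu_01.wav", "rb") as wav:
    sample_rate = wav.getframerate()            # expected: 16 kHz
    duration_s = wav.getnframes() / sample_rate
    print(f"mono 16-bit recording, {duration_s:.1f} s")

with open("baidu_01.json", encoding="utf-8") as f:
    utterances = json.load(f)

for utt in utterances:
    print(utt["offset"], utt["duration"], utt["transcript"], "->", utt["translation"])
```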
Provide a detailed description of the following dataset: BSTC
READ 2016
This dataset arises from the READ project (Horizon 2020). The dataset consists of a subset of documents from the Ratsprotokolle collection, composed of minutes of the council meetings held from 1470 to 1805 (about 30,000 pages), which will be used in the READ project. This dataset is written in Early Modern German. The number of writers is unknown. Handwriting in this collection is complex enough to challenge HTR software. The training dataset is composed of 400 pages; most of the pages consist of a single block with many difficulties for line detection and extraction. The ground truth is provided in PAGE format, annotated at line level in the PAGE files. The previous version of the dataset is located at https://zenodo.org/record/218236#.WnLhaCHhBGF; the new release additionally includes the test set corresponding to the HTR competition held at ICFHR 2016. Toselli, A.H., Romero, V., Villegas, M., Vidal, E., & Sánchez, J.A. (2018). HTR Dataset ICFHR 2016 (Version 1.2.0) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.1297399
Provide a detailed description of the following dataset: READ 2016
RIMES
The RIMES database (Reconnaissance et Indexation de données Manuscrites et de fac similÉS / Recognition and Indexing of handwritten documents and faxes) was created to evaluate automatic systems for the recognition and indexing of handwritten letters, in particular those sent by postal mail or fax by individuals to companies or administrations. The database was collected by asking volunteers to write handwritten letters in exchange for gift vouchers. Volunteers were given a fictional identity (same sex as their real one) and up to 5 scenarios. Each scenario was chosen from among the following 9 realistic themes: change of personal information (address, bank account), information request, opening and closing (customer account), modification of contract or order, complaint (bad service quality…), payment difficulties (asking for a delay, tax exemption…), reminder letter, damage declaration with further circumstances, and a destination (administrations or service providers: telephone, power, bank, insurance). The volunteers composed a letter with those pieces of information using their own words. The layout was free; volunteers were only asked to use white paper and to write legibly in black ink. The collection was a success, with more than 1,300 people participating in the creation of the RIMES database by writing up to 5 letters each. The RIMES database thus obtained contains 12,723 pages corresponding to 5,605 letters of two to three pages.
Provide a detailed description of the following dataset: RIMES
Twitter-MEL
Twitter-MEL is a multimodal entity linking (MEL) dataset built from Twitter. The dataset consists of tweets that had both text and images, with a total of 2.6M timeline tweets and 20k entities.
Provide a detailed description of the following dataset: Twitter-MEL
PhoNER COVID19
PhoNER_COVID19 is a dataset for recognising COVID-19 related named entities in Vietnamese, consisting of 35K entities over 10K sentences. The authors defined 10 entity types with the aim of extracting key information related to COVID-19 patients, which are especially useful in downstream applications. In general, these entity types can be used in the context of not only the COVID-19 pandemic but also in other future epidemics.
Provide a detailed description of the following dataset: PhoNER COVID19
CAMUS
This project aims to provide all the materials to the community to resolve the problem of echocardiographic image segmentation and volume estimation from 2D ultrasound sequences (both two and four-chamber views). To this aim, the following solutions were set up. 1. Introduction of the largest publicly-available and fully-annotated dataset for 2D echocardiographic assessment (to our knowledge). The CAMUS dataset, containing 2D apical four-chamber and two-chamber view sequences acquired from 500 patients, is made available for download. 2. Deployment of a dedicated Girder online platform. This platform aims to assess in a reproducible manner the performance of methods for segmenting cardiac structures (left ventricle endocardium and epicardium and left atrium borders) and extracting clinical indices (left ventricle volumes and ejection fraction). The CAMUS online platform is now available and will be maintained and kept open as long as the data remains relevant for clinical research.
Provide a detailed description of the following dataset: CAMUS
ORBIT
ORBIT is a real-world few-shot dataset and benchmark grounded in a real-world application of teachable object recognizers for people who are blind/low vision. The dataset contains 3,822 videos of 486 objects recorded by people who are blind/low-vision on their mobile phones, and the benchmark reflects a realistic, highly challenging recognition problem, providing a rich playground to drive research in robustness to few-shot, high-variation conditions.
Provide a detailed description of the following dataset: ORBIT
DexYCB
DexYCB is a dataset for capturing hand grasping of objects. It can be used for three relevant tasks: 2D object and keypoint detection, 6D object pose estimation, and 3D hand pose estimation. The dataset was built using 20 objects from the YCB-Video dataset, and consists of multiple trials from 10 subjects. For each trial, there is a target object with 2 to 4 other objects placed on a table. The subject is asked to start from a relaxed pose, pick up the target object, and hold it in the air. Some subjects were asked to pretend to hand over the object to someone across from them. Each action is recorded for 3 seconds, repeating the trial 5 times for each target object, each time with a random set of accompanying objects and placements. In total there are 100 trials per subject and 1,000 trials across all subjects.
Provide a detailed description of the following dataset: DexYCB
FM2
FoolMeTwice (FM2 for short) is a large dataset of challenging entailment pairs collected through a fun multi-player game. Gamification encourages adversarial examples, drastically lowering the number of examples that can be solved using "shortcuts" compared to other popular entailment datasets. Players are presented with two tasks. The first task asks the player to write a plausible claim based on the evidence from a Wikipedia page. The second one shows two plausible claims written by other players, one of which is false, and the goal is to identify it before the time runs out. Players "pay" to see clues retrieved from the evidence pool: the more evidence the player needs, the harder the claim. Game-play between motivated players leads to diverse strategies for crafting claims, such as temporal inference and diverting to unrelated evidence, and results in higher quality data for the entailment and evidence retrieval tasks.
Provide a detailed description of the following dataset: FM2
ManyTypes4Py
ManyTypes4Py is a large Python dataset for machine learning (ML)-based type inference. The dataset contains a total of 5,382 Python projects with more than 869K type annotations. Duplicate source code files were removed to eliminate the negative effect of duplication bias. To facilitate training and evaluation of ML models, the dataset was split into training, validation and test sets by files. To extract type information from abstract syntax trees (ASTs), a lightweight static analysis pipeline was developed and is provided with the dataset. Using this pipeline, the collected Python projects were analyzed and the results of the AST analysis were stored in JSON-formatted files.
Provide a detailed description of the following dataset: ManyTypes4Py
EtymDB 2.0
A multilingual etymological database extracted from Wiktionary (described in "Methodological Aspects of Developing and Managing an Etymological Lexical Resource: Introducing EtymDB-2.0").
Provide a detailed description of the following dataset: EtymDB 2.0
ContraCAT
Current approaches to context-aware MT rely on a set of surface heuristics to translate pronouns, which break down when translations require real reasoning. ContraCAT is a new template test set to assess the ability of machine translation systems to handle the specific steps necessary for successful pronoun translation.
Provide a detailed description of the following dataset: ContraCAT
SynD
SynD is a synthetic energy dataset with a focus on residential buildings. This dataset is the result of a custom simulation process that relies on power traces of household appliances. The output of the simulations is the power consumption of 21 household appliances as well as the household-wide consumption (i.e. mains). Therefore, SynD can be used for Non-Intrusive Load Monitoring, also referred to as Energy Disaggregation.
Provide a detailed description of the following dataset: SynD
Samanantar
Samanantar is the largest publicly available parallel corpora collection for Indic languages: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu. The corpus has 49.6M sentence pairs between English and Indic languages.
Provide a detailed description of the following dataset: Samanantar
MindReader
MindReader is a novel dataset providing explicit user ratings over a knowledge graph within the movie domain. The latest stable version of the dataset contains 218,794 ratings from 2,316 users over 12,206 entities, and an associated knowledge graph consisting of 18,133 movie-related entities. The dataset is collected from an online movie recommendation game, MindReader, where users are pseudo-randomly asked to provide preferences for both movie and non-movie entities (e.g., genres, actors, and directors). For each entity, users can either like it, dislike it, or state that they do not know it.
Provide a detailed description of the following dataset: MindReader
WEC-Eng
WEC-Eng is a cross-document event coreference resolution dataset extracted from English Wikipedia. Coreference links are not restricted to predefined topics. The training set includes 40,529 mentions distributed into 7,042 coreference clusters.
Provide a detailed description of the following dataset: WEC-Eng
FreSaDa
FreSaDa is a French satire dataset for cross-domain satire detection, which is composed of 11,570 articles from the news domain. The dataset samples have been split into training, validation and test, such that the training publication sources are distinct from the validation and test publication sources. This gives rise to a cross-domain (cross-source) satire detection task.
Provide a detailed description of the following dataset: FreSaDa
L3DAS21
L3DAS21 is a dataset for 3D audio signal processing. It consists of a 65-hour 3D audio corpus, accompanied by a Python API that facilitates data usage and the results submission stage. The L3DAS21 datasets contain multiple-source and multiple-perspective B-format Ambisonics audio recordings. The authors sampled the acoustic field of a large office room, placing two first-order Ambisonics microphones in the center of the room and moving a speaker reproducing the analytic signal in 252 fixed spatial positions. Relying on the collected Ambisonics impulse responses (IRs), the authors augmented existing clean monophonic datasets to obtain synthetic tridimensional sound sources by convolving the original sounds with the IRs. The dataset is divided into two main sections, respectively dedicated to the challenge tasks. The first section is optimized for 3D Speech Enhancement and contains more than 30,000 virtual 3D audio environments with durations of up to 10 seconds. In each sample, a spoken voice is always present alongside other office-like background noise. As target data for this section the authors provide the clean monophonic voice signals. The second section is dedicated to the 3D Sound Event Localization and Detection task and contains 900 60-second-long audio files. Each data point contains a simulated 3D office audio environment in which up to 3 acoustic events may be active at the same time. In this section, the samples are not forced to contain a spoken voice. As target data for this section the authors provide a list of the onset and offset timestamps, the typology class, and the spatial coordinates of each individual sound event present in the data points.
Provide a detailed description of the following dataset: L3DAS21
SI-Score
**SI-SCORE** is a synthetic dataset for the analysis of robustness to object location, rotation and size. It consists of images that vary only in factors such as object size and object location. SI-SCORE was built by taking objects and backgrounds and systematically varying object size, location and rotation angle so that the effect of changing these factors on model performance can be studied.
Provide a detailed description of the following dataset: SI-Score
RLU
RL Unplugged is a suite of benchmarks for offline reinforcement learning. RL Unplugged is designed around the following considerations: to facilitate ease of use, the datasets are provided with a unified API which makes it easy for practitioners to work with all data in the suite once a general pipeline has been established. This is a dataset accompanying the paper RL Unplugged: Benchmarks for Offline Reinforcement Learning. The suite of benchmarks focuses on the following problems: high-dimensional action spaces (for example, the humanoid locomotion domains have 56-dimensional actions); high-dimensional observations; partial observability (observations include egocentric vision); difficulty of exploration (state-of-the-art algorithms and imitation are used to generate data for difficult environments); and real-world challenges.
Provide a detailed description of the following dataset: RLU
Multifog KITTI dataset
Multifog KITTI is an augmented version of the KITTI dataset with fog added for both camera and LiDAR sensors, with visibility ranges from 20 to 80 meters, so as to best match realistic foggy environments.
Provide a detailed description of the following dataset: Multifog KITTI dataset
OSIC Pulmonary Fibrosis Progression
Imagine one day your breathing became consistently labored and shallow. Months later you were finally diagnosed with pulmonary fibrosis, a disorder with no known cause and no known cure, created by scarring of the lungs. If that happened to you, you would want to know your prognosis. That’s where a troubling disease becomes frightening for the patient: outcomes can range from long-term stability to rapid deterioration, but doctors aren’t easily able to tell where an individual may fall on that spectrum. Your help, and data science, may be able to aid in this prediction, which would dramatically help both patients and clinicians. Current methods make fibrotic lung diseases difficult to treat, even with access to a chest CT scan. In addition, the wide range of prognoses creates issues in organizing clinical trials. Finally, patients suffer extreme anxiety—in addition to fibrosis-related symptoms—from the disease’s opaque path of progression. Open Source Imaging Consortium (OSIC) is a not-for-profit, co-operative effort between academia, industry and philanthropy. The group enables rapid advances in the fight against Idiopathic Pulmonary Fibrosis (IPF), fibrosing interstitial lung diseases (ILDs), and other respiratory diseases, including emphysematous conditions. Its mission is to bring together radiologists, clinicians and computational scientists from around the world to improve imaging-based treatments. In this competition, you’ll predict a patient’s severity of decline in lung function based on a CT scan of their lungs. You’ll determine lung function based on output from a spirometer, which measures the volume of air inhaled and exhaled. The challenge is to use machine learning techniques to make a prediction with the image, metadata, and baseline FVC as input. If successful, patients and their families would better understand their prognosis when they are first diagnosed with this incurable lung disease. Improved severity detection would also positively impact treatment trial design and accelerate the clinical development of novel treatments.
Provide a detailed description of the following dataset: OSIC Pulmonary Fibrosis Progression
QMSum
**QMSum** is a new human-annotated benchmark for query-based multi-domain meeting summarisation task, which consists of 1,808 query-summary pairs over 232 meetings in multiple domains.
Provide a detailed description of the following dataset: QMSum
SVAMP
A challenge set for elementary-level Math Word Problems (MWP). An MWP consists of a short Natural Language narrative that describes a state of the world and poses a question about some unknown quantities. The examples in **SVAMP** test a model across different aspects of solving MWPs: 1) Is the model question sensitive? 2) Does the model have robust reasoning ability? 3) Is it invariant to structural alterations?
Provide a detailed description of the following dataset: SVAMP
SPARTQA
**SpartQA** is a textual question answering benchmark for spatial reasoning on natural language text, which contains more realistic spatial phenomena not covered by prior datasets and is challenging for state-of-the-art language models (LMs). SPARTQA is built on NLVR’s images containing more objects with richer spatial structures. SPARTQA’s stories are more natural, have more sentences, and are richer in spatial relations in each sentence, and the questions require deeper reasoning and have four types: find relation (FR), find blocks (FB), choose object (CO), and yes/no (YN), which allows for more fine-grained analysis of models’ capabilities. https://aclanthology.org/2021.naacl-main.364/
Provide a detailed description of the following dataset: SPARTQA
StylePTB
**StylePTB** is a fine-grained text style transfer benchmark. It consists of paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text, as well as compositions of multiple transfers which allow modelling of fine-grained stylistic changes as building blocks for more complex, high-level transfers.
Provide a detailed description of the following dataset: StylePTB
NorDial
**NorDial** is a first step towards creating a corpus of dialectal variation in written Norwegian. It consists of a small corpus of tweets manually annotated as Bokmål, Nynorsk, any dialect, or a mix.
Provide a detailed description of the following dataset: NorDial
FixMyPose
**FixMyPose** is a dataset for automated pose correction. It consists of descriptions to correct a "current" pose to look like a "target" pose, in English and Hindi. The collected descriptions have interesting linguistic properties such as egocentric relations to environment objects, analogous references, etc., requiring an understanding of spatial relations and commonsense knowledge about postures. Further, to avoid ML biases, the dataset maintains a balance across characters with diverse demographics, who perform a variety of movements in several interior environments (e.g., homes, offices). This dataset introduces the pose-correctional-captioning task and its reverse target-pose-retrieval task. During the correctional-captioning task, models must generate descriptions of how to move from the current to target pose image, whereas in the retrieval task, models should select the correct target pose given the initial pose and correctional description.
Provide a detailed description of the following dataset: FixMyPose
AcinoSet
**AcinoSet** is a dataset of free-running cheetahs in the wild that contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files and 7,588 human-annotated frames. The authors utilized markerless animal pose estimation with DeepLabCut to provide 2D keypoints (in the 119K frames). It also includes 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data.
Provide a detailed description of the following dataset: AcinoSet
Vietnamese intent detection and slot filling
This is a dataset for intent detection and slot filling for the Vietnamese language. The dataset consists of 5,871 gold annotated utterances with 28 intent labels and 82 slot types.
Provide a detailed description of the following dataset: Vietnamese intent detection and slot filling
XFORMAL
**XFORMAL** is a multilingual formal style transfer benchmark of multiple formal reformulations of informal text in Brazilian Portuguese, French, and Italian.
Provide a detailed description of the following dataset: XFORMAL
SSN
**SSN** (short for Semantic Scholar Network) is a scientific papers summarization dataset which contains 141K research papers in different domains and 661K citation relationships. The entire dataset constitutes a large connected citation graph.
Provide a detailed description of the following dataset: SSN
Global Wheat
The Global Wheat dataset is the first large-scale dataset for wheat head detection from field optical images. It includes a very large range of cultivars from different continents. Wheat is a staple crop grown all over the world and consequently interest in wheat phenotyping spans the globe. Therefore, it is important that models developed for wheat phenotyping, such as wheat head detection networks, generalize between different growing environments around the world.
Provide a detailed description of the following dataset: Global Wheat
Brain-Score
The Brain-Score platform aims to yield strong computational models of the ventral stream. We enable researchers to quickly get a sense of how their model scores against standardized brain benchmarks on multiple dimensions and facilitate comparisons to other state-of-the-art models. At the same time, new brain data can quickly be tested against a wide range of models to determine how well existing models explain the data. Brain-Score is organized by the Brain-Score team in collaboration with researchers and labs worldwide. We are working towards an easy-to-use platform where a model can easily be submitted to yield its scores on a range of brain benchmarks and new benchmarks can be incorporated to challenge the models. This quantified approach lets us keep track of how close our models are to the brain on a range of experiments (data) using different evaluation techniques (metrics). For more details, please refer to the [technical paper](https://www.biorxiv.org/content/early/2018/09/05/407007) and the [perspective paper](https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-X).
Provide a detailed description of the following dataset: Brain-Score
ACDC Scribbles
We release expert-made scribble annotations for the medical ACDC dataset [1]. The released data must be considered as extending the original ACDC dataset. The ACDC dataset contains cardiac MRI images, paired with hand-made segmentation masks. It is possible to use the segmentation masks provided in the ACDC dataset to evaluate the performance of methods trained using only scribble supervision. References: [1] Bernard, Olivier, et al. "Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved?." IEEE transactions on medical imaging 37.11 (2018): 2514-2525.
Provide a detailed description of the following dataset: ACDC Scribbles
Synthetic COVID-19 CXR Dataset
A public open dataset of synthetic chest X-ray images of COVID-19. The dataset consists of 21,295 synthetic COVID-19 chest X-ray images. Images are generated using an unsupervised domain adaptation approach by leveraging class conditioning and adversarial training from source datasets [RSNA Kaggle Dataset](https://academictorrents.com/details/95588a735c9ae4d123f3ca408e56570409bcf2a9) and [COVID-19 Image Data Collection](https://github.com/ieee8023/covid-chestxray-dataset). Implementation of the algorithm is available [here](https://github.com/hasibzunair/adversarial-lesions).
Provide a detailed description of the following dataset: Synthetic COVID-19 CXR Dataset
Twitter Stance Election 2020
The dataset contains 2,500 manually stance-labeled tweets, 1,250 for each candidate (Joe Biden and Donald Trump). These tweets were sampled from an unlabeled set of English tweets related to the 2020 US Presidential election collected by the research team. Through the Twitter Streaming API, the authors collected data using election-related hashtags and keywords. Between January 2020 and September 2020, over 5 million tweets were collected, not including quotes and retweets. Paper: Knowledge Enhanced Masked Language Model for Stance Detection
Provide a detailed description of the following dataset: Twitter Stance Election 2020
A2D Sentences
The Actor-Action Dataset (A2D) by Xu et al. [29] serves as the largest video dataset for the general actor and action segmentation task. It contains 3,782 videos from YouTube with pixel-level labeled actors and their actions. The dataset includes eight different actions, while a total of seven actor classes are considered to perform those actions. We follow [29], who split the dataset into 3,036 training videos and 746 testing videos. As we are interested in pixel-level actor and action segmentation from sentences, we augment the videos in A2D with natural language descriptions about what each actor is doing in the videos. Following the guidelines set forth in [12], we ask our annotators for a discriminative referring expression of each actor instance if multiple objects are considered in a video. The annotation process resulted in a total of 6,656 sentences, including 811 different nouns, 225 verbs and 189 adjectives. Our sentences enrich the actor and action pairs from the A2D dataset with finer granularities. For example, the actor adult in A2D may be annotated with man, woman, person and player in our sentences, while action rolling may also refer to flipping, sliding, moving and running when describing different actors in different scenarios. Our sentences contain on average more words than the ReferIt dataset [12] (7.3 vs 4.7), even when we leave out prepositions, articles and linking verbs (4.5 vs 3.6). This makes sense as our sentences contain a variety of verbs while existing referring expression datasets mostly ignore verbs.
Provide a detailed description of the following dataset: A2D Sentences
NewsCLIPpings
**NewsCLIPpings** is a dataset for detecting mismatched images and captions. Unlike previous misinformation datasets, in NewsCLIPpings both the images and captions are unmanipulated, but some of them are mismatched.
Provide a detailed description of the following dataset: NewsCLIPpings
Countix-AV
**Countix-AV** is a dataset for repetitive action counting by sight and sound created by repurposing the Countix dataset. It is created by selecting 19 categories from Countix for which the repetitive action has a clear sound, such as clapping, playing tennis, etc. The dataset contains 1,863 videos, with 987, 311 and 565 for training, validation and testing. The authors maintained the original count annotations from Countix and kept the same split (i.e. training, validation, or testing) for each video.
Provide a detailed description of the following dataset: Countix-AV
Referring Expressions for DAVIS 2016 & 2017
Our task is to localize and provide a pixel-level mask of an object on all video frames given a language referring expression obtained either by looking at the first frame only or at the full video. To validate our approach we employ two popular video object segmentation datasets, DAVIS16 [38] and DAVIS17 [42]. These two datasets introduce various challenges, containing videos with single or multiple salient objects, crowded scenes, similar looking instances, occlusions, camera view changes, fast motion, etc. DAVIS16 [38] consists of 30 training and 20 test videos of diverse object categories with all frames annotated with pixel-level accuracy. Note that in this dataset only a single object is annotated per video. For the multiple object video segmentation task we consider DAVIS17. Compared to DAVIS16, this is a more challenging dataset, with multiple objects annotated per video and more complex scenes with more distractors, occlusions, smaller objects, and fine structures. Overall, DAVIS17 consists of a training set with 60 videos, and a validation/test-dev/test-challenge set with 30 sequences each. As our goal is to segment objects in videos using language specifications, we augment all objects annotated with mask labels in DAVIS16 and DAVIS17 with non-ambiguous referring expressions. We follow the work of [34] and ask the annotator to provide a language description of the object, which has a mask annotation, by looking only at the first frame of the video. Then another annotator is given the first frame and the corresponding description, and asked to identify the referred object. If the annotator is unable to correctly identify the object, the description is corrected to remove ambiguity and to specify the object uniquely. We have collected two referring expressions per target object, annotated by non-computer vision experts (Annotators 1 and 2). However, by looking only at the 1st frame, the obtained referring expressions may potentially be invalid for an entire video. (We actually quantified that only ∼15% of the collected descriptions become invalid over time, and this does not strongly affect segmentation results, as the temporal consistency step helps to disambiguate some of these cases; see the supp. material for details.) Besides, in many applications, such as video editing or video-based advertisement, the user has access to a full video. Providing a language query which is valid for all frames might decrease the editing time and result in more coherent predictions. Thus, on DAVIS17 we asked the workers to provide a description of the object by looking at the full video. We have collected one expression of the full video type per target object. Future work may choose to use either setting. The average length of the first frame/full video expressions is 5.5/6.3 words. For DAVIS17 first frame annotations we notice that descriptions given by Annotator 1 are longer than the ones by Annotator 2 (6.4 vs. 4.6 words). We evaluate the effect of description length on the grounding performance in §5. Besides, the expressions relevant to a full video mention verbs more often than the first frame descriptions (44% vs. 25%). This is intuitive, as referring to an object which changes its appearance and position over time may require mentioning its actions. Adjectives are present in over 50% of all annotations. Most of them refer to colors (over 70%), shapes and sizes (7%) and spatial/ordering words (6% for first frame vs. 13% for full video expressions). The full video expressions also have a higher number of adverbs and prepositions, and overall are more complex than the ones provided for the first frame. Overall, the augmented DAVIS16/17 contains ∼1.2k referring expressions for more than 400 objects in 150 videos with ∼10k frames. We believe the collected data will be of interest to the segmentation as well as vision and language communities, providing an opportunity to explore language as an alternative input for video object segmentation.
Provide a detailed description of the following dataset: Referring Expressions for DAVIS 2016 & 2017
IIIT-ILST
**IIIT-ILST** is a dataset and benchmark for scene text recognition for three Indic scripts - Devanagari, Telugu and Malayalam. IIIT-ILST contains nearly 1,000 real images per script, annotated for scene text bounding boxes and transcriptions.
Provide a detailed description of the following dataset: IIIT-ILST
A2Dre
We obtain A2Dre by selecting only instances that were labeled as non-trivial, yielding 433 REs from 190 videos. We do not use the trivial cases, as the analysis of such examples is not relevant: their referents can be described using the category alone. Each annotator was presented with an RE, a video in which the target object was marked by a bounding box, and a set of questions paraphrasing our categories. A2Dre was annotated by 3 authors of the paper. Our final set of category annotations used for analysis was derived by means of majority voting: for each non-trivial RE, we kept all category labels which were assigned to the RE by at least two annotators.
Provide a detailed description of the following dataset: A2Dre
A2Dre+
A2Dre is a subset of the A2D test set including 433 *non-trivial* REs. Due to its highly unbalanced distribution across the 7 semantic categories, we select the 4 major categories: *appearance*, *location*, *motion* and *static*. The four categories have in common that, in most cases, for a given referent a RE can be provided that expresses a certain category, and one that does not. We use these categories to augment A2Dre with additional REs, which vary according to the presence or absence of each of them. Specifically, based on our categorization of the original REs, for each RE re and category C, we produce an additional RE re' by modifying re slightly such that it does (or does not) express C. For example, for the RE *girl in yellow dress standing near the woman*, which could be categorized as *appearance*, *location*, no *motion* and *static*, we produce new REs for each category: *girl standing near the woman* (no *appearance*), *girl in yellow dress standing* (no *location*), *girl in yellow dress walking* (*motion*) and *girl in yellow dress near the woman* (no *static*). We do not apply this procedure for *category*, since it is expressed in almost all REs, and its removal may be difficult in many cases. We name this extended dataset A2Dre+.
Provide a detailed description of the following dataset: A2Dre+
RGB-D-D
**RGB-D-D** is a large-scale dataset for depth map super-resolution (SR). It consists of real-world paired low-resolution (LR) and high-resolution (HR) depth maps. The paired LR and HR depth maps are captured by a mobile phone and a Lucid Helios camera, respectively, and range from indoor scenes to challenging outdoor scenes.
Provide a detailed description of the following dataset: RGB-D-D
WikiEvents
**WikiEvents** is a document-level event extraction benchmark dataset which includes complete event and coreference annotation.
Provide a detailed description of the following dataset: WikiEvents
RaindropsOnWindshield
**RaindropsOnWindshield** is a dataset for training and assessing the performance of vision algorithms on different tasks of image artifact detection on either the camera lens or the windshield. The dataset contains 8,190 images, of which 3,390 contain raindrops. Images are annotated with a binary mask representing areas with raindrops.
Provide a detailed description of the following dataset: RaindropsOnWindshield
How2Sign
How2Sign is a multimodal and multiview continuous American Sign Language (ASL) dataset consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities including speech, English transcripts, and depth. A three-hour subset was further recorded in the Panoptic studio, enabling detailed 3D pose estimation.
Provide a detailed description of the following dataset: How2Sign
TNL2K
**Tracking by Natural Language** (**TNL2K**) is constructed for the evaluation of tracking by natural language specification. TNL2K features: - Large-scale: 2,000 sequences, containing 1,244,340 frames and 663 words, with 1,300/700 sequences for training/testing respectively - High-quality: Manual annotation with careful inspection in each frame - Multi-modal: Providing visual and language annotation for each sequence - Adversarial-samples: Randomly adding adversarial samples for research on adversarial attack and defence - Significant-appearance-variation: Containing videos with cloth/face change for pedestrians - Heterogeneous: Containing RGB, thermal, cartoon, and synthetic data - Multiple-baseline: Tracking-by-BBox, Tracking-by-Language, Tracking-by-Joint-BBox-Language
Provide a detailed description of the following dataset: TNL2K
ElBa
ElBa is composed of procedurally generated realistic renderings in which element shapes, colors and their distribution are varied in a continuous way to generate 30K texture images with different local symmetry, stationarity, and density of (3M) localized texels, whose attributes are thus known by construction. [Download](https://drive.google.com/file/d/1YGmDjfz2S4dOLmz0nrjZOJbJuI4h58Rv)
Provide a detailed description of the following dataset: ElBa
MS^2
**MS^2** (Multi-Document Summarization of Medical Studies) is a dataset of over 470k documents and 20k summaries derived from the scientific literature. This dataset facilitates the development of systems that can assess and aggregate contradictory evidence across multiple studies, and is one of the first large-scale, publicly available multi-document summarization datasets in the biomedical domain.
Provide a detailed description of the following dataset: MS^2
CarFusion
We provide manual annotations of 14 semantic keypoints for 100,000 car instances (sedan, SUV, bus, and truck) from 53,000 images captured from 18 moving cameras at multiple intersections in Pittsburgh, PA. Please fill out the Google form to receive an email with the download links.
Provide a detailed description of the following dataset: CarFusion
Subjective Discourse
This is a discourse dataset with multiple and subjective interpretations of English conversation in the form of perceived conversation acts and intents. The dataset consists of witness testimonials in U.S. congressional hearings.
Provide a detailed description of the following dataset: Subjective Discourse
WMT19 Metrics Task
This shared task will examine automatic evaluation metrics for machine translation. The goals of the shared metrics task are: To achieve the strongest correlation with human judgement of translation quality; To illustrate the suitability of an automatic evaluation metric as a surrogate for human evaluation; To address problems associated with comparison with a single reference translation; To move automatic evaluation beyond system-level ranking to finer-grained sentence-level ranking. All datasets for this task are available [here](http://www.statmt.org/wmt19/metrics-task.html).
Provide a detailed description of the following dataset: WMT19 Metrics Task
ML-CB
In this paper, we develop a new privacy enhancing tool: ML-CB—a means of using distinguishable pictorial information combined with underlying website source code to produce accurate and robust machine learning classifiers able to discern fingerprinting (i.e., surreptitious tracking) from non-fingerprinting canvas-based actions. The data introduced in the paper is collected by scraping roughly half a million websites using a custom Google Chrome extension storing information related to the canvas.
Provide a detailed description of the following dataset: ML-CB
Eedi Dataset
The **Eedi dataset** contains two school years (September 2018 to May 2020) of students’ answers to mathematics questions from Eedi, a leading educational platform which millions of students around the globe interact with daily. Eedi offers diagnostic questions to students from primary to high school (roughly between 7 and 18 years old). Each diagnostic question is a multiple-choice question with 4 possible answer choices, exactly one of which is correct. Currently, the platform mainly focuses on mathematics questions. The data is split for different tasks: 1 & 2: answer prediction, 3: predict question quality, and 4: recommend questions. The total number of answer records in the training sets for these tasks exceeds 17 million, making it one of the largest educational datasets to date. We also provide extensive metadata on questions, students and answers.
Provide a detailed description of the following dataset: Eedi Dataset
KolektorSDD2
**KolektorSDD2** is a surface-defect detection dataset with over 3000 images containing several types of defects, obtained while addressing a real-world industrial problem. The dataset consists of: * 356 images with visible defects * 2979 images without any defect * image sizes of approximately 230 x 630 pixels * train set with 246 positive and 2085 negative images * test set with 110 positive and 894 negative images * several different types of defects (scratches, minor spots, surface imperfections, etc.)
Provide a detailed description of the following dataset: KolektorSDD2
Quasimodo
Quasimodo is a commonsense knowledge base that focuses on salient properties of objects. We provide several subsets: * Positive statements only * Positive statements top 10% * Negated statements only * Occupations * Positive statements * Negative statements * Animals * Positive statements * Negative statements * Culture * Positive statements * Negative statements * ConceptNet-mapped statements Image source: [https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/commonsense/quasimodo](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/commonsense/quasimodo)
Provide a detailed description of the following dataset: Quasimodo
HO-3D
A hand-object interaction dataset with 3D pose annotations of hands and objects. The dataset contains 66,034 training images and 11,524 test images from a total of 68 sequences. The sequences are captured in multi-camera and single-camera setups and contain 10 different subjects manipulating 10 different objects from the YCB dataset. The annotations are automatically obtained using an optimization algorithm. The hand pose annotations for the test set are withheld, and the accuracy of algorithms on the test set can be evaluated with standard metrics using the CodaLab challenge submission (see project page). The object pose annotations for the test and train sets are provided along with the dataset.
Provide a detailed description of the following dataset: HO-3D
DogFaceNet
A dog face dataset for dog face verification and recognition/identification.
Provide a detailed description of the following dataset: DogFaceNet
Retailrocket
The dataset consists of three files: a file with behaviour data (events.csv), a file with item properties (itemproperties.csv) and a file which describes the category tree (categorytree.csv). The data has been collected from a real-world ecommerce website. It is raw data, i.e. without any content transformations; however, all values are hashed due to confidentiality issues. The purpose of publishing the data is to motivate research in the field of recommender systems with implicit feedback.
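A minimal Python sketch for loading the three files named above is given below; the column names used (visitorid, itemid, event) are assumptions about the CSV headers, not documented here, and may need adjusting to the released files.

```python
# Hedged loading sketch for the Retailrocket files, using pandas.
import pandas as pd

events = pd.read_csv("events.csv")              # behaviour data: views, add-to-carts, transactions
item_props = pd.read_csv("itemproperties.csv")  # hashed, time-stamped item properties
categories = pd.read_csv("categorytree.csv")    # child/parent category ids

# Implicit-feedback example: count interactions per (visitor, item) pair.
interactions = (
    events.groupby(["visitorid", "itemid"])["event"]
    .count()
    .rename("n_interactions")
    .reset_index()
)
print(interactions.head())
```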
Provide a detailed description of the following dataset: Retailrocket
Ulm-TSST
**Ulm-TSST** is a dataset for continuous emotion (valence and arousal) prediction and 'physiological-emotion' prediction. It is a multimodal, richly annotated dataset of self-reported and external dimensional ratings of emotion and mental well-being. After a brief period of preparation, the subjects are asked to give an oral presentation within a job-interview setting. Ulm-TSST includes biological recordings, such as Electrocardiogram (ECG), Electrodermal Activity (EDA), Respiration, and Heart Rate (BPM), as well as continuous arousal and valence annotations. With 105 participants (69.5% female) aged between 18 and 39 years, a total of 10 hours were accumulated.
Provide a detailed description of the following dataset: Ulm-TSST
OmniFlow
**OmniFlow** is a synthetic omnidirectional human optical flow dataset. Based on a rendering engine, the authors created a naturalistic 3D indoor environment with textured rooms, characters, actions, objects, illumination and motion blur, where all components of the environment are shuffled during the data capturing process. The simulation outputs rendered images of household activities and the corresponding forward and backward optical flow. The dataset consists of 23,653 image pairs and corresponding forward and backward optical flow.
Provide a detailed description of the following dataset: OmniFlow
hERG
**hERG** is a large-scale biophysics federated molecular dataset related to cardiac toxicity. It consists of 10,572 compounds, with an average of 29.39 nodes and 94.09 edges in each graph.
Provide a detailed description of the following dataset: hERG
RTC
**RTC** is a benchmark corpus of social media comments sampled over three years. The corpus consists of 36.36m unlabelled comments for adaptation and evaluation on an upstream masked language modelling task as well as 0.9m labelled comments for finetuning and evaluation on a downstream document classification task. The Reddit Time Corpus (RTC) covers three years between March 2017 and February 2020 and is split into 36 evenly-sized monthly subsets based on comment timestamps. RTC is sampled from the Pushshift Reddit dataset.
Provide a detailed description of the following dataset: RTC
Follicular-Segmentation
The **Follicular-Segmentation** dataset consists of 6,900 cropped typical image patches of 1024x1024 pixels containing follicular areas, colloid areas, and other blank background areas. Image source: [https://github.com/bupt-ai-cz/Hybrid-Model-Enabling-Highly-Efficient-Follicular-Segmentation](https://github.com/bupt-ai-cz/Hybrid-Model-Enabling-Highly-Efficient-Follicular-Segmentation)
Provide a detailed description of the following dataset: Follicular-Segmentation
Semantic Textual Similarity (2012 - 2016)
Semantic Textual Similarity (2012 - 2016) involves a set of semantic textual similarity datasets that were part of previous shared tasks (2012-2016): STS12 - [ Semeval-2012 task 6: A pilot on semantic textual similarity](https://www.aclweb.org/anthology/S12-1051/) STS13 - [SEM 2013 shared task: Semantic Textual Similarity](https://www.aclweb.org/anthology/S13-1004/) STS14 - [SemEval-2014 task 10: Multilingual semantic textual similarity](https://www.aclweb.org/anthology/S14-2010/) STS15 - [SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability](https://www.aclweb.org/anthology/S15-2045/) STS16 - [SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation](https://www.aclweb.org/anthology/S16-1081/)
Provide a detailed description of the following dataset: Semantic Textual Similarity (2012 - 2016)
JUSThink Dialogue and Actions Corpus
The JUSThink Dialogue and Actions Corpus contains dialogue transcripts, event logs, and test responses of children aged 9 through 12 as they participate in JUSThink, a robot-mediated human-human collaborative learning activity in which teams of two children solve a problem on graphs together. The dataset consists of three parts: * **transcripts**: anonymised dialogue transcripts for 10 teams of two children * **logs**: anonymised event logs for 39 teams of two children * **test responses**: pre-test and post-test responses for 39 teams, and the key, i.e. the correct responses
Provide a detailed description of the following dataset: JUSThink Dialogue and Actions Corpus
MediaSpeech
**MediaSpeech** is a media speech dataset (you might have guessed this) built with the purpose of testing Automated Speech Recognition (ASR) systems performance. The dataset consists of short speech segments automatically extracted from media videos available on YouTube and manually transcribed, with some pre- and post-processing. The dataset contains 10 hours of speech for each language provided. This release contains audio datasets in French, Arabic, Turkish and Spanish, and is a part of a larger private dataset.
Provide a detailed description of the following dataset: MediaSpeech
NISQA Speech Quality Corpus
The NISQA Corpus includes more than 14,000 speech samples with simulated (e.g. codecs, packet-loss, background noise) and live (e.g. mobile phone, Zoom, Skype, WhatsApp) conditions. Each file is labelled with subjective ratings of the overall quality and the quality dimensions Noisiness, Coloration, Discontinuity, and Loudness. In total, it contains more than 97,000 human ratings for each of the dimensions and the overall MOS. The NISQA Speech Quality Corpus contains two training, two validation and four test datasets: - NISQA_TRAIN_SIM and NISQA_VAL_SIM: contain simulated distortions with speech samples from four different datasets. Divided into a training and a validation set. - NISQA_TRAIN_LIVE and NISQA_VAL_LIVE: contain live phone and Skype recordings with Librivox audiobook samples. Divided into a training and a validation set. - NISQA_TEST_LIVETALK: contains recordings of real phone and VoIP calls. - NISQA_TEST_FOR: contains live and simulated conditions with speech samples from the forensic speech dataset. - NISQA_TEST_NSC: contains live and simulated conditions with speech samples from the NSC dataset. - NISQA_TEST_P501: contains live and simulated conditions with speech samples from ITU-T Rec. P.501. The datasets are provided under the original terms of the used source speech and noise samples. Please see the individual readme and license files in each of the dataset folders within the NISQA_Corpus.zip for more details about the datasets and the licenses. Generally, all of the files in this corpus can be used for non-commercial research purposes, and some of the datasets can also be used for commercial purposes.
Provide a detailed description of the following dataset: NISQA Speech Quality Corpus
IBM Debater Mention Detection Benchmark
This dataset contains general and named entity annotations on both clean written text and noisy speech data. It includes 1000 sentences from Wikipedia and 1000 sentences of speech data that appear in two forms: (1) transcribed manually, and (2) the output of an ASR engine. Each of the datasets includes a total of around 6500 mentions linked to their DBpedia pages.
Provide a detailed description of the following dataset: IBM Debater Mention Detection Benchmark
HopeEDI
Over the past few years, systems have been developed to control online content and eliminate abusive, offensive or hate speech content. However, people in power sometimes misuse this form of censorship to obstruct the democratic right of freedom of speech. Therefore, it is imperative that research also take a positive-reinforcement approach towards online content that is encouraging, positive and supportive. Until now, most studies have focused on solving this problem of negativity in the English language, though the problem extends beyond harmful content and is multilingual as well. Thus, we have constructed a Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting. We determined the inter-annotator agreement of our dataset using Krippendorff’s alpha. Further, we created several baselines to benchmark the resulting dataset, and the results are reported using precision, recall and F1-score. The dataset is publicly available for the research community. We hope that this resource will spur further research on encouraging inclusive and responsive speech that reinforces positiveness.
Provide a detailed description of the following dataset: HopeEDI
Apolloscape Trajectory
The **Apolloscape Trajectory** dataset consists of camera-based images, LiDAR-scanned point clouds, and manually annotated trajectories. It was collected under various lighting conditions and traffic densities in Beijing, China. More specifically, it contains highly complicated traffic flows mixed with vehicles, riders, and pedestrians.
Provide a detailed description of the following dataset: Apolloscape Trajectory
Apolloscape Inpainting
The **Inpainting** dataset consists of synchronized labeled images and LiDAR-scanned point clouds, captured by the HESAI Pandora All-in-One Sensing Kit. It was collected under various lighting conditions and traffic densities in Beijing, China.
Provide a detailed description of the following dataset: Apolloscape Inpainting
GooAQ
GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 million answers collected from Google. GooAQ questions are collected semi-automatically from the Google search engine using its autocomplete feature. This results in naturalistic questions of practical interest that are nonetheless short and expressed using simple language. GooAQ answers are mined from Google's responses to the collected questions, specifically from the answer boxes in the search results. This yields a rich space of answer types, containing both textual answers (short and long) as well as more structured ones such as collections.
Provide a detailed description of the following dataset: GooAQ
DiS-ReX
**DiS-ReX** is a multilingual dataset for distantly supervised (DS) relation extraction (RE). The dataset has over 1.5 million instances, spanning 4 languages (English, Spanish, German and French). The dataset has 36 positive relation types + 1 no relation (NA) class.
Provide a detailed description of the following dataset: DiS-ReX
Concadia
**Concadia** is a publicly available Wikipedia-based corpus, which consists of 96,918 images with corresponding English-language descriptions, captions, and surrounding context.
Provide a detailed description of the following dataset: Concadia
XLEnt
XLEnt consists of parallel entity mentions in 120 languages aligned with English. These entity pairs were constructed by performing named entity recognition (NER) and typing on English sentences from mined sentence pairs. The extracted English entity labels and types were then projected to the non-English sentences through word alignment. Word alignment was performed by combining three alignment signals ((1) word co-occurrence alignment with FastAlign, (2) semantic alignment using LASER embeddings, and (3) phonetic alignment via transliteration) into a unified word-alignment model. This lexical/semantic/phonetic alignment approach yielded more than 160 million aligned entity pairs in 120 languages paired with English. Recognizing that each English entity is often aligned to multiple entities in different target languages, one can join on English entities to obtain aligned entity pairs that directly pair two non-English entities (e.g., Arabic-French).
Provide a detailed description of the following dataset: XLEnt
TREC-COVID
TREC-COVID is a community evaluation designed to build a test collection that captures the information needs of biomedical researchers using the scientific literature during a pandemic. One of the key characteristics of pandemic search is the accelerated rate of change: the topics of interest evolve as the pandemic progresses and the scientific literature in the area explodes. The COVID-19 pandemic provides an opportunity to capture this progression as it happens. TREC-COVID, in creating a test collection around COVID-19 literature, is building infrastructure to support new research and technologies in pandemic search.
Provide a detailed description of the following dataset: TREC-COVID
NFCorpus
**NFCorpus** is a full-text English retrieval data set for Medical Information Retrieval. It contains a total of 3,244 natural language queries (written in non-technical English, harvested from the NutritionFacts.org site) with 169,756 automatically extracted relevance judgments for 9,964 medical documents (written in a complex terminology-heavy language), mostly from PubMed.
Provide a detailed description of the following dataset: NFCorpus
CQADupStack
CQADupStack is a benchmark dataset for community question-answering research. It contains threads from twelve StackExchange subforums, annotated with duplicate question information. Pre-defined training and test splits are provided, both for retrieval and classification experiments, to ensure maximum comparability between different studies using the set. Furthermore, it comes with a script to manipulate the data in various ways.
Provide a detailed description of the following dataset: CQADupStack
SciFact
**SciFact** is a dataset of 1.4K expert-written claims, paired with evidence-containing abstracts annotated with veracity labels and rationales.
Provide a detailed description of the following dataset: SciFact
Co/FeMn bilayers
Measurements of Co/FeMn bilayers.
Provide a detailed description of the following dataset: Co/FeMn bilayers
BoostCLIR
**BoostCLIR** is a bilingual (Japanese-English) corpus of patent abstracts, extracted from the MAREC patent data and from the NTCIR PatentMT workshop collections, accompanied by relevance judgements for the task of patent prior-art search. **Important:** The English side of the corpus contains patent IDs as well as the text of the abstracts. The Japanese side contains only patent IDs because of NTCIR copyright restrictions. The Japanese patent abstracts can be extracted from full-text Japanese patent documents, which are available from the organizers of the NTCIR workshop.
Provide a detailed description of the following dataset: BoostCLIR
ConferenceVideoSegmentationDataset
This is a video and image segmentation dataset for human heads and shoulders, relevant for creating elegant media for videoconferencing and virtual reality applications. The source data comprises ten online conference-style green-screen videos. The authors extracted 3,600 frames from the videos, generated ground-truth masks for each person in the video, and then applied virtual backgrounds to the frames to generate the training/testing sets.
Provide a detailed description of the following dataset: ConferenceVideoSegmentationDataset
DeCOCO
**DeCOCO** is a bilingual (English-German) corpus of image descriptions, where the English part is extracted from the COCO dataset and the German part consists of translations by a native German speaker.
Provide a detailed description of the following dataset: DeCOCO
HumanMT
**HumanMT** is a collection of human ratings and corrections of machine translations. It consists of two parts: the first part contains five-point and pairwise sentence-level ratings, and the second part contains error markings and corrections. Details are described below. I. Sentence-level ratings: a collection of five-point and pairwise ratings for 1000 German-English machine translations of TED talks (IWSLT 2014). The ratings were collected with the purpose of assessing the reliability and learnability of machine translation quality ratings in order to improve a neural machine translation model with human reinforcement (see publications). II. Error markings and corrections: a collection of word-level error markings and post-edits/corrections for 3120 English-German machine-translated sentences from 30 selected TED talks (IWSLT 2017). Each sentence received either a correction or a marking of errors from human annotators. This data was collected with the purpose of comparing annotation cost and quality, as well as potential for downstream machine translation improvements, between annotation modes (see publications).
Provide a detailed description of the following dataset: HumanMT
MVP
**MVP** is a multi-view partial point cloud dataset containing over 100,000 high-quality scans, which renders partial 3D shapes from 26 uniformly distributed camera poses for each 3D CAD model.
Provide a detailed description of the following dataset: MVP
MetaCLIR
This dataset adds textual meta-information to two existing corpora for cross-language information retrieval: BoostCLIR and the Large-Scale CLIR Dataset (wiki-clir).
Provide a detailed description of the following dataset: MetaCLIR
WiTA
**WiTA** (Writing in The Air) is a dataset for the challenging writing in the air (WiTA) task -- an elaborate task bridging vision and NLP. The dataset consists of five sub-datasets in two languages (Korean and English) and amounts to 209,926 video instances from 122 participants. Finger movement for WiTA is captured with RGB cameras to ensure wide accessibility and cost-efficiency.
Provide a detailed description of the following dataset: WiTA
Large-Scale CLIR Dataset
The Large-Scale CLIR Dataset is a retrieval dataset built for Cross-Language Information Retrieval (CLIR). The dataset is derived from Wikipedia and contains more than 2.8 million English single-sentence queries with relevant documents from 25 other selected languages.
Provide a detailed description of the following dataset: Large-Scale CLIR Dataset
NLmaps
There are two versions of the NLmaps corpus. NLmaps (v1) and its extension NLmaps v2. Both versions of the NLmaps corpus consist of questions about geographical facts that can be answered with the OpenStreetMap (OSM) database (available under the Open Database Licence). The questions are in English and have a corresponding Machine Readable Language (MRL) parse. Gold answers can be obtained by executing the gold parses against the OSM database using the NLmaps backend, which is based on the Overpass-API (available under the Affero GPL v3).
Provide a detailed description of the following dataset: NLmaps
SciGen
**SciGen** is a challenge dataset for the task of reasoning-aware data-to-text generation consisting of tables from scientific articles and their corresponding descriptions. The unique properties of SciGen are that (1) tables mostly contain numerical values, and (2) the corresponding descriptions require arithmetic reasoning. SciGen is therefore the first dataset that assesses the arithmetic reasoning capabilities of generation models on complex input structures, i.e., tables from scientific articles. SciGen opens new avenues for future research in reasoning-aware text generation and evaluation. The dataset consists of 1.3K pairs of tables with their descriptions, with an average of 53 cells in each table.
Provide a detailed description of the following dataset: SciGen
PatTR
**PatTR** is a sentence-parallel corpus extracted from the MAREC patent collection. The current version contains more than 22 million German-English and 18 million French-English parallel sentences collected from all patent text sections as well as 5 million German-French sentence pairs from patent titles, abstracts and claims.
Provide a detailed description of the following dataset: PatTR