Columns: dataset_name (string, 2–128 chars) · description (string, 1–9.7k chars) · prompt (string, 59–185 chars)
FineAction
**FineAction** contains 103K temporal instances of 106 action categories, annotated in 17K untrimmed videos. FineAction introduces new opportunities and challenges for temporal action localization, thanks to its distinct characteristics of fine action classes with rich diversity, dense annotations of multiple instances, and co-occurring actions of different classes.
Provide a detailed description of the following dataset: FineAction
VANiLLa
**VANiLLa** is a dataset for Question Answering over Knowledge Graphs (KGQA) offering answers in natural language sentences. The answer sentences in this dataset are syntactically and semantically closer to the question than to the triple fact. The dataset consists of over 100k simple questions adapted from the CSQA and SimpleQuestionsWikidata datasets and generated using a semi-automatic framework.
Provide a detailed description of the following dataset: VANiLLa
TabStructDB
In ICDAR-17, a Page Object Detection (POD) competition was organized where the task was to identify page objects, such as tables, figures, and equations, in documents. The dataset was composed of 2,417 images in total: 1,600 images were used for training, while the remaining 817 images were used for testing. We are introducing a new table structure recognition dataset, TabStructDB, in which we labeled each tabular region present in the ICDAR-17 POD dataset with table structure information comprising the row and column information.
Provide a detailed description of the following dataset: TabStructDB
TEP
The original paper presented a model of the industrial chemical process named the Tennessee Eastman Process and a model-based TEP simulator for data generation. The most widely used benchmark consists of 22 datasets, 21 of which (Fault 1–21) contain faults and 1 (Fault 0) is fault-free. It is available in this [repository](https://github.com/YKatser/CPDE/tree/master/TEP_data). All datasets have training (500 samples) and testing (960 samples) parts: the training part contains healthy-state observations, while the testing part begins right after training and contains faults that appear 8 h after the testing part begins. Each dataset has 52 features (observation variables), most of them sampled at a 3-min rate.
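Given the stated timing (faults start 8 h into the 960-sample testing part, 3-min sampling), the fault-onset sample index follows directly. A minimal sketch, with illustrative variable names:

```python
# Locate the fault-onset sample in a TEP test run, assuming the stated
# timing: faults appear 8 h after the 960-sample testing part begins,
# with a uniform 3-minute sampling rate.
SAMPLING_MIN = 3      # minutes between samples
TEST_SAMPLES = 960    # samples in each testing part
FAULT_ONSET_H = 8     # hours of healthy data at the start of the test part

onset_idx = FAULT_ONSET_H * 60 // SAMPLING_MIN   # index of the first faulty sample
healthy = onset_idx                              # 160 healthy samples
faulty = TEST_SAMPLES - onset_idx                # 800 faulty samples
print(onset_idx, healthy, faulty)                # 160 160 800
```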
Provide a detailed description of the following dataset: TEP
TaxiBJ
TaxiBJ consists of trajectory data from taxicab GPS data and meteorology data in Beijing from four time intervals: 1st Jul. 2013 - 30th Oct. 2013, 1st Mar. 2014 - 30th Jun. 2014, 1st Mar. 2015 - 30th Jun. 2015, 1st Nov. 2015 - 10th Apr. 2016.
Provide a detailed description of the following dataset: TaxiBJ
VidHOI
VidHOI is a video-based human-object interaction detection benchmark. VidHOI is based on VidOR, which is densely annotated with all humans and predefined objects appearing in each frame. VidOR is also more challenging because the videos are user-generated rather than volunteer-recorded, and are thus jittery at times. Image source: [https://xdshang.github.io/docs/vidor.html](https://xdshang.github.io/docs/vidor.html)
Provide a detailed description of the following dataset: VidHOI
PIC
The Person In Context (PIC) dataset is a dataset for human-centric relation segmentation (HRS), which contains 17,122 high-resolution images and densely annotated entity segmentation and relations, including 141 object categories, 23 relation categories and 25 semantic human parts.
Provide a detailed description of the following dataset: PIC
JobStack
JobStack is a new corpus for de-identification of personal data in job vacancies on Stackoverflow. De-identification is the task of detecting privacy-related entities in text, such as person names, emails and contact data.
Provide a detailed description of the following dataset: JobStack
WikiBioCTE
**WikiBioCTE** is a dataset for controllable text edition based on the existing dataset WikiBio (originally created for table-to-text generation). In the task of controllable text edition the input is a long text, a question, and a target answer, and the output is a minimally modified text, so that it fits the target answer. This task is very important in many situations, such as changing some conditions, consequences, or properties in a legal document, or changing some key information of an event in a news text.
Provide a detailed description of the following dataset: WikiBioCTE
BDD-X
**Berkeley Deep Drive-X (eXplanation)** is a dataset composed of over 77 hours of driving within 6,970 videos. The videos are taken in diverse driving conditions, e.g. day/night, highway/city/countryside, summer/winter etc. On average 40 seconds long, each video contains around 3-4 actions, e.g. speeding up, slowing down, turning right etc., all of which are annotated with a description and an explanation. Our dataset contains over 26K activities in over 8.4M frames. Image source: [https://github.com/JinkyuKimUCB/BDD-X-dataset](https://github.com/JinkyuKimUCB/BDD-X-dataset)
Provide a detailed description of the following dataset: BDD-X
Dataset for: "It is just a flu: Assessing the Effect of Watch History on YouTube's Pseudoscientific Video Recommendations"
The dataset consists of three files: the metadata, comments, and captions of the ground-truth dataset videos collected and manually reviewed in this paper.

1. Video Metadata:
   - "groundtruth_videos.json": Contains the metadata of our manually reviewed ground-truth dataset videos. The ground-truth dataset includes 1,197 science, 1,325 pseudoscience, and 3,212 irrelevant videos. More specifically, it includes the metadata of videos related to the following pseudoscientific topics:
     - COVID-19 (607 science, 368 pseudoscience, and 721 irrelevant videos)
     - Anti-vaccination (363 science, 394 pseudoscience, and 1,060 irrelevant videos)
     - Anti-mask (65 science, 188 pseudoscience, and 724 irrelevant videos)
     - Flat Earth (162 science, 375 pseudoscience, and 707 irrelevant videos)

   Note that 600 of the videos in this dataset include the "annotation.manual_review_label" attribute, which is the label assigned by the first author of this paper to evaluate the performance of the crowdsourced annotation process.
   - Video Metadata Description:
     - "search_term": The search term used to search YouTube and retrieve the video during our data collection. It can be one of the following: 'covid-19', 'coronavirus', 'anti-vaccination', 'anti-vaxx', 'anti-mask', or 'flat earth'.
     - "annotation.annotations": The list of the three annotations assigned to each video by our crowdsourced annotators.
     - "annotation.label": The annotation label assigned to the video based on the majority agreement of the crowdsourced annotators.
     - "annotation.manual_review_label": The label assigned by the first author of this paper to evaluate the performance of the crowdsourced annotation process.
     - "isSeed": 0 if the video is a seed video of our data collection, 1 if it is a recommended video of a seed video.
     - "relatedVideos": The recommended videos of the given video as returned by the YouTube Data API.
2. Video Comments:
   - "groundtruth_videos_comments_ids.json": Includes the identifiers of the comments of our ground-truth videos.
3. Video Transcripts:
   - "groundtruth_videos_transcripts.json": Includes the captions of our ground-truth videos.

If you use this dataset in any publication, of any form and kind, please cite this dataset.
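The "annotation.label" field is described as the majority agreement over the three crowdsourced annotations. A minimal sketch of that aggregation, using a mock record with the documented fields (field values here are illustrative, not taken from the actual files):

```python
from collections import Counter

# Mock record mirroring the documented fields of "groundtruth_videos.json".
record = {
    "search_term": "flat earth",
    "annotation": {
        "annotations": ["pseudoscience", "pseudoscience", "irrelevant"],
        "label": "pseudoscience",
    },
    "isSeed": 0,
}

def majority_label(annotations):
    """Return the label chosen by at least 2 of 3 annotators, or None on a tie."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None

assert majority_label(record["annotation"]["annotations"]) == record["annotation"]["label"]
```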
Provide a detailed description of the following dataset: Dataset for: "It is just a flu: Assessing the Effect of Watch History on YouTube's Pseudoscientific Video Recommendations"
Enron Email Dataset
This dataset was collected and prepared by the CALO Project (A Cognitive Assistant that Learns and Organizes). It contains data from about 150 users, mostly senior management of Enron, organized into folders. The corpus contains a total of about 0.5M messages. This data was originally made public, and posted to the web, by the Federal Energy Regulatory Commission during its investigation.
Provide a detailed description of the following dataset: Enron Email Dataset
DCASE 2019 Mobile
**TAU Urban Acoustic Scenes 2019 Mobile** development dataset consists of 10-second audio segments from 10 acoustic scenes:

* Airport
* Indoor shopping mall
* Metro station
* Pedestrian street
* Public square
* Street with medium level of traffic
* Travelling by a tram
* Travelling by a bus
* Travelling by an underground metro
* Urban park

Recordings were made with three devices that captured audio simultaneously. Each acoustic scene has 1440 segments (240 minutes of audio) recorded with device A (main device) and 108 segments of parallel audio (18 minutes) each recorded with devices B and C. The dataset contains in total 46 hours of audio. [DCASE website](http://dcase.community/challenge2019/task-acoustic-scene-classification#download)
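The stated totals are internally consistent, which a quick arithmetic check confirms (assuming 10-second segments throughout):

```python
# Sanity check of the DCASE 2019 Mobile totals: per scene, 1440 segments
# from device A plus 108 each from devices B and C, all 10 s long.
SCENES = 10
SEG_SECONDS = 10
segments_per_scene = 1440 + 108 + 108

total_hours = SCENES * segments_per_scene * SEG_SECONDS / 3600
print(total_hours)  # 46.0, matching the stated 46 hours of audio
```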
Provide a detailed description of the following dataset: DCASE 2019 Mobile
MacaquePose
**MacaquePose** is an animal pose estimation dataset containing pictures of macaque monkeys and manually labeled annotations on them.
Provide a detailed description of the following dataset: MacaquePose
Vinegar Fly
**Vinegar Fly** is a pose estimation dataset for fruit flies.
Provide a detailed description of the following dataset: Vinegar Fly
Desert Locust
**Desert Locust** is an animal pose estimation dataset for desert locusts.
Provide a detailed description of the following dataset: Desert Locust
Grévy’s Zebra
**Grévy’s Zebra** is an animal pose estimation dataset for zebras.
Provide a detailed description of the following dataset: Grévy’s Zebra
RHD
**Rendered Hand Pose (RHD)** is a dataset for hand pose estimation. It provides segmentation maps with 33 classes: three for each finger, palm, person, and background. The 3D kinematic model of the hand provides 21 keypoints per hand: 4 keypoints per finger and one keypoint close to the wrist.
Provide a detailed description of the following dataset: RHD
TransNAS-Bench-101
**TransNAS-Bench-101** is a Neural Architecture Search (NAS) benchmark dataset containing network performance across seven tasks, covering classification, regression, pixel-level prediction, and self-supervised tasks. This diversity provides opportunities to transfer NAS methods among tasks and allows for more complex transfer schemes to evolve. We explore two fundamentally different types of search space: cell-level search space and macro-level search space. With 7,352 backbones evaluated on seven tasks, 51,464 trained models with detailed training information are provided. With TransNAS-Bench-101, we hope to encourage the advent of exceptional NAS algorithms that raise cross-task search efficiency and generalizability to the next level.
Provide a detailed description of the following dataset: TransNAS-Bench-101
EarthNet2021
Satellite images are snapshots of the Earth's surface. We propose to forecast them. We frame Earth surface forecasting as the task of predicting satellite imagery conditioned on future weather. EarthNet2021 is a large dataset suitable for training deep neural networks on this task. It contains Sentinel 2 satellite imagery at 20 m resolution, matching topography and mesoscale (1.28 km) meteorological variables packaged into 32,000 samples. Additionally, we frame EarthNet2021 as a challenge allowing for model intercomparison. Resulting forecasts will greatly improve (>50×) over the spatial resolution found in numerical models. This allows localized impacts from extreme weather to be predicted, thus supporting downstream applications such as crop yield prediction, forest health assessments or biodiversity monitoring. Find data, code, and how to participate at www.earthnet.tech.
Provide a detailed description of the following dataset: EarthNet2021
DaN+
**DaN+** is a new multi-domain corpus and annotation guidelines for Danish nested named entities (NEs) and lexical normalization to support research on cross-lingual cross-domain learning for a less-resourced language.
Provide a detailed description of the following dataset: DaN+
BAM!
The **Behance Artistic Media** dataset (BAM!) is a large-scale dataset of contemporary artwork from Behance, a website containing millions of portfolios from professional and commercial artists. We annotate Behance imagery with rich attribute labels for content, emotions, and artistic media. We believe our Behance Artistic Media dataset will be a good starting point for researchers wishing to study artistic imagery and relevant problems. The dataset consists of:

* Automatically-labeled binary attribute scores for over 2.5 million images across 20 attributes each
* 393,000 crowdsourced binary attribute labels for individual images
* Short image descriptions/captions for 74,000 images from the crowd
Provide a detailed description of the following dataset: BAM!
XGLUE
**XGLUE** is an evaluation benchmark composed of 11 tasks that span 19 languages. For each task, the training data is only available in English. This means that to succeed at XGLUE, a model must have a strong zero-shot cross-lingual transfer capability to learn from the English data of a specific task and transfer what it learned to other languages. Compared to its concurrent work XTREME, XGLUE has two characteristics: first, it includes cross-lingual NLU and cross-lingual NLG tasks at the same time; second, besides including 5 existing cross-lingual tasks (i.e. NER, POS, MLQA, PAWS-X and XNLI), XGLUE selects 6 new tasks from Bing scenarios as well, including News Classification (NC), Query-Ad Matching (QADSM), Web Page Ranking (WPR), QA Matching (QAM), Question Generation (QG) and News Title Generation (NTG). Such diversity of languages, tasks and task origins provides a comprehensive benchmark for quantifying the quality of a pre-trained model on cross-lingual natural language understanding and generation.
Provide a detailed description of the following dataset: XGLUE
CAMO++
CAMO++ is a dataset for camouflaged object segmentation. This dataset increases the number of images with hierarchical pixel-wise ground-truths. The authors also provide a benchmark suite for the task of camouflaged instance segmentation.
Provide a detailed description of the following dataset: CAMO++
GLGE
**GLGE** is a general language generation evaluation benchmark which is composed of 8 language generation tasks, including Abstractive Text Summarization ([CNN/DailyMail](cnn-daily-mail-1), Gigaword, [XSUM](xsum), MSNews), Answer-aware Question Generation ([SQuAD 1.1](squad), MSQG), Conversational Question Answering ([CoQA](coqa)), and Personalizing Dialogue ([Personachat](persona-chat-1)).
Provide a detailed description of the following dataset: GLGE
LandCover.ai
The LandCover.ai (**Land Cover** from **A**erial **I**magery) dataset is a dataset for automatic mapping of buildings, woodlands, water and roads from aerial images.

### Dataset features

* land cover from Poland, Central Europe
* three spectral bands - RGB
* 33 orthophotos with 25 cm per pixel resolution (~9000x9500 px)
* 8 orthophotos with 50 cm per pixel resolution (~4200x4700 px)
* total area of 216.27 sq. km

### Dataset format

* rasters are three-channel GeoTiffs with EPSG:2180 spatial reference system
* masks are single-channel GeoTiffs with EPSG:2180 spatial reference system
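Since the masks are single-channel rasters of class ids, a per-class pixel histogram is a natural first analysis. A minimal sketch with a stand-in array instead of a real GeoTiff; the id-to-name mapping below is an assumption for illustration, not taken from the dataset documentation:

```python
import numpy as np

# Per-class pixel counts for a LandCover.ai-style mask. A small random
# array stands in for a real single-channel mask tile; the class mapping
# here is hypothetical.
CLASS_NAMES = {0: "background", 1: "building", 2: "woodland", 3: "water", 4: "road"}

rng = np.random.default_rng(0)
mask = rng.integers(0, 5, size=(64, 64))     # stand-in for a mask tile

ids, counts = np.unique(mask, return_counts=True)
histogram = {CLASS_NAMES[int(i)]: int(c) for i, c in zip(ids, counts)}
assert sum(histogram.values()) == mask.size  # every pixel is counted once
```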
Provide a detailed description of the following dataset: LandCover.ai
CURE-OR
**CURE-OR** is a large-scale, controlled, and multi-platform object recognition dataset denoted as Challenging Unreal and Real Environments for Object Recognition. In this dataset, there are 1,000,000 images of 100 objects with varying size, color, and texture that are positioned in five different orientations and captured using five devices including a webcam, a DSLR, and three smartphone cameras in real-world (real) and studio (unreal) environments. The controlled challenging conditions include underexposure, overexposure, blur, contrast, dirty lens, image noise, resizing, and loss of color information.
Provide a detailed description of the following dataset: CURE-OR
USM-SED
**USM-SED** is a dataset for polyphonic sound event detection in urban sound monitoring use-cases. Based on isolated sounds taken from the [FSD50k](fsd50k) dataset, 20,000 polyphonic soundscapes are synthesized with sounds being randomly positioned in the stereo panorama using different loudness levels.
Provide a detailed description of the following dataset: USM-SED
CEREC
**CEREC** is a large scale corpus for entity resolution in email conversations. The corpus consists of 6001 email threads from the Enron Email Corpus containing 36,448 email messages and 60,383 entity coreference chains. The annotation is carried out as a two-step process with minimal manual effort.
Provide a detailed description of the following dataset: CEREC
GOO
**GOO** (Gaze-on-Objects) is a dataset for gaze object prediction, where the goal is to predict a bounding box for a person's gazed-at object. GOO is composed of a large set of synthetic images (GOO Synth) supplemented by a smaller subset of real images (GOO-Real) of people looking at objects in a retail environment.
Provide a detailed description of the following dataset: GOO
SimJEB
Simulated Jet Engine Bracket Dataset (**SimJEB**) is a public collection of crowdsourced mechanical brackets and high-fidelity structural simulations designed specifically for surrogate modeling. SimJEB models are more complex, diverse, and realistic than the synthetically generated datasets commonly used in parametric surrogate model evaluation. In contrast to existing engineering shape collections, SimJEB's models are all designed for the same engineering function and thus have consistent structural loads and support conditions. The models in SimJEB were collected from the original submissions to the GrabCAD Jet Engine Bracket Challenge: an open engineering design competition with over 700 hand-designed CAD entries from 320 designers representing 56 countries. Each model has been cleaned, categorized, meshed, and simulated with finite element analysis according to the original competition specifications. The result is a collection of diverse, high-quality and application-focused designs for advancing geometric deep learning and engineering surrogate models.
Provide a detailed description of the following dataset: SimJEB
POINTREC
POINTREC is a test collection for point of interest (POI) recommendation, comprising (i) a set of information needs, (ii) a dataset of POIs, and (iii) graded relevance assessments for information need and POI pairs.
Provide a detailed description of the following dataset: POINTREC
SkyCam
The **SkyCam** dataset is a collection of sky images from a variety of locations with diverse topographic characteristics (Swiss Jura, Plateau and Pre-Alps regions), from both single- and stereo-camera settings coupled with high-accuracy pyranometers. The dataset was collected at high frequency, with a data sample every 10 seconds. 13 images with different exposure times are generated, along with post-processed HDR images and solar radiance values for each of the cameras and locations. We hope that the SkyCam dataset will enable researchers to tackle the problem of short-term local camera-based solar radiance prediction.
Provide a detailed description of the following dataset: SkyCam
CASIA-Face-Africa
**CASIA-Face-Africa** is a face image database which contains 38,546 images of 1,183 African subjects. Multi-spectral cameras are utilized to capture the face images under various illumination settings. Demographic attributes and facial expressions of the subjects are also carefully recorded. For landmark detection, each face image in the database is manually labeled with 68 facial keypoints. A group of evaluation protocols are constructed according to different applications, tasks, partitions and scenarios. The proposed database along with its face landmark annotations, evaluation protocols and preliminary results form a good benchmark to study the essential aspects of face biometrics for African subjects, especially face image preprocessing, face feature analysis and matching, facial expression recognition, sex/age estimation, ethnic classification, face image generation, etc.
Provide a detailed description of the following dataset: CASIA-Face-Africa
Robotic Pushing
The **Robotic Pushing Dataset** is a dataset for video prediction for real-world interactive agents which consists of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a "visual imagination" of different futures based on different courses of action.
Provide a detailed description of the following dataset: Robotic Pushing
ParaQA
ParaQA is a question answering (QA) dataset with multiple paraphrased responses for single-turn conversation over knowledge graphs (KG). The dataset was created using a semi-automated framework for generating diverse paraphrasing of the answers using techniques such as back-translation. The existing datasets for conversational question answering over KGs (single-turn/multi-turn) focus on question paraphrasing and provide only up to one answer verbalization. However, ParaQA contains 5000 question-answer pairs with a minimum of two and a maximum of eight unique paraphrased responses for each question.
Provide a detailed description of the following dataset: ParaQA
EDT
The EDT dataset is designed as a benchmark for corporate event detection and text-based stock prediction (trading strategy).

1. Corporate Event Detection: includes 9,721 news articles with token-level event labels, covering 11 event types: Acquisitions, Clinical Trials, Guidance Changes, New Contracts, Stock Repurchases, Stock Splits, Reverse Stock Splits/ADS Ratio Changes, Regular Dividends, Special Dividends, Dividend Cuts, and Dividend Increases.
2. Text-Based Stock Prediction Benchmark: includes 303,893 first-hand news articles from high-quality sources. Each news article is assigned a minute-level timestamp and comprehensive stock price labels.

Please see this [GitHub link](https://github.com/Zhihan1996/TradeTheEvent/tree/main/data) and the [paper](https://aclanthology.org/2021.findings-acl.186.pdf) for more details.
Provide a detailed description of the following dataset: EDT
ARD-16
We create ARD-16 (Ati Realworld Dataset), a first-of-its-kind real-world paired correspondence dataset, by applying our dataset generation method to 16-beam VLP-16 Puck LiDAR scans on a slow-moving unmanned ground vehicle. We obtain ground-truth poses using fine-resolution brute-force scan matching, similar to Google's Cartographer. It was captured in an outdoor environment at the Robert Bosch Centre, IISc, with no moving objects during the static runs and several moving objects (1 car, 1 two-wheeler, a few pedestrians) during the dynamic runs. It consists of 1.5k scans per run, and we collected 10 dynamic and 5 static runs. This gives about 14k LiDAR scan pairs for training, validation and testing.
Provide a detailed description of the following dataset: ARD-16
CARLA-64
We create a 64-beam LiDAR dataset with settings similar to the Velodyne VLP-64 LiDAR on the CARLA simulator. It contains no moving objects during static runs and several moving objects (cars, two-wheelers, pedestrians) during dynamic runs. It consists of 16 dynamic runs and 8 static runs. This gives about 32k LiDAR scan pairs for training, validation and testing.
Provide a detailed description of the following dataset: CARLA-64
NEMO-Corpus
Named Entity (NER) annotations of the Hebrew Treebank (Haaretz newspaper) corpus, including: morpheme and token level NER labels, nested mentions, and more. We publish the NEMO corpus in the TACL paper [*"Neural Modeling for Named Entities and Morphology (NEMO^2)"*](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00404/107206/Neural-Modeling-for-Named-Entities-and-Morphology) [1], where we use it in extensive experiments and analyses, showing the importance of morphological boundaries for neural modeling of NER in morphologically rich languages. Code for these models and experiments can be found in the [NEMO code repo](https://github.com/OnlpLab/NEMO).

## Main features

1. Morpheme, token-single and token-multi sequence labels. Morpheme labels provide exact boundaries; token-multi provides partial sub-word morphological information but no exact boundaries; token-single provides only token-level information.
1. All annotations are in `BIOSE` format (`B`=Begin, `I`=Inside, `O`=Outside, `S`=Singleton, `E`=End).
1. Widely-used OntoNotes entity category set: `GPE` (geo-political entity), `PER` (person), `LOC` (location), `ORG` (organization), `FAC` (facility), `EVE` (event), `WOA` (work-of-art), `ANG` (language), `DUC` (product).
1. NEMO includes NER annotations for the two major versions of the Hebrew Treebank, UD (Universal Dependencies) and SPMRL. These can be aligned to the other morphosyntactic information layers of the treebank using [bclm](https://github.com/OnlpLab/bclm).
1. We provide nested mentions. Only the first, widest layer is used in the NEMO^2 paper. We invite you to take on this challenge!
1. Guidelines used for annotation are provided [here](https://github.com/OnlpLab/NEMO-Corpus/tree/main/guidelines).
1. The corpus was annotated by two native Hebrew speakers of academic education, and curated by the project manager.
We provide the original annotations made by the annotators as well, to promote work on [learning with disagreements](https://sites.google.com/view/semeval2021-task12/home).
1. Annotation was performed using [WebAnno](https://webanno.github.io/webanno/) (version 3.4.5).

## Basic Corpus Statistics

| | train | dev | test |
|------------------------------| --:| --:| --:|
| Sentences | 4,937 | 500 | 706 |
| Tokens | 93,504 | 8,531 | 12,619 |
| Morphemes | 127,031 | 11,301 | 16,828 |
| All mentions | 6,282 | 499 | 932 |
| Type: Person (PER) | 2,128 | 193 | 267 |
| Type: Organization (ORG) | 2,043 | 119 | 408 |
| Type: Geo-Political (GPE) | 1,377 | 121 | 195 |
| Type: Location (LOC) | 331 | 28 | 41 |
| Type: Facility (FAC) | 163 | 12 | 11 |
| Type: Work-of-Art (WOA) | 114 | 9 | 6 |
| Type: Event (EVE) | 57 | 12 | 0 |
| Type: Product (DUC) | 36 | 2 | 3 |
| Type: Language (ANG) | 33 | 3 | 1 |

## Evaluation

An evaluation script is provided in the [NEMO code repo](https://github.com/OnlpLab/NEMO#evaluation) along with evaluation instructions.

## Citation

```
@article{10.1162/tacl_a_00404,
  author = {Bareket, Dan and Tsarfaty, Reut},
  title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
  journal = {Transactions of the Association for Computational Linguistics},
  volume = {9},
  pages = {909-928},
  year = {2021},
  month = {09},
  abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token-level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
  issn = {2307-387X},
  doi = {10.1162/tacl_a_00404},
  url = {https://doi.org/10.1162/tacl\_a\_00404},
  eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
}
```
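The `BIOSE` scheme used for all annotations can be decoded into entity spans mechanically. A minimal illustrative decoder (not the evaluation script shipped with the NEMO code repo):

```python
# Decode a BIOSE label sequence (B=Begin, I=Inside, O=Outside,
# S=Singleton, E=End) into (start, end, type) entity spans.
def biose_to_spans(labels):
    spans, start, etype = [], None, None
    for i, lab in enumerate(labels):
        if lab == "O":
            start = None
        elif lab.startswith("S-"):          # single-morpheme/token entity
            spans.append((i, i, lab[2:]))
            start = None
        elif lab.startswith("B-"):          # entity begins here
            start, etype = i, lab[2:]
        elif lab.startswith("E-") and start is not None:
            spans.append((start, i, etype)) # entity ends here
            start = None
        # I- labels continue an open entity and need no action
    return spans

labels = ["O", "B-PER", "E-PER", "O", "S-GPE"]
assert biose_to_spans(labels) == [(1, 2, "PER"), (4, 4, "GPE")]
```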
Provide a detailed description of the following dataset: NEMO-Corpus
Custom FINNgers
A dataset with 3200 images (200 for each number quantity on each hand).
Provide a detailed description of the following dataset: Custom FINNgers
Sentinel 2 manually extracted deep water spectra with high noise levels and sunglint
This dataset includes 2,133,324 reflectance water spectra which were manually extracted by visual observation from 30 Sentinel 2 level 1C satellite images. The spectra were extracted from deep water areas with high noise levels and sunglint. The Sentinel 2 images depicted 2 tiles of the same orbit and were collected in 2016 (2 images), 2017 (19 images) and 2018 (9 images). The images contain 13 bands: 3 with 60 m spatial resolution, 4 with 10 m spatial resolution and 6 with 20 m spatial resolution. Before the spectra extraction, the bands with 10 and 20 m spatial resolution were resampled to 60 m and then the images were cropped in order to remove the land and depict optically homogeneous sea regions. A figure depicting the location of the Sentinel 2 tiles (white polygons (1,2)) and the cropped tiles (red polygons (3,4)) is included in this folder. A figure depicting example scenes from which spectra were obtained through regions of interest (ROIs) is included as well. The spectra are stored in .csv files. Each file is named after the name of the Sentinel 2 product, which includes the sensing and creation date as well as the relative orbit number and tile code. The content of each file includes latitude and longitude coordinates (UTM/WGS84 projection) of each spectral signature as well as the reflectance values of the 13 Sentinel 2 bands. This dataset was created for the purpose of the study described in https://www.tandfonline.com/doi/full/10.1080/01431161.2020.1714776
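A minimal sketch of reading one of the per-product .csv files: the column layout (coordinates plus 13 band reflectances) follows the description above, but the header names and values here are illustrative assumptions, not the actual file headers.

```python
import csv
import io

# Mock of one row of a per-product spectra file: lat/lon coordinates
# (UTM/WGS84) followed by 13 reflectance values. Band column names are
# hypothetical placeholders.
BANDS = [f"band_{i}" for i in range(1, 14)]
mock_csv = (
    "lat,lon," + ",".join(BANDS) + "\n"
    "4190000.0,550000.0," + ",".join(["0.012"] * 13) + "\n"
)

rows = list(csv.DictReader(io.StringIO(mock_csv)))
spectrum = [float(rows[0][b]) for b in BANDS]
assert len(spectrum) == 13   # one reflectance value per Sentinel 2 band
```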
Provide a detailed description of the following dataset: Sentinel 2 manually extracted deep water spectra with high noise levels and sunglint
RAW-C
Relatedness judgments of ambiguous English words, in experimentally controlled sentential contexts.
Provide a detailed description of the following dataset: RAW-C
TexRel
**TexRel** is a family of datasets for emergent communication on relations. By comparison with other relations datasets, TexRel provides rapid training and experimentation, whilst being sufficiently large to avoid overfitting in the context of emergent communications.
Provide a detailed description of the following dataset: TexRel
ShapeWorld
**ShapeWorld** is a new evaluation methodology and framework for multimodal deep learning models, with a focus on formal-semantic style generalization capabilities. In this framework, artificial data is automatically generated according to predefined specifications. This controlled data generation makes it possible to introduce previously unseen instance configurations during evaluation, which consequently require the system to recombine learned concepts in novel ways.
Provide a detailed description of the following dataset: ShapeWorld
VideoLT
**VideoLT** is a large-scale long-tailed video recognition dataset that contains 256,218 untrimmed videos, annotated into 1,004 classes with a long-tailed distribution.
Provide a detailed description of the following dataset: VideoLT
CMWD
CMWD (Cloud Motion Wind Dataset) is the first cloud motion wind dataset for deep learning research. It contains 6,388 adjacent grayscale image pairs for training and another 715 image pairs for testing.
Provide a detailed description of the following dataset: CMWD
TCLD
TCLD (Typhoon Center Location Dataset) is a brand new typhoon center location dataset for deep learning research. It contains 1809 grayscale images for training and another 319 images for testing.
Provide a detailed description of the following dataset: TCLD
SCMD2016
The SCMD dataset is a brand new cloudage nowcasting dataset for deep learning research. It contains 20,000 grayscale image sequences for training and another 3,500 image sequences for testing. The SCMD2016 dataset is available at any time, but only for scientific research; please cite our work when you use it.
Provide a detailed description of the following dataset: SCMD2016
EviLOG
The dataset contains **synthetic training, validation and test data for occupancy grid mapping from lidar point clouds**. Additionally, **real-world lidar point clouds** from a test vehicle with the same lidar setup as the simulated lidar sensor are provided. Point clouds are stored as PCD files and occupancy grid maps are stored as PNG images, where one image channel encodes evidence for the free cell state and another encodes evidence for the occupied cell state.
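The two evidence channels can be interpreted as belief masses per cell, with the remaining mass representing ignorance. A minimal decoding sketch (the array layout and 8-bit scaling are assumptions, not the exact EviLOG encoding):

```python
import numpy as np

# Toy 2x2 grid map with two 8-bit channels, as a PNG might store them:
# channel 0 = evidence for "free", channel 1 = evidence for "occupied".
grid = np.array([[[255, 0], [0, 255]],
                 [[128, 0], [0, 0]]], dtype=np.uint8)

m_free = grid[..., 0] / 255.0      # belief mass for the free state
m_occ = grid[..., 1] / 255.0       # belief mass for the occupied state
m_unknown = 1.0 - m_free - m_occ   # remaining mass: no evidence either way

# Cells with dominant occupied evidence:
occupied_cells = np.argwhere(m_occ > 0.5)
```

For real data, the `grid` array would come from reading one of the PNG grid maps (e.g. with an image library) instead of being constructed by hand.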
Provide a detailed description of the following dataset: EviLOG
DEAP
The DEAP dataset consists of two parts:

- The ratings from an online self-assessment where 120 one-minute extracts of music videos were each rated by 14-16 volunteers based on arousal, valence and dominance.
- The participant ratings, physiological recordings and face video of an experiment where 32 volunteers watched a subset of 40 of the above music videos. EEG and physiological signals were recorded and each participant also rated the videos as above. For 22 participants frontal face video was also recorded.
Provide a detailed description of the following dataset: DEAP
Viwiki-Spelling
We introduce the first Vietnamese spelling correction dataset, containing manually labelled mistakes and their corresponding corrections.
Provide a detailed description of the following dataset: Viwiki-Spelling
RISEdb
The RISE (Robust Indoor Localization in Complex Scenarios) dataset is meant to train and evaluate visual indoor place recognizers. It contains more than 1 million geo-referenced images spread over 30 sequences, covering 5 heterogeneous buildings. For each building we provide:

- A high resolution 3D point cloud (1 cm) that defines the localization reference frame and that was generated with a mobile laser scanner and an inertial system.
- Several image sequences spread over time, with accurate ground truth poses retrieved by the laser scanner. Each sequence contains both stereo pairs and spherical images.
- Geo-referenced smartphone data, retrieved from the standard sensors of such devices.
Provide a detailed description of the following dataset: RISEdb
SynthDerm
SynthDerm is a synthetically generated dataset inspired by the real-world characteristics of melanoma skin lesions in dermatology settings. These characteristics include whether the lesion is asymmetrical, its border is irregular or jagged, it is unevenly colored, it has a diameter of more than 0.25 inches, or it is evolving in size, shape, or color over time. These qualities are usually referred to as the ABCDE of melanoma. We generate SynthDerm algorithmically by varying several factors: skin tone, lesion shape, lesion size, lesion location (vertical and horizontal), and whether surgical markings are present. We randomly assign one of the following to the lesion shape: round, asymmetrical, with jagged borders, or multi-colored (two different shades of color overlaid with salt-and-pepper noise). For skin tone values, we simulate Fitzpatrick ratings. The Fitzpatrick scale is a commonly used approach to classifying skin by its reaction to sunlight exposure, modulated by the density of melanin pigments in the skin. This rating has six values, where 1 represents skin that always burns (lowest melanin) and 6 represents skin that never burns in sunlight (highest melanin). For our synthetic generation, we consider six base skin tones that similarly resemble different amounts of melanin. We also add a small amount of random noise to the base color to add further variety. Overall, SynthDerm includes more than 2,600 images of size 64x64.
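The generation procedure described above amounts to sampling a small set of factors per image. A minimal sketch of that sampling loop (the value ranges and field names are illustrative assumptions, not the actual SynthDerm generator):

```python
import random

# Lesion shape options, as described for SynthDerm.
SHAPES = ["round", "asymmetrical", "jagged_borders", "multi_colored"]

def sample_lesion_params(rng):
    """Sample one synthetic lesion configuration (value ranges are assumed)."""
    return {
        "fitzpatrick": rng.randint(1, 6),       # base skin tone, Fitzpatrick 1-6
        "tone_noise": rng.uniform(-0.05, 0.05), # small random tint on base color
        "shape": rng.choice(SHAPES),
        "size_px": rng.randint(8, 24),          # lesion size within a 64x64 image
        "pos": (rng.randint(8, 56), rng.randint(8, 56)),  # vertical, horizontal
        "surgical_marking": rng.random() < 0.5,
    }

rng = random.Random(0)  # fixed seed for reproducibility
params = [sample_lesion_params(rng) for _ in range(2600)]
```

Each sampled configuration would then be rendered into a 64x64 image; the rendering step itself is omitted here.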
Provide a detailed description of the following dataset: SynthDerm
Paralex
The Paralex dataset is a collection of 18 million question-paraphrase pairs scraped from WikiAnswers, from which the Paralex question answering system learns.
Provide a detailed description of the following dataset: Paralex
DiaKG
**DiaKG** is a high-quality Chinese dataset for a diabetes knowledge graph. The dataset is derived from 41 diabetes guidelines and consensus documents from authoritative Chinese journals, covering basic research, clinical research, drug usage, clinical cases, diagnosis and treatment methods, etc. The dataset covers the most extensive range of research content and hotspots in recent years. All the annotators have a medical background, and the result is a high-quality diabetes dataset containing 22,050 entities and 6,890 relations in total. Based on this dataset, doctors, researchers, and enterprise developers can develop knowledge bases for clinical diagnosis, knowledge graphs, and auxiliary diagnostics to further explore the mysteries of diabetes.
Provide a detailed description of the following dataset: DiaKG
MAOMaps
MAOMaps is a dataset for evaluation of Visual SLAM, RGB-D SLAM and map merging algorithms. It contains 40 samples with RGB and depth images, and ground truth trajectories and maps. These 40 samples are joined into 20 pairs of overlapping maps for evaluating map merging methods. The samples were collected using the [Matterport3D](matterport3d) dataset and the Habitat simulator.
Provide a detailed description of the following dataset: MAOMaps
CTSpine1K
**CTSpine1K** is a large-scale and comprehensive dataset for research in spinal image analysis. CTSpine1K is curated from the following four open sources, totalling 1,005 CT volumes (over 500,000 labeled slices and over 11,000 vertebrae) of diverse appearance variations.

* COLONOG. This is a subset of the CT COLONOGRAPHY dataset related to a CT colonography trial.
* HNSCC-3DCT-RT. This sub-dataset contains three-dimensional (3D) high-resolution fan-beam CT scans collected during pre-treatment, mid-treatment, and post-treatment using a Siemens 16-slice CT scanner with the standard clinical protocol for head-and-neck squamous cell carcinoma (HNSCC) patients. These images are in DICOM format.
* MSD T10. This sub-dataset comes from the 10th Medical Segmentation Decathlon. To attain more slices containing the spine, we select the task03_liver dataset consisting of 201 cases. These images are in Neuroimaging Informatics Technology Initiative (NIfTI) format (https://nifti.nimh.nih.gov/nifti-1).
* COVID-19. This sub-dataset consists of non-enhanced chest CTs from 632 patients with COVID-19 infections. The images were acquired at the point of care in an outbreak setting from patients with Reverse Transcription Polymerase Chain Reaction (RT-PCR) confirmation for the presence of SARS-CoV-2. We pick 40 scans with the images stored in NIfTI format.
Provide a detailed description of the following dataset: CTSpine1K
Cleft
The **Cleft** dataset is a collection of ultrasound tongue imaging and audio data, gathered from children with cleft lip and palate by a research speech and language therapist working in a hospital environment.
Provide a detailed description of the following dataset: Cleft
GeoQA
**GeoQA** is a dataset for automatic geometric problem solving containing 5,010 geometric problems with corresponding annotated programs, which illustrate the solving process of the given problems. Compared with another publicly available dataset, [GeoS](geos), GeoQA is 25 times larger, and its program annotations can provide a practical testbed for future research on explicit and explainable numerical reasoning.
Provide a detailed description of the following dataset: GeoQA
GeoS
**GeoS** is a dataset for automatic math problem solving. It is a dataset of SAT plane geometry questions where every question has a textual description in English accompanied by a diagram and multiple choices. Questions and answers are compiled from previous official SAT exams and practice exams offered by the College Board. We annotate ground-truth logical forms for all questions in the dataset.
Provide a detailed description of the following dataset: GeoS
XL-BEL
**XL-BEL** is a benchmark for cross-lingual biomedical entity linking. The benchmark spans 10 typologically diverse languages.
Provide a detailed description of the following dataset: XL-BEL
CoDesc
**CoDesc** is a large dataset of 4.2M Java source code units paired with parallel natural language descriptions, collected from code search and code summarization studies.
Provide a detailed description of the following dataset: CoDesc
D-OCC
**D-OCC** is a large-scale dataset of 5,617 dialogues to enable fine-grained evaluation and analysis of various dialogue systems. It is used to study common grounding in dynamic environments.
Provide a detailed description of the following dataset: D-OCC
Neural Closure Models - Runs
The following are all the runs used to generate figures in the paper. Every experiment solves the corresponding high- and low-fidelity model to generate the training, validation, and prediction data.
Provide a detailed description of the following dataset: Neural Closure Models - Runs
OTTers
**OTTers** is a dataset of human one-turn topic transitions. In this task, models must connect two topics in a cooperative and coherent manner, by generating a "bridging" utterance connecting the new topic to the topic of the previous conversation turn.
Provide a detailed description of the following dataset: OTTers
Instantiation Dataset
**Instantiation** is a dataset for the task of instantiation detection.
Provide a detailed description of the following dataset: Instantiation Dataset
LIGHT-Quests
**LIGHT-Quests** is an extension of LIGHT, a large-scale crowd-sourced fantasy text-game, to generate a dataset of quests. These contain natural language motivations paired with in-game goals and human demonstrations; completing a quest might require dialogue or actions (or both).
Provide a detailed description of the following dataset: LIGHT-Quests
UW-IS
**UW-IS** (UW Indoor Scenes) is a dataset for object recognition in indoor environments comprising scene images from two different environments, namely, a living room and a mock warehouse.
Provide a detailed description of the following dataset: UW-IS
MT40K
The **MT40K** dataset for predicting malware threat intelligence is a collection of 40,000 triples generated from 27,354 unique entities and 34 relations. The corpus consists of approximately 1,100 de-identified plain text threat reports written between 2006-2021 and all CVE vulnerability descriptions created between 1990 to 2021. The annotated keyphrases were classified into entities derived from semantic categories defined in malware threat ontologies.
Provide a detailed description of the following dataset: MT40K
MoleculeNet
**MoleculeNet** is a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance.
Provide a detailed description of the following dataset: MoleculeNet
Data Collected with Package Delivery Quadcopter Drone
This experiment was performed in order to empirically measure the energy use of small, electric Unmanned Aerial Vehicles (UAVs). We autonomously direct a DJI® Matrice 100 (M100) drone to take off, carry a range of payload weights on a triangular flight pattern, and land. Between flights, we varied specified parameters through a set of discrete options: payload of 0 g, 250 g and 500 g; altitude during cruise of 25 m, 50 m, 75 m and 100 m; and speed during cruise of 4 m/s, 6 m/s, 8 m/s, 10 m/s and 12 m/s. We simultaneously collect data from a broad array of on-board sensors:

* Wind sensor: FT Technologies FT205 UAV-mountable, pre-calibrated ultrasonic wind sensor with accuracy of $\pm$0.1 m/s and refresh rate of 10 Hz.
* Position: 3DM-GX5-45 GNSS/INS sensor pack. These sensors use a built-in Kalman filtering system to fuse the GPS and IMU data. The sensor has a maximum output rate of 10 Hz with accuracy of $\pm$2 m RMS horizontal, $\pm$5 m RMS vertical.
* Current and voltage: Mauch Electronics PL-200 sensor. This sensor can record currents up to 200 A and voltages up to 33 V. Analogue readings from the sensor were converted into a digital format using an 8-channel 17-bit analogue-to-digital converter (ADC).

The number of flights performed varying operational parameters (payload, altitude, speed) was 196. In addition, 13 recordings were done to assess the drone's ancillary power and hover conditions.
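Given the logged current and voltage traces, the electrical energy of a flight can be estimated by integrating instantaneous power over time. A minimal sketch (the sample spacing and toy readings are illustrative assumptions, not values from the dataset):

```python
# Trapezoidal integration of electrical power P = V * I over a logged flight.
def flight_energy_wh(voltage_v, current_a, dt_s):
    """Approximate energy in watt-hours from synchronized V/I samples."""
    energy_j = 0.0
    for i in range(1, len(voltage_v)):
        p0 = voltage_v[i - 1] * current_a[i - 1]
        p1 = voltage_v[i] * current_a[i]
        energy_j += 0.5 * (p0 + p1) * dt_s  # trapezoid between samples
    return energy_j / 3600.0                # joules -> watt-hours

# Toy 4-sample trace at an assumed 1 Hz logging rate.
volts = [25.0, 24.9, 24.8, 24.8]
amps = [20.0, 22.0, 21.0, 20.5]
e_wh = flight_energy_wh(volts, amps, dt_s=1.0)
```

With the real logs, the power-sensor timestamps would determine `dt_s` (or the trapezoids would use per-sample time deltas).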
Provide a detailed description of the following dataset: Data Collected with Package Delivery Quadcopter Drone
Tc1 Mouse cerebellum atlas
[![DOI](https://zenodo.org/badge/166476589.svg)](https://zenodo.org/badge/latestdoi/166476589)

This mouse cerebellar atlas can be used for mouse cerebellar morphometry. We recommend using the [Multi Atlas Segmentation and Morphometric Analysis Toolkit (MASMAT) for mouse brain MRI](https://github.com/dancebean/multi-atlas-segmentation) along with the other [mouse brain atlases](../../../) in this repo.

## Reference/citation

- If you're using this mouse MRI cerebellar atlas in your paper, we ask you to please kindly cite the following papers:
  - Ma, D., Cardoso, M. J., Zuluaga, M. A., Modat, M., Powell, N. M., Wiseman, F. K., Cleary, J. O., Sinclair, B., Harrison, I. F., Siow, B., Popuri, K., Lee, S., Matsubara, J. A., Sarunic, M. V, Beg, M. F., Tybulewicz, V. L. J., Fisher, E. M. C., Lythgoe, M. F., & Ourselin, S. (2020). **Substantially thinner internal granular layer and reduced molecular layer surface in the cerebellum of the Tc1 mouse model of Down Syndrome – a comprehensive morphometric analysis with active staining contrast-enhanced MRI**. NeuroImage, 117271. https://doi.org/10.1016/j.neuroimage.2020.117271
  - Ma, D., Cardoso, M. J., Zuluaga, M. A., Modat, M., Powell, N., Wiseman, F., Tybulewicz, V., Fisher, E., Lythgoe, M. F., & Ourselin, S. (2015). **Grey Matter Sublayer Thickness Estimation in the Mouse Cerebellum**. In Medical Image Computing and Computer Assisted Intervention 2015 (pp. 644–651). https://doi.org/10.1007/978-3-319-24574-4_77
Provide a detailed description of the following dataset: Tc1 Mouse cerebellum atlas
Multi-template MRI mouse brain atlas
[![DOI](https://zenodo.org/badge/166476589.svg)](https://zenodo.org/badge/latestdoi/166476589)

Mouse Brain MRI atlas (both in-vivo and ex-vivo) (repository relocated from the [original webpage](http://cmic.cs.ucl.ac.uk/staff/da_ma/multi_atlas/))

## List of atlases

- [**FVB_NCrl**](https://github.com/dancebean/mouse-brain-atlas/tree/master/FVB_NCrl): Brain MRI atlas of the wild-type `FVB_NCrl` mouse strain (used as the background strain for the `rTg4510`, a tauopathy model in which mice express a repressible form of human tau containing the P301L mutation that has been linked with familial frontotemporal dementia).
- [**NeAt**](https://github.com/dancebean/mouse-brain-atlas/tree/master/NeAt): Brain MRI atlas of the wild-type `C57BL/6J` mouse strain. The atlas was created based on the original [`MRM NeAt`](http://brainatlas.mbi.ufl.edu/) mouse brain atlas (template images reoriented and bias-corrected, left/right structure labels separated, and 4th ventricle manual segmentation added).
- [**Tc1 Cerebellum**](https://github.com/dancebean/mouse-brain-atlas/tree/master/Tc1_Cerebellum/): Tc1 mouse cerebellar cortical sublayer lobules. This mouse cerebellar atlas can be used for mouse cerebellar morphometry.

## Citation

- If you use the segmented brain structures, or use the atlas along with the [automatic mouse brain MRI segmentation tools](https://github.com/dancebean/multi-atlas-segmentation), we ask you to kindly cite the following papers:
  - Ma D, Cardoso MJ, Modat M, Powell N, Wells J, Holmes H, Wiseman F, Tybulewicz V, Fisher E, Lythgoe MF, Ourselin S. **Automatic structural parcellation of mouse brain MRI using multi-atlas label fusion.** PloS one. 2014 Jan 27;9(1):e86576. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0086576
  - Ma D, Holmes HE, Cardoso MJ, Modat M, Harrison IF, Powell NM, O'Callaghan J, Ismail O, Johnson RA, O'Neill MJ, Collins EC, Mirza F. Beg, Karteek Popuri, Mark F. Lythgoe, and Sebastien Ourselin. **Study the longitudinal in vivo and cross-sectional ex vivo brain volume difference for disease progression and treatment effect on mouse model of tauopathy using automated MRI structural parcellation.** Frontiers in Neuroscience. 2019;13:11. https://www.frontiersin.org/articles/10.3389/fnins.2019.00011
- If you use the brain MR images of the `FVB_NCrl` mouse strain (the wildtype background of rTg4510), we ask you to kindly cite the following papers:
  - Wells JA, O'Callaghan JM, Holmes HE, Powell NM, Johnson RA, Siow B, Torrealdea F, Ismail O, Walker-Samuel S, Golay X, Rega M. **In vivo imaging of tau pathology using multi-parametric quantitative MRI.** Neuroimage. 2015 May 1;111:369-78. https://www.sciencedirect.com/science/article/pii/S105381191500124X
  - Holmes HE, Colgan N, Ismail O, Ma D, Powell NM, O'Callaghan JM, Harrison IF, Johnson RA, Murray TK, Ahmed Z, Heggenes M. **Imaging the accumulation and suppression of tau pathology using multiparametric MRI.** Neurobiology of aging. 2016 Mar 1;39:184-94. https://www.sciencedirect.com/science/article/pii/S0197458015006053
  - Holmes HE, Powell NM, Ma D, Ismail O, Harrison IF, Wells JA, Colgan N, O'Callaghan JM, Johnson RA, Murray TK, Ahmed Z. **Comparison of in vivo and ex vivo MRI for the detection of structural abnormalities in a mouse model of tauopathy.** Frontiers in neuroinformatics. 2017 Mar 31;11:20. https://www.frontiersin.org/articles/10.3389/fninf.2017.00020/full
- If you're using the [mouse MRI T2* Active Staining Cerebellar atlas](Tc1_Cerebellum), we ask you to please kindly cite the following papers:
  - Ma, D., Cardoso, M. J., Zuluaga, M. A., Modat, M., Powell, N. M., Wiseman, F. K., Cleary, J. O., Sinclair, B., Harrison, I. F., Siow, B., Popuri, K., Lee, S., Matsubara, J. A., Sarunic, M. V, Beg, M. F., Tybulewicz, V. L. J., Fisher, E. M. C., Lythgoe, M. F., & Ourselin, S. (2020). Substantially thinner internal granular layer and reduced molecular layer surface in the cerebellum of the Tc1 mouse model of Down Syndrome – a comprehensive morphometric analysis with active staining contrast-enhanced MRI. NeuroImage, 117271. https://doi.org/10.1016/j.neuroimage.2020.117271
  - Ma, D., Cardoso, M. J., Zuluaga, M. A., Modat, M., Powell, N., Wiseman, F., Tybulewicz, V., Fisher, E., Lythgoe, M. F., & Ourselin, S. (2015). Grey Matter Sublayer Thickness Estimation in the Mouse Cerebellum. In Medical Image Computing and Computer Assisted Intervention 2015 (pp. 644–651). https://doi.org/10.1007/978-3-319-24574-4_77

## Reference

- For the original information on the `NeAt` atlas, please refer to the website http://brainatlas.mbi.ufl.edu/ and the following two reference papers:
  - Ma Yu, Smith David, Hof Patrick R, Foerster Bernd, Hamilton Scott, Blackband Stephen J, Yu Mei, Benveniste Helene. **In Vivo 3D Digital Atlas Database of the Adult C57BL/6J Mouse Brain by Magnetic Resonance Microscopy**. Front. Neuroanat. 2, 1 (2008).
  - Ma Yu, Hof P R, Grant S C, Blackband S J, Bennett R, Slatest L, McGuigan M D, Benveniste H. **A three-dimensional digital atlas database of the adult C57BL/6J mouse brain by magnetic resonance microscopy**. Neuroscience 135, 1203–15 (2005).

## Funding

The work in this repository received funding from EPSRC, the UCL Leonard Wolfson Experimental Neurology Centre, the Medical Research Council (MRC), the NIHR Biomedical Research Unit (Dementia) at UCL and the National Institute for Health Research University College London Hospitals Biomedical Research Centre, the UK Regenerative Medicine Platform Safety Hub, the King's College London and UCL Comprehensive Cancer Imaging Centre (CRUK & EPSRC, in association with the MRC and DoH (England)), the UCL Faculty of Engineering funding scheme, the Alzheimer Society Research Program from Alzheimer Society Canada, NSERC, CIHR, MSFHR Canada, Eli Lilly and Company, the Wellcome Trust, the Francis Crick Institute, Cancer Research UK, and a University of Melbourne McKenzie Fellowship.
Provide a detailed description of the following dataset: Multi-template MRI mouse brain atlas
ESD
**ESD** is an Emotional Speech Database for voice conversion research. The ESD database consists of 350 parallel utterances spoken by 10 native English and 10 native Chinese speakers and covers 5 emotion categories (neutral, happy, angry, sad and surprise). More than 29 hours of speech data were recorded in a controlled acoustic environment. The database is suitable for multi-speaker and cross-lingual emotional voice conversion studies.
Provide a detailed description of the following dataset: ESD
D3DFACS
The D3DFACS dataset is a dynamic 3D facial expression data set based on the Facial Action Coding System. It contains Action Unit (AU) sequences from 10 people, with 519 sequences in total. The peak image of each expression sequence has been manually FACS coded by a certified expert. Registered meshes in FLAME mesh topology are available under https://flame.is.tue.mpg.de/downloads
Provide a detailed description of the following dataset: D3DFACS
H01
The **H01** dataset is a 1.4 petabyte rendering of a small sample of human brain tissue, released by a collaboration between the Lichtman Laboratory at Harvard University and Google. The H01 sample was imaged at 4nm-resolution by serial section electron microscopy, reconstructed and annotated by automated computational techniques, and analyzed for preliminary insights into the structure of the human cortex. The dataset comprises imaging data that covers roughly one cubic millimeter of brain tissue, and includes tens of thousands of reconstructed neurons, millions of neuron fragments, 130 million annotated synapses, 104 proofread cells, and many additional subcellular annotations and structures. H01 is thus far the largest sample of brain tissue imaged and reconstructed in this level of detail, in any species, and the first large-scale study of synaptic connectivity in the human cortex that spans multiple cell types across all layers of the cortex. The primary goals of this project are to produce a novel resource for studying the human brain and to improve and scale the underlying connectomics technologies. The dataset can be browsed online using the [Neuroglancer browser interface](https://h01-release-dot-neuroglancer-demo.appspot.com/#!gs://h01-release/assets/neuroglancer_states/20210601/c3_library.json).
Provide a detailed description of the following dataset: H01
DialogSum
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues with corresponding manually labeled summaries and topics. This work was accepted to Findings of ACL 2021; the paper is available at <https://arxiv.org/pdf/2105.06762.pdf>. If you use our dataset, please cite our paper.

#### Dialogue Data

We collect dialogue data for DialogSum from three public dialogue corpora, namely DailyDialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure and travel. Most conversations take place between friends, colleagues, and between service providers and customers. Compared with previous datasets, dialogues from DialogSum have distinct characteristics:

* Set in rich real-life scenarios, including more diverse task-oriented scenarios;
* Have clear communication patterns and intents, which makes them valuable summarization sources;
* Have a reasonable length, which suits the purpose of automatic summarization.

#### Summaries

We ask annotators to summarize each dialogue based on the following criteria:

* Convey the most salient information;
* Be brief;
* Preserve important named entities within the conversation;
* Be written from an observer perspective;
* Be written in formal language.

#### Topics

In addition to summaries, we also ask annotators to write a short topic for each dialogue, which can be potentially useful for future work, e.g. generating summaries by leveraging topic information.

Image source: [https://arxiv.org/pdf/2105.06762.pdf](https://arxiv.org/pdf/2105.06762.pdf)
Provide a detailed description of the following dataset: DialogSum
Classic ECN AQM Fall-Back
Clickable heat-map visualizations of the experiments run to quantify the Classic ECN AQM problem and to evaluate the success of the Classic AQM Detection and Fall-back algorithm. Clicking through gives access to whisker-plot summary results, more detailed clickable heat-maps and time-series plots of all the variables in each experiment run.
Provide a detailed description of the following dataset: Classic ECN AQM Fall-Back
iMet Collection
A dataset for fine-grained art attribute recognition introduced in the 6th FGVC Workshop at CVPR 2019. It is a high-quality artwork image dataset with professional photographs of artworks from The Metropolitan Museum of Art and attribute labels curated or verified by experts.
Provide a detailed description of the following dataset: iMet Collection
FED
The FED dataset is constructed by annotating a set of human-system and human-human conversations with eighteen fine-grained dialog qualities.
Provide a detailed description of the following dataset: FED
Com2Sense
Complementary Commonsense (**Com2Sense**) is a dataset for benchmarking the commonsense reasoning ability of NLP models. This dataset contains 4k pairs of true/false statements. The dataset is crowdsourced and enhanced with an adversarial model-in-the-loop setup to incentivize challenging samples. To facilitate a systematic analysis of commonsense capabilities, the dataset is designed along the dimensions of knowledge domains, reasoning scenarios and numeracy.
Provide a detailed description of the following dataset: Com2Sense
Semi-iNat
Semi-iNat is a challenging dataset for semi-supervised classification, with a long-tailed distribution of classes, fine-grained categories, and domain shifts between labeled and unlabeled data. The data is obtained from iNaturalist, a community-driven project aimed at collecting observations of biodiversity. The dataset comes with standard training, validation and test sets. The training set consists of:

* labeled images from 810 species, where around 10% of the images are labeled;
* unlabeled images from the same set of classes as the labeled images (in-class), plus images from a different set of classes (out-of-class). The out-of-class species are guaranteed to share a phylum with species in the labeled set. This reflects a common scenario where a coarser taxonomic label of an image can be easily obtained.
Provide a detailed description of the following dataset: Semi-iNat
FacetSum
**FacetSum** is a faceted summarization dataset for scientific documents. FacetSum has been built on Emerald journal articles, covering a diverse range of domains. Different from traditional document-summary pairs, FacetSum provides multiple summaries, each targeted at specific sections of a long document, including the purpose, method, findings, and value.
Provide a detailed description of the following dataset: FacetSum
ClueWeb09
The ClueWeb09 dataset was created to support research on information retrieval and related human language technologies. It consists of about 1 billion web pages in ten languages that were collected in January and February 2009. The dataset is used by several tracks of the TREC conference.
Provide a detailed description of the following dataset: ClueWeb09
TRECDD
The dataset used for the TREC 2017 Dynamic Domain Track consists of two domains: Ebola and New York Times.

1.1 Ebola

The Ebola dataset was crawled by Juliana Freire (NYU, juliana dot freire at nyu dot edu), Kien Pham (NYU), Peter Landwehr (Giant Oak, peter dot landwehr at giantoak dot com) and Lewis McGibbney (JPL, Lewis dot J dot Mcgibbney at jpl dot nasa dot gov). The Ebola dataset contains records related to the Ebola outbreak in Africa in 2014-2015. The original dataset includes tweets relating to the outbreak, web pages from sites hosted in the affected countries, as well as PDF documents from websites such as the World Health Organization, the Financial Tracking Service and The World Bank. Such information resources are designed to provide information to citizens and aid workers on the ground.

1.2 New York Times

The New York Times dataset was published by Evan Sandhaus in 2008 under LDC Catalog No. LDC2008T19. It consists of articles published in the New York Times from January 1, 1987 to June 19, 2007, with metadata provided by the New York Times Newsroom, the New York Times Indexing Service and the online production staff at nytimes.com. Most articles are manually summarized and tagged by professional staff. The original form of this dataset is News Industry Text Format (NITF). This dataset can aid research in document categorization, information retrieval, entity extraction, etc.
Provide a detailed description of the following dataset: TRECDD
BugClassify
Dataset of 5,591 labeled issue tickets. Originally created by Herzig et al. in : "It’s Not a Bug, It’s a Feature: How Misclassification Impacts Bug Prediction" ([paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2013/05/icse2013-bugclassify.pdf))
Provide a detailed description of the following dataset: BugClassify
OntoGUM
**OntoGUM** is an OntoNotes-like coreference dataset converted from GUM, an English corpus covering 12 genres, using deterministic rules.
Provide a detailed description of the following dataset: OntoGUM
ConvoSumm
**ConvoSumm** is a suite of four datasets to evaluate a model’s performance on a broad spectrum of conversation data.
Provide a detailed description of the following dataset: ConvoSumm
EQA
The **EQA** (Embodied Question Answering) dataset is a dataset of visual questions and answers grounded in House3D. For this dataset an agent is spawned at a random location in a 3D environment and asked a question (e.g., "What color is the car?"). In order to answer, the agent must first intelligently navigate to explore the environment, gather necessary visual information through first-person vision, and then answer the question ("orange").
Provide a detailed description of the following dataset: EQA
Everybody Dance Now
**Everybody Dance Now** is a dataset of videos that can be used for training and motion transfer. It contains long single-dancer videos that can be used to train and evaluate the model. All subjects have consented to allowing the data to be used for research purposes.
Provide a detailed description of the following dataset: Everybody Dance Now
PDE dataset
Contains data of parametric PDEs:

- Burgers' equation
- Darcy's flow
- Navier-Stokes equation
Provide a detailed description of the following dataset: PDE dataset
TaL Corpus
The Tongue and Lips (TaL) corpus is a multi-speaker corpus of ultrasound images of the tongue and video images of the lips. This corpus contains synchronised imaging data of extraoral (lips) and intraoral (tongue) articulators from 82 native speakers of English. The TaL corpus consists of two datasets:

* TaL1 is a single-speaker dataset containing data of one professional voice talent, a male native speaker of English, recorded over six sessions.
* TaL80 is a multi-speaker dataset containing data of 81 native speakers of English without voice talent experience, each recorded over a single session.

Image source: [https://ultrasuite.github.io/data/tal_corpus/](https://ultrasuite.github.io/data/tal_corpus/)
Provide a detailed description of the following dataset: TaL Corpus
MRS
**MRS** is a multilingual reply suggestion dataset covering ten languages. MRS can be used to compare two families of models: 1) retrieval models that select the reply from a fixed set and 2) generation models that produce the reply from scratch. MRS therefore complements existing cross-lingual generalization benchmarks that focus on classification and sequence labeling tasks.
Provide a detailed description of the following dataset: MRS
CCPM
**Introduction** CCPM is a large Chinese classical poetry matching dataset that can be used for poetry matching, understanding and translation. The main task of this dataset is: given a description in modern Chinese, the model is supposed to select one line of Chinese classical poetry from four candidates that semantically match the given description most. **Size** It contains 27,218 instances in total, which are split into training (21,778), validation (2,720) and test (2,720) sets. **Format** Each instance is composed of translation (the description in modern Chinese, a string), choice (four candidate lines of Chinese classical poetry, a list) and answer (the index of the correct line, an integer between 0 and 3).
Provide a detailed description of the following dataset: CCPM
The 'Call me sexist but' Dataset (CMSB)
Tweets and items from psychological scales for sexism detection with counterfactual examples. This dataset consists of three types of 'short-text' content: 1. social media posts (tweets), 2. psychological survey items, and 3. synthetic adversarial modifications of the former two categories. The tweet data can be further divided into 3 separate datasets based on their source: 1.1 the hostile sexism dataset, 1.2 the benevolent sexism dataset, and 1.3 the callme sexism dataset. 1.1 and 1.2 are pre-existing datasets obtained from Waseem, Z., & Hovy, D. (2016) and Jha, A., & Mamidi, R. (2017) that we re-annotated (see our paper and data statement for further information). The rationale for including these datasets specifically is that they feature a variety of sexist expressions in real conversational (social media) settings. In particular, they feature expressions that range from overtly antagonizing the minority gender through negative stereotypes (1.1) to leveraging positive stereotypes to subtly dismiss it as less capable and fragile (1.2). The callme sexism dataset (1.3) was collected by us based on the presence of the phrase 'call me sexist but' in tweets. The rationale behind this choice of query was that several Twitter users opine potentially sexist comments and signal so using this phrase, which arguably serves as a disclaimer for sexist opinions. The survey items (2) pertain to attitudinal surveys designed to measure sexist attitudes and gender bias in participants. We provide a detailed account of our selection procedure in our paper. Finally, the adversarial examples are generated by crowdworkers from Amazon Mechanical Turk by making minimal changes to tweets and scale items in order to change sexist examples to non-sexist ones.
We hope that these examples will help us control for typical confounds in non-sexist data (e.g., topic, civility) and lead to datasets with fewer biases, and consequently allow us to train more robust machine learning models. We only asked to turn sexist examples into non-sexist ones, and not vice versa, for ethical reasons. The dataset is annotated to capture cases where text is sexist because of its content (what the speaker believes) or its phrasing (the speaker's choice of words). We explain the rationale for this codebook in our paper.
Provide a detailed description of the following dataset: The 'Call me sexist but' Dataset (CMSB)
Webly-Reference SR Dataset
Webly-Reference SR dataset is a test dataset for evaluating Ref-SR methods. It has the following advantages: * Collected in a more realistic way: For every input image, its reference image is searched using Google Image. * More diverse than previous datasets.
Provide a detailed description of the following dataset: Webly-Reference SR Dataset
CPNet
The **CPNet** dataset is a collection of 2,334 models in 25 categories, based on ShapeNetCore, and includes 1,000+ correspondence sets with 104,861 points.
Provide a detailed description of the following dataset: CPNet
MixATIS
The dataset is constructed from the single-intent ATIS dataset. It is a publicly available multi-intent dataset, which can be downloaded from https://github.com/LooperXX/AGIF/data
Provide a detailed description of the following dataset: MixATIS
FLORES-101
The FLORES evaluation benchmark consists of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated into 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond. Paper: [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://scontent-lhr8-1.xx.fbcdn.net/v/t39.8562-6/196203317_1861942553982349_5142503689226033347_n.pdf?_nc_cat=110&ccb=1-3&_nc_sid=ae5e01&_nc_ohc=ibkQ1m-Hhn4AX-dmpfR&_nc_ht=scontent-lhr8-1.xx&_nc_rmd=260&oh=dd43ca179eae3cc1b986ae06eb6de20d&oe=60DE1F0D)
Provide a detailed description of the following dataset: FLORES-101