SIP
The **Salient Person** dataset (**SIP**) contains 929 salient person samples with different poses and illumination conditions. Source: [Accurate RGB-D Salient Object Detection via Collaborative Learning](https://arxiv.org/abs/2007.11782) Image Source: [https://arxiv.org/pdf/1907.06781.pdf](https://arxiv.org/pdf/1907.06781.pdf)
Provide a detailed description of the following dataset: SIP
NJU2K
**NJU2K** is a large RGB-D dataset containing 1,985 image pairs. The stereo images were collected from the Internet and 3D movies, while photographs were taken by a Fuji W3 camera. Source: [Bifurcated Backbone Strategy for RGB-D Salient Object Detection](https://arxiv.org/abs/2007.02713) Image Source: [Depth saliency based on anisotropic center-surround difference](https://doi.org/10.1109/ICIP.2014.7025222)
Provide a detailed description of the following dataset: NJU2K
NLPR
The **NLPR** dataset for salient object detection consists of 1,000 image pairs captured by a standard Microsoft Kinect with a resolution of 640×480. The images include indoor and outdoor scenes (e.g., offices, campuses, streets and supermarkets).
Provide a detailed description of the following dataset: NLPR
LFSD
The **Light Field Saliency Database** (**LFSD**) contains 100 light fields with 360×360 spatial resolution. A rough focal stack and an all-focus image are provided for each light field. The images in this dataset usually have one salient foreground object and a background with good color contrast.
Provide a detailed description of the following dataset: LFSD
Cam2BEV
The [dataset](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data) contains two subsets of synthetic, semantically segmented road-scene images, created for developing and applying the methodology described in the paper **"A Sim2Real Deep Learning Approach for the Transformation of Images from Multiple Vehicle-Mounted Cameras to a Semantically Segmented Image in Bird's Eye View"** ([IEEE Xplore](https://ieeexplore.ieee.org/document/9294462), [arXiv](http://arxiv.org/abs/2005.04078), [YouTube](https://www.youtube.com/watch?v=TzXuwt56a0E)). The dataset can be used through the official code implementation of the Cam2BEV methodology on [GitHub](https://github.com/ika-rwth-aachen/Cam2BEV).

| Dataset | # Training Samples | # Validation Samples | # Vehicle Cameras | # Semantic Classes | Contained Images (examples) |
| --- | --- | --- | --- | --- | --- |
| [Dataset 1](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/tree/master/1_FRLR): 360° Surround | 33199 | 3731 | 4 (front, rear, left, right) | 30 (CityScapes) | [front camera](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/front.png), [rear camera](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/rear.png), [left camera](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/left.png), [right camera](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/right.png), [bird's eye view](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/bev.png), [bird's eye view incl. occlusion](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/bev+occlusion.png), [homography view](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/1_FRLR/examples/homography.png) |
| [Dataset 2](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/tree/master/2_F): Front Camera only | 32246 | 3172 | 1 (front) | 30 (CityScapes) | [front camera](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/2_F/examples/front.png), [bird's eye view](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/2_F/examples/bev.png), [bird's eye view incl. occlusion](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/2_F/examples/bev+occlusion.png), [homography view](https://gitlab.ika.rwth-aachen.de/cam2bev/cam2bev-data/-/raw/master/2_F/examples/homography.png) |
Provide a detailed description of the following dataset: Cam2BEV
ssTEM
We provide two image stacks where each contains 20 sections from serial section Transmission Electron Microscopy (ssTEM) of the Drosophila melanogaster third instar larva ventral nerve cord. Both stacks measure approx. 4.7 × 4.7 × 1 microns with a resolution of 4.6 × 4.6 nm/pixel and a section thickness of 45-50 nm. In addition to the raw image data, we provide for the first stack a dense labeling of neuron membranes (including orientation and junction), mitochondria, synapses and glia/extracellular space. The first stack serves as a training dataset, and a second stack of the same dimension can be used as a test dataset.
Provide a detailed description of the following dataset: ssTEM
VeRi-776
**VeRi-776** is a vehicle re-identification dataset which contains 49,357 images of 776 vehicles from 20 cameras. The dataset was collected in a real-world traffic scenario, close to the setting of CityFlow. Annotations include bounding boxes, types, colors, and brands.
Provide a detailed description of the following dataset: VeRi-776
UNSW-NB15
**UNSW-NB15** is a network intrusion dataset. It contains nine different attack types, including DoS, worms, backdoors, and fuzzers. The dataset contains raw network packets. The training set comprises 175,341 records and the testing set 82,332 records, drawn from both attack and normal traffic.
Provide a detailed description of the following dataset: UNSW-NB15
FarsTail
Natural Language Inference (NLI), also called Textual Entailment, is an important task in NLP with the goal of determining the inference relationship between a premise p and a hypothesis h. It is a three-class problem, where each pair (p, h) is assigned to one of these classes: "ENTAILMENT" if the hypothesis can be inferred from the premise, "CONTRADICTION" if the hypothesis contradicts the premise, and "NEUTRAL" if neither of the above holds. There are large datasets such as SNLI, MNLI, and SciTail for NLI in English, but there are few datasets for low-resource languages like Persian. Persian (Farsi) is a pluricentric language spoken by around 110 million people in countries like Iran, Afghanistan, and Tajikistan. **FarsTail** is the first relatively large-scale Persian dataset for the NLI task. A total of 10,367 samples are generated from a collection of 3,539 multiple-choice questions. The train, validation, and test portions include 7,266, 1,537, and 1,564 instances, respectively.
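The three-way labeling scheme described above can be illustrated with a minimal sketch. The helper function and the sample pair below are hypothetical illustrations (and in English for readability; actual FarsTail samples are in Persian):

```python
# The three NLI classes used by FarsTail (SNLI/MNLI-style labeling).
NLI_LABELS = {"ENTAILMENT", "CONTRADICTION", "NEUTRAL"}

def make_sample(premise: str, hypothesis: str, label: str) -> dict:
    """Package one (premise, hypothesis) pair with its gold label."""
    if label not in NLI_LABELS:
        raise ValueError(f"unknown NLI label: {label}")
    return {"premise": premise, "hypothesis": hypothesis, "label": label}

# Hypothetical example: the hypothesis follows from the premise.
sample = make_sample(
    premise="A total of 10,367 samples were generated from 3,539 questions.",
    hypothesis="The dataset contains more than 10,000 samples.",
    label="ENTAILMENT",
)
print(sample["label"])  # ENTAILMENT
```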
Provide a detailed description of the following dataset: FarsTail
CUHK-PEDES
The **CUHK-PEDES** dataset is a caption-annotated pedestrian dataset. It contains 40,206 images over 13,003 persons. Images are collected from five existing person re-identification datasets (CUHK03, Market-1501, SSM, VIPeR, and CUHK01), and each image is annotated with 2 text descriptions by crowd-sourcing workers. Sentences incorporate rich details about person appearances, actions, and poses.
Provide a detailed description of the following dataset: CUHK-PEDES
AND Dataset
The **AND Dataset** contains 13,700 handwritten samples and 15 corresponding expert-examined features for each sample. The dataset is released for public use, and the methods can be extended to provide explanations on other verification tasks like face verification and bio-medical comparison. This dataset can serve as the basis and benchmark for future research in explanation-based handwriting verification.
Provide a detailed description of the following dataset: AND Dataset
Object Discovery
The **Object Discovery** dataset was collected by downloading images from the Internet for three classes: airplane, car, and horse. It is significantly larger than earlier datasets and thus diverse in terms of viewpoints, texture, color, etc.
Provide a detailed description of the following dataset: Object Discovery
Open Entity
The **Open Entity** dataset is a collection of about 6,000 sentences with fine-grained entity types annotations. The entity types are free-form noun phrases that describe appropriate types for the role the target entity plays in the sentence. Sentences were sampled from Gigaword, OntoNotes and web articles. On average each sentence has 5 labels.
Provide a detailed description of the following dataset: Open Entity
RITE
**RITE** (Retinal Images vessel Tree Extraction) is a database that enables comparative studies on segmentation or classification of arteries and veins on retinal fundus images, established based on the publicly available DRIVE database (Digital Retinal Images for Vessel Extraction). RITE contains 40 sets of images, equally separated into a training subset and a test subset, the same as DRIVE; the two subsets are built from the corresponding two subsets in DRIVE. Each set contains a fundus photograph, a vessel reference standard, and an Arteries/Veins (A/V) reference standard.

* The fundus photograph is inherited from DRIVE.
* For the training set, the vessel reference standard is a modified version of 1st_manual from DRIVE.
* For the test set, the vessel reference standard is 2nd_manual from DRIVE.
* For the A/V reference standard, four types of vessels are labelled using four colors based on the vessel reference standard: arteries in red, veins in blue, the overlapping of arteries and veins in green, and uncertain vessels in white.
* The fundus photograph is in TIF format; the vessel reference standard and the A/V reference standard are in PNG format.

The dataset is described in more detail in the following paper, which should be cited when the dataset is used in any way: Hu Q, Abràmoff MD, Garvin MK. Automated separation of binary overlapping trees in low-contrast color retinal images. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):436-43. PubMed PMID: 24579170. https://doi.org/10.1007/978-3-642-40763-5_54
Provide a detailed description of the following dataset: RITE
Contract Discovery
**Contract Discovery** is a shared task of semantic retrieval from legal texts, in which legal clauses are extracted from documents given a few examples of similar clauses from other legal acts.
Provide a detailed description of the following dataset: Contract Discovery
UI-PRMD
UI-PRMD is a data set of movements related to common exercises performed by patients in physical therapy and rehabilitation programs. The data set consists of 10 rehabilitation exercises. A sample of 10 healthy individuals repeated each exercise 10 times in front of two sensory systems for motion capturing: a Vicon optical tracker, and a Kinect camera. The data is presented as positions and angles of the body joints in the skeletal models provided by the Vicon and Kinect mocap systems.
Provide a detailed description of the following dataset: UI-PRMD
TVR
**TVR** is a multimodal retrieval dataset that requires systems to understand both videos and their associated subtitle (dialogue) texts, making it more realistic. The dataset contains 109K queries collected on 21.8K videos from 6 TV shows of diverse genres, where each query is associated with a tight temporal window.
Provide a detailed description of the following dataset: TVR
TVQA
The **TVQA** dataset is a large-scale video dataset for video question answering. It is based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle). It includes 152,545 QA pairs from 21,793 TV show clips. The QA pairs are split in the ratio of 8:1:1 for the training, validation, and test sets. The TVQA dataset provides the sequence of video frames extracted at 3 FPS, the corresponding subtitles with the video clips, and the query consisting of a question and four answer candidates. Among the four answer candidates, there is only one correct answer.
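As a rough illustration, an 8:1:1 proportional split of the 152,545 QA pairs can be computed as below. This is only a rounding sketch; the official split sizes may differ slightly from these values:

```python
TOTAL_QA_PAIRS = 152_545  # QA pairs quoted in the TVQA description

# Proportional 8:1:1 train/val/test split, remainder folded into test.
train = TOTAL_QA_PAIRS * 8 // 10
val = TOTAL_QA_PAIRS // 10
test = TOTAL_QA_PAIRS - train - val

print(train, val, test)  # 122036 15254 15255
assert train + val + test == TOTAL_QA_PAIRS  # nothing lost to rounding
```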
Provide a detailed description of the following dataset: TVQA
DialogRE
**DialogRE** is the first human-annotated dialogue-based relation extraction dataset, containing 1,788 dialogues originating from the complete transcripts of the famous American television situation comedy Friends. There are annotations for all occurrences of 36 possible relation types that exist between an argument pair in a dialogue. DialogRE is available in English and Chinese.
Provide a detailed description of the following dataset: DialogRE
Tweebank
**Tweebank** is a collection of English tweets annotated with Universal Dependencies, supporting the study of part-of-speech tagging and syntactic parsing on social media text.
Provide a detailed description of the following dataset: Tweebank
WD50K
**WD50K** is a hyper-relational knowledge graph dataset derived from Wikidata statements, introduced alongside the StarE model. Citation: @inproceedings{StarE, title={Message Passing for Hyper-Relational Knowledge Graphs}, author={Galkin, Mikhail and Trivedi, Priyansh and Maheshwari, Gaurav and Usbeck, Ricardo and Lehmann, Jens}, booktitle={EMNLP}, year={2020} }
Provide a detailed description of the following dataset: WD50K
ScanObjectNN
**ScanObjectNN** is a real-world dataset comprising 2,902 3D objects in 15 categories. It is a challenging point cloud classification dataset due to background clutter, missing parts, and deformations.
Provide a detailed description of the following dataset: ScanObjectNN
COMA
**CoMA** contains 17,794 meshes of the human face in various expressions.
Provide a detailed description of the following dataset: COMA
ToLD-Br
The **Toxic Language Detection for Brazilian Portuguese** (**ToLD-Br**) is a dataset with tweets in Brazilian Portuguese annotated according to different toxic aspects. Source: [https://github.com/JAugusto97/ToLD-Br](https://github.com/JAugusto97/ToLD-Br)
Provide a detailed description of the following dataset: ToLD-Br
MIMIC-CXR
**MIMIC-CXR** from Massachusetts Institute of Technology presents 371,920 chest X-rays associated with 227,943 imaging studies from 65,079 patients. The studies were performed at Beth Israel Deaconess Medical Center in Boston, MA. Source: [Can we trust deep learning models diagnosis? The impact of domain shift in chest radiograph classification](https://arxiv.org/abs/1909.01940) Image Source: [https://arxiv.org/abs/1901.07042](https://arxiv.org/abs/1901.07042)
Provide a detailed description of the following dataset: MIMIC-CXR
CheXpert
The **CheXpert** dataset contains 224,316 chest radiographs of 65,240 patients with both frontal and lateral views available. The task is to do automated chest x-ray interpretation, featuring uncertainty labels and radiologist-labeled reference standard evaluation sets.
Provide a detailed description of the following dataset: CheXpert
DeepFix
**DeepFix** is a program repair dataset for fixing compiler errors in C programs. It enables research on automatically fixing programming errors using deep learning.
Provide a detailed description of the following dataset: DeepFix
DUC 2004
The **DUC 2004** dataset is a dataset for document summarization, designed and used for testing only. It consists of 500 news articles, each paired with four human-written summaries. Specifically, it consists of 50 clusters of Text REtrieval Conference (TREC) documents from the following collections: AP newswire, 1998-2000; New York Times newswire, 1998-2000; Xinhua News Agency (English version), 1996-2000. Each cluster contains on average 10 documents.
Provide a detailed description of the following dataset: DUC 2004
UT-Interaction
The **UT-Interaction** dataset contains videos of continuous executions of 6 classes of human-human interactions: shake-hands, point, hug, push, kick and punch. Ground truth labels for these interactions are provided, including time intervals and bounding boxes. There are a total of 20 video sequences, each around 1 minute long. Each video contains at least one execution per interaction, resulting in 8 executions of human activities per video on average. Several participants with more than 15 different clothing conditions appear in the videos. The videos are taken at a resolution of 720×480 and 30 fps, and the height of a person in the video is about 200 pixels.
Provide a detailed description of the following dataset: UT-Interaction
Microsoft Malware Classification Challenge
The Microsoft Malware Classification Challenge was announced in 2015 along with a publication of a huge dataset of nearly 0.5 terabytes, consisting of disassembly and bytecode of more than 20K malware samples. Apart from serving in the Kaggle competition, the dataset has become a standard benchmark for research on modeling malware behaviour. To date, the dataset has been cited in more than 50 research papers. Here we provide a high-level comparison of the publications citing the dataset. The comparison simplifies finding potential research directions in this field and future performance evaluation of the dataset.
Provide a detailed description of the following dataset: Microsoft Malware Classification Challenge
AVSD
The Audio Visual Scene-Aware Dialog (**AVSD**) dataset, or DSTC7 Track 3, is an audio-visual dataset for dialogue understanding. The goal of the dataset and track was to design systems that generate responses in a dialog about a video, given the dialog history and the audio-visual content of the video. Source: [The Eighth Dialog System Technology Challenge](https://arxiv.org/abs/1911.06394) Image Source: [http://workshop.colips.org/dstc7/papers/DSTC7_Task_3_overview_paper.pdf](http://workshop.colips.org/dstc7/papers/DSTC7_Task_3_overview_paper.pdf)
Provide a detailed description of the following dataset: AVSD
eQASC
This dataset contains 98k 2-hop explanations for questions in the QASC dataset, with annotations indicating if they are valid (~25k) or invalid (~73k) explanations. This repository addresses the current lack of training data for distinguishing valid multihop explanations from invalid ones by providing three new datasets. The main one, eQASC, contains 98k explanation annotations for the multihop question answering dataset [QASC](https://allenai.org/data/qasc), and is the first that annotates multiple candidate explanations for each answer. The second dataset, eQASC-perturbed, is constructed by crowd-sourcing perturbations (while preserving their validity) of a subset of explanations in QASC, to test consistency and generalization of explanation prediction models. The third dataset, eOBQA, is constructed by adding explanation annotations to the [OBQA dataset](https://allenai.org/data/open-book-qa) to test generalization of models trained on eQASC.
Provide a detailed description of the following dataset: eQASC
ImageNet-LT
**ImageNet Long-Tailed** is a subset of the [ImageNet](/dataset/imagenet) dataset consisting of 115.8K images from 1,000 categories, with at most 1,280 images per class and at least 5 images per class. The additional classes of images in ImageNet-2010 are used as the open set.
Provide a detailed description of the following dataset: ImageNet-LT
Places-LT
**Places-LT** has an imbalanced training set with 62,500 images for 365 classes from Places-2. The class frequencies follow a natural power law distribution with a maximum number of 4,980 images per class and a minimum number of 5 images per class. The validation and testing sets are balanced and contain 20 and 100 images per class respectively.
Provide a detailed description of the following dataset: Places-LT
Salinas
**Salinas Scene** is a hyperspectral dataset collected by the 224-band AVIRIS sensor over Salinas Valley, California, and is characterized by high spatial resolution (3.7-meter pixels). The area covered comprises 512 lines by 217 samples. Twenty water absorption bands were discarded: [108-112], [154-167], 224. This image was available only as at-sensor radiance data. It includes vegetables, bare soils, and vineyard fields. The Salinas ground truth contains 16 classes.
Provide a detailed description of the following dataset: Salinas
Math23K
**Math23K** is a dataset created for math word problem solving. It contains 23,162 Chinese problems crawled from the Internet. The dataset was originally introduced in the paper __Deep Neural Solver for Math Word Problems__. The original files are split into train/test, while other research efforts (https://github.com/2003pro/Graph2Tree) perform a train/dev/test split.
Provide a detailed description of the following dataset: Math23K
RST-DT
The Rhetorical Structure Theory (RST) Discourse Treebank consists of 385 Wall Street Journal articles from the Penn Treebank annotated with discourse structure in the RST framework, along with human-generated extracts and abstracts associated with the source documents. In the RST framework (Mann and Thompson, 1988), a text's discourse structure can be represented as a tree in four aspects: (1) the leaves correspond to text fragments called elementary discourse units (the minimal discourse units); (2) the internal nodes of the tree correspond to contiguous text spans; (3) each node is characterized by its nuclearity, or essential unit of information; and (4) each node is also characterized by a rhetorical relation between two or more non-overlapping, adjacent text spans. The data in this release is divided into a training set (347 documents) and a test set (38 documents). All annotations were produced using a discourse annotation tool that can be downloaded from http://www.isi.edu/~marcu/discourse.
Provide a detailed description of the following dataset: RST-DT
SQA
The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
Provide a detailed description of the following dataset: SQA
BC4CHEMD
Introduced by Krallinger et al. in [The CHEMDNER corpus of chemicals and drugs and its annotation principles](https://jcheminf.biomedcentral.com/articles/10.1186/1758-2946-7-S1-S2) **BC4CHEMD** is a collection of 10,000 PubMed abstracts that contain a total of 84,355 chemical entity mentions labeled manually by expert chemistry literature curators.
Provide a detailed description of the following dataset: BC4CHEMD
SherLIiC
**SherLIiC** is a testbed for lexical inference in context (LIiC), consisting of 3,985 manually annotated inference rule candidates (InfCands), accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual relations between Freebase entities extracted from the large entity-linked corpus ClueWeb09. Each InfCand consists of one of these relations, expressed as a lemmatized dependency path, and two argument placeholders, each linked to one or more Freebase types.
Provide a detailed description of the following dataset: SherLIiC
XCOPA
The Cross-lingual Choice of Plausible Alternatives (**XCOPA**) dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around the globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. Source: [https://github.com/cambridgeltl/xcopa](https://github.com/cambridgeltl/xcopa)
Provide a detailed description of the following dataset: XCOPA
DebateSum
**DebateSum** consists of 187,328 debate documents, arguments (which can also be thought of as abstractive summaries, or queries), word-level extractive summaries, citations, and associated metadata organized by topic-year. This data is ready for analysis by NLP systems.
Provide a detailed description of the following dataset: DebateSum
iSAID
**iSAID** contains 655,451 object instances for 15 categories across 2,806 high-resolution images. The images of iSAID are the same as those of the DOTA-v1.0 dataset: they are mainly collected from Google Earth, some are taken by satellite JL-1, and the others by satellite GF-2 of the China Centre for Resources Satellite Data and Application.
Provide a detailed description of the following dataset: iSAID
RuDaS
Logical rules are a popular knowledge representation language in many domains, and neural networks have recently been proposed to support the complex rule induction process. However, existing datasets and evaluation approaches are lacking in various dimensions; for example, different kinds of rules or dependencies between rules are neglected. Moreover, the development of neural approaches requires large amounts of data to learn from and adequate, approximate evaluation measures. **RuDaS** provides a tool for generating diverse datasets and for evaluating neural rule learning systems, including novel performance metrics.
Provide a detailed description of the following dataset: RuDaS
ReClor
Logical reasoning is an important ability to examine, analyze, and critically evaluate arguments as they occur in ordinary language, as defined by the Law School Admission Council. **ReClor** is a dataset extracted from logical reasoning questions of standardized graduate admission examinations.
Provide a detailed description of the following dataset: ReClor
SEN12MS-CR
**SEN12MS-CR** is a multi-modal and mono-temporal data set for cloud removal. It contains observations covering 175 globally distributed Regions of Interest recorded in one of four seasons throughout the year of 2018. For each region, paired and co-registered synthetic aperture radar (SAR) Sentinel-1 measurements as well as cloudy and cloud-free optical multi-spectral Sentinel-2 observations from European Space Agency's Copernicus mission are provided. The Sentinel satellites provide public access data and are among the most prominent satellites in Earth observation.
Provide a detailed description of the following dataset: SEN12MS-CR
ConvAI2
The **ConvAI2** NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1,155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never-seen-before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisting of 100 new personas and over 1,015 dialogs was created by crowdsourced workers. To avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example, "I just got my nails done" is revised as "I love to pamper myself on a regular basis" and "I am on a diet now" is revised as "I need to lose weight." The training, validation and hidden test sets consist of 17,878, 1,000 and 1,015 dialogues, respectively.
Provide a detailed description of the following dataset: ConvAI2
EmpatheticDialogues
The **EmpatheticDialogues** dataset is a large-scale multi-turn empathetic dialogue dataset collected on Amazon Mechanical Turk, containing 24,850 one-to-one open-domain conversations. Each conversation was obtained by pairing two crowd-workers: a speaker and a listener. The speaker is asked to talk about personal emotional experiences. The listener infers the underlying emotion through what the speaker says and responds empathetically. The dataset provides 32 evenly distributed emotion labels.
Provide a detailed description of the following dataset: EmpatheticDialogues
Wizard of Wikipedia
**Wizard of Wikipedia** is a large dataset of conversations directly grounded in knowledge retrieved from Wikipedia. It is used to train and evaluate dialogue systems for knowledgeable open dialogue with clear grounding.
Provide a detailed description of the following dataset: Wizard of Wikipedia
Image-Chat
The **Image-Chat** dataset is a large collection of (image, style trait for speaker A, style trait for speaker B, dialogue between A & B) tuples collected using crowd-workers. Each dialogue consists of consecutive turns by speakers A and B. No particular constraints are placed on the kinds of utterances: the speakers are only asked to use the provided style trait and to respond to the given image and dialogue history in an engaging way. The goal is not just to build a diagnostic dataset but a basis for training models that humans actually want to engage with.
Provide a detailed description of the following dataset: Image-Chat
PPM-100
**PPM-100** (Photographic Portrait Matting) is a portrait matting benchmark with the following characteristics:

- **Fine Annotation** - All images are labeled and checked carefully.
- **Natural Background** - All images use the original background without replacement.
- **Rich Diversity** - The images cover full/half body and various postures.
- **High Resolution** - The resolution of the images is between 1080p and 4K.

The dataset was created by the authors of the real-time matting model MODNet to measure performance on the matting task.
Provide a detailed description of the following dataset: PPM-100
AM-2K
**AM-2K** (Animal Matting 2,000 Dataset) consists of 2,000 high-resolution images collected and carefully selected from websites with open licenses. AM-2K contains 20 categories of animals (alpaca, antelope, bear, camel, cat, cattle, deer, dog, elephant, giraffe, horse, kangaroo, leopard, lion, monkey, rabbit, rhinoceros, sheep, tiger, and zebra), each with 100 real-world images of varied appearance and diverse backgrounds.
Provide a detailed description of the following dataset: AM-2K
GoEmotions
**GoEmotions** is a corpus of 58k carefully curated comments extracted from Reddit, with human annotations to 27 emotion categories or Neutral.

- Number of examples: 58,009.
- Number of labels: 27 + Neutral.
- Maximum sequence length in training and evaluation datasets: 30.

On top of the raw data, the dataset also includes a version filtered based on rater agreement, which contains a train/test/validation split:

- Size of training dataset: 43,410.
- Size of test dataset: 5,427.
- Size of validation dataset: 5,426.

The emotion categories are: admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise.
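A quick sanity check of the taxonomy above, as a minimal sketch: the 27 listed emotion categories plus Neutral yield 28 classes in total.

```python
# The 27 GoEmotions emotion categories listed in the description.
EMOTIONS = [
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise",
]
assert len(EMOTIONS) == 27       # 27 emotion labels ...
LABELS = EMOTIONS + ["neutral"]  # ... plus Neutral
print(len(LABELS))  # 28
```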
Provide a detailed description of the following dataset: GoEmotions
DroneDeploy
From DroneDeploy: We’ve collected a dataset of aerial orthomosaics and elevation images. These have been annotated into 6 different classes: Ground, Water, Vegetation, Cars, Clutter, and Buildings. The resolution of the images is approximately 10cm per pixel which gives them a great level of detail. We’re looking forward to making more data available and encourage more research into the impact this imagery can have in furthering safety, conservation, and efficiency. Image source: [https://arxiv.org/pdf/2012.02024v1.pdf](https://arxiv.org/pdf/2012.02024v1.pdf)
Provide a detailed description of the following dataset: DroneDeploy
fastMRI
The **fastMRI** dataset includes two types of MRI scans, knee MRIs and brain (neuro) MRIs, and contains training, validation, and masked test sets. The de-identified imaging dataset provided by NYU Langone comprises raw k-space data in several sub-dataset groups. Curation of these data is part of an IRB-approved study. Raw and DICOM data have been de-identified via conversion to the vendor-neutral ISMRMRD format and the RSNA clinical trial processor, respectively. In addition, each DICOM image was manually inspected for the presence of any unexpected protected health information (PHI), with spot checking of both metadata and image content. **Knee MRI**: data from more than 1,500 fully sampled knee MRIs obtained on 3 and 1.5 Tesla magnets, and DICOM images from 10,000 clinical knee MRIs also obtained at 3 or 1.5 Tesla. The raw dataset includes coronal proton density-weighted images with and without fat suppression. The DICOM dataset contains coronal proton density-weighted images with and without fat suppression, axial proton density-weighted with fat suppression, sagittal proton density, and sagittal T2-weighted with fat suppression. **Brain MRI**: data from 6,970 fully sampled brain MRIs obtained on 3 and 1.5 Tesla magnets. The raw dataset includes axial T1-weighted, T2-weighted and FLAIR images. Some of the T1-weighted acquisitions included administration of contrast agent.
Provide a detailed description of the following dataset: fastMRI
WHO-COVID19 Dataset
COVID-19 data from the World Health Organization (WHO).
Provide a detailed description of the following dataset: WHO-COVID19 Dataset
MAMS
MAMS is a challenge dataset for aspect-based sentiment analysis (ABSA) in which each sentence contains at least two aspects with different sentiment polarities. The MAMS dataset comes in two versions: one for aspect-term sentiment analysis (ATSA) and one for aspect-category sentiment analysis (ACSA).
Provide a detailed description of the following dataset: MAMS
PEMS-BAY
PEMS-BAY is a dataset for traffic prediction collected by the California Transportation Agencies Performance Measurement System (PeMS). It contains readings from 325 traffic sensors in the Bay Area, aggregated into 5-minute intervals over the period from January 1 to May 31, 2017.
Provide a detailed description of the following dataset: PEMS-BAY
DDRel
**DDRel** is a dataset for interpersonal relation classification in dyadic dialogues. It consists of 6,300 dyadic dialogue sessions between 694 pairs of speakers, with 53,126 utterances in total. It was constructed by crawling movie scripts from IMSDb and annotating a relation label for each session according to 13 pre-defined relationship types. Source: [https://github.com/JiaQiSJTU/DialogueRelationClassification](https://github.com/JiaQiSJTU/DialogueRelationClassification)
Provide a detailed description of the following dataset: DDRel
SYSU-MM01
**SYSU-MM01** is a dataset collected for the visible-infrared re-identification problem. The images in the dataset were obtained from 491 different persons recorded with 4 RGB and 2 infrared cameras. The persons are divided into 3 fixed splits to create training, validation and test sets. The training set contains 20,284 RGB and 9,929 infrared images of 296 persons; the validation set contains 1,974 RGB and 1,980 infrared images of 99 persons. The test set covers the remaining 96 persons, with 3,803 infrared images used as the query set and 301 randomly selected RGB images used as the gallery.
Provide a detailed description of the following dataset: SYSU-MM01
MusicNet
MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the composition. The labels are acquired from musical scores aligned to recordings by dynamic time warping. The labels are verified by trained musicians; we estimate a labeling error rate of 4%. We offer the MusicNet labels to the machine learning and music communities as a resource for training models and a common benchmark for comparing results.
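The description notes that the labels were acquired by aligning musical scores to recordings via dynamic time warping. As a rough illustration of that alignment idea (not MusicNet's actual pipeline, which operates on audio features rather than scalar sequences), here is a minimal DTW that returns the cumulative distance and the warping path between two 1-D sequences:

```python
import numpy as np

def dtw_alignment(x, y):
    """Classic DTW: fill a cumulative-cost matrix, then backtrack the path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover which elements of x align to which of y.
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda ij: D[ij])
    path.append((0, 0))
    return D[n, m], path[::-1]
```

A score note repeated in the performance, for example, simply maps two performance positions onto the same score position in the returned path.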
Provide a detailed description of the following dataset: MusicNet
HateXplain
HateXplain covers multiple aspects of the hate-speech detection problem. Each post in the dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive or normal); the target community (i.e., the community that has been the victim of the hate speech/offensive speech in the post); and the rationales, i.e., the portions of the post on which the labelling decision (as hate, offensive or normal) is based.
Provide a detailed description of the following dataset: HateXplain
RECCON
RECCON is a dataset for the task of recognizing emotion cause in conversations. Each utterance is annotated with an emotion label together with the conversational text spans responsible for eliciting that emotion.
Provide a detailed description of the following dataset: RECCON
CANARD
CANARD is a dataset for question-in-context rewriting that consists of questions each given in a dialog context together with a context-independent rewriting of the question. The context of each question is the dialog utterances that precede the question. CANARD can be used to evaluate question rewriting models that handle important linguistic phenomena such as coreference and ellipsis resolution. CANARD is based on QuAC (Choi et al., 2018)---a conversational reading comprehension dataset in which answers are selected spans from a given section in a Wikipedia article. Some questions in QuAC are unanswerable with their given sections; the answer 'I don't know.' is used for such questions. CANARD was constructed by crowdsourcing question rewritings on Amazon Mechanical Turk, with several automatic and manual quality controls applied to ensure the quality of the data collection process. The dataset consists of 40,527 questions with different context lengths. More details are available in the EMNLP 2019 paper. The dataset is distributed under the CC BY-SA 4.0 license.
Provide a detailed description of the following dataset: CANARD
RegDB
RegDB is a dataset for visible-infrared re-identification, which handles cross-modality matching between daytime visible and night-time infrared images. The dataset contains images of 412 people, with 10 color and 10 thermal images per person.
Provide a detailed description of the following dataset: RegDB
Kennedy Space Center
**Kennedy Space Center** is a dataset for the classification of wetland vegetation at the Kennedy Space Center, Florida using hyperspectral imagery. Hyperspectral data were acquired over KSC on March 23, 1996 using JPL's Airborne Visible/Infrared Imaging Spectrometer.
Provide a detailed description of the following dataset: Kennedy Space Center
VLCS
VLCS is a benchmark for domain generalization. It comprises images from four source datasets, PASCAL VOC 2007 (V), LabelMe (L), Caltech-101 (C), and SUN09 (S), which share five object categories: bird, car, chair, dog, and person. Models are typically trained on three of the domains and evaluated on the held-out fourth.
Provide a detailed description of the following dataset: VLCS
CholecT40
CholecT40 is the first endoscopic dataset introduced to enable research on fine-grained action recognition in laparoscopic surgery. It consists of 40 videos of laparoscopic cholecystectomy surgery annotated with triplet information in the form <instrument, verb, target>. The annotations span 128 triplet classes composed from 6 classes of surgical instruments, 8 classes of action verbs, and 19 classes of surgical targets. The dataset is used as a benchmark for developing deep learning solutions that recognize surgical activities in the form of a triplet, and is the first surgical data science effort to replicate activity recognition at the same level as human-object interaction (HOI) in natural vision tasks. The parent dataset is [CholecT50](https://paperswithcode.com/dataset/cholect50).
Provide a detailed description of the following dataset: CholecT40
Cholec80
Cholec80 is an endoscopic video dataset containing 80 videos of cholecystectomy surgeries performed by 13 surgeons. The videos are captured at 25 fps and downsampled to 1 fps for processing. The whole dataset is labeled with the phase and tool presence annotations. The phases have been defined by a senior surgeon in Strasbourg hospital, France. Since the tools are sometimes hardly visible in the images and thus difficult to be recognized visually, a tool is defined as present in an image if at least half of the tool tip is visible.
Provide a detailed description of the following dataset: Cholec80
Multi-PIE
The **Multi-PIE** (Multi Pose, Illumination, Expressions) dataset consists of face images of 337 subjects taken under different pose, illumination and expressions. The pose range contains 15 discrete views, capturing a face profile-to-profile. Illumination changes were modeled using 19 flashlights located in different places of the room.
Provide a detailed description of the following dataset: Multi-PIE
The Pile
The Pile is an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality datasets combined together. Datasheet: [Datasheet for the Pile](https://paperswithcode.com/paper/datasheet-for-the-pile)
Provide a detailed description of the following dataset: The Pile
ECB+
The ECB+ corpus is an extension to the EventCorefBank (ECB, Bejan and Harabagiu, 2010). A newly added corpus component consists of 502 documents that belong to the 43 topics of the ECB but that describe different seminal events than those already captured in the ECB. All corpus texts were found through Google Search and were annotated with mentions of events and their times, locations, human and non-human participants, as well as with within- and cross-document event and entity coreference information. The 2012 version of annotation of the ECB corpus (Lee et al., 2012) was used as a starting point for re-annotation of the ECB according to the ECB+ annotation guideline. The major differences with respect to the 2012 version of annotation of the ECB are:

(a) five event components are annotated in text: actions (annotation tags starting with ACTION and NEG), times (tags starting with TIME), locations (tags starting with LOC), human participants (tags starting with HUMAN), and non-human participants (tags starting with NON_HUMAN);

(b) specific action classes and entity subtypes are distinguished for each of the five main event components, resulting in a total tagset of 30 annotation tags based on the ACE annotation guidelines (LDC 2008) and TimeML (Pustejovsky et al., 2003; Sauri et al., 2005);

(c) intra- and cross-document coreference relations between mentions of the five event components were established: the INTRA_DOC_COREF tag captures within-document coreference chains that do not participate in cross-document relations (within-document coreference was annotated by means of the CAT tool, Bartalesi et al., 2012); the CROSS_DOC_COREF tag indicates cross-document coreference relations created in the CROMER tool (Girardi et al., 2014); all coreference branches refer by means of relation target IDs to so-called TAG_DESCRIPTORS, pointing to human-friendly instance names (assigned by coders) and also to instance IDs;

(d) events are annotated from an "event-centric" perspective, i.e. annotation tags are assigned depending on the role a mention plays in an event (for more information see the ECB+ references).
Provide a detailed description of the following dataset: ECB+
OC
These images were generated using the UnityEyes simulator, after including essential eyeball physiology elements and modeling binocular vision dynamics. The images are annotated with head pose and gaze direction information, as well as 2D and 3D landmarks of the eye's most important features. Additionally, the images are divided into two classes denoting the status of the eye (Open for open eyes, Closed for closed eyes). This dataset was used to train a DNN model for detecting the drowsiness status of a driver. The dataset contains 1,704 training images, 4,232 testing images, and an additional 4,103 images for improvements.
Provide a detailed description of the following dataset: OC
S2ORC
S2ORC is a large corpus of 81.1M English-language academic papers spanning many academic disciplines. It provides rich metadata, paper abstracts, and resolved bibliographic references, as well as structured full text for 8.1M open-access papers. The full text is annotated with automatically detected inline mentions of citations, figures, and tables, each linked to its corresponding paper object. The corpus aggregates papers from hundreds of academic publishers and digital archives into a unified source, and constitutes the largest publicly available collection of machine-readable academic text to date.
Provide a detailed description of the following dataset: S2ORC
DSD100
**DSD100** is a dataset of 100 full-length music tracks of different styles, along with their isolated drums, bass, vocals, and other stems. DSD100 contains two folders: a training set ("train") composed of 50 songs and a test set ("test") composed of 50 songs. Supervised approaches should be trained on the training set and tested on both sets. For each file, the mixture corresponds to the sum of all the signals. All signals are stereophonic and encoded at 44.1 kHz.
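Because the mixture is defined as the sum of the stems, reconstruction can be sanity-checked directly. A minimal numpy sketch using synthetic stand-in stems (hypothetical random arrays, not actual DSD100 audio):

```python
import numpy as np

# Synthetic stand-ins for the four stereo stems (samples x 2 channels);
# real DSD100 stems are full-length 44.1 kHz stereo recordings.
rng = np.random.default_rng(0)
stems = {name: rng.standard_normal((8, 2)) for name in ("drums", "bass", "vocals", "other")}

# Per the dataset description, the mixture is the sum of all stem signals.
mixture = sum(stems.values())

# Consequently, subtracting any three stems from the mixture recovers the fourth.
recovered = mixture - stems["drums"] - stems["bass"] - stems["other"]
assert np.allclose(recovered, stems["vocals"])
```

The same identity is what makes "oracle" upper bounds easy to compute when evaluating source-separation systems on this dataset.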
Provide a detailed description of the following dataset: DSD100
WebEdit
**WebEdit** is a dataset for fact-based text editing, constructed from the WebNLG dataset.
Provide a detailed description of the following dataset: WebEdit
RotoEdit
**RotoEdit** is a dataset for fact-based text editing, constructed from the RotoWire dataset.
Provide a detailed description of the following dataset: RotoEdit
PhysioNet Challenge 2020
# Data

The data for this Challenge are from multiple sources:

- CPSC Database and CPSC-Extra Database
- INCART Database
- PTB and PTB-XL Database
- The Georgia 12-lead ECG Challenge (G12EC) Database
- Undisclosed Database

The first source is the public (CPSC Database) and unused data (CPSC-Extra Database) from the China Physiological Signal Challenge in 2018 (CPSC2018), held during the 7th International Conference on Biomedical Engineering and Biotechnology in Nanjing, China. The unused data from the CPSC2018 is NOT the test data from the CPSC2018; the CPSC2018 test data is included in the final private database that has been sequestered. This training set consists of two sets of 6,877 (male: 3,699; female: 3,178) and 3,453 (male: 1,843; female: 1,610) 12-lead ECG recordings lasting from 6 seconds to 60 seconds, each sampled at 500 Hz.

The second source is the public St Petersburg INCART 12-lead Arrhythmia Database. This database consists of 74 annotated recordings extracted from 32 Holter records. Each record is 30 minutes long and contains 12 standard leads, each sampled at 257 Hz.

The third source, from the Physikalisch-Technische Bundesanstalt (PTB), comprises two public databases: the PTB Diagnostic ECG Database and PTB-XL, a large publicly available electrocardiography dataset. The PTB database contains 516 records (male: 377, female: 139), each sampled at 1000 Hz. PTB-XL contains 21,837 clinical 12-lead ECGs (male: 11,379, female: 10,458) of 10 seconds length with a sampling frequency of 500 Hz.

The fourth source is a Georgia database which represents a unique demographic of the Southeastern United States. This training set contains 10,344 12-lead ECGs (male: 5,551, female: 4,793) of 10 seconds length with a sampling frequency of 500 Hz.

The fifth source is an undisclosed American database that is geographically distinct from the Georgia database. This source contains 10,000 ECGs (all retained as test data).

All data is provided in WFDB format. Each ECG recording has a binary MATLAB v4 file (see page 27) for the ECG signal data and a text file in WFDB header format describing the recording and patient attributes, including the diagnosis (the labels for the recording). The binary files can be read using the `load` function in MATLAB and the `scipy.io.loadmat` function in Python; please see our baseline models for examples of loading the data. The first line of the header provides the total number of leads and the total number of samples (points) per lead. The following lines describe how each lead was saved, and the last lines provide information on demographics and diagnosis. Below is an example header file, A0001.hea:

```
A0001 12 500 7500 05-Feb-2020 11:39:16
A0001.mat 16+24 1000/mV 16 0 28 -1716 0 I
A0001.mat 16+24 1000/mV 16 0 7 2029 0 II
A0001.mat 16+24 1000/mV 16 0 -21 3745 0 III
A0001.mat 16+24 1000/mV 16 0 -17 3680 0 aVR
A0001.mat 16+24 1000/mV 16 0 24 -2664 0 aVL
A0001.mat 16+24 1000/mV 16 0 -7 -1499 0 aVF
A0001.mat 16+24 1000/mV 16 0 -290 390 0 V1
A0001.mat 16+24 1000/mV 16 0 -204 157 0 V2
A0001.mat 16+24 1000/mV 16 0 -96 -2555 0 V3
A0001.mat 16+24 1000/mV 16 0 -112 49 0 V4
A0001.mat 16+24 1000/mV 16 0 -596 -321 0 V5
A0001.mat 16+24 1000/mV 16 0 -16 -3112 0 V6
#Age: 74
#Sex: Male
#Dx: 426783006
#Rx: Unknown
#Hx: Unknown
#Sx: Unknown
```

From the first line, we see that the recording number is A0001 and the recording file is A0001.mat. The recording has 12 leads, each recorded at a 500 Hz sampling frequency, and contains 7500 samples. From the next 12 lines, we see that each signal was written at 16 bits with an offset of 24 bits, the amplitude resolution is 1000 with units in mV, the resolution of the analog-to-digital converter (ADC) used to digitize the signal is 16 bits, and the baseline value corresponding to 0 physical units is 0. The first value of the signal, the checksum, and the lead name are also included for each signal. From the final 6 lines, we see that the patient is a 74-year-old male with a diagnosis (Dx) of 426783006; the medical prescription (Rx), history (Hx), and symptom or surgery (Sx) are unknown.

Each ECG recording has one or more labels covering different types of abnormalities, given as SNOMED-CT codes. The full list of diagnoses for the challenge has been posted as a 3-column CSV file: long-form description, corresponding SNOMED-CT code, abbreviation. Although these descriptions apply to all training data, there may be fewer classes in the test data, and in different proportions; however, every class in the test data will be represented in the training data.
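As a sketch of working with the header layout described above, the snippet below parses a synthetic two-lead header laid out in the same format (field meanings are taken from the description; `parse_header` is a hypothetical helper, not part of the official Challenge code):

```python
# A synthetic two-lead header in the WFDB-style layout described above
# (shortened for brevity; real Challenge headers list all 12 leads).
EXAMPLE_HEADER = """\
A0001 2 500 7500 05-Feb-2020 11:39:16
A0001.mat 16+24 1000/mV 16 0 28 -1716 0 I
A0001.mat 16+24 1000/mV 16 0 7 2029 0 II
#Age: 74
#Sex: Male
#Dx: 426783006
"""

def parse_header(text):
    """Pull record info, lead names, and '#Key: value' metadata from a header."""
    lines = text.strip().splitlines()
    name, n_leads, fs, n_samples = lines[0].split()[:4]
    n_leads = int(n_leads)
    return {
        "record": name,
        "fs": int(fs),
        "n_samples": int(n_samples),
        # the last token of each signal-specification line is the lead name
        "leads": [ln.split()[-1] for ln in lines[1:1 + n_leads]],
        # '#Key: value' comment lines carry demographics and diagnoses
        "meta": dict(ln[1:].split(": ", 1) for ln in lines if ln.startswith("#")),
    }

h = parse_header(EXAMPLE_HEADER)
# The signal itself would then be loaded from the matching .mat file,
# e.g. via scipy.io.loadmat("A0001.mat"), as noted in the description.
```

The Dx entry in the parsed metadata is the comma-separated list of SNOMED-CT codes that serves as the label set for the recording.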
Provide a detailed description of the following dataset: PhysioNet Challenge 2020
WebVision
The WebVision dataset is designed to facilitate research on learning visual representations from noisy web data. It is a large-scale web image dataset that contains more than 2.4 million images crawled from the Flickr website and Google Images search. The same 1,000 concepts as the ILSVRC 2012 dataset are used for querying images, so that a range of existing approaches can be directly investigated and compared to models trained on ILSVRC 2012, and so that the dataset bias issue can be studied in a large-scale scenario. The textual information accompanying those images (e.g., caption, user tags, or description) is also provided as additional meta information. A validation set containing 50,000 images (50 images per category) is provided to facilitate algorithmic development.
Provide a detailed description of the following dataset: WebVision
NASA Worldview
In this competition you will be identifying regions in satellite images that contain certain cloud formations, with label names: Fish, Flower, Gravel, Sugar. For each image in the test set, you must segment the regions of each cloud formation label. Each image has at least one cloud formation, and can contain up to all four. The images were downloaded from NASA Worldview. Three regions, spanning 21 degrees longitude and 14 degrees latitude, were chosen. The true-color images were taken from two polar-orbiting satellites, TERRA and AQUA, each of which passes a specific region once a day. Due to the small footprint of the imager (MODIS) on board these satellites, an image might be stitched together from two orbits; the remaining area, which has not been covered by two succeeding orbits, is marked black. The labels were created in a crowd-sourcing activity at the Max Planck Institute for Meteorology in Hamburg, Germany, and the Laboratoire de météorologie dynamique in Paris, France. A team of 68 scientists identified areas of cloud patterns in each image, and each image was labeled by approximately 3 different scientists. Ground truth was determined by the union of the areas marked by all labelers for that image, after removing any black-band area. The segmentation for each cloud formation label in an image is encoded into a single row, even if there are several non-contiguous areas of the same formation in the image. If there is no area of a certain cloud type in an image, the corresponding EncodedPixels prediction should be left blank. You can read more about the encoding standard on the Evaluation page.

Files:
- train.csv - the run-length encoded segmentations for each image-label pair in train_images
- train_images.zip - folder of training images
- test_images.zip - folder of test images; your task is to predict the segmentation masks of each of the 4 cloud types (labels) for each image. IMPORTANT: your prediction masks should be scaled down to 350 x 525 px.
- sample_submission.csv - a sample submission file in the correct format
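The exact encoding standard is defined on the competition's Evaluation page; assuming the common Kaggle convention (column-major pixel order, 1-indexed "start length" pairs), a minimal encoder/decoder pair might look like this:

```python
import numpy as np

def rle_encode(mask):
    """Encode a binary mask as 1-indexed, column-major 'start length' pairs."""
    pixels = mask.flatten(order="F")  # column-major, as in Kaggle RLE
    padded = np.concatenate([[0], pixels, [0]])
    # 1-indexed positions where runs of 1s start and end.
    changes = np.flatnonzero(padded[1:] != padded[:-1]) + 1
    starts, ends = changes[::2], changes[1::2]
    return " ".join(f"{s} {e - s}" for s, e in zip(starts, ends))

def rle_decode(rle, shape):
    """Inverse of rle_encode for a (height, width) mask; blank string -> empty mask."""
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    nums = list(map(int, rle.split()))
    for start, length in zip(nums[::2], nums[1::2]):
        mask[start - 1:start - 1 + length] = 1
    return mask.reshape(shape, order="F")

# Round-trip on a toy 3x4 mask with a single 2-pixel "cloud".
m = np.zeros((3, 4), dtype=np.uint8)
m[1, 1:3] = 1
assert rle_decode(rle_encode(m), m.shape).tolist() == m.tolist()
```

Note that a single row's EncodedPixels string covers every non-contiguous area of that formation in the image, which is why the encoder emits one pair per run rather than per region.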
Provide a detailed description of the following dataset: NASA Worldview
CDD Dataset (season-varying)
The **CDD** (season-varying Change Detection Dataset) consists of co-registered pairs of remote sensing images of the same area captured in different seasons, with pixel-level change masks as ground truth. Source: [CHANGE DETECTION IN REMOTE SENSING IMAGES USING CONDITIONAL ADVERSARIAL NETWORKS](https://pdfs.semanticscholar.org/ae15/e5ccccaaff44ab542003386349ef1d3b7511.pdf)
Provide a detailed description of the following dataset: CDD Dataset (season-varying)
LabelMe
**LabelMe** database is a large collection of images with ground truth labels for object detection and recognition. The annotations come from two different sources, including the LabelMe online annotation tool.
Provide a detailed description of the following dataset: LabelMe
ICT-3DHP
ICT-3DHP is collected using the Microsoft Kinect sensor and contains RGB images and depth maps of about 14k frames, divided into 10 sequences. The image resolution is 640 × 480 pixels. A hardware sensor (Polhemus Fastrack) is used to generate the ground truth annotation. The device is placed on a white cap worn by each subject, visible in both RGB and depth frames.
Provide a detailed description of the following dataset: ICT-3DHP
ETH
**ETH** is a dataset for pedestrian detection. The testing set contains 1,804 images in three video clips. The dataset was captured from a stereo rig mounted on a car, with a resolution of 640 x 480 (bayered) and a framerate of 13--14 FPS.
Provide a detailed description of the following dataset: ETH
UCY
The **UCY** dataset consists of real pedestrian trajectories with rich multi-human interaction scenarios captured at 2.5 Hz (Δt=0.4s). It is composed of three sequences (Zara01, Zara02, and UCY), taken in public spaces from a top view.
Provide a detailed description of the following dataset: UCY
IAM
The **IAM** database contains 13,353 images of handwritten lines of text created by 657 writers. The texts those writers transcribed are from the Lancaster-Oslo/Bergen Corpus of British English. In total, the database comprises 1,539 handwritten pages and 115,320 words, and is categorized as part of the modern collection. The database is labeled at the sentence, line, and word levels.
Provide a detailed description of the following dataset: IAM
WebKB
**WebKB** is a dataset of web pages from the computer science departments of various universities. Its 4,518 web pages are categorized into 6 imbalanced categories (Student, Faculty, Staff, Department, Course, Project). Additionally, there is an "Other" miscellanea category that is not comparable to the rest.
Provide a detailed description of the following dataset: WebKB
MemeTracker
The Memetracker corpus contains articles from mainstream media and blogs from August 1 to October 31, 2008 with about 1 million documents per day. It has 10,967 hyperlink cascades among 600 media sites. Source: [Marked Temporal Dynamics Modeling based on Recurrent Neural Network](https://arxiv.org/abs/1701.03918) Image Source: [http://blog.fabric.ch/index.php?/archives/292-Memetracker-Tracking-News-Phrases-over-the-Web.html](http://blog.fabric.ch/index.php?/archives/292-Memetracker-Tracking-News-Phrases-over-the-Web.html)
Provide a detailed description of the following dataset: MemeTracker
English Web Treebank
**English Web Treebank** is a dataset containing 254,830 word-level tokens and 16,624 sentence-level tokens of webtext in 1174 files annotated for sentence- and word-level tokenization, part-of-speech, and syntactic structure. The data is roughly evenly divided across five genres: weblogs, newsgroups, email, reviews, and question-answers. The files were manually annotated following the sentence-level tokenization guidelines for web text and the word-level tokenization guidelines developed for English treebanks in the DARPA GALE project. Only text from the subject line and message body of posts, articles, messages and question-answers were collected and annotated.
Provide a detailed description of the following dataset: English Web Treebank
Silhouettes
The Caltech 101 **Silhouettes** dataset consists of 4,100 training samples, 2,264 validation samples and 2,307 test samples. The dataset is based on CalTech 101 image annotations: each image in the CalTech 101 data set includes a high-quality polygon outline of the primary object in the scene. To create the **CalTech 101 Silhouettes** data set, the authors center and scale each outline and render it on a DxD pixel image plane, as a filled black polygon on a white background. Many object classes exhibit silhouettes with distinctive class-specific features. A relatively small number of classes, like soccer ball, pizza, stop sign, and yin-yang, are indistinguishable based on shape, but have been left in the data.
Provide a detailed description of the following dataset: Silhouettes
ETH SfM
The **ETH SfM** (structure-from-motion) dataset is a dataset for 3D Reconstruction. The benchmark investigates how different methods perform in terms of building a 3D model from a set of available 2D images. Source: [SOSNet: Second Order Similarity Regularization forLocal Descriptor Learning](https://arxiv.org/abs/1904.05019) Image Source: [https://cvg.ethz.ch/research/symmetries-in-sfm/](https://cvg.ethz.ch/research/symmetries-in-sfm/)
Provide a detailed description of the following dataset: ETH SfM
INRIA Person
The **INRIA Person** dataset is a dataset of images of persons used for pedestrian detection. It consists of 614 person detections for training and 288 for testing.
Provide a detailed description of the following dataset: INRIA Person
INRIA-Horse
The **INRIA-Horse** dataset consists of 170 horse images and 170 images without horses. All horses in all images are annotated with a bounding-box. The main challenges it offers are clutter, intra-class shape variability, and scale changes. The horses are mostly unoccluded, taken from approximately the side viewpoint, and face the same direction. Source: [Dynamical And-Or Graph Learning for Object Shape Modeling and Detection](https://arxiv.org/abs/1502.00741) Image Source: [http://calvin-vision.net/datasets/inria-horses/](http://calvin-vision.net/datasets/inria-horses/)
Provide a detailed description of the following dataset: INRIA-Horse
INRIA Aerial Image Labeling
The **INRIA Aerial Image Labeling** dataset is comprised of 360 RGB tiles of 5000×5000px with a spatial resolution of 30cm/px on 10 cities across the globe. Half of the cities are used for training and are associated to a public ground truth of building footprints. The rest of the dataset is used only for evaluation with a hidden ground truth. The dataset was constructed by combining public domain imagery and public domain official building footprints.
Provide a detailed description of the following dataset: INRIA Aerial Image Labeling
Office-Caltech-10
**Office-Caltech-10** is a standard benchmark for domain adaptation, which consists of the Office 10 and Caltech 10 datasets. It contains the 10 overlapping categories between the Office dataset and the Caltech256 dataset. SURF BoW histogram features, vector-quantized to 800 dimensions, are also available for this dataset. Source: [Impact of ImageNet Model Selection on Domain Adaptation](https://arxiv.org/abs/2002.02559) Image Source: [https://arxiv.org/abs/1409.5241](https://arxiv.org/abs/1409.5241)
Provide a detailed description of the following dataset: Office-Caltech-10
Poser
The **Poser** dataset is a dataset for pose estimation which consists of 1927 training and 418 test images. These images are synthetically generated and tuned to unimodal predictions. The images were generated using the Poser software package. Source: [Overlapping Cover Local Regression Machines](https://arxiv.org/abs/1701.01218) Image Source: [https://www.researchgate.net/figure/Test-data-used-in-the-user-study-Left-the-pose-pictures-shown-to-the-user-Middle-the_fig17_221847487](https://www.researchgate.net/figure/Test-data-used-in-the-user-study-Left-the-pose-pictures-shown-to-the-user-Middle-the_fig17_221847487)
Provide a detailed description of the following dataset: Poser
UKP
The **UKP** Argument Annotated Essays corpus consists of argument annotated persuasive essays including annotations of argument components and argumentative relations. Source: [https://www.informatik.tu-darmstadt.de/ukp/research_6/data/argumentation_mining_1/argument_annotated_essays/index.en.jsp](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/argumentation_mining_1/argument_annotated_essays/index.en.jsp) Image Source: [https://www.aclweb.org/anthology/C14-1142.pdf](https://www.aclweb.org/anthology/C14-1142.pdf)
Provide a detailed description of the following dataset: UKP
ETH BIWI Walking Pedestrians
The BIWI Walking Pedestrians dataset consists of walking pedestrians in busy scenarios filmed from a bird's eye view. Source: [https://icu.ee.ethz.ch/research/datsets.html](https://icu.ee.ethz.ch/research/datsets.html) Image Source: [https://icu.ee.ethz.ch/research/datsets.html](https://icu.ee.ethz.ch/research/datsets.html)
Provide a detailed description of the following dataset: ETH BIWI Walking Pedestrians
WASABI
The **WASABI** Song Corpus is a large corpus of songs enriched with metadata extracted from music databases on the Web and resulting from the processing of song lyrics and from audio analysis. Given that lyrics encode an important part of the semantics of a song, the authors focus on describing the methods they proposed to extract relevant information from the lyrics, such as structure segmentation, topics, explicitness of the lyrics content, salient passages of a song, and the emotions conveyed. The corpus contains 1.73M songs with lyrics (1.41M unique lyrics) annotated at different levels with the output of the above-mentioned methods. These corpus labels and the provided methods can be exploited by music search engines and music professionals (e.g. journalists, radio presenters) to better handle large collections of lyrics, allowing intelligent browsing, categorization, and segmentation-based recommendation of songs.
Provide a detailed description of the following dataset: WASABI
Multilingual Reuters
The **Multilingual Reuters** Collection dataset comprises over 11,000 articles from six classes in five languages, i.e., English (E), French (F), German (G), Italian (I), and Spanish (S). Source: [Multi-source Heterogeneous Domain Adaptation with Conditional Weighting Adversarial Network](https://arxiv.org/abs/2008.02714) Image Source: [https://papers.nips.cc/paper/2009/file/f79921bbae40a577928b76d2fc3edc2a-Paper.pdf](https://papers.nips.cc/paper/2009/file/f79921bbae40a577928b76d2fc3edc2a-Paper.pdf)
Provide a detailed description of the following dataset: Multilingual Reuters