dataset_name: string (lengths 2–128)
description: string (lengths 1–9.7k)
prompt: string (lengths 59–185)
WMT 2016 IT
The IT Translation Task is a shared task introduced in the First Conference on Machine Translation. Compared to WMT 2016 News, this task brought several novelties to WMT: * 4 out of the 7 languages of the IT task are new in WMT, * adaptation to the IT domain with its specifics such as frequent named entities (mostly menu items, names of products and companies) and technical jargon, * adaptation to the translation of answers in a helpdesk service setting (many of the sentences are instructions with imperative verbs, which are very rare in the News translation task). The test set consisted of 1000 answers from Batch 3 of the QTLeap Corpus. The in-domain training data contained 2000 answers from Batches 1 and 2, as well as localization files from several open-source projects (LibreOffice, KDE, VLC) and bilingual dictionaries of IT-related terms extracted from Wikipedia. The out-of-domain training data contained all the corpora from WMT 2016 News, plus the PaCo2-EuEn Basque-English corpus and SETimes with Bulgarian-English parallel sentences. “Constrained” systems were restricted to using only the training data provided by the organizers. The task was evaluated on the following language pairs: * English → Bulgarian * English → Czech * English → German * English → Spanish * English → Basque * English → Dutch * English → Portuguese
Provide a detailed description of the following dataset: WMT 2016 IT
WMT 2016 Biomedical
The Biomedical Translation Shared Task was first introduced at the First Conference on Machine Translation. The task aims to evaluate systems for the translation of biomedical titles and abstracts from scientific publications. The data includes three language pairs (English ↔ Portuguese, English ↔ Spanish, English ↔ French) and two sub-domains: biological sciences and health sciences. The training data consists mainly of the Scielo corpus, a parallel collection of scientific publications composed of titles, abstracts, or both, retrieved from the Scielo database. For the Scielo corpus, parallel documents are provided for all language pairs in both sub-domains, except for English ↔ French, where only health was considered, as there were not enough parallel documents available for biology in that pair. The training data was aligned using the GMA alignment tool. Additionally, a corpus of parallel titles from MEDLINEⓇ for all three language pairs was provided, as well as monolingual documents for the four languages, retrieved from the Scielo database. These consist of documents in the Scielo database which have no corresponding document in another language. The test set consisted of 500 documents (title and abstract) for each of the two directions of each language pair. None of the test documents was included in the training data and there is no overlap of documents between the test sets for any language pair, translation direction or sub-domain. Source: [http://www.statmt.org/wmt16/index.html](http://www.statmt.org/wmt16/index.html) Image Source: [https://www.aclweb.org/anthology/W16-2301.pdf](https://www.aclweb.org/anthology/W16-2301.pdf)
Provide a detailed description of the following dataset: WMT 2016 Biomedical
XSum
The Extreme Summarization (**XSum**) dataset is a dataset for the evaluation of abstractive single-document summarization systems. The goal is to create a short, one-sentence news summary answering the question “What is the article about?”. The dataset consists of 226,711 news articles accompanied by a one-sentence summary. The articles are collected from BBC articles (2010 to 2017) and cover a wide variety of domains (e.g., News, Politics, Sports, Weather, Business, Technology, Science, Health, Family, Education, Entertainment and Arts). The official random split contains 204,045 (90%), 11,332 (5%) and 11,334 (5%) documents in the training, validation and test sets, respectively. Source: [https://arxiv.org/pdf/1808.08745.pdf](https://arxiv.org/pdf/1808.08745.pdf) Image Source: [https://arxiv.org/pdf/1808.08745.pdf](https://arxiv.org/pdf/1808.08745.pdf)
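A minimal sketch of checking the official split sizes, assuming the Hugging Face `datasets` library and the hub id `xsum` (both are assumptions, not part of the dataset release itself):

```python
# Hedged sketch: assumes the `datasets` library and the "xsum" hub id.
from datasets import load_dataset

xsum = load_dataset("xsum")  # each record has "document" and "summary" fields
for split, expected in [("train", 204045), ("validation", 11332), ("test", 11334)]:
    print(split, len(xsum[split]), "expected:", expected)
```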
Provide a detailed description of the following dataset: XSum
WMT 2014
**WMT 2014** is a collection of datasets used in shared tasks of the Ninth Workshop on Statistical Machine Translation. The workshop featured four tasks: * a news translation task, * a quality estimation task, * a metrics task, * a medical text translation task.
Provide a detailed description of the following dataset: WMT 2014
WMT 2014 Medical
The Medical Translation Task of WMT 2014 addresses the problem of domain-specific and genre-specific machine translation. The task is split into two subtasks: summary translation, focused on translation of sentences from summaries of medical articles, and query translation, focused on translation of queries entered by users into medical information search engines. Both subtasks included translation between English and Czech, German, and French, in both directions. Source: [https://www.aclweb.org/anthology/W14-3302.pdf](https://www.aclweb.org/anthology/W14-3302.pdf)
Provide a detailed description of the following dataset: WMT 2014 Medical
WMT 2015
**WMT 2015** is a collection of datasets used in shared tasks of the Tenth Workshop on Statistical Machine Translation. The workshop featured five tasks: * a news translation task, * a metrics task, * a tuning task, * a quality estimation task, * an automatic post-editing task.
Provide a detailed description of the following dataset: WMT 2015
WMT 2015 News
News translation is a recurring WMT task. The test set is a collection of parallel corpora consisting of about 1500 English sentences translated into 5 languages (Czech, German, Finnish, French, Russian) and an additional 1500 sentences from each of the 5 languages translated to English. The sentences are taken from newspaper articles for each language pair, except for French, where the test set was drawn from user-generated comments on news articles (from the Guardian and Le Monde). The translation was done by professional translators. The training data consists of parallel corpora to train translation models, monolingual corpora to train language models and development sets for tuning. Some training corpora were identical to those of WMT 2014 (Europarl, United Nations, French-English 10⁹ corpus, CzEng, Common Crawl, Russian-English parallel data provided by Yandex, Wikipedia Headlines provided by CMU) and some were updated (News Commentary, monolingual news data). Additionally, the Finnish Europarl and Finnish-English Wikipedia Headline corpus were added. Source: [https://paperswithcode.com/paper/findings-of-the-2016-conference-on-machine/](https://paperswithcode.com/paper/findings-of-the-2016-conference-on-machine/) Image Source: [https://www.aclweb.org/anthology/W15-3001.pdf](https://www.aclweb.org/anthology/W15-3001.pdf)
Provide a detailed description of the following dataset: WMT 2015 News
SHAPES
**SHAPES** is a dataset of synthetic images designed to benchmark systems for understanding of spatial and logical relations among multiple objects. The dataset consists of complex questions about arrangements of colored shapes. The questions are built around compositions of concepts and relations, e.g. Is there a red shape above a circle? or Is a red shape blue?. Questions contain between two and four attributes, object types, or relationships. There are 244 questions and 15,616 images in total, with every question having both a yes and a no answer (each with a corresponding supporting image), which eliminates the risk of learning answer biases. Each image is a 30×30 RGB image depicting a 3×3 grid of objects. Each object is characterized by shape (circle, square, triangle), colour (red, green, blue) and size (small, big).
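To make the layout concrete, here is a toy sketch that renders a SHAPES-style 30×30 RGB image with a 3×3 grid of 10×10 cells; the shape-drawing rules are illustrative assumptions, not the dataset's actual renderer:

```python
# Toy illustration of the SHAPES layout: a 3x3 grid of 10x10 cells inside a
# 30x30 RGB image. The drawing rules are crude approximations.
import numpy as np

COLORS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def draw_cell(img, row, col, shape, color, size):
    """Draw one object into the 10x10 cell at grid position (row, col)."""
    r0, c0 = row * 10, col * 10
    half = 3 if size == "small" else 5
    cy, cx = r0 + 5, c0 + 5  # cell centre
    for y in range(r0, r0 + 10):
        for x in range(c0, c0 + 10):
            dy, dx = abs(y - cy), abs(x - cx)
            if shape == "square" and dy < half and dx < half:
                img[y, x] = COLORS[color]
            elif shape == "circle" and dy * dy + dx * dx < half * half:
                img[y, x] = COLORS[color]
            elif shape == "triangle" and 0 <= y - cy < half and dx <= y - cy:
                img[y, x] = COLORS[color]

img = np.zeros((30, 30, 3), dtype=np.uint8)
draw_cell(img, 0, 1, "circle", "blue", "big")   # circle in the top row
draw_cell(img, 1, 1, "square", "red", "small")  # red shape below it
```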
Provide a detailed description of the following dataset: SHAPES
AG’s Corpus
Antonio Gulli’s corpus of news articles is a collection of more than 1 million news articles. The articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July 2004. The dataset is provided to the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), XML, data compression, data streaming, and any other non-commercial activity. A subset of this corpus, AG News, consisting of the 4 largest classes, is a popular topic classification dataset.
Provide a detailed description of the following dataset: AG’s Corpus
QUASAR-S
**QUASAR-S** is a large-scale dataset aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. It consists of 37,362 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities. Source: [Quasar: Datasets for Question Answering by Search and Reading](https://paperswithcode.com/paper/quasar-datasets-for-question-answering-by/) Image Source: [Quasar: Datasets for Question Answering by Search and Reading](https://paperswithcode.com/paper/quasar-datasets-for-question-answering-by/)
Provide a detailed description of the following dataset: QUASAR-S
QUASAR-T
**QUASAR-T** is a large-scale dataset aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. It consists of 43,013 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. The answers to these questions are free-form spans of text, though most are noun phrases.
Provide a detailed description of the following dataset: QUASAR-T
MLDoc
**Multilingual Document Classification Corpus** (**MLDoc**) is a cross-lingual document classification dataset covering English, German, French, Spanish, Italian, Russian, Japanese and Chinese. It is a subset of the Reuters Corpus Volume 2 selected according to the following design choices: * uniform class coverage: same number of examples for each class and language, * official train / development / test split: for each language, training sets of different sizes (1K, 2K, 5K and 10K stories), a development set (1K) and a test corpus (4K) are provided (with the exception of Spanish and Russian, which have 9,458 and 5,216 training documents respectively).
Provide a detailed description of the following dataset: MLDoc
WMT 2018
**WMT 2018** is a collection of datasets used in shared tasks of the Third Conference on Machine Translation. The conference builds on a series of twelve previous annual workshops and conferences on Statistical Machine Translation. The conference featured seven shared tasks: * a news translation task, * a biomedical translation task, * a multimodal machine translation task, * a metrics task, * a quality estimation task, * an automatic post-editing task, * a parallel corpus filtering task.
Provide a detailed description of the following dataset: WMT 2018
WMT 2018 News
News translation is a recurring WMT task. The test set is a collection of parallel corpora consisting of about 1500 English sentences translated into 7 languages (Chinese, Czech, Estonian, German, Finnish, Russian, Turkish) and an additional 1500 sentences from each of the 7 languages translated to English. The sentences were selected from dozens of news websites and translated by professional translators. The training data consists of parallel corpora to train translation models, monolingual corpora to train language models and development sets for tuning. Some training corpora were identical to those of WMT 2017 (Europarl, Common Crawl, SETIMES2, Russian-English parallel data provided by Yandex, Wikipedia Headlines provided by CMU) and some were updated (United Nations, CzEng v1.7, News Commentary v13, monolingual news data). Additionally, the EU Press Release parallel corpus for German, Finnish and Estonian was added.
Provide a detailed description of the following dataset: WMT 2018 News
ArxivPapers
The **ArxivPapers** dataset is an unlabelled collection of over 104K papers related to machine learning and published on arXiv.org between 2007 and 2020. The dataset includes around 94K papers (for which LaTeX source code is available) in a structured form in which each paper is split into a title, abstract, sections, paragraphs and references. Additionally, the dataset contains over 277K tables extracted from the LaTeX papers. Due to the papers' licenses, the dataset is published as metadata together with an open-source pipeline that can be used to obtain and convert the papers.
Provide a detailed description of the following dataset: ArxivPapers
SegmentedTables
The **SegmentedTables** dataset is a collection of almost 2,000 tables extracted from 352 machine learning papers. Each table consists of rich text content, layout and caption. Tables are annotated with types (leaderboard, ablation, irrelevant) and cells of relevant tables are annotated with semantic roles (such as “paper model”, “competing model”, “dataset”, “metric”). Due to the licenses of the source papers, the dataset is published as metadata, with all annotations and an open-source pipeline that can be used to extract the tables. Source: [AxCell: Automatic Extraction of Results from Machine Learning Papers](https://paperswithcode.com/paper/axcell-automatic-extraction-of-results-from) Image Source: [AxCell: Automatic Extraction of Results from Machine Learning Papers](https://paperswithcode.com/paper/axcell-automatic-extraction-of-results-from)
Provide a detailed description of the following dataset: SegmentedTables
LinkedResults
The **LinkedResults** dataset contains around 1,600 results capturing the performance of machine learning models from the tables of 239 papers. All tables come from a subset of the SegmentedTables dataset. Each result is a tuple of the form (task, dataset, metric name, metric value) and is linked to the particular table, row and cell it originates from.
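A minimal sketch of one record as described above; the field names for the provenance link (`paper_id`, `table_id`, `row`, `cell`) are illustrative assumptions, not the dataset's actual schema:

```python
# One LinkedResults record as a dataclass; provenance field names are assumed.
from dataclasses import dataclass

@dataclass
class LinkedResult:
    task: str           # e.g. "Image Classification"
    dataset: str        # e.g. "CIFAR-10"
    metric_name: str    # e.g. "Accuracy"
    metric_value: float
    paper_id: str       # provenance: which paper ...
    table_id: str       # ... which table ...
    row: int            # ... and which row/cell the value came from
    cell: int

result = LinkedResult("Image Classification", "CIFAR-10", "Accuracy",
                      96.5, "paper-0042", "table_03", 4, 2)
```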
Provide a detailed description of the following dataset: LinkedResults
PWC Leaderboards
The **Papers with Code Leaderboards** dataset is a collection of over 5,000 results capturing performance of machine learning models. Each result is a tuple of form (task, dataset, metric name, metric value). The data was collected using the Papers with Code review interface. Source: [AxCell: Automatic Extraction of Results from Machine Learning Papers](https://paperswithcode.com/paper/axcell-automatic-extraction-of-results-from) Image Source: [AxCell: Automatic Extraction of Results from Machine Learning Papers](https://paperswithcode.com/paper/axcell-automatic-extraction-of-results-from)
Provide a detailed description of the following dataset: PWC Leaderboards
SKU110K
The **SKU110K** dataset provides 11,762 images with more than 1.7 million annotated bounding boxes captured in densely packed scenarios, including 8,233 images for training, 588 images for validation, and 2,941 images for testing. There are around 1,733,678 instances in total. The images are collected from thousands of supermarket stores and are of various scales, viewing angles, lighting conditions, and noise levels. All the images are resized into a resolution of one megapixel. Most of the instances in the dataset are tightly packed and typically of a certain orientation in the range of [−15°, 15°]. Source: [Rethinking Object Detection in Retail Stores](https://arxiv.org/abs/2003.08230) Image Source: [https://github.com/eg4000/SKU110K_CVPR19](https://github.com/eg4000/SKU110K_CVPR19)
Provide a detailed description of the following dataset: SKU110K
UBIRIS.v2
The **UBIRIS.v2** iris dataset contains 11,102 iris images from 261 subjects, with 10 images per subject. The images were captured under unconstrained conditions (at-a-distance, on-the-move and at visible wavelengths), with realistic noise factors. Source: [Constrained Design of Deep Iris Networks](https://arxiv.org/abs/1905.09481) Image Source: [https://arxiv.org/pdf/1905.09481.pdf](https://arxiv.org/pdf/1905.09481.pdf)
Provide a detailed description of the following dataset: UBIRIS.v2
VIVA
The **VIVA** challenge’s dataset is a multimodal dynamic hand gesture dataset specifically designed with difficult settings of cluttered background, volatile illumination, and frequent occlusion for studying natural human activities in real-world driving settings. This dataset was captured using a Microsoft Kinect device, and contains 885 intensity and depth video sequences of 19 different dynamic hand gestures performed by 8 subjects inside a vehicle. Source: [Short-Term Temporal Convolutional Networks for Dynamic Hand Gesture Recognition](https://arxiv.org/abs/2001.05833) Image Source: [http://www.site.uottawa.ca/research/viva/projects/hand_detection/index.html](http://www.site.uottawa.ca/research/viva/projects/hand_detection/index.html)
Provide a detailed description of the following dataset: VIVA
ITOP
The **ITOP** dataset consists of 40K training and 10K testing depth images for each of the front-view and top-view tracks. This dataset contains depth images with 20 actors who perform 15 sequences each and is recorded by two Asus Xtion Pro cameras. The ground-truth of this dataset is the 3D coordinates of 15 body joints. Source: [V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map](https://arxiv.org/abs/1711.07399) Image Source: [https://www.youtube.com/watch?v=4gPI-GOf9wg](https://www.youtube.com/watch?v=4gPI-GOf9wg)
Provide a detailed description of the following dataset: ITOP
Dayton
The **Dayton** dataset is a dataset for ground-to-aerial (or aerial-to-ground) image translation, or cross-view image synthesis. It contains images of road views and aerial views of roads. There are 76,048 images in total and the train/test split is 55,000/21,048. The images in the original dataset have 354×354 resolution. Source: [Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation](https://arxiv.org/abs/2002.01048) Image Source: [https://arxiv.org/abs/1912.06112](https://arxiv.org/abs/1912.06112)
Provide a detailed description of the following dataset: Dayton
AOLP
The application-oriented license plate (**AOLP**) benchmark database has 2049 images of Taiwan license plates. This database is categorized into three subsets: access control (AC) with 681 samples, traffic law enforcement (LE) with 757 samples, and road patrol (RP) with 611 samples. AC refers to cases where a vehicle passes a fixed passage at a lower speed or a full stop. This is the easiest situation. The images are captured under different illuminations and different weather conditions. LE refers to cases where a vehicle violates traffic laws and is captured by a roadside camera. The backgrounds are cluttered, with road signs and multiple plates in one image. RP refers to cases where the camera is held on a patrolling vehicle, and the images are taken with arbitrary viewpoints and distances. Source: [Reading Car License Plates Using Deep Convolutional Neural Networks and LSTMs](https://arxiv.org/abs/1601.05610) Image Source: [http://aolpr.ntust.edu.tw/lab/index.html](http://aolpr.ntust.edu.tw/lab/index.html)
Provide a detailed description of the following dataset: AOLP
Set11
**Set11** is a dataset of 11 grayscale images. It is a dataset used for image reconstruction and image compression. Source: [ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing](https://arxiv.org/abs/1706.07929) Image Source: [https://arxiv.org/pdf/1706.07929.pdf](https://arxiv.org/pdf/1706.07929.pdf)
Provide a detailed description of the following dataset: Set11
SALICON
The SALIency in CONtext (**SALICON**) dataset contains 10,000 training images, 5,000 validation images and 5,000 test images for saliency prediction. This dataset has been created by annotating saliency in images from MS COCO. The ground-truth saliency annotations include fixations generated from mouse trajectories. To improve the data quality, isolated fixations with low local density have been excluded. The training and validation sets, provided with ground truth, contain the following data fields: image, resolution and gaze. The testing data contains only the image and resolution fields.
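Fixation annotations such as those in the gaze field are commonly converted into a continuous saliency map by blurring a binary fixation map with a Gaussian; a generic sketch (the kernel width `sigma` is an illustrative choice, and this is not the official SALICON tooling):

```python
# Generic recipe: blur a binary fixation map to obtain a saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_saliency(fixations, height, width, sigma=19.0):
    """fixations: iterable of (x, y) pixel coordinates from the gaze field."""
    fix_map = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            fix_map[int(y), int(x)] = 1.0
    saliency = gaussian_filter(fix_map, sigma=sigma)
    return saliency / saliency.max() if saliency.max() > 0 else saliency
```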
Provide a detailed description of the following dataset: SALICON
GRID Dataset
The QMUL underGround Re-IDentification (**GRID**) dataset contains 250 pedestrian image pairs. Each pair contains two images of the same individual seen from different camera views. All images are captured from 8 disjoint camera views installed in a busy underground station. The figures beside show a snapshot of each of the camera views of the station and sample images in the dataset. The dataset is challenging due to variations in pose, colour and lighting, as well as poor image quality caused by low spatial resolution.
Provide a detailed description of the following dataset: GRID Dataset
Flickr30K Entities
The **Flickr30K Entities** dataset is an extension to the Flickr30K dataset. It augments the original 158k captions with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. This is used to define a new benchmark for localization of textual entity mentions in an image.
Provide a detailed description of the following dataset: Flickr30K Entities
FGVC-Aircraft
FGVC-Aircraft contains 10,200 images of aircraft, with 100 images for each of 102 different aircraft model variants, most of which are airplanes. The (main) aircraft in each image is annotated with a tight bounding box and a hierarchical airplane model label. Aircraft models are organized in a four-level hierarchy. The four levels, from finer to coarser, are: * Model, e.g. Boeing 737-76J. Since certain models are nearly visually indistinguishable, this level is not used in the evaluation. * Variant, e.g. Boeing 737-700. A variant collapses all the models that are visually indistinguishable into one class. The dataset comprises 102 different variants. * Family, e.g. Boeing 737. The dataset comprises 70 different families. * Manufacturer, e.g. Boeing. The dataset comprises 41 different manufacturers. The data is divided into three equally-sized training, validation and test subsets.
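The four-level hierarchy can be represented as a simple mapping from the finest level upward; a sketch using the example labels from the description (the structure here is an illustration, as the dataset ships its own label files):

```python
# The four-level label hierarchy as a mapping from the finest level upward.
HIERARCHY = {
    "Boeing 737-76J": {                 # model: not used in evaluation
        "variant": "Boeing 737-700",    # 102 variant classes
        "family": "Boeing 737",         # 70 family classes
        "manufacturer": "Boeing",       # 41 manufacturers
    },
}

def labels_for(model):
    """Return the (variant, family, manufacturer) labels for a model."""
    entry = HIERARCHY[model]
    return entry["variant"], entry["family"], entry["manufacturer"]
```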
Provide a detailed description of the following dataset: FGVC-Aircraft
DUTS
**DUTS** is a saliency detection dataset containing 10,553 training images and 5,019 test images. All training images are collected from the ImageNet DET training/val sets, while test images are collected from the ImageNet DET test set and the SUN data set. Both the training and test set contain very challenging scenarios for saliency detection. Accurate pixel-level ground truths are manually annotated by 50 subjects.
Provide a detailed description of the following dataset: DUTS
LIP
The **LIP** (**Look into Person**) dataset is a large-scale dataset focusing on semantic understanding of a person. It contains 50,000 images with elaborated pixel-wise annotations of 19 semantic human part labels and 2D human poses with 16 key points. The images are collected from real-world scenarios and the subjects appear with challenging poses and views, heavy occlusions, various appearances and low resolution.
Provide a detailed description of the following dataset: LIP
ApolloScape
**ApolloScape** is a large dataset consisting of over 140,000 video frames (73 street scene videos) from various locations in China under varying weather conditions. Pixel-wise semantic annotation of the recorded data is provided in 2D, with point-wise semantic annotation in 3D for 28 classes. In addition, the dataset contains lane marking annotations in 2D.
Provide a detailed description of the following dataset: ApolloScape
PoseTrack
The **PoseTrack** dataset is a large-scale benchmark for multi-person pose estimation and tracking in videos. It requires not only pose estimation in single frames, but also temporal tracking across frames. It contains 514 videos including 66,374 frames in total, split into 300, 50 and 208 videos for the training, validation and test sets respectively. For training videos, 30 frames from the center are annotated. For validation and test videos, besides 30 frames from the center, every fourth frame is also annotated for evaluating long-range articulated tracking. The annotations include the locations of 15 body keypoints, a unique person id and a head bounding box for each person instance.
Provide a detailed description of the following dataset: PoseTrack
ICVL Hand Posture
The ICVL dataset is a hand pose estimation dataset that consists of 330K training frames and 2 testing sequences of 800 frames each. The dataset is collected from 10 different subjects with 16 hand joint annotations for each frame. Source: [AWR: Adaptive Weighting Regression for 3D Hand Pose Estimation](https://arxiv.org/abs/2007.09590) Image Source: [Tang et al.; Latent Regression Forest: Structured Estimation of 3D Hand Poses](https://alykhantejani.github.io/pdfs/LRF_PAMI_DRAFT.pdf)
Provide a detailed description of the following dataset: ICVL Hand Posture
SegTrack-v2
SegTrack v2 is a video segmentation dataset with full pixel-level annotations on multiple objects at each frame within each video.
Provide a detailed description of the following dataset: SegTrack-v2
Foggy Cityscapes
**Foggy Cityscapes** is a synthetic foggy dataset which simulates fog on real scenes. Each foggy image is rendered with a clear image and depth map from Cityscapes. Thus the annotations and data split in Foggy Cityscapes are inherited from Cityscapes.
Provide a detailed description of the following dataset: Foggy Cityscapes
Vimeo90K
The Vimeo-90K is a large-scale high-quality video dataset for lower-level video processing. It proposes three different video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution.
Provide a detailed description of the following dataset: Vimeo90K
MPIIGaze
**MPIIGaze** is a dataset for appearance-based gaze estimation in the wild. It contains 213,659 images collected from 15 participants during natural everyday laptop use over more than three months. It has a large variability in appearance and illumination.
Provide a detailed description of the following dataset: MPIIGaze
ReferItGame
The ReferIt dataset contains 130,525 expressions for referring to 96,654 objects in 19,894 images of natural scenes.
Provide a detailed description of the following dataset: ReferItGame
MultiTHUMOS
The **MultiTHUMOS** dataset contains dense, multilabel, frame-level action annotations for 30 hours across 400 videos in the THUMOS'14 action detection dataset. It consists of 38,690 annotations of 65 action classes, with an average of 1.5 labels per frame and 10.5 action classes per video.
Provide a detailed description of the following dataset: MultiTHUMOS
CrowdHuman
**CrowdHuman** is a large, richly annotated human detection dataset, which contains 15,000, 4,370 and 5,000 images collected from the Internet for training, validation and testing respectively. The number is more than 10× larger compared with previous challenging pedestrian detection datasets like CityPersons. The total number of persons is also noticeably larger than the others, with ∼340k person and ∼99k ignore region annotations in the CrowdHuman training subset.
Provide a detailed description of the following dataset: CrowdHuman
MSRDailyActivity3D
**DailyActivity3D** dataset is a daily activity dataset captured by a Kinect device. There are 16 activity types: drink, eat, read book, call cellphone, write on a paper, use laptop, use vacuum cleaner, cheer up, sit still, toss paper, play game, lay down on sofa, walk, play guitar, stand up, sit down. If possible, each subject performs an activity in two different poses: “sitting on sofa” and “standing”. The total number of the activity samples is 320. This dataset is designed to cover human’s daily activities in the living room. When the performer stands close to the sofa or sits on the sofa, the 3D joint positions extracted by the skeleton tracker are very noisy. Moreover, most of the activities involve the humans-object interactions. Thus this dataset is more challenging.
Provide a detailed description of the following dataset: MSRDailyActivity3D
McMaster
The **McMaster** dataset is a dataset for color demosaicing, which contains 18 cropped images of size 500×500.
Provide a detailed description of the following dataset: McMaster
Sketch
The **Sketch** dataset contains over 20,000 sketches evenly distributed over 250 object categories.
Provide a detailed description of the following dataset: Sketch
Wireframe
The **Wireframe** dataset consists of 5,462 images (5,000 for training, 462 for test) of indoor and outdoor man-made scenes.
Provide a detailed description of the following dataset: Wireframe
MNIST-M
**MNIST-M** is created by combining MNIST digits with the patches randomly extracted from color photos of BSDS500 as their background. It contains 59,001 training and 90,001 test images.
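A sketch of the construction as commonly described for MNIST-M: each output pixel is the absolute difference between the grayscale digit and a randomly cropped BSDS500 patch. Treat the exact recipe as an assumption and consult the original paper for details:

```python
# Sketch of MNIST-M blending: output = |background_patch - digit| per channel.
import numpy as np

def make_mnist_m(digit, photo, rng=None):
    """digit: (28, 28) uint8 grayscale; photo: (H, W, 3) uint8 with H, W > 28."""
    rng = rng or np.random.default_rng()
    h, w = digit.shape
    y = rng.integers(0, photo.shape[0] - h)  # random crop position
    x = rng.integers(0, photo.shape[1] - w)
    patch = photo[y:y + h, x:x + w].astype(np.int16)
    blended = np.abs(patch - digit[..., None].astype(np.int16))
    return blended.astype(np.uint8)
```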
Provide a detailed description of the following dataset: MNIST-M
tieredImageNet
The **tieredImageNet** dataset is a larger subset of ILSVRC-12 with 608 classes (779,165 images) grouped into 34 higher-level nodes in the ImageNet human-curated hierarchy. This set of nodes is partitioned into 20, 6, and 8 disjoint sets of training, validation, and testing nodes, and the corresponding classes form the respective meta-sets. As argued in Ren et al. (2018), this split near the root of the ImageNet hierarchy results in a more challenging, yet realistic regime with test classes that are less similar to training classes.
Provide a detailed description of the following dataset: tieredImageNet
aPY
**aPY** is a coarse-grained dataset composed of 15339 images from 3 broad categories (animals, objects and vehicles), further divided into a total of 32 subcategories (aeroplane, …, zebra).
Provide a detailed description of the following dataset: aPY
VisDA-2017
**VisDA-2017** is a simulation-to-real dataset for domain adaptation with over 280,000 images across 12 categories in the training, validation and testing domains. The training images are synthetic renderings of 3D models from different angles and under different lighting conditions, while the validation images are collected from MSCOCO.
Provide a detailed description of the following dataset: VisDA-2017
ImageNet-32
ImageNet32 is a large dataset of small images: a down-sampled version of ImageNet. It is composed of 1,281,167 training images and 50,000 test images with 1,000 labels.
Provide a detailed description of the following dataset: ImageNet-32
MVTecAD
MVTec AD is a dataset for benchmarking anomaly detection methods with a focus on industrial inspection. It contains over 5000 high-resolution images divided into fifteen different object and texture categories. Each category comprises a set of defect-free training images and a test set of images with various kinds of defects as well as images without defects. There are two common metrics: detection AUROC and segmentation (or pixelwise) AUROC. Detection (or classification) methods output a single float (anomaly score) per input test image; segmentation methods output an anomaly probability for each pixel. "To assess segmentation performance, we evaluate the relative per-region overlap of the segmentation with the ground truth. To get an additional performance measure that is independent of the determined threshold, we compute the area under the receiver operating characteristic curve (ROC AUC). We define the true positive rate as the percentage of pixels that were correctly classified as anomalous" [1] The segmentation metric was later improved to balance regions of small and large area; see PRO-AUC and others in [2]. Source: [MVTEC ANOMALY DETECTION DATASET](https://www.mvtec.com/company/research/datasets/mvtec-ad/) Image Source: [https://www.mvtec.com/company/research/datasets/mvtec-ad/](https://www.mvtec.com/company/research/datasets/mvtec-ad/) [1] Paul Bergmann et al, "MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection" [2] [Bergmann, P., Batzner, K., Fauser, M. et al. The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. Int J Comput Vis (2021). https://doi.org/10.1007/s11263-020-01400-4](https://link.springer.com/article/10.1007/s11263-020-01400-4)
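A minimal sketch of the two AUROC metrics using scikit-learn; detection AUROC scores one anomaly score per image, while segmentation AUROC treats every pixel as a binary classification:

```python
# Sketch of the two common MVTec AD metrics with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score

def detection_auroc(image_labels, image_scores):
    """One anomaly score per test image; label 1 = anomalous, 0 = defect-free."""
    return roc_auc_score(image_labels, image_scores)

def segmentation_auroc(gt_masks, score_maps):
    """Pixelwise AUROC: gt_masks and score_maps have shape (N, H, W)."""
    return roc_auc_score(np.asarray(gt_masks).ravel(),
                         np.asarray(score_maps).ravel())
```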
Provide a detailed description of the following dataset: MVTecAD
Kvasir
The KVASIR Dataset was released as part of the medical multimedia challenge presented by MediaEval. It is based on images obtained from the GI tract via an endoscopy procedure. The dataset is composed of images that are annotated and verified by medical doctors, and captures 8 different classes. The classes are based on three anatomical landmarks (z-line, pylorus, cecum), three pathological findings (esophagitis, polyps, ulcerative colitis) and two other classes (dyed and lifted polyps, dyed resection margins) related to the polyp removal process. Overall, the dataset contains 8,000 endoscopic images, with 1,000 image examples per class.
Provide a detailed description of the following dataset: Kvasir
Syn2Real
**Syn2Real** is a synthetic-to-real visual domain adaptation benchmark meant to encourage further development of robust domain transfer methods. The goal is to train a model on a synthetic "source" domain and then update it so that its performance improves on a real "target" domain, without using any target annotations. It includes three tasks, illustrated in the figures above: the more traditional closed-set classification task with a known set of categories; the less studied open-set classification task with unknown object categories in the target domain; and the object detection task, which involves localizing instances of objects by predicting their bounding boxes and corresponding class labels.
Provide a detailed description of the following dataset: Syn2Real
ANLI
The Adversarial Natural Language Inference (**ANLI**, Nie et al.) dataset is a new large-scale NLI benchmark, collected via an iterative, adversarial human-and-model-in-the-loop procedure. In particular, the data is selected to be difficult for state-of-the-art models, including BERT and RoBERTa.
Provide a detailed description of the following dataset: ANLI
Cityscapes
**Cityscapes** is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5,000 images with fine annotations and 20,000 with coarse annotations. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video, so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background.
Provide a detailed description of the following dataset: Cityscapes
PASCAL VOC
The PASCAL Visual Object Classes (VOC) 2012 dataset contains 20 object categories including vehicles, household, animals, and other: aeroplane, bicycle, boat, bus, car, motorbike, train, bottle, chair, dining table, potted plant, sofa, TV/monitor, bird, cat, cow, dog, horse, sheep, and person. Each image in this dataset has pixel-level segmentation annotations, bounding box annotations, and object class annotations. This dataset has been widely used as a benchmark for object detection, semantic segmentation, and classification tasks. The **PASCAL VOC** dataset is split into three subsets: 1,464 images for training, 1,449 images for validation and a private testing set.
Provide a detailed description of the following dataset: PASCAL VOC
VGG Face
The **VGG Face** dataset is a face identity recognition dataset that consists of 2,622 identities. It contains over 2.6 million images.
Provide a detailed description of the following dataset: VGG Face
LibriSpeech
The **LibriSpeech** corpus is a collection of approximately 1,000 hours of audiobooks that are a part of the LibriVox project. Most of the audiobooks come from Project Gutenberg. The training data is split into 3 partitions of 100hr, 360hr, and 500hr sets, while the dev and test data are each split into ’clean’ and ’other’ categories depending on how challenging the speech is for Automatic Speech Recognition systems. Each of the dev and test sets is around 5hr in audio length. This corpus also provides n-gram language models and the corresponding texts excerpted from Project Gutenberg books, which contain 803M tokens and 977K unique words.
Provide a detailed description of the following dataset: LibriSpeech
CASIA-WebFace
The **CASIA-WebFace** dataset is used for face verification and face identification tasks. The dataset contains 494,414 face images of 10,575 real identities collected from the web.
Provide a detailed description of the following dataset: CASIA-WebFace
Set14
The **Set14** dataset is a dataset consisting of 14 images commonly used for testing performance of Image Super-Resolution models. Image Source: [https://www.ece.rice.edu/~wakin/images/](https://www.ece.rice.edu/~wakin/images/)
Provide a detailed description of the following dataset: Set14
MS-Celeb-1M
The **MS-Celeb-1M** dataset is a large-scale face recognition dataset consisting of 100K identities, each with about 100 facial images. The original identity labels are obtained automatically from webpages. **NOTE**: This dataset [is currently inactive](https://exposing.ai/msceleb/).
Provide a detailed description of the following dataset: MS-Celeb-1M
UCI Machine Learning Repository
**UCI Machine Learning Repository** is a collection of over 550 datasets.
Provide a detailed description of the following dataset: UCI Machine Learning Repository
SYNTHIA
The **SYNTHIA** dataset is a synthetic dataset that consists of 9400 multi-viewpoint photo-realistic frames rendered from a virtual city and comes with pixel-level semantic annotations for 13 classes. Each frame has resolution of 1280 × 960.
Provide a detailed description of the following dataset: SYNTHIA
NYUv2
The **NYU-Depth V2** data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect. It features: * 1449 densely labeled pairs of aligned RGB and depth images * 464 new scenes taken from 3 cities * 407,024 new unlabeled frames * Each object is labeled with a class and an instance number. The dataset has several components: * Labeled: A subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels. * Raw: The raw RGB, depth and accelerometer data as provided by the Kinect. * Toolbox: Useful functions for manipulating the data and labels.
Provide a detailed description of the following dataset: NYUv2
Urban100
The **Urban100** dataset contains 100 images of urban scenes. It is commonly used as a test set to evaluate the performance of super-resolution models. Image Source: [http://vllab.ucmerced.edu/wlai24/LapSRN/](http://vllab.ucmerced.edu/wlai24/LapSRN/)
Provide a detailed description of the following dataset: Urban100
VGGFace2
The **VGGFace2** dataset is made of around 3.31 million images divided into 9131 classes, each representing a different person identity. The dataset is divided into two splits, one for training and one for test. The latter contains around 170,000 images divided into 500 identities, while all the other images belong to the remaining 8631 classes available for training. While constructing the dataset, the authors focused their efforts on reaching very low label noise and high pose and age diversity, thus making the VGGFace2 dataset a suitable choice to train state-of-the-art deep learning models on face-related tasks. The images of the training set have an average resolution of 137×180 pixels, with less than 1% at a resolution below 32 pixels (considering the shortest side). **CAUTION**: Authors note that the distribution of identities in the VGG-Face dataset may not be representative of the global human population. Please be careful of unintended societal, gender, racial and other biases when training or deploying models trained on this data.
Provide a detailed description of the following dataset: VGGFace2
PASCAL3D+
The Pascal3D+ multi-view dataset consists of images in the wild, i.e., images of object categories exhibiting high variability, captured under uncontrolled settings, in cluttered scenes and under many different poses. Pascal3D+ contains 12 categories of rigid objects selected from the PASCAL VOC 2012 dataset. These objects are annotated with pose information (azimuth, elevation and distance to camera). Pascal3D+ also adds pose annotated images of these 12 categories from the ImageNet dataset.
Provide a detailed description of the following dataset: PASCAL3D+
SUN RGB-D
The SUN RGBD dataset contains 10335 real RGB-D images of room scenes. Each RGB image has a corresponding depth and segmentation map. As many as 700 object categories are labeled. The training and testing sets contain 5285 and 5050 images, respectively.
Provide a detailed description of the following dataset: SUN RGB-D
SUNCG
**SUNCG** is a large-scale dataset of synthetic 3D scenes with dense volumetric annotations. The dataset is currently not available.
Provide a detailed description of the following dataset: SUNCG
Places205
The **Places205** dataset is a large-scale scene-centric dataset with 205 common scene categories. The training dataset contains around 2,500,000 images from these categories. In the training set, each scene category has a minimum of 5,000 and a maximum of 15,000 images. The validation set contains 100 images per category (a total of 20,500 images), and the testing set includes 200 images per category (a total of 41,000 images).
Provide a detailed description of the following dataset: Places205
ModelNet
The **ModelNet**40 dataset contains synthetic object point clouds. As the most widely used benchmark for point cloud analysis, ModelNet40 is popular because of its various categories, clean shapes, and well-constructed structure. The original ModelNet40 consists of 12,311 CAD-generated meshes in 40 categories (such as airplane, car, plant, lamp), of which 9,843 are used for training while the remaining 2,468 are reserved for testing. The corresponding point cloud data points are uniformly sampled from the mesh surfaces, and then further preprocessed by moving to the origin and scaling into a unit sphere.
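The preprocessing described above (move to the origin, scale into a unit sphere) is a standard point-cloud normalization; a minimal sketch:

```python
# Standard point-cloud normalization: centre at the origin, fit a unit sphere.
import numpy as np

def normalize_unit_sphere(points):
    """points: (N, 3) array of xyz coordinates sampled from a mesh surface."""
    centered = points - points.mean(axis=0)          # centroid to the origin
    radius = np.linalg.norm(centered, axis=1).max()  # distance of furthest point
    return centered / radius                         # now inside the unit sphere
```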
Provide a detailed description of the following dataset: ModelNet
YAGO
**Yet Another Great Ontology** (**YAGO**) is a Knowledge Graph that augments WordNet with common knowledge facts extracted from Wikipedia, converting WordNet from a primarily linguistic resource to a common knowledge base. YAGO originally consisted of more than 1 million entities and 5 million facts describing relationships between these entities. YAGO2 grounded entities, facts, and events in time and space, and contained 446 million facts about 9.8 million entities, while YAGO3 added about 1 million more entities from non-English Wikipedia articles. YAGO3-10 is a subset of YAGO3, containing entities which have a minimum of 10 relations each.
Provide a detailed description of the following dataset: YAGO
MPI Sintel
MPI (Max Planck Institute) Sintel is a dataset for optical flow evaluation that has 1064 synthesized stereo images and ground-truth data for disparity. Sintel is derived from the open-source 3D animated short film Sintel. The dataset has 23 different scenes. The stereo images are RGB while the disparity is grayscale. Both have a resolution of 1024×436 pixels and 8 bits per channel.
Provide a detailed description of the following dataset: MPI Sintel
Helen
The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline.
Provide a detailed description of the following dataset: Helen
Omniglot
The Omniglot data set is designed for developing more human-like learning algorithms. It contains 1623 different handwritten characters from 50 different alphabets. Each of the 1623 characters was drawn online via Amazon's Mechanical Turk by 20 different people. Each image is paired with stroke data, sequences of [x, y, t] coordinates with time (t) in milliseconds.
Provide a detailed description of the following dataset: Omniglot
FrameNet
**FrameNet** is a linguistic knowledge graph containing information about lexical and predicate argument semantics of the English language. FrameNet contains two distinct entity classes: frames and lexical units, where a frame is a meaning and a lexical unit is a single meaning for a word.
Provide a detailed description of the following dataset: FrameNet
LSUN
The Large-scale Scene Understanding (**LSUN**) challenge aims to provide a different benchmark for large-scale scene classification and understanding. The LSUN classification dataset contains 10 scene categories, such as dining room, bedroom, kitchen, outdoor church, and so on. For training data, each category contains a huge number of images, ranging from around 120,000 to 3,000,000. The validation data includes 300 images, and the test data has 1000 images for each category.
Provide a detailed description of the following dataset: LSUN
LFPW
The **Labeled Face Parts in-the-Wild** (**LFPW**) dataset consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points, shown below, are included in the dataset.
Provide a detailed description of the following dataset: LFPW
CARLA
**CARLA** (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. It provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, ground truth semantic segmentation maps with 12 semantic classes designed for driving (road, lane marking, traffic sign, sidewalk and so on), bounding boxes for dynamic objects in the environment, and measurements of the agent itself (vehicle location and orientation).
Provide a detailed description of the following dataset: CARLA
OTB
Object Tracking Benchmark (**OTB**) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. The OTB-2013 dataset contains 51 sequences and the OTB-2015 dataset contains all 100 sequences of the OTB dataset.
Provide a detailed description of the following dataset: OTB
Places365
The **Places365** dataset is a scene recognition dataset. It is composed of 10 million images comprising 434 scene classes. There are two versions of the dataset: Places365-Standard with 1.8 million train and 36000 validation images from K=365 scene classes, and Places365-Challenge-2016, in which the size of the training set is increased up to 6.2 million extra images, including 69 new scene classes (leading to a total of 8 million train images from 434 scene classes).
Provide a detailed description of the following dataset: Places365
Extended Yale B
The **Extended Yale B** database contains 2414 frontal-face images with size 192×168 over 38 subjects and about 64 images per subject. The images were captured under different lighting conditions and various facial expressions.
Provide a detailed description of the following dataset: Extended Yale B
IMDb Movie Reviews
The **IMDb Movie Reviews** dataset is a binary sentiment analysis dataset consisting of 50,000 reviews from the Internet Movie Database (IMDb) labeled as positive or negative. The dataset contains an even number of positive and negative reviews. Only highly polarizing reviews are considered. A negative review has a score ≤ 4 out of 10, and a positive review has a score ≥ 7 out of 10. No more than 30 reviews are included per movie. The dataset contains additional unlabeled data.
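The labeling rule stated above, written out as a function (a restatement of the description, not official tooling):

```python
# The review-score thresholds from the description, as a labeling function.
def label_from_score(score):
    """Map a 1-10 IMDb review score to the dataset's binary label."""
    if score <= 4:
        return "negative"
    if score >= 7:
        return "positive"
    return None  # scores of 5-6 are too neutral and are excluded
```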
Provide a detailed description of the following dataset: IMDb Movie Reviews
BookCorpus
**BookCorpus** is a large collection of free novel books written by unpublished authors, which contains 11,038 books (around 74M sentences and 1G words) of 16 different sub-genres (e.g., Romance, Historical, Adventure, etc.).
Provide a detailed description of the following dataset: BookCorpus
FaceWarehouse
**FaceWarehouse** is a 3D facial expression database that provides the facial geometry of 150 subjects, covering a wide range of ages and ethnic backgrounds.
Provide a detailed description of the following dataset: FaceWarehouse
LSP
The **Leeds Sports Pose** (**LSP**) dataset is widely used as the benchmark for human pose estimation. The original LSP dataset contains 2,000 images of sportspersons gathered from Flickr, 1000 for training and 1000 for testing. Each image is annotated with 14 joint locations, where left and right joints are consistently labelled from a person-centric viewpoint. The extended LSP dataset contains additional 10,000 images labeled for training.
Provide a detailed description of the following dataset: LSP
KTH
The efforts to create a non-trivial and publicly available dataset for action recognition was initiated at the **KTH** Royal Institute of Technology in 2004. The KTH dataset is one of the most standard datasets, which contains six actions: walk, jog, run, box, hand-wave, and hand clap. To account for performance nuance, each action is performed by 25 different individuals, and the setting is systematically altered for each action per actor. Setting variations include: outdoor (s1), outdoor with scale variation (s2), outdoor with different clothes (s3), and indoor (s4). These variations test the ability of each algorithm to identify actions independent of the background, appearance of the actors, and the scale of the actors.
Provide a detailed description of the following dataset: KTH
Places
The **Places** dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category.
Provide a detailed description of the following dataset: Places
MoCap
Collection of various motion capture recordings (walking, dancing, sports, and others) performed by over 140 subjects. The database contains free motions which you can download and use. There is a zip file of all asf/amc's on the FAQs page. Source: [https://www.re3data.org/repository/r3d100012183](https://www.re3data.org/repository/r3d100012183)
Provide a detailed description of the following dataset: MoCap
KIT Whole-Body Human Motion
The **KIT Whole-Body Human Motion** Database is a large-scale dataset of whole-body human motion with methods and tools, which allows a unifying representation of captured human motion and efficient search in the database, as well as the transfer of subject-specific motions to robots with different embodiments. Captured subject-specific motion is normalized regarding the subject’s height and weight by using a reference kinematics and dynamics model of the human body, the master motor map (MMM). In contrast with previous approaches and human motion databases, the motion data in this database considers not only the motions of the human subject but also the position and motion of objects with which the subject is interacting. In addition to the description of the MMM reference model, see the paper for the procedures and techniques used for the systematic recording, labeling, and organization of human motion capture data and object motions, as well as the subject–object relations.
Provide a detailed description of the following dataset: KIT Whole-Body Human Motion
Meta-Dataset
The **Meta-Dataset** benchmark is a large few-shot learning benchmark and consists of multiple datasets of different data distributions. It does not restrict few-shot tasks to have fixed ways and shots, thus representing a more realistic scenario. It consists of 10 datasets from diverse domains: * ILSVRC-2012 (the ImageNet dataset, consisting of natural images with 1000 categories) * Omniglot (hand-written characters, 1623 classes) * Aircraft (dataset of aircraft images, 100 classes) * CUB-200-2011 (dataset of Birds, 200 classes) * Describable Textures (different kinds of texture images with 43 categories) * Quick Draw (black and white sketches of 345 different categories) * Fungi (a large dataset of mushrooms with 1500 categories) * VGG Flower (dataset of flower images with 102 categories), * Traffic Signs (German traffic sign images with 43 classes) * MSCOCO (images collected from Flickr, 80 classes). All datasets except Traffic signs and MSCOCO have a training, validation and test split (proportioned roughly into 70%, 15%, 15%). The datasets Traffic Signs and MSCOCO are reserved for testing only.
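A simplified sampler illustrating variable-way, variable-shot episodes; the real benchmark uses a more involved, class-balanced procedure, so treat the bounds below as assumptions:

```python
# Simplified variable-way, variable-shot episode sampler (illustrative only).
import random

def sample_episode(class_to_images, max_ways=50, support_budget=100):
    """class_to_images: dict mapping class label -> list of images (>= 2 each)."""
    n_ways = random.randint(5, min(max_ways, len(class_to_images)))
    classes = random.sample(list(class_to_images), n_ways)
    support, query = [], []
    for label in classes:
        images = random.sample(class_to_images[label], len(class_to_images[label]))
        # variable shots per class, capped by the overall support budget
        k = random.randint(1, max(1, min(support_budget // n_ways, len(images) - 1)))
        support += [(img, label) for img in images[:k]]
        query += [(img, label) for img in images[k:k + 10]]
    return support, query
```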
Provide a detailed description of the following dataset: Meta-Dataset
USF
The **USF** **Human ID Gait Challenge Dataset** is a dataset of videos for gait recognition. It has videos from 122 subjects in up to 32 possible combinations of variations in factors. Source: [http://www.eng.usf.edu/cvprg/Gait_Data.html](http://www.eng.usf.edu/cvprg/Gait_Data.html)
Provide a detailed description of the following dataset: USF
BirdSong
The **BirdSong** dataset consists of audio recordings of bird songs at the H. J. Andrews (HJA) Experimental Forest, made using unattended microphones. The goal of the dataset is to provide data to automatically identify the species of bird responsible for each utterance in these recordings. The dataset contains 548 ten-second audio recordings.
Provide a detailed description of the following dataset: BirdSong
Oxford5k
Oxford5K is the **Oxford Buildings** Dataset, which contains 5062 images collected from Flickr. It offers a set of 55 queries for 11 landmark buildings, five for each landmark.
Provide a detailed description of the following dataset: Oxford5k
CBSD68
**Color BSD68** dataset for image denoising benchmarks is part of The Berkeley Segmentation Dataset and Benchmark. It is used for measuring image denoising algorithms performance. It contains 68 images.
Provide a detailed description of the following dataset: CBSD68
ScribbleSup
The **PASCAL-Scribble Dataset** is an extension of the PASCAL dataset with scribble annotations for semantic segmentation. The annotations follow two different protocols. In the first protocol, the PASCAL VOC 2012 set is annotated, with 20 object categories (aeroplane, bicycle, ...) and one background category. There are 12,031 images annotated, including 10,582 images in the training set and 1,449 images in the validation set. In the second protocol, the 59 object/stuff categories and one background category involved in the PASCAL-CONTEXT dataset are used. Besides the 20 object categories in the first protocol, there are 39 extra categories (snow, tree, ...) included. This protocol is followed to annotate the PASCAL-CONTEXT dataset. 4,998 images in the training set have been annotated.
Provide a detailed description of the following dataset: ScribbleSup
Stanford Background
The **Stanford Background** dataset contains 715 RGB images and the corresponding label images. Images are approximately 240×320 pixels in size and pixels are classified into eight different categories.
Provide a detailed description of the following dataset: Stanford Background
New College
The **New College** Data is a freely available dataset collected from a robot completing several loops outdoors around the New College campus in Oxford. The data includes odometry, laser scan, and visual information. The dataset URL is not working anymore.
Provide a detailed description of the following dataset: New College
MALF
The **MALF** dataset is a large dataset with 5,250 images annotated with multiple facial attributes and it is specifically constructed for fine grained evaluation. Source: [Pushing the Limits of Unconstrained Face Detection:a Challenge Dataset and Baseline Results](https://arxiv.org/abs/1804.10275) Image Source: [http://www.cbsr.ia.ac.cn/faceevaluation/](http://www.cbsr.ia.ac.cn/faceevaluation/)
Provide a detailed description of the following dataset: MALF
Oxford-Affine
The **Oxford-Affine** dataset is a small dataset containing 8 scenes with sequence of 6 images per scene. The images in a sequence are related by homographies. Source: [A Large Dataset for Improving Patch Matching](https://arxiv.org/abs/1801.01466) Image Source: [https://www.robots.ox.ac.uk/~vgg/data/affine/](https://www.robots.ox.ac.uk/~vgg/data/affine/)
Provide a detailed description of the following dataset: Oxford-Affine