Columns: dataset_name (string, 2-128 chars), description (string, 1-9.7k chars), prompt (string, 59-185 chars)
Argoverse
**Argoverse** is a motion forecasting benchmark with over 300K scenarios collected in Pittsburgh and Miami. Each scenario is a sequence of frames sampled at 10 Hz. Each sequence contains one object of interest called the “agent”, and the task is to predict the agent's future locations over a 3-second horizon. The sequences are split into training, validation and test sets of 205,942, 39,472 and 78,143 sequences respectively. These splits have no geographical overlap.
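As a quick sanity check on the numbers above (a purely illustrative sketch; the variable names are not from the dataset's API):

```python
# At a 10 Hz frame rate, a 3-second prediction horizon corresponds to
# 30 future agent positions per scenario.
sampling_rate_hz = 10
horizon_seconds = 3
future_steps = sampling_rate_hz * horizon_seconds
assert future_steps == 30
```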
Provide a detailed description of the following dataset: Argoverse
MIB Dataset
You need to request access to download and use the dataset. It contains fake and real Twitter accounts together with their follower/friend IDs, from which a graph can be constructed.
Provide a detailed description of the following dataset: MIB Dataset
CLEVR
**CLEVR** (**Compositional Language and Elementary Visual Reasoning**) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; each image comes with a number of highly compositional questions that fall into different categories. Those categories fall into 5 classes of tasks: Exist, Count, Compare Integer, Query Attribute and Compare Attribute. The CLEVR dataset consists of:
- a training set of 70k images and 700k questions,
- a validation set of 15k images and 150k questions,
- a test set of 15k images and 150k questions about objects,
- answers, scene graphs and functional programs for all train and validation images and questions.

Each object present in the scene, aside from position, is characterized by four attributes: 2 sizes (large, small), 3 shapes (cube, cylinder, sphere), 2 material types (rubber, metal) and 8 colors (gray, blue, brown, yellow, red, green, purple, cyan), resulting in 96 unique combinations.
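The attribute combinatorics quoted above can be verified with a minimal sketch (plain Python, no CLEVR tooling assumed):

```python
from itertools import product

# 2 sizes x 3 shapes x 2 materials x 8 colors = 96 unique attribute combinations.
sizes = ["large", "small"]
shapes = ["cube", "cylinder", "sphere"]
materials = ["rubber", "metal"]
colors = ["gray", "blue", "brown", "yellow", "red", "green", "purple", "cyan"]

combinations = list(product(sizes, shapes, materials, colors))
assert len(combinations) == 96
print(combinations[0])  # ('large', 'cube', 'rubber', 'gray')
```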
Provide a detailed description of the following dataset: CLEVR
PROBA-V
The PROBA-V Super-Resolution dataset is the official dataset of ESA's Kelvins competition for "PROBA-V Super Resolution". It contains satellite data from 74 hand-selected regions around the globe at different points in time. The data is composed of radiometrically and geometrically corrected Top-Of-Atmosphere (TOA) reflectances for the RED and NIR spectral bands at 300m and 100m resolution in Plate Carrée projection. The 300m resolution data is delivered as 128x128 grey-scale pixel images, the 100m resolution data as 384x384 grey-scale pixel images. Additionally, a quality map is provided for each image, indicating whether each pixel is concealed (e.g. by clouds, ice, water, missing information, etc.). The goal of the challenge can be described as multi-image super-resolution: construct a single high-resolution image out of a series of more frequent low-resolution images. Detailed information about the related competition can be found at https://kelvins.esa.int/proba-v-super-resolution.
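A minimal sketch of a naive multi-image baseline for this task, assuming the low-resolution views of one scene and their quality maps have already been loaded as NumPy arrays (`lr_views`, `masks`); file parsing is dataset-specific and omitted, and real submissions would use a learned model rather than this crude average-and-upsample:

```python
import numpy as np

def naive_baseline(lr_views, masks):
    """Average the clear pixels of several 128x128 views, then upsample 3x to 384x384."""
    stack = np.stack([v.astype(np.float64) for v in lr_views])   # (N, 128, 128)
    clear = np.stack([m.astype(np.float64) for m in masks])      # 1.0 where not concealed
    counts = clear.sum(axis=0)
    mean_lr = np.where(counts > 0,
                       (stack * clear).sum(axis=0) / np.maximum(counts, 1),
                       stack.mean(axis=0))
    # Nearest-neighbour 3x upsampling from the 128x128 grid to the 384x384 target grid.
    return np.repeat(np.repeat(mean_lr, 3, axis=0), 3, axis=1)
```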
Provide a detailed description of the following dataset: PROBA-V
Tai-Chi-HD
**Tai-Chi-HD** is a high-resolution dataset that can be used as a reference benchmark for evaluating frameworks for image animation and video generation. It consists of cropped videos of full human bodies performing Tai Chi actions. Image source: [https://papers.nips.cc/paper/2019/file/31c0b36aef265d9221af80872ceb62f9-Paper.pdf](https://papers.nips.cc/paper/2019/file/31c0b36aef265d9221af80872ceb62f9-Paper.pdf)
Provide a detailed description of the following dataset: Tai-Chi-HD
CMU-MOSEI
CMU Multimodal Opinion Sentiment and Emotion Intensity (**CMU-MOSEI**) is the largest dataset of sentence level sentiment analysis and emotion recognition in online videos. CMU-MOSEI contains more than 65 hours of annotated video from more than 1000 speakers and 250 topics.
Provide a detailed description of the following dataset: CMU-MOSEI
AffectNet
**AffectNet** is a large facial expression dataset with around 0.4 million images manually labeled for the presence of eight facial expressions (neutral, happy, angry, sad, fear, surprise, disgust, contempt) along with the intensity of valence and arousal.
Provide a detailed description of the following dataset: AffectNet
FER+
The **FER+** dataset is an extension of the original FER dataset, where the images have been re-labelled into one of 8 emotion types: neutral, happiness, surprise, sadness, anger, disgust, fear, and contempt.
Provide a detailed description of the following dataset: FER+
CommonGen
CommonGen is constructed through a combination of crowdsourced and existing caption corpora and consists of 79k commonsense descriptions over 35k unique concept-sets.
Provide a detailed description of the following dataset: CommonGen
The China Physiological Signal Challenge 2018
The China Physiological Signal Challenge 2018 aims to encourage the development of algorithms to identify the rhythm/morphology abnormalities from 12-lead ECGs. The data used in CPSC 2018 include one normal ECG type and eight abnormal types.
Provide a detailed description of the following dataset: The China Physiological Signal Challenge 2018
University-1652
**University-1652** contains data from three platforms (synthetic drones, satellites and ground cameras) of 1,652 university buildings around the world. University-1652 is a drone-based geo-localization dataset and enables two new tasks: drone-view target localization and drone navigation.
Provide a detailed description of the following dataset: University-1652
FQuAD
**FQuAD** is a French native reading comprehension dataset of questions and answers on a set of Wikipedia articles. It consists of 25,000+ samples for the 1.0 version and 60,000+ samples for the 1.1 version.
Provide a detailed description of the following dataset: FQuAD
HARD
The Hotel Arabic-Reviews Dataset (HARD) contains 93,700 hotel reviews in the Arabic language. The reviews were collected from the Booking.com website during June/July 2016. They are expressed in Modern Standard Arabic as well as dialectal Arabic.
Provide a detailed description of the following dataset: HARD
3DFAW
**3DFAW** contains 23k images with 66 3D face keypoint annotations. Source: [Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild](https://arxiv.org/abs/1911.11130) Image Source: [http://mhug.disi.unitn.it/workshop/3dfaw/](http://mhug.disi.unitn.it/workshop/3dfaw/)
Provide a detailed description of the following dataset: 3DFAW
Breakfast
The **Breakfast** Actions Dataset comprises 10 actions related to breakfast preparation, performed by 52 different individuals in 18 different kitchens. The dataset is one of the largest fully annotated datasets available. The actions were recorded “in the wild”, as opposed to a single controlled lab environment. It consists of over 77 hours of video recordings.
Provide a detailed description of the following dataset: Breakfast
NELL-995
**NELL-995** is a knowledge graph (KG) completion dataset derived from the NELL (Never-Ending Language Learning) system.
Provide a detailed description of the following dataset: NELL-995
Food-101N
The Food-101N dataset is introduced in "CleanNet: Transfer Learning for Scalable Image Training with Label Noise" (CVPR'18). It is an image dataset containing about 310,009 images of food recipes classified into 101 classes (categories). Food-101N and the Food-101 dataset share the same 101 classes, but Food-101N has many more images and is noisier. Food-101N is designed for the following two tasks: 1) learning image classification with label noise, and 2) label noise detection.
Provide a detailed description of the following dataset: Food-101N
Composition-1K
Composition-1K is a large-scale image matting dataset including 49300 training images and 1000 testing images. Image source: [https://arxiv.org/pdf/1703.03872v3.pdf](https://arxiv.org/pdf/1703.03872v3.pdf)
Provide a detailed description of the following dataset: Composition-1K
KolektorSDD
The dataset is constructed from images of defective production items that were provided and annotated by [Kolektor Group d.o.o.](https://www.kolektordigital.com/en/advanced-visual-tecnologies). The images were captured in a controlled industrial environment in a real-world case. The dataset consists of 399 images at 500 x ~1250 px in size. Please cite our paper published in the Journal of Intelligent Manufacturing when using this dataset:

```
@article{Tabernik2019JIM,
  author  = {Tabernik, Domen and {\v{S}}ela, Samo and Skvar{\v{c}}, Jure and Sko{\v{c}}aj, Danijel},
  journal = {Journal of Intelligent Manufacturing},
  title   = {{Segmentation-Based Deep-Learning Approach for Surface-Defect Detection}},
  year    = {2019},
  month   = {May},
  day     = {15},
  issn    = {1572-8145},
  doi     = {10.1007/s10845-019-01476-x}
}
```
Provide a detailed description of the following dataset: KolektorSDD
ASLG-PC12
**ASLG-PC12** is an artificial corpus built using grammatical dependency rules, created due to the lack of resources for Sign Language.
Provide a detailed description of the following dataset: ASLG-PC12
CIFAR10-DVS
**CIFAR10-DVS** is an event-stream dataset for object classification. 10,000 frame-based images from the CIFAR-10 dataset are converted into 10,000 event streams with an event-based sensor whose resolution is 128×128 pixels. The dataset has an intermediate difficulty with 10 different classes. The repeated closed-loop smooth (RCLS) movement of the frame-based images is adopted to implement the conversion. Due to this movement, the images produce rich local intensity changes in continuous time, which are quantized by each pixel of the event-based camera. Source: [Structure-Aware Network for Lane Marker Extraction with Dynamic Vision Sensor](https://arxiv.org/abs/2008.06204) Image Source: [https://www.frontiersin.org/articles/10.3389/fnins.2017.00309/full](https://www.frontiersin.org/articles/10.3389/fnins.2017.00309/full)
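A minimal sketch of working with such event streams, assuming the events of one sample have already been decoded into an `(x, y, timestamp, polarity)` array (the AEDAT decoding itself is format-specific and omitted):

```python
import numpy as np

def events_to_frame(events, height=128, width=128):
    """Accumulate an event stream into a simple per-pixel event-count frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    np.add.at(frame, (ys, xs), 1)  # one count per event, regardless of polarity
    return frame

# Illustrative usage with a tiny synthetic stream of 3 events.
dummy_events = np.array([[0, 0, 100, 1], [0, 0, 150, -1], [5, 7, 200, 1]])
print(events_to_frame(dummy_events)[0, 0])  # 2
```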
Provide a detailed description of the following dataset: CIFAR10-DVS
Stanford Online Products
**Stanford Online Products** (SOP) dataset has 22,634 classes with 120,053 product images. The first 11,318 classes (59,551 images) are used for training and the other 11,316 classes (60,502 images) are used for testing.
Provide a detailed description of the following dataset: Stanford Online Products
In-Shop
The In-shop Clothes Retrieval Benchmark evaluates the performance of in-shop clothes retrieval. It is a large subset of DeepFashion, containing large pose and scale variations. It also has large diversity, large quantities, and rich annotations, including:
- 7,982 clothing items;
- 52,712 in-shop clothes images and ~200,000 cross-pose/scale pairs.

Each image is annotated with bounding box, clothing type and pose type.
Provide a detailed description of the following dataset: In-Shop
Ecoli
The **Ecoli** dataset is a dataset for protein localization. It contains 336 E.coli proteins split into 8 different classes.
Provide a detailed description of the following dataset: Ecoli
Yeast
The **Yeast** dataset consists of a protein-protein interaction network. Interaction detection methods have led to the discovery of thousands of interactions between proteins, and discerning relevance within large-scale data sets is important to present-day biology.
Provide a detailed description of the following dataset: Yeast
MOT17
The **Multiple Object Tracking 17** (**MOT17**) dataset is a dataset for multiple object tracking. Similar to its previous version MOT16, this challenge contains seven different indoor and outdoor scenes of public places with pedestrians as the objects of interest. A video for each scene is divided into two clips, one for training and the other for testing. The dataset provides detections of objects in the video frames with three detectors, namely SDP, Faster-RCNN and DPM. The challenge accepts both on-line and off-line tracking approaches, where the latter are allowed to use the future video frames to predict tracks.
Provide a detailed description of the following dataset: MOT17
MOT20
**MOT20** is a dataset for multiple object tracking. The dataset contains 8 challenging video sequences (4 train, 4 test) in unconstrained environments, from crowded places such as train stations, town squares and a sports stadium. Image Source: [https://motchallenge.net/vis/MOT20-04](https://motchallenge.net/vis/MOT20-04)
Provide a detailed description of the following dataset: MOT20
SEMAINE
The **SEMAINE** videos dataset contains spontaneous data capturing the audiovisual interaction between a human and an operator undertaking the role of an avatar with four personalities: Poppy (happy), Obadiah (gloomy), Spike (angry) and Prudence (pragmatic). The audiovisual sequences have been recorded at a video rate of 25 fps (352 x 288 pixels). The dataset consists of audiovisual interaction between a human and an operator undertaking the role of an agent (Sensitive Artificial Agent). SEMAINE video clips have been annotated with pairs of epistemic states such as agreement, interested, certain, concentration, and thoughtful, with continuous ratings in the range [-1, +1], where -1 indicates the most negative rating (e.g., no concentration at all) and +1 the highest (e.g., most concentration). Twenty-four recording sessions are used in the Solid SAL scenario. Recordings are made of both the user and the operator, and there are usually four character interactions in each recording session, providing a total of 95 character interactions and 190 video clips.
Provide a detailed description of the following dataset: SEMAINE
R2R
R2R is a dataset for visually-grounded natural language navigation in real buildings. The dataset requires autonomous agents to follow human-generated navigation instructions in previously unseen buildings. For training, each instruction is associated with a Matterport3D Simulator trajectory. 22k instructions are available, with an average length of 29 words. There is a test evaluation server for this dataset available at EvalAI.
Provide a detailed description of the following dataset: R2R
SceneNN
SceneNN is an RGB-D scene dataset consisting of more than 100 indoor scenes. The scenes were captured at various places, e.g., offices, dormitories, classrooms and pantries, at the University of Massachusetts Boston and the Singapore University of Technology and Design. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. The dataset is additionally enriched with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses.
Provide a detailed description of the following dataset: SceneNN
EGTEA
**EGTEA Gaze+** (Extended GTEA Gaze+) is a large-scale dataset for first-person (FPV) actions and gaze. It subsumes GTEA Gaze+ and comes with HD videos (1280x960), audio, gaze tracking data, frame-level action annotations, and pixel-level hand masks at sampled frames. Specifically, EGTEA Gaze+ contains 28 hours (de-identified) of cooking activities from 86 unique sessions of 32 subjects. These videos come with audio and gaze tracking (30Hz). We have further provided human annotations of actions (human-object interactions) and hand masks. The action annotations include 10,325 instances of fine-grained actions, such as "Cut bell pepper" or "Pour condiment (from) condiment container into salad". The hand annotations consist of 15,176 hand masks from 13,847 frames of the videos.
Provide a detailed description of the following dataset: EGTEA
GAP
**GAP** is a graph processing benchmark suite with the goal of helping to standardize graph processing evaluations. Fewer differences between graph processing evaluations will make it easier to compare different research efforts and quantify improvements. The benchmark not only specifies graph kernels, input graphs, and evaluation methodologies, but it also provides optimized baseline implementations. These baseline implementations are representative of state-of-the-art performance, and thus new contributions should outperform them to demonstrate an improvement. The input graphs are sized appropriately for shared memory platforms, but any implementation on any platform that conforms to the benchmark's specifications could be compared. This benchmark suite can be used in a variety of settings. Graph framework developers can demonstrate the generality of their programming model by implementing all of the benchmark's kernels and delivering competitive performance on all of the benchmark's graphs. Algorithm designers can use the input graphs and the baseline implementations to demonstrate their contribution. Platform designers and performance analysts can use the suite as a workload representative of graph processing.
Provide a detailed description of the following dataset: GAP
BIPED
# Details
It contains 250 outdoor images of 1280×720 pixels each. These images have been carefully annotated by experts in the computer vision field, hence no redundancy has been considered. In spite of that, all annotations have been cross-checked several times in order to correct possible mistakes or wrong edges made by a single annotator. This dataset is publicly available as a benchmark for evaluating edge detection algorithms. The generation of this dataset is motivated by the lack of edge detection datasets; in fact, there is just one dataset publicly available for the edge detection task, published in 2016 (MDBD: Multicue Dataset for Boundary Detection, the subset for edge detection). The level of detail of the edge annotations in BIPED's images can be appreciated by looking at the ground truth. The BIPED dataset has 250 images in high definition, already split into 200 for training and 50 for testing.
# Version
The current version is the second one.
Provide a detailed description of the following dataset: BIPED
StereoSet
**StereoSet** is a large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion.
Provide a detailed description of the following dataset: StereoSet
MIT-States
The **MIT-States** dataset has 245 object classes, 115 attribute classes and ∼53K images. There is a wide range of objects (e.g., fish, persimmon, room) and attributes (e.g., mossy, deflated, dirty). On average, each object instance is modified by one of the 9 attributes it affords.
Provide a detailed description of the following dataset: MIT-States
Caltech-256
**Caltech-256** is an object recognition dataset containing 30,607 real-world images, of different sizes, spanning 257 classes (256 object classes and an additional clutter class). Each class is represented by at least 80 images. The dataset is a superset of the Caltech-101 dataset.
Provide a detailed description of the following dataset: Caltech-256
SCDE
**SCDE** is a human-created sentence cloze dataset, collected from public school English examinations in China. The task requires a model to fill up multiple blanks in a passage from a shared candidate set with distractors designed by English teachers.
Provide a detailed description of the following dataset: SCDE
VATEX
**VATEX** is a multilingual, large, linguistically complex, and diverse dataset in terms of both video and natural language descriptions. It supports two tasks for video-and-language research: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context.
Provide a detailed description of the following dataset: VATEX
ViGGO
The ViGGO corpus is a set of 6,900 pairs of meaning representations and natural language utterances in the video game domain. The meaning representations cover 9 different dialogue acts.
Provide a detailed description of the following dataset: ViGGO
REAL275
REAL275 is a benchmark for category-level pose estimation. It contains 4,300 training frames, 950 validation frames and 2,750 testing frames across 18 different real scenes.
Provide a detailed description of the following dataset: REAL275
ISTD
The Image Shadow Triplets dataset (**ISTD**) is a dataset for shadow understanding that contains 1870 image triplets of shadow image, shadow mask, and shadow-free image.
Provide a detailed description of the following dataset: ISTD
LCQMC
**LCQMC** is a large-scale Chinese question matching corpus. LCQMC is more general than a paraphrase corpus, as it focuses on intent matching rather than paraphrasing. The corpus contains 260,068 question pairs with manual annotation.
Provide a detailed description of the following dataset: LCQMC
CoNLL-2009
The task builds on the CoNLL-2008 task and extends it to multiple languages. The core of the task is to predict syntactic and semantic dependencies and their labeling. Data is provided for both statistical training and evaluation; the labeled dependencies are extracted from manually annotated treebanks such as the Penn Treebank for English, the Prague Dependency Treebank for Czech and similar treebanks for Catalan, Chinese, German, Japanese and Spanish, enriched with semantic relations (such as those captured in PropBank, NomBank and similar resources). Great effort has been devoted to providing the participants with a common and relatively simple data representation for all the languages, similar to the previous year's English data.
Provide a detailed description of the following dataset: CoNLL-2009
Ciao
The **Ciao** dataset contains ratings given by users to items, and also contains item category information. The data comes from the Epinions dataset.
Provide a detailed description of the following dataset: Ciao
SICK
The **Sentences Involving Compositional Knowledge** (**SICK**) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in lexical, syntactic and semantic phenomena. Each pair of sentences is annotated along two dimensions: relatedness and entailment. The relatedness score ranges from 1 to 5, and Pearson's r is used for evaluation; the entailment relation is categorical, consisting of entailment, contradiction, and neutral. There are 4,439 pairs in the train split, 495 in the trial split used for development and 4,906 in the test split. The sentence pairs are generated from image and video caption datasets before being paired up automatically.
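A minimal sketch of the relatedness evaluation mentioned above (Pearson's r between gold and predicted scores); the lists below are toy values, not actual SICK annotations:

```python
from scipy.stats import pearsonr

def relatedness_eval(gold, pred):
    """Pearson correlation between gold relatedness scores (1-5) and model predictions."""
    r, _ = pearsonr(gold, pred)
    return r

print(relatedness_eval([1.2, 3.4, 4.8, 2.5], [1.0, 3.6, 4.5, 2.9]))
```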
Provide a detailed description of the following dataset: SICK
FB15k
The **FB15k** dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of 592,213 triplets with 14,951 entities and 1,345 relationships. FB15K-237 is a variant of the original dataset where inverse relations are removed, since it was found that a large number of test triplets could be obtained by inverting triplets in the training set.
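A minimal sketch of the leakage check that motivated FB15K-237: a test triple is trivially predictable when some training triple links the same entity pair in the reverse direction. The toy triples below are illustrative, not actual FB15k content:

```python
def inverse_leakage(train_triples, test_triples):
    """Return test triples (h, r, t) whose entity pair appears reversed in training."""
    reversed_pairs = {(t, h) for h, _, t in train_triples}
    return [(h, r, t) for h, r, t in test_triples if (h, t) in reversed_pairs]

train = [("A", "parent_of", "B")]
test = [("B", "child_of", "A")]
print(inverse_leakage(train, test))  # [('B', 'child_of', 'A')]
```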
Provide a detailed description of the following dataset: FB15k
CJRC
The Chinese judicial reading comprehension (CJRC) dataset contains approximately 10K documents and almost 50K questions with answers. The documents come from judgment documents and the questions are annotated by law experts.
Provide a detailed description of the following dataset: CJRC
eRisk 2017
**eRisk 2017** provides data for the early risk detection of depression, released as part of the CLEF eRisk 2017 lab.
Provide a detailed description of the following dataset: eRisk 2017
HyperLex
A dataset and evaluation resource that quantifies the extent of semantic category membership, i.e., the type-of relation (also known as the hyponymy-hypernymy or lexical entailment (LE) relation), between 2,616 concept pairs.
Provide a detailed description of the following dataset: HyperLex
DBLP
**DBLP** is a citation network dataset. The citation data is extracted from DBLP, ACM, MAG (Microsoft Academic Graph), and other sources. The first version contains 629,814 papers and 632,752 citations. Each paper is associated with an abstract, authors, year, venue, and title. The dataset can be used for clustering with network and side information, studying influence in the citation network, finding the most influential papers, topic modeling analysis, etc.
Provide a detailed description of the following dataset: DBLP
ACM
The **ACM** dataset contains papers published in KDD, SIGMOD, SIGCOMM, MobiCOMM, and VLDB, divided into three classes (Database, Wireless Communication, Data Mining). A heterogeneous graph is constructed, comprising 3,025 papers, 5,835 authors, and 56 subjects. Paper features correspond to elements of a bag-of-words representation of keywords. Source: [https://arxiv.org/pdf/1903.07293.pdf](https://arxiv.org/pdf/1903.07293.pdf)
Provide a detailed description of the following dataset: ACM
FNC-1
**FNC-1** was designed as a stance detection dataset and contains 75,385 labeled headline-article pairs. The pairs are labelled as agree, disagree, discuss, or unrelated. Each headline in the dataset is phrased as a statement.
Provide a detailed description of the following dataset: FNC-1
GYAFC
Grammarly's Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style, containing a total of 110K informal/formal sentence pairs. Yahoo Answers, a question answering forum, contains a large number of informal sentences and allows redistribution of data, so the authors used the Yahoo Answers L6 corpus to create the GYAFC dataset of informal and formal sentence pairs. In order to ensure a uniform distribution of data, they removed sentences that are questions, contain URLs, or are shorter than 5 words or longer than 25. After these preprocessing steps, 40 million sentences remain. The Yahoo Answers corpus consists of several different domains like Business, Entertainment & Music, Travel, Food, etc. The Pavlick and Tetreault formality classifier (PT16) shows that the formality level varies significantly across different genres. In order to control for this variation, the authors work with the two specific domains that contain the most informal sentences and show results on training and testing within those categories. The authors use the formality classifier from PT16 to identify informal sentences, training this classifier on the Answers genre of the PT16 corpus, which consists of nearly 5,000 randomly selected sentences from Yahoo Answers manually annotated on a scale from -3 (very informal) to 3 (very formal). They find that the domains of Entertainment & Music and Family & Relationships contain the most informal sentences and create the GYAFC dataset using these domains.
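A minimal sketch of the sentence filtering described above (drop questions, sentences containing URLs, and sentences shorter than 5 or longer than 25 words); the authors' exact rules and tokenization may differ:

```python
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+")

def keep_sentence(sentence: str) -> bool:
    """Apply the question / URL / length filters to a candidate sentence."""
    if sentence.strip().endswith("?"):
        return False
    if URL_RE.search(sentence):
        return False
    return 5 <= len(sentence.split()) <= 25

candidates = ["what r u doing later?",
              "i dont really like that movie tbh it was way too long"]
print([s for s in candidates if keep_sentence(s)])  # keeps only the second sentence
```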
Provide a detailed description of the following dataset: GYAFC
AIDS
**AIDS** is a graph dataset. It consists of 2000 graphs representing molecular compounds, constructed from the AIDS Antiviral Screen Database of Active Compounds. The underlying screening database contains 4395 chemical compounds, of which 423 belong to class CA, 1081 to CM, and the remaining compounds to CI.
Provide a detailed description of the following dataset: AIDS
Sydney Urban Objects
This dataset contains a variety of common urban road objects scanned with a Velodyne HDL-64E LIDAR, collected in the CBD of Sydney, Australia. There are 631 individual scans of objects across classes of vehicles, pedestrians, signs and trees. It was collected in order to test matching and classification algorithms. It aims to provide non-ideal sensing conditions that are representative of practical urban sensing systems, with a large variability in viewpoint and occlusion. Source: [http://www.acfr.usyd.edu.au/papers/SydneyUrbanObjectsDataset.shtml](http://www.acfr.usyd.edu.au/papers/SydneyUrbanObjectsDataset.shtml) Image Source: [http://www.acfr.usyd.edu.au/papers/SydneyUrbanObjectsDataset.shtml](http://www.acfr.usyd.edu.au/papers/SydneyUrbanObjectsDataset.shtml)
Provide a detailed description of the following dataset: Sydney Urban Objects
Digits
The DIGITS dataset consists of 1797 8×8 grayscale images (1439 for training and 360 for testing) of handwritten digits.
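A minimal sketch of loading the dataset with scikit-learn; the 1439/360 partition quoted above is specific to a particular setup, so the split below is only illustrative:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

digits = load_digits()
print(digits.images.shape)  # (1797, 8, 8) grayscale digit images

# Illustrative hold-out split with 360 test images (not necessarily the exact split cited above).
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=360, random_state=0
)
print(len(X_train), len(X_test))
```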
Provide a detailed description of the following dataset: Digits
Brazil Air-Traffic
**Brazil Air-Traffic** is an air-traffic network of Brazilian airports: nodes correspond to airports, edges indicate the existence of commercial flights between them, and airports are labelled according to their level of activity. Introduced in the struc2vec paper (Ribeiro et al.).
Provide a detailed description of the following dataset: Brazil Air-Traffic
USA Air-Traffic
**USA Air-Traffic** is an air-traffic network of American airports: nodes correspond to airports, edges indicate the existence of commercial flights between them, and airports are labelled according to their level of activity. Introduced in: Leonardo Filipe Rodrigues Ribeiro, Pedro H. P. Saverese, and Daniel R. Figueiredo. struc2vec: Learning node representations from structural identity.
Provide a detailed description of the following dataset: USA Air-Traffic
Mutagenicity
**Mutagenicity** is a chemical compound dataset of drugs, which can be categorized into two classes: mutagen and non-mutagen. Source: [Hierarchical Graph Pooling with Structure Learning](https://arxiv.org/abs/1911.05954)
Provide a detailed description of the following dataset: Mutagenicity
SIDER
**SIDER** contains information on marketed medicines and their recorded adverse drug reactions. The information is extracted from public documents and package inserts. The available information includes side effect frequency, drug and side effect classifications, as well as links to further information, for example drug–target relations. Source: [Side Effect Resource](http://sideeffects.embl.de/) Image Source: [http://sideeffects.embl.de/drugs/2756/](http://sideeffects.embl.de/drugs/2756/)
Provide a detailed description of the following dataset: SIDER
RCV1
The **RCV1** dataset is a benchmark dataset for text categorization. It is a collection of newswire articles produced by Reuters in 1996-1997. It contains 804,414 manually labeled newswire documents, categorized with respect to three controlled vocabularies: industries, topics and regions.
Provide a detailed description of the following dataset: RCV1
CrossTask
**CrossTask** dataset contains instructional videos, collected for 83 different tasks. For each task an ordered list of steps with manual descriptions is provided. The dataset is divided in two parts: 18 primary and 65 related tasks. Videos for the primary tasks are collected manually and provided with annotations for temporal step boundaries. Videos for the related tasks are collected automatically and don't have annotations.
Provide a detailed description of the following dataset: CrossTask
YouCook2
**YouCook2** is the largest task-oriented, instructional video dataset in the vision community. It contains 2000 long untrimmed videos from 89 cooking recipes; on average, each distinct recipe has 22 videos. The procedure steps for each video are annotated with temporal boundaries and described by imperative English sentences. The videos were downloaded from YouTube and are all in the third-person viewpoint. All the videos are unconstrained and can be performed by individual persons at their houses with unfixed cameras. YouCook2 contains rich recipe types and various cooking styles from all over the world.
Provide a detailed description of the following dataset: YouCook2
FaceForensics
FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from YouTube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:
* Source-to-Target: the authors reenact over 1000 videos with new facial expressions extracted from other videos, which can e.g. be used to train a classifier to detect fake images or videos.
* Self-reenactment: the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which can e.g. be used to train supervised generative refinement models.
Provide a detailed description of the following dataset: FaceForensics
Stacked MNIST
The **Stacked MNIST** dataset is derived from the standard MNIST dataset with an increased number of discrete modes. 240,000 RGB images in the size of 32×32 are synthesized by stacking three random digit images from MNIST along the color channel, resulting in 1,000 explicit modes in a uniform distribution corresponding to the number of possible triples of digits.
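A minimal sketch of the synthesis procedure described above, assuming `mnist` is an array of 28×28 grayscale digit images loaded elsewhere; placing each digit on a 32×32 canvas by zero-padding is one possible choice and may differ from the original construction:

```python
import numpy as np

def stack_mnist(mnist, num_samples, seed=0):
    """Stack three random MNIST digits along the RGB channels of a 32x32 image."""
    rng = np.random.default_rng(seed)
    out = np.zeros((num_samples, 32, 32, 3), dtype=np.uint8)
    for i in range(num_samples):
        idx = rng.integers(0, mnist.shape[0], size=3)  # three random digit indices
        for c in range(3):                             # one digit per color channel
            out[i, 2:30, 2:30, c] = mnist[idx[c]]      # pad the 28x28 digit to 32x32
    return out

# Full dataset construction would call stack_mnist(mnist, 240_000).
```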
Provide a detailed description of the following dataset: Stacked MNIST
CARPK
The Car Parking Lot Dataset (**CARPK**) contains nearly 90,000 cars from 4 different parking lots collected by means of a drone (Phantom 3 Professional). The images are collected from the drone view at approximately 40 meters height. The image set is annotated with a bounding box per car; all labeled bounding boxes are recorded with their top-left and bottom-right points. The dataset supports object counting, object localization, and further investigations with the annotations in bounding-box format.
Provide a detailed description of the following dataset: CARPK
Pix3D
The **Pix3D** dataset is a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc.
Provide a detailed description of the following dataset: Pix3D
Cell
The CELL benchmark is made of fluorescence microscopy images of cells.
Provide a detailed description of the following dataset: Cell
FBMS
The **Freiburg-Berkeley Motion Segmentation** Dataset (**FBMS**-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames are annotated. It has pixel-accurate segmentation annotations of moving objects. FBMS-59 comes with a split into a training set and a test set.
Provide a detailed description of the following dataset: FBMS
NVGesture
The **NVGesture** dataset focuses on touchless driver controlling. It contains 1532 dynamic gestures falling into 25 classes. It includes 1050 samples for training and 482 for testing. The videos are recorded with three modalities (RGB, depth, and infrared). Source: [Searching Multi-Rate and Multi-Modal Temporal Enhanced Networks for Gesture Recognition](https://arxiv.org/abs/2008.09412) Image Source: [Online Detection and Classification of Dynamic Hand Gestures With Recurrent 3D Convolutional Neural Network](https://paperswithcode.com/paper/online-detection-and-classification-of/)
Provide a detailed description of the following dataset: NVGesture
SUN09
The **SUN09** dataset consists of 12,000 annotated images with more than 200 object categories. It consists of natural, indoor and outdoor images. Each image contains an average of 7 different annotated objects and the average occupancy of each object is 5% of image size. The frequencies of object categories follow a power law distribution.
Provide a detailed description of the following dataset: SUN09
COIN
The **COIN** dataset (a large-scale dataset for COmprehensive INstructional video analysis) consists of 11,827 videos related to 180 different tasks in 12 domains (e.g., vehicles, gadgets, etc.) related to our daily life. The videos are all collected from YouTube. The average length of a video is 2.36 minutes. Each video is labelled with 3.91 step segments, where each segment lasts 14.91 seconds on average. In total, the dataset contains videos of 476 hours, with 46,354 annotated segments.
Provide a detailed description of the following dataset: COIN
Kinetics-600
The **Kinetics-600** is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, 60K for training, validation and test sets, respectively. Each video in the dataset is a 10-second clip of an action moment annotated from a raw YouTube video. It is an extension of the Kinetics-400 dataset.
Provide a detailed description of the following dataset: Kinetics-600
AudioSet
AudioSet is an audio event dataset consisting of over 2M human-annotated 10-second video clips. The clips are collected from YouTube, and many of them are therefore of poor quality and contain multiple sound sources. A hierarchical ontology of 632 event classes is employed to annotate these data, which means the same sound can be annotated with different labels. For example, the sound of barking is annotated as Animal, Pets, and Dog. All the videos are split into Evaluation/Balanced-Train/Unbalanced-Train sets.
Provide a detailed description of the following dataset: AudioSet
DIVA-HisDB
The database consists of 150 annotated pages of three different medieval manuscripts with challenging layouts. Furthermore, we provide a layout analysis ground-truth which has been iterated on, reviewed, and refined by an expert in medieval studies.
Provide a detailed description of the following dataset: DIVA-HisDB
VQA-RAD
VQA-RAD consists of 3,515 question–answer pairs on 315 radiology images.
Provide a detailed description of the following dataset: VQA-RAD
TDIUC
**Task Directed Image Understanding Challenge** (**TDIUC**) dataset is a Visual Question Answering dataset which consists of 1.6M questions and 170K images sourced from MS COCO and the Visual Genome Dataset. The image-question pairs are split into 12 categories, and 4 additional evaluation metrics help evaluate models' robustness against answer imbalance and their ability to answer questions that require higher reasoning capability. The TDIUC dataset divides the VQA paradigm into 12 different task-directed question types. These include questions that require a simpler task (e.g., object presence, color attribute) and more complex tasks (e.g., counting, positional reasoning). The dataset also includes an “Absurd” question category, in which questions are irrelevant to the image contents, to help balance the dataset.
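A minimal sketch of per-question-type scoring in the spirit of the metrics described above; `results` is a toy list of (question_type, is_correct) pairs, and the arithmetic/harmonic means over per-type accuracies are aggregate scores commonly reported for this dataset (not necessarily the exact official definitions):

```python
from collections import defaultdict
from statistics import mean, harmonic_mean

def per_type_accuracy(results):
    """Accuracy computed separately for each question type."""
    buckets = defaultdict(list)
    for qtype, correct in results:
        buckets[qtype].append(1.0 if correct else 0.0)
    return {qtype: mean(scores) for qtype, scores in buckets.items()}

results = [("counting", True), ("counting", False), ("color", True), ("absurd", True)]
per_type = per_type_accuracy(results)
print(per_type)
print(mean(per_type.values()), harmonic_mean(per_type.values()))
```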
Provide a detailed description of the following dataset: TDIUC
Mall
**Mall** is a dataset for crowd counting and profiling research. Its images were collected from a publicly accessible webcam. It includes 2,000 video frames, and the head position of every pedestrian in all frames is annotated. In total, more than 60,000 pedestrians are annotated in this dataset.
Provide a detailed description of the following dataset: Mall
A3D
**A3D** is a dataset of diverse traffic accidents captured in ego-centric (dashcam) driving videos.
Provide a detailed description of the following dataset: A3D
FRGC
The data for **FRGC** consists of 50,000 recordings divided into training and validation partitions. The training partition is designed for training algorithms and the validation partition is for assessing performance of an approach in a laboratory setting. The validation partition consists of data from 4,003 subject sessions. A subject session is the set of all images of a person taken each time a person's biometric data is collected and consists of four controlled still images, two uncontrolled still images, and one three-dimensional image. The controlled images were taken in a studio setting, are full frontal facial images taken under two lighting conditions and with two facial expressions (smiling and neutral). The uncontrolled images were taken in varying illumination conditions; e.g., hallways, atriums, or outside. Each set of uncontrolled images contains two expressions, smiling and neutral. The 3D image was taken under controlled illumination conditions. The 3D images consist of both a range and a texture image. The 3D images were acquired by a Minolta Vivid 900/910 series sensor.
Provide a detailed description of the following dataset: FRGC
HAR
The Human Activity Recognition Dataset has been collected from 30 subjects performing six different activities (Walking, Walking Upstairs, Walking Downstairs, Sitting, Standing, Laying). It consists of inertial sensor data that was collected using a smartphone carried by the subjects.
Provide a detailed description of the following dataset: HAR
MOT15
MOT2015 is a dataset for multiple object tracking. It contains 11 different indoor and outdoor scenes of public places with pedestrians as the objects of interest, where camera motion, camera angle and imaging condition vary greatly. The dataset provides detections generated by the ACF-based detector.
Provide a detailed description of the following dataset: MOT15
CASIA-MFSD
**CASIA-MFSD** is a dataset for face anti-spoofing. It contains 50 subjects, and 12 videos for each subject under different resolutions and light conditions. Three different spoof attacks are designed: replay, warp print and cut print attacks. The database contains 600 video recordings, in which 240 videos of 20 subjects are used for training and 360 videos of 30 subjects for testing.
Provide a detailed description of the following dataset: CASIA-MFSD
Replay-Attack
The **Replay-Attack** Database for face spoofing consists of 1300 video clips of photo and video attack attempts on 50 clients, under different lighting conditions. All videos were generated by either having a (real) client try to access a laptop through a built-in webcam or by displaying a photo or a video recording of the same client for at least 9 seconds.
Provide a detailed description of the following dataset: Replay-Attack
Delicious
**Delicious**: This dataset contains tagged web pages retrieved from the website delicious.com. Source: [Text segmentation on multilabel documents: A distant-supervised approach](https://arxiv.org/abs/1904.06730) Image Source: [http://mlkd.csd.auth.gr/multilabel.html](http://mlkd.csd.auth.gr/multilabel.html)
Provide a detailed description of the following dataset: Delicious
WeChat
The **WeChat** dataset for fake news detection contains more than 20k news items labelled as fake or real.
Provide a detailed description of the following dataset: WeChat
KDD12
A click-through prediction dataset; for more information please see the [Kaggle page](https://www.kaggle.com/c/kddcup2012-track2).
Provide a detailed description of the following dataset: KDD12
RAF-DB
The **Real-world Affective Faces** Database (**RAF-DB**) is a dataset for facial expression recognition. It contains 29,672 facial images tagged with basic or compound expressions by 40 independent taggers. Images in this database show great variability in subjects' age, gender and ethnicity, head poses, lighting conditions, occlusions (e.g. glasses, facial hair or self-occlusion), post-processing operations (e.g. various filters and special effects), etc.
Provide a detailed description of the following dataset: RAF-DB
FERG
**FERG** is a database of cartoon characters with annotated facial expressions containing 55,769 annotated face images of six characters. The images for each character are grouped into 7 types of cardinal expressions, viz. anger, disgust, fear, joy, neutral, sadness and surprise. Source: [VGAN-Based Image Representation Learning for Privacy-Preserving Facial Expression Recognition](https://arxiv.org/abs/1803.07100) Image Source: [http://grail.cs.washington.edu/projects/deepexpr/ferg-2d-db.html](http://grail.cs.washington.edu/projects/deepexpr/ferg-2d-db.html)
Provide a detailed description of the following dataset: FERG
COCO-Text
The **COCO-Text** dataset is a dataset for text detection and recognition. It is based on the MS COCO dataset, which contains images of complex everyday scenes. The COCO-Text dataset contains non-text images, legible text images and illegible text images. In total there are 22184 training images and 7026 validation images with at least one instance of legible text.
Provide a detailed description of the following dataset: COCO-Text
DiscoFuse
DiscoFuse was created by applying a rule-based splitting method on two corpora - sports articles crawled from the Web, and Wikipedia. See the paper for a detailed description of the dataset generation process and evaluation. DiscoFuse has two parts with 44,177,443 and 16,642,323 examples sourced from Sports articles and Wikipedia, respectively. For each part, a random split is provided to train (98% of the examples), development (1%) and test (1%) sets. In addition, as the original data distribution is highly skewed (see details in the paper), a balanced version for each part is also provided.
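A minimal sketch of the 98/1/1 proportions described above, applied to an already-loaded list of examples; the released DiscoFuse splits are fixed files, so this is only an illustration of the ratio:

```python
import random

def split_98_1_1(examples, seed=0):
    """Shuffle and split a list of examples into ~98% train, 1% dev, 1% test."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    n_eval = max(1, len(shuffled) // 100)
    dev, test = shuffled[:n_eval], shuffled[n_eval:2 * n_eval]
    train = shuffled[2 * n_eval:]
    return train, dev, test

train, dev, test = split_98_1_1(list(range(1000)))
print(len(train), len(dev), len(test))  # 980 10 10
```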
Provide a detailed description of the following dataset: DiscoFuse
FIGER
The **FIGER** dataset is an entity recognition dataset where entities are labelled using a fine-grained system of 112 tags, such as *person/doctor*, *art/written_work* and *building/hotel*. The tags are derived from Freebase types. The training set consists of Wikipedia articles automatically annotated with a distant supervision approach that exploits the information encoded in anchor links. The test set was annotated manually.
Provide a detailed description of the following dataset: FIGER
CUHK-SYSU
The CUHK-SYSU dataset is a large-scale benchmark for person search, containing 18,184 images and 8,432 identities. Different from previous re-id benchmarks, which match query persons against manually cropped pedestrians, this dataset is much closer to real application scenarios, as persons must be searched for in whole gallery images.
Provide a detailed description of the following dataset: CUHK-SYSU
Chairs
The **Chairs** dataset contains rendered images of around 1000 different three-dimensional chair models.
Provide a detailed description of the following dataset: Chairs
ZINC
**ZINC** is a free database of commercially-available compounds for virtual screening. ZINC contains over 230 million purchasable compounds in ready-to-dock, 3D formats. ZINC also contains over 750 million purchasable compounds that can be searched for analogs.
Provide a detailed description of the following dataset: ZINC
QED
**QED** is a linguistically principled framework for explanations in question answering. Given a question and a passage, QED represents an explanation of the answer as a combination of discrete, human-interpretable steps:
- sentence selection := identification of a sentence implying an answer to the question;
- referential equality := identification of noun phrases in the question and the answer sentence that refer to the same thing;
- predicate entailment := confirmation that the predicate in the sentence entails the predicate in the question once referential equalities are abstracted away.

The QED dataset is an expert-annotated dataset of QED explanations built upon a subset of the Google Natural Questions dataset. Source: [https://github.com/google-research-datasets/QED](https://github.com/google-research-datasets/QED) Image Source: [https://github.com/google-research-datasets/QED](https://github.com/google-research-datasets/QED)
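An illustrative (not official) data structure mirroring the three explanation steps listed above; the released annotations use their own JSON schema, so the field names here are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class QEDExplanation:
    answer_sentence: str                                  # sentence selection
    # pairs of (question noun phrase, answer-sentence noun phrase) judged co-referent
    referential_equalities: List[Tuple[str, str]] = field(default_factory=list)
    predicate_entailment: bool = False                    # entailment once references are abstracted

example = QEDExplanation(
    answer_sentence="The Eiffel Tower was completed in 1889.",
    referential_equalities=[("the tower", "The Eiffel Tower")],
    predicate_entailment=True,
)
print(example)
```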
Provide a detailed description of the following dataset: QED
MEF
Multi-exposure image fusion (MEF) is considered an effective quality enhancement technique widely adopted in consumer electronics, but little work has been dedicated to the perceptual quality assessment of multi-exposure fused images. In this paper, we first build an MEF database and carry out a subjective user study to evaluate the quality of images generated by different MEF algorithms. There are several useful findings. First, considerable agreement has been observed among human subjects on the quality of MEF images. Second, no single state-of-the-art MEF algorithm produces the best quality for all test images. Third, the existing objective quality models for general image fusion are very limited in predicting perceived quality of MEF images. Motivated by the lack of appropriate objective models, we propose a novel objective image quality assessment (IQA) algorithm for MEF images based on the principle of the structural similarity approach and a novel measure of patch structural consistency. Our experimental results on the subjective database show that the proposed model well correlates with subjective judgments and significantly outperforms the existing IQA models for general image fusion. Finally, we demonstrate the potential application of the proposed model by automatically tuning the parameters of MEF algorithms
Provide a detailed description of the following dataset: MEF
DICM
**DICM** is a dataset for low-light enhancement which consists of 69 images collected with commercial digital cameras. Source: [Deep Retinex Decomposition for Low-Light Enhancement](https://arxiv.org/abs/1808.04560) Image Source: [GLADNet: Low-Light Enhancement Network with Global Awareness](https://ieeexplore.ieee.org/document/8373911)
Provide a detailed description of the following dataset: DICM
GuessWhat?!
**GuessWhat?!** is a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. GuessWhat?! is a cooperative two-player game in which both players see the picture of a rich visual scene with several objects. One player – the oracle – is randomly assigned an object (which could be a person) in the scene. This object is not known by the other player – the questioner – whose goal it is to locate the hidden object. To do so, the questioner can ask a series of yes-no questions which are answered by the oracle.
Provide a detailed description of the following dataset: GuessWhat?!
ObjectNet
**ObjectNet** is a test set of images collected directly using crowd-sourcing. ObjectNet is unique in that the objects are captured at unusual poses in cluttered, natural scenes, which can severely degrade recognition performance. There are 50,000 images in the test set, which controls for rotation, background and viewpoint. There are 313 object classes, 113 of which overlap with ImageNet.
Provide a detailed description of the following dataset: ObjectNet