| dataset_name | description | prompt |
|---|---|---|
DCASE 2017 | The **DCASE 2017** rare sound events dataset contains isolated sound events for three classes: 148 crying babies (mean duration 2.25s), 139 glasses breaking (mean duration 1.16s), and 187 gun shots (mean duration 1.32s). As with the DCASE 2016 data, silences are not excluded from active event markings in the annotations. While this dataset contains many samples per class, there are only three classes.
Source: [The NIGENS General Sound Events Database](https://arxiv.org/abs/1902.08314)
Image Source: [https://arxiv.org/pdf/1911.06878.pdf](https://arxiv.org/pdf/1911.06878.pdf) | Provide a detailed description of the following dataset: DCASE 2017 |
PANDORA | PANDORA is the first large-scale dataset of Reddit comments labeled with three personality models (including the well-established Big 5 model) and demographics (age, gender, and location) for more than 10k users. | Provide a detailed description of the following dataset: PANDORA |
AVA | **AVA** is a project that provides audiovisual annotations of video for improving our understanding of human activity. Each of the video clips has been exhaustively annotated by human annotators, and together they represent a rich variety of scenes, recording conditions, and expressions of human activity. There are annotations for:
- Kinetics (AVA-Kinetics) - a crossover between AVA and Kinetics. In order to provide localized action labels on a wider variety of visual scenes, authors provide AVA action labels on videos from Kinetics-700, nearly doubling the number of total annotations, and increasing the number of unique videos by over 500x.
- Actions (AvA Actions) - the AVA dataset densely annotates 80 atomic visual actions in 430 15-minute movie clips, where actions are localized in space and time, resulting in 1.62M action labels with multiple labels per human occurring frequently.
- Spoken Activity (AVA ActiveSpeaker, AVA Speech). AVA ActiveSpeaker: associates speaking activity with a visible face, on the AVA v1.0 videos, resulting in 3.65 million frames labeled across ~39K face tracks. AVA Speech densely annotates audio-based speech activity in AVA v1.0 videos, and explicitly labels 3 background noise conditions, resulting in ~46K labeled segments spanning 45 hours of data.
Image Source: [https://www.researchgate.net/profile/Paolo_Napoletano/publication/309327222/figure/fig1/AS:419620126248965@1477056642346/Sample-images-from-the-Aesthetic-Visual-Analysis-AVA-database-sorted-by-their-aesthetic.png](https://www.researchgate.net/profile/Paolo_Napoletano/publication/309327222/figure/fig1/AS:419620126248965@1477056642346/Sample-images-from-the-Aesthetic-Visual-Analysis-AVA-database-sorted-by-their-aesthetic.png) | Provide a detailed description of the following dataset: AVA |
EPIC-KITCHENS-55 | The EPIC-KITCHENS-55 dataset comprises a set of 432 egocentric videos recorded by 32 participants in their kitchens at 60fps with a head mounted camera. There is no guiding script for the participants, who freely perform activities in their kitchens related to cooking, food preparation or washing up, among others. Each video is split into short action segments (mean duration is 3.7s) with specific start and end times and a verb and noun annotation describing the action (e.g. ‘open fridge‘). There are 125 verb classes and 331 noun classes. The dataset is divided into one train and two test splits. | Provide a detailed description of the following dataset: EPIC-KITCHENS-55 |
Charades | The **Charades** dataset is composed of 9,848 videos of daily indoor activities with an average length of 30 seconds, involving interactions with 46 object classes in 15 types of indoor scenes and containing a vocabulary of 30 verbs leading to 157 action classes. Each video in this dataset is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacting objects. 267 different users were presented with a sentence, which includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence. In total, the dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. In the standard split there are 7,986 training videos and 1,863 validation videos. | Provide a detailed description of the following dataset: Charades |
OTB-2015 | **OTB-2015**, also referred to as the Visual Tracker Benchmark, is a visual tracking dataset. It contains 100 commonly used video sequences for evaluating visual tracking.
Image Source: [http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html](http://cvlab.hanyang.ac.kr/tracker_benchmark/datasets.html) | Provide a detailed description of the following dataset: OTB-2015 |
OTB-2013 | OTB2013 is the previous version of the current OTB2015 Visual Tracker Benchmark. It contains only 50 tracking sequences, as opposed to the 100 sequences in the current version of the benchmark. | Provide a detailed description of the following dataset: OTB-2013 |
LaSOT | LaSOT is a high-quality benchmark for Large-scale Single Object Tracking. LaSOT consists of 1,400 sequences with more than 3.5M frames in total. Each frame in these sequences is carefully and manually annotated with a bounding box, making LaSOT one of the largest densely annotated tracking benchmarks. The average video length of LaSOT is more than 2,500 frames, and each sequence comprises various challenges deriving from the wild, where target objects may disappear and re-appear in the view. | Provide a detailed description of the following dataset: LaSOT |
TrackingNet | **TrackingNet** is a large-scale tracking dataset consisting of videos in the wild. It has a total of 30,643 videos split into 30,132 training videos and 511 testing videos, with an average of 470.9 frames per video. | Provide a detailed description of the following dataset: TrackingNet |
VOT2018 | **VOT2018** is a dataset for visual object tracking. It consists of 60 challenging videos collected from real-life datasets. | Provide a detailed description of the following dataset: VOT2018 |
VOT2017 | **VOT2017** is a Visual Object Tracking dataset for different tasks that contains 60 short sequences annotated with 6 different attributes. | Provide a detailed description of the following dataset: VOT2017 |
AG News | **AG News** (**AG’s News Corpus**) is a subdataset of AG's corpus of news articles constructed by assembling titles and description fields of articles from the 4 largest classes (“World”, “Sports”, “Business”, “Sci/Tech”) of AG’s Corpus. The AG News contains 30,000 training and 1,900 test samples per class. | Provide a detailed description of the following dataset: AG News |
DBpedia | **DBpedia** (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia project. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets. | Provide a detailed description of the following dataset: DBpedia |
CMU-MOSI | The Multimodal Corpus of Sentiment Intensity (CMU-MOSI) dataset is a collection of 2199 opinion video clips. Each opinion video is annotated with sentiment in the range [-3,3]. The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. | Provide a detailed description of the following dataset: CMU-MOSI |
SST | The **Stanford Sentiment Treebank** is a corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005) and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.
Each phrase is labelled as either *negative*, *somewhat negative*, *neutral*, *somewhat positive* or *positive*.
The corpus with all 5 labels is referred to as SST-5 or SST fine-grained. Binary classification experiments on full sentences (*negative* or *somewhat negative* vs *somewhat positive* or *positive*, with *neutral* sentences discarded) refer to the dataset as SST-2 or SST binary. | Provide a detailed description of the following dataset: SST |
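As a minimal, hypothetical sketch of the SST-5 → SST-2 relabeling described above, the snippet below assumes the common 0–4 integer encoding of the fine-grained labels (0 = negative … 4 = positive); that encoding is an assumption, not part of the official release.

```python
# Hypothetical helper: derive SST-2 binary labels from SST-5 fine-grained labels,
# assuming labels are encoded as 0 = negative, 1 = somewhat negative,
# 2 = neutral, 3 = somewhat positive, 4 = positive.
def sst5_to_sst2(fine_label: int):
    """Map an SST-5 label to an SST-2 label; neutral sentences are discarded."""
    if fine_label in (0, 1):   # negative / somewhat negative
        return 0               # SST-2 negative
    if fine_label in (3, 4):   # somewhat positive / positive
        return 1               # SST-2 positive
    return None                # neutral -> excluded from SST-2

examples = [("a gorgeous film", 4), ("merely watchable", 2), ("a dull mess", 0)]
binary = [(t, sst5_to_sst2(y)) for t, y in examples if sst5_to_sst2(y) is not None]
print(binary)  # [('a gorgeous film', 1), ('a dull mess', 0)]
```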
SUBJ | **SUBJ** is a collection of movie-review documents labeled with respect to their overall sentiment polarity (positive or negative) or subjective rating (e.g., "two and a half stars"), together with sentences labeled with respect to their subjectivity status (subjective or objective) or polarity. | Provide a detailed description of the following dataset: SUBJ |
BDD100K | Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page. | Provide a detailed description of the following dataset: BDD100K |
GTA5 | The **GTA5** dataset contains 24966 synthetic images with pixel level semantic annotation. The images have been rendered using the open-world video game **Grand Theft Auto 5** and are all from the car perspective in the streets of American-style virtual cities. There are 19 semantic classes which are compatible with the ones of Cityscapes dataset. | Provide a detailed description of the following dataset: GTA5 |
MovieLens | The **MovieLens** datasets, first released in 1998, describe people’s expressed preferences for movies. These preferences take the form of tuples, each the result of a person expressing a preference (a 0-5 star rating) for a movie at a particular time. These preferences were entered by way of the MovieLens web site — a recommender system that asks its users to give movie ratings in order to receive personalized movie recommendations. | Provide a detailed description of the following dataset: MovieLens |
Middlebury | The **Middlebury** Stereo dataset consists of high-resolution stereo sequences with complex geometry and pixel-accurate ground-truth disparity data. The ground-truth disparities are acquired using a novel technique that employs structured lighting and does not require the calibration of the light projectors. | Provide a detailed description of the following dataset: Middlebury |
V-COCO | **Verbs in COCO** (**V-COCO**) is a dataset that builds off COCO for human-object interaction detection. V-COCO provides 10,346 images (2,533 for training, 2,867 for validating and 4,946 for testing) and 16,199 person instances. Each person has annotations for 29 action categories and there are no interaction labels including objects. | Provide a detailed description of the following dataset: V-COCO |
HICO-DET | **HICO-DET** is a dataset for detecting human-object interactions (HOI) in images. It contains 47,776 images (38,118 in train set and 9,658 in test set), 600 HOI categories constructed by 80 object categories and 117 verb classes. HICO-DET provides more than 150k annotated human-object pairs. | Provide a detailed description of the following dataset: HICO-DET |
UTD-MHAD | The **UTD-MHAD** dataset consists of 27 different actions performed by 8 subjects. Each subject repeated each action 4 times, resulting in 861 action sequences in total. The RGB, depth, skeleton and inertial sensor signals were recorded. | Provide a detailed description of the following dataset: UTD-MHAD |
MPII | The **MPII** Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (whose labels are withheld by the authors). The images are taken from YouTube videos covering 410 different human activities and the poses are manually annotated with up to 16 body joints. | Provide a detailed description of the following dataset: MPII |
Kinetics | The **Kinetics** dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube. | Provide a detailed description of the following dataset: Kinetics |
MSRC-12 | The Microsoft Research Cambridge-12 Kinect gesture data set consists of sequences of human movements, represented as body-part locations, and the associated gesture to be recognized by the system. The data set includes 594 sequences and 719,359 frames—approximately six hours and 40 minutes—collected from 30 people performing 12 gestures. In total, there are 6,244 gesture instances. The motion files contain tracks of 20 joints estimated using the Kinect Pose Estimation pipeline. The body poses are captured at a sample rate of 30Hz with an accuracy of about two centimeters in joint positions. | Provide a detailed description of the following dataset: MSRC-12 |
TIMIT | The **TIMIT** Acoustic-Phonetic Continuous Speech Corpus is a standard dataset used for evaluation of automatic speech recognition systems. It consists of recordings of 630 speakers of 8 dialects of American English each reading 10 phonetically-rich sentences. It also comes with the word and phone-level transcriptions of the speech. | Provide a detailed description of the following dataset: TIMIT |
Volleyball | **Volleyball** is a video action recognition dataset. It has 4830 annotated frames that were handpicked from 55 videos with 9 player action labels and 8 team activity labels. It contains group activity annotations as well as individual activity annotations. | Provide a detailed description of the following dataset: Volleyball |
Collective Activity | The **Collective Activity** Dataset contains 5 different collective activities (crossing, walking, waiting, talking, and queueing) across 44 short video sequences, some of which were recorded by a consumer hand-held digital camera with varying viewpoints. | Provide a detailed description of the following dataset: Collective Activity |
MOT16 | The **MOT16** dataset is a dataset for multiple object tracking. It is a collection of existing and new data, containing 14 challenging real-world videos of both static and moving camera scenes, 7 for training and 7 for testing. It is a large-scale dataset, composed of a total of 110,407 bounding boxes in the training set and 182,326 bounding boxes in the test set. All video sequences are annotated under strict standards, and their ground truths are highly accurate, making the evaluation meaningful. | Provide a detailed description of the following dataset: MOT16 |
NUS-WIDE | The **NUS-WIDE** dataset contains 269,648 images with a total of 5,018 tags collected from Flickr. These images are manually annotated with 81 concepts, including objects and scenes. | Provide a detailed description of the following dataset: NUS-WIDE |
PASCAL VOC 2007 | **PASCAL VOC 2007** is a dataset for image recognition. The twenty object classes that have been selected are:
* Person: person
* Animal: bird, cat, cow, dog, horse, sheep
* Vehicle: aeroplane, bicycle, boat, bus, car, motorbike, train
* Indoor: bottle, chair, dining table, potted plant, sofa, tv/monitor
The dataset can be used for image classification and object detection tasks.
Image Source: [Object Detection and Recognition in Images](https://arxiv.org/abs/1708.01241) | Provide a detailed description of the following dataset: PASCAL VOC 2007 |
Wiki | ### Context
There's a story behind every dataset and here's your opportunity to share yours.
### Content
What's inside is more than just rows and columns. Make it easy for others to get started by describing how you acquired the data and what time period it represents, too.
### Acknowledgements
We wouldn't be here without the help of others. If you owe any attributions or thanks, include them here along with any citations of past research.
### Inspiration
Your data will be in front of the world's largest data science community. What questions do you want to see answered? | Provide a detailed description of the following dataset: Wiki |
CelebA-HQ | The **CelebA-HQ** dataset is a high-quality version of CelebA that consists of 30,000 images at 1024×1024 resolution. | Provide a detailed description of the following dataset: CelebA-HQ |
GTEA | The Georgia Tech Egocentric Activities (**GTEA**) dataset contains seven types of daily activities such as making a sandwich, tea, or coffee. Each activity is performed by four different people, for a total of 28 videos. Each video is approximately one minute long and contains about 20 fine-grained action instances such as "take bread" or "pour ketchup". | Provide a detailed description of the following dataset: GTEA |
50 Salads | Activity recognition research has shifted focus from distinguishing full-body motion patterns to recognizing complex interactions of multiple entities. Manipulative gestures – characterized by interactions between hands, tools, and manipulable objects – frequently occur in food preparation, manufacturing, and assembly tasks, and have a variety of applications including situational support, automated supervision, and skill assessment. With the aim to stimulate research on recognizing manipulative gestures we introduce the 50 Salads dataset. It captures 25 people preparing 2 mixed salads each and contains over 4h of annotated accelerometer and RGB-D video data. Including detailed annotations, multiple sensor types, and two sequences per participant, the 50 Salads dataset may be used for research in areas such as activity recognition, activity spotting, sequence analysis, progress tracking, sensor fusion, transfer learning, and user-adaptation.
The dataset includes:
* RGB video data, 640×480 pixels at 30 Hz
* Depth maps, 640×480 pixels at 30 Hz
* 3-axis accelerometer data at 50 Hz from devices attached to a knife, a mixing spoon, a small spoon, a peeler, a glass, an oil bottle, and a pepper dispenser
* Synchronization parameters for temporal alignment of video and accelerometer data
* Annotations as temporal intervals of pre-, core- and post-phases of activities corresponding to steps in a recipe | Provide a detailed description of the following dataset: 50 Salads |
DIV2K | **DIV2K** is a popular single-image super-resolution dataset which contains 1,000 images with different scenes and is split into 800 for training, 100 for validation and 100 for testing. It was collected for the NTIRE2017 and NTIRE2018 Super-Resolution Challenges in order to encourage research on image super-resolution with more realistic degradation. This dataset contains low resolution images with different types of degradations. Apart from the standard bicubic downsampling, several types of degradations are considered in synthesizing low resolution images for different tracks of the challenges. Track 2 of NTIRE 2017 contains low resolution images with unknown x4 downscaling. Track 2 and track 4 of NTIRE 2018 correspond to realistic mild ×4 and realistic wild ×4 adverse conditions, respectively. Low-resolution images under the realistic mild x4 setting suffer from motion blur, Poisson noise and pixel shifting. Degradations under the realistic wild x4 setting are further extended to be of different levels from image to image. | Provide a detailed description of the following dataset: DIV2K |
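As a minimal sketch of the standard bicubic ×4 track, the snippet below downsamples a high-resolution image with Pillow; the file names are placeholders, not official DIV2K paths.

```python
# Minimal sketch of bicubic x4 downscaling, the standard way low-resolution
# DIV2K images are synthesized from high-resolution ones.
# "0001_hr.png" and "0001_lr_x4.png" are placeholder paths.
from PIL import Image

scale = 4
hr = Image.open("0001_hr.png")
lr = hr.resize((hr.width // scale, hr.height // scale), resample=Image.BICUBIC)
lr.save("0001_lr_x4.png")
```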
MAESTRO | The **MAESTRO** dataset contains over 200 hours of paired audio and MIDI recordings from ten years of International Piano-e-Competition. The MIDI data includes key strike velocities and sustain/sostenuto/una corda pedal positions. Audio and MIDI files are aligned with ∼3 ms accuracy and sliced to individual musical pieces, which are annotated with composer, title, and year of performance. Uncompressed audio is of CD quality or higher (44.1–48 kHz 16-bit PCM stereo). | Provide a detailed description of the following dataset: MAESTRO |
CASIA-B | **CASIA-B** is a large multiview gait database created in January 2005. It contains 124 subjects, and the gait data was captured from 11 views. Three variations, namely view angle, clothing and carrying condition changes, are considered separately. Besides the video files, human silhouettes extracted from the video files are also provided. Detailed information about Dataset B and an evaluation framework can be found in the accompanying paper.
The format of the video filename in Dataset B is 'xxx-mm-nn-ttt.avi', where
* xxx: subject id, from 001 to 124.
* mm: walking status, can be 'nm' (normal), 'cl' (in a coat) or 'bg' (with a bag).
* nn: sequence number.
* ttt: view angle, can be '000', '018', ..., '180'. | Provide a detailed description of the following dataset: CASIA-B |
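A small illustrative parser for this naming convention is sketched below; the example filename is made up, and the two-digit sequence-number width is an assumption rather than something stated above.

```python
# Illustrative parser for the 'xxx-mm-nn-ttt.avi' naming convention.
# Assumes a two-digit sequence number; the example filename is made up.
import re

PATTERN = re.compile(r"^(?P<subject>\d{3})-(?P<status>nm|cl|bg)-(?P<seq>\d{2})-(?P<view>\d{3})\.avi$")

def parse_casia_b_filename(filename: str) -> dict:
    match = PATTERN.match(filename)
    if match is None:
        raise ValueError(f"unexpected CASIA-B filename: {filename}")
    return {
        "subject_id": match.group("subject"),      # '001' .. '124'
        "walking_status": match.group("status"),   # 'nm', 'cl' or 'bg'
        "sequence_number": int(match.group("seq")),
        "view_angle": int(match.group("view")),    # 0, 18, ..., 180
    }

print(parse_casia_b_filename("001-nm-01-090.avi"))
```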
AFLW | The **Annotated Facial Landmarks in the Wild** (**AFLW**) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image. | Provide a detailed description of the following dataset: AFLW |
BIWI | The dataset contains over 15K images of 20 people (6 females and 14 males; 4 people were recorded twice). For each frame, a depth image, the corresponding RGB image (both 640x480 pixels), and the annotation are provided. The head pose range covers about ±75 degrees yaw and ±60 degrees pitch. Ground truth is provided in the form of the 3D location of the head and its rotation. | Provide a detailed description of the following dataset: BIWI |
STB | **STB** is a 3D hand pose dataset created using a stereo camera.
- Contains 18,000 RGB images with paired depth images
- Provides 3D positions of 21 hand joints | Provide a detailed description of the following dataset: STB |
YCB-Video | The **YCB-Video** dataset is a large-scale video dataset for 6D object pose estimation. It provides accurate 6D poses of 21 objects from the YCB dataset observed in 92 videos with 133,827 frames. | Provide a detailed description of the following dataset: YCB-Video |
ApolloCar3D | **ApolloCar3D** is a dataset that contains 5,277 driving images and over 60K car instances, where each car is fitted with an industry-grade 3D CAD model with absolute model size and semantically labelled keypoints. This dataset is over 20 times larger than PASCAL3D+ and KITTI, the current state-of-the-art. | Provide a detailed description of the following dataset: ApolloCar3D |
Darmstadt Noise Dataset | The **Darmstadt Noise Dataset** (**DND**) is a benchmark for denoising real photographs. It consists of 50 pairs of real noisy images and corresponding (nearly) noise-free reference images captured with consumer cameras: the noisy image of each pair is taken at a higher ISO setting, while the reference is derived from carefully processed low-ISO exposures, so that denoising methods can be evaluated on real rather than synthetic noise. | Provide a detailed description of the following dataset: Darmstadt Noise Dataset |
Multi-Ego | **Multi-Ego** is a multi-view egocentric video summarization dataset. It is recorded simultaneously by three cameras and covers a wide variety of real-life scenarios. The footage is annotated by multiple individuals under various summarization configurations, with a consensus analysis ensuring a reliable ground truth. | Provide a detailed description of the following dataset: Multi-Ego |
SumMe | The **SumMe** dataset is a video summarization dataset consisting of 25 videos, each annotated with at least 15 human summaries (390 in total). | Provide a detailed description of the following dataset: SumMe |
Reddit TIFU | The **Reddit TIFU** dataset is a newly collected Reddit dataset, where TIFU denotes the name of the /r/tifu subreddit.
There are 122,933 text-summary pairs in total. | Provide a detailed description of the following dataset: Reddit TIFU |
DAVIS 2016 | DAVIS16 is a dataset for video object segmentation which consists of 50 videos in total (30 videos for training and 20 for testing). Per-frame pixel-wise annotations are offered. | Provide a detailed description of the following dataset: DAVIS 2016 |
FBMS-59 | The **Freiburg-Berkeley Motion Segmentation** Dataset (**FBMS-59**) is a dataset for motion segmentation, which extends the BMS-26 dataset with 33 additional video sequences. A total of 720 frames is annotated. FBMS-59 comes with a split into a training set and a test set. Typical challenges appear in both sets.
Source: [https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html](https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html)
Image Source: [https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html](https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html) | Provide a detailed description of the following dataset: FBMS-59 |
CamVid | **CamVid** (**Cambridge-driving Labeled Video Database**) is a road/driving scene understanding database which was originally captured as five video sequences with a 960×720 resolution camera mounted on the dashboard of a car. Those sequences were sampled (four of them at 1 fps and one at 15 fps) adding up to 701 frames. Those stills were manually annotated with 32 classes: void, building, wall, tree, vegetation, fence, sidewalk, parking block, column/pole, traffic cone, bridge, sign, miscellaneous text, traffic light, sky, tunnel, archway, road, road shoulder, lane markings (driving), lane markings (non-driving), animal, pedestrian, child, cart luggage, bicyclist, motorcycle, car, SUV/pickup/truck, truck/bus, train, and other moving object. | Provide a detailed description of the following dataset: CamVid |
DAVIS 2017 | DAVIS17 is a dataset for video object segmentation. It contains a total of 150 videos: 60 for training, 30 for validation, and 60 for testing. | Provide a detailed description of the following dataset: DAVIS 2017 |
CCGbank | **CCGbank** is a translation of the Penn Treebank into a corpus of Combinatory Categorial Grammar derivations. It pairs syntactic derivations with sets of word-word dependencies which approximate the underlying predicate-argument structure.
The dataset contains 99.44% of the sentences in the Penn Treebank, for which it corrects a number of inconsistencies and errors in the original annotation. | Provide a detailed description of the following dataset: CCGbank |
SVHN | Street View House Numbers (**SVHN**) is a digit classification benchmark dataset that contains 600,000 32×32 RGB images of printed digits (from 0 to 9) cropped from pictures of house number plates. The cropped images are centered on the digit of interest, but nearby digits and other distractors are kept in the image. SVHN has three sets: a training set, a testing set, and an extra set of 530,000 less difficult images that can be used to help the training process. | Provide a detailed description of the following dataset: SVHN |
STL-10 | The **STL-10** is an image dataset derived from ImageNet and popularly used to evaluate algorithms of unsupervised feature learning or self-taught learning. Besides 100,000 unlabeled images, it contains 13,000 labeled images from 10 object classes (such as birds, cats, trucks), among which 5,000 images are partitioned for training while the remaining 8,000 images for testing. All the images are color images with 96×96 pixels in size. | Provide a detailed description of the following dataset: STL-10 |
CIFAR-10 | The **CIFAR-10** dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.
The criteria for deciding whether an image belongs to a class were as follows:
* The class name should be high on the list of likely answers to the question “What is in this picture?”
* The image should be photo-realistic. Labelers were instructed to reject line drawings.
* The image should contain only one prominent instance of the object to which the class refers.
* The object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler. | Provide a detailed description of the following dataset: CIFAR-10 |
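A small sketch, assuming torchvision is installed, that downloads CIFAR-10 and checks the per-class train/test counts stated above:

```python
# Load CIFAR-10 via torchvision and check the per-class split sizes.
from collections import Counter
from torchvision.datasets import CIFAR10

train = CIFAR10(root="./data", train=True, download=True)
test = CIFAR10(root="./data", train=False, download=True)

print(len(train), len(test))       # 50000 10000
print(Counter(train.targets)[0])   # 5000 training images for class 0 ("airplane")
print(Counter(test.targets)[0])    # 1000 test images for class 0
print(train.classes)               # ['airplane', 'automobile', 'bird', ...]
```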
Clothing1M | **Clothing1M** contains 1M clothing images in 14 classes. It is a dataset with noisy labels, since the data is collected from several online shopping websites and includes many mislabelled samples. This dataset also contains 50k, 14k, and 10k images with clean labels for training, validation, and testing, respectively. | Provide a detailed description of the following dataset: Clothing1M |
CIFAR-100 | The **CIFAR-100** dataset (Canadian Institute for Advanced Research, 100 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. There are 600 images per class. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs). There are 500 training images and 100 testing images per class.
The criteria for deciding whether an image belongs to a class were as follows:
* The class name should be high on the list of likely answers to the question “What is in this picture?”
* The image should be photo-realistic. Labelers were instructed to reject line drawings.
* The image should contain only one prominent instance of the object to which the class refers.
* The object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler. | Provide a detailed description of the following dataset: CIFAR-100 |
ADE20K | The **ADE20K** semantic segmentation dataset contains more than 20K scene-centric images exhaustively annotated with pixel-level object and object-part labels. There are 150 semantic categories in total, including stuff classes such as sky, road, and grass, and discrete objects such as person, car, and bed. | Provide a detailed description of the following dataset: ADE20K |
MPII Human Pose | **MPII Human Pose** Dataset is a dataset for human pose estimation. It consists of around 25k images extracted from online videos. Each image contains one or more people, with over 40k people annotated in total. Among the 40k samples, ∼28k samples are for training and the remainder are for testing. Overall the dataset covers 410 human activities and each image is provided with an activity label. Images were extracted from YouTube videos and are provided with preceding and following un-annotated frames. | Provide a detailed description of the following dataset: MPII Human Pose |
Human3.6M | The **Human3.6M** dataset is one of the largest motion capture datasets, which consists of 3.6 million human poses and corresponding images captured by a high-speed motion capture system. There are 4 high-resolution progressive scan cameras to acquire video data at 50 Hz. The dataset contains activities by 11 professional actors in 17 scenarios: discussion, smoking, taking photo, talking on the phone, etc., as well as provides accurate 3D joint positions and high-resolution videos. | Provide a detailed description of the following dataset: Human3.6M |
CIHP | The **Crowd Instance-level Human Parsing** (**CIHP**) dataset has 38,280 diverse human images. Each image in CIHP is labeled with pixel-wise annotations on 20 categories and instance-level identification. The dataset can be used for the human part segmentation task. | Provide a detailed description of the following dataset: CIHP |
MultiMNIST | The **MultiMNIST** dataset is generated from MNIST. The training and test sets are generated by overlaying a digit on top of another digit from the same set (training or test) but a different class. Each digit is shifted up to 4 pixels in each direction, resulting in a 36×36 image. Considering that a digit in a 28×28 image is bounded in a 20×20 box, two digits' bounding boxes on average have 80% overlap. For each digit in the MNIST dataset 1,000 MultiMNIST examples are generated, so the training set size is 60M and the test set size is 10M. | Provide a detailed description of the following dataset: MultiMNIST |
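A rough sketch of the overlay procedure described above is given below; the random arrays stand in for real MNIST digits, and compositing by pixel-wise maximum is an assumption rather than a documented detail.

```python
# Sketch of generating one MultiMNIST-style example: two 28x28 digits of
# different classes are each shifted by up to 4 pixels per direction on a
# 36x36 canvas and overlaid (here via pixel-wise maximum, an assumption).
import numpy as np

rng = np.random.default_rng(0)

def place_on_canvas(digit: np.ndarray) -> np.ndarray:
    canvas = np.zeros((36, 36), dtype=digit.dtype)
    dx, dy = rng.integers(0, 9, size=2)   # offset 0..8 = up to 4 px from center
    canvas[dy:dy + 28, dx:dx + 28] = digit
    return canvas

def make_multimnist(digit_a: np.ndarray, digit_b: np.ndarray) -> np.ndarray:
    return np.maximum(place_on_canvas(digit_a), place_on_canvas(digit_b))

a = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)  # stand-in for one digit
b = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)  # stand-in for a different class
print(make_multimnist(a, b).shape)  # (36, 36)
```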
iNaturalist | The iNaturalist 2017 dataset (iNat) contains 675,170 training and validation images from 5,089 natural fine-grained categories. Those categories belong to 13 super-categories including Plantae (Plant), Insecta (Insect), Aves (Bird), Mammalia (Mammal), and so on. The iNat dataset is highly imbalanced, with dramatically different numbers of images per category. For example, the largest super-category “Plantae (Plant)” has 196,613 images from 2,101 categories, whereas the smallest super-category “Protozoa” only has 381 images from 4 categories. | Provide a detailed description of the following dataset: iNaturalist |
ScanNet | **ScanNet** is an instance-level indoor RGB-D dataset that includes both 2D and 3D data. It is a collection of labeled voxels rather than points or objects. Up to now, ScanNet v2, the newest version of ScanNet, has collected 1513 annotated scans with an approximate 90% surface coverage. In the semantic segmentation task, this dataset is marked in 20 classes of annotated 3D voxelized objects. | Provide a detailed description of the following dataset: ScanNet |
SBD | The **Semantic Boundaries Dataset** (**SBD**) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images. This dataset has object instance boundaries with accurate figure/ground masks that are also labeled with one of 20 Pascal VOC classes. | Provide a detailed description of the following dataset: SBD |
ImageNet VID | ImageNet VID is a large-scale public dataset
for video object detection and contains more than 1M frames for training and
more than 100k frames for validation. | Provide a detailed description of the following dataset: ImageNet VID |
SK-LARGE | **SK-LARGE** is a benchmark dataset for object skeleton detection, built on the MS COCO dataset. It contains 1491 images, 746 for training and 745 for testing.
Source: [DeepFlux for Skeletons in the Wild](https://arxiv.org/abs/1811.12608)
Image Source: [http://kaizhao.net/sk-large](http://kaizhao.net/sk-large) | Provide a detailed description of the following dataset: SK-LARGE |
Indian Pines | **Indian Pines** is a hyperspectral image segmentation dataset. The input data consists of hyperspectral bands over a single landscape in Indiana, US (the Indian Pines scene), with 145×145 pixels. For each pixel, the dataset contains 220 spectral reflectance bands which represent different portions of the electromagnetic spectrum in the wavelength range 0.4–2.5·10⁻⁶ meters. | Provide a detailed description of the following dataset: Indian Pines |
Pavia University | The **Pavia University** dataset is a hyperspectral image dataset gathered by a sensor known as the Reflective Optics System Imaging Spectrometer (ROSIS-3) over the city of Pavia, Italy. The image consists of 610×340 pixels with 115 spectral bands. The image is divided into 9 classes with a total of 42,776 labelled samples, including asphalt, meadows, gravel, trees, metal sheet, bare soil, bitumen, brick, and shadow.
Source: [Diversity in Machine Learning](https://arxiv.org/abs/1807.01477)
Image Source: [http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Pavia_Centre_and_University](http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes#Pavia_Centre_and_University) | Provide a detailed description of the following dataset: Pavia University |
RVL-CDIP | The **RVL-CDIP** dataset consists of scanned document images belonging to 16 classes such as letter, form, email, resume, memo, etc. The dataset has 320,000 training, 40,000 validation and 40,000 test images. The images are characterized by low quality, noise, and low resolution, typically 100 dpi. | Provide a detailed description of the following dataset: RVL-CDIP |
COCO Captions | COCO Captions contains over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human-generated captions are provided for each image. | Provide a detailed description of the following dataset: COCO Captions |
RotoWire | This dataset consists of (human-written) NBA basketball game summaries aligned with their corresponding box- and line-scores. Summaries taken from rotowire.com are referred to as the "rotowire" data. There are 4853 distinct rotowire summaries, covering NBA games played between 1/1/2014 and 3/29/2017; some games have multiple summaries. The summaries have been randomly split into training, validation, and test sets consisting of 3398, 727, and 728 summaries, respectively. | Provide a detailed description of the following dataset: RotoWire |
WikiBio | This dataset gathers 728,321 biographies from English Wikipedia. It aims at evaluating text generation algorithms. For each article, we provide the first paragraph and the infobox (both tokenized). | Provide a detailed description of the following dataset: WikiBio |
DailyDialog | **DailyDialog** is a high-quality multi-turn open-domain English dialog dataset. It contains 13,118 dialogues split into a training set with 11,118 dialogues and validation and test sets with 1000 dialogues each. On average there are around 8 speaker turns per dialogue with around 15 tokens per turn. | Provide a detailed description of the following dataset: DailyDialog |
WebNLG | The **WebNLG** corpus comprises sets of triplets describing facts (entities and relations between them) and the corresponding facts in the form of natural language text. The corpus contains sets with up to 7 triplets each along with one or more reference texts for each set. The test set is split into two parts: seen, containing inputs created for entities and relations belonging to DBpedia categories that were seen in the training data, and unseen, containing inputs extracted for entities and relations belonging to 5 unseen categories.
Initially, the dataset was used for the WebNLG natural language generation challenge which consists of mapping the sets of triplets to text, including referring expression generation, aggregation, lexicalization, surface realization, and sentence segmentation.
The corpus is also used for a reverse task of triplets extraction.
Versioning history of the dataset can be found [here](https://gitlab.com/shimorina/webnlg-dataset/-/tree/master/). | Provide a detailed description of the following dataset: WebNLG |
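As an illustration of the triplets-plus-text structure, the sketch below shows what a WebNLG-style entry might look like as a plain data structure; the triples, reference sentence, and field names are invented and do not reflect the corpus's official schema.

```python
# Invented example of a WebNLG-style entry: a set of (subject, predicate, object)
# triples paired with reference texts. Field names are illustrative only.
entry = {
    "triples": [
        ("John_Doe", "occupation", "Test_pilot"),
        ("John_Doe", "birthPlace", "Springfield"),
    ],
    "references": [
        "John Doe, who was born in Springfield, worked as a test pilot.",
    ],
}

# Data-to-text generation maps entry["triples"] to text like the reference above;
# the reverse task extracts the triples back from the sentence.
for subject, predicate, obj in entry["triples"]:
    print(f"{subject} --{predicate}--> {obj}")
```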
MegaFace | **MegaFace** was a publicly available dataset used for evaluating the performance of face recognition algorithms with up to a million distractors (i.e., up to a million people who are not in the test set). MegaFace contains 1M images from 690K individuals with unconstrained pose, expression, lighting, and exposure. MegaFace captures many different subjects rather than many images of a small number of subjects. The gallery set of MegaFace is collected from a subset of Flickr. The probe set of MegaFace used in the challenge consists of two databases: FaceScrub and FGNet. FGNet contains 975 images of 82 individuals, each with several images spanning ages from 0 to 69. The FaceScrub dataset contains more than 100K face images of 530 people. The MegaFace challenge evaluates the performance of face recognition algorithms by increasing the number of “distractors” (going from 10 to 1M) in the gallery set. In order to evaluate face recognition algorithms fairly, the MegaFace challenge has two protocols, with large or small training sets. If a training set has more than 0.5M images and 20K subjects, it is considered large. Otherwise, it is considered small.
**NOTE**: This dataset [has been retired](https://exposing.ai/megaface/). | Provide a detailed description of the following dataset: MegaFace |
IJB-B | The **IJB-B** dataset is a template-based face dataset that contains 1845 subjects with 11,754 images, 55,025 frames and 7,011 videos where a template consists of a varying number of still images and video frames from different sources. These images and videos are collected from the Internet and are totally unconstrained, with large variations in pose, illumination, image quality etc. In addition, the dataset comes with protocols for 1-to-1 template-based face verification, 1-to-N template-based open-set face identification, and 1-to-N open-set video face identification. | Provide a detailed description of the following dataset: IJB-B |
IJB-A | The **IARPA Janus Benchmark A** (**IJB-A**) database was developed with the aim of adding more challenges to the face recognition task by collecting facial images with wide variations in pose, illumination, expression, resolution and occlusion. IJB-A is constructed by collecting 5,712 images and 2,085 videos from 500 identities, with an average of 11.4 images and 4.2 videos per identity. | Provide a detailed description of the following dataset: IJB-A |
300W | The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size. The images were downloaded from google.com by making queries such as “party”, “conference”, “protests”, “football” and “celebrities”. Compared to the rest of in-the-wild datasets, the 300-W database contains a larger percentage of partially-occluded images and covers more expressions than the common “neutral” or “smile”, such as “surprise” or “scream”.
Images were annotated with the 68-point mark-up using a semi-automatic methodology. The images of the database were carefully selected so that they represent a characteristic sample of challenging but natural face instances under totally unconstrained conditions. Thus, methods that achieve accurate performance on the 300-W database can demonstrate the same accuracy in most realistic cases.
Many images of the database contain more than one annotated face (293 images with 1 face, 53 images with 2 faces and 53 images with 3 to 7 faces). Consequently, the database consists of 600 annotated face instances across 399 unique images. Finally, there is a large variety of face sizes. Specifically, 49.3% of the faces have a size in the range [48.6k, 2.0M] pixels and the overall mean size is 85k (about 292 × 292) pixels. | Provide a detailed description of the following dataset: 300W |
FG-NET | FG-NET is a dataset for age estimation and face recognition across ages. It is composed of a total of 1,002 images of 82 people, with an age range from 0 to 69 years and an age gap of up to 45 years. | Provide a detailed description of the following dataset: FG-NET |
IJB-C | The **IJB-C** dataset is a video-based face recognition dataset. It is an extension of the IJB-A dataset with about 138,000 face images, 11,000 face videos, and 10,000 non-face images. | Provide a detailed description of the following dataset: IJB-C |
PASCAL Face | The PASCAL Face dataset is a dataset for face detection and face recognition. It has a total of 851 images, which are a subset of PASCAL VOC, with a total of 1,341 annotations. The dataset contains only a few hundred images and has limited variation in face appearance. | Provide a detailed description of the following dataset: PASCAL Face |
AFLW2000-3D | **AFLW2000-3D** is a dataset of 2,000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for the evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to detect with a CNN-based face detector. | Provide a detailed description of the following dataset: AFLW2000-3D |
Florence | The **Florence** 3D faces dataset consists of:
* High-resolution 3D scans of human faces from many subjects.
* Several video sequences of varying resolution, conditions and zoom level for each subject.
Each subject is recorded in the following situations:
* In a controlled setting in HD video.
* In a less-constrained (but still indoor) setting using a standard, PTZ surveillance camera.
* In an unconstrained, outdoor environment under challenging recording conditions. | Provide a detailed description of the following dataset: Florence |
MORPH | **MORPH** is a facial age estimation dataset, which contains 55,134 facial images of 13,617 subjects ranging from 16 to 77 years old. | Provide a detailed description of the following dataset: MORPH |
CUFS | CUHK Face Sketch database (CUFS) is for research on face sketch synthesis and face sketch recognition. It includes 188 faces from the Chinese University of Hong Kong (CUHK) student database, 123 faces from the AR database [1], and 295 faces from the XM2VTS database [2]. There are 606 faces in total. For each face, there is a sketch drawn by an artist based on a photo taken in a frontal pose, under normal lighting condition, and with a neutral expression.
[1] A. M. Martinez, and R. Benavente, “The AR Face Database,” CVC Technical Report #24, June 1998.
[2] K. Messer, J. Matas, J. Kittler, J. Luettin, and G. Maitre, “XM2VTSDB: the Extended of M2VTS Database,” in Proceedings of International Conference on Audio- and Video-Based Person Authentication, pp. 72-77, 1999. | Provide a detailed description of the following dataset: CUFS |
CUFSF | The CUHK Face Sketch FERET (**CUFSF**) is a dataset for research on face sketch synthesis and face sketch recognition. It contains two types of face images: photos and sketches. In total, 1,194 images (one image per subject) were collected with lighting variations from the FERET dataset. For each subject, a sketch is drawn with shape exaggeration. | Provide a detailed description of the following dataset: CUFSF |
Caltech-101 | The Caltech-101 dataset contains images from 101 object categories (e.g., “helicopter”, “elephant” and “chair”) and a background category that contains images not from the 101 object categories. For each object category, there are about 40 to 800 images, while most classes have about 50 images. The image resolution is roughly 300×200 pixels. | Provide a detailed description of the following dataset: Caltech-101 |
Oxford-IIIT Pet Dataset | The Oxford-IIIT Pet Dataset has 37 categories with roughly 200 images for each class. The images have large variations in scale, pose and lighting. All images have an associated ground truth annotation of breed, head ROI, and pixel-level trimap segmentation.
The dataset is also available on Academic Torrents, [here](https://academictorrents.com/details/b18bbd9ba03d50b0f7f479acc9f4228a408cecc1). | Provide a detailed description of the following dataset: Oxford-IIIT Pet Dataset |
Stanford Cars | The **Stanford Cars** dataset consists of 196 classes of cars with a total of 16,185 images, taken from the rear. The data is divided into almost a 50-50 train/test split with 8,144 training images and 8,041 testing images. Categories are typically at the level of Make, Model, Year. The images are 360×240. | Provide a detailed description of the following dataset: Stanford Cars |
NABirds | **NABirds** V1 is a collection of 48,000 annotated photographs of the 400 species of birds that are commonly observed in North America. More than 100 photographs are available for each species, including separate annotations for males, females and juveniles that comprise 700 visual categories. This dataset is to be used for fine-grained visual categorization experiments. | Provide a detailed description of the following dataset: NABirds |
Stanford Dogs | The **Stanford Dogs** dataset contains 20,580 images of 120 classes of dogs from around the world, which are divided into 12,000 images for training and 8,580 images for testing.
Source: [Universal-to-Specific Framework for Complex Action Recognition](https://arxiv.org/abs/2007.06149)
Image Source: [https://www.tensorflow.org/datasets/catalog/stanford_dogs](https://www.tensorflow.org/datasets/catalog/stanford_dogs) | Provide a detailed description of the following dataset: Stanford Dogs |
FFHQ | **Flickr-Faces-HQ (FFHQ)** consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc. The images were crawled from Flickr, thus inheriting all the biases of that website, and automatically aligned and cropped using dlib. Only images under permissive licenses were collected. Various automatic filters were used to prune the set, and finally Amazon Mechanical Turk was used to remove the occasional statues, paintings, or photos of photos. | Provide a detailed description of the following dataset: FFHQ |
RaFD | The **Radboud Faces Database** (**RaFD**) is a set of pictures of 67 models (both adult and children, males and females) displaying 8 emotional expressions. | Provide a detailed description of the following dataset: RaFD |
WikiQA | The **WikiQA** corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. In order to reflect the true information need of general users, Bing query logs were used as the question source. Each question is linked to a Wikipedia page that potentially has the answer. Because the summary section of a Wikipedia page provides the basic and usually most important information about the topic, sentences in this section were used as the candidate answers. The corpus includes 3,047 questions and 29,258 sentences, where 1,473 sentences were labeled as answer sentences to their corresponding questions. | Provide a detailed description of the following dataset: WikiQA |
WebQuestions | The **WebQuestions** dataset is a question answering dataset using Freebase as the knowledge base and contains 6,642 question-answer pairs. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. The original split uses 3,778 examples for training and 2,032 for testing. All answers are defined as Freebase entities.
Example questions (answers) in the dataset include “Where did Edgar Allan Poe died?” (baltimore) or “What degrees did Barack Obama get?” (bachelor_of_arts, juris_doctor). | Provide a detailed description of the following dataset: WebQuestions |
SimpleQuestions | **SimpleQuestions** is a large-scale factoid question answering dataset. It consists of 108,442 natural language questions, each paired with a corresponding fact from Freebase knowledge base. Each fact is a triple (subject, relation, object) and the answer to the question is always the object. The dataset is divided into training, validation, and test sets with 75,910, 10,845 and 21,687 questions respectively. | Provide a detailed description of the following dataset: SimpleQuestions |
TrecQA | **Text Retrieval Conference Question Answering** (**TrecQA**) is a dataset created from the TREC-8 (1999) to TREC-13 (2004) Question Answering tracks. There are two versions of TrecQA: raw and clean. Both versions have the same training set but their development and test sets differ. The commonly used clean version of the dataset excludes questions in development and test sets with no answers or only positive/negative answers. The clean version has 1,229/65/68 questions and 53,417/1,117/1,442 question-answer pairs for the train/dev/test split. | Provide a detailed description of the following dataset: TrecQA |
WikiHop | **WikiHop** is a multi-hop question-answering dataset. The query of WikiHop is constructed with entities and relations from WikiData, while supporting documents are from WikiReading. A bipartite graph connecting entities and documents is first built and the answer for each query is located by traversal on this graph. Candidates that are type-consistent with the answer and share the same relation in the query as the answer are included, resulting in a set of candidates. Thus, WikiHop is a multiple-choice style reading comprehension dataset. There are in total about 43K samples in the training set, 5K samples in the development set and 2.5K samples in the test set; the test set is not publicly released. The task is to predict the correct answer given a query and multiple supporting documents.
The dataset includes a masked variant, where all candidates and their mentions in the supporting documents are replaced by random but consistent placeholder tokens. | Provide a detailed description of the following dataset: WikiHop |
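A rough, purely illustrative sketch of the masking idea described above: each candidate is mapped to a random but consistent placeholder token, and its mentions in the supporting documents are replaced accordingly.

```python
# Illustrative masking of candidates with consistent placeholder tokens,
# mirroring the masked WikiHop variant described above (not official tooling).
import random

def mask_candidates(candidates, documents, seed=0):
    rng = random.Random(seed)
    placeholders = [f"__MASK{i}__" for i in range(len(candidates))]
    rng.shuffle(placeholders)                      # random but consistent mapping
    mapping = dict(zip(candidates, placeholders))
    masked_docs = []
    for doc in documents:
        for candidate, token in mapping.items():
            doc = doc.replace(candidate, token)
        masked_docs.append(doc)
    return mapping, masked_docs

mapping, docs = mask_candidates(
    ["Paris", "Berlin"],
    ["Paris is the capital of France.", "Berlin lies on the Spree."],
)
print(mapping)
print(docs)
```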