| dataset_name | description | prompt |
|---|---|---|
NSIDES | Drug side effects and drug-drug interactions were mined from publicly available data. Offsides is a database of drug side effects that were found but are not listed on the official FDA label. Twosides is the only comprehensive database of drug-drug-effect relationships. It covers over 3,300 drugs and 63,000 drug combinations, connected to millions of potential adverse reactions.
Source: [http://tatonettilab.org/offsides/](http://tatonettilab.org/offsides/)
Image Source: [http://doi.org/10.1126/scitranslmed.3003377](http://doi.org/10.1126/scitranslmed.3003377) | Provide a detailed description of the following dataset: NSIDES |
DDI | The **DDI**Extraction 2013 task relies on the DDI corpus which contains MedLine abstracts on drug-drug interactions as well as documents describing drug-drug interactions from the DrugBank database. | Provide a detailed description of the following dataset: DDI |
Stylized ImageNet | The Stylized-ImageNet dataset is created by removing local texture cues in ImageNet while retaining global shape information on natural images via AdaIN style transfer. This nudges CNNs towards learning more about shapes and less about local textures. | Provide a detailed description of the following dataset: Stylized ImageNet |
MuTual | **MuTual** is a retrieval-based dataset for multi-turn dialogue reasoning, which is modified from Chinese high school English listening comprehension test data. It tests dialogue reasoning via next utterance prediction.
Source: [https://github.com/Nealcly/MuTual](https://github.com/Nealcly/MuTual)
Image Source: [https://github.com/Nealcly/MuTual](https://github.com/Nealcly/MuTual) | Provide a detailed description of the following dataset: MuTual |
CRIM13 | The Caltech Resident-Intruder Mouse dataset (**CRIM13**) consists of 237x2 videos (recorded with synchronized top and side view) of pairs of mice engaging in social behavior, catalogued into thirteen different actions. Each video lasts ~10min, for a total of 88 hours of video and 8 million frames. A team of behavior experts annotated each video on a frame-by-frame basis for a state-of-the-art study of the neurophysiological mechanisms involved in aggression and courtship in mice.
Source: [https://pdollar.github.io/research.html](https://pdollar.github.io/research.html)
Image Source: [https://authors.library.caltech.edu/104600/1/2020.07.26.222299v1.full.pdf](https://authors.library.caltech.edu/104600/1/2020.07.26.222299v1.full.pdf) | Provide a detailed description of the following dataset: CRIM13 |
Imagewoof | **Imagewoof** is a subset of 10 dog breed classes from Imagenet. The breeds are: Australian terrier, Border terrier, Samoyed, Beagle, Shih-Tzu, English foxhound, Rhodesian ridgeback, Dingo, Golden retriever, Old English sheepdog.
Source: [https://github.com/fastai/imagenette](https://github.com/fastai/imagenette)
Image Source: [https://medium.com/@lessw/how-we-beat-the-fastai-leaderboard-score-by-19-77-a-cbb2338fab5c](https://medium.com/@lessw/how-we-beat-the-fastai-leaderboard-score-by-19-77-a-cbb2338fab5c) | Provide a detailed description of the following dataset: Imagewoof |
Imagenette | **Imagenette** is a subset of 10 easily classified classes from Imagenet (bench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, parachute).
Source: [https://github.com/fastai/imagenette](https://github.com/fastai/imagenette)
Image Source: [https://docs.fast.ai/tutorial.imagenette.html](https://docs.fast.ai/tutorial.imagenette.html) | Provide a detailed description of the following dataset: Imagenette |
Stanford-ECM | **Stanford-ECM** is an egocentric multimodal dataset comprising about 27 hours of egocentric video augmented with heart rate and acceleration data. Individual videos range from 3 minutes to about 51 minutes in length. A mobile phone was used to collect egocentric video at 720x1280 resolution and 30 fps, as well as triaxial acceleration at 30 Hz. The phone was equipped with a wide-angle lens, enlarging the horizontal field of view from 45 degrees to about 64 degrees. A wrist-worn heart rate sensor captured the heart rate every 5 seconds. The phone and heart rate monitor were time-synchronized through Bluetooth, and all data was stored in the phone’s storage. Piecewise cubic polynomial interpolation was used to fill in any gaps in the heart rate data. Finally, the data was aligned at the millisecond level at 30 Hz.
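The gap-filling step can be sketched as follows (an illustrative reconstruction using SciPy's shape-preserving piecewise-cubic interpolator; the timestamps and heart-rate values below are made up, and this is not the authors' actual code):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical heart-rate samples: one reading every 5 seconds, with a gap
# between t=10s and t=25s where the sensor dropped readings.
t_hr = np.array([0.0, 5.0, 10.0, 25.0, 30.0])    # seconds
bpm = np.array([72.0, 74.0, 75.0, 80.0, 78.0])   # beats per minute

# Piecewise-cubic interpolation across the gap, then resampling at 30 Hz
# to align heart rate with the video and acceleration streams.
interp = PchipInterpolator(t_hr, bpm)
t_30hz = np.arange(0.0, 30.0 + 1e-9, 1.0 / 30.0)
bpm_30hz = interp(t_30hz)
```

A shape-preserving interpolant such as PCHIP avoids the overshoot a plain cubic spline can introduce in sparse physiological data, which is one plausible reading of "piecewise cubic polynomial interpolation" here.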
Source: [http://ai.stanford.edu/~syyeung/ecm_dataset/egocentric_multimodal.html](http://ai.stanford.edu/~syyeung/ecm_dataset/egocentric_multimodal.html)
Image Source: [http://ai.stanford.edu/~syyeung/ecm_dataset/egocentric_multimodal.html](http://ai.stanford.edu/~syyeung/ecm_dataset/egocentric_multimodal.html) | Provide a detailed description of the following dataset: Stanford-ECM |
BSD | **BSD** is a dataset used frequently for image denoising and super-resolution. Of its subdatasets, BSD100 is a classical image dataset of 100 test images proposed by Martin et al. The dataset comprises a large variety of images, ranging from natural scenes to object-specific images such as plants, people, and food. BSD100 is the testing set of the Berkeley segmentation dataset BSD300. | Provide a detailed description of the following dataset: BSD |
THUMOS14 | The **THUMOS14** dataset is a large-scale video dataset that includes 1,010 videos for validation and 1,574 videos for testing, spanning 20 action classes. Among all the videos, there are 220 and 212 videos with temporal annotations in the validation and testing sets, respectively. | Provide a detailed description of the following dataset: THUMOS14 |
MSRA Hand | **MSRA Hands** is a dataset for hand tracking. In total, 6 subjects' right hands were captured using Intel's Creative Interactive Gesture Camera. Each subject was asked to make various rapid gestures in a 400-frame video sequence. To account for different hand sizes, a global hand model scale is specified for each subject: 1.1, 1.0, 0.9, 0.95, 1.1, and 1.0 for subjects 1-6, respectively.
The camera intrinsic parameters are: principal point = image center (160, 120), focal length = 241.42. The depth image is 320x240; each *.bin file stores the depth pixel values in row-scanning order as 320*240 floats, in millimeters. The file is binary and needs to be opened with the std::ios::binary flag.
joint.txt file stores 400 frames x 21 hand joints per frame. Each line has 3 * 21 = 63 floats for 21 3D points in (x, y, z) coordinates. The 21 hand joints are: wrist, index_mcp, index_pip, index_dip, index_tip, middle_mcp, middle_pip, middle_dip, middle_tip, ring_mcp, ring_pip, ring_dip, ring_tip, little_mcp, little_pip, little_dip, little_tip, thumb_mcp, thumb_pip, thumb_dip, thumb_tip.
The corresponding *.jpg file is just for visualization of depth and ground truth joints.
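Based on the readme details above, a minimal Python reader might look like this (a sketch; the function names are illustrative and not part of the dataset's tooling):

```python
import numpy as np

def read_depth_bin(path):
    """Read one MSRA Hand depth frame: 320x240 float32 values stored in
    row-scanning order, in millimeters."""
    depth = np.fromfile(path, dtype=np.float32)
    assert depth.size == 320 * 240, "unexpected frame size"
    return depth.reshape(240, 320)  # 240 rows x 320 columns

def parse_joint_line(line):
    """Parse one line of joint.txt: 21 joints x (x, y, z) = 63 floats."""
    vals = np.array([float(v) for v in line.split()], dtype=np.float32)
    assert vals.size == 63, "expected 63 floats per line"
    return vals.reshape(21, 3)  # row i follows the joint order in the readme
```

Reading with `dtype=np.float32` matches the readme's "floats" (4-byte), and reshaping to (240, 320) follows the stated row-scanning order.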
Source: [https://jimmysuen.github.io/txt/cvpr14_MSRAHandTrackingDB_readme.txt](https://jimmysuen.github.io/txt/cvpr14_MSRAHandTrackingDB_readme.txt)
Image Source: [https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Qian_Realtime_and_Robust_2014_CVPR_paper.pdf](https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Qian_Realtime_and_Robust_2014_CVPR_paper.pdf) | Provide a detailed description of the following dataset: MSRA Hand |
MSRA10K | **MSRA10K** is a dataset for salient object detection that provides pixel-level saliency labeling for 10,000 images from the MSRA salient object detection dataset. The original MSRA database provides salient object annotation in terms of bounding boxes provided by 3-9 users.
Source: [https://mmcheng.net/msra10k/](https://mmcheng.net/msra10k/)
Image Source: [https://mmcheng.net/msra10k/](https://mmcheng.net/msra10k/) | Provide a detailed description of the following dataset: MSRA10K |
JHMDB | **JHMDB** is an action recognition dataset that consists of 960 video sequences belonging to 21 actions. It is a subset of the larger HMDB51 dataset, collected from digitized movies and YouTube videos. The dataset contains video and annotations for puppet flow per frame (approximated optical flow on the person), puppet mask per frame, joint positions per frame, action label per clip, and meta label per clip (camera motion, visible body parts, camera viewpoint, number of people, video quality). | Provide a detailed description of the following dataset: JHMDB |
UCF-CC-50 | **UCF-CC-50** is a dataset for crowd counting and consists of images of extremely dense crowds. It has 50 images with 63,974 head center annotations in total. The head counts range between 94 and 4,543 per image. The small dataset size and large variance make this a very challenging counting dataset. | Provide a detailed description of the following dataset: UCF-CC-50 |
AwA2 | **Animals with Attributes 2** (**AwA2**) is a dataset for benchmarking transfer-learning algorithms, such as attribute-based classification and zero-shot learning. AwA2 is a drop-in replacement for the original Animals with Attributes (AwA) dataset, with more images released for each category. Specifically, AwA2 consists of 37,322 images in total, distributed over 50 animal categories. AwA2 also provides a category-attribute matrix, which contains an 85-dimensional attribute vector (e.g., color, stripe, furry, size, and habitat) for each category. | Provide a detailed description of the following dataset: AwA2 |
AwA | **Animals with Attributes** (**AwA**) was a dataset for benchmarking transfer-learning algorithms, in particular attribute-based classification. It consisted of 30,475 images of 50 animal classes, with six pre-extracted feature representations for each image. The animal classes are aligned with Osherson's classical class/attribute matrix, thereby providing 85 numeric attribute values for each class. Using the shared attributes, it is possible to transfer information between different classes.
The Animals with Attributes dataset has been suspended; its images are no longer available because of copyright restrictions. A drop-in replacement, Animals with Attributes 2, is available instead. | Provide a detailed description of the following dataset: AwA |
ARC | The AI2’s Reasoning Challenge (**ARC**) dataset is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to grade 9. The dataset is split in two partitions: Easy and Challenge, where the latter partition contains the more difficult questions that require reasoning. Most of the questions have 4 answer choices, with <1% of all the questions having either 3 or 5 answer choices. ARC includes a supporting KB of 14.3M unstructured text passages. | Provide a detailed description of the following dataset: ARC |
PASCAL VOC 2011 | **PASCAL VOC 2011** is an image segmentation dataset. It contains 2,223 training images with 5,034 annotated objects, and 1,111 test images with 2,028 objects. In total, there are over 5,000 precisely segmented objects for training. | Provide a detailed description of the following dataset: PASCAL VOC 2011 |
2D-3D-S | The **2D-3D-S** dataset provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. It covers over 6,000 m² collected in 6 large-scale indoor areas that originate from 3 different buildings. It contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces. | Provide a detailed description of the following dataset: 2D-3D-S |
Color FERET | The color FERET database is a dataset for face recognition. It contains 11,338 color images of size 512×768 pixels captured in a semi-controlled environment with 13 different poses from 994 subjects. | Provide a detailed description of the following dataset: Color FERET |
ICDAR 2017 | ICDAR2017 is a dataset for scene text detection.
Source: [Scale-Invariant Multi-Oriented Text Detection in Wild Scene Images](https://arxiv.org/abs/2002.06423)
Image Source: [https://rrc.cvc.uab.es/?ch=7](https://rrc.cvc.uab.es/?ch=7) | Provide a detailed description of the following dataset: ICDAR 2017 |
BUCC | The **BUCC** mining task is a shared task, run since 2016, on parallel sentence extraction from two monolingual corpora in which a subset of the sentences is assumed to be parallel. For each language pair, the shared task provides a monolingual corpus for each language and a gold mapping list containing the true translation pairs, which serve as the ground truth. The task is to construct a list of translation pairs from the monolingual corpora; the constructed list is compared to the ground truth and evaluated in terms of the F1 measure. | Provide a detailed description of the following dataset: BUCC |
Make3D | The **Make3D** dataset is a monocular Depth Estimation dataset that contains 400 single training RGB and depth map pairs, and 134 test samples. The RGB images have high resolution, while the depth maps are provided at low resolution. | Provide a detailed description of the following dataset: Make3D |
Virtual KITTI | **Virtual KITTI** is a photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks: object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation.
Virtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions. These worlds were created using the Unity game engine and a novel real-to-virtual cloning method. These photo-realistic synthetic videos are automatically, exactly, and fully annotated for 2D and 3D multi-object tracking and at the pixel level with category, instance, flow, and depth labels. | Provide a detailed description of the following dataset: Virtual KITTI |
NCLT | The **NCLT** dataset is a large scale, long-term autonomy dataset for robotics research collected on the University of Michigan’s North Campus. The dataset consists of omnidirectional imagery, 3D lidar, planar lidar, GPS, and proprioceptive sensors for odometry collected using a Segway robot. The dataset was collected to facilitate research focusing on long-term autonomous operation in changing environments. The dataset is comprised of 27 sessions spaced approximately biweekly over the course of 15 months. The sessions repeatedly explore the campus, both indoors and outdoors, on varying trajectories, and at different times of the day across all four seasons. This allows the dataset to capture many challenging elements including: moving obstacles (e.g., pedestrians, bicyclists, and cars), changing lighting, varying viewpoint, seasonal and weather changes (e.g., falling leaves and snow), and long-term structural changes caused by construction projects. | Provide a detailed description of the following dataset: NCLT |
KITTI-Depth | The **KITTI-Depth** dataset includes depth maps from projected LiDAR point clouds that were matched against the depth estimation from the stereo cameras. The depth images are highly sparse with only 5% of the pixels available and the rest is missing. The dataset has 86k training images, 7k validation images, and 1k test set images on the benchmark server with no access to the ground truth.
Source: [Confidence Propagation through CNNs for Guided Sparse Depth Regression](https://arxiv.org/abs/1811.01791)
Image Source: [http://www.cvlibs.net/datasets/kitti/eval_depth.php?benchmark=depth_prediction](http://www.cvlibs.net/datasets/kitti/eval_depth.php?benchmark=depth_prediction) | Provide a detailed description of the following dataset: KITTI-Depth |
SoF | The **Specs on Faces** (**SoF**) dataset is a collection of 42,592 (2,662×16) images of 112 persons (66 males and 46 females) who wear glasses, captured under different illumination conditions. The dataset is free for reasonable academic fair use. It presents a new challenge for face detection and recognition, focusing on two factors that strongly affect face detection, recognition, and classification: harsh illumination environments and face occlusions. Glasses are the common natural occlusion in all images of the dataset; in addition, two synthetic occlusions (nose and mouth) are added to each image. Moreover, three image filters that may evade face detectors and facial recognition systems were applied to each image. All generated images are categorized into three levels of difficulty (easy, medium, and hard), which enlarges the dataset to 42,592 images (26,112 male images and 16,480 female images). Each image has metadata including the subject ID, facial landmarks, face and glasses rectangles, gender and age labels, the year the photo was taken, facial emotion, glasses type, and more.
Source: [https://sites.google.com/view/sof-dataset](https://sites.google.com/view/sof-dataset)
Image Source: [https://sites.google.com/view/sof-dataset](https://sites.google.com/view/sof-dataset) | Provide a detailed description of the following dataset: SoF |
KITTI Road | KITTI Road is a road and lane estimation benchmark that consists of 289 training and 290 test images. It contains three different categories of road scenes:
* uu - urban unmarked (98/100)
* um - urban marked (95/96)
* umm - urban multiple marked lanes (96/94)
* urban - combination of the three above
Ground truth has been generated by manual annotation of the images and is available for two different road terrain types: road, the road area (i.e., the composition of all lanes), and lane, the ego-lane (i.e., the lane the vehicle is currently driving on; only available for category "um"). Ground truth is provided for training images only. | Provide a detailed description of the following dataset: KITTI Road |
KAIST Urban | This dataset provides Light Detection and Ranging (LiDAR) data and stereo images, together with various position sensors, targeting a highly complex urban environment. It captures features of urban environments (e.g., metropolitan areas, complex buildings, and residential areas). Data from both 2D and 3D LiDAR, the typical types of LiDAR sensors, are provided. Raw sensor data for vehicle navigation is presented in file form. For convenience, development tools are provided for the Robot Operating System (ROS) environment. | Provide a detailed description of the following dataset: KAIST Urban |
Manga109 | **Manga109** has been compiled by the Aizawa Yamasaki Matsui Laboratory, Department of Information and Communication Engineering, the Graduate School of Information Science and Technology, the University of Tokyo. The compilation is intended for use in academic research on the media processing of Japanese manga. Manga109 is composed of 109 manga volumes drawn by professional manga artists in Japan. These manga were commercially made available to the public between the 1970s and 2010s, and encompass a wide range of target readerships and genres. Most of the manga in the compilation are available at the manga library “Manga Library Z” (formerly the “Zeppan Manga Toshokan” library of out-of-print manga). | Provide a detailed description of the following dataset: Manga109 |
GQA | The **GQA** dataset is a large-scale visual question answering dataset with real images from the Visual Genome dataset and balanced question-answer pairs. Each training and validation image is also associated with scene graph annotations describing the classes and attributes of the objects in the scene, and their pairwise relations. Along with the images and question-answer pairs, the GQA dataset provides two types of pre-extracted visual features for each image – convolutional grid features of size 7×7×2048 extracted from a ResNet-101 network trained on ImageNet, and object detection features of size Ndet×2048 (where Ndet is the number of detected objects in each image, with a maximum of 100 per image) from a Faster R-CNN detector. | Provide a detailed description of the following dataset: GQA |
MUSE | The **MUSE** dataset contains bilingual dictionaries for 110 pairs of languages. For each language pair, the training seed dictionaries contain approximately 5000 word pairs while the evaluation sets contain 1500 word pairs. | Provide a detailed description of the following dataset: MUSE |
Replay-Mobile | The **Replay-Mobile** database for face spoofing consists of 1,190 video clips of photo and video attack attempts on 40 clients, under different lighting conditions. These videos were recorded with current devices from the market -- an iPad Mini 2 (running iOS) and an LG G4 smartphone (running Android). The database was produced at the Idiap Research Institute (Switzerland) in collaboration with the Galician Research and Development Center in Advanced Telecommunications - Gradiant (Spain). | Provide a detailed description of the following dataset: Replay-Mobile |
Netflix Prize | **Netflix Prize** consists of about 100,000,000 ratings for 17,770 movies given by 480,189 users. Each rating in the training dataset consists of four entries: user, movie, date of grade, grade. Users and movies are represented with integer IDs, while ratings range from 1 to 5. | Provide a detailed description of the following dataset: Netflix Prize |
Recipe1M+ | **Recipe1M+** is a dataset which contains one million structured cooking recipes with 13M associated images. | Provide a detailed description of the following dataset: Recipe1M+ |
DARPA | **DARPA** is a dataset consisting of communications between source IPs and destination IPs; it contains various attacks between IPs.
Source: [dynnode2vec: Scalable Dynamic Network Embedding](https://arxiv.org/abs/1812.02356)
Image Source: [https://archive.ll.mit.edu/ideval/files/1999_DARPA_EvaulationSumPlans.pdf](https://archive.ll.mit.edu/ideval/files/1999_DARPA_EvaulationSumPlans.pdf) | Provide a detailed description of the following dataset: DARPA |
HOList | The official **HOList** benchmark for automated theorem proving consists of all theorem statements in the core, complex, and flyspeck corpora. The goal of the benchmark is to prove as many theorems as possible in the HOList environment in the order they appear in the database. That is, only theorems that occur before the current theorem are supposed to be used as premises (lemmata) in its proof. | Provide a detailed description of the following dataset: HOList |
ICDAR 2003 | The ICDAR2003 dataset is a dataset for scene text recognition. It contains 507 natural scene images (including 258 training images and 249 test images) in total. The images are annotated at character level. Characters and words can be cropped from the images. | Provide a detailed description of the following dataset: ICDAR 2003 |
CASIA-FASD | **CASIA-FASD** is a small face anti-spoofing dataset containing 50 subjects. | Provide a detailed description of the following dataset: CASIA-FASD |
CASIA-HWDB | **CASIA-HWDB** is a dataset for handwritten Chinese character recognition. It contains 300 files (240 in HWDB1.1 training set and 60 in HWDB1.1 test set). Each file contains about 3000 isolated gray-scale Chinese character images written by one writer, as well as their corresponding labels. | Provide a detailed description of the following dataset: CASIA-HWDB |
TAC 2010 | **TAC 2010** is a dataset for summarization that consists of 44 topics, each associated with a set of 10 documents. The topics are divided into five categories: Accidents and Natural Disasters, Attacks, Health and Safety, Endangered Resources, and Investigations and Trials.
Source: [Better Summarization Evaluation with Word Embeddings for ROUGE](https://arxiv.org/abs/1508.06034)
Image Source: [https://tac.nist.gov//2010/Summarization/Guided-Summ.2010.guidelines.html](https://tac.nist.gov//2010/Summarization/Guided-Summ.2010.guidelines.html) | Provide a detailed description of the following dataset: TAC 2010 |
TUM-GAID | **TUM-GAID** (TUM Gait from Audio, Image and Depth) contains 305 subjects performing two walking trajectories in an indoor environment. The first trajectory is traversed from left to right and the second one from right to left. Two recording sessions were performed: one in January, where subjects wore heavy jackets and mostly winter boots, and another in April, where subjects wore lighter clothes. The action is captured by a Microsoft Kinect sensor, which provides a video stream with a resolution of 640×480 pixels at a frame rate of around 30 fps.
Source: [Energy-based Tuning of Convolutional Neural Networks on Multi-GPUs](https://arxiv.org/abs/1808.00286)
Image Source: [https://www.ei.tum.de/mmk/verschiedenes/tum-gaid-database/](https://www.ei.tum.de/mmk/verschiedenes/tum-gaid-database/) | Provide a detailed description of the following dataset: TUM-GAID |
DUC 2005 | The **DUC 2005** dataset is a dataset for summarization which consists of 50 document collections of 25 documents each; each document collection includes a human-written query. Each document collection additionally has five human-written “reference” summaries (250 words long each) that serve as the gold standard.
Source: [Search-based Structured Prediction](https://arxiv.org/abs/0907.0786)
Image Source: [https://duc.nist.gov/duc2005/tasks.html](https://duc.nist.gov/duc2005/tasks.html) | Provide a detailed description of the following dataset: DUC 2005 |
NIST SD 19 | NIST Special Database 19 contains NIST's entire corpus of training materials for handprinted document and character recognition. It publishes Handprinted Sample Forms from 3600 writers, 810,000 character images isolated from their forms, ground truth classifications for those images, reference forms for further data collection, and software utilities for image management and handling.
Source: [https://www.nist.gov/srd/nist-special-database-19](https://www.nist.gov/srd/nist-special-database-19)
Image Source: [https://www.nist.gov/srd/nist-special-database-19](https://www.nist.gov/srd/nist-special-database-19) | Provide a detailed description of the following dataset: NIST SD 19 |
PRImA | The PRImA head pose dataset consists of 2,790 images of 15 persons, each recorded twice. Pitch values lie in the interval [−60°, 60°] and yaw values in the interval [−90°, 90°], sampled with a 15° step. Thus, there are 93 poses available for each person. All recordings were made against the same background. One interesting feature of this dataset is that the pose space is uniformly sampled. The dataset is annotated such that a manually annotated face bounding box and the corresponding yaw and pitch angle values are provided for each sample.
Source: [Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions](https://arxiv.org/abs/1603.09732)
Image Source: [http://www-prima.inrialpes.fr/perso/Gourier/Faces/HPDatabase.html](http://www-prima.inrialpes.fr/perso/Gourier/Faces/HPDatabase.html) | Provide a detailed description of the following dataset: PRImA |
TAU Urban Acoustic Scenes 2019 | The **TAU Urban Acoustic Scenes 2019** development dataset consists of 10-second audio segments from 10 acoustic scenes: airport, indoor shopping mall, metro station, pedestrian street, public square, street with a medium level of traffic, travelling by tram, travelling by bus, travelling by underground metro, and urban park. Each acoustic scene has 1,440 segments (240 minutes of audio). The dataset contains 40 hours of audio in total.
Source: [https://zenodo.org/record/2589280](https://zenodo.org/record/2589280)
Image Source: [http://dcase.community/challenge2019/task-acoustic-scene-classification#citation](http://dcase.community/challenge2019/task-acoustic-scene-classification#citation) | Provide a detailed description of the following dataset: TAU Urban Acoustic Scenes 2019 |
TAU Spatial Sound Events 2019 - Ambisonic | The **TAU Spatial Sound Events 2019 - Ambisonic** dataset contains synthesized sound scene recordings (with a Microphone Array sister dataset covering the same scenes). It provides four-channel First-Order Ambisonic (FOA) recordings. The recordings consist of stationary point sources from multiple sound classes, each associated with a temporal onset and offset time and a DOA coordinate represented by azimuth and elevation angles.
The development set consists of 400 one-minute recordings sampled at 48,000 Hz, divided into four cross-validation splits of 100 recordings each. The recordings were synthesized using spatial room impulse responses (IRs) collected at five indoor locations, at 504 unique azimuth-elevation-distance combinations. To synthesize the recordings, the collected IRs were convolved with the isolated sound events dataset from DCASE 2016 task 2. Finally, to create realistic sound scene recordings, natural ambient noise collected at the IR recording locations was added to the synthesized recordings such that the average SNR of the sound events was 30 dB.
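The synthesis recipe above (convolve a dry event with a measured IR, then add ambience at a 30 dB average SNR) can be sketched for the single-channel case as follows; the actual dataset uses multichannel spatial IRs, so this mono version is only illustrative:

```python
import numpy as np

def mix_at_snr(event, ir, ambience, snr_db=30.0):
    """Convolve a dry sound event with a room impulse response, then add
    ambient noise scaled so the event-to-ambience SNR hits the target.

    event, ir, ambience: mono numpy arrays at the same sample rate;
    ambience must be at least as long as the convolved event.
    """
    spatialized = np.convolve(event, ir)
    amb = ambience[: len(spatialized)]
    p_sig = np.mean(spatialized ** 2)
    p_amb = np.mean(amb ** 2)
    # Choose gain g so that p_sig / (g^2 * p_amb) = 10^(snr_db / 10).
    g = np.sqrt(p_sig / (p_amb * 10 ** (snr_db / 10.0)))
    return spatialized + g * amb
```

The gain follows from the definition SNR(dB) = 10·log10(P_signal / P_noise), solved for the noise scale.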
Source: [https://zenodo.org/record/2580091](https://zenodo.org/record/2580091)
Image Source: [http://dcase.community/challenge2019/task-sound-event-localization-and-detection#audio-dataset](http://dcase.community/challenge2019/task-sound-event-localization-and-detection#audio-dataset) | Provide a detailed description of the following dataset: TAU Spatial Sound Events 2019 - Ambisonic |
TAU Spatial Sound Events 2019 – Microphone Array | The **TAU Spatial Sound Events 2019 – Microphone Array** dataset contains synthesized sound scene recordings (with an Ambisonic sister dataset covering the same scenes). It provides four-channel directional microphone recordings from a tetrahedral array configuration. The recordings consist of stationary point sources from multiple sound classes, each associated with a temporal onset and offset time and a DOA coordinate represented by azimuth and elevation angles.
The development set consists of 400 one-minute recordings sampled at 48,000 Hz, divided into four cross-validation splits of 100 recordings each. The recordings were synthesized using spatial room impulse responses (IRs) collected at five indoor locations, at 504 unique azimuth-elevation-distance combinations. To synthesize the recordings, the collected IRs were convolved with the isolated sound events dataset from DCASE 2016 task 2. Finally, to create realistic sound scene recordings, natural ambient noise collected at the IR recording locations was added to the synthesized recordings such that the average SNR of the sound events was 30 dB.
Source: [https://zenodo.org/record/2580091](https://zenodo.org/record/2580091)
Image Source: [http://dcase.community/challenge2019/task-sound-event-localization-and-detection#audio-dataset](http://dcase.community/challenge2019/task-sound-event-localization-and-detection#audio-dataset) | Provide a detailed description of the following dataset: TAU Spatial Sound Events 2019 – Microphone Array |
CASIA V2 | **CASIA V2** is a dataset for forgery classification. It contains 4795 images, 1701 authentic and 3274 forged. | Provide a detailed description of the following dataset: CASIA V2 |
KTH Multiview Football II | KTH Multiview Football II consists of images of professional footballers taken during a match of the Allsvenskan league. It consists of two parts: one with ground-truth pose in 2D and one with ground-truth pose in both 2D and 3D. The 3D part has 800 time frames, captured from 3 views (2,400 images). The views are calibrated and synchronized, and 3D ground-truth pose and orthographic camera matrices are provided for each frame. There are 14 annotated joints. Lastly, there are two different players and two sequences per player.
Source: [http://www.csc.kth.se/~vahidk/football_data.html](http://www.csc.kth.se/~vahidk/football_data.html)
Image Source: [http://www.csc.kth.se/cvap/cvg/?page=footballdataset2](http://www.csc.kth.se/cvap/cvg/?page=footballdataset2) | Provide a detailed description of the following dataset: KTH Multiview Football II |
KTH Multiview Football I | **KTH Multiview Football I** is a dataset of football players with annotated joints that can be used for multi-view reconstruction. The dataset includes 771 images of football players, taken from 3 views at 257 time instances, with 14 annotated body joints.
Source: [http://www.csc.kth.se/~vahidk/football_data.html](http://www.csc.kth.se/~vahidk/football_data.html)
Image Source: [http://www.csc.kth.se/~vahidk/football_data.html](http://www.csc.kth.se/~vahidk/football_data.html) | Provide a detailed description of the following dataset: KTH Multiview Football I |
TUM monoVO | **TUM monoVO** is a dataset for evaluating the tracking accuracy of monocular Visual Odometry (VO) and SLAM methods. It contains 50 real-world sequences comprising over 100 minutes of video, recorded across different environments – ranging from narrow indoor corridors to wide outdoor scenes.
All sequences contain mostly exploring camera motion, starting and ending at the same position: this makes it possible to evaluate tracking accuracy via the accumulated drift from start to end, without requiring ground truth for the full sequence.
In contrast to existing datasets, all sequences are photometrically calibrated: the dataset creators provide the exposure times for each frame as reported by the sensor, the camera response function and the lens attenuation factors (vignetting).
Source: [https://vision.in.tum.de/data/datasets/mono-dataset](https://vision.in.tum.de/data/datasets/mono-dataset)
Image Source: [https://vision.in.tum.de/data/datasets/mono-dataset](https://vision.in.tum.de/data/datasets/mono-dataset) | Provide a detailed description of the following dataset: TUM monoVO |
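Since the sequences ship with exposure times, a camera response function, and vignetting maps, a photometric correction can be applied per pixel. The sketch below illustrates the standard model behind such a calibration (irradiance recovered by inverting the response and dividing out exposure and vignetting); the function and variable names are my own, not the dataset's API.

```python
def photometrically_correct(intensity, inv_response, exposure_s, vignette):
    # Assumed image formation model: I = G(t * V(x) * B(x)), where G is the
    # camera response, t the exposure time, V the vignetting factor at pixel
    # x, and B the scene irradiance.  Inverting it gives an
    # exposure-independent value:
    #     B(x) = G^{-1}(I) / (t * V(x))
    return inv_response[intensity] / (exposure_s * vignette)

# Toy inverse response: identity over 256 intensity levels (a real G^{-1}
# comes from the dataset's calibration files).
identity_inv_response = [float(i) for i in range(256)]
```

With the same scene point observed under two exposures, the corrected values should agree, which is what makes photometrically calibrated sequences useful for direct VO methods.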
ICL-NUIM | The **ICL-NUIM** dataset aims at benchmarking RGB-D, Visual Odometry and SLAM algorithms. Two different scenes (the living room and the office room) are provided with ground truth. The living room scene has 3D surface ground truth together with depth maps and camera poses, so it is well suited not only for benchmarking camera trajectories but also for reconstruction. The office room scene comes with trajectory data only and does not have an explicit 3D model.
All data is compatible with the evaluation tools available for the TUM RGB-D dataset, and if your system can take TUM RGB-D format PNGs as input, the authors’ TUM RGB-D Compatible data will also work (given the correct camera parameters).
Source: [https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html](https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html)
Image Source: [https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html](https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html) | Provide a detailed description of the following dataset: ICL-NUIM |
EuRoC MAV | **EuRoC MAV** is a collection of visual-inertial datasets recorded on board a Micro Aerial Vehicle (MAV). It contains stereo images, synchronized IMU measurements, and accurate motion and structure ground truth. The datasets facilitate the design and evaluation of visual-inertial localization algorithms on real flight data.
Source: [https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets](https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets)
Image Source: [https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets](https://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets) | Provide a detailed description of the following dataset: EuRoC MAV |
Sugar Beets 2016 | **Sugar Beets 2016** is a robot dataset for plant classification as well as localization and mapping that covers the relevant stages for robotic intervention and weed control. It contains around 5TB of data recorded from a robot with a 4-channel multi-spectral camera and an RGB-D sensor to capture detailed information about the plantation.
Source: [https://www.ipb.uni-bonn.de/data/sugarbeets2016/](https://www.ipb.uni-bonn.de/data/sugarbeets2016/)
Image Source: [https://www.ipb.uni-bonn.de/data/sugarbeets2016/](https://www.ipb.uni-bonn.de/data/sugarbeets2016/) | Provide a detailed description of the following dataset: Sugar Beets 2016 |
HDM05 | **HDM05** is a MoCap (motion capture) dataset. It contains more than three hours of systematically recorded and well-documented motion capture data in the C3D as well as the ASF/AMC data format. HDM05 contains 2,337 sequences covering 130 motion classes performed by 5 different actors.
Source: [http://resources.mpi-inf.mpg.de/HDM05/](http://resources.mpi-inf.mpg.de/HDM05/)
Image Source: [https://arxiv.org/pdf/1908.05750.pdf](https://arxiv.org/pdf/1908.05750.pdf) | Provide a detailed description of the following dataset: HDM05 |
USYD CAMPUS | **USYD CAMPUS** is a driving dataset collected by Zhou et al. at the University of Sydney (USyd) campus and surroundings. The dataset contains more than 60 weeks of drives and is continuously updated. It includes multiple sensor modalities (camera, lidar, GPS, IMU, wheel encoder, steering angle, etc.) and covers various environmental conditions as well as diverse changes in illumination, scene structure, and pedestrian/vehicle traffic volumes.
Source: [http://its.acfr.usyd.edu.au/datasets/usyd-campus-dataset/](http://its.acfr.usyd.edu.au/datasets/usyd-campus-dataset/)
Image Source: [http://its.acfr.usyd.edu.au/datasets/usyd-campus-dataset/](http://its.acfr.usyd.edu.au/datasets/usyd-campus-dataset/) | Provide a detailed description of the following dataset: USYD CAMPUS |
TRECVID | **TRECVID** is a yearly set of competitions centered on video retrieval and indexing, hosting a variety of video data sets.
Source: [YouTube-BoundingBoxes: A Large High-PrecisionHuman-Annotated Data Set for Object Detection in Video](https://arxiv.org/abs/1702.00824)
Image Source: [https://www-nlpir.nist.gov/projects/tv2016/tv2016.html](https://www-nlpir.nist.gov/projects/tv2016/tv2016.html) | Provide a detailed description of the following dataset: TRECVID |
Partial-REID | **Partial-REID** is a specially designed partial person re-identification dataset that includes 600 images from 60 people, with 5 full-body images and 5 occluded images per person. These images were collected on a university campus by 6 cameras with different viewpoints, backgrounds, and types of occlusion. | Provide a detailed description of the following dataset: Partial-REID |
D-HAZY | The **D-HAZY** dataset is generated from the NYU Depth indoor image collection. D-HAZY contains a depth map for each indoor hazy image. It contains 1400+ real images and corresponding depth maps, used to synthesize hazy scenes based on Koschmieder's light propagation model. | Provide a detailed description of the following dataset: D-HAZY |
Middlebury 2014 | The **Middlebury 2014** dataset contains a set of 23 high resolution stereo pairs for which known camera calibration parameters and ground truth disparity maps obtained with a structured light scanner are available. The images in the Middlebury dataset all show static indoor scenes with varying difficulties including repetitive structures, occlusions, wiry objects as well as untextured areas. | Provide a detailed description of the following dataset: Middlebury 2014 |
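Given the calibration parameters and ground-truth disparity maps that accompany stereo pairs such as these, metric depth follows from the standard rectified-stereo relation Z = f·B/d. A minimal sketch (illustrative helper and values of my own choosing, not part of the dataset's tooling):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    # Rectified stereo: depth Z = f * B / d, with focal length f in pixels,
    # baseline B in metres, and disparity d in pixels.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (0 would mean infinite depth)")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: halving the disparity doubles the depth, which is why disparity errors on distant (low-disparity) objects translate into large depth errors.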
Partial-iLIDS | **Partial-iLIDS** is a dataset for occluded person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage. | Provide a detailed description of the following dataset: Partial-iLIDS |
Oxford Town Center | The **Oxford Town Center** dataset is a 5-minute video with 7500 frames annotated, which is divided into 6500 for training and 1000 for testing data for pedestrian detection. The data was recorded from a CCTV camera in Oxford for research and development into activity and face recognition.
Source: [LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior Learning](https://arxiv.org/abs/1606.08998)
Image Source: [https://megapixels.cc/oxford_town_centre/](https://megapixels.cc/oxford_town_centre/) | Provide a detailed description of the following dataset: Oxford Town Center |
VeRi-Wild | **VeRi-Wild** is the largest vehicle re-identification dataset (as of CVPR 2019). The dataset was captured by a large CCTV surveillance system consisting of 174 cameras over one month (30 × 24h) under unconstrained scenarios. It comprises 416,314 vehicle images of 40,671 identities. Evaluation on this dataset is split across three subsets: small, medium and large, comprising 3,000, 5,000 and 10,000 identities respectively (in probe and gallery sets). | Provide a detailed description of the following dataset: VeRi-Wild |
DIODE | Dense Indoor/Outdoor DEpth (**DIODE**) is the first standard dataset for monocular depth estimation comprising diverse indoor and outdoor scenes acquired with the same hardware setup. The training set consists of 8574 indoor and 16884 outdoor samples from 20 scans each. The validation set contains 325 indoor and 446 outdoor samples, each set from 10 different scans. The ground-truth density for the indoor training and validation splits is approximately 99.54% and 99%, respectively. The density of the outdoor sets is naturally lower, with 67.19% for the training and 78.33% for the validation subset. The indoor and outdoor ranges for the dataset are 50m and 300m, respectively. | Provide a detailed description of the following dataset: DIODE |
Airport | The **Airport** dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras.
Source: [An Evaluation of Deep CNN Baselines for Scene-Independent Person Re-Identification](https://arxiv.org/abs/1805.06086)
Image Source: [http://www.northeastern.edu/alert/transitioning-technology/alert-datasets/alert-airport-re-identification-dataset/](http://www.northeastern.edu/alert/transitioning-technology/alert-datasets/alert-airport-re-identification-dataset/) | Provide a detailed description of the following dataset: Airport |
Musk v1 | The Musk dataset describes a set of molecules, and the objective is to detect musks from non-musks. This dataset describes a set of 92 molecules of which 47 are judged by human experts to be musks and the remaining 45 molecules are judged to be non-musks. There are 166 features available that describe the molecules based on the shape of the molecule.
Source: [Estimation of Dimensions Contributing to Detected Anomalies with Variational Autoencoders](https://arxiv.org/abs/1811.04576) | Provide a detailed description of the following dataset: Musk v1 |
Musk v2 | The Musk2 dataset is a set of 102 molecules of which 39 are judged by human experts to be musks and the remaining 63 molecules are judged to be non-musks. Each instance corresponds to a possible configuration of a molecule. The 166 features that describe these molecules depend upon the exact shape, or conformation, of the molecule.
Source: [Confidence-Constrained Maximum Entropy Framework for Learning from Multi-Instance Data](https://arxiv.org/abs/1603.01901) | Provide a detailed description of the following dataset: Musk v2 |
RMRC 2014 | The **RMRC 2014** indoor dataset is a dataset for indoor semantic segmentation. It employs the NYU Depth V2 and Sun3D datasets to define the training set. The test data consists of newly acquired images.
Source: [https://cs.nyu.edu/~silberman/rmrc2014/indoor.php](https://cs.nyu.edu/~silberman/rmrc2014/indoor.php)
Image Source: [https://cs.nyu.edu/~silberman/rmrc2014/indoor.php](https://cs.nyu.edu/~silberman/rmrc2014/indoor.php) | Provide a detailed description of the following dataset: RMRC 2014 |
Middlebury 2001 | **Middlebury 2001** is a stereo dataset of indoor scenes with multiple handcrafted layouts.
Source: [https://vision.middlebury.edu/stereo/data/scenes2001/](https://vision.middlebury.edu/stereo/data/scenes2001/)
Image Source: [https://vision.middlebury.edu/stereo/data/scenes2001/](https://vision.middlebury.edu/stereo/data/scenes2001/) | Provide a detailed description of the following dataset: Middlebury 2001 |
Middlebury 2006 | **Middlebury 2006** is a stereo dataset of indoor scenes with multiple handcrafted layouts.
Source: [https://vision.middlebury.edu/stereo/data/scenes2006/](https://vision.middlebury.edu/stereo/data/scenes2006/)
Image Source: [https://vision.middlebury.edu/stereo/data/scenes2006/](https://vision.middlebury.edu/stereo/data/scenes2006/) | Provide a detailed description of the following dataset: Middlebury 2006 |
DukeMTMC-attribute | The images in the **DukeMTMC-attribute** dataset come from Duke University. There are 1,812 identities and 34,183 annotated bounding boxes in the DukeMTMC-attribute dataset. It contains 702 identities for training and 1,110 identities for testing, corresponding to 16,522 and 17,661 images respectively. The attributes are annotated at the identity level; every image in this dataset is annotated with 23 attributes.
**NOTE**: This dataset [has been retracted](https://exposing.ai/duke_mtmc/). | Provide a detailed description of the following dataset: DukeMTMC-attribute |
Occluded REID | **Occluded REID** is an occluded person dataset captured by mobile cameras, consisting of 2,000 images of 200 occluded persons. Each identity has 5 full-body person images and 5 occluded person images with different types of occlusion.
Source: [Foreground-aware Pyramid Reconstruction for Alignment-free Occluded Person Re-identification](https://arxiv.org/abs/1904.04975)
Image Source: [https://github.com/tinajia2012/ICME2018_Occluded-Person-Reidentification_datasets](https://github.com/tinajia2012/ICME2018_Occluded-Person-Reidentification_datasets) | Provide a detailed description of the following dataset: Occluded REID |
NYU Hand | The **NYU Hand** pose dataset contains 8252 test-set and 72757 training-set frames of captured RGBD data with ground-truth hand-pose information. For each frame, the RGBD data from 3 Kinects is provided: a frontal view and 2 side views. The training set contains samples from a single user only (Jonathan Tompson), while the test set contains samples from two users (Murphy Stein and Jonathan Tompson). A synthetic re-creation (rendering) of the hand pose is also provided for each view.
Source: [https://jonathantompson.github.io/NYU_Hand_Pose_Dataset.htm](https://jonathantompson.github.io/NYU_Hand_Pose_Dataset.htm)
Image Source: [https://jonathantompson.github.io/NYU_Hand_Pose_Dataset.htm](https://jonathantompson.github.io/NYU_Hand_Pose_Dataset.htm) | Provide a detailed description of the following dataset: NYU Hand |
Middlebury 2005 | **Middlebury 2005** is a stereo dataset of indoor scenes.
Source: [https://vision.middlebury.edu/stereo/data/scenes2005/](https://vision.middlebury.edu/stereo/data/scenes2005/)
Image Source: [https://vision.middlebury.edu/stereo/data/scenes2005/](https://vision.middlebury.edu/stereo/data/scenes2005/) | Provide a detailed description of the following dataset: Middlebury 2005 |
Middlebury MVS | **Middlebury MVS** is the earliest MVS dataset for multi-view stereo network evaluation. It contains two indoor objects with low-resolution (640 × 480) images and calibrated cameras.
Source: [BlendedMVS: A Large-scale Dataset for Generalized Multi-view Stereo Networks](https://arxiv.org/abs/1911.10127)
Image Source: [https://vision.middlebury.edu/mview/data/](https://vision.middlebury.edu/mview/data/) | Provide a detailed description of the following dataset: Middlebury MVS |
Middlebury 2003 | **Middlebury 2003** is a stereo dataset for indoor scenes.
Source: [https://vision.middlebury.edu/stereo/data/scenes2003/](https://vision.middlebury.edu/stereo/data/scenes2003/)
Image Source: [https://vision.middlebury.edu/stereo/data/scenes2003/](https://vision.middlebury.edu/stereo/data/scenes2003/) | Provide a detailed description of the following dataset: Middlebury 2003 |
OpeReid | The **OpeReid** dataset is a person re-identification dataset that consists of 7,413 images of 200 persons.
Source: [Scalable Metric Learning via Weighted Approximate Rank Component Analysis](https://arxiv.org/abs/1603.00370) | Provide a detailed description of the following dataset: OpeReid |
Market1501-Attributes | The **Market1501-Attributes** dataset is built from the Market1501 dataset, augmenting it with 28 hand-annotated attributes, such as gender, age, sleeve length, flags for items carried, as well as upper- and lower-clothes colors.
Source: [Color inference from semantic labeling for person search in videos](https://arxiv.org/abs/1911.13114)
Image Source: [https://github.com/vana77/Market-1501_Attribute](https://github.com/vana77/Market-1501_Attribute) | Provide a detailed description of the following dataset: Market1501-Attributes |
Friedman1 | The friedman1 data set is commonly used to test semi-supervised regression methods. | Provide a detailed description of the following dataset: Friedman1 |
NSynth | **NSynth** is a dataset of one-shot instrumental notes, containing 305,979 musical notes with unique pitch, timbre and envelope. The sounds were collected from 1006 instruments from commercial sample libraries and are annotated based on their source (acoustic, electronic or synthetic), instrument family and sonic qualities. The instrument families used in the annotation are bass, brass, flute, guitar, keyboard, mallet, organ, reed, string, synth lead and vocal. Four-second monophonic 16 kHz audio snippets (notes) were generated for the instruments. | Provide a detailed description of the following dataset: NSynth |
DCASE 2016 | **DCASE 2016** is a dataset for sound event detection. It consists of 20 short mono sound files for each of 11 sound classes (from office environments, like clearthroat, drawer, or keyboard), each file containing one sound event instance. Sound files are annotated with event on- and offset times, however silences between actual physical sounds (like with a phone ringing) are not marked and hence “included” in the event. | Provide a detailed description of the following dataset: DCASE 2016 |
DSTC7 Task 1 | The **DSTC7 Task 1** dataset is a dataset and task for goal-oriented dialogue. The data originates from human-human conversations, which is built from online resources, specifically the Ubuntu Internet Relay Chat (IRC) channel and an Advising dataset from the University of Michigan. | Provide a detailed description of the following dataset: DSTC7 Task 1 |
DSTC7 Task 2 | **DSTC7 Task 2** is a dataset and task for end-to-end conversation modeling. The goal is to generate conversational responses that go beyond trivial chitchat by injecting informative responses that are grounded in external knowledge. The data consists of conversational data from Reddit, and contextually relevant “facts” taken from the website that started the Reddit conversation. That is, the setup is grounded: each conversation in the data is about a specific web page that was linked at the start of the conversation.
Source: [http://workshop.colips.org/dstc7/](http://workshop.colips.org/dstc7/)
Image Source: [http://workshop.colips.org/dstc7/](http://workshop.colips.org/dstc7/) | Provide a detailed description of the following dataset: DSTC7 Task 2 |
Music21 | **Music21** is an untrimmed video dataset crawled by keyword query from YouTube. It contains music performances belonging to 21 categories. The dataset is relatively clean and was collected for the purpose of training and evaluating visual sound source separation models. | Provide a detailed description of the following dataset: Music21 |
RWC | The **RWC** (Real World Computing) Music Database is a copyright-cleared music database (DB) that is available to researchers as a common foundation for research. It contains around 100 complete songs with manually labeled section boundaries. For the 50 instruments, individual sounds at half-tone intervals were captured with several variations of playing styles, dynamics, instrument manufacturers and musicians. | Provide a detailed description of the following dataset: RWC |
DCASE 2013 | **DCASE 2013** is a dataset for sound event detection. It consists of audio-only recordings where individual sound events are prominent in an acoustic scene.
Source: [http://dcase.community/challenge2013/index](http://dcase.community/challenge2013/index)
Image Source: [https://link.springer.com/article/10.1186/s13636-018-0138-4](https://link.springer.com/article/10.1186/s13636-018-0138-4) | Provide a detailed description of the following dataset: DCASE 2013 |
LITIS Rouen | The LITIS-Rouen dataset is a dataset for audio scenes. It consists of 3026 examples of 19 scene categories. Each class is specific to a location such as a train station or an open market. The audio recordings have a duration of 30 seconds and a sampling rate of 22050 Hz. The dataset has a total duration of 1500 minutes.
Source: [Spatio-Temporal Attention Pooling for Audio Scene Classification](https://arxiv.org/abs/1904.03543)
Image Source: [https://www.researchgate.net/figure/Summary-of-Litis-Rouen-audio-scene-dataset_tbl1_329608235](https://www.researchgate.net/figure/Summary-of-Litis-Rouen-audio-scene-dataset_tbl1_329608235) | Provide a detailed description of the following dataset: LITIS Rouen |
YouTube-100M | The **YouTube-100M** data set consists of 100 million YouTube videos: 70M training videos, 10M evaluation videos, and 20M validation videos. Videos average 4.6 minutes each for a total of 5.4M training hours. Each of these videos is labeled with 1 or more topic identifiers from a set of 30,871 labels. There are an average of around 5 labels per video. The labels are assigned automatically based on a combination of metadata (title, description, comments, etc.), context, and image content for each video. The labels apply to the entire video and range from very generic (e.g. “Song”) to very specific (e.g. “Cormorant”).
Being machine generated, the labels are not 100% accurate and of the 30K labels, some are clearly acoustically relevant (“Trumpet”) and others are less so (“Web Page”). Videos often bear annotations with multiple degrees of specificity. For example, videos labeled with “Trumpet” are often labeled “Entertainment” as well, although no hierarchy is enforced. | Provide a detailed description of the following dataset: YouTube-100M |
TUT Acoustic Scenes 2017 | The **TUT Acoustic Scenes 2017** dataset is a collection of recordings from various acoustic scenes, all from distinct locations. For each recording location, 3-5 minute long audio recordings were captured and split into 10-second segments, which act as the unit of sample for this task. All audio clips were recorded with a 44.1 kHz sampling rate and 24-bit resolution.
Source: [Ensemble of deep neural networks for acoustic scene classification](https://arxiv.org/abs/1708.05826)
Image Source: [https://www.mathworks.com/help/audio/ug/acoustic-scene-recognition-using-late-fusion.html;jsessionid=95c969bc690c06fe42a7ed17f57e](https://www.mathworks.com/help/audio/ug/acoustic-scene-recognition-using-late-fusion.html;jsessionid=95c969bc690c06fe42a7ed17f57e) | Provide a detailed description of the following dataset: TUT Acoustic Scenes 2017 |
FSDnoisy18k | The **FSDnoisy18k** dataset is an open dataset containing 42.5 hours of audio across 20 sound event classes, including a small amount of manually-labeled data and a larger quantity of real-world noisy data. The audio content is taken from Freesound, and the dataset was curated using the Freesound Annotator. The noisy set of FSDnoisy18k consists of 15,813 audio clips (38.8h), and the test set consists of 947 audio clips (1.4h) with correct labels. The dataset features two main types of label noise: in-vocabulary (IV) and out-of-vocabulary (OOV). IV applies when, given an observed label that is incorrect or incomplete, the true or missing label is part of the target class set. Analogously, OOV means that the true or missing label is not covered by those 20 classes. | Provide a detailed description of the following dataset: FSDnoisy18k |
CHiME-Home | **CHiME-Home** is a dataset for sound source recognition in a domestic environment. It uses around 6.8 hours of domestic environment audio recordings. The recordings were obtained from the CHiME projects – computational hearing in multisource environments – where recording equipment was positioned inside an English Victorian semi-detached house. The recordings were selected from 22 sessions totalling 19.5 hours, with each session made between 7:30 in the morning and 20:00 in the evening. In the considered recordings, the equipment was placed in the lounge (sitting room) near the door opening onto a hallway, with the hallway opening onto a kitchen with no door. With the lounge door typically open, prominent sounds thus may originate from sources both in the lounge and kitchen.
The choice of permitted labels was motivated by the sources present in the considered acoustic environment: human speakers (c,m,f); human activity (p); television (v); household appliances (b). Further labels o, S, U respectively denote any other identifiable sounds, silence, and unidentifiable sounds. Labels S and U may only be assigned in isolation. Annotators were required to assign at least one label to a chunk: they may either assign one or more labels from the set {c,m,f,v,p,b,o}, or alternatively ‘flag’ the chunk with a single label from the set {S,U}. | Provide a detailed description of the following dataset: CHiME-Home |
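The CHiME-Home labelling rule (one or more source labels, or exactly one isolated flag) can be captured in a small validator. This is an illustrative sketch of the constraint as described, not official dataset tooling; the function name is my own.

```python
MULTI_LABELS = {"c", "m", "f", "v", "p", "b", "o"}  # sound-source labels
FLAG_LABELS = {"S", "U"}                            # silence / unidentifiable

def is_valid_annotation(labels):
    # A chunk gets one or more labels from the multi-label set, OR is
    # flagged with a single label from {S, U}; flags may only appear
    # in isolation, and at least one label is always required.
    labels = set(labels)
    if not labels:
        return False
    if labels & FLAG_LABELS:
        return len(labels) == 1
    return labels <= MULTI_LABELS
```

For example, {"c", "v"} (child speech plus television) is valid, while {"S", "c"} is not, since the silence flag cannot co-occur with any other label.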
Bach Chorales | Bach chorales is a univariate time series based on chorales, where the task is to learn generative grammar. The dataset consists of single-line melodies of 100 Bach chorales (originally 4 voices). The melody line can be studied independently of other voices. The grand challenge is to learn a generative grammar for stylistically valid chorales.
Source: [https://archive.ics.uci.edu/ml/datasets/Bach+Chorales](https://archive.ics.uci.edu/ml/datasets/Bach+Chorales)
Image Source: [https://arxiv.org/pdf/1612.01010.pdf](https://arxiv.org/pdf/1612.01010.pdf) | Provide a detailed description of the following dataset: Bach Chorales |
FAIR-Play | **FAIR-Play** is a video-audio dataset consisting of 1,871 video clips and their corresponding binaural audio clips recorded in a music room. The video clip and binaural clip of the same index are roughly aligned.
Source: [https://github.com/facebookresearch/FAIR-Play](https://github.com/facebookresearch/FAIR-Play)
Image Source: [https://github.com/facebookresearch/FAIR-Play](https://github.com/facebookresearch/FAIR-Play) | Provide a detailed description of the following dataset: FAIR-Play |
BirdVox-full-night | The **BirdVox-full-night** dataset contains 6 audio recordings, each about ten hours in duration. These recordings come from ROBIN autonomous recording units placed near Ithaca, NY, USA during fall 2015. They were captured on the night of September 23rd, 2015, by six different sensors, originally numbered 1, 2, 3, 5, 7, and 10.
Andrew Farnsworth used the Raven software to pinpoint every avian flight call in time and frequency. He found 35,402 flight calls in total. He estimates that about 25 different species of passerines (thrushes, warblers, and sparrows) are present in this recording. Species are not labeled in BirdVox-full-night, but it is possible to tell apart thrushes from warblers and sparrows by looking at the center frequencies of their calls. The annotation process took 102 hours.
Source: [https://wp.nyu.edu/birdvox/birdvox-full-night/](https://wp.nyu.edu/birdvox/birdvox-full-night/)
Image Source: [https://wp.nyu.edu/birdvox/birdvox-full-night/](https://wp.nyu.edu/birdvox/birdvox-full-night/) | Provide a detailed description of the following dataset: BirdVox-full-night |
POP909 | **POP909** is a dataset which contains multiple versions of the piano arrangements of 909 popular songs created by professional musicians. The main body of the dataset contains the vocal melody, the lead instrument melody, and the piano accompaniment for each song in MIDI format, aligned to the original audio files. Furthermore, annotations of tempo, beat, key, and chords are provided, where the tempo curves are hand-labelled and the others are produced by MIR algorithms. | Provide a detailed description of the following dataset: POP909 |
SINS | **SINS** is a database of continuous real-life audio recordings in a home environment. The home is a vacation home where one person lived during the recording period of over one week. It was collected using a network of 13 microphone arrays distributed over multiple rooms. Each microphone array consisted of 4 linearly arranged microphones. Recordings were annotated based on the daily activities performed in the environment.
Source: [https://www.cs.tut.fi/sgn/arg/dcase2017/documents/workshop_papers/DCASE2017Workshop_Dekkers_141.pdf](https://www.cs.tut.fi/sgn/arg/dcase2017/documents/workshop_papers/DCASE2017Workshop_Dekkers_141.pdf)
Image Source: [https://www.cs.tut.fi/sgn/arg/dcase2017/documents/workshop_papers/DCASE2017Workshop_Dekkers_141.pdf](https://www.cs.tut.fi/sgn/arg/dcase2017/documents/workshop_papers/DCASE2017Workshop_Dekkers_141.pdf) | Provide a detailed description of the following dataset: SINS |
Robbie Williams | **Robbie Williams** is a dataset of 65 songs by Robbie Williams. It consists of chords, keys and beats. The dataset does not include audio.
Source: [A BI-DIRECTIONAL TRANSFORMER FOR MUSICAL CHORD RECOGNITION](https://arxiv.org/abs/1907.02698)
Image Source: [https://www.rwdb.info/](https://www.rwdb.info/) | Provide a detailed description of the following dataset: Robbie Williams |
MuseData | **MuseData** is an electronic library of orchestral and piano classical music from CCARH. It consists of 783 files totaling around 3 MB. | Provide a detailed description of the following dataset: MuseData |
URBAN-SED | **URBAN-SED** is a dataset of 10,000 soundscapes with sound event annotations generated using the Scaper library. The dataset totals almost 30 hours and includes close to 50,000 annotated sound events. Every soundscape is 10 seconds long and has a background of Brownian noise resembling the typical “hum” often heard in urban environments. Every soundscape contains between 1-9 sound events from the following classes: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, siren and street_music.
The source material for the sound events are clips from the UrbanSound8K dataset. URBAN-SED comes pre-sorted into three sets: train, validate and test. There are 6,000 soundscapes in the training set, generated using clips from folds 1-6 in UrbanSound8K; 2,000 soundscapes in the validation set, generated using clips from folds 7-8; and 2,000 soundscapes in the test set, generated using clips from folds 9-10.
Source: [http://urbansed.weebly.com/](http://urbansed.weebly.com/)
Image Source: [http://urbansed.weebly.com/](http://urbansed.weebly.com/) | Provide a detailed description of the following dataset: URBAN-SED |
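The generation recipe described for URBAN-SED (a fixed-length background plus 1-9 foreground events with class, onset and offset) can be sketched as below. This is a toy illustration of the annotation structure, not the actual Scaper code; the function name and duration bounds are assumptions of mine.

```python
import random

CLASSES = ["air_conditioner", "car_horn", "children_playing", "dog_bark",
           "drilling", "engine_idling", "gun_shot", "jackhammer",
           "siren", "street_music"]

def generate_soundscape_annotation(duration_s=10.0, rng=None):
    # Place 1-9 events on a duration_s background; each event gets a class,
    # an onset, and an offset that stays within the soundscape.  Returns the
    # event list that would accompany the rendered audio.
    rng = rng or random.Random()
    events = []
    for _ in range(rng.randint(1, 9)):
        event_dur = rng.uniform(0.5, duration_s / 2)  # assumed bounds
        onset = rng.uniform(0.0, duration_s - event_dur)
        events.append({
            "label": rng.choice(CLASSES),
            "onset": round(onset, 3),
            "offset": round(onset + event_dur, 3),
        })
    return sorted(events, key=lambda e: e["onset"])
```

Seeding the generator (`random.Random(seed)`) makes a soundscape's annotation reproducible, which mirrors how synthetic datasets like this one can be regenerated exactly.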