| dataset_name | description | prompt |
|---|---|---|
Pan+ChiPhoto | The **Pan+ChiPhoto** dataset is a Chinese scene character dataset, built by combining two datasets: ChiPhoto and Pan_Chinese_Character. The images were mainly captured outdoors in Beijing and Shanghai, China, and cover various scenes such as signs, boards, advertisements, banners, and objects with text printed on their surfaces.
Source: [Boosting Scene Character Recognition by Learning Canonical Forms of Glyphs](https://arxiv.org/abs/1907.05577)
Image Source: [https://www.researchgate.net/publication/318679069_Multi-order_Co-occurrence_Activations_Encoded_with_Fisher_Vector_for_Scene_Character_Recognition](https://www.researchgate.net/publication/318679069_Multi-order_Co-occurrence_Activations_Encoded_with_Fisher_Vector_for_Scene_Character_Recognition) | Provide a detailed description of the following dataset: Pan+ChiPhoto |
ISI_Bengali_Character | The **ISI_Bengali_Character** dataset contains 158 classes of Bengali numerals, characters or their parts. 19,530 Bengali character samples are available. Most of the images in the dataset are synthesized.
Source: [Boosting Scene Character Recognition by Learning Canonical Forms of Glyphs](https://arxiv.org/abs/1907.05577)
Image Source: [https://www.isical.ac.in/~ujjwal/download/SegmentedSceneCharacter.html](https://www.isical.ac.in/~ujjwal/download/SegmentedSceneCharacter.html) | Provide a detailed description of the following dataset: ISI_Bengali_Character |
Florentine | The **Florentine** dataset is a facial gesture dataset containing facial clips from 160 subjects (both male and female), in which gestures were either artificially generated according to a specific request or genuinely induced by a shown stimulus. 1032 clips were captured for posed expressions and 1745 clips for induced facial expressions, amounting to a total of 2777 video clips. Genuine facial expressions were induced in subjects using visual stimuli, i.e. videos selected randomly from a bank of YouTube videos chosen to elicit a specific emotion.
Source: [Deep video gesture recognition using illumination invariants](https://arxiv.org/abs/1603.06531)
Image Source: [https://www.micc.unifi.it/resources/datasets/florence-3d-faces/](https://www.micc.unifi.it/resources/datasets/florence-3d-faces/) | Provide a detailed description of the following dataset: Florentine |
INRIA DLFD | The **INRIA Dense Light Field Dataset** (DLFD) is a dataset for testing depth estimation methods on light fields. DLFD contains 39 scenes with a disparity range of [-4,4] pixels. The light fields have a spatial resolution of 512 x 512 and an angular resolution of 9 x 9.
Source: [http://clim.inria.fr/Datasets/InriaSynLF/index.html](http://clim.inria.fr/Datasets/InriaSynLF/index.html)
Image Source: [http://clim.inria.fr/Datasets/InriaSynLF/index.html](http://clim.inria.fr/Datasets/InriaSynLF/index.html) | Provide a detailed description of the following dataset: INRIA DLFD |
INRIA SLFD | The **INRIA Sparse Light Field Dataset** (SLFD) is a dataset for testing depth estimation methods on light fields. SLFD contains 53 scenes with a disparity range of [-20,20] pixels. The light fields have a spatial resolution of 512 x 512 and an angular resolution of 9 x 9.
Source: [http://clim.inria.fr/Datasets/InriaSynLF/index.html](http://clim.inria.fr/Datasets/InriaSynLF/index.html)
Image Source: [http://clim.inria.fr/Datasets/InriaSynLF/index.html](http://clim.inria.fr/Datasets/InriaSynLF/index.html) | Provide a detailed description of the following dataset: INRIA SLFD |
AIDS Antiviral Screen | The **AIDS Antiviral Screen** dataset is a dataset of screens that checked tens of thousands of compounds for evidence of anti-HIV activity. The available screen results are provided as chemical graph-structured data for these compounds. | Provide a detailed description of the following dataset: AIDS Antiviral Screen |
Retinal Microsurgery | The **Retinal Microsurgery** dataset is a dataset for surgical instrument tracking. It consists of 18 in-vivo sequences, each with 200 frames of resolution 1920 × 1080 pixels. The dataset is further classified into four instrument-dependent subsets. The annotated tool joints are n=3 and semantic classes c=2 (tool and background).
Source: [Concurrent Segmentation and Localization for Tracking of Surgical Instruments](https://arxiv.org/abs/1703.10701)
Image Source: [https://sites.google.com/site/sznitr/research/retinalmicrosurgery](https://sites.google.com/site/sznitr/research/retinalmicrosurgery) | Provide a detailed description of the following dataset: Retinal Microsurgery |
Daimler Monocular Pedestrian Detection | The **Daimler Monocular Pedestrian Detection** dataset is a dataset for pedestrian detection in urban environments. The training set contains 15560 pedestrian samples (image cut-outs at 48×96 resolution) and 6744 additional full images without pedestrians for extracting negative samples. The test set contains an independent sequence with more than 21790 images and 56492 pedestrian labels (fully visible or partially occluded), captured from a vehicle during a 27-minute drive through urban traffic.
Source: [A Large Scale Urban Surveillance Video Dataset for Multiple-Object Tracking and Behavior Analysis](https://arxiv.org/abs/1904.11784)
Image Source: [http://www.gavrila.net/Datasets/Daimler_Pedestrian_Benchmark_D/Daimler_Mono_Ped__Detection_Be/daimler_mono_ped__detection_be.html](http://www.gavrila.net/Datasets/Daimler_Pedestrian_Benchmark_D/Daimler_Mono_Ped__Detection_Be/daimler_mono_ped__detection_be.html) | Provide a detailed description of the following dataset: Daimler Monocular Pedestrian Detection |
ETHZ-Shape | The ETHZ Shape dataset contains images of five diverse shape-based classes, collected from Flickr and Google Images. The main challenges it offers are clutter, intra-class shape variability, and scale changes. The authors deliberately selected several images where the object comprises only a rather small portion of the image, and made an effort to include objects appearing at a wide range of scales. The objects are mostly unoccluded and are all taken from approximately the same viewpoint (the side).
Source: [http://calvin-vision.net/datasets/ethz-shape-classes/](http://calvin-vision.net/datasets/ethz-shape-classes/)
Image Source: [http://calvin-vision.net/datasets/ethz-shape-classes/](http://calvin-vision.net/datasets/ethz-shape-classes/) | Provide a detailed description of the following dataset: ETHZ-Shape |
L-Bird | The **L-Bird** (**Large-Bird**) dataset contains nearly 4.8 million images which are obtained by searching images of a total of 10,982 bird species from the Internet. | Provide a detailed description of the following dataset: L-Bird |
Extended BBC Pose | **Extended BBC Pose** is a pose estimation dataset which extends the BBC Pose dataset with 72 additional training videos. Combined with the original BBC TV dataset, the dataset contains 92 videos (82 training, 5 validation and 5 testing), i.e. around 7 million frames. The frames of the new 72 videos are automatically assigned joint locations (used as ground truth for training) with the tracker of Charles et al. IJCV'13.
Source: [https://www.robots.ox.ac.uk/~vgg/data/pose/](https://www.robots.ox.ac.uk/~vgg/data/pose/)
Image Source: [https://www.robots.ox.ac.uk/~vgg/data/pose/](https://www.robots.ox.ac.uk/~vgg/data/pose/) | Provide a detailed description of the following dataset: Extended BBC Pose |
Short BBC Pose | **Short BBC Pose** contains five one-hour-long videos of sign language signers, each with a different sleeve length (in contrast to BBC Pose and Extended BBC Pose, which only contain signers with moderately long sleeves). Each of the five videos has 200 test frames (manually annotated with joint locations), amounting to 1,000 test frames in total. Test frames were selected by the authors to contain a diverse range of poses.
Source: [https://www.robots.ox.ac.uk/~vgg/data/pose/index.html#citation](https://www.robots.ox.ac.uk/~vgg/data/pose/index.html#citation)
Image Source: [https://www.robots.ox.ac.uk/~vgg/publications/2013/Charles13/charles13.pdf](https://www.robots.ox.ac.uk/~vgg/publications/2013/Charles13/charles13.pdf) | Provide a detailed description of the following dataset: Short BBC Pose |
ChaLearn Pose | **ChaLearn Pose** is a subset of the ChaLearn 2013 Multi-modal gesture dataset from Escalera et al. ICMI'13, which contains 23 hours of Kinect data of 27 persons performing 20 Italian gestures. The data includes RGB, depth, foreground segmentations and full body skeletons. In this dataset, both the training and testing labels are noisy (from Kinect).
Source: [https://www.robots.ox.ac.uk/~vgg/data/pose/index.html#citation](https://www.robots.ox.ac.uk/~vgg/data/pose/index.html#citation)
Image Source: [http://sunai.uoc.edu/chalearnLAP/](http://sunai.uoc.edu/chalearnLAP/) | Provide a detailed description of the following dataset: ChaLearn Pose |
VoxCeleb2 | **VoxCeleb2** is a large-scale speaker recognition dataset obtained automatically from open-source media. VoxCeleb2 consists of over a million utterances from over 6k speakers. Since the dataset is collected ‘in the wild’, the speech segments are corrupted with real-world noise including laughter, cross-talk, channel effects, music and other sounds. The dataset is also multilingual, with speech from speakers of 145 different nationalities, covering a wide range of accents, ages, ethnicities and languages. The dataset is audio-visual, so it is also useful for a number of other applications, for example visual speech synthesis, speech separation, cross-modal transfer from face to voice or vice versa, and training face recognition from video to complement existing face recognition datasets. | Provide a detailed description of the following dataset: VoxCeleb2 |
VCTK | The CSTR **VCTK** Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive. The newspaper texts were taken from Herald Glasgow, with permission from Herald & Times Group. Each speaker has a different set of newspaper texts, selected with a greedy algorithm that increases the contextual and phonetic coverage. The text selection algorithm is described in: C. Veaux, J. Yamagishi and S. King, "The voice bank corpus: Design, collection and data analysis of a large regional accent speech database," https://doi.org/10.1109/ICSDA.2013.6709856. The rainbow passage and elicitation paragraph are the same for all speakers. The rainbow passage can be found at the International Dialects of English Archive (http://web.ku.edu/~idea/readings/rainbow.htm). The elicitation paragraph is identical to the one used for the speech accent archive (http://accent.gmu.edu), whose details can be found at http://www.ualberta.ca/~aacl2009/PDFs/WeinbergerKunath2009AACL.pdf.
All speech data was recorded using an identical setup: an omni-directional microphone (DPA 4035) and a small-diaphragm condenser microphone with very wide bandwidth (Sennheiser MKH 800), at a 96 kHz sampling frequency and 24 bits, in a hemi-anechoic chamber at the University of Edinburgh. (Two speakers, p280 and p315, had technical issues with the recordings made using the MKH 800.) All recordings were converted to 16 bits, downsampled to 48 kHz, and manually end-pointed. | Provide a detailed description of the following dataset: VCTK |
DIRHA | **DIRHA**-English is a multi-microphone database composed of 1-minute-long real and simulated sequences. The overall corpus comprises different types of sequences, including: 1) phonetically-rich sentences; 2) WSJ 5k utterances; 3) WSJ 20k utterances; 4) conversational speech (also including keywords and commands).
The sequences are available for both UK and US English at 48 kHz. The DIRHA-English dataset makes it possible to work with a very large number of microphone channels, to use microphone arrays with different characteristics, and to consider different speech recognition tasks (e.g., phone-loop, keyword spotting, ASR with small and very large language models). | Provide a detailed description of the following dataset: DIRHA |
VoxForge | **VoxForge** is an open speech dataset that was set up to collect transcribed speech for use with Free and Open Source Speech Recognition Engines (on Linux, Windows and Mac).
Image Source: [http://www.voxforge.org/home](http://www.voxforge.org/home) | Provide a detailed description of the following dataset: VoxForge |
Penn Action | The **Penn Action** Dataset contains 2326 video sequences of 15 different actions and human joint annotations for each sequence. | Provide a detailed description of the following dataset: Penn Action |
FLIC | The **FLIC** dataset contains 5003 images from popular Hollywood movies. The images were obtained by running a state-of-the-art person detector on every tenth frame of 30 movies. People detected with high confidence (roughly 20K candidates) were then sent to the crowdsourcing marketplace Amazon Mechanical Turk to obtain ground-truth labelling. Each image was annotated by five Turkers to label 10 upper-body joints. The median-of-five labelling was taken for each image to be robust to outlier annotations. Finally, images were rejected manually if the person was occluded or severely non-frontal. | Provide a detailed description of the following dataset: FLIC |
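A minimal sketch of the median-of-five aggregation described above, assuming hypothetical per-annotator joint coordinates stored as a NumPy array:

```python
import numpy as np

# Hypothetical annotations for one image: 5 Turkers x 10 joints x (x, y).
rng = np.random.default_rng(0)
annotations = rng.normal(loc=100.0, scale=3.0, size=(5, 10, 2))

# The per-joint median over the 5 annotators is robust to a single
# outlier labelling, which is the property FLIC relies on.
ground_truth = np.median(annotations, axis=0)
print(ground_truth.shape)  # (10, 2)
```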
WikiArt | **WikiArt** contains paintings from 195 different artists. The dataset has 42129 images for training and 10628 images for testing. | Provide a detailed description of the following dataset: WikiArt |
Sim10k | **SIM10k** is a synthetic dataset containing 10,000 images rendered from the video game Grand Theft Auto V (GTA5). | Provide a detailed description of the following dataset: Sim10k |
EYEDIAP | The **EYEDIAP** dataset is a dataset for gaze estimation from remote RGB and RGB-D (standard vision and depth) cameras. The recording methodology was designed to systematically include, and isolate, most of the variables that affect remote gaze estimation algorithms:
* Head pose variations.
* Person variation.
* Changes in ambient and sensing conditions.
* Types of target: screen or 3D object. | Provide a detailed description of the following dataset: EYEDIAP |
G3D | The Gaming 3D Dataset (**G3D**) focuses on real-time action recognition in a gaming scenario. It contains 10 subjects performing 20 gaming actions: “punch right”, “punch left”, “kick right”, “kick left”, “defend”, “golf swing”, “tennis swing forehand”, “tennis swing backhand”, “tennis serve”, “throw bowling ball”, “aim and fire gun”, “walk”, “run”, “jump”, “climb”, “crouch”, “steer a car”, “wave”, “flap” and “clap”. | Provide a detailed description of the following dataset: G3D |
O-HAZE | The **O-HAZE** dataset contains 35 hazy images (size 2833×4657 pixels) for training. It has 5 hazy images for validation along with their corresponding ground truth images. | Provide a detailed description of the following dataset: O-HAZE |
UMIST | The Sheffield (previously **UMIST**) Face Database consists of 564 images of 20 individuals (mixed race/gender/appearance). Each individual is shown in a range of poses from profile to frontal views, each in a separate directory labelled 1a, 1b, … 1t, with images numbered consecutively as they were taken. The files are all in PGM format, approximately 220 x 220 pixels, in 8-bit greyscale (256 grey levels).
Source: [https://www.visioneng.org.uk/datasets/](https://www.visioneng.org.uk/datasets/)
Image Source: [https://www.visioneng.org.uk/datasets/](https://www.visioneng.org.uk/datasets/) | Provide a detailed description of the following dataset: UMIST |
CVUSA | The **CVUSA** dataset is a cross-view matching dataset of street-view and aerial-view images from different regions of the US. The task is to localize street-view images without GPS coordinates by matching them against aerial imagery. Google Street View panoramas are used as ground images, and matching aerial images at zoom level 19 are obtained from Microsoft Bing Maps. The dataset comprises 35,532 image pairs for training and 8,884 image pairs for testing, and recall is the primary evaluation metric. | Provide a detailed description of the following dataset: CVUSA |
FC100 | The **FC100** dataset (**Fewshot-CIFAR100**) is a newly split dataset based on CIFAR-100 for few-shot learning. It contains 20 high-level categories which are divided into 12, 4, 4 categories for training, validation and test. There are 60, 20, 20 low-level classes in the corresponding split containing 600 images of size 32 × 32 per class. Smaller image size makes it more challenging for few-shot learning. | Provide a detailed description of the following dataset: FC100 |
PASCAL-5i | **PASCAL-5i** is a dataset used to evaluate few-shot segmentation. The dataset is sub-divided into 4 folds, each containing 5 classes. A fold contains labelled samples from 5 classes that are used for evaluating the few-shot learning method; the remaining 15 classes are used for training. | Provide a detailed description of the following dataset: PASCAL-5i |
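A sketch of the fold construction, under the commonly used convention that fold i holds classes 5i+1 through 5i+5 of PASCAL VOC's 20 classes (an assumption here; consult the original paper for the exact class assignment):

```python
# PASCAL VOC's 20 object classes in their conventional order.
VOC_CLASSES = [
    "aeroplane", "bicycle", "bird", "boat", "bottle",
    "bus", "car", "cat", "chair", "cow",
    "diningtable", "dog", "horse", "motorbike", "person",
    "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]

def pascal_5i_split(fold: int):
    """Return (test_classes, train_classes) for one of the 4 folds."""
    assert 0 <= fold <= 3
    test = VOC_CLASSES[5 * fold : 5 * (fold + 1)]      # 5 held-out classes
    train = [c for c in VOC_CLASSES if c not in test]  # remaining 15 classes
    return test, train

print(pascal_5i_split(0)[0])  # ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle']
```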
TrajNet | The **TrajNet** Challenge represents a large multi-scenario forecasting benchmark. The challenge consists of predicting 3161 human trajectories, observing for each trajectory 8 consecutive ground-truth values (3.2 seconds), i.e. t−7,t−6,…,t, in world-plane coordinates (the so-called world plane Human-Human protocol), and forecasting the following 12 (4.8 seconds), i.e. t+1,…,t+12. The 8-12-value protocol is consistent with most trajectory forecasting approaches, which usually focus on the 5-dataset scenario of ETH-univ, ETH-hotel, UCY-zara01, UCY-zara02 and UCY-univ. TrajNet substantially extends the 5-dataset scenario by diversifying the training data, thus stressing the flexibility and generalization an approach has to exhibit when it comes to unseen scenery/situations. In fact, TrajNet is a superset of diverse datasets that requires training on four families of trajectories, namely 1) BIWI Hotel (orthogonal bird's-eye flight view, moving people), 2) Crowds UCY (3 datasets, tilted bird's-eye view, cameras mounted on buildings or utility poles, moving people), 3) MOT PETS (multisensor, different human activities) and 4) Stanford Drone Dataset (8 scenes, high orthogonal bird's-eye flight view, different agents such as people, cars, etc.), for a total of 11448 trajectories. Testing is requested on diverse partitions of BIWI Hotel, Crowds UCY and the Stanford Drone Dataset, and is evaluated by a dedicated server (ground-truth testing data is unavailable to applicants). | Provide a detailed description of the following dataset: TrajNet |
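A minimal sketch of TrajNet's 8-in/12-out windowing protocol on a hypothetical trajectory of world-plane coordinates (the 2.5 Hz frame rate is implied by 8 frames spanning 3.2 seconds):

```python
import numpy as np

OBS_LEN, PRED_LEN = 8, 12  # 3.2 s observed, 4.8 s forecast at 2.5 Hz

# Hypothetical trajectory: 20 timesteps of (x, y) world-plane coordinates.
trajectory = np.cumsum(np.full((OBS_LEN + PRED_LEN, 2), 0.4), axis=0)

observed = trajectory[:OBS_LEN]      # t-7 ... t, given to the model
ground_truth = trajectory[OBS_LEN:]  # t+1 ... t+12, to be forecast
print(observed.shape, ground_truth.shape)  # (8, 2) (12, 2)
```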
Set12 | **Set12** is a collection of 12 grayscale images of different scenes that are widely used for evaluation of image denoising methods. The size of each image is 256×256. | Provide a detailed description of the following dataset: Set12 |
TotalCapture | The **TotalCapture** dataset consists of 5 subjects performing several activities such as walking, acting, a range-of-motion sequence (ROM) and freestyle motions, recorded using 8 calibrated, static HD RGB cameras and 13 IMUs attached to the head, sternum, waist, upper arms, lower arms, upper legs, lower legs and feet. The dataset has publicly released foreground mattes and RGB images. Ground-truth poses are obtained using a marker-based motion capture system with markers less than 5 mm in size. All data is synchronised and recorded at a framerate of 60 Hz, providing ground-truth poses as joint positions. | Provide a detailed description of the following dataset: TotalCapture |
I-HAZE | The **I-HAZE** dataset contains 25 indoor hazy images (size 2833×4657 pixels) for training. It has 5 hazy images for validation along with their corresponding ground truth images.
Source: [Single image dehazing for a variety of haze scenarios using back projected pyramid network](https://arxiv.org/abs/2008.06713)
Image Source: [https://data.vision.ee.ethz.ch/cvl/ntire18//i-haze/](https://data.vision.ee.ethz.ch/cvl/ntire18//i-haze/) | Provide a detailed description of the following dataset: I-HAZE |
SEED | The **SEED** dataset contains subjects' EEG signals recorded while they were watching film clips. The film clips are carefully selected so as to induce different types of emotion: positive, negative, and neutral. | Provide a detailed description of the following dataset: SEED |
SHREC | The **SHREC** dataset contains 14 dynamic gestures performed by 28 participants (all right-handed) and captured by the Intel RealSense short-range depth camera. Each gesture is performed between 1 and 10 times by each participant in two ways: using one finger, and using the whole hand. The dataset is therefore composed of 2800 captured sequences. The depth image, with a resolution of 640x480, and the coordinates of 22 joints (both in the 2D depth image space and in the 3D world space) are saved for each frame of each sequence.
Source: [Exploiting Recurrent Neural Networks and Leap Motion Controller for Sign Language and Semaphoric Gesture Recognition](https://arxiv.org/abs/1803.10435)
Image Source: [http://tosca.cs.technion.ac.il/book/shrec.html](http://tosca.cs.technion.ac.il/book/shrec.html) | Provide a detailed description of the following dataset: SHREC |
Florence3D | The **Florence3D** dataset, collected at the University of Florence during 2012, was captured using a Kinect camera. It includes 9 activities: wave, drink from a bottle, answer phone, clap, tight lace, sit down, stand up, read watch, bow. During acquisition, 10 subjects were asked to perform each action two or three times, resulting in a total of 215 activity samples. | Provide a detailed description of the following dataset: Florence3D |
SNAP | **SNAP** is a collection of large network datasets. It includes graphs representing social networks, citation networks, web graphs, online communities, online reviews and more.
[Social networks](http://snap.stanford.edu/data/#socnets) : online social networks, edges represent interactions between people
[Networks with ground-truth communities](http://snap.stanford.edu/data/#communities) : ground-truth network communities in social and information networks
[Communication networks](http://snap.stanford.edu/data/#email) : email communication networks with edges representing communication
[Citation networks](http://snap.stanford.edu/data/#citnets) : nodes represent papers, edges represent citations
[Collaboration networks](http://snap.stanford.edu/data/#canets) : nodes represent scientists, edges represent collaborations (co-authoring a paper)
[Web graphs](http://snap.stanford.edu/data/#web) : nodes represent webpages and edges are hyperlinks
[Amazon networks](http://snap.stanford.edu/data/#amazon) : nodes represent products and edges link commonly co-purchased products
[Internet networks](http://snap.stanford.edu/data/#p2p) : nodes represent computers and edges communication
[Road networks](http://snap.stanford.edu/data/#road) : nodes represent intersections and edges roads connecting the intersections
[Autonomous systems](http://snap.stanford.edu/data/#as) : graphs of the internet
[Signed networks](http://snap.stanford.edu/data/#signnets) : networks with positive and negative edges (friend/foe, trust/distrust)
[Location-based online social networks](http://snap.stanford.edu/data/#locnet) : social networks with geographic check-ins
[Wikipedia networks, articles, and metadata](http://snap.stanford.edu/data/#wikipedia) : talk, editing, voting, and article data from Wikipedia
[Temporal networks](http://snap.stanford.edu/data/#temporal) : networks where edges have timestamps
[Twitter and Memetracker](http://snap.stanford.edu/data/#twitter) : memetracker phrases, links and 467 million Tweets
[Online communities](http://snap.stanford.edu/data/#onlinecoms) : data from online communities such as Reddit and Flickr
[Online reviews](http://snap.stanford.edu/data/#reviews) : data from online review systems such as BeerAdvocate and Amazon
[User actions](http://snap.stanford.edu/data/#actions) : actions of users on social platforms.
[Face-to-face communication networks](http://snap.stanford.edu/data/#face2face) : networks of face-to-face (non-online) interactions
[Graph classification datasets](http://snap.stanford.edu/data/#disjointgraphs) : disjoint graphs from different classes
Image Source: [https://snap.stanford.edu/data/](https://snap.stanford.edu/data/) | Provide a detailed description of the following dataset: SNAP |
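Most SNAP graphs are distributed as plain edge lists with `#`-prefixed comment headers, so a hedged loading sketch with NetworkX (file name hypothetical) looks like this:

```python
import networkx as nx

# Hypothetical path to a downloaded and unzipped SNAP edge list.
# SNAP files are whitespace-separated "src dst" pairs; lines starting
# with '#' are metadata comments.
G = nx.read_edgelist(
    "ca-AstroPh.txt", comments="#", nodetype=int, create_using=nx.Graph())

print(G.number_of_nodes(), G.number_of_edges())
```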
BioASQ | **BioASQ** is a question answering dataset. Instances in the BioASQ dataset are composed of a question (Q), human-annotated answers (A), and the relevant contexts (C) (also called snippets). | Provide a detailed description of the following dataset: BioASQ |
STRING | **STRING** is a collection of protein-protein interaction (PPI) networks. | Provide a detailed description of the following dataset: STRING |
OpenWebText | **OpenWebText** is an open-source recreation of the [WebText](/dataset/webtext) corpus. The text is web content extracted from URLs shared on Reddit with at least three upvotes (38GB in total). | Provide a detailed description of the following dataset: OpenWebText |
Foursquare | The **Foursquare** dataset consists of check-in data for different cities. One subset contains check-ins in NYC and Tokyo collected over about 10 months (from 12 April 2012 to 16 February 2013). It contains 227,428 check-ins in New York City and 573,703 check-ins in Tokyo. Each check-in is associated with its timestamp, its GPS coordinates and its semantic meaning (represented by fine-grained venue categories).
Another subset contains long-term (about 18 months, from April 2012 to September 2013) global-scale check-in data collected from Foursquare. It contains 33,278,683 check-ins by 266,909 users on 3,680,126 venues (in 415 cities in 77 countries). Those 415 cities are the 415 most checked-in cities by Foursquare users in the world, each containing at least 10K check-ins.
Source: [https://sites.google.com/site/yangdingqi/home/foursquare-dataset](https://sites.google.com/site/yangdingqi/home/foursquare-dataset) | Provide a detailed description of the following dataset: Foursquare |
PeerRead | **PeerRead** is a dataset of scientific peer reviews. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a subset of the papers. | Provide a detailed description of the following dataset: PeerRead |
Kinship | This relational database consists of 24 unique names in two families (they have equivalent structures).
Source: [https://archive.ics.uci.edu/ml/datasets/kinship](https://archive.ics.uci.edu/ml/datasets/kinship) | Provide a detailed description of the following dataset: Kinship |
Mindboggle | **Mindboggle** is a large publicly available dataset of manually labeled brain MRI. It consists of 101 subjects collected from different sites, with cortical meshes varying from 102K to 185K vertices. Each brain surface contains 25 or 31 manually labeled parcels. | Provide a detailed description of the following dataset: Mindboggle |
Learning to Rank Challenge | The Yahoo! **Learning to Rank Challenge** dataset consists of 709,877 documents encoded in 700 features and sampled from query logs of the Yahoo! search engine, spanning 29,921 queries. | Provide a detailed description of the following dataset: Learning to Rank Challenge |
Linux | The LINUX dataset consists of 48,747 Program Dependence Graphs (PDG) generated from the **Linux** kernel. Each graph represents a function, where a node represents one statement and an edge represents the dependency between two statements.
Source: [Convolutional Set Matching for Graph Similarity](https://arxiv.org/abs/1810.10866) | Provide a detailed description of the following dataset: Linux |
AMiner | The **AMiner** dataset is a collection of different relational datasets. It consists of a set of relational networks such as citation networks, academic social networks and topic-paper-author networks, among others. | Provide a detailed description of the following dataset: AMiner |
Email-EU | EmailEU is a directed temporal network constructed from email exchanges in a large European research institution over an 803-day period. It contains 986 email addresses as nodes and 332,334 emails as edges with timestamps. There are 42 ground-truth departments in the dataset.
Source: [gl2vec: Learning Feature Representation Using Graphlets for Directed Networks](https://arxiv.org/abs/1812.05473) | Provide a detailed description of the following dataset: Email-EU |
IMDB-BINARY | **IMDB-BINARY** is a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actresses, and there is an edge between two nodes if they appear in the same movie. These graphs are derived from the Action and Romance genres. | Provide a detailed description of the following dataset: IMDB-BINARY |
NCBI Disease | The **NCBI Disease** corpus consists of 793 PubMed abstracts, which are separated into training (593), development (100) and test (100) subsets. The NCBI Disease corpus is annotated with disease mentions, using concept identifiers from either MeSH or OMIM. | Provide a detailed description of the following dataset: NCBI Disease |
arXiv Astro-Ph | The Arxiv ASTRO-PH (Astro Physics) collaboration network is from the e-print arXiv and covers scientific collaborations between authors of papers submitted to the Astro Physics category. If author i co-authored a paper with author j, the graph contains an undirected edge between i and j. If a paper is co-authored by k authors, this generates a completely connected (sub)graph on k nodes.
Source: [https://snap.stanford.edu/data/ca-AstroPh.html](https://snap.stanford.edu/data/ca-AstroPh.html) | Provide a detailed description of the following dataset: arXiv Astro-Ph |
MSLR-WEB10K | The **MSLR-WEB10K** dataset consists of 10,000 search queries together with the documents retrieved for them. The data also contains the values of 136 features and a user-labeled relevance judgment on a five-level scale, from 0 (irrelevant) to 4 (perfectly relevant), for each query-document pair. It is a subset of the MSLR-WEB30K dataset. | Provide a detailed description of the following dataset: MSLR-WEB10K |
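The MSLR files are distributed in the SVMlight/LETOR text format (`label qid:Q 1:v1 2:v2 ...`), so a hedged loading sketch with scikit-learn (the fold path is hypothetical) is:

```python
from sklearn.datasets import load_svmlight_file

# Hypothetical path to one fold's training split of MSLR-WEB10K.
X, y, qid = load_svmlight_file("Fold1/train.txt", query_id=True)

print(X.shape)         # (n_pairs, 136) sparse feature matrix
print(y[:5], qid[:5])  # relevance labels and query ids per pair
```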
BeerAdvocate | BeerAdvocate is a dataset that consists of beer reviews from BeerAdvocate. The data span a period of more than 10 years, including all ~1.5 million reviews up to November 2011. Each review includes ratings for five "aspects": appearance, aroma, palate, taste, and overall impression. Reviews include product and user information, followed by each of these five ratings, and a plaintext review. | Provide a detailed description of the following dataset: BeerAdvocate |
Epinion | The **Epinions** dataset is a trust network dataset. For each user, it contains the user's profile, ratings and trust relations. For each rating, it has the product name and its category, the rating score, the time point when the rating was created, and the helpfulness of the rating.
Source: [https://www.cse.msu.edu/~tangjili/datasetcode/truststudy.htm](https://www.cse.msu.edu/~tangjili/datasetcode/truststudy.htm) | Provide a detailed description of the following dataset: Epinion |
Stanford Light Field | The **Stanford Light Field** Archive is a collection of several light fields for research in computer graphics and vision. | Provide a detailed description of the following dataset: Stanford Light Field |
Arxiv GR-QC | The **Arxiv GR-QC** (General Relativity and Quantum Cosmology) collaboration network is from the e-print arXiv and covers scientific collaborations between authors of papers submitted to the General Relativity and Quantum Cosmology category. If author i co-authored a paper with author j, the graph contains an undirected edge between i and j. If a paper is co-authored by k authors, this generates a completely connected (sub)graph on k nodes.
Source: [https://snap.stanford.edu/data/ca-GrQc.html](https://snap.stanford.edu/data/ca-GrQc.html) | Provide a detailed description of the following dataset: Arxiv GR-QC |
Orkut | **Orkut** is a social network dataset consisting of the friendship network and ground-truth communities from the Orkut.com online social network, where users formed friendships with each other. Users could also form groups which other members could then join.
Each connected component in a group is regarded as a separate ground-truth community. Ground-truth communities with fewer than 3 nodes are removed. The dataset also provides the top 5,000 communities of highest quality, as well as the largest connected component of the network. | Provide a detailed description of the following dataset: Orkut |
Friendster | **Friendster** is an online gaming network. Before re-launching as a gaming website, Friendster was a social networking site where users could form friendship edges with each other. Friendster also allowed users to form groups which other members could then join. The Friendster dataset consists of ground-truth communities (based on user-defined groups) and the social network given by the induced subgraph of the nodes that either belong to at least one community or are connected to other nodes that belong to at least one community. | Provide a detailed description of the following dataset: Friendster |
MQ2008 | The **MQ2008** dataset is a dataset for Learning to Rank. It contains 800 queries with labelled documents. | Provide a detailed description of the following dataset: MQ2008 |
IMDB-MULTI | **IMDB-MULTI** is a relational dataset that consists of a network of 1000 actors or actresses who played roles in movies in IMDB. A node represents an actor or actress, and an edge connects two nodes when they appear in the same movie. In IMDB-MULTI, the edges are collected from three different genres: Comedy, Romance and Sci-Fi. | Provide a detailed description of the following dataset: IMDB-MULTI |
REDDIT-12K | **REDDIT-12K** (Reddit12k) contains 11929 graphs, each corresponding to an online discussion thread where nodes represent users and an edge represents the fact that one of the two users responded to a comment of the other. Each of these 11929 discussion graphs carries one of 11 graph labels, representing the category of the community. | Provide a detailed description of the following dataset: REDDIT-12K |
REDDIT-BINARY | **REDDIT-BINARY** consists of graphs corresponding to online discussions on Reddit. In each graph, nodes represent users, and there is an edge between them if at least one of them responded to the other’s comment. There are four popular subreddits, namely IAmA, AskReddit, TrollXChromosomes, and atheism. IAmA and AskReddit are two question/answer-based subreddits, while TrollXChromosomes and atheism are two discussion-based subreddits. A graph is labeled according to whether it belongs to a question/answer-based community or a discussion-based community. | Provide a detailed description of the following dataset: REDDIT-BINARY |
MQ2007 | The **MQ2007** dataset consists of queries, corresponding retrieved documents and labels provided by human experts. The possible relevance labels for each document are “relevant”, “partially relevant”, and “not relevant”. | Provide a detailed description of the following dataset: MQ2007 |
Amazon Fine Foods | Amazon Fine Foods is a dataset that consists of reviews of fine foods from Amazon. The data span a period of more than 10 years, including all ~500,000 reviews up to October 2012. Reviews include product and user information, ratings, and a plaintext review. | Provide a detailed description of the following dataset: Amazon Fine Foods |
REDDIT-5K | Reddit-5K is a relational dataset extracted from Reddit. | Provide a detailed description of the following dataset: REDDIT-5K |
LastFM Asia | A social network of LastFM users which was collected from the public API in March 2020. Nodes are LastFM users from Asian countries and edges are mutual follower relationships between them. The vertex features are extracted based on the artists liked by the users. The task related to the graph is multinomial node classification - one has to predict the location of users. This target feature was derived from the country field for each user. | Provide a detailed description of the following dataset: LastFM Asia |
EMNIST | **EMNIST** (Extended MNIST) has 4 times more data than [MNIST](/dataset/mnist). It is a set of handwritten digits and letters in a 28 x 28 pixel format. | Provide a detailed description of the following dataset: EMNIST |
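A hedged loading sketch with `torchvision` (the required `split` argument selects one of the official EMNIST splits, e.g. `byclass`, `balanced`, `letters` or `digits`):

```python
from torchvision import datasets, transforms

# Downloads EMNIST to ./data; "balanced" is one of the official splits.
emnist = datasets.EMNIST(
    root="./data", split="balanced", download=True,
    transform=transforms.ToTensor())

image, label = emnist[0]
print(image.shape, label)  # torch.Size([1, 28, 28]) and a class index
```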
Arcade Learning Environment | The **Arcade Learning Environment** (ALE) is an object-oriented framework that allows researchers to develop AI agents for Atari 2600 games. It is built on top of the Atari 2600 emulator Stella and separates the details of emulation from agent design. | Provide a detailed description of the following dataset: Arcade Learning Environment |
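A minimal interaction sketch through Gymnasium's ALE bindings, assuming `gymnasium` and `ale-py` are installed and the Atari ROMs are available:

```python
import gymnasium as gym
import ale_py  # importing ale_py registers the ALE/* environments

env = gym.make("ALE/Breakout-v5")
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(100):
    action = env.action_space.sample()  # random agent for illustration
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()
env.close()
print(total_reward)
```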
MedleyDB | **MedleyDB** is a dataset of annotated, royalty-free multitrack recordings. It was curated primarily to support research on melody extraction. For each song, melody f₀ annotations are provided, as well as instrument activations for evaluating automatic instrument recognition. The original dataset consists of 122 multitrack songs, of which 108 include melody annotations.
The songs in MedleyDB were obtained from the following sources:
* Independent Artists (30 songs)
* NYU's Dolan Recording Studio (32 songs)
* Weathervane Music (25 songs)
* Music Delta (35 songs)
MedleyDB contains songs of a variety of musical genres: Singer/Songwriter, Classical, Rock, World/Folk, Fusion, Jazz, Pop, Musical Theatre, Rap. For each song three types of audio content are given: a mix, stems, and raw audio. All types of audio files are .wav files with a sample rate of 44.1 kHz and a bit depth of 16. | Provide a detailed description of the following dataset: MedleyDB |
MedleyDB 2.0 | **MedleyDB 2.0** is a superset of MedleyDB, a dataset of annotated, royalty-free multitrack recordings. The second iteration of the dataset includes 74 new multitrack recordings, resulting in 194 songs in total.
Source: [https://medleydb.weebly.com/](https://medleydb.weebly.com/)
Image Source: [https://medleydb.weebly.com/](https://medleydb.weebly.com/)
Audio Source: [https://zenodo.org/record/1438309](https://zenodo.org/record/1438309) | Provide a detailed description of the following dataset: MedleyDB 2.0 |
MIR-1K | **MIR-1K** (Multimedia Information Retrieval lab, 1000 song clips) is a dataset designed for singing voice separation. It contains:
* 1000 song clips with the music accompaniment and the singing voice recorded as left and right channels, respectively,
* Manual annotations of pitch contours in semitone, indices and types for unvoiced frames, lyrics, and vocal/non-vocal segments,
* The speech recordings of the lyrics by the same person who sang the songs.
The duration of each clip ranges from 4 to 13 seconds, and the total length of the dataset is 133 minutes. These clips were extracted from 110 karaoke songs, each of which contains a mixture track and a music accompaniment track. The songs were freely selected from 5000 Chinese pop songs and sung by researchers of the MIR lab (8 females and 11 males). Most of the singers are amateurs without professional music training. | Provide a detailed description of the following dataset: MIR-1K |
MagnaTagATune | The **MagnaTagATune** dataset contains 25,863 music clips. Each clip is a 29-second excerpt belonging to one of 5223 songs, 445 albums and 230 artists. The clips span a broad range of genres like Classical, New Age, Electronica, Rock, Pop, World, Jazz, Blues, Metal, Punk, and more. Each audio clip is supplied with a vector of binary annotations of 188 tags. These annotations are obtained from humans playing the two-player online TagATune game, in which the two players are either presented with the same or a different audio clip. Subsequently, they are asked to come up with tags for their specific audio clip. Afterward, players view each other’s tags and are asked to decide whether they were presented the same audio clip. Tags are only assigned when more than two players agree. The annotations include tags like ’singer’, ’no singer’, ’violin’, ’drums’, ’classical’, ’jazz’. The top 50 most popular tags are typically used for evaluation to ensure that there is enough training data for each tag. The dataset is split into 16 parts; researchers commonly use parts 1-12 for training, part 13 for validation and parts 14-16 for testing. | Provide a detailed description of the following dataset: MagnaTagATune |
Lakh MIDI Dataset | The Lakh MIDI dataset is a collection of 176,581 unique MIDI files, 45,129 of which have been matched and aligned to entries in the Million Song Dataset. Its goal is to facilitate large-scale music information retrieval, both symbolic (using the MIDI files alone) and audio content-based (using information extracted from the MIDI files as annotations for the matched audio files). Around 10% of all MIDI files include timestamped lyrics events, with lyrics often transcribed at the word, syllable or character level.
LMD-full denotes the whole dataset. LMD-matched is the subset of LMD-full that consists of MIDI files matched with the Million Song Dataset entries. LMD-aligned contains all the files of LMD-matched, aligned to preview MP3s from the Million Song Dataset.
A lakh is a unit of measure used in the Indian number system which signifies 100,000. | Provide a detailed description of the following dataset: Lakh MIDI Dataset |
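A hedged parsing sketch with the `pretty_midi` library (written by the dataset's author); the file path below is hypothetical, since LMD files are named by MD5 hash:

```python
import pretty_midi

# Hypothetical path: LMD-full stores files as <md5>.mid in hex subfolders.
pm = pretty_midi.PrettyMIDI("lmd_full/0/some_md5_hash.mid")

print(pm.get_end_time())                    # duration in seconds
print([i.program for i in pm.instruments])  # General MIDI program numbers
print(len(pm.lyrics))                       # timestamped lyric events, if any
```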
iKala | The **iKala** dataset is a singing voice separation dataset that comprises 252 30-second excerpts sampled from 206 iKala songs (plus 100 hidden excerpts reserved for the MIREX data mining contest). The music accompaniment and the singing voice are recorded in the left and right channels, respectively. Additionally, human-labeled pitch contours and timestamped lyrics are provided.
This dataset is not available anymore. | Provide a detailed description of the following dataset: iKala |
CAL500 | **CAL500** (**Computer Audition Lab 500**) is a dataset aimed for evaluation of music information retrieval systems. It consists of 502 songs picked from western popular music. The audio is represented as a time series of the first 13 Mel-frequency cepstral coefficients (and their first and second derivatives) extracted by sliding a 12 ms half-overlapping short-time window over the waveform of each song. Each song has been annotated by at least 3 people with 135 musically-relevant concepts spanning six semantic categories:
* 29 instruments were annotated as present in the song or not,
* 22 vocal characteristics were annotated as relevant to the singer or not,
* 36 genres,
* 18 emotions were rated on a scale from one to three (e.g., "not happy", "neutral", "happy"),
* 15 song concepts describing the acoustic qualities of the song, artist and recording (e.g., tempo, energy, sound quality),
* 15 usage terms (e.g., "I would listen to this song while driving, sleeping, etc."). | Provide a detailed description of the following dataset: CAL500 |
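A sketch of the described feature representation using `librosa` (a tooling assumption; the released CAL500 features were computed with the authors' own pipeline): 13 MFCCs over a 12 ms half-overlapping window, plus first and second derivatives.

```python
import librosa
import numpy as np

# Hypothetical audio file; CAL500 also distributes precomputed features.
y, sr = librosa.load("song.wav", sr=22050)

win = int(0.012 * sr)  # 12 ms short-time window
mfcc = librosa.feature.mfcc(
    y=y, sr=sr, n_mfcc=13, n_fft=win, hop_length=win // 2)  # half overlap
delta = librosa.feature.delta(mfcc)            # first derivatives
delta2 = librosa.feature.delta(mfcc, order=2)  # second derivatives

features = np.vstack([mfcc, delta, delta2])    # 39 dimensions per frame
print(features.shape)
```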
URMP | **URMP** (**University of Rochester Multi-Modal Musical Performance**) is a dataset for facilitating audio-visual analysis of musical performances. The dataset comprises 44 simple multi-instrument musical pieces assembled from coordinated but separately recorded performances of individual tracks. For each piece, the dataset provides the musical score in MIDI format, the high-quality individual instrument audio recordings and the videos of the assembled pieces.
Source: [http://www2.ece.rochester.edu/projects/air/projects/URMP.html](http://www2.ece.rochester.edu/projects/air/projects/URMP.html)
Image Source: [http://www2.ece.rochester.edu/projects/air/projects/URMP.html](http://www2.ece.rochester.edu/projects/air/projects/URMP.html)
Audio Source: [http://www2.ece.rochester.edu/projects/air/projects/URMP.html](http://www2.ece.rochester.edu/projects/air/projects/URMP.html) | Provide a detailed description of the following dataset: URMP |
FMA | The **Free Music Archive** (**FMA**) is a large-scale dataset for evaluating several tasks in Music Information Retrieval. It consists of 343 days of audio from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length and high-quality audio, pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies.
There are four subsets defined by the authors:
* Full: the complete dataset,
* Large: the full dataset with audio limited to 30-second clips extracted from the middle of the tracks (or the entire track if shorter than 30 seconds),
* Medium: a selection of 25,000 30s clips having a single root genre,
* Small: a balanced subset containing 8,000 30s clips with 1,000 clips per one of 8 root genres.
The official split into training, validation and test sets (80/10/10) uses stratified sampling to preserve the percentage of tracks per genre. Songs of the same artists are part of one set only. | Provide a detailed description of the following dataset: FMA |
CCMixter | **CCMixter** is a singing voice separation dataset consisting of 50 full-length stereo tracks from [ccMixter](http://www.ccmixter.org) featuring many different musical genres. For each song there are three WAV files available: the background music, the voice signal, and their sum.
Source: [Kernel Additive Models for Source Separation](https://doi.org/10.1109/TSP.2014.2332434)
Audio Source: [https://members.loria.fr/ALiutkus/kam/](https://members.loria.fr/ALiutkus/kam/) | Provide a detailed description of the following dataset: CCMixter |
GoodSounds | The **GoodSounds** dataset contains around 28 hours of recordings of single notes and scales played by 15 different professional musicians, all of them holding a music degree and having some expertise in teaching. 12 different instruments (flute, cello, clarinet, trumpet, violin, alto sax, tenor sax, baritone sax, soprano sax, oboe, piccolo and bass) were recorded using one to four different microphones. For each instrument, the whole set of playable semitones is recorded several times with different tonal characteristics. Each note is recorded into a separate monophonic audio file at 48 kHz and 32 bits. Rich annotations of the recordings are available, including details on the recording environment and ratings of the tonal qualities of the sound (“good-sound”, “bad”, “scale-good”, “scale-bad”).
Source: [A real-time system for measuring sound goodness in instrumental sounds](http://mtg.upf.edu/node/3197)
Image Source: [A real-time system for measuring sound goodness in instrumental sounds](http://mtg.upf.edu/node/3197)
Audio Source: [https://zenodo.org/record/820937](https://zenodo.org/record/820937) | Provide a detailed description of the following dataset: GoodSounds |
Jamendo Corpus | The **Jamendo Corpus** is a voice detection dataset consisting of 93 songs with a Creative Commons license from the [Jamendo](http://www.jamendo.com/) free music sharing website. Segments of each song are annotated as “voice” (sung or spoken) or “no-voice”. The songs constitute a total of about 6 hours of music. The files are all from different artists and represent various genres of mainstream commercial music. The Jamendo audio files are coded in stereo Vorbis OGG at 44.1 kHz with a 112 kbit/s bitrate. The original split contains 61, 16 and 16 songs in the training, validation and testing sets, respectively.
Source: [Vocal detection in music with support vector machines](https://perso.telecom-paristech.fr/grichard/Publications/Icassp08_ramona.pdf)
Audio Source: [https://zenodo.org/record/2585988](https://zenodo.org/record/2585988) | Provide a detailed description of the following dataset: Jamendo Corpus |
ForeDeCk | **ForeDeCk** is a time series database compiled at the National Technical University of Athens that contains 900,000 continuous time series, built from multiple, diverse and publicly accessible sources. ForeDeCk emphasizes business forecasting applications, including series from relevant domains such as industries, services, tourism, imports & exports, demographics, education, labor & wage, government, households, bonds, stocks, insurances, loans, real estate, transportation, and natural resources & environment.
Source: [Are forecasting competitions data representative of the reality?](https://www.sciencedirect.com/science/article/abs/pii/S0169207019300159) | Provide a detailed description of the following dataset: ForeDeCk |
M4 | The **M4** dataset is a collection of 100,000 time series used for the fourth edition of the Makridakis forecasting competition. The M4 dataset consists of time series of yearly, quarterly, monthly and other (weekly, daily and hourly) data, which are divided into training and test sets. The minimum number of observations in the training set is 13 for yearly, 16 for quarterly, 42 for monthly, 80 for weekly, 93 for daily and 700 for hourly series. The participants were asked to produce the following numbers of forecasts beyond the available data: six for yearly, eight for quarterly, 18 for monthly, 13 for weekly, and 14 and 48 forecasts respectively for the daily and hourly series.
The M4 dataset was created by selecting a random sample of 100,000 time series from the ForeDeCk database. The selected series were then scaled to prevent negative observations and values lower than 10, thus avoiding possible problems when calculating various error measures. The scaling was performed by simply adding a constant to the series so that their minimum value was equal to 10 (29 occurrences across the whole dataset). In addition, any information that could possibly lead to the identification of the original series was removed so as to ensure the objectivity of the results. This included the starting dates of the series, which did not become available to the participants until the M4 had ended. | Provide a detailed description of the following dataset: M4 |
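A sketch of the scaling rule described above, which shifts each series by a constant so that its minimum value equals 10:

```python
import numpy as np

def rescale_to_min_10(series: np.ndarray) -> np.ndarray:
    """Shift a series by a constant so its minimum value equals 10."""
    return series + (10.0 - series.min())

# Example: a series with negative values becomes strictly positive.
raw = np.array([-3.0, 5.0, 12.0, 8.0])
print(rescale_to_min_10(raw))  # [10. 18. 25. 21.]
```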
MUSDB18-HQ | **MUSDB18-HQ** is a high-quality version of the MUSDB18 music tracks dataset. The high-quality dataset consists of the same 150 songs, but instead of MP4 files (compressed with Advanced Audio Coding encoder at 256kbps, with bandwidth limited to 16kHz), the songs are provided as raw WAV files.
Image Source: [https://sigsep.github.io/datasets/musdb.html](https://sigsep.github.io/datasets/musdb.html) | Provide a detailed description of the following dataset: MUSDB18-HQ |
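A hedged loading sketch with the `musdb` Python package, whose `is_wav` flag selects the HQ (WAV) release; the root path is hypothetical:

```python
import musdb

# Hypothetical root folder containing the MUSDB18-HQ WAV files.
mus = musdb.DB(root="path/to/MUSDB18-HQ", is_wav=True, subsets="train")

track = mus[0]
mixture = track.audio                   # (n_samples, 2) stereo mixture
vocals = track.targets["vocals"].audio  # isolated vocal stem
print(track.name, track.rate, mixture.shape)
```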
Slakh2100 | The Synthesized Lakh (Slakh) Dataset is a dataset for audio source separation that is synthesized from the Lakh MIDI Dataset v0.1 using professional-grade sample-based virtual instruments. This first release of Slakh, called **Slakh2100**, contains 2100 automatically mixed tracks and accompanying MIDI files synthesized using a professional-grade sampling engine. The tracks in Slakh2100 are split into training (1500 tracks), validation (375 tracks), and test (225 tracks) subsets, totaling 145 hours of mixtures.
Source: [http://www.slakh.com/](http://www.slakh.com/)
Image Source: [http://www.slakh.com/](http://www.slakh.com/)
Audio Source: [http://www.slakh.com/](http://www.slakh.com/) | Provide a detailed description of the following dataset: Slakh2100 |
GuitarSet | **GuitarSet** is a dataset of high-quality guitar recordings and rich annotations. It contains 360 excerpts, each 30 seconds in length. The 360 excerpts are the result of the following combinations (enumerated in the sketch after this entry):
* 6 players,
* 2 versions: comping and soloing,
* 5 styles: Rock, Singer-Songwriter, Bossa Nova, Jazz, and Funk,
* 3 progressions: 12 Bar Blues, Autumn Leaves, and Pachelbel Canon,
* 2 tempi: slow and fast.
Each excerpt is annotated with pitch contour and MIDI note annotations for each of the 6 strings, 2 chord annotations (instructed and performed), and beat and tempo annotations.
Source: [https://guitarset.weebly.com/](https://guitarset.weebly.com/)
Audio Source: [https://zenodo.org/record/3371780](https://zenodo.org/record/3371780) | Provide a detailed description of the following dataset: GuitarSet |
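The 360-excerpt count follows directly from the factors listed above (6 × 2 × 5 × 3 × 2 = 360), as this sketch enumerates:

```python
from itertools import product

players = range(1, 7)              # 6 players
versions = ["comping", "soloing"]  # 2 versions
styles = ["Rock", "Singer-Songwriter", "Bossa Nova", "Jazz", "Funk"]
progressions = ["12 Bar Blues", "Autumn Leaves", "Pachelbel Canon"]
tempi = ["slow", "fast"]

excerpts = list(product(players, versions, styles, progressions, tempi))
print(len(excerpts))  # 6 * 2 * 5 * 3 * 2 = 360
```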
Mixing Secrets | **Mixing Secrets** is an instrument recognition dataset containing 258 multi-track recordings sourced from the [Mixing Secrets for The Small Studio](https://www.cambridge-mt.com/ms/mtk/) website. The dataset was labelled to be consistent with the MedleyDB format.
Source: Mixing secrets: a multi-track dataset for instrument recognition in polyphonic music
Image Source: Mixing secrets: a multi-track dataset for instrument recognition in polyphonic music
Audio Source: [https://multitracksearch.cambridge-mt.com/ms-mtk-search.htm](https://multitracksearch.cambridge-mt.com/ms-mtk-search.htm) | Provide a detailed description of the following dataset: Mixing Secrets |
OpenMIC-2018 | **OpenMIC-2018** is an instrument recognition dataset containing 20,000 examples of Creative Commons-licensed music available on the [Free Music Archive](http://freemusicarchive.org/). Each example is a 10-second excerpt which has been partially labeled for the presence or absence of 20 instrument classes by annotators on a crowd-sourcing platform.
Source: [OpenMIC-2018: An Open Data-set for Multiple Instrument Recognition](http://ismir2018.ircam.fr/doc/pdfs/248_Paper.pdf)
Image Source: [OpenMIC-2018: An Open Data-set for Multiple Instrument Recognition](http://ismir2018.ircam.fr/doc/pdfs/248_Paper.pdf)
Audio Source: [https://zenodo.org/record/1432913](https://zenodo.org/record/1432913) | Provide a detailed description of the following dataset: OpenMIC-2018 |
CAL500exp | The **CAL500 Expansion** (**CAL500exp**) dataset is an enriched version of the CAL500 music information retrieval dataset. CAL500exp is designed to facilitate music auto-tagging at a finer temporal scale. The dataset consists of the same songs, split into 3,223 acoustically homogeneous segments of 3 to 16 seconds. The tag labels are annotated at the segment level instead of the track level. The annotations were obtained from annotators with a strong music background.
Source: [Towards time-varying music auto-tagging based on CAL500 expansion](https://doi.org/10.1109/ICME.2014.6890290)
Image Source: [Towards time-varying music auto-tagging based on CAL500 expansion](https://doi.org/10.1109/ICME.2014.6890290)
Audio Source: [http://calab1.ucsd.edu/~datasets/cal500/cal500data/](http://calab1.ucsd.edu/~datasets/cal500/cal500data/) | Provide a detailed description of the following dataset: CAL500exp |
CAL10K | The **CAL10K** dataset (introduced as Swat10k) contains 10,870 songs that are weakly labelled using a tag vocabulary of 475 acoustic tags and 153 genre tags. The tags have all been harvested from [Pandora’s](https://www.pandora.com/) website and result from song annotations performed by expert musicologists involved with the Music Genome Project.
Source: [Exploring automatic music annotation with “acoustically-objective” tags](http://modelai.gettysburg.edu/2012/music/docs/Tingle_Autotag_MIR10.pdf) | Provide a detailed description of the following dataset: CAL10K |
MuseScore | The **MuseScore** dataset is a collection of 344,166 audio and MIDI pairs downloaded from the [MuseScore](https://musescore.org/) website. The audio is usually synthesized by the MuseScore synthesizer. The audio clips cover diverse musical genres and are about two minutes long on average.
Due to copyright issues the dataset is not publicly available, but can be collected and processed with the provided source code. | Provide a detailed description of the following dataset: MuseScore |
MTG-Jamendo | The **MTG-Jamendo** dataset is an open dataset for music auto-tagging. The dataset contains over 55,000 full audio tracks with 195 tag categories (95 genre tags, 41 instrument tags, and 59 mood/theme tags). It is built using music available on Jamendo under Creative Commons licenses and tags provided by content uploaders. All audio is distributed in 320kbps MP3 format.
A subset of the dataset is used in the Emotion and Theme Recognition in Music Task within MediaEval 2019.
Source: [https://mtg.github.io/mtg-jamendo-dataset/](https://mtg.github.io/mtg-jamendo-dataset/)
Audio Source: [https://essentia.upf.edu/datasets/mtg-jamendo/raw_30s/audio/](https://essentia.upf.edu/datasets/mtg-jamendo/raw_30s/audio/) | Provide a detailed description of the following dataset: MTG-Jamendo |
LibriCount | **LibriCount** is a synthetic dataset for speaker count estimation. The dataset simulates a cocktail-party environment of 0 to 10 speakers, mixed at 0 dB SNR from random utterances of different speakers from the LibriSpeech `test-clean` set. All recordings are 5 seconds long, and all speakers are active for most of the recording.
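A 0 dB SNR mix means the speakers are combined at equal average power; a minimal illustrative sketch of such mixing (not the dataset's actual generation code):
```python
import numpy as np

def mix_at_0db(speech_a: np.ndarray, speech_b: np.ndarray) -> np.ndarray:
    """Mix two utterances at 0 dB SNR, i.e. at equal RMS power."""
    rms_a = np.sqrt(np.mean(speech_a ** 2))
    rms_b = np.sqrt(np.mean(speech_b ** 2))
    # Scale the second signal so both contribute equal energy, then sum.
    mixture = speech_a + speech_b * (rms_a / rms_b)
    # Normalize to avoid clipping when writing fixed-point audio.
    return mixture / max(1.0, float(np.abs(mixture).max()))

# Toy usage: random noise standing in for two 5-second, 16 kHz utterances.
rng = np.random.default_rng(0)
a = rng.standard_normal(16000 * 5)
b = 0.1 * rng.standard_normal(16000 * 5)
mixed = mix_at_0db(a, b)
```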
Source: [https://faroit.com/#libricount](https://faroit.com/#libricount)
Image Source: [https://faroit.com/#libricount](https://faroit.com/#libricount)
Audio Source: [https://zenodo.org/record/1216072](https://zenodo.org/record/1216072) | Provide a detailed description of the following dataset: LibriCount |
MultiWOZ | The **Multi-domain Wizard-of-Oz** (**MultiWOZ**) dataset is a large-scale human-human conversational corpus containing 8,438 multi-turn dialogues, with each dialogue averaging 14 turns. Unlike existing standard datasets such as WOZ and DSTC2, which contain fewer than 10 slots and only a few hundred values, MultiWOZ has 30 (domain, slot) pairs and over 4,500 possible values. The dialogues span seven domains: restaurant, hotel, attraction, taxi, train, hospital and police.
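The (domain, slot) structure maps naturally onto a dialogue-state dictionary; a minimal sketch (this representation is illustrative, not the corpus's actual annotation schema):
```python
from typing import Dict, Tuple

# Illustrative belief state for a MultiWOZ-style tracker: (domain, slot)
# pairs map to the value most recently constrained by the user.
BeliefState = Dict[Tuple[str, str], str]

state: BeliefState = {
    ("restaurant", "food"): "italian",
    ("restaurant", "area"): "centre",
    ("hotel", "stars"): "4",
}

def update(state: BeliefState, domain: str, slot: str, value: str) -> None:
    """Overwrite a slot after a new user turn, as a state tracker would."""
    state[(domain, slot)] = value

update(state, "restaurant", "pricerange", "cheap")
print(state)
```
| Provide a detailed description of the following dataset: MultiWOZ |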
ReVerb Challenge | The REVERB (**REverberant Voice Enhancement and Recognition Benchmark**) challenge is a benchmark for the evaluation of automatic speech recognition techniques. The challenge assumes the scenario of capturing utterances spoken by a single stationary distant-talking speaker with 1-channel, 2-channel or 8-channel microphone arrays in reverberant meeting rooms. It features both real recordings and simulated data.
The challenge consists of speech enhancement and automatic speech recognition tasks in reverberant environments. The speech enhancement task consists of enhancing noisy reverberant speech with single-/multi-channel speech enhancement techniques and evaluating the enhanced data with objective and subjective metrics. The automatic speech recognition task consists of improving the recognition accuracy on the same reverberant speech. The background noise is mostly stationary and the signal-to-noise ratio is modest. | Provide a detailed description of the following dataset: ReVerb Challenge |
MPQA Opinion Corpus | The **MPQA Opinion Corpus** contains 535 news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.). | Provide a detailed description of the following dataset: MPQA Opinion Corpus |
DROP | **Discrete Reasoning Over Paragraphs** (**DROP**) is a crowdsourced, adversarially-created benchmark of 96k questions, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of paragraph content than prior datasets demanded. The questions are posed over passages extracted from Wikipedia articles. The dataset is split into a training set of about 77,000 questions, a development set of around 9,500 questions, and a hidden test set similar in size to the development set.
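A minimal sketch of the kinds of discrete operations involved, applied to values assumed to be already extracted from a passage (the extraction step, which is the hard part, is omitted):
```python
from typing import List

# Toy implementations of the discrete operations DROP questions call for.
def count(items: List[str]) -> int:
    return len(items)

def addition(numbers: List[float]) -> float:
    return sum(numbers)

def sort_desc(numbers: List[float]) -> List[float]:
    return sorted(numbers, reverse=True)

# E.g., for "How many field goals were kicked?" over spans found in the text:
field_goals = ["23-yard field goal", "41-yard field goal"]
print(count(field_goals))          # -> 2
print(addition([23.0, 41.0]))      # -> 64.0
print(sort_desc([23.0, 41.0])[0])  # -> 41.0, e.g. the longest field goal
```
| Provide a detailed description of the following dataset: DROP |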
New York Times Annotated Corpus | The **New York Times Annotated Corpus** contains over 1.8 million articles written and published by the New York Times between January 1, 1987 and June 19, 2007 with article metadata provided by the New York Times Newsroom, the New York Times Indexing Service and the online production staff at nytimes.com. The corpus includes:
- Over 1.8 million articles (excluding wire services articles that appeared during the covered period).
- Over 650,000 article summaries written by library scientists.
- Over 1,500,000 articles manually tagged by library scientists with tags drawn from a normalized indexing vocabulary of people, organizations, locations and topic descriptors.
- Over 275,000 algorithmically-tagged articles that have been hand verified by the online production staff at nytimes.com.
As part of the New York Times' indexing procedures, most articles are manually summarized and tagged by a staff of library scientists. This collection contains over 650,000 article-summary pairs which may prove useful in the development and evaluation of algorithms for automated document summarization. Also, over 1.5 million documents have at least one tag. Articles are tagged for persons, places, organizations, titles and topics using a controlled vocabulary that is applied consistently across articles. For instance, if one article mentions "Bill Clinton" and another refers to "President William Jefferson Clinton", both articles will be tagged with "CLINTON, BILL". | Provide a detailed description of the following dataset: New York Times Annotated Corpus |
VisDial | The **Visual Dialog** (**VisDial**) dataset contains human-annotated questions based on images from the MS COCO dataset. It was developed by pairing two subjects on Amazon Mechanical Turk to chat about an image. One person was assigned the role of ‘questioner’ and the other acted as ‘answerer’. The questioner sees only the text description of an image (i.e., an image caption from the MS COCO dataset); the original image remains hidden to the questioner. The questioner's task is to ask questions about this hidden image to “imagine the scene better”. The answerer sees the image and the caption, and answers the questions asked by the questioner. The two can continue the conversation for at most 10 rounds of questions and answers.
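A dialogue record can therefore be pictured as an image-caption pair with up to 10 question-answer rounds; the sketch below is schematic, and its field names and values are hypothetical rather than the official JSON schema:
```python
# Hypothetical, simplified shape of a single VisDial dialogue record.
dialog_record = {
    "image_id": 185565,  # hypothetical MS COCO image id the answerer sees
    "caption": "a man riding a bike down a street",
    "dialog": [  # up to 10 question-answer rounds
        {"question": "is the photo in color?", "answer": "yes"},
        {"question": "how many people are there?", "answer": "just one"},
    ],
}
print(len(dialog_record["dialog"]))  # number of rounds in this record
```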
**VisDial v1.0** contains 123K dialogues on MS COCO (2017 training set) for the training split, 2K dialogues on validation images for the validation split, and 8K dialogues on test images for the test-standard split. The previously released v0.5 and v0.9 versions of the VisDial dataset (corresponding to older splits of MS COCO) are considered deprecated. | Provide a detailed description of the following dataset: VisDial |
AMR Bank | The **AMR Bank** is a set of English sentences paired with simple, readable semantic representations. Version 3.0, released in 2020, consists of 59,255 sentences.
Each AMR is a single rooted, directed graph. AMRs include PropBank semantic roles, within-sentence coreference, named entities and types, modality, negation, questions, quantities, and so on.
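As an illustration, the canonical AMR for the sentence “The boy wants to go” (depicted in the dataset image mentioned below) can be parsed with the third-party `penman` Python library; a minimal sketch, noting that exact PropBank sense IDs may differ across releases:
```python
# Parse a PENMAN-notation AMR with the third-party `penman` library
# (pip install penman). Sense IDs follow the classic AMR example.
import penman

graph = penman.decode("""
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
""")

print(graph.top)      # 'w' -- the single root of the directed graph
print(graph.triples)  # variable 'b' recurs: the boy is both wanter and goer
```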
The image presents the AMR graph for the sample sentence “The boy wants to go”. | Provide a detailed description of the following dataset: AMR Bank |
WMT 2016 | **WMT 2016** is a collection of datasets used in the shared tasks of the First Conference on Machine Translation. The conference builds on ten previous Workshops on Statistical Machine Translation.
The conference featured ten shared tasks:
- a news translation task,
- an IT domain translation task,
- a biomedical translation task,
- an automatic post-editing task,
- a metrics task (assess MT quality given a reference translation),
- a quality estimation task (assess MT quality without access to any reference),
- a tuning task (optimize a given MT system),
- a pronoun translation task,
- a bilingual document alignment task,
- a multimodal translation task. | Provide a detailed description of the following dataset: WMT 2016 |
WMT 2016 News | News translation is a recurring WMT task. The test set is a collection of parallel corpora consisting of about 1,500 English sentences translated into six languages (Czech, German, Finnish, Romanian, Russian, Turkish), plus an additional 1,500 sentences from each of the six languages translated into English. For Romanian, a third of the test set was released as a development set instead. For Turkish, an additional 500-sentence development set was released. The sentences were selected from dozens of news websites and translated by professional translators.
The training data consists of parallel corpora to train translation models, monolingual corpora to train language models and development sets for tuning.
Some training corpora were identical to those used for WMT 2015 (Europarl, United Nations, French-English 10⁹ corpus, Common Crawl, Russian-English parallel data provided by Yandex, Wikipedia Headlines provided by CMU), and some were updated (CzEng v1.6pre, News Commentary v11, monolingual news data). Additionally, the following new corpora were added: Romanian Europarl, SETIMES2 from OPUS for Romanian-English and Turkish-English, and monolingual data sets from Common Crawl.
Source: [https://paperswithcode.com/paper/findings-of-the-2016-conference-on-machine/](https://paperswithcode.com/paper/findings-of-the-2016-conference-on-machine/)
Image Source: [https://www.aclweb.org/anthology/W16-2301.pdf](https://www.aclweb.org/anthology/W16-2301.pdf) | Provide a detailed description of the following dataset: WMT 2016 News |