| dataset_name | description | prompt |
|---|---|---|
TUT-SED Synthetic 2016 | **TUT-SED Synthetic 2016** consists of mixture signals artificially generated from isolated sound event samples. This approach is used to obtain more accurate onset and offset annotations than in datasets based on recordings from real acoustic environments, where the annotations are always subjective.
Mixture signals in the dataset are created by randomly selecting and mixing together isolated sound events from 16 sound event classes. The resulting mixtures contain sound events with varying polyphony. Altogether, 994 sound event samples were purchased from Sound Ideas. Of the 100 mixtures created, 60% were assigned for training, 20% for testing and 20% for validation. The total amount of audio material in the dataset is 566 minutes.
Different instances of the sound events are used to synthesize the training, validation and test partitions. Mixtures were created by randomly selecting an event instance and, from it, a random segment of 3-15 seconds in length. Between events, a silent region of random length was introduced. Such tracks were created for four to nine event classes, and were then mixed together to form the mixture signal. As sound events are not consistently active during the samples (e.g. footsteps), automatic signal-energy-based annotation was applied to obtain accurate event activity within each sample. The annotation of the mixture signal was created by pooling together the event activity annotations of the used samples.
Source: [https://webpages.tuni.fi/arg/paper/taslp2017-crnn-sed/tut-sed-synthetic-2016](https://webpages.tuni.fi/arg/paper/taslp2017-crnn-sed/tut-sed-synthetic-2016)
Image Source: [https://arxiv.org/abs/1702.06286](https://arxiv.org/abs/1702.06286) | Provide a detailed description of the following dataset: TUT-SED Synthetic 2016 |
ISMIR Genre | ISMIR2004 is an audio dataset for musical genre classification, consisting of 729 excerpts of 30 seconds each across 6 genres. The training set consists of 320 classical music samples, 115 electronic music samples, 26 jazz/blues samples, 45 metal/punk samples, 101 rock/pop samples and 122 world music samples.
Source: [http://ismir2004.ismir.net/genre_contest/index.html](http://ismir2004.ismir.net/genre_contest/index.html)
Image Source: [http://ismir2004.ismir.net/genre_contest/index.html](http://ismir2004.ismir.net/genre_contest/index.html) | Provide a detailed description of the following dataset: ISMIR Genre |
NES-MDB | The **Nintendo Entertainment System Music Database** (**NES-MDB**) is a dataset intended for building automatic music composition systems for the NES audio synthesizer. It consists of 5278 songs from the soundtracks of 397 NES games. The dataset represents 296 unique composers, and the songs contain more than two million notes combined. It has file format options for MIDI, score and NLM (NES Language Modeling). | Provide a detailed description of the following dataset: NES-MDB |
CLO-43SD | **CLO-43SD** is a dataset for multi-class species identification in avian flight calls. It consists of 5,428 labeled audio clips of flight calls from 43 different species of North American wood-warblers (family Parulidae). The clips span a variety of recording conditions, including clean recordings obtained using highly directional shotgun microphones, noisier field recordings obtained using omnidirectional microphones, and recordings of birds in captivity.
Source: [https://wp.nyu.edu/birdvox/codedata/](https://wp.nyu.edu/birdvox/codedata/)
Image Source: [https://www.allaboutbirds.org/a-rosetta-stone-for-identifying-warblers-migration-calls/](https://www.allaboutbirds.org/a-rosetta-stone-for-identifying-warblers-migration-calls/) | Provide a detailed description of the following dataset: CLO-43SD |
CLO-WTSP | **CLO-WTSP** is a dataset for species-specific flight call identification for the White-Throated Sparrow. It comprises 16,703 labeled audio clips captured by remote acoustic sensors deployed in Ithaca, NY and NYC over the fall 2014 and spring 2015 migration seasons. Each clip is labeled to indicate whether it contains a flight call from the target species White-Throated Sparrow (WTSP), a flight call from a non-target species, or no flight call at all.
Source: [https://wp.nyu.edu/birdvox/codedata/](https://wp.nyu.edu/birdvox/codedata/)
Image Source: [https://en.wikipedia.org/wiki/White-throated_sparrow#/media/File:Sparrow,_White_throated.jpg](https://en.wikipedia.org/wiki/White-throated_sparrow#/media/File:Sparrow,_White_throated.jpg) | Provide a detailed description of the following dataset: CLO-WTSP |
CLO-SWTH | **CLO-SWTH** is a dataset for species-specific flight call identification for the Swainson’s Thrush. It comprises 179,111 labeled audio clips captured by remote acoustic sensors deployed in Ithaca, NY and NYC over the fall 2014 and spring 2015 migration seasons. Each clip is labeled to indicate whether it contains a flight call from the target species Swainson’s Thrush (SWTH), a flight call from a non-target species, or no flight call at all.
Source: [https://wp.nyu.edu/birdvox/codedata/](https://wp.nyu.edu/birdvox/codedata/)
Image Source: [https://commons.wikimedia.org/wiki/Category:Catharus_ustulatus#/media/File:A_Swainson's_thrush_perched_in_a_tree_(7d58595b-c495-4744-9f20-2b301fa1cc63).jpg](https://commons.wikimedia.org/wiki/Category:Catharus_ustulatus#/media/File:A_Swainson's_thrush_perched_in_a_tree_(7d58595b-c495-4744-9f20-2b301fa1cc63).jpg) | Provide a detailed description of the following dataset: CLO-SWTH |
Bach Doodle | The **Bach Doodle** Dataset is composed of 21.6 million harmonizations submitted through the Bach Doodle. The dataset contains both metadata about each composition (such as the country of origin and feedback), as well as a MIDI of the user-entered melody and a MIDI of the generated harmonization. The dataset contains about 6 years of user-entered music.
Source: [https://magenta.tensorflow.org/datasets/bach-doodle](https://magenta.tensorflow.org/datasets/bach-doodle)
Image Source: [https://magenta.tensorflow.org/datasets/bach-doodle](https://magenta.tensorflow.org/datasets/bach-doodle) | Provide a detailed description of the following dataset: Bach Doodle |
DCASE 2018 Task 4 | DCASE2018 Task 4 is a dataset for large-scale weakly labeled semi-supervised sound event detection in domestic environments. The data are YouTube video excerpts focusing on domestic contexts, which could be used, for example, in ambient assisted living applications. The domain was chosen due to its scientific challenges (wide variety of sounds, time-localized events, etc.) and potential industrial applications.
Specifically, the task employs a subset of “Audioset: An Ontology And Human-Labeled Dataset For Audio Events” by Google. AudioSet consists of an expanding ontology of 632 sound event classes and a collection of 2 million human-labeled 10-second sound clips (less than 21% are shorter than 10 seconds) drawn from 2 million YouTube videos. The ontology is specified as a hierarchical graph of event categories, covering a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds.
Task 4 focuses on a subset of AudioSet that consists of 10 classes of sound events: speech, dog, cat, alarm bell ringing, dishes, frying, blender, running water, vacuum cleaner, and electric shaver/toothbrush. | Provide a detailed description of the following dataset: DCASE 2018 Task 4 |
freefield1010 | **freefield1010** is a collection of 7,690 excerpts from field recordings around the world, gathered by the Freesound project and then standardised for research.
Source: [http://dcase.community/challenge2018/task-bird-audio-detection](http://dcase.community/challenge2018/task-bird-audio-detection)
Image Source: [https://arxiv.org/pdf/1309.5275.pdf](https://arxiv.org/pdf/1309.5275.pdf) | Provide a detailed description of the following dataset: freefield1010 |
warblrb10k | **warblrb10k** is a collection of 10,000 smartphone audio recordings from around the UK, crowdsourced by users of Warblr, the bird recognition app. The audio covers a wide distribution of UK locations and environments, and includes weather noise, traffic noise, human speech and even human bird imitations.
Source: [http://dcase.community/challenge2018/task-bird-audio-detection](http://dcase.community/challenge2018/task-bird-audio-detection)
Image Source: [https://www.warblr.co.uk/](https://www.warblr.co.uk/) | Provide a detailed description of the following dataset: warblrb10k |
Chernobyl | **Chernobyl** is a collection of 620 audio clips collected from unattended remote monitoring equipment in the Chernobyl Exclusion Zone (CEZ). This data was collected as part of the TREE (Transfer-Exposure-Effects) research project into the long-term effects of the Chernobyl accident on local ecology. The audio covers a range of birds and includes weather, large mammal and insect noise sampled across various CEZ environments, including abandoned village, grassland and forest areas.
Source: [http://dcase.community/challenge2018/task-bird-audio-detection](http://dcase.community/challenge2018/task-bird-audio-detection)
Image Source: [https://en.wikipedia.org/wiki/Effects_of_the_Chernobyl_disaster#/media/File:Chernobyl,_Ukraine.jpg](https://en.wikipedia.org/wiki/Effects_of_the_Chernobyl_disaster#/media/File:Chernobyl,_Ukraine.jpg) | Provide a detailed description of the following dataset: Chernobyl |
PolandNFC | **PolandNFC** is a collection of 4,000 recordings from Hanna Pamuła's PhD project on monitoring autumn nocturnal bird migration. The recordings were collected every night from September to November 2016 on the Baltic Sea coast of Poland, using Song Meter SM2 units with microphones mounted on 3–5 m poles. A subset derived from 15 nights with different weather conditions and background noise, including wind, rain, sea noise, insect calls, human voices and deer calls, was used in the DCASE 2018 Challenge. | Provide a detailed description of the following dataset: PolandNFC |
NIPS4Bplus | **NIPS4Bplus** is a richly annotated birdsong audio dataset comprising recordings of bird vocalisations along with their active species tags and the temporal annotations acquired for them. It consists of around 687 recordings covering 87 classes, with species tags and temporal annotations. The total duration of audio is around 1 hour.
Source: [https://peerj.com/articles/cs-223.pdf](https://peerj.com/articles/cs-223.pdf)
Image Source: [https://peerj.com/articles/cs-223.pdf](https://peerj.com/articles/cs-223.pdf) | Provide a detailed description of the following dataset: NIPS4Bplus |
BirdVox-DCASE-20k | The **BirdVox-DCASE-20k** dataset contains 20,000 ten-second audio recordings. These recordings come from ROBIN autonomous recording units placed near Ithaca, NY, USA during the fall of 2015. They were captured on the night of September 23rd, 2015, by six different sensors, originally numbered 1, 2, 3, 5, 7, and 10.
Out of these 20,000 recordings, 10,017 (50.09%) contain at least one bird vocalization (either song, call, or chatter).
The dataset is a derivative work of the BirdVox-full-night dataset, containing almost as much data but formatted into ten-second excerpts rather than ten-hour full night recordings.
Source: [https://zenodo.org/record/1208080](https://zenodo.org/record/1208080)
Image Source: [http://dcase.community/challenge2018/task-bird-audio-detection](http://dcase.community/challenge2018/task-bird-audio-detection) | Provide a detailed description of the following dataset: BirdVox-DCASE-20k |
BirdCLEF 2019 | BirdCLEF 2019 is a bird soundscape dataset. It contains around 350 hours of manually annotated soundscapes, recorded with 30 field recorders between January and June of 2017 in Ithaca, NY, USA. There are around 50,000 recordings in the dataset in total, with 659 classes. The dataset also contains species tags.
Source: [https://www.imageclef.org/BirdCLEF2019](https://www.imageclef.org/BirdCLEF2019)
Image Source: [http://dcase.community/challenge2018/task-bird-audio-detection](http://dcase.community/challenge2018/task-bird-audio-detection) | Provide a detailed description of the following dataset: BirdCLEF 2019 |
BirdCLEF 2018 | BirdCLEF 2018 is a bird soundscape dataset based on the contributions of the Xeno-canto network. The training set contains 36,496 recordings covering 1,500 species of Central and South America (the largest bioacoustic dataset in the literature). There are about 68 hours of recordings in total, with 1,500 classes and species tags.
Source: [https://www.imageclef.org/BirdCLEF2019](https://www.imageclef.org/BirdCLEF2019)
Image Source: [http://dcase.community/challenge2018/task-bird-audio-detection](http://dcase.community/challenge2018/task-bird-audio-detection) | Provide a detailed description of the following dataset: BirdCLEF 2018 |
FSDKaggle2018 | **FSDKaggle2018** is an audio dataset containing 11,073 audio files annotated with 41 labels of the AudioSet Ontology. FSDKaggle2018 has been used for the DCASE Challenge 2018 Task 2. All audio samples are gathered from Freesound and are provided as uncompressed PCM 16 bit, 44.1 kHz mono audio files. The 41 categories of the AudioSet Ontology are:
"Acoustic_guitar", "Applause", "Bark", "Bass_drum", "Burping_or_eructation", "Bus", "Cello", "Chime", "Clarinet", "Computer_keyboard", "Cough", "Cowbell", "Double_bass", "Drawer_open_or_close", "Electric_piano", "Fart", "Finger_snapping", "Fireworks", "Flute", "Glockenspiel", "Gong", "Gunshot_or_gunfire", "Harmonica", "Hi-hat", "Keys_jangling", "Knock", "Laughter", "Meow", "Microwave_oven", "Oboe", "Saxophone", "Scissors", "Shatter", "Snare_drum", "Squeak", "Tambourine", "Tearing", "Telephone", "Trumpet", "Violin_or_fiddle", "Writing".
Source: [https://zenodo.org/record/2552860](https://zenodo.org/record/2552860)
Image Source: [https://labs.freesound.org/datasets/](https://labs.freesound.org/datasets/) | Provide a detailed description of the following dataset: FSDKaggle2018 |
FSDKaggle2019 | **FSDKaggle2019** is an audio dataset containing 29,266 audio files annotated with 80 labels of the AudioSet Ontology. FSDKaggle2019 has been used for the DCASE Challenge 2019 Task 2, which was run as a Kaggle competition titled Freesound Audio Tagging 2019. The dataset allows development and evaluation of machine listening methods in conditions of label noise, minimal supervision, and real-world acoustic mismatch. FSDKaggle2019 consists of two train sets and one test set. One train set and the test set consist of manually labeled data from Freesound, while the other train set consists of noisily labeled web audio data from Flickr videos taken from the YFCC dataset.
The curated train set consists of manually labeled data from FSD: 4970 total clips with a total duration of 10.5 hours. The noisy train set has 19,815 clips with a total duration of 80 hours. The test set has 4481 clips with a total duration of 12.9 hours.
Source: [https://labs.freesound.org/datasets/](https://labs.freesound.org/datasets/)
Image Source: [https://labs.freesound.org/datasets/](https://labs.freesound.org/datasets/) | Provide a detailed description of the following dataset: FSDKaggle2019 |
Clotho | **Clotho** is an audio captioning dataset consisting of 4,981 audio samples, each with five captions (a total of 24,905 captions). Audio samples are 15 to 30 s in duration and captions are eight to 20 words long.
Source: [https://zenodo.org/record/3490684](https://zenodo.org/record/3490684)
Image Source: [https://arxiv.org/abs/1910.09387](https://arxiv.org/abs/1910.09387) | Provide a detailed description of the following dataset: Clotho |
DBR | The **DBR** dataset is an environmental audio dataset created for the Bachelor's Seminar in Signal Processing at Tampere University of Technology. The samples in the dataset were collected from the online audio database Freesound. The dataset consists of three classes, each containing 50 samples; the classes are 'dog', 'bird', and 'rain' (hence the name DBR).
Source: [https://zenodo.org/record/1069747](https://zenodo.org/record/1069747)
Image Source: [https://medium.com/@anonyomous.ut.grad.student/building-an-audio-classifier-f7c4603aa989](https://medium.com/@anonyomous.ut.grad.student/building-an-audio-classifier-f7c4603aa989) | Provide a detailed description of the following dataset: DBR |
DESED | The **DESED** dataset is a dataset designed to recognize sound event classes in domestic environments. The dataset is designed to be used for sound event detection (SED, recognize events with their time boundaries) but it can also be used for sound event tagging (SET, indicate presence of an event in an audio file).
The dataset is composed of 10 event classes to recognize in 10 second audio files. The classes are: Alarm/bell/ringing, Blender, Cat, Dog, Dishes,
Electric shaver/toothbrush, Frying, Running water, Speech, Vacuum cleaner. | Provide a detailed description of the following dataset: DESED |
FSL4 | The **FSL4** dataset contains ~4000 user-contributed loops uploaded to Freesound. Loops were selected by searching Freesound for sounds with the query terms loop and bpm, and then automatically parsing the returned sound filenames, tags and textual descriptions to identify tempo annotations made by users. For example, a sound containing the tag 120bpm is considered to have a ground truth of 120 BPM.
Source: [https://zenodo.org/record/3685832](https://zenodo.org/record/3685832)
Image Source: [https://archives.ismir.net/ismir2016/paper/000195.pdf](https://archives.ismir.net/ismir2016/paper/000195.pdf) | Provide a detailed description of the following dataset: FSL4 |
Freesound One-Shot Percussive Sounds | The **Freesound One-Shot Percussive Sounds** dataset contains 10254 one-shot (single event) percussive sounds from Freesound.org and the corresponding timbral analysis. These were used to train the generative model for "Neural Percussive Synthesis Parameterised by High-Level Timbral Features".
Source: [https://zenodo.org/record/3665275](https://zenodo.org/record/3665275)
Image Source: [https://freesound.org/people/Robinhood76/sounds/63616/](https://freesound.org/people/Robinhood76/sounds/63616/) | Provide a detailed description of the following dataset: Freesound One-Shot Percussive Sounds |
FSD50K | Freesound Dataset 50k (or **FSD50K** for short) is an open dataset of human-labeled sound events containing 51,197 Freesound clips unequally distributed in 200 classes drawn from the AudioSet Ontology. FSD50K has been created at the Music Technology Group of Universitat Pompeu Fabra. It consists mainly of sound events produced by physical sound sources and production mechanisms, including human sounds, sounds of things, animals, natural sounds, musical instruments and more.
Source: [https://zenodo.org/record/4060432](https://zenodo.org/record/4060432)
Image Source: [https://labs.freesound.org/datasets/](https://labs.freesound.org/datasets/) | Provide a detailed description of the following dataset: FSD50K |
SimSceneTVB Learning | **SimSceneTVB Learning** is a dataset of 600 simulated sound scenes of 45 s each representing urban sound environments, simulated using the simScene Matlab library. The dataset is divided into two parts, a train subset (400 scenes) and a test subset (200 scenes), for the development of learning-based models.
Each scene is composed of three main sources (traffic, human voices and birds) according to an original scenario, which is composed semi-randomly, conditioned on one of five ambiances: park, quiet street, noisy street, very noisy street and square. Separate channels for the contribution of each source are available. The base audio files used for simulation are obtained from Freesound (https://freesound.org) and LibriSpeech (http://www.openslr.org/12). The sound scenes are scaled according to a playback sound level in dB, which is drawn randomly but remains plausible for the given ambiance.
Source: [https://zenodo.org/record/3248703](https://zenodo.org/record/3248703)
Image Source: [https://hal.archives-ouvertes.fr/hal-01078098v2/document](https://hal.archives-ouvertes.fr/hal-01078098v2/document) | Provide a detailed description of the following dataset: SimSceneTVB Learning |
SimSceneTVB Perception | **SimSceneTVB Perception** is a corpus of 100 sound scenes of 45 s each representing urban sound environments, including 6 scenes recorded in Paris, 19 scenes simulated using simScene to replicate recorded scenarios, and 75 scenes simulated using simScene with diverse new scenarios, containing traffic, human voice and bird sources. The base audio files used for simulation are obtained from Freesound (https://freesound.org) and LibriSpeech (http://www.openslr.org/12).
Source: [https://zenodo.org/record/3248734](https://zenodo.org/record/3248734)
Image Source: [https://hal.archives-ouvertes.fr/hal-01078098v2/document](https://hal.archives-ouvertes.fr/hal-01078098v2/document) | Provide a detailed description of the following dataset: SimSceneTVB Perception |
Sound Events for Surveillance Applications | The **Sound Events for Surveillance Applications** (SESA) dataset files were obtained from Freesound. The dataset is divided between train (480 files) and test (105 files) folders. All audio files are mono-channel, 16 kHz, 8-bit WAV files of up to 33 seconds. The classes are: 0 - Casual (not a threat), 1 - Gunshot, 2 - Explosion, 3 - Siren (also contains alarms).
Source: [https://zenodo.org/record/3519845](https://zenodo.org/record/3519845)
Image Source: [https://labs.freesound.org/datasets/](https://labs.freesound.org/datasets/) | Provide a detailed description of the following dataset: Sound Events for Surveillance Applications |
TUT Rare Sound Events 2017 | The TUT Rare Sound Events 2017 development dataset consists of source files for creating mixtures of rare sound events (classes: baby cry, gun shot, glass break) with background audio, as well as a set of readily generated mixtures and recipes for generating them. The "source" part of the dataset consists of three subsets: (a) background recordings from 15 different acoustic scenes, (b) recordings of the target rare sound events from three classes, accompanied by annotations of their temporal occurrences, and (c) a set of meta files providing the cross-validation setup: lists of background and target event recordings split into training and test subsets (called "devtrain" and "devtest", respectively, indicating they are provided as the development dataset, as opposed to the evaluation dataset released separately).
The mixture set consists of two subsets (training and testing), each containing ~1500 mixtures (~500 per target class in each subset, with half of the mixtures not containing any target class events).
Diment, Aleksandr, Mesaros, Annamaria, Heittola, Toni, & Virtanen, Tuomas. (2017). TUT Rare sound events, Development dataset [Data set]. Zenodo. http://doi.org/10.5281/zenodo.401395
Source: [https://zenodo.org/record/401395](https://zenodo.org/record/401395)
Image Source: [http://dcase.community/challenge2017/task-rare-sound-event-detection](http://dcase.community/challenge2017/task-rare-sound-event-detection) | Provide a detailed description of the following dataset: TUT Rare Sound Events 2017 |
UrbanSound8K | UrbanSound8K is an audio dataset that contains 8,732 labeled sound excerpts (<=4 s) of urban sounds from 10 classes: air_conditioner, car_horn, children_playing, dog_bark, drilling, engine_idling, gun_shot, jackhammer, siren, and street_music. The classes are drawn from the urban sound taxonomy. All excerpts are taken from field recordings uploaded to www.freesound.org. | Provide a detailed description of the following dataset: UrbanSound8K |
VocalImitationSet | The **VocalImitationSet** is a collection of crowd-sourced vocal imitations of a large set of diverse sounds collected from Freesound (https://freesound.org/), which were curated based on Google's AudioSet ontology (https://research.google.com/audioset/).
Source: [https://zenodo.org/record/1340763](https://zenodo.org/record/1340763)
Image Source: [https://www.researchgate.net/publication/332799163_VOCAL_IMITATION_SET_A_DATASET_OF_VOCALLY_IMITATED_SOUND_EVENTS_USING_THE_AUDIOSET_ONTOLOGY](https://www.researchgate.net/publication/332799163_VOCAL_IMITATION_SET_A_DATASET_OF_VOCALLY_IMITATED_SOUND_EVENTS_USING_THE_AUDIOSET_ONTOLOGY) | Provide a detailed description of the following dataset: VocalImitationSet |
TUT Sound Events 2018 | The TUT Sound Events 2018 dataset consists of real-life first-order Ambisonic (FOA) format recordings with stationary point sources, each associated with a spatial coordinate. The dataset was generated by collecting impulse responses (IR) from a real environment using the Eigenmike spherical microphone array. The measurement was done by slowly moving a Genelec G Two loudspeaker, continuously playing a maximum length sequence, around the array in a circular trajectory at one elevation at a time. The playback volume was set to be 30 dB greater than the ambient sound level. The recording was done in a corridor inside the university, with classrooms around it, during work hours. The IRs were collected at elevations −40° to 40° in 10-degree increments at 1 m from the Eigenmike, and at elevations −20° to 20° in 10-degree increments at 2 m.
Source: [https://zenodo.org/record/1237793](https://zenodo.org/record/1237793)
Image Source: [https://www.cs.tut.fi/~mesaros/pubs/mesaros_eusipco2016-dcase.pdf](https://www.cs.tut.fi/~mesaros/pubs/mesaros_eusipco2016-dcase.pdf) | Provide a detailed description of the following dataset: TUT Sound Events 2018 |
aGender | The **aGender** corpus contains audio recordings of predefined utterances and free speech produced by humans of different age and gender. Each utterance is labeled as one of four age groups: Child, Youth, Adult, Senior, and as one of three gender classes: Female, Male and Child.
Source: [Convolutional RNN: an Enhanced Model for Extracting Features from Sequential Data](https://arxiv.org/abs/1602.05875)
Image Source: [http://www.lrec-conf.org/proceedings/lrec2010/pdf/262_Paper.pdf](http://www.lrec-conf.org/proceedings/lrec2010/pdf/262_Paper.pdf) | Provide a detailed description of the following dataset: aGender |
TUT Sound Events 2017 | The **TUT Sound Events 2017** dataset contains 24 audio recordings in a street environment and contains 6 different classes. These classes are: brakes squeaking, car, children, large vehicle, people speaking, and people walking.
Source: [Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling](https://arxiv.org/abs/1907.08506)
Image Source: [https://hal.inria.fr/hal-02067935/document](https://hal.inria.fr/hal-02067935/document) | Provide a detailed description of the following dataset: TUT Sound Events 2017 |
DCASE 2014 | DCASE2014 is an audio classification benchmark.
Source: [Cooperative Learning of Audio and Video Models from Self-Supervised Synchronization](https://arxiv.org/abs/1807.00230) | Provide a detailed description of the following dataset: DCASE 2014 |
LOCATA | The **LOCATA** dataset is a dataset for acoustic source localization. It consists of real-world ambisonic speech recordings with optically tracked azimuth-elevation labels.
Source: [Regression and Classification for Direction-of-Arrival Estimation with Convolutional Recurrent Neural Networks](https://arxiv.org/abs/1904.08452)
Image Source: [https://www.locata.lms.tf.fau.de/files/2018/05/LOCATA_Paper_SAM_Workshop_2018.pdf](https://www.locata.lms.tf.fau.de/files/2018/05/LOCATA_Paper_SAM_Workshop_2018.pdf) | Provide a detailed description of the following dataset: LOCATA |
ChestX-ray8 | **ChestX-ray8** is a medical imaging dataset which comprises 108,948 frontal-view X-ray images of 32,717 unique patients (collected from 1992 to 2015), with eight common disease labels text-mined from the radiological reports via NLP techniques. | Provide a detailed description of the following dataset: ChestX-ray8 |
PPMI | The **Parkinson’s Progression Markers Initiative** (**PPMI**) dataset originates from an observational clinical and longitudinal study comprising evaluations of people with Parkinson’s disease (PD), people at high risk of PD, and healthy people. | Provide a detailed description of the following dataset: PPMI |
ISIC 2017 Task 1 | The ISIC 2017 dataset was published by the International Skin Imaging Collaboration (ISIC) as a large-scale dataset of dermoscopy images. The Task 1 challenge dataset for lesion segmentation contains 2,000 images for training with ground truth segmentations (2000 binary mask images).
Source: [https://challenge.isic-archive.com/landing/2017/42](https://challenge.isic-archive.com/landing/2017/42)
Image Source: [https://challenge.isic-archive.com/landing/2017/42](https://challenge.isic-archive.com/landing/2017/42) | Provide a detailed description of the following dataset: ISIC 2017 Task 1 |
ISIC 2017 Task 2 | The ISIC 2017 dataset was published by the International Skin Imaging Collaboration (ISIC) as a large-scale dataset of dermoscopy images. The Task 2 challenge dataset for lesion dermoscopic feature extraction contains the original lesion image, a corresponding superpixel mask, and superpixel-mapped expert annotations of the presence and absence of the following features: (a) network, (b) negative network, (c) streaks and (d) milia-like cysts.
Source: [https://challenge.isic-archive.com/landing/2017/43](https://challenge.isic-archive.com/landing/2017/43)
Image Source: [https://challenge.isic-archive.com/landing/2017/43](https://challenge.isic-archive.com/landing/2017/43) | Provide a detailed description of the following dataset: ISIC 2017 Task 2 |
ISIC 2017 Task 3 | The ISIC 2017 dataset was published by the International Skin Imaging Collaboration (ISIC) as a large-scale dataset of dermoscopy images. The Task 3 challenge dataset for lesion classification contains 2,000 images for training, including 374 melanoma, 254 seborrheic keratosis and 1,372 benign nevi.
Source: [https://challenge.isic-archive.com/landing/2017/42](https://challenge.isic-archive.com/landing/2017/42)
Image Source: [https://challenge.isic-archive.com/landing/2017/44](https://challenge.isic-archive.com/landing/2017/44) | Provide a detailed description of the following dataset: ISIC 2017 Task 3 |
ISIC 2018 Task 1 | The ISIC 2018 dataset was published by the International Skin Imaging Collaboration (ISIC) as a large-scale dataset of dermoscopy images. This Task 1 dataset is the challenge on lesion segmentation. It includes 2594 images.
Source: [Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions](https://arxiv.org/abs/1909.00166)
Image Source: [https://challenge2018.isic-archive.com/task1/](https://challenge2018.isic-archive.com/task1/) | Provide a detailed description of the following dataset: ISIC 2018 Task 1 |
ISIC 2018 Task 2 | The ISIC 2018 dataset was published by the International Skin Imaging Collaboration (ISIC) as a large-scale dataset of dermoscopy images. The Task 2 dataset is the challenge on lesion attribute detection. It includes 2594 images. The task is to detect the following dermoscopic attributes: pigment network, negative network, streaks, milia-like cysts and globules (including dots).
Source: [Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions](https://arxiv.org/abs/1909.00166)
Image Source: [https://challenge2018.isic-archive.com/task2/](https://challenge2018.isic-archive.com/task2/) | Provide a detailed description of the following dataset: ISIC 2018 Task 2 |
ISIC 2018 Task 3 | The ISIC 2018 dataset was published by the International Skin Imaging Collaboration (ISIC) as a large-scale dataset of dermoscopy images. The Task 3 dataset is the challenge on lesion classification. It includes 2594 images. The task is to classify the dermoscopic images into one of the following categories: melanoma, melanocytic nevus, basal cell carcinoma, actinic keratosis / Bowen’s disease, benign keratosis, dermatofibroma, and vascular lesion.
Source: [Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions](https://arxiv.org/abs/1909.00166)
Image Source: [https://challenge2018.isic-archive.com/task3/](https://challenge2018.isic-archive.com/task3/) | Provide a detailed description of the following dataset: ISIC 2018 Task 3 |
HAM10000 | **HAM10000** is a dataset of 10,000 training images for detecting pigmented skin lesions. The authors collected dermatoscopic images from different populations, acquired and stored by different modalities. | Provide a detailed description of the following dataset: HAM10000 |
BCN_20000 | **BCN_20000** is a dataset composed of 19,424 dermoscopic images of skin lesions captured from 2010 to 2016 in the facilities of the Hospital Clínic in Barcelona. The dataset can be used for lesion recognition tasks such as lesion segmentation, lesion detection and lesion classification.
Source: [https://arxiv.org/abs/1908.02288](https://arxiv.org/abs/1908.02288)
Image Source: [https://arxiv.org/abs/1908.02288](https://arxiv.org/abs/1908.02288) | Provide a detailed description of the following dataset: BCN_20000 |
MSK | The **MSK** dataset is a dataset for lesion recognition from the Memorial Sloan-Kettering Cancer Center. It is used as part of the ISIC lesion recognition challenges.
Source: [https://arxiv.org/pdf/1710.05006.pdf](https://arxiv.org/pdf/1710.05006.pdf)
Image Source: [https://arxiv.org/pdf/1902.03368.pdf](https://arxiv.org/pdf/1902.03368.pdf) | Provide a detailed description of the following dataset: MSK |
NeuB1 | **NeuB1** is a microscopic neuronal image dataset for retinal vessel segmentation, which contains 112 images of size 512 x 152. The train/test split is 37/75.
Image Source: [https://web.bii.a-star.edu.sg/~zhaoh/Jaydeep_Tracing/](https://web.bii.a-star.edu.sg/~zhaoh/Jaydeep_Tracing/) | Provide a detailed description of the following dataset: NeuB1 |
BraTS 2017 | The BraTS 2017 dataset contains 285 brain tumor MRI scans, with four MRI modalities (T1, T1ce, T2, and FLAIR) for each scan. The dataset also provides full masks for the brain tumors, with labels for ED, ET, and NET/NCR. The segmentation evaluation is based on three tasks: WT, TC and ET segmentation. | Provide a detailed description of the following dataset: BraTS 2017 |
BraTS 2015 | The **BraTS 2015** dataset is a dataset for brain tumor image segmentation. It consists of 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) MRIs. The four MRI modalities are T1, T1c, T2, and T2-FLAIR. Segmented “ground truth” is provided for four intra-tumoral classes: edema, enhancing tumor, non-enhancing tumor, and necrosis. | Provide a detailed description of the following dataset: BraTS 2015 |
PROMISE12 | The **PROMISE12** dataset was made available for the MICCAI 2012 prostate segmentation challenge. Magnetic Resonance (MR) images (T2-weighted) of 50 patients with various diseases were acquired at different locations with several MRI vendors and scanning protocols. | Provide a detailed description of the following dataset: PROMISE12 |
LUNA16 | The **LUNA16** (LUng Nodule Analysis) dataset is a dataset for lung segmentation. It consists of 1,186 lung nodules annotated in 888 CT scans. | Provide a detailed description of the following dataset: LUNA16 |
BraTS 2013 | BRATS 2013 is a brain tumor segmentation dataset consisting of synthetic and real images, each of which is further divided into high-grade gliomas (HG) and low-grade gliomas (LG). There are 25 patients with both synthetic HG and LG images, 20 patients with real HG images, and 10 patients with real LG images. For each patient, FLAIR, T1, T2, and post-Gadolinium T1 magnetic resonance (MR) image sequences are available. | Provide a detailed description of the following dataset: BraTS 2013 |
ISRUC-Sleep | **ISRUC-Sleep** is a polysomnographic (PSG) dataset. The data were obtained from human adults, including healthy subjects, and subjects with sleep disorders under the effect of sleep medication. The dataset, which is structured to support different research objectives, comprises three groups of data: (a) data concerning 100 subjects, with one recording session per subject, (b) data gathered from 8 subjects; two recording sessions were performed per subject, which are useful for studies involving changes in the PSG signals over time, (c) data collected from one recording session related to 10 healthy subjects, which are useful for studies involving comparison of healthy subjects with the patients suffering from sleep disorders.
Source: [https://sleeptight.isr.uc.pt/](https://sleeptight.isr.uc.pt/)
Image Source: [https://sleeptight.isr.uc.pt/](https://sleeptight.isr.uc.pt/) | Provide a detailed description of the following dataset: ISRUC-Sleep |
LiTS17 | **LiTS17** is a liver tumor segmentation benchmark. The data and segmentations are provided by various clinical sites around the world. The training data set contains 130 CT scans and the test data set 70 CT scans.
Image Source: [https://arxiv.org/pdf/1707.07734.pdf](https://arxiv.org/pdf/1707.07734.pdf) | Provide a detailed description of the following dataset: LiTS17 |
NIH-LN | **NIH-Lymph Node** (**NIH-LN**) contains 388 mediastinal LNs in 90 CT scans and 595 abdominal LNs in 86 scans.
Source: [https://sleeptight.isr.uc.pt/](https://sleeptight.isr.uc.pt/) | Provide a detailed description of the following dataset: NIH-LN |
BraTS 2014 | BRATS 2014 is a brain tumor segmentation dataset.
Source: [Learning Fixed Points in Generative Adversarial Networks: From Image-to-Image Translation to Disease Detection and Localization](https://arxiv.org/abs/1908.06965)
Image Source: [http://people.csail.mit.edu/menze/papers/proceedings_miccai_brats_2014.pdf](http://people.csail.mit.edu/menze/papers/proceedings_miccai_brats_2014.pdf) | Provide a detailed description of the following dataset: BraTS 2014 |
BraTS 2016 | BRATS 2016 is a brain tumor segmentation dataset. It shares the same training set as BRATS 2015, which consists of 220 HGG and 54 LGG cases. Its testing dataset consists of 191 cases with unknown grades.
Image Source: [https://sites.google.com/site/braintumorsegmentation/home/brats_2016](https://sites.google.com/site/braintumorsegmentation/home/brats_2016) | Provide a detailed description of the following dataset: BraTS 2016 |
DBP15K | DBP15k contains four language-specific KGs that are respectively extracted from English (En), Chinese (Zh), French (Fr) and Japanese (Ja) DBpedia, each of which contains around 65k-106k entities. Three sets of 15k alignment labels are constructed to align entities between each of the other three languages and En. | Provide a detailed description of the following dataset: DBP15K |
MedDialog | The MedDialog dataset (Chinese) contains conversations (in Chinese) between doctors and patients. It has 1.1 million dialogues and 4 million utterances. The data is continuously growing and more dialogues will be added. The raw dialogues are from haodf.com. All copyrights of the data belong to haodf.com. | Provide a detailed description of the following dataset: MedDialog |
Conceptual Captions | Automatic image captioning is the task of producing a natural-language utterance (usually a sentence) that correctly reflects the visual content of an image. Up to this point, the resource most used for this task was the MS-COCO dataset, containing around 120,000 images and 5-way image-caption annotations (produced by paid annotators).
Google's Conceptual Captions dataset has more than 3 million images, paired with natural-language captions. In contrast with the curated style of the MS-COCO images, Conceptual Captions images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. The raw descriptions are harvested from the Alt-text HTML attribute associated with web images. The authors developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions. | Provide a detailed description of the following dataset: Conceptual Captions |
CNN/Daily Mail | **CNN/Daily Mail** is a dataset for text summarization. Human-generated abstractive summary bullets were generated from news stories on the CNN and Daily Mail websites as questions (with one of the entities hidden), with the stories as the corresponding passages from which the system is expected to answer the fill-in-the-blank question. The authors released the scripts that crawl, extract and generate pairs of passages and questions from these websites.
In all, the corpus has 286,817 training pairs, 13,368 validation pairs and 11,487 test pairs, as defined by their scripts. The source documents in the training set have 766 words spanning 29.74 sentences on average, while the summaries consist of 53 words and 3.72 sentences. | Provide a detailed description of the following dataset: CNN/Daily Mail |
EuroSAT | **EuroSAT** is a dataset and deep learning benchmark for land use and land cover classification. The dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consists of 10 classes with a total of 27,000 labeled and geo-referenced images. | Provide a detailed description of the following dataset: EuroSAT |
RESISC45 | RESISC45 dataset is a dataset for Remote Sensing Image Scene Classification (RESISC). It contains 31,500 RGB images of size 256×256 divided into 45 scene classes, each class containing 700 images. Among its notable features, RESISC45 contains varying spatial resolution ranging from 20cm to more than 30m/px. | Provide a detailed description of the following dataset: RESISC45 |
Country211 | Country211 is a dataset released by OpenAI, designed to assess the geolocation capability of visual representations. It filters the YFCC100m dataset (Thomee et al., 2016) to find 211 countries (defined as having an ISO-3166 country code) that have at least 300 photos with GPS coordinates. OpenAI built a balanced dataset with 211 categories, by sampling 200 photos for training and 100 photos for testing, for each country. | Provide a detailed description of the following dataset: Country211 |
Hateful Memes | The Hateful Memes data set is a multimodal dataset for hateful meme detection (image + text) that contains 10,000+ new multimodal examples created by Facebook AI. Images were licensed from Getty Images so that researchers can use the data set to support their work. | Provide a detailed description of the following dataset: Hateful Memes |
Rendered SST2 | The **Rendered SST2** dataset is a dataset released by OpenAI, that measures the optical character recognition capability of visual representations.
It uses sentences from the [Stanford Sentiment Treebank](/dataset/sst) dataset and renders them into images, with black texts on a white background, in a 448×448 resolution. | Provide a detailed description of the following dataset: Rendered SST2 |
AccentDB | AccentDB is a database that contains samples of 4 Indian-English accents, and a compilation of samples from 4 native-English, and a metropolitan Indian-English accent. | Provide a detailed description of the following dataset: AccentDB |
Common Voice | **Common Voice** is an audio dataset consisting of unique MP3 files with corresponding text files. There are 9,283 recorded hours in the dataset. The dataset also includes demographic metadata like age, sex, and accent. The dataset consists of 7,335 validated hours in 60 languages. | Provide a detailed description of the following dataset: Common Voice |
CREMA-D | **CREMA-D** is an emotional multimodal actor dataset of 7,442 original clips from 91 actors. These clips were from 48 male and 43 female actors between the ages of 20 and 74, coming from a variety of races and ethnicities (African American, Asian, Caucasian, Hispanic, and Unspecified).
Actors spoke from a selection of 12 sentences. The sentences were presented using one of six different emotions (Anger, Disgust, Fear, Happy, Neutral, and Sad) and four different emotion levels (Low, Medium, High, and Unspecified).
Participants rated the emotion and emotion levels based on the combined audiovisual presentation, the video alone, and the audio alone. Due to the large number of ratings needed, this effort was crowd-sourced and a total of 2443 participants each rated 90 unique clips, 30 audio, 30 visual, and 30 audio-visual. 95% of the clips have more than 7 ratings. | Provide a detailed description of the following dataset: CREMA-D |
DementiaBank | DementiaBank is a shared database of multimedia interactions for the study of communication in dementia. The dataset contains 117 people diagnosed with Alzheimer's Disease and 93 healthy people, reading a description of an image. The principal task and benchmark is to classify each group. | Provide a detailed description of the following dataset: DementiaBank |
FUSS | The **Free Universal Sound Separation (FUSS)** dataset is a database of arbitrary sound mixtures and source-level references, for use in experiments on arbitrary sound separation. FUSS is based on the FSD50K corpus. | Provide a detailed description of the following dataset: FUSS |
Groove | The **Groove MIDI Dataset (GMD)** is composed of 13.6 hours of aligned MIDI and (synthesized) audio of human-performed, tempo-aligned expressive drumming. The dataset contains 1,150 MIDI files and over 22,000 measures of drumming. | Provide a detailed description of the following dataset: Groove |
GTZAN | The **GTZAN** audio dataset contains 1000 tracks of 30-second length. There are 10 genres, each containing 100 tracks, which are all 22050Hz Mono 16-bit audio files in .wav format. The genres are:
- blues
- classical
- country
- disco
- hiphop
- jazz
- metal
- pop
- reggae
- rock | Provide a detailed description of the following dataset: GTZAN |
gtzan_music_speech | **gtzan_music_speech** is a dataset for music/speech discrimination. It consists of 120 tracks of 30-second length. Each class (music/speech) has 60 samples. The tracks are all 22050Hz Mono 16-bit audio files in .wav format. | Provide a detailed description of the following dataset: gtzan_music_speech |
iVQA | An open-ended VideoQA benchmark that aims to: i) provide a well-defined evaluation by including five correct answer annotations per question and ii) avoid questions which can be answered without the video.
iVQA contains 10,000 video clips with one question and five corresponding answers per clip. Moreover, we manually reduce the language bias by excluding questions that could be answered without watching the video. | Provide a detailed description of the following dataset: iVQA |
HowTo100M | HowTo100M is a large-scale dataset of narrated videos with an emphasis on instructional videos where content creators teach complex tasks with an explicit intention of explaining the visual content on screen. HowTo100M features a total of:
- 136M video clips with captions sourced from 1.2M YouTube videos (15 years of video)
- 23k activities from domains such as cooking, hand crafting, personal care, gardening or fitness
Each video is associated with a narration available as subtitles automatically downloaded from YouTube. | Provide a detailed description of the following dataset: HowTo100M |
LibriTTS | **LibriTTS** is a multi-speaker English corpus of approximately 585 hours of read English speech at 24kHz sampling rate, prepared by Heiga Zen with the assistance of Google Speech and Google Brain team members. The LibriTTS corpus is designed for TTS research. It is derived from the original materials (mp3 audio files from LibriVox and text files from Project Gutenberg) of the LibriSpeech corpus. The main differences from the LibriSpeech corpus are listed below:
- The audio files are at 24kHz sampling rate.
- The speech is split at sentence breaks.
- Both original and normalized texts are included.
- Contextual information (e.g., neighbouring sentences) can be extracted.
- Utterances with significant background noise are excluded. | Provide a detailed description of the following dataset: LibriTTS |
SAVEE | The **Surrey Audio-Visual Expressed Emotion (SAVEE)** dataset was recorded as a pre-requisite for the development of an automatic emotion recognition system. The database consists of recordings from 4 male actors in 7 different emotions, 480 British English utterances in total. The sentences were chosen from the standard TIMIT corpus and phonetically balanced for each emotion. The data were recorded in a visual media lab with high-quality audio-visual equipment, processed and labeled. To check the quality of performance, the recordings were evaluated by 10 subjects under audio, visual and audio-visual conditions. Classification systems were built using standard features and classifiers for each of the audio, visual and audio-visual modalities, and speaker-independent recognition rates of 61%, 65% and 84% were achieved, respectively. | Provide a detailed description of the following dataset: SAVEE |
Speech Commands | **Speech Commands** is an audio dataset of spoken words designed to help train and evaluate keyword spotting systems. | Provide a detailed description of the following dataset: Speech Commands |
FSDD | **Free Spoken Digit Dataset (FSDD)** is a simple audio/speech dataset consisting of recordings of spoken digits in wav files at 8kHz. The recordings are trimmed so that they have near minimal silence at the beginnings and ends. It contains data from 6 speakers, 3,000 recordings (50 of each digit per speaker), and English pronunciations. | Provide a detailed description of the following dataset: FSDD |
HowToVQA69M | HowToVQA69M is a dataset of 69,270,581 video clip, question and answer triplets (v, q, a). It is two orders of magnitude larger than any of the currently available VideoQA datasets.
On average, each original video results in 43 video clips, where each clip lasts 12.1 seconds and is associated with 1.2 question-answer pairs. Questions and answers contain 8.7 and 2.4 words on average, respectively. HowToVQA69M is highly diverse and contains over 16M unique answers, of which over 2M unique answers appear more than once and over 300K unique answers appear more than ten times. | Provide a detailed description of the following dataset: HowToVQA69M |
TED-LIUM 3 | **TED-LIUM 3** is an audio dataset collected from TED Talks. It contains:
- 2351 audio talks in NIST sphere format (SPH), including talks from TED-LIUM 2 (note: the same talks but not the same audio files; only these audio files must be used with the TED-LIUM 3 STM files)
- 452 hours of audio
- 2351 aligned automatic transcripts in STM format
- TEDLIUM 2 dev and test data: 19 TED talks in SPH format with corresponding manual transcriptions (cf. ‘legacy’ distribution below).
- Dictionary with pronunciations (159848 entries), same file as the one included in TED-LIUM 2
- Selected monolingual data for language modeling from WMT12 publicly available corpora: these files come from the TED-LIUM 2 release, but have been modified to get a tokenization more relevant for English language | Provide a detailed description of the following dataset: TED-LIUM 3 |
Yesno | **Yesno** is an audio dataset consisting of 60 recordings of one individual saying yes or no in Hebrew; each recording is eight words long. It was created for the Kaldi audio project by an author who wishes to remain anonymous. | Provide a detailed description of the following dataset: Yesno |
AbstractReasoning | **AbstractReasoning** is a dataset for abstract reasoning, where the goal is to infer the correct answer panel from the given context panels.
Image Source: [Barrett et al](https://arxiv.org/pdf/1807.04225.pdf) | Provide a detailed description of the following dataset: AbstractReasoning |
BCCD | **BCCD** is a small-scale dataset for blood cell detection. | Provide a detailed description of the following dataset: BCCD |
M-VAD Names | The dataset contains the annotations of characters' visual appearances, in the form of tracks of face bounding boxes, and the associations with characters' textual mentions, when available. The detection and annotation of the visual appearances of characters in each video clip of each movie was achieved through a semi-automatic approach. The released dataset contains more than 24k annotated video clips, including 63k visual tracks and 34k textual mentions, all associated with their character identities. | Provide a detailed description of the following dataset: M-VAD Names |
TGIF | The Tumblr GIF (TGIF) dataset contains 100K animated GIFs and 120K sentences describing visual content of the animated GIFs. The animated GIFs have been collected from Tumblr, from randomly selected posts published between May and June of 2015. The dataset provides the URLs of animated GIFs. The sentences are collected via crowdsourcing, with a carefully designed annotation interface that ensures high quality dataset. There is one sentence per animated GIF for the training and validation splits, and three sentences per GIF for the test split. The dataset can be used to evaluate animated GIF/video description techniques. | Provide a detailed description of the following dataset: TGIF |
TGIF-QA | The TGIF-QA dataset contains 165K QA pairs for the animated GIFs from the TGIF dataset [Li et al. CVPR 2016]. The question & answer pairs are collected via crowdsourcing with a carefully designed user interface to ensure quality. The dataset can be used to evaluate video-based Visual Question Answering techniques. | Provide a detailed description of the following dataset: TGIF-QA |
TutorialVQA | **TutorialVQA** is a new type of dataset used to find answer spans in tutorial videos. The dataset includes about 6,000 triples, comprised of videos, questions, and answer spans manually collected from screencast tutorial videos with spoken narratives for a photo-editing software. | Provide a detailed description of the following dataset: TutorialVQA |
How2R | Amazon Mechanical Turk (AMT) is used to collect annotations on HowTo100M videos. 30k 60-second clips are randomly sampled from 9,421 videos, and each clip is presented to the turkers, who are asked to select a video segment containing a single, self-contained scene. After this segment selection step, another group of workers is asked to write descriptions for each displayed segment. Narrations are not provided to the workers to ensure that their written queries are based on visual content only. These final video segments are 10-20 seconds long on average, and the length of queries ranges from 8 to 20 words. From this process, 51,390 queries are collected for 24k 60-second clips from 9,371 videos in HowTo100M, on average 2-3 queries per clip. The video clips and their associated queries are split into 80% train, 10% val and 10% test. | Provide a detailed description of the following dataset: How2R |
How2QA | To collect How2QA for the video QA task, the same set of selected video clips is presented to another group of AMT workers for multiple-choice QA annotation. Each worker is assigned one video segment and asked to write one question with four answer candidates (one correct and three distractors). Similarly, narrations are hidden from the workers to ensure the collected QA pairs are not biased by subtitles. Similar to TVQA, the start and end points of the relevant moment are provided for each question. After filtering low-quality annotations, the final dataset contains 44,007 QA pairs for 22k 60-second clips selected from 9,035 videos. | Provide a detailed description of the following dataset: How2QA |
CLIC | **CLIC** is a dataset for learned image compression. The dataset contains both RGB and grayscale images. | Provide a detailed description of the following dataset: CLIC |
3DMatch | The 3DMATCH benchmark evaluates how well descriptors (both 2D and 3D) can establish correspondences between RGB-D frames of different views. The dataset contains 2D RGB-D patches and 3D patches (local TDF voxel grid volumes) of wide-baselined correspondences.
The pixel size of each 2D patch is determined by the projection of the 0.3 m³ local 3D patch around the interest point onto the image plane. | Provide a detailed description of the following dataset: 3DMatch |
CIFAR-FS | **CIFAR100 few-shots** (**CIFAR-FS**) is randomly sampled from CIFAR-100 (Krizhevsky & Hinton, 2009) by using the same criteria with which miniImageNet has been generated. The average inter-class similarity is sufficiently high to represent a challenge for the current state of the art. Moreover, the limited original resolution of 32×32 makes the task harder and at the same time allows fast prototyping. | Provide a detailed description of the following dataset: CIFAR-FS |
PyBullet | PyBullet is an easy-to-use Python module for physics simulation, robotics and deep reinforcement learning based on the Bullet Physics SDK. With PyBullet you can load articulated bodies from URDF, SDF and other file formats. PyBullet provides forward dynamics simulation, inverse dynamics computation, forward and inverse kinematics, and collision detection and ray intersection queries. Aside from physics simulation, PyBullet supports rendering, with a CPU renderer and OpenGL visualization, and support for virtual reality headsets. | Provide a detailed description of the following dataset: PyBullet |
SemEval 2016 | SemEval-2016 (SemEval-16) is the tenth edition of the International Workshop on Semantic Evaluation, a series of shared tasks for evaluating computational semantic analysis systems. | Provide a detailed description of the following dataset: SemEval 2016 |
ELI5 | ELI5 is a dataset for long-form question answering. It contains 270K complex, diverse questions that require explanatory multi-sentence answers. Web search results are used as evidence documents to answer each question.
ELI5 is also a task in Dodecadialogue. | Provide a detailed description of the following dataset: ELI5 |
PixelHelp | PixelHelp includes 187 multi-step instructions of 4 task categories defined in https://support.google.com/pixelphone and annotated by humans. This dataset includes 88 general tasks, such as configuring accounts, 38 Gmail tasks, 31 Chrome tasks, and 30 Photos-related tasks. This dataset is an updated open-source version of the original PixelHelp dataset, which was used for testing the end-to-end grounding quality of the model in the paper "Mapping Natural Language Instructions to Mobile UI Action Sequences". Similar accuracy is obtained on this version of the dataset. | Provide a detailed description of the following dataset: PixelHelp |
RicoSCA | Rico is a public UI corpus with 72K Android UI screens mined from 9.7K Android apps (Deka et al., 2017). Each screen in Rico comes with a screenshot image and a view hierarchy of a collection of UI objects. The authors manually removed screens whose view hierarchies do not match their screenshots by asking annotators to visually verify whether the bounding boxes of view hierarchy leaves match each UI object on the corresponding screenshot image. This filtering results in 25K unique screens.
In total, RICOSCA contains 295,476 single-step synthetic commands for operating 177,962 different target objects across 25,677 Android screens. | Provide a detailed description of the following dataset: RicoSCA |
AndroidHowTo | AndroidHowTo contains 32,436 data points from 9,893 unique How-To instructions, split into training (8K), validation (1K) and test (900) sets. All test examples have perfect agreement across all three annotators for the entire sequence. In total, there are 190K operation spans, 172K object spans, and 321 input spans labeled. The lengths of the instructions range from 19 to 85 tokens, with a median of 59. They describe a sequence of actions from one to 19 steps, with a median of 5. | Provide a detailed description of the following dataset: AndroidHowTo |