| dataset_name | description | prompt |
|---|---|---|
OSTD | This dataset consists of 18 movies with durations ranging from 10 to 104 minutes, leveraged from the OVSD dataset (Rotman et al., 2016). For these videos, the summary length limit is set to be the minimum between 4 minutes and 10% of the video length. | Provide a detailed description of the following dataset: OSTD |
PolarRR | PolarRR is a new dataset with more than 100 types of glass in which obtained transmission images are perfectly aligned with input mixed images. | Provide a detailed description of the following dataset: PolarRR |
Lytro Illum | Lytro Illum is a new light field dataset using a Lytro Illum camera. 640 light fields are collected with significant variations in terms of size, textureness, background clutter and illumination, etc. Micro-lens image arrays and central viewing images are generated, and corresponding ground-truth maps are produced. | Provide a detailed description of the following dataset: Lytro Illum |
UFPR-Eyeglasses | The UFPR-Eyeglasses dataset has 1,135 images of both eyes (2,270 cropped eye images) from 83 subjects (166 classes). The dataset is used to evaluate the effect of the occlusion caused by eyeglasses in periocular recognition. | Provide a detailed description of the following dataset: UFPR-Eyeglasses |
Circa | The Circa (meaning ‘approximately’) dataset aims to help machine learning systems to solve the problem of interpreting indirect answers to polar questions.
The dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data is collected in 10 different social conversational situations (e.g. food preferences of a friend). Examples:
```
Q: Are you vegan?
A: I love burgers too much. [No]
Q: Do you like spicy food?
A: I put hot sauce on everything. [Yes]
Q: Would you like to go see live music?
A: If it’s not too crowded. [Yes, upon a condition]
```
Currently, the Circa annotations focus on a few classes such as ‘yes’, ‘no’ and ‘yes, upon condition’. The data can be used to build machine learning models which can replicate these classes on new question-answer pairs, and allow evaluation of methods for doing so. | Provide a detailed description of the following dataset: Circa |
QUVA Repetition | QUVA Repetition dataset consists of 100 videos displaying a wide variety of repetitive video dynamics, including swimming, stirring, cutting, combing and music-making. All videos have been annotated with individual cycle bounds and a total repetition count. | Provide a detailed description of the following dataset: QUVA Repetition |
NH-HAZE | NH-HAZE is an image dehazing dataset. Since haze is not uniformly distributed in many real cases, NH-HAZE provides a non-homogeneous, realistic dataset with pairs of real hazy and corresponding haze-free images. This is the first non-homogeneous image dehazing dataset and contains 55 outdoor scenes. The non-homogeneous haze has been introduced into the scenes using a professional haze generator that imitates the real conditions of hazy scenes. | Provide a detailed description of the following dataset: NH-HAZE |
PVDN | PVDN is a dataset for vehicle detection at night, using the light reflections caused by vehicle headlamps. It contains 59,746 annotated grayscale images from 346 different scenes in a rural environment at night. In these images, all oncoming vehicles, their corresponding light objects (e.g., headlamps), and their respective light reflections (e.g., light reflections on guardrails) are labeled. With this information, this dataset enables research into new methods of detecting oncoming vehicles based on the light reflections they cause, long before they are directly visible. | Provide a detailed description of the following dataset: PVDN |
JEC-QA | JEC-QA is a LQA (Legal Question Answering) dataset collected from the National Judicial Examination of China. It contains 26,365 multiple-choice and multiple-answer questions in total. The task of the dataset is to predict the answer using the questions and relevant articles. To do well on JEC-QA, both retrieving and answering are important. | Provide a detailed description of the following dataset: JEC-QA |
TrashCan | The TrashCan dataset is an instance-segmentation dataset of underwater trash. It is comprised of 7,212 annotated images which contain observations of trash, ROVs, and a wide variety of undersea flora and fauna. The annotations in this dataset take the format of instance segmentation annotations: bitmaps containing a mask marking which pixels in the image contain each object. The imagery in TrashCan is sourced from the J-EDI (JAMSTEC E-Library of Deep-sea Images) dataset, curated by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). | Provide a detailed description of the following dataset: TrashCan |
UCLA Aerial Event Dataset | The UCLA Aerial Event Dataset has been captured by a low-cost hex-rotor with a GoPro camera, which is able to eliminate high-frequency camera vibration and hover autonomously using a GPS and a barometer. It can fly 20–90 m above the ground and stay in the air for 5 minutes.
This hex-rotor has been used to take the set of videos in the dataset, captured in different places: hiking routes, parking lots, camping sites, picnic areas with shelters, restrooms, tables, trash bins and BBQ ovens. By detecting/tracking humans and objects in the videos, the videos can be annotated with events.
The original videos are pre-processed, including camera calibration and frame registration. After pre-processing, the dataset contains 27 videos in total, with lengths ranging from 2 to 5 minutes. Each video is annotated with hierarchical semantic information about the objects, roles, events and groups in the video. | Provide a detailed description of the following dataset: UCLA Aerial Event Dataset |
CQASUMM | CQASUMM is a dataset for CQA (Community Question Answering) summarization, constructed from the 4.4 million Yahoo! Answers L6 dataset. The dataset contains ~300k annotated samples. | Provide a detailed description of the following dataset: CQASUMM |
NeuralNews | NeuralNews is a dataset for machine-generated news detection. It consists of human-generated and machine-generated articles. The human-generated articles are extracted from the GoodNews dataset, which is extracted from the New York Times. It contains 4 types of articles:
- Real Articles and Real Captions
- Real Articles and Generated Captions
- Generated Articles and Real Captions
- Generated Articles and Generated Captions
In total, it contains about 32K samples of each article type (resulting in about 128K total). | Provide a detailed description of the following dataset: NeuralNews |
EyeCar | EyeCar is a dataset of driving videos of vehicles involved in rear-end collisions, paired with eye fixation data captured from human subjects. It contains 21 front-view videos that were captured in various traffic, weather, and daylight conditions. Each video is 30 seconds long and contains typical driving tasks (e.g., lane-keeping, merging-in, and braking) ending in rear-end collisions. | Provide a detailed description of the following dataset: EyeCar |
WordNet-feelings | WordNet-feelings is an affective dataset that identifies 3664 word senses as feelings and associates each of these with one of 9 categories of feeling. The 9 categories are: Actions, Anger, Attention, Attraction, Hedonics, Other, Physiological, Social, Wellbeing. | Provide a detailed description of the following dataset: WordNet-feelings |
Doc3DShade | Doc3DShade extends Doc3D with realistic lighting and shading. It follows a similar synthetic rendering procedure using captured document 3D shapes, but the final image generation step combines real shading of different types of paper materials under numerous illumination conditions. | Provide a detailed description of the following dataset: Doc3DShade |
Deep Fakes Dataset | The Deep Fakes Dataset is a collection of "in the wild" portrait videos for deepfake detection. The videos in the dataset are diverse real-world samples in terms of the source generative model, resolution, compression, illumination, aspect-ratio, frame rate, motion, pose, cosmetics, occlusion, content, and context. They originate from various sources such as news articles, forums, apps, and research presentations; totalling up to 142 videos, 32 minutes, and 17 GBs. Synthetic videos are matched with their original counterparts when possible. | Provide a detailed description of the following dataset: Deep Fakes Dataset |
GSL | ## Dataset Description
The [Greek Sign Language (GSL)](https://arxiv.org/abs/2007.12530) is a large-scale RGB+D dataset, suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The video captures are conducted using an Intel RealSense D435 RGB+D camera at a rate of 30 fps. Both the RGB and the depth streams are acquired in the same spatial resolution of 848×480 pixels. To increase variability in the videos, the camera position and orientation is slightly altered within subsequent recordings. Seven different signers are employed to perform 5 individual and commonly met scenarios in different public services. The average length of each scenario is twenty sentences.
The dataset contains 10,290 sentence instances, 40,785 gloss instances, 310 unique glosses (vocabulary size) and 331 unique sentences, with 4.23 glosses per sentence on average. Each signer is asked to perform the pre-defined dialogues five consecutive times. In all cases, the simulation considers a deaf person communicating with a single public service employee. The involved signer performs the sequence of glosses of both agents in the discussion. For the annotation of each gloss sequence, GSL linguistic experts are involved. The given annotations are at individual gloss and gloss sequence level. A translation of the gloss sentences to spoken Greek is also provided.
## Evaluation
The GSL dataset includes three evaluation setups:
- Signer-dependent continuous sign language recognition (GSL SD) – roughly 80% of the videos are used for training, corresponding to 8,189 instances. The remaining 1,063 (10%) are kept for validation and 1,043 (10%) for testing.
- Signer-independent continuous sign language recognition (GSL SI) – the selected test gloss sequences are not used in the training set, while all the individual glosses exist in the training set. In GSL SI, the recordings of one signer are left out for validation and testing (588 and 881 instances, respectively). The remaining 8,821 instances are used for training.
- Isolated gloss sign language recognition (GSL isol.) – the validation set consists of 2,231 gloss instances, the test set of 3,500, while the remaining 34,995 are used for training. All 310 unique glosses are seen in the training set.
For more information and results, please refer to our [paper](https://arxiv.org/abs/2007.12530).
## Paper Abstract: A Comprehensive Study on Sign Language Recognition Methods, Adaloglou et al. 2020
In this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted. By implementing the most recent deep neural network methods in this field, a thorough evaluation on multiple publicly available datasets is performed. The aim of the present study is to provide insights on sign language recognition, focusing on mapping non-segmented video streams to glosses. For this task, two new sequence training criteria, known from the fields of speech and scene text recognition, are introduced. Furthermore, a plethora of pretraining schemes are thoroughly discussed. Finally, a new RGB+D dataset for the Greek sign language is created. To the best of our knowledge, this is the first sign language dataset where sentence and gloss level annotations are provided for every video capture.
[Arxiv link](https://arxiv.org/abs/2007.12530) | Provide a detailed description of the following dataset: GSL |
SketchyScene | SketchyScene is a large-scale dataset of scene sketches to advance research on sketch understanding at both the object and scene level. The dataset is created through a novel and carefully designed crowdsourcing pipeline, enabling users to efficiently generate large quantities of realistic and diverse scene sketches. SketchyScene contains more than 29,000 scene-level sketches, 7,000+ pairs of scene templates and photos, and 11,000+ object sketches. All objects in the scene sketches have ground-truth semantic and instance masks. The dataset is also highly scalable and extensible, easily allowing augmenting and/or changing scene composition. | Provide a detailed description of the following dataset: SketchyScene |
ECHR | ECHR is an English legal judgment prediction dataset of cases from the European Court of Human Rights (ECHR). The dataset contains ~11.5k cases, including the raw text.
For each case, the dataset provides a list of facts extracted using regular expressions from the case description. Each case is also mapped to articles of the Convention that were violated (if any). An importance score is also assigned by ECHR. | Provide a detailed description of the following dataset: ECHR |
AmazonQA | AmazonQA consists of 923k questions, 3.6M answers and 14M reviews across 156k products. Building on the well-known Amazon dataset, additional annotations are collected, marking each question as either answerable or unanswerable based on the available reviews. | Provide a detailed description of the following dataset: AmazonQA |
emrQA | emrQA has 1 million question-logical form pairs and 400,000+ question-answer evidence pairs. | Provide a detailed description of the following dataset: emrQA |
SuperGLUE | **SuperGLUE** is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number performance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:
- More challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.
- More diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include coreference resolution and question answering (QA).
- Comprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.
- Improved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).
- Refined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit assignment to data and task creators. | Provide a detailed description of the following dataset: SuperGLUE |
TurkQA | TurkQA consists of a selection of sentences from English Wikipedia articles, with questions and answers crowdsourced from workers on Amazon Mechanical Turk. | Provide a detailed description of the following dataset: TurkQA |
XTREME | The **Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME)** benchmark was introduced to encourage more research on multilingual transfer learning. XTREME covers 40 typologically diverse languages spanning 12 language families and includes 9 tasks that require reasoning about different levels of syntax or semantics.
The languages in XTREME are selected to maximize language diversity, coverage in existing tasks, and availability of training data. Among these are many under-studied languages, such as the Dravidian languages Tamil (spoken in southern India, Sri Lanka, and Singapore), Telugu and Malayalam (spoken mainly in southern India), and the Niger-Congo languages Swahili and Yoruba, spoken in Africa. | Provide a detailed description of the following dataset: XTREME |
WikiMovies | WikiMovies is a dataset for question answering about movies. It contains ~100k questions in the movie domain, and was designed to be answerable by using either a perfect KB (based on OMDb) or raw Wikipedia text. | Provide a detailed description of the following dataset: WikiMovies |
MDD | Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion). | Provide a detailed description of the following dataset: MDD |
CBT | Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available thanks to Project Gutenberg. | Provide a detailed description of the following dataset: CBT |
Dialog-based Language Learning dataset | Dialog-based Language Learning dataset is designed to measure how well models can perform at learning as a student given a teacher’s textual responses to the student’s answer (as well as potentially receiving an external real-valued reward signal). | Provide a detailed description of the following dataset: Dialog-based Language Learning dataset |
TyDiQA-GoldP | **TyDiQA-GoldP** is the gold passage version of the Typologically Diverse Question Answering (TyDi QA) dataset, a benchmark for information-seeking question answering, which covers nine languages. The gold passage version is a simplified version of the primary task, which uses only the gold passage as context and excludes unanswerable questions. It is thus similar to XQuAD and MLQA, while being more challenging as questions have been written without seeing the answers, leading to 3× and 2× less lexical overlap compared to XQuAD and MLQA respectively. | Provide a detailed description of the following dataset: TyDiQA-GoldP |
WikiReading | WikiReading is a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs). | Provide a detailed description of the following dataset: WikiReading |
Tatoeba | The **Tatoeba** dataset consists of up to 1,000 English-aligned sentence pairs covering 122 languages.
Image Source: [https://arxiv.org/pdf/1812.10464v2.pdf](https://arxiv.org/pdf/1812.10464v2.pdf) | Provide a detailed description of the following dataset: Tatoeba |
DeeperForensics-1.0 | **DeeperForensics-1.0** represents the largest face forgery detection dataset by far, with 60,000 videos constituted by a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. The full dataset includes 48,475 source videos and 11,000 manipulated videos. The source videos are collected from 100 paid and consented actors from 26 countries, and the manipulated videos are generated by a newly proposed many-to-many end-to-end face swapping method, DF-VAE. 7 types of real-world perturbations at 5 intensity levels are employed to ensure a larger scale and higher diversity.
Image Source: [https://github.com/EndlessSora/DeeperForensics-1.0](https://github.com/EndlessSora/DeeperForensics-1.0) | Provide a detailed description of the following dataset: DeeperForensics-1.0 |
WikiSuggest | To collect WikiSuggest, the Google Suggest API is used to harvest natural language questions, which are then submitted to Google Search. Whenever Google Search returns a box with a short answer from Wikipedia, an example is created from the question, the answer, and the Wikipedia document. If the answer string is missing from the document, this often implies a spurious question-answer pair, such as (‘what time is half time in rugby’, ‘80 minutes, 40 minutes’). Question-answer pairs without the exact answer string are pruned. Fifty examples were examined after filtering: 54% were found to be well-formed question-answer pairs whose answers can be grounded in the document, 20% contained answers without textual evidence in the document (the answer string exists in an irrelevant context), and 26% contained incorrect QA pairs. | Provide a detailed description of the following dataset: WikiSuggest |
FineGym | **FineGym** is an action recognition dataset built on top of gymnastics videos. Compared to existing action recognition datasets, FineGym is distinguished in richness, quality, and diversity. In particular, it provides temporal annotations at both action and sub-action levels with a three-level semantic hierarchy. For example, a "balance beam" event will be annotated as a sequence of elementary sub-actions derived from five sets: "leap-jumphop", "beam-turns", "flight-salto", "flight-handspring", and "dismount", where the sub-action in each set will be further annotated with finely defined class labels. This new level of granularity presents significant challenges for action recognition, e.g. how to parse the temporal structures from a coherent action, and how to distinguish between subtly different action classes. | Provide a detailed description of the following dataset: FineGym |
Shmoop Corpus | Shmoop Corpus is a dataset of 231 stories that are paired with detailed multi-paragraph summaries for each individual chapter (7,234 chapters), where the summary is chronologically aligned with respect to the story chapter. From the corpus, a set of common NLP tasks are constructed, including Cloze-form question answering and a simplified form of abstractive summarization, as benchmarks for reading comprehension on stories. | Provide a detailed description of the following dataset: Shmoop Corpus |
BookTest | BookTest is a new dataset similar to the popular Children’s Book Test (CBT), but more than 60 times larger. | Provide a detailed description of the following dataset: BookTest |
MovieNet | **MovieNet** is a holistic dataset for movie understanding. MovieNet contains 1,100 movies with a large amount of multi-modal data, e.g. trailers, photos, plot descriptions, etc. In addition, different aspects of manual annotations are provided in MovieNet, including 1.1M characters with bounding boxes and identities, 42K scene boundaries, 2.5K aligned description sentences, 65K tags of place and action, and 92K tags of cinematic style. | Provide a detailed description of the following dataset: MovieNet |
DREAM | DREAM is a multiple-choice Dialogue-based REAding comprehension exaMination dataset. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding.
DREAM contains 10,197 multiple choice questions for 6,444 dialogues, collected from English-as-a-foreign-language examinations designed by human experts. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge. | Provide a detailed description of the following dataset: DREAM |
MessyTable | **MessyTable** features a large number of scenes with messy tables captured from multiple camera views. Each scene in this dataset is highly complex, containing multiple object instances that could be identical, stacked and occluded by other instances. The key challenge is to associate all instances given the RGB image of all views. The seemingly simple task surprisingly fails many popular methods or heuristics. The dataset challenges existing methods in mining subtle appearance differences, reasoning based on contexts, and fusing appearance with geometric cues for establishing an association.
There are 50,211 images and 5,579 scenes in the dataset. | Provide a detailed description of the following dataset: MessyTable |
MCTest | MCTest is a freely available set of stories and associated questions intended for research on the machine comprehension of text.
MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. | Provide a detailed description of the following dataset: MCTest |
TweetQA | With social media becoming increasingly popular as a place where lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, TweetQA is the first large-scale dataset for QA over social media data. To make sure the tweets are meaningful and contain interesting information, tweets used by journalists to write news articles are gathered. Then human annotators are asked to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, the answers here are allowed to be abstractive. The task requires a model to read a short tweet and a question and output a text phrase (which does not need to be in the tweet) as the answer. | Provide a detailed description of the following dataset: TweetQA |
UCF Sports | The UCF Sports dataset consists of a set of actions collected from various sports which are typically featured on broadcast television channels such as the BBC and ESPN. The video sequences were obtained from a wide range of stock footage websites including BBC Motion Gallery and GettyImages.
The dataset includes a total of 150 sequences with the resolution of 720 x 480. The collection represents a natural pool of actions featured in a wide range of scenes and viewpoints. | Provide a detailed description of the following dataset: UCF Sports |
MSRA-B | The MSRA-B dataset is a dataset for salient object detection. It contains 5,000 images with a variety of image contents. Most of the images have a single salient object. There is a large variation among images including natural scenes, animals, indoor, outdoor, etc. | Provide a detailed description of the following dataset: MSRA-B |
VOT2015 | VOT2015 is a visual object tracking dataset. The dataset comprises 60 short sequences showing various objects in challenging backgrounds. The sequences were chosen from a large pool of sequences from different sources. | Provide a detailed description of the following dataset: VOT2015 |
VOT2014 | The dataset comprises 25 short sequences showing various objects in challenging backgrounds. Eight sequences are from the VOT2013 challenge (bolt, bicycle, david, diving, gymnastics, hand, sunshade, woman). The new sequences show complementary objects and backgrounds, for example a fish underwater or a surfer riding a big wave. The sequences were chosen from a large pool of sequences using a methodology based on clustering visual features of object and background so that those 25 sequences sample evenly well the existing pool. | Provide a detailed description of the following dataset: VOT2014 |
UCF50 | UCF50 is an action recognition dataset with 50 action categories, consisting of realistic videos taken from YouTube. This dataset is an extension of the YouTube Action dataset (UCF11), which has 11 action categories.
The action categories collected from YouTube are: Baseball Pitch, Basketball Shooting, Bench Press, Biking, Billiards Shot, Breaststroke, Clean and Jerk, Diving, Drumming, Fencing, Golf Swing, Playing Guitar, High Jump, Horse Race, Horse Riding, Hula Hoop, Javelin Throw, Juggling Balls, Jump Rope, Jumping Jack, Kayaking, Lunges, Military Parade, Mixing Batter, Nun chucks, Playing Piano, Pizza Tossing, Pole Vault, Pommel Horse, Pull Ups, Punch, Push Ups, Rock Climbing Indoor, Rope Climbing, Rowing, Salsa Spins, Skate Boarding, Skiing, Skijet, Soccer Juggling, Swing, Playing Tabla, TaiChi, Tennis Swing, Trampoline Jumping, Playing Violin, Volleyball Spiking, Walking with a Dog, and Yo Yo.
| Provide a detailed description of the following dataset: UCF50 |
MSD | The Million Song Dataset is a freely-available collection of audio features and metadata for a million contemporary popular music tracks.
The core of the dataset is the feature analysis and metadata for one million songs, provided by The Echo Nest. The dataset does not include any audio, only the derived features. Note, however, that sample audio can be fetched from services like 7digital, using [code]( https://github.com/tbertinmahieux/MSongsDB/tree/master/Tasks_Demos/Preview7digital) provided by the authors.
| Provide a detailed description of the following dataset: MSD |
CASME II | The Chinese Academy of Sciences Micro-Expression dataset (CASME II) consists of 255 videos, elicited from 26 participants. The videos are recorded using a Point Grey GRAS-03K2C camera with a frame rate of 200 fps. The average video length is 0.34 s, equivalent to 68 frames. Each video’s emotion label is annotated by two coders, with a reliability of 0.846.
All the images are cropped to 170×140 pixels. The ground-truth information provided by the database includes the emotion state, the action unit, and the onset, apex and offset frame indices. The videos are grouped into seven categories: others (99 videos), disgust (63 videos), happiness (32 videos), repression (27 videos), surprise (25 videos), sadness (7 videos) and fear (2 videos). | Provide a detailed description of the following dataset: CASME II |
UCF-Crime | The UCF-Crime dataset is a large-scale dataset of 128 hours of videos. It consists of 1900 long and untrimmed real-world surveillance videos, with 13 realistic anomalies including Abuse, Arrest, Arson, Assault, Road Accident, Burglary, Explosion, Fighting, Robbery, Shooting, Stealing, Shoplifting, and Vandalism. These anomalies are selected because they have a significant impact on public safety.
This dataset can be used for two tasks. First, general anomaly detection considering all anomalies in one group and all normal activities in another group. Second, for recognizing each of 13 anomalous activities. | Provide a detailed description of the following dataset: UCF-Crime |
VOT2013 | The dataset comprises 16 short sequences showing various objects in challenging backgrounds. The sequences were chosen from a large pool of sequences using a methodology based on clustering visual features of object and background so that those 16 sequences sample evenly well the existing pool. The sequences were annotated by the VOT committee using axis-aligned bounding boxes.
| Provide a detailed description of the following dataset: VOT2013 |
Medical Segmentation Decathlon | The Medical Segmentation Decathlon is a collection of medical image segmentation datasets. It contains a total of 2,633 three-dimensional images collected across multiple anatomies of interest, multiple modalities and multiple sources. Specifically, it contains data for the following body organs or parts: Brain, Heart, Liver, Hippocampus, Prostate, Lung, Pancreas, Hepatic Vessel, Spleen and Colon. | Provide a detailed description of the following dataset: Medical Segmentation Decathlon |
CrossNER | CrossNER is a cross-domain NER (Named Entity Recognition) dataset, a fully-labeled collection of NER data spanning over five diverse domains (Politics, Natural Science, Music, Literature, and Artificial Intelligence) with specialized entity categories for different domains. Additionally, CrossNER also includes unlabeled domain-related corpora for the corresponding five domains. | Provide a detailed description of the following dataset: CrossNER |
11k Hands | A large dataset of human hand images (dorsal and palmar sides) with detailed ground-truth information for gender recognition and biometric identification. | Provide a detailed description of the following dataset: 11k Hands |
2-PM Vessel Dataset | 2-PM Vessel is an open-source volumetric brain vasculature dataset obtained with two-photon microscopy at the Focused Ultrasound Lab, Sunnybrook Research Institute (affiliated with the University of Toronto), by Dr. Alison Burgess, Charissa Poon and Marc Santos. The dataset contains a total of 12 volumetric stacks consisting of images of mouse brain vasculature and tumour vasculature. | Provide a detailed description of the following dataset: 2-PM Vessel Dataset |
Placepedia | **Placepedia** contains 240K places with 35M images from all over the world. Each place is associated with its district, city/town/village, state/province, country, continent, and a large number of diverse photos. Both administrative areas and places have rich side information, e.g. description, population, category, function. In addition, two cleaned subsets (Places-Coarse and Places-Fine) are provided for experiments. | Provide a detailed description of the following dataset: Placepedia |
2WikiMultiHopQA | 2WikiMultiHopQA is a multi-hop question answering dataset that uses both structured and unstructured data. The dataset introduces evidence information containing a reasoning path for multi-hop questions. | Provide a detailed description of the following dataset: 2WikiMultiHopQA |
30MQA | An enormous question-answer pair corpus produced by applying a novel neural network architecture to the Freebase knowledge base to transduce facts into natural language questions. | Provide a detailed description of the following dataset: 30MQA |
360-SOD | 360-SOD contains 500 high-resolution equirectangular images. | Provide a detailed description of the following dataset: 360-SOD |
3D60 | Collects high-quality 360° datasets with ground-truth depth annotations by re-using recently released large-scale 3D datasets and re-purposing them to 360° via rendering. | Provide a detailed description of the following dataset: 3D60 |
3D Hand Pose | **3D Hand Pose** is a multi-view hand pose dataset consisting of color images of hands and different kinds of annotations for each: the bounding box and the 2D and 3D locations of the joints in the hand. | Provide a detailed description of the following dataset: 3D Hand Pose |
3D Ken Burns Dataset | Provides a large-scale synthetic dataset which contains accurate ground truth depth of various photo-realistic scenes. | Provide a detailed description of the following dataset: 3D Ken Burns Dataset |
3DMAD | The 3D Mask Attack Database (3DMAD) is a biometric (face) spoofing database. It currently contains 76500 frames of 17 persons, recorded using Kinect for both real-access and spoofing attacks. Each frame consists of:
- a depth image (640x480 pixels – 1x11 bits)
- the corresponding RGB image (640x480 pixels – 3x8 bits)
- manually annotated eye positions (with respect to the RGB image). | Provide a detailed description of the following dataset: 3DMAD |
3DPeople Dataset | A large-scale synthetic dataset with 2.5 Million photo-realistic images of 80 subjects performing 70 activities and wearing diverse outfits. | Provide a detailed description of the following dataset: 3DPeople Dataset |
3DSeg-8 | The 3DSeg-8 is a collection of several publicly available 3D segmentation datasets from different medical imaging modalities, e.g. magnetic resonance imaging (MRI) and computed tomography (CT), with various scan regions, target organs and pathologies. | Provide a detailed description of the following dataset: 3DSeg-8 |
3D-ZeF | **3D-ZeF** dataset consists of eight sequences with a duration between 15-120 seconds and 1-10 free moving zebrafish. The videos have been annotated with a total of 86,400 points and bounding boxes. | Provide a detailed description of the following dataset: 3D-ZeF |
3RScan | A novel dataset and benchmark, which features 1482 RGB-D scans of 478 environments across multiple time steps. Each scene includes several objects whose positions change over time, together with ground truth annotations of object instances and their respective 6DoF mappings among re-scans. | Provide a detailed description of the following dataset: 3RScan |
4Seasons | 4Seasons is a dataset covering seasonal and challenging perceptual conditions for autonomous driving. | Provide a detailed description of the following dataset: 4Seasons |
A2D2 | Audi Autonomous Driving Dataset (A2D2) consists of simultaneously recorded images and 3D point clouds, together with 3D bounding boxes, semantic segmentation, instance segmentation, and data extracted from the automotive bus. | Provide a detailed description of the following dataset: A2D2 |
A*3D | The **A*3D** dataset is a step forward to make autonomous driving safer for pedestrians and the public in the real world.
Characteristics:
* 230K human-labeled 3D object annotations in 39,179 LiDAR point cloud frames and corresponding frontal-facing RGB images.
* Captured at different times (day, night) and weather conditions (sun, cloud, rain).
Source: [https://github.com/I2RDL2/ASTAR-3D](https://github.com/I2RDL2/ASTAR-3D)
Image Source: [https://github.com/I2RDL2/ASTAR-3D](https://github.com/I2RDL2/ASTAR-3D) | Provide a detailed description of the following dataset: A*3D |
Aachen Day-Night | **Aachen Day-Night** is a dataset designed for benchmarking 6DOF outdoor visual localization in changing conditions. It focuses on localizing high-quality night-time images against a day-time 3D model. There are 14,607 images with changing conditions of weather, season and day-night cycles. | Provide a detailed description of the following dataset: Aachen Day-Night |
AADB | Contains aesthetic scores and meaningful attributes assigned to each image by multiple human raters. | Provide a detailed description of the following dataset: AADB |
AAVE/SAE Paired Dataset | AAVE/SAE Paired Dataset contains 2,019 intent-equivalent AAVE/SAE pairs. The AAVE (African-American Vernacular English) samples are sampled from Blodgett et al. (2016)'s TwitterAAE, with their corresponding SAE (Standard American English) samples annotated by Amazon MTurk. | Provide a detailed description of the following dataset: AAVE/SAE Paired Dataset |
ABC Dataset | The **ABC Dataset** is a collection of one million Computer-Aided Design (CAD) models for research of geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows generating data in different formats and resolutions, enabling fair comparisons for a wide range of geometric learning algorithms. | Provide a detailed description of the following dataset: ABC Dataset |
ACL ARC | ACL Anthology Reference Corpus (ACL ARC) is a collection of 10,920 academic papers from the ACL Anthology. ACL ARC is cleaned to remove:
- files that do not look like full papers: paper fragments, foreign-language papers (e.g., French), or pure junk.
- headers (title and author information; NOT abstract).
- footers ("References" line and the actual references).
- some bad characters (spurious characters).
- some page numbers (i.e., a single number appearing on a line, with nothing else attached to it).
- significant foreign-language (e.g., French) content in an otherwise English paper.
The cleaned corpus has 10,628 documents. | Provide a detailed description of the following dataset: ACL ARC |
ACRONYM | A dataset for robot grasp planning based on physics simulation. The dataset contains 17.7M parallel-jaw grasps, spanning 8872 objects from 262 different categories, each labeled with the grasp result obtained from a physics simulator. | Provide a detailed description of the following dataset: ACRONYM |
Acronym Identification | Is an acronym disambiguation (AD) dataset for the scientific domain with 62,441 samples, which is significantly larger than the previous scientific AD dataset. | Provide a detailed description of the following dataset: Acronym Identification |
ActioNet | **ActioNet** is a video task-based dataset collected in a synthetic 3D environment. It contains 3,038 annotated videos and hierarchical task structures over 65 individual household tasks from 120 different scenes. Each task is annotated across three to five different scenes by 10 different annotators. The tasks can be broken down into four categories: living room, bedroom, bathroom, kitchen. | Provide a detailed description of the following dataset: ActioNet |
ActivityNet Entities | ActivityNet-Entities augments the challenging ActivityNet Captions dataset with 158k bounding box annotations, each grounding a noun phrase. This allows training video description models with this data and, importantly, evaluating how grounded or "true" such models are to the video they describe.
Source: [https://github.com/facebookresearch/ActivityNet-Entities](https://github.com/facebookresearch/ActivityNet-Entities)
Image Source: [https://github.com/facebookresearch/ActivityNet-Entities](https://github.com/facebookresearch/ActivityNet-Entities) | Provide a detailed description of the following dataset: ActivityNet Entities |
ActivityNet-QA | The ActivityNet-QA dataset contains 58,000 human-annotated QA pairs on 5,800 videos derived from the popular ActivityNet dataset. The dataset provides a benchmark for testing the performance of VideoQA models on long-term spatio-temporal reasoning. | Provide a detailed description of the following dataset: ActivityNet-QA |
ActivityNet Thumbnails | Consists of 10,000+ video-sentence pairs, each accompanied by an annotated, sentence-specified video thumbnail. | Provide a detailed description of the following dataset: ActivityNet Thumbnails |
ADHA | ADHA: “Adverbs Describing Human Actions” is the first benchmark for a new problem — recognizing human action adverbs (HAA). This is the first step for computer vision to change over from pattern recognition to real AI. Some key features of ADHA are: a semantically complete set of adverbs describing human actions, a set of common, describable human actions, and an exhaustive labeling of simultaneously emerging actions in each video. | Provide a detailed description of the following dataset: ADHA |
ADL Piano MIDI | The **ADL Piano MIDI** is a dataset of 11,086 piano pieces from different genres. This dataset is based on the Lakh MIDI dataset, which is a collection of 45,129 unique MIDI files that have been matched to entries in the Million Song Dataset. Most pieces in the Lakh MIDI dataset have multiple instruments, so for each file the authors of the ADL Piano MIDI dataset extracted only the tracks with instruments from the "Piano Family" (MIDI program numbers 1-8). This process generated a total of 9,021 unique piano MIDI files. These 9,021 files were then combined with approximately 2,065 other files scraped from publicly available sources on the internet. All the files in the final collection were de-duped according to their MD5 checksum. | Provide a detailed description of the following dataset: ADL Piano MIDI |
Advice Seeking Questions | The Advice-Seeking Questions (ASQ) dataset is a collection of personal narratives with advice-seeking questions. The dataset has been split into train, test, and heldout sets, with 8,865, 2,500, and 10,000 instances, respectively. This dataset is used to train and evaluate methods that can infer the advice-seeking goal behind a personal narrative. This task is formulated as a cloze test, where the goal is to identify which of two advice-seeking questions was removed from a given narrative.
Source: [https://github.com/CornellNLP/ASQ](https://github.com/CornellNLP/ASQ) | Provide a detailed description of the following dataset: Advice Seeking Questions |
ADVIO | Provides a wide range of raw sensor data that is accessible on almost any modern-day smartphone together with a high-quality ground-truth track. | Provide a detailed description of the following dataset: ADVIO |
AeroRIT | AeroRIT is a hyperspectral dataset to facilitate aerial hyperspectral scene understanding. | Provide a detailed description of the following dataset: AeroRIT |
AESLC | To study the task of email subject line generation: automatically generating an email subject line from the email body. | Provide a detailed description of the following dataset: AESLC |
Aesthetics Text Corpus | An exhaustive list of stop lemmas created from 12 corpora across multiple domains, consisting of over 13 million words, from which more than 200,000 lemmas were generated, and 11 publicly available stop word lists comprising over 1000 words, from which nearly 400 unique lemmas were generated. | Provide a detailed description of the following dataset: Aesthetics Text Corpus |
Affective Text | Affective Text (Test Corpus of SemEval 2007) by [Carlo Strapparava & Rada Mihalcea](https://www.aclweb.org/anthology/S07-1013/). | Provide a detailed description of the following dataset: Affective Text |
Aff-Wild | Aff-Wild is a dataset for emotion recognition from facial images in a variety of head poses, illumination conditions and occlusions. | Provide a detailed description of the following dataset: Aff-Wild |
Aff-Wild2 | Aff-Wild2 is an extension of the Aff-Wild dataset for affect recognition. It approximately doubles the number of included video frames and the number of subjects; thus, improving the variability of the included behaviors and of the involved persons. | Provide a detailed description of the following dataset: Aff-Wild2 |
AfroMNIST | A set of synthetic MNIST-style datasets for four orthographies used in Afro-Asiatic and Niger-Congo languages: Ge'ez (Ethiopic), Vai, Osmanya, and N'Ko. These datasets serve as "drop-in" replacements for MNIST. | Provide a detailed description of the following dataset: AfroMNIST |
Agriculture-Vision | A large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns. Collects 94,986 high-quality aerial images from 3,432 farmlands across the US, where each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel. | Provide a detailed description of the following dataset: Agriculture-Vision |
AGRR-2019 | Consists of 7.5k sentences with gapping (as well as 15k relevant negative sentences) and comprises data from various genres: news, fiction, social media and technical texts. The dataset was prepared for the Automatic Gapping Resolution Shared Task for Russian (AGRR-2019) - a competition aimed at stimulating the development of NLP tools and methods for processing of ellipsis. | Provide a detailed description of the following dataset: AGRR-2019 |
AI2D-RST | AI2D-RST is a multimodal corpus of 1000 English-language diagrams that represent topics in primary school natural sciences, such as food webs, life cycles, moon phases and human physiology. The corpus is based on the Allen Institute for Artificial Intelligence Diagrams (AI2D) dataset, a collection of diagrams with crowd-sourced descriptions, which was originally developed to support research on automatic diagram understanding and visual question answering. | Provide a detailed description of the following dataset: AI2D-RST |
AIDER | Dataset aimed at automated aerial scene classification of disaster events from on board a UAV. | Provide a detailed description of the following dataset: AIDER |
AIRS | The **AIRS** (Aerial Imagery for Roof Segmentation) dataset provides a wide coverage of aerial imagery with 7.5 cm resolution and contains over 220,000 buildings. The task posed for AIRS is defined as roof segmentation. | Provide a detailed description of the following dataset: AIRS |
AirSim | **AirSim** is a simulator for drones, cars and more, built on Unreal Engine. It is open-source, cross platform, and supports software-in-the-loop simulation with popular flight controllers such as PX4 & ArduPilot and hardware-in-loop with PX4 for physically and visually realistic simulations. It is developed as an Unreal plugin that can simply be dropped into any Unreal environment. Similarly, there exists an experimental version for a Unity plugin. | Provide a detailed description of the following dataset: AirSim |
AISHELL-1 | AISHELL-1 is a corpus for speech recognition research and building speech recognition systems for Mandarin. | Provide a detailed description of the following dataset: AISHELL-1 |
AISHELL-2 | AISHELL-2 contains 1000 hours of clean read-speech data from iOS devices and is free for academic usage. | Provide a detailed description of the following dataset: AISHELL-2 |
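Every row above follows the three-column schema declared in the table header: `dataset_name`, `description`, and `prompt`, where each prompt asks for a detailed description of the named dataset and the description column holds the reference text. Below is a minimal sketch (not an official loader), assuming the table has been exported to a hypothetical CSV file named `dataset_descriptions.csv` with those three columns, showing how the rows could be iterated with pandas:
```python
# Minimal sketch, assuming a hypothetical CSV export of the table above
# with the columns dataset_name, description, prompt.
import pandas as pd

df = pd.read_csv("dataset_descriptions.csv")  # hypothetical file name

# Check that the export matches the schema declared in the table header.
assert list(df.columns) == ["dataset_name", "description", "prompt"]

for _, row in df.head(3).iterrows():
    # The prompt asks for a detailed description of the named dataset;
    # the description column is the corresponding reference text.
    print(row["prompt"])
    print(row["description"][:200])
    print("-" * 40)
```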