Columns: dataset_name (string, 2-128 characters), description (string, 1-9.7k characters), prompt (string, 59-185 characters)
AV Digits Database
AV Digits Database is an audiovisual database which contains normal, whispered and silent speech. 53 participants were recorded from 3 different views (frontal, 45° and profile) pronouncing digits and phrases in three speech modes. The database consists of two parts: digits and short phrases. In the first part, participants were asked to read 10 digits, from 0 to 9, in English in random order five times. In the case of non-native English speakers, this part was also repeated in the participant’s native language. In total, 53 participants (41 males and 12 females) from 16 nationalities were recorded, with a mean age and standard deviation of 26.7 and 4.3 years, respectively. In the second part, participants were asked to read 10 short phrases. The phrases are the same as the ones used in the OuluVS2 database: “Excuse me”, “Goodbye”, “Hello”, “How are you”, “Nice to meet you”, “See you”, “I am sorry”, “Thank you”, “Have a good time”, “You are welcome”. Again, each phrase was repeated five times in three different modes: neutral, whispered and silent speech. Thirty-nine participants (32 males and 7 females) were recorded for this part, with a mean age and standard deviation of 26.3 and 3.8 years, respectively.
Provide a detailed description of the following dataset: AV Digits Database
Fabrics Dataset
The Fabrics Dataset consists of about 2000 samples of garments and fabrics. A small patch of each surface has been captured under 4 different illumination conditions using a custom-made, portable photometric stereo sensor. All images have been acquired "in the field" (at clothes shops), and the dataset reflects the distribution of fabrics in the real world, hence it is not balanced. The majority of clothes are made of specific fabrics, such as cotton and polyester, while other fabrics, such as silk and linen, are rarer. Also, a large number of clothes are not composed of a single fabric; two or more fabrics are used to give the garment the desired properties (blended fabrics). For every garment there is information (attributes) about its material composition from the manufacturer label and its type (pants, shirt, skirt, etc.).
Provide a detailed description of the following dataset: Fabrics Dataset
MobiFace
MobiFace is the first dataset for single face tracking in mobile situations. It consists of 80 unedited live-streaming mobile videos captured by 70 different smartphone users in fully unconstrained environments. Over 95K bounding boxes are manually labelled. The videos are carefully selected to cover typical smartphone usage. The videos are also annotated with 14 attributes, including 6 newly proposed attributes and 8 commonly seen in object tracking.
Provide a detailed description of the following dataset: MobiFace
LSFM
The Large Scale Facial Model (LSFM) is a 3D statistical model of facial shape built from nearly 10,000 individuals.
Provide a detailed description of the following dataset: LSFM
FaceScape
The FaceScape dataset provides large-scale, high-quality 3D face models, parametric models and multi-view images. The camera parameters and the age and gender of the subjects are also included. The data have been released to the public for non-commercial research purposes.
Provide a detailed description of the following dataset: FaceScape
AgeDB
AgeDB contains 16,488 images of various famous people, such as actors/actresses, writers, scientists, politicians, etc. Every image is annotated with respect to the identity, age and gender attributes. There are a total of 568 distinct subjects. The average number of images per subject is 29. The minimum and maximum ages are 1 and 101, respectively. The average age range for each subject is 50.3 years.
Provide a detailed description of the following dataset: AgeDB
AFEW-VA
The AFEW-VA database is a collection of highly accurate per-frame annotations of valence and arousal levels, along with per-frame annotations of 68 facial landmarks, for 600 challenging video clips. These clips are extracted from feature films and were also annotated in terms of discrete emotion categories as part of the AFEW database (which can be obtained [here](https://cs.anu.edu.au/few/AFEW.html)).
Provide a detailed description of the following dataset: AFEW-VA
KILT
**KILT** (**Knowledge Intensive Language Tasks**) is a benchmark consisting of 11 datasets representing 5 types of tasks:
* Fact-checking (FEVER)
* Entity linking (AIDA CoNLL-YAGO, WNED-WIKI, WNED-CWEB)
* Slot filling (T-Rex, Zero Shot RE)
* Open domain QA (Natural Questions, HotpotQA, TriviaQA, ELI5)
* Dialog generation (Wizard of Wikipedia)

All these datasets have been grounded in a single pre-processed wikipedia snapshot, allowing for fairer and more consistent evaluation as well as enabling new task setups such as multitask and transfer learning.
Provide a detailed description of the following dataset: KILT
SOREL-20M
SOREL-20M is a large-scale dataset consisting of nearly 20 million files with pre-extracted features and metadata, high-quality labels derived from multiple sources, information about vendor detections of the malware samples at the time of collection, and additional “tags” related to each malware sample to serve as additional targets.
Provide a detailed description of the following dataset: SOREL-20M
Relational Pattern Similarity Dataset
The relational pattern similarity dataset is a new dataset built upon the work of Zeichner et al. (2012); it consists of relational patterns annotated with semantic inference labels. The dataset includes 5,555 pairs extracted by Reverb (Fader et al., 2011): 2,447 pairs with an inference relation and 3,108 pairs (the rest) without one.
Provide a detailed description of the following dataset: Relational Pattern Similarity Dataset
PHM2017
PHM2017 is a new dataset consisting of 7,192 English tweets across six diseases and conditions: Alzheimer’s Disease, heart attack (any severity), Parkinson’s disease, cancer (any type), Depression (any severity), and Stroke. The Twitter search API was used to retrieve the data using the colloquial disease names as search keywords, with the expectation of retrieving a high-recall, low-precision dataset. After removing the re-tweets and replies, the tweets were manually annotated. The labels are:
- self-mention. The tweet contains a health mention with a health self-report of the Twitter account owner, e.g., "However, I worked hard and ran for Tokyo Mayer Election Campaign in January through February, 2014, without publicizing the cancer."
- other-mention. The tweet contains a health mention of a health report about someone other than the account owner, e.g., "Designer with Parkinson’s couldn’t work then engineer invents bracelet + changes her world"
- awareness. The tweet contains the disease name, but does not mention a specific person, e.g., "A Month Before a Heart Attack, Your Body Will Warn You With These 8 Signals"
- non-health. The tweet contains the disease name, but the tweet topic is not about health, e.g., "Now I can have cancer on my wall for all to see <3"
Provide a detailed description of the following dataset: PHM2017
ORVS
The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary. This dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary, Canada. All images were acquired with a Zeiss Visucam 200 with a 30-degree field of view (FOV). The image size is 1444×1444 with 24 bits per pixel. Images are stored in JPEG format with low compression, which is common in ophthalmology practice. All images were manually traced by an expert who has been working in the field of retinal-image analysis and went through training. The expert was asked to label all pixels belonging to retinal vessels. The Windows Paint 3D tool was used to manually label the images.
Provide a detailed description of the following dataset: ORVS
DR HAGIS
The DR HAGIS database has been created to aid the development of vessel extraction algorithms suitable for retinal screening programmes. Researchers are encouraged to test their segmentation algorithms using this database. All thirty-nine fundus images were obtained from a diabetic retinopathy screening programme in the UK; hence, all images were taken from diabetic patients. Since patients attending these screening programmes exhibit other co-morbidities, the DR HAGIS database consists of the following four co-morbidity subgroups:
- Images 1-10: Glaucoma subgroup
- Images 11-20: Hypertension subgroup
- Images 21-30: Diabetic retinopathy subgroup
- Images 31-40: Age-related macular degeneration subgroup

Besides the fundus images, the manual segmentation of the retinal surface vessels is provided by an expert grader. These manually segmented images can be used as the ground truth to compare and assess automatic vessel extraction algorithms. Masks of the FOV are provided as well to quantify the accuracy of vessel extraction within the FOV only. The images were acquired in different screening centers, therefore reflecting the range of image resolutions, digital cameras and fundus cameras used in the clinic. The fundus images were captured using a Topcon TRC-NW6s, Topcon TRC-NW8 or a Canon CR DGi fundus camera with a horizontal 45-degree field of view (FOV). The images are 4752x3168, 3456x2304, 3126x2136, 2896x1944 or 2816x1880 pixels in size. The fundus images are saved as compressed JPEG files with 8 bits per colour plane. The ground truth and mask images are saved as binary PNG files.
Provide a detailed description of the following dataset: DR HAGIS
ARIA
This data set was collected between 2004 and 2006 in the United Kingdom. Subjects were adult males and females, some of whom were healthy (control group), some with age-related macular degeneration (AMD group), and some diabetic patients (diabetic group). Unfortunately, no other information about these subjects from this time exists.
Provide a detailed description of the following dataset: ARIA
VICAVR
The VICAVR database is a set of retinal images used for the computation of the A/V Ratio. The database currently includes 58 images. The images have been acquired with a TopCon non-mydriatic camera NW-100 model and are optic disc centered with a resolution of 768x584. The database includes the caliber of the vessels measured at different radii from the optic disc as well as the vessel type (artery/vein) labelled by three experts.
Provide a detailed description of the following dataset: VICAVR
OCTAGON
The OCTAGON dataset is a set of Optical Coherence Tomography Angiography (OCT-A) images used for the segmentation of the Foveal Avascular Zone (FAZ). The dataset includes 144 healthy OCT-A images and 69 diabetic OCT-A images, divided into four groups containing 36 and about 17 OCT-A images each, respectively. These groups are: 3x3 superficial, 3x3 deep, 6x6 superficial and 6x6 deep, where 3x3 and 6x6 are the zoom of the image and superficial/deep is the depth level of the extracted image. The healthy dataset includes OCT-A images from people classified into 6 age ranges: 10-19 years, 20-29 years, 30-39 years, 40-49 years, 50-59 years and 60-69 years. Each age range includes 3 different patients, with information for the left and right eyes of each one. Finally, for each eye, there are four different images: one 3x3 superficial image, one 3x3 deep image, one 6x6 superficial image and one 6x6 deep image. Each healthy OCT-A image has two manual labellings of the FAZ by expert clinicians, together with their quantification, while each diabetic OCT-A image has one manual labelling.
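As a quick check of the composition described above, the healthy-image count follows directly from the stated structure (an illustrative sketch, not part of the dataset release):

```python
# Illustrative arithmetic for the healthy subset described above.
age_ranges = 6          # 10-19 ... 60-69 years
patients_per_range = 3
eyes_per_patient = 2    # left and right eye
images_per_eye = 4      # 3x3 superficial, 3x3 deep, 6x6 superficial, 6x6 deep

healthy_images = age_ranges * patients_per_range * eyes_per_patient * images_per_eye
assert healthy_images == 144  # matches the 144 healthy OCT-A images stated above
```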
Provide a detailed description of the following dataset: OCTAGON
CLOUD
The CLOUD dataset is a set of Anterior Segment Optical Coherence Tomography (AS-OCT) images used for the automatic identification and representation of the cornea-contact lens relationship. The dataset includes 112 AS-OCT images that were captured from 16 different patients. In particular, the images were obtained with a Carl Zeiss Meditec OCT Cirrus 500 scanner with an anterior segment module, for users of scleral contact lenses (SCL).
Provide a detailed description of the following dataset: CLOUD
MESSIDOR
The Messidor database has been established to facilitate studies on computer-assisted diagnoses of diabetic retinopathy. The research community is welcome to test its algorithms on this database.
Provide a detailed description of the following dataset: MESSIDOR
DIARETDB1
The database consists of 89 colour fundus images, of which 84 contain at least mild non-proliferative signs (microaneurysms) of diabetic retinopathy and 5 are considered normal, containing no signs of diabetic retinopathy according to all experts who participated in the evaluation. Images were captured using the same 50-degree field-of-view digital fundus camera with varying imaging settings. The data correspond to a good (not necessarily typical) practical situation, where the images are comparable and can be used to evaluate the general performance of diagnostic methods. This data set is referred to as "calibration level 1 fundus images".
Provide a detailed description of the following dataset: DIARETDB1
UDA-CH
UDA-CH contains 16 objects that cover a variety of artworks that can be found in a museum, such as sculptures, paintings and books. Specifically, the dataset has been collected inside the cultural site “Galleria Regionale di Palazzo Bellomo” located in Siracusa, Italy.
Provide a detailed description of the following dataset: UDA-CH
EGO-CH
EGO-CH is a dataset of egocentric videos for visitors’ behavior understanding. The dataset has been collected in two different cultural sites and includes more than 27 hours of video acquired by 70 subjects, including volunteers and 60 real visitors. The overall dataset includes labels for 26 environments and over 200 Points of Interest (POIs). Specifically, each video of EGO-CH has been annotated with 1) temporal labels specifying the current location of the visitor and the observed POI, 2) bounding box annotations around POIs. A large subset of the dataset, consisting of 60 videos, is also associated with surveys filled out by the visitors at the end of each visit.
Provide a detailed description of the following dataset: EGO-CH
MAP
**Maybe Ambiguous Pronoun** is a dataset similar to [GAP](/dataset/gap-coreference-dataset) dataset, but without binary gender constraints.
Provide a detailed description of the following dataset: MAP
GICoref
GICoref is a fully annotated coreference resolution dataset written by and about trans people.
Provide a detailed description of the following dataset: GICoref
NAF
This dataset was created with images provided by the United States National Archive and FamilySearch. The goal of this data is to capture relationships between text/handwriting entities on form images. It will include transcriptions in the future, but doesn't currently. The form images are organized into "groups", each group containing images of the same form type.
Provide a detailed description of the following dataset: NAF
ImageNet-P
**ImageNet-P** consists of noise, blur, weather, and digital distortions. The dataset has validation perturbations; has difficulty levels; has CIFAR-10, Tiny ImageNet, ImageNet 64 × 64, standard, and Inception-sized editions; and has been designed for benchmarking, not training, networks. ImageNet-P departs from ImageNet-C by having perturbation sequences generated from each ImageNet validation image. Each sequence contains more than 30 frames, so to counteract an increase in dataset size and evaluation time, only 10 common perturbations are used.
Provide a detailed description of the following dataset: ImageNet-P
Combinatorial 3D Shape Dataset
The combinatorial 3D shape dataset is composed of 406 instances of 14 classes. Specifically, each object in the dataset is considered equivalent to a sequence of primitive placements.
Provide a detailed description of the following dataset: Combinatorial 3D Shape Dataset
AI2D
AI2 Diagrams (AI2D) is a dataset of over 5,000 grade school science diagrams with over 150,000 rich annotations, their ground truth syntactic parses, and more than 15,000 corresponding multiple choice questions.
Provide a detailed description of the following dataset: AI2D
Chart2Text
Chart2Text is a dataset that was crawled from 23,382 freely accessible pages from statista.com in early March of 2020, yielding a total of 8,305 charts and associated summaries. For each chart, the chart image, the underlying data table, the title, the axis labels, and a human-written summary describing the statistic were downloaded.
Provide a detailed description of the following dataset: Chart2Text
DENSE
DENSE (Depth Estimation oN Synthetic Events) is a new dataset with synthetic events and perfect ground truth.
Provide a detailed description of the following dataset: DENSE
PixelShift200
Advanced pixel shift technology is employed to perform full color sampling of the image. Pixel shift technology takes four samples of the same image at nearly the same time, physically moving the camera sensor by one pixel horizontally or vertically at each sampling to capture all color information at each pixel. The pixel shift technology ensures that the sampled images follow the distribution of natural images sampled by the camera, and the full color information (R, Gr, Gb, B channels) is obtained completely without any need for interpolation. In this way, the collected RGB images are artifact-free, which leads to better training results for demosaicing-related tasks. The PixelShift200 dataset contains 210 high-quality 4K images.
- Training: 200 images
- Testing: 10 images
- Key features: fully colored, free of demosaicing artifacts
- Camera: SONY α7R III
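To illustrate the sampling scheme described above, the sketch below merges four one-pixel-shifted RGGB captures into fully sampled colour planes; the function name, the shift ordering and the RGGB convention are assumptions for illustration, not the released PixelShift200 pipeline.

```python
import numpy as np

def merge_pixel_shift(raw_frames, shifts=((0, 0), (0, 1), (1, 0), (1, 1))):
    """Merge four one-pixel-shifted RGGB Bayer captures into fully sampled planes.

    raw_frames: four HxW raw mosaics; shifts: (row, col) sensor offset per capture
    (an illustrative convention -- the exact order and sign depend on the camera).
    Returns an HxWx4 array with R, Gr, Gb, B measured at every pixel, no demosaicing.
    """
    h, w = raw_frames[0].shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    out = np.zeros((h, w, 4), dtype=np.float32)
    for frame, (dr, dc) in zip(raw_frames, shifts):
        # Colour-filter index seen by each scene pixel in this shifted capture:
        # RGGB layout -> 2 * (row parity) + (col parity), i.e. 0=R, 1=Gr, 2=Gb, 3=B.
        plane = ((rows + dr) % 2) * 2 + ((cols + dc) % 2)
        for p in range(4):
            out[..., p] = np.where(plane == p, frame, out[..., p])
    return out
```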
Provide a detailed description of the following dataset: PixelShift200
VLEP
VLEP contains 28,726 future event prediction examples (along with their rationales) from 10,234 diverse TV Show and YouTube Lifestyle Vlog video clips. Each example consists of a Premise Event (a short video clip with dialogue), a Premise Summary (a text summary of the premise event), and two potential natural language Future Events (along with Rationales) written by people. These clips are on average 6.1 seconds long and are harvested from diverse event-rich sources, i.e., TV show and YouTube Lifestyle Vlog videos.
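To make the example layout concrete, a single example could be represented roughly as below; the field names are hypothetical and merely mirror the components described above (premise clip, premise summary, two candidate future events with rationales).

```python
# Hypothetical record layout mirroring the components described above
# (field names are illustrative, not the official VLEP schema).
example = {
    "premise_clip": {"video_id": "clip_00001", "duration_s": 6.1, "dialogue": "..."},
    "premise_summary": "A short text summary of the premise event.",
    "future_events": [
        {"text": "Candidate future event A.", "rationale": "Why A could follow."},
        {"text": "Candidate future event B.", "rationale": "Why B could follow."},
    ],
    "label": 0,  # index of the more likely future event
}
```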
Provide a detailed description of the following dataset: VLEP
Cata7
Cata7 is the first cataract surgical instrument dataset for semantic segmentation. The dataset consists of seven videos, each of which records a complete cataract surgery. All videos are from Beijing Tongren Hospital. Each video is split into a sequence of images with a resolution of 1920×1080 pixels. To reduce redundancy, the videos are downsampled from 30 fps to 1 fps. Also, images without surgical instruments are manually removed. Each image is labeled with precise edges and types of surgical instruments. This dataset contains 2,500 images, which are divided into training and test sets. The training set consists of five video sequences and the test set consists of two video sequences.
Provide a detailed description of the following dataset: Cata7
UCC
The Unhealthy Comments Corpus (UCC) is a corpus of 44,355 comments intended to assist in research on identifying subtle attributes which contribute to unhealthy conversations online. Each comment is labelled as either 'healthy' or 'unhealthy', in addition to binary labels for the presence of six potentially 'unhealthy' sub-attributes: (1) hostile; (2) antagonistic, insulting, provocative or trolling; (3) dismissive; (4) condescending or patronising; (5) sarcastic; and/or (6) an unfair generalisation. Each label also has an associated confidence score. The UCC contributes further high-quality data on attributes like sarcasm, hostility, and condescension, adding to existing datasets on these and related attributes, and provides the first dataset of this scale with labels for dismissiveness, unfair generalisations, antagonistic behavior, and overall assessments of whether those comments fall within 'healthy' conversation.
Provide a detailed description of the following dataset: UCC
Satire Dataset
The satire dataset is a new multi-modal dataset of satirical and regular news articles. The satirical news is collected from four websites that explicitly declare themselves to be satire, and the regular news is collected from six mainstream news websites. Specifically, the satirical news websites the articles were collected from are The Babylon Bee, Clickhole, Waterford Whisper News, and The DailyER. The regular news websites are Reuters, The Hill, Politico, New York Post, Huffington Post, and Vice News. The headlines and the thumbnail images of the latest 1000 articles for each of the publications are collected. The dataset contains a total of 4000 satirical and 6000 regular news articles.
Provide a detailed description of the following dataset: Satire Dataset
Headcam
This dataset contains panoramic video captured from a helmet-mounted camera while riding a bike through suburban Northern Virginia.
Provide a detailed description of the following dataset: Headcam
OCNLI
OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for Chinese Natural Language Inference, collected by closely following the procedures of MNLI, but with enhanced strategies aiming for more challenging inference pairs. No human/machine translation is used in creating the dataset, so the Chinese texts are original and not translated. OCNLI has roughly 50k pairs for training, 3k for development and 3k for test. Only the test data is released, but not its labels. OCNLI is part of the CLUE benchmark.
Provide a detailed description of the following dataset: OCNLI
QReCC
QReCC contains 14K conversations with 81K question-answer pairs. QReCC is built on questions from TREC CAsT, QuAC and Google Natural Questions. While the TREC CAsT and QuAC datasets contain multi-turn conversations, Natural Questions is not a conversational dataset. Questions in the NQ dataset were used as prompts to create conversations, explicitly balancing types of context-dependent questions such as anaphora (co-references) and ellipsis. For each query, the authors collect query rewrites by resolving references; the resulting query rewrite is a context-independent version of the original (context-dependent) question. The rewritten query is then used with a search engine to answer the question. Each query is also annotated with an answer and a link to the web page that was used to produce the answer. Each conversation in the dataset contains a unique Conversation_no, a Turn_no unique within a conversation, the original Question, the Context, the Rewrite and the Answer with Answer_URL.
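Based on the fields named above (Conversation_no, Turn_no, Question, Context, Rewrite, Answer, Answer_URL), a single turn can be pictured roughly as the record below; the values are invented for illustration and do not come from the dataset.

```python
# Illustrative QReCC-style turn using the fields named above (values are made up).
turn = {
    "Conversation_no": 42,                 # identifies the conversation
    "Turn_no": 3,                          # unique within the conversation
    "Question": "When was it first released?",          # context-dependent question
    "Context": ["Who created Python?", "Guido van Rossum."],  # previous turns
    "Rewrite": "When was Python first released?",       # context-independent rewrite
    "Answer": "Python was first released in 1991.",
    "Answer_URL": "https://example.org/source-page",    # placeholder URL
}
```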
Provide a detailed description of the following dataset: QReCC
PHD²
The dataset contains information on what video segments a specific user considers a highlight. Having this kind of data allows for strong personalization models, as specific examples of what a user is interested in help models obtain a fine-grained understanding of that specific user. The data consists of YouTube videos, from which gifs.com users manually extracted their highlights by creating GIFs from a segment of the full video. Thus, the dataset is similar to PHD-GIFS, with two major differences.
- Each selection is associated with a user, which is what allows personalization.
- Instead of visual matching to find the position in the video from which a GIF was selected (as in PHD-GIFS), the timestamps are used. Thus, the ground truth is free from any alignment errors.

The training set contains highlights from 12,972 users. The test set contains highlights from 850 users.
Provide a detailed description of the following dataset: PHD²
Video2GIF
The **Video2GIF** dataset contains over 100,000 pairs of GIFs and their source videos. The GIFs were collected from two popular GIF websites (makeagif.com, gifsoup.com) and the corresponding source videos were collected from YouTube in Summer 2015. IDs and URLs of the GIFs and the videos are provided, along with temporal alignment of GIF segments to their source videos. The dataset shall be used to evaluate GIF creation and video highlight techniques. In addition to the 100K GIF-video pairs, the dataset contains 357 pairs of GIFs and their source videos as the test set. The 357 videos come with a Creative Commons CC-BY license, which allowed the authors to redistribute the material with appropriate credit to make the results on test set reproducible even when some of the videos become unavailable.
Provide a detailed description of the following dataset: Video2GIF
VAST
VAST consists of a large range of topics covering broad themes, such as politics (e.g., ‘a Palestinian state’), education (e.g., ‘charter schools’), and public health (e.g., ‘childhood vaccination’). In addition, the data includes a wide range of similar expressions (e.g., ‘guns on campus’ versus ‘firearms on campus’). This variation captures how humans might realistically describe the same topic and contrasts with the lack of variation in existing datasets.
Provide a detailed description of the following dataset: VAST
Silent Speech EMG
Facial electromyography recordings during both silent and vocalized speech.
Provide a detailed description of the following dataset: Silent Speech EMG
SMOT
The SMOT dataset, Single sequence-Multi Objects Training, is collected to represent a practical scenario of collecting training images of new objects in the real world, i.e. a mobile robot with an RGB-D camera collects a sequence of frames while driving around a table to learn multiple objects, and then tries to recognize the objects in different locations.
Provide a detailed description of the following dataset: SMOT
3DNet
The 3DNet dataset is a free resource for object class recognition and 6DOF pose estimation from point cloud data. 3DNet provides large-scale hierarchical CAD-model databases of increasing numbers of classes and difficulty, with 10, 60 and 200 object classes, together with evaluation datasets that contain thousands of scenes captured with an RGB-D sensor.
Provide a detailed description of the following dataset: 3DNet
ARID
ARID is a large-scale, multi-view object dataset collected with an RGB-D camera mounted on a mobile robot.
Provide a detailed description of the following dataset: ARID
OCID
Developing robot perception systems for handling objects in the real world requires computer vision algorithms to be carefully scrutinized with respect to the expected operating domain. This demands large quantities of ground truth data to rigorously evaluate the performance of algorithms. The Object Cluttered Indoor Dataset (OCID) is an RGB-D dataset containing point-wise labeled point clouds for each object. The data was captured using two ASUS-PRO Xtion cameras positioned at different heights. It captures diverse settings of objects, background, context, sensor-to-scene distance, viewpoint angle and lighting conditions. The main purpose of OCID is to allow systematic comparison of existing object segmentation methods in scenes with an increasing amount of clutter. In addition, OCID also provides ground-truth data for other vision tasks like object classification and recognition.
Provide a detailed description of the following dataset: OCID
LfED-6D
The LfED-6D dataset is a collection of 6D grasp annotations acquired through experience (with a robot platform) or by human demonstration. For known objects, the annotated grasps can be directly applied given the pose of the object model is correctly computed. For unknown objects, the grasps can be generalized using methods for shape matching, for example the Dense Geometrical Correspondence Network.
Provide a detailed description of the following dataset: LfED-6D
NYU-VP
NYU-VP is a new dataset for multi-model fitting, vanishing point (VP) estimation in this case. Each image is annotated with up to eight vanishing points, and pre-extracted line segments are provided which act as data points for a robust estimator. Due to its size, the dataset is the first to allow for supervised learning of a multi-model fitting task.
Provide a detailed description of the following dataset: NYU-VP
YUD+
YUD+ is a dataset containing additional Vanishing Point Labels for the [York Urban Database](https://paperswithcode.com/dataset/york-urban-line-segment-database).
Provide a detailed description of the following dataset: YUD+
NText
NText is an eight-million-word dataset extracted and preprocessed from nuclear research papers and theses.
Provide a detailed description of the following dataset: NText
NQuAD
NQuAD is a Nuclear Question Answering Dataset, which contains 700+ nuclear question-answer pairs developed and verified by expert nuclear researchers.
Provide a detailed description of the following dataset: NQuAD
Indoor and outdoor DFD dataset
The dfd_indoor dataset contains 110 images for training and 29 images for testing. The dfd_outdoor dataset contains 34 images for tests; no ground truth was given for this dataset, as the depth sensor only works on indoor scenes.
Provide a detailed description of the following dataset: Indoor and outdoor DFD dataset
Lorenz Dataset
The Lorenz dataset contains 100,000 time series of length 24. The data has 5 modes and is obtained using the Lorenz equation with 5 different seed values.
Provide a detailed description of the following dataset: Lorenz Dataset
EHR-Rel
EHR-RelB is a benchmark dataset for biomedical concept relatedness, consisting of 3630 concept pairs sampled from electronic health records (EHRs). EHR-RelA is a smaller dataset of 111 concept pairs, which are mainly unrelated.
Provide a detailed description of the following dataset: EHR-Rel
MLGESTURE DATASET
MlGesture is a dataset for hand gesture recognition tasks, recorded in a car with 5 different sensor types at two different viewpoints. The dataset contains over 1300 hand gesture videos from 24 participants and features 9 different hand gesture symbols. One sensor cluster with five different cameras is mounted in front of the driver in the center of the dashboard. A second sensor cluster is mounted on the ceiling looking straight down.
Provide a detailed description of the following dataset: MLGESTURE DATASET
NYT-H
NYT-H is a dataset for distantly supervised relation extraction, in which distantly supervised (DS) labelled training data is used and annotators are hired to label the test data. NYT-H can serve as a benchmark for distantly supervised relation extraction.
Provide a detailed description of the following dataset: NYT-H
CSAW-S
CSAW-S is a dataset of mammography images which includes expert annotations of tumors and non-expert annotations of breast anatomy and artifacts in the image.
Provide a detailed description of the following dataset: CSAW-S
2D-3D Match Dataset
2D-3D Match Dataset is a new dataset of 2D-3D correspondences built by leveraging the availability of several 3D datasets from RGB-D scans. Specifically, data from SceneNN and 3DMatch are used. The training dataset consists of 110 RGB-D scans, of which 56 scenes are from SceneNN and 54 scenes are from 3DMatch. The 2D-3D correspondence data is generated as follows. Given a 3D point which is randomly sampled from a 3D point cloud, a set of 3D patches from different scanning views are extracted. To find a 2D-3D correspondence, for each 3D patch, its 3D position is re-projected into all RGB-D frames for which the point lies in the camera frustum, taking occlusion into account. The corresponding local 2D patches around the re-projected point are extracted. In total, around 1.4 million 2D-3D correspondences are collected.
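The re-projection step described above can be sketched as follows; this is a simplified pinhole-camera sketch with assumed variable names, patch size and occlusion tolerance, not the authors' actual generation code.

```python
import numpy as np

def project_point(p_world, R, t, K, depth_map, occlusion_tol=0.02):
    """Project a 3D point into an RGB-D frame and reject it if occluded.

    p_world: 3D point in world coordinates; R, t: world-to-camera rotation/translation;
    K: 3x3 intrinsics; depth_map: per-pixel depth of the frame (same units as z).
    Returns (u, v) pixel coordinates, or None if the point is behind the camera,
    outside the image, or occluded according to the depth map.
    """
    p_cam = R @ p_world + t
    if p_cam[2] <= 0:                        # behind the camera
        return None
    uvw = K @ p_cam
    u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
    h, w = depth_map.shape
    if not (0 <= u < w and 0 <= v < h):      # outside the image
        return None
    if abs(depth_map[v, u] - p_cam[2]) > occlusion_tol:  # occluded by closer geometry
        return None
    return u, v

def extract_patch(image, u, v, size=64):
    """Crop a square 2D patch centred on the re-projected point (size is illustrative)."""
    half = size // 2
    return image[v - half:v + half, u - half:u + half]
```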
Provide a detailed description of the following dataset: 2D-3D Match Dataset
FIGR-8
The FIGR-8 database is a dataset containing 1,548,256 images spread across 17,375 classes, representing pictograms, ideograms, icons, emoticons or depictions of objects or concepts. Its aim is to set a benchmark for Few-shot Image Generation tasks, albeit not being limited to it. Each image is 192x192 pixels with grayscale values of 0-255. Classes are not balanced (they do not all contain the same number of elements), but each contains at least 8 images.
Provide a detailed description of the following dataset: FIGR-8
ImageNet-Sketch
The ImageNet-Sketch data set consists of 50,889 images, approximately 50 images for each of the 1000 ImageNet classes. The data set is constructed with Google Image queries "sketch of __", where __ is the standard class name. The search is restricted to the "black and white" color scheme. 100 images are initially queried for every class, and the pulled images are cleaned by deleting irrelevant images and images that are for similar but different classes. For some classes, there are fewer than 50 images after manual cleaning, so the data set is then augmented by flipping and rotating the images.
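The flip-and-rotate augmentation mentioned above might look roughly like the sketch below; the specific transforms and rotation angles are assumptions, since the description only states that flipping and rotating were used to reach ~50 images per class.

```python
from itertools import cycle
from PIL import Image

def augment_to_target(images, target=50):
    """Pad an under-sized class to `target` images by flipping and rotating.

    `images` is a non-empty list of PIL images for one class; the transform set
    and rotation angles below are illustrative choices.
    """
    transforms = [
        lambda im: im.transpose(Image.FLIP_LEFT_RIGHT),
        lambda im: im.rotate(15, expand=True, fillcolor="white"),
        lambda im: im.rotate(-15, expand=True, fillcolor="white"),
    ]
    augmented = list(images)
    source = cycle([(im, tf) for im in images for tf in transforms])
    while len(augmented) < target:
        im, tf = next(source)
        augmented.append(tf(im))
    return augmented
```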
Provide a detailed description of the following dataset: ImageNet-Sketch
YouTube-VIS 2019
YouTubeVIS is a new dataset tailored for tasks like simultaneous detection, segmentation and tracking of object instances in videos and is collected based on the current largest video object segmentation dataset YouTubeVOS.
Provide a detailed description of the following dataset: YouTube-VIS 2019
EURLEX57K
EURLEX57K is a new publicly available legal dataset for Large-scale Multi-label Text Classification (LMTC), containing 57k English EU legislative documents from the EUR-LEX portal, tagged with ∼4.3k labels (concepts) from the European Vocabulary (EUROVOC).
Provide a detailed description of the following dataset: EURLEX57K
Anonymized Keystrokes Dataset
This resource includes two datasets, one for English-French (En-Fr) and another for English-German (En-De). For each dataset, the action sequences for full documents are provided, along with an editor identifier. The dataset contains document-level post-editing action sequences, including edit operations from keystrokes, mouse actions, and waiting times.
Provide a detailed description of the following dataset: Anonymized Keystrokes Dataset
METU-VIREF Dataset
**METU-VIREF** is a video referring expression dataset comprising videos from the VIRAT Ground and ILSVRC2015 VID datasets. VIRAT is a surveillance dataset and contains mainly people and vehicles. To line up with this and restrict the domain, only videos that contain vehicles are used from the ILSVRC dataset. The METU-VIREF dataset does not contain the whole videos from these datasets (the videos need to be downloaded from the respective sources) but just referring expressions for video sequences containing an object pair. For this, object pairs were chosen which had a relation that a meaningful referring expression could be written for.
Provide a detailed description of the following dataset: METU-VIREF Dataset
UNDD
UNDD consists of 7125 unlabelled day and night images; additionally, it has 75 night images with pixel-level annotations having classes equivalent to Cityscapes dataset.
Provide a detailed description of the following dataset: UNDD
Mapillary Vistas Dataset
Mapillary Vistas Dataset is a diverse street-level imagery dataset with pixel‑accurate and instance‑specific human annotations for understanding street scenes around the world.
Provide a detailed description of the following dataset: Mapillary Vistas Dataset
Food.com Recipes and Interactions
Food.com Recipes and Interactions consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018).
Provide a detailed description of the following dataset: Food.com Recipes and Interactions
SYNTHIA-PANO
SYNTHIA-PANO is the panoramic version of the SYNTHIA dataset. Five sequences are included: Seqs02-summer, Seqs02-fall, Seqs04-summer, Seqs04-fall and Seqs05-summer. Panoramic images with fine annotations for semantic segmentation are provided.
Provide a detailed description of the following dataset: SYNTHIA-PANO
SUIM
The Segmentation of Underwater IMagery (SUIM) dataset contains over 1500 images with pixel annotations for eight object categories: fish (vertebrates), reefs (invertebrates), aquatic plants, wrecks/ruins, human divers, robots, sea-floor, and background (waterbody). The images have been rigorously collected during oceanic explorations and human-robot collaborative experiments, and annotated by human participants.
Provide a detailed description of the following dataset: SUIM
CSAbstruct Dataset
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles. The key difference between this dataset and PUBMED-RCT is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form. Therefore, there is more variety in writing styles in CSAbstruct. CSAbstruct is collected from the Semantic Scholar corpus (Ammar et al., 2018). Each sentence is annotated by 5 workers on the Figure Eight platform with one of 5 categories: {BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}.
Provide a detailed description of the following dataset: CSAbstruct Dataset
Pesteh-Set
Pesteh-Set consists of two parts. The first part includes 423 images with ground truth. The pistachios are sorted into two classes: open-mouth and closed-mouth. The ground truth of the images is a CSV file that consists of the bounding boxes of the two classes of pistachios in the images. There are between 1 and 27 pistachios in each image, and 3,927 pistachios in total. The second part includes 6 videos with a total length of 167 seconds and 561 moving pistachios.
Provide a detailed description of the following dataset: Pesteh-Set
CelebAGaze
CelebAGaze consists of 25,283 high-resolution celebrity images that are collected from CelebA and the Internet. It consists of 21,832 face images with eyes staring at the camera and 3,451 face images with eyes staring somewhere else. All images are cropped to 256 × 256 and the eye mask region is computed using dlib. Specifically, dlib is used to extract 68 facial landmarks and calculate the mean of 6 points near the eye region, which becomes the center point of the mask. The size of the mask is fixed to 30×50. For the test set, 300 samples from domain Y and 100 samples from domain X are randomly selected, with the remaining images used as the training set. Note that this dataset is unpaired and is not labeled with the specific eye angle or head pose information.
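The mask construction described above can be sketched with dlib as follows; which 6 landmarks are averaged is not specified here, so the left-eye indices (36-41) of the standard 68-point model are used purely as an example, and the predictor file name is the usual pretrained model, not something released with CelebAGaze.

```python
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
# Standard pretrained 68-landmark model (path is an assumption).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_mask_box(image, eye_indices=range(36, 42), mask_h=30, mask_w=50):
    """Compute a fixed-size 30x50 eye-mask box centred on the mean of 6 landmarks.

    eye_indices defaults to the left-eye points (36-41) as an illustrative choice.
    Returns (top, left, bottom, right) pixel coordinates of the mask.
    """
    face = detector(image, 1)[0]                 # assume exactly one face per image
    shape = predictor(image, face)
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in eye_indices])
    cx, cy = pts.mean(axis=0)                    # centre of the eye region
    top, left = int(cy - mask_h / 2), int(cx - mask_w / 2)
    return top, left, top + mask_h, left + mask_w
```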
Provide a detailed description of the following dataset: CelebAGaze
IQUAD
IQUAD is a dataset for Visual Question Answering in interactive environments. It is built upon AI2-THOR, a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQUAD V1 has 75,000 questions, each paired with a unique scene configuration.
Provide a detailed description of the following dataset: IQUAD
EVE
EVE (End-to-end Video-based Eye-tracking) is a dataset for eye-tracking. It is collected from 54 participants and consists of 4 camera views, over 12 million frames and 1327 unique visual stimuli (images, video, text), adding up to approximately 105 hours of video data in total. Official competition on Codalab: [https://competitions.codalab.org/competitions/28954](https://competitions.codalab.org/competitions/28954)
Provide a detailed description of the following dataset: EVE
OpoSum
OPOSUM is a dataset for the training and evaluation of Opinion Summarization models which contains Amazon reviews from six product domains: Laptop Bags, Bluetooth Headsets, Boots, Keyboards, Televisions, and Vacuums. The six training collections were created by downsampling from the Amazon Product Dataset introduced in McAuley et al. (2015) and contain reviews and their respective ratings. A subset of the dataset has been manually annotated, specifically, for each domain, 10 different products were uniformly sampled (across ratings) with 10 reviews each, amounting to a total of 600 reviews, to be used only for development (300) and testing (300).
Provide a detailed description of the following dataset: OpoSum
ForecastQA
ForecastQA is a question-answering dataset consisting of 10,392 event forecasting questions, which have been collected and verified via crowdsourcing efforts. The forecasting problem for this dataset is formulated as a restricted-domain, multiple-choice, question-answering (QA) task that simulates the forecasting scenario.
Provide a detailed description of the following dataset: ForecastQA
TSU
Toyota Smarthome Untrimmed (TSU) is a dataset for activity detection in long untrimmed videos. The dataset contains 536 videos with an average duration of 21 mins. Since this dataset is based on the same footage video as Toyota Smarthome Trimmed version, it features the same challenges and introduces additional ones. The dataset is annotated with 51 activities. The dataset has been recorded in an apartment equipped with 7 Kinect v1 cameras. It contains common daily living activities of 18 subjects. The subjects are senior people in the age range 60-80 years old. The dataset has a resolution of 640×480 and offers 3 modalities: RGB + Depth + 3D Skeleton. The 3D skeleton joints were extracted from RGB. For privacy-preserving reasons, the face of the subjects is blurred.
Provide a detailed description of the following dataset: TSU
AUTSL
The Ankara University Turkish Sign Language Dataset (AUTSL) is a large-scale, multi-modal dataset that contains isolated Turkish sign videos. It contains 226 signs performed by 43 different signers. There are 38,336 video samples in total. The samples are recorded using Microsoft Kinect v2 in RGB, depth and skeleton formats. The videos are provided at a resolution of 512×512. The skeleton data contains the spatial coordinates, i.e. (x, y), of the 25 junction points on the signer's body, aligned with the 512×512 data.
Provide a detailed description of the following dataset: AUTSL
WikiHowQA
WikiHowQA is a Community-based Question Answering dataset, which can be used for both answer selection and abstractive summarization tasks. It contains 76,687 questions in the train set, 8,000 in the development set and 22,354 in the test set.
Provide a detailed description of the following dataset: WikiHowQA
DRealSR
DRealSR establishes a Super Resolution (SR) benchmark with diverse real-world degradation processes, mitigating the limitations of conventional simulated image degradation. It has been collected from five DSLR cameras in natural scenes and covers indoor and outdoor scenes (e.g., advertising posters, plants, offices, buildings), avoiding moving objects. The training images are cropped into 380×380, 272×272 and 192×192 patches, resulting in 31,970 patches.
Provide a detailed description of the following dataset: DRealSR
EDEN
EDEN (Enclosed garDEN) is a multimodal synthetic dataset, a dataset for nature-oriented applications. The dataset features more than 300K images captured from more than 100 garden models. Each image is annotated with various low/high-level vision modalities, including semantic segmentation, depth, surface normals, intrinsic colors, and optical flow.
Provide a detailed description of the following dataset: EDEN
Wiki-CS
Wiki-CS is a Wikipedia-based dataset for benchmarking Graph Neural Networks. The dataset is constructed from Wikipedia categories, specifically 10 classes corresponding to branches of computer science, with very high connectivity. The node features are derived from the text of the corresponding articles. They were calculated as the average of pretrained GloVe word embeddings (Pennington et al., 2014), resulting in 300-dimensional node features. The dataset has 11,701 nodes and 216,123 edges.
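A minimal sketch of the node-feature computation described above (averaging pretrained 300-dimensional GloVe vectors over an article's text) is shown below; the tokenization and embedding-file format are assumptions, not the exact Wiki-CS preprocessing.

```python
import numpy as np

def load_glove(path):
    """Load pretrained GloVe vectors from a plain-text embedding file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

def node_feature(article_text, glove, dim=300):
    """Average the GloVe vectors of the article's tokens.

    Simple lower-cased whitespace tokenization is used here for illustration;
    out-of-vocabulary tokens are skipped, and an all-zero vector is returned
    if no token is covered.
    """
    tokens = article_text.lower().split()
    vecs = [glove[t] for t in tokens if t in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)
```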
Provide a detailed description of the following dataset: Wiki-CS
ChaosNLI
Chaos NLI is a Natural Language Inference (NLI) dataset with 100 annotations per example (for a total of 464,500 annotations) for some existing data points in the development sets of SNLI, MNLI, and Abductive NLI. The dataset provides additional labels for NLI annotations that reflect the distribution of human annotators, instead of picking the majority label as the gold standard label.
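A minimal sketch of turning the ~100 per-example annotations into a soft label distribution, as opposed to a single majority label, is shown below; the label set and helper name are illustrative (Abductive NLI uses two hypothesis choices rather than the three NLI labels).

```python
from collections import Counter

def label_distribution(annotations, labels=("entailment", "neutral", "contradiction")):
    """Turn a list of per-example annotation labels into an empirical distribution."""
    counts = Counter(annotations)
    total = sum(counts[l] for l in labels)
    return {l: counts[l] / total for l in labels}

# Example: the distribution itself, not the majority label, becomes the target.
dist = label_distribution(["entailment"] * 55 + ["neutral"] * 40 + ["contradiction"] * 5)
# -> {'entailment': 0.55, 'neutral': 0.40, 'contradiction': 0.05}
```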
Provide a detailed description of the following dataset: ChaosNLI
VideoForensicsHQ
VideoForensicsHQ is a benchmark dataset for face video forgery detection, providing high quality visual manipulations. It is one of the first face video manipulation benchmark sets that also contains audio and thus complements existing datasets along a new challenging dimension. VideoForensicsHQ shows manipulations at much higher video quality and resolution, and shows manipulations that are provably much harder to detect by humans than videos in other datasets. VideoForensicsHQ contains 1,737 videos of speaking faces (44% male, 56% female), with 8 different emotions, most of them of “HD” resolution. The videos amount to 1,666,816 frames.
Provide a detailed description of the following dataset: VideoForensicsHQ
SOLO
The SOLO Corpus comprises over 4 million English tweets, each of which contains at least one of the following tokens: solitude, lonely, and loneliness. The corpus has been collected to analyze the language and emotions associated with the state of being alone in English tweets. Tweets related to the state of being alone were collected by polling the Twitter API from August 28, 2018 to July 10, 2019 with the following query terms: loneliness, lonely, and solitude. Duplicate tweets, short tweets (containing less than three words), and tweets with external URLs were discarded. Further, only up to three tweets per user are kept. This minimizes the impact of prolific tweeters and bots on the corpus.
Provide a detailed description of the following dataset: SOLO
EXPO-HD
The EXPO-HD Dataset is a dataset of Expo whiteboard markers for the purpose of instance segmentation. The dataset contains two subsets (both include instance segmentation labels):
* Photorealistic synthetic image dataset with 5000 images.
* Real image dataset with 200 images (used for validation and test).

The dataset can be used for testing domain adaptation techniques, as the training set consists of only synthetic images, and the validation and test sets consist of real images.
Provide a detailed description of the following dataset: EXPO-HD
Twitter Death Hoaxes
This is a dataset for detecting death hoaxes. It consists of death reports collected from Twitter between 1st January, 2012 and 31st December, 2014. It was collected by tracking the keyword 'RIP' and matching those tweets in which a name is mentioned next to RIP. Matching names were identified by using Wikidata as a database of names. The dataset contains 4,007 death reports, of which 2,301 are real deaths, 1,092 are commemorations and 614 are fake deaths.
Provide a detailed description of the following dataset: Twitter Death Hoaxes
KACC
The KACC benchmark consists of three subtasks that can be applied to knowledge graphs: knowledge abstraction, knowledge concretization and knowledge completion.
- The **knowledge abstraction** subtask contains tasks of concept inference, schema prediction and concept graph completion on the two-view KG.
- The **knowledge concretization** subtask requires models to do entity graph completion based on the two subgraphs. The concretization ability can be further examined by the results of long-tail entity link prediction.
- The **knowledge completion** subtask consists of typical single-view knowledge graph completion tasks for each subgraph.

KACC contains 999,902 entities in the entity graph, with 691 types of relations. The concept graph contains 21,293 concepts with 198 types of meta-relations. There are 2,367,971 cross-links between the two graphs.
Provide a detailed description of the following dataset: KACC
MSSD
The Spotify Music Streaming Sessions Dataset (MSSD) consists of 160 million streaming sessions with associated user interactions, audio features and metadata describing the tracks streamed during the sessions, and snapshots of the playlists listened to during the sessions. This dataset enables research on important problems including how to model user listening and interaction behaviour in streaming, as well as Music Information Retrieval (MIR), and session-based sequential recommendations.
Provide a detailed description of the following dataset: MSSD
InterHand2.6M
The InterHand2.6M dataset is a large-scale real-captured dataset with accurate GT 3D interacting hand poses, used for 3D hand pose estimation. The dataset contains 2.6M labeled single and interacting hand frames.
Provide a detailed description of the following dataset: InterHand2.6M
TaxiNLI
TaxiNLI is a dataset collected based on the principles and categorizations of a taxonomy of NLI reasoning categories. A subset of examples is curated from MultiNLI (Williams et al., 2018) by sampling uniformly based on the entailment label and the domain. The dataset is annotated with fine-grained category labels.
Provide a detailed description of the following dataset: TaxiNLI
AllMusic Mood Subset
The AllMusic Mood Subset (AMS) is a dataset for mood classification from songs. It is created by matching a subset of the Million Song Dataset (MSD), totalling 67k tracks, with expert annotations of 188 different moods collected from AllMusic. Since the AMS is a subset of the MSD, the audio data is gathered by obtaining the 7digital 30-second previews associated with all MSD tracks. These are 128 kbps stereo MP3 files sampled at 44.1 kHz.
Provide a detailed description of the following dataset: AllMusic Mood Subset
NISP
This dataset contains speech recordings along with speaker physical parameters (height, weight, shoulder size, age) as well as regional and linguistic information. There are a total of 345 speakers (219 male and 126 female). The dataset contains sentences taken from newspapers. Each speaker has contributed about 4-5 minutes of data that includes recordings in both English and their mother tongue. The transcript for the text is provided in UTF-8 format.
Provide a detailed description of the following dataset: NISP
EDUVSUM
EDUVSUM contains educational videos with subtitles from three popular e-learning platforms: Edx, YouTube, and the TIB AV-Portal. The videos cover the following topics: crash courses on the history of science and engineering, computer science, Python and web programming, machine learning and computer vision, Internet of Things (IoT), and software engineering. In total, the current version of the dataset contains 98 videos with ground truth values annotated by a user with an academic background in computer science.
Provide a detailed description of the following dataset: EDUVSUM
ADVANCE
The AuDio Visual Aerial sceNe reCognition datasEt (ADVANCE) is a brand-new multimodal learning dataset, which aims to explore the contribution of both audio and conventional visual messages to scene recognition. In summary, the dataset contains 5,075 pairs of geotagged aerial images and sounds, classified into 13 scene classes, i.e., airport, sports land, beach, bridge, farmland, forest, grassland, harbor, lake, orchard, residential area, shrub land, and train station.
Provide a detailed description of the following dataset: ADVANCE
Multi-Modal CelebA-HQ
Multi-Modal-CelebA-HQ is a large-scale face image dataset that has 30,000 high-resolution face images selected from the CelebA dataset by following CelebA-HQ. Each image has a high-quality segmentation mask, sketch, descriptive text, and an image with a transparent background. Multi-Modal-CelebA-HQ can be used to train and evaluate algorithms for text-to-image generation, text-guided image manipulation, sketch-to-image generation, and GANs for face generation and editing.
Provide a detailed description of the following dataset: Multi-Modal CelebA-HQ
Short Text Font Dataset
The proposed dataset includes 1,309 short text instances from Adobe Spark. The dataset is a collection of publicly available sample texts created by different designers. It covers a variety of topics found in posters, flyers, motivational quotes and advertisements.
Provide a detailed description of the following dataset: Short Text Font Dataset
SPHERE-calorie
The dataset contains both RGB and depth images, and the data from two accelerometers, together with ground truth calorie values from a calorimeter for calorie expenditure estimation in home environments.
Provide a detailed description of the following dataset: SPHERE-calorie
SmartCity
SmartCity consists of 50 images in total, collected from ten city scenes including office entrances, sidewalks, atriums, shopping malls, etc. Unlike existing crowd counting datasets, whose images contain hundreds or thousands of pedestrians and are nearly all taken outdoors, SmartCity has few pedestrians per image and consists of both outdoor and indoor scenes: the average number of pedestrians is only 7.4, with a minimum of 1 and a maximum of 14.
Provide a detailed description of the following dataset: SmartCity
ErhuPT
ErhuPT is an audio dataset containing about 1,500 audio clips recorded by multiple professional players.
Provide a detailed description of the following dataset: ErhuPT
VideoNavQA
The VideoNavQA dataset contains pairs of questions and videos generated in the House3D environment. The goal of this dataset is to assess question-answering performance from nearly-ideal navigation paths, while considering a much more complete variety of questions than current instantiations of the Embodied Question Answering (EQA) task. VideoNavQA contains approximately 101,000 pairs of videos and questions, 28 types of questions belonging to 8 categories, with 70 possible answers. Each question type is associated with a template that facilitates programmatic generation using ground truth information extracted from the video. The complexity of the questions in the dataset is far beyond that of other similar tasks using this generation method (such as CLEVR): the questions involve single or multiple object/room existence, object/room counting, object color recognition and localization, spatial reasoning, object/room size comparison and equality of object attributes (color, room location).
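A minimal sketch of the template-based generation described above is shown below; the template wording and placeholders are illustrative, not the official VideoNavQA templates, which fill in ground-truth information extracted from each video.

```python
# Illustrative question templates in the spirit of the generation method described
# above; the wording and placeholder names are examples, not the official templates.
TEMPLATES = {
    "object_existence": "Is there a {object} in the {room}?",
    "object_count": "How many {object}s are there in the {room}?",
    "object_color": "What color is the {object} in the {room}?",
    "size_comparison": "Is the {object_a} bigger than the {object_b}?",
}

def generate_question(qtype, ground_truth):
    """Fill a template with ground-truth entities extracted from the video."""
    return TEMPLATES[qtype].format(**ground_truth)

# e.g. generate_question("object_existence", {"object": "sofa", "room": "living room"})
# -> "Is there a sofa in the living room?"
```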
Provide a detailed description of the following dataset: VideoNavQA