Columns: dataset_name (string, 2–128 characters), description (string, 1–9.7k characters), prompt (string, 59–185 characters)
OTCBVS
**OTCBVS** is a benchmark dataset for testing and evaluating novel and state-of-the-art computer vision algorithms. The benchmark contains videos and images recorded in and beyond the visible spectrum and is available for free to all researchers in the international computer vision community.
Provide a detailed description of the following dataset: OTCBVS
LEAF-QA
LEAF-QA is a comprehensive dataset of 250,000 densely annotated figures/charts constructed from real-world open data sources, along with ~2 million question-answer (QA) pairs querying the structure and semantics of these charts. LEAF-QA highlights the problem of multimodal QA, which is notably different from conventional visual QA (VQA) and has recently gained interest in the community. Furthermore, LEAF-QA is significantly more complex than previous attempts at chart QA, viz. FigureQA and DVQA, which present only limited variations in chart data. Because LEAF-QA is constructed from real-world sources, it requires a novel architecture to enable question answering.
Provide a detailed description of the following dataset: LEAF-QA
Multi Task Crowd
Multi Task Crowd is a 100-image dataset fully annotated for crowd counting, violent behaviour detection and density level classification.
Provide a detailed description of the following dataset: Multi Task Crowd
DogCentric Activity
The **DogCentric Activity** dataset is composed of dog activity videos taken from a first-person animal viewpoint. The dataset contains 10 different types of activities, including activities performed by the dog itself, interactions between people and the dog, and activities performed by people or cars. The authors attached a GoPro camera to the back of each of four dogs, and their owners took them on walks along their familiar routes. The walking routes are in various environments, such as a residential area, a park along a river, a sandy beach, a field, and streets with traffic. Thus, even when different dogs perform the same activity, their background varies. The videos contain various activities, with 10 activities of interest chosen as targets: 'playing with a ball', 'waiting for a car to pass by', 'drinking water', 'feeding', 'turning dog's head to the left', 'turning dog's head to the right', 'petting', 'shaking dog's body by himself', 'sniffing', and 'walking'. The videos are at 320×240 resolution, 48 frames per second.
Provide a detailed description of the following dataset: DogCentric Activity
Visual Question Answering v2.0
Visual Question Answering (VQA) v2.0 is a dataset containing open-ended questions about images. These questions require an understanding of vision, language and commonsense knowledge to answer. It is the second version of the [VQA](https://www.paperswithcode.com/dataset/vqa) dataset.

- 265,016 images (COCO and abstract scenes)
- At least 3 questions (5.4 questions on average) per image
- 10 ground truth answers per question
- 3 plausible (but likely incorrect) answers per question
- Automatic evaluation metric

The [first version of the dataset](/dataset/visual-question-answering) was released in October 2015.
Provide a detailed description of the following dataset: Visual Question Answering v2.0
Biwi Kinect Head Pose
Biwi Kinect Head Pose is a challenging dataset mainly inspired by the automotive setup. It was acquired with the Microsoft Kinect sensor, a structured IR light device. It contains about 15k frames, with RGB images (640 × 480) and depth maps (640 × 480). Twenty subjects were involved in the recordings: four of them were recorded twice, for a total of 24 sequences. The ground truth of yaw, pitch and roll angles is reported together with the head center and the calibration matrix.
Provide a detailed description of the following dataset: Biwi Kinect Head Pose
ELAS
ELAS is a dataset for lane detection. It contains more than 20 different scenes (in more than 15,000 frames) and considers a variety of scenarios (urban road, highways, traffic, shadows, etc.). The dataset was manually annotated for several events that are of interest for the research community (i.e., lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes).
Provide a detailed description of the following dataset: ELAS
100DOH
The 100 Days Of Hands Dataset (100DOH) is a large-scale video dataset containing hands and hand-object interactions. It consists of 27.3K YouTube videos from 11 categories with nearly 131 days of footage of everyday interaction. The focus of the dataset is hand contact, and it includes both first-person and third-person perspectives. The videos in 100DOH are unconstrained and content-rich, ranging from records of daily life to specific instructional videos. To enforce diversity, the dataset contains no more than 20 videos from each uploader.
Provide a detailed description of the following dataset: 100DOH
SVLD
The Social Vision and Language Dataset (SVLD) is a large-scale multimodal dataset designed for research into social contextual learning.
Provide a detailed description of the following dataset: SVLD
TextComplexityDE
TextComplexityDE is a dataset consisting of 1,000 German sentences taken from 23 Wikipedia articles in 3 different article genres, intended for developing text-complexity predictor models and automatic text simplification for German. The dataset includes subjective assessments of different text-complexity aspects provided by German learners at levels A and B. In addition, it contains manual simplifications of 250 of those sentences provided by native speakers, and subjective assessments of the simplified sentences by participants from the target group. The subjective ratings were collected using both laboratory studies and a crowdsourcing approach.
Provide a detailed description of the following dataset: TextComplexityDE
Image Paragraph Captioning
The Image Paragraph Captioning dataset allows researchers to benchmark their progress in generating paragraphs that tell a story about an image. The dataset contains 19,561 images from the [Visual Genome dataset](https://paperswithcode.com/dataset/visual-genome). Each image contains one paragraph. The training/val/test sets contain 14,575/2,487/2,489 images. Since all the images are also part of the Visual Genome dataset, each image also contains 50 region descriptions (short phrases describing parts of an image), 35 objects, 26 attributes, 21 relationships and 17 question-answer pairs.
Provide a detailed description of the following dataset: Image Paragraph Captioning
Famulus
Famulus is a dataset for the segmentation and classification of epistemic activities in diagnostic reasoning texts.
Provide a detailed description of the following dataset: Famulus
CMU Wilderness Multilingual Speech Dataset
The CMU Wilderness Multilingual Speech Dataset is a dataset of over 700 different languages providing audio, aligned text and word pronunciations. On average, each language provides around 20 hours of sentence-length transcriptions.
Provide a detailed description of the following dataset: CMU Wilderness Multilingual Speech Dataset
Aesthetic Visual Analysis
**Aesthetic Visual Analysis** is a dataset for aesthetic image assessment that contains over 250,000 images along with a rich variety of meta-data including a large number of aesthetic scores for each image, semantic labels for over 60 categories as well as labels related to photographic style.
Provide a detailed description of the following dataset: Aesthetic Visual Analysis
BigBIRD
BigBIRD is a 3D dataset of 125 objects, with the following data for each object:

* 600 12-megapixel images, sampling the viewing hemisphere
* 600 registered RGB-D point clouds from a Carmine 1.09 sensor
* Pose information for each of the above images and point clouds
* Segmentation masks for each of the above images (and segmented point clouds)
* Merged point clouds consisting of data from all 600 viewpoints
* Reconstructed meshes from the merged point clouds

Paper: [ICRA 2014 "A Large-Scale 3D Database of Object Instances."](https://people.eecs.berkeley.edu/~pabbeel/papers/2014-ICRA-BigBIRD.pdf)
Provide a detailed description of the following dataset: BigBIRD
WSJ0-2mix
**WSJ0-2mix** is a corpus of two-speaker speech mixtures created from utterances of the Wall Street Journal (WSJ0) speech recognition corpus; it is widely used for speech separation.
Provide a detailed description of the following dataset: WSJ0-2mix
WHAM!
The **WSJ0 Hipster Ambient Mixtures** (**WHAM!**) dataset pairs each two-speaker mixture in the wsj0-2mix dataset with a unique noise background scene. It has an extension called [WHAMR!](/dataset/whamr) that adds artificial reverberation to the speech signals in addition to the background noise. The noise audio was collected at various urban locations throughout the San Francisco Bay Area in late 2018. The environments primarily consist of restaurants, cafes, bars, and parks. Audio was recorded using an Apogee Sennheiser binaural microphone on a tripod between 1.0 and 1.5 meters off the ground.
Provide a detailed description of the following dataset: WHAM!
CUHK Face Alignment Database
The CUHK Face Alignment Database is a dataset with 13,466 face images, among which 5,590 images are from LFW and the remaining 7,876 images are downloaded from the web. Each face is labeled with the positions of five facial keypoints. 10,000 images are used for training and the remaining 3,466 images for validation. Image Source: [Deep Convolutional Network Cascade for Facial Point Detection](http://mmlab.ie.cuhk.edu.hk/archive/CNN_FacePoint.htm)
Provide a detailed description of the following dataset: CUHK Face Alignment Database
CUHK Square Dataset
The CUHK Square dataset is for transfer learning research on adapting generic pedestrian detectors. It includes a 60-minute traffic video sequence recorded by a stationary camera, with a frame size of 720 × 576. To evaluate the performance of human detection on this dataset, pedestrians in some sampled frames are manually labeled as ground truth. Paper Source: [Transferring a generic pedestrian detector towards specific scenes](https://doi.org/10.1109/CVPR.2012.6248064)
Provide a detailed description of the following dataset: CUHK Square Dataset
CUHK Occlusion Dataset
The CUHK Occlusion dataset includes 1,063 images with occluded pedestrians. It is used for human detection with occlusion handling in crowded scenes. Paper: [A discriminative deep model for pedestrian detection with occlusion handling](https://doi.org/10.1109/CVPR.2012.6248062)
Provide a detailed description of the following dataset: CUHK Occlusion Dataset
Grand Central Station Dataset
The Grand Central Station dataset includes a video with 50,010 frames and is used for scene understanding and crowd analysis. Paper: [Understanding collective crowd behaviors: Learning a Mixture model of Dynamic pedestrian-Agents](https://doi.org/10.1109/CVPR.2012.6248013)
Provide a detailed description of the following dataset: Grand Central Station Dataset
CUHK02
CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view, making a total of 7,264 images. Image Source: [Locally Aligned Feature Transforms across Views](https://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Li_Locally_Aligned_Feature_2013_CVPR_paper.pdf)
Provide a detailed description of the following dataset: CUHK02
ArtEmis
ArtEmis is a large-scale dataset aimed at providing a detailed understanding of the interplay between visual content, its emotional effect, and explanations for the latter in language. In contrast to most existing annotation datasets in computer vision, this dataset focuses on the affective experience triggered by visual artworks, and the annotators were asked to indicate the dominant emotion they feel for a given image and, crucially, to also provide a grounded verbal explanation for their emotion choice. This leads to a rich set of signals for both the objective content and the affective impact of an image, creating associations with abstract concepts (e.g., “freedom” or “love”), or references that go beyond what is directly visible, including visual similes and metaphors, or subjective references to personal experiences. This dataset focuses on visual art (e.g., paintings, artistic photographs) as it is a prime example of imagery created to elicit emotional responses from its viewers. ArtEmis contains 439K emotion attributions and explanations from humans, on 81K artworks from WikiArt. Paper: [ArtEmis: Affective Language for Visual Art](https://arxiv.org/abs/2101.07396)
Provide a detailed description of the following dataset: ArtEmis
BreakHis
The Breast Cancer Histopathological Image Classification (BreakHis) dataset is composed of 9,109 microscopic images of breast tumor tissue collected from 82 patients using different magnifying factors (40X, 100X, 200X, and 400X). It contains 2,480 benign and 5,429 malignant samples (700×460 pixels, 3-channel RGB, 8-bit depth in each channel, PNG format). This database has been built in collaboration with the P&D Laboratory - Pathological Anatomy and Cytopathology, Parana, Brazil. Paper: [F. A. Spanhol, L. S. Oliveira, C. Petitjean and L. Heutte, "A Dataset for Breast Cancer Histopathological Image Classification," in IEEE Transactions on Biomedical Engineering, vol. 63, no. 7, pp. 1455-1462, July 2016, doi: 10.1109/TBME.2015.2496264](https://doi.org/10.1109/TBME.2015.2496264)
Provide a detailed description of the following dataset: BreakHis
2D Hela
2D HeLa is a dataset of fluorescence microscopy images of HeLa cells stained with various organelle-specific fluorescent dyes. The images cover 10 organelle classes: DNA (Nuclei), ER (Endoplasmic reticulum), Giantin (cis/medial Golgi), GPP130 (cis Golgi), Lamp2 (Lysosomes), Mitochondria, Nucleolin (Nucleoli), Actin, TfR (Endosomes), and Tubulin. The purpose of the dataset is to train a computer program to automatically identify sub-cellular organelles. Paper: [M. V. Boland and R. F. Murphy (2001). A Neural Network Classifier Capable of Recognizing the Patterns of all Major Subcellular Structures in Fluorescence Microscope Images of HeLa Cells. Bioinformatics 17:1213-1223](https://doi.org/10.1093/bioinformatics/17.12.1213)
Provide a detailed description of the following dataset: 2D Hela
PointPattern
PointPattern is a graph classification dataset constructed from simple point patterns from statistical mechanics. The authors simulated three point patterns in 2D: hard disks in equilibrium (HD), a Poisson point process, and random sequential adsorption (RSA) of disks. The HD and Poisson distributions can be seen as simple models that describe the microstructures of liquids and gases, while RSA is a nonequilibrium stochastic process that introduces new particles one by one subject to non-overlapping conditions. These systems are well known to be structurally different while being easy to simulate, thus providing a solid and controllable classification task. For each point pattern, the particles are treated as nodes, and edges are subsequently drawn according to whether two particles are within a threshold distance, as sketched below.
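Below is a minimal illustrative sketch of this graph construction in Python; uniform random points stand in for a simulated pattern and the threshold value is arbitrary (neither is taken from the dataset):

```python
import numpy as np

def point_pattern_to_graph(points, threshold):
    """Connect particles (nodes) that lie within `threshold` of each other."""
    diff = points[:, None, :] - points[None, :, :]        # pairwise displacement vectors
    dist = np.sqrt((diff ** 2).sum(-1))                   # pairwise Euclidean distances
    adj = (dist < threshold) & ~np.eye(len(points), dtype=bool)  # drop self-loops
    return adj

# Uniform random points as a stand-in for a Poisson point process
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(500, 2))
A = point_pattern_to_graph(pts, threshold=0.05)
print("nodes:", len(pts), "edges:", int(A.sum()) // 2)
```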
Provide a detailed description of the following dataset: PointPattern
Humans in 3D
H3D (Humans in 3D) is a dataset of annotated people. The annotations include:

* The joints and other keypoints (eyes, ears, nose, shoulders, elbows, wrists, hips, knees and ankles)
* The 3D pose inferred from the keypoints
* A visibility boolean for each keypoint
* Region annotations (upper clothes, lower clothes, dress, socks, shoes, hands, gloves, neck, face, hair, hat, sunglasses, bag, occluder)
* Body type (male, female or child)

Paper: [Poselets: Body part detectors trained using 3D human pose annotations](https://doi.org/10.1109/ICCV.2009.5459303)
Provide a detailed description of the following dataset: Humans in 3D
BelgaLogos
BelgaLogos is a dataset for logo detection and recognition. The images of the BelgaLogos dataset have been provided and are copyrighted by the BELGA press agency. They are freely available for research purposes only. The dataset is composed of 10,000 images covering all aspects of life and current affairs: politics and economics, finance and social affairs, sports, culture and personalities. All images are in JPEG format and have been resized with a maximum height and width of 800 pixels, preserving aspect ratio. Paper: [Alexis Joly and Olivier Buisson, Logo retrieval with a contrario visual query expansion, In Proceedings of the Seventeen ACM international Conference on Multimedia, 2009.](https://doi.org/10.1145/1631272.1631361)
Provide a detailed description of the following dataset: BelgaLogos
Aspects dataset
This dataset contains video shots for two different classes: tigers and cars. The shots were collected from 188 car ads (~1–2 min each) and 14 nature documentaries about tigers (~40 min each), amounting to roughly 14 hours of video. The videos were partitioned into shorter shots, and only those showing at least one instance of the class were kept. This produced 806 shots for the car class and 1,880 for the tiger class, typically 1–100 seconds in length. Paper: [Discovering object aspects from video](https://doi.org/10.1016/j.imavis.2016.04.014)
Provide a detailed description of the following dataset: Aspects dataset
POET
POET (Pascal Objects Eye Tracking) is a dataset consisting of eye tracking data for the complete trainval set of ten object classes (cat, dog, bicycle, motorbike, boat, aeroplane, horse, cow, sofa, dining table) from [Pascal VOC 2012](pascal-voc), 6,270 images in total. Each image is annotated with the eye movement records of five participants, whose task was to identify which object class was present in the image. Paper: [Training object class detectors from eye tracking data](https://doi.org/10.1007/978-3-319-10602-1_24)
Provide a detailed description of the following dataset: POET
AMUSE
The automotive multi-sensor (AMUSE) dataset consists of inertial and other complementary sensor data combined with monocular, omnidirectional, high frame rate visual data taken in real traffic scenes during multiple test drives. Paper: [A Multi-sensor Traffic Scene Dataset with Omnidirectional Video](https://doi.org/10.1109/CVPRW.2013.110)
Provide a detailed description of the following dataset: AMUSE
IMO
IMO is a dataset of annotated independently moving objects. It contains left and right (stereo) images, stereo disparity from SGM, and vehicle labels, as well as ground truth annotations. Paper: [Independently Moving Object Trajectories from Sequential Hierarchical Ransac](https://users.isy.liu.se/cvl/perfo/abstracts/persson21.html)
Provide a detailed description of the following dataset: IMO
LTIR
The LTIR dataset is a thermal infrared dataset for evaluation of Short-Term Single-Object (STSO) tracking. The dataset contains:

* 20 thermal infrared sequences, one .png per frame. Some sequences are available in both 8- and 16-bit versions.
* Bounding box annotations of one object per sequence.
* Local per-frame annotations.

Paper: [A thermal Object Tracking benchmark](https://doi.org/10.1109/AVSS.2015.7301772)
Provide a detailed description of the following dataset: LTIR
Family101
The Family101 dataset is a large-scale dataset of families across several generations. It contains 101 different families with distinct family names, including 206 nuclear families and 607 individuals, with 14,816 images. The dataset is composed of renowned public families. Paper: [Kinship Classification by Modeling Facial Feature Heredity](https://doi.org/10.1109/ICIP.2013.6738614)
Provide a detailed description of the following dataset: Family101
FIW
FIW is a large and comprehensive database for kinship recognition. FIW is made up of 11,932 natural family photos of 1,000 families, nearly 10x more than the next-largest database, [Family-101](family101). It also contains 656,954 image pairs split between the 11 relationship types, which is much larger than the second-largest database, [KinFaceW-II](kinfacew), with 2,000 pairs for only 4 kinship types.
Provide a detailed description of the following dataset: FIW
KinFaceW
KinFaceW consists of two kinship datasets: KinFaceW-I and KinFaceW-II. Face images were collected from the internet, including some public-figure face images as well as their parents' or children's face images. In both datasets, face images are captured under uncontrolled environments with no restriction in terms of pose, lighting, background, expression, age, ethnicity, or partial occlusion. The difference between KinFaceW-I and KinFaceW-II is that face images with a kin relation were acquired from different photos in KinFaceW-I and, in most cases, from the same photo in KinFaceW-II. Paper: [Neighborhood Repulsed Metric Learning for Kinship Verification](https://doi.org/10.1109/CVPR.2012.6247978)
Provide a detailed description of the following dataset: KinFaceW
Boxy
A large vehicle detection dataset with almost two million annotated vehicles for training and evaluating object detection methods for self-driving cars on freeways. The dataset consists of:

* 200,000 images
* 1,990,000 annotated vehicles
* 5 Megapixel resolution
* Sunshine, rain, dusk, night
* Clear freeways, heavy traffic, traffic jams

Paper: [Boxy Vehicle Detection in Large Images](https://doi.org/10.1109/ICCVW.2019.00112)
Provide a detailed description of the following dataset: Boxy
CASR
CASR is a dataset for cyclist arm signal recognition in videos. It contains 219 annotated arm signal actions in videos of approximately 10 seconds each, with one or two actions per video.
Provide a detailed description of the following dataset: CASR
Driving Event Camera Dataset
This dataset consists of a number of sequences that were recorded with a VGA (640x480) event camera (Samsung DVS Gen3) and a conventional RGB camera (Huawei P20 Pro) placed on the windshield of a car driving through Zurich.
Provide a detailed description of the following dataset: Driving Event Camera Dataset
FRIDA
FRIDA and FRIDA2 are databases of synthetic images that can easily be used to evaluate, in a systematic way, the performance of visibility and contrast restoration algorithms. FRIDA comprises 90 synthetic images of 18 urban road scenes. FRIDA2 comprises 330 synthetic images of 66 diverse road scenes. The viewpoint is close to that of the vehicle's driver. Each fog-free image is associated with 4 foggy images and a depth map. A different kind of fog is added to each of the 4 associated images: uniform fog, heterogeneous fog, cloudy fog, and cloudy heterogeneous fog. These scenes can be used to test visibility and contrast restoration algorithms intensively and in an objective way, as well as "shape from fog" algorithms. The calibration parameters of the camera are given. Paper: [Improved Visibility of Road Scene Images under Heterogeneous Fog](https://doi.org/10.1109/IVS.2010.5548128)
Provide a detailed description of the following dataset: FRIDA
Ford Campus Vision and Lidar Data Set
Ford Campus Vision and Lidar Data Set is a dataset collected by an autonomous ground vehicle testbed based on a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and a consumer (Xsens MTI-G) Inertial Measurement Unit (IMU), a Velodyne 3D lidar scanner, two push-broom forward-looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. The dataset consists of the time-registered data from these sensors mounted on the vehicle, collected while driving around the Ford Research campus and downtown Dearborn, Michigan during November-December 2009. The vehicle path trajectory in these datasets contains several large- and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and SLAM (Simultaneous Localization and Mapping) algorithms. Paper: [Ford Campus vision and lidar data set](https://doi.org/10.1177%2F0278364911400640)
Provide a detailed description of the following dataset: Ford Campus Vision and Lidar Data Set
JAAD
JAAD is a dataset for studying joint attention in the context of autonomous driving. The focus is on pedestrian and driver behaviors at the point of crossing and the factors that influence them. To this end, the JAAD dataset provides a richly annotated collection of 346 short video clips (5-10 sec long) extracted from over 240 hours of driving footage. These videos, filmed in several locations in North America and Eastern Europe, represent scenes typical of everyday urban driving in various weather conditions. Bounding boxes with occlusion tags are provided for all pedestrians, making this dataset suitable for pedestrian detection. Behavior annotations specify behaviors for pedestrians that interact with or require the attention of the driver. For each video there are several tags (weather, location, etc.) and timestamped behavior labels from a fixed list (e.g. stopped, walking, looking, etc.). In addition, a list of demographic attributes is provided for each pedestrian (e.g. age, gender, direction of motion, etc.), as well as a list of visible traffic scene elements (e.g. stop sign, traffic signal, etc.) for each frame. Paper: [Are They Going to Cross? A Benchmark Dataset and Baseline for Pedestrian Crosswalk Behavior](https://doi.org/10.1109/ICCVW.2017.33)
Provide a detailed description of the following dataset: JAAD
LISA Vehicle Detection
This is a dataset for vehicle detection. It consists of:

* Three color video sequences captured at different times of the day and illumination settings: morning, evening, sunny, cloudy, etc.
* Different driving environments: highway and urban.
* Varying traffic conditions: light to dense traffic.

Paper: [A General Active-Learning Framework for On-Road Vehicle Recognition and Tracking](https://doi.org/10.1109/TITS.2010.2040177)
Provide a detailed description of the following dataset: LISA Vehicle Detection
LLAMAS
The unsupervised Labeled Lane MArkerS dataset (LLAMAS) is a dataset for lane detection and segmentation. It contains over 100,000 annotated images, with lane marker annotations reaching over 100 meters, at a resolution of 1276 x 717 pixels. The Unsupervised Llamas dataset was annotated by creating high-definition maps for automated driving, including lane markers based on Lidar. Paper: [Unsupervised Labeled Lane Markers Using Maps](https://doi.org/10.1109/ICCVW.2019.00111)
Provide a detailed description of the following dataset: LLAMAS
VIsual PERception (VIPER)
VIPER is a benchmark suite for visual perception. The benchmark is based on more than 250K high-resolution video frames, all annotated with ground-truth data for both low-level and high-level vision tasks, including optical flow, semantic instance segmentation, object detection and tracking, object-level 3D scene layout, and visual odometry. Ground-truth data for all tasks is available for every frame. The data was collected while driving, riding, and walking a total of 184 kilometers in diverse ambient conditions in a realistic virtual world.
Provide a detailed description of the following dataset: VIsual PERception (VIPER)
REC-COCO
Relations in Captions (REC-COCO) is a new dataset that contains associations between caption tokens and bounding boxes in images. REC-COCO is based on the MS-COCO and V-COCO datasets. For each image in V-COCO, we collect their corresponding captions from MS-COCO and automatically align the concept triplet in V-COCO to the tokens in the caption. This requires finding the token for concepts such as PERSON. As a result, REC-COCO contains the captions and the tokens which correspond to each subject and object, as well as the bounding boxes for the subject and object.
Provide a detailed description of the following dataset: REC-COCO
TRIPOD
TRIPOD contains screenplays and plot synopses with turning point (TP) annotations for 99 movies. Each movie contains:

1. The Wikipedia plot synopsis (an extended summary of 35 sentences on average) with sentence-level TP annotations.
2. The screenplay (all dialogue and description parts of the movie) segmented into scenes (selected from the Scriptbase dataset).
3. Gold scene-level TP labels for the screenplays of the test set.
4. The cast information (according to IMDb).

TRIPOD is extended in [Movie Summarization via Sparse Graph Construction](https://arxiv.org/pdf/2012.07536.pdf) with more movies in the test set (122 in total) and multimodal features extracted from the full-length movie videos. The multimodal version can be found here: https://datashare.ed.ac.uk/handle/10283/3819
Provide a detailed description of the following dataset: TRIPOD
CSI Screenplay Summarization Corpus
The dataset contains gold-standard summary labels for 39 "CSI: Crime Scene Investigation" episodes from seasons 1-5. Each episode contains the full-length screenplay and human annotations for its summary. The annotations include:

1. scene-level binary labels denoting whether the scene belongs to the summary of the episode
2. aspect-based labels for the scenes that belong to the summary, i.e., which aspect of the summary the scene addresses (e.g., information about the victim, the crime scene, the perpetrator, etc.)
3. sentence-level binary labels denoting the sentences of the screenplay that belong to the summary, for 10 episodes of the dataset
Provide a detailed description of the following dataset: CSI Screenplay Summarization Corpus
FPV-O
FPV-O is a multi-subject first-person vision dataset of office activities. Office activities include person-to-person interactions, such as chatting and handshaking, person-to-object interactions, such as using a computer or a whiteboard, as well as generic activities such as walking. The videos in the dataset present a number of challenges that, in addition to intra-class differences and inter-class similarities, include frames with illumination changes, motion blur, and lack of texture. Paper: [A First-Person Vision Dataset of Office Activities](https://doi.org/10.1007/978-3-030-20984-1_3)
Provide a detailed description of the following dataset: FPV-O
MERL Shopping
MERL Shopping is a dataset for training and testing action detection algorithms. The MERL Shopping Dataset consists of 106 videos, each of which is a sequence about 2 minutes long. The videos are from a fixed overhead camera looking down at people shopping in a grocery store setting. Each video contains several instances of the following 5 actions: "Reach To Shelf" (reach hand into shelf), "Retract From Shelf " (retract hand from shelf), "Hand In Shelf" (extended period with hand in the shelf), "Inspect Product" (inspect product while holding it in hand), and "Inspect Shelf" (look at shelf while not touching or reaching for the shelf).
Provide a detailed description of the following dataset: MERL Shopping
A2D
A2D (Actor-Action Dataset) is a dataset for simultaneously inferring actors and actions in videos. A2D has seven actor classes (adult, baby, ball, bird, car, cat, and dog) and eight action classes (climb, crawl, eat, fly, jump, roll, run, and walk), not including the no-action class, which is also considered. A2D has 3,782 videos with at least 99 instances per valid actor-action tuple, and videos are labeled with both pixel-level actors and actions for sampled frames. The A2D dataset serves as a large-scale testbed for various vision problems: video-level single- and multiple-label actor-action recognition, instance-level object segmentation/co-segmentation, as well as pixel-level actor-action semantic segmentation, to name a few.
Provide a detailed description of the following dataset: A2D
ASD
The Annotated Semantic Dataset is composed of 11 videos, divided into 3 activity categories (Biking, Driving and Walking) according to their amount of semantic information. The classes are: 0p, which represents videos with approximately no semantic information; 25p, for videos containing relevant semantic information in ~25% of their frames; and likewise for the classes 50p and 75p. The videos were recorded using a GoPro Hero 3 camera mounted on a helmet for the Biking and Walking videos and attached to a head strap for the Driving videos.
Provide a detailed description of the following dataset: ASD
l2d
This dataset is composed of paired videos of people dancing to 3 different music styles: Ballet, Michael Jackson and Salsa. It contains multimodal data (visual data, temporal graphs and audio) carefully selected from publicly available videos of dancers performing movements representative of the music style, along with audio data from the respective styles. This dataset was used to train and evaluate methodologies for motion generation from audio. The samples are split into training and evaluation sets. The training set has 2,352 samples of movement sequences of length 64, of which 525 are from the Ballet style, 966 from Michael Jackson (MJ) and 861 from Salsa. The evaluation set has 471 samples: 134 from Ballet, 102 from MJ and 235 from Salsa.
Provide a detailed description of the following dataset: l2d
OccludedPASCAL3D+
**OccludedPASCAL3D+** is a dataset designed to evaluate robustness to occlusion for a number of computer vision tasks, such as object detection, keypoint detection and pose estimation. In the OccludedPASCAL3D+ dataset, partial occlusion is simulated by superimposing objects cropped from the MS-COCO dataset on top of objects from the PASCAL3D+ dataset. Only the ImageNet subset of PASCAL3D+ is used, which has 10,812 testing images.
Provide a detailed description of the following dataset: OccludedPASCAL3D+
THEODORE
Recent work on synthetic indoor datasets from perspective views has shown significant improvements in object detection results with Convolutional Neural Networks (CNNs). THEODORE is a novel, large-scale indoor dataset containing 100,000 high-resolution, diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Besides capturing fisheye images from the virtual environments, we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state-of-the-art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization, we reach an AP of up to 0.84 for the class person on the High-Definition Analytics dataset.
Provide a detailed description of the following dataset: THEODORE
MHRI dataset
The dataset includes recordings from 10 different users teaching the robot different common kitchen objects. It consists of synchronized recordings from three cameras and a microphone mounted on the robot:

* An RGB-D camera covers the user's manipulation and interaction with the robot
* An RGB-D camera mounted on the top of the robot provides a top view of the whole scenario
* An HD RGB camera points to the user's head to capture face and expressions
Provide a detailed description of the following dataset: MHRI dataset
highD Dataset
The highD dataset is a new dataset of naturalistic vehicle trajectories recorded on German highways. Using a drone, typical limitations of established traffic data collection methods, such as occlusions, are overcome by the aerial perspective. Traffic was recorded at six different locations and includes more than 110,500 vehicles. Each vehicle's trajectory, including vehicle type, size and manoeuvres, is automatically extracted. Using state-of-the-art computer vision algorithms, the positioning error is typically less than ten centimeters. Although the dataset was created for the safety validation of highly automated vehicles, it is also suitable for many other tasks such as the analysis of traffic patterns or the parameterization of driver models.
Provide a detailed description of the following dataset: highD Dataset
inD Dataset
The **inD** dataset is a new dataset of naturalistic vehicle trajectories recorded at German intersections. Using a drone, typical limitations of established traffic data collection methods like occlusions are overcome. Traffic was recorded at four different locations. The trajectory for each road user and its type is extracted. Using state-of-the-art computer vision algorithms, the positional error is typically less than 10 centimetres. The dataset is applicable on many tasks such as road user prediction, driver modeling, scenario-based safety validation of automated driving systems or data-driven development of HAD system components.
Provide a detailed description of the following dataset: inD Dataset
rounD Dataset
The rounD dataset is a new dataset of naturalistic road user trajectories recorded at German roundabouts. Using a drone, typical limitations of established traffic data collection methods like occlusions are overcome. Traffic was recorded at three different locations. The trajectory for each road user and its type is extracted. Using state-of-the-art computer vision algorithms, the positional error is typically less than 10 centimetres. The dataset is applicable on many tasks such as road user prediction, driver modeling, scenario-based safety validation of automated driving systems or data-driven development of HAD system components.
Provide a detailed description of the following dataset: rounD Dataset
Localized Narratives
We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning.
Provide a detailed description of the following dataset: Localized Narratives
CE4
Given the difficulty of handling planetary data, we provide downloadable files in PNG format from the Chang'E-3 and Chang'E-4 missions, together with a set of scripts to perform the conversion given a different PDS4 dataset. This set of images constitutes one of the first available datasets for tackling computer vision and learning problems in the context of space exploration.
Provide a detailed description of the following dataset: CE4
MICC-SRI
The dataset contains 11,913 frame pairs of urban driving footage with and without moving objects, synthetically generated with the CARLA simulator. All frames are available both as RGB images and as semantic segmentations. The RGB images are non-photorealistic, being rendered by a game engine, while the semantic segmentations are similar to real-world segmentations. The dataset is designed to provide supervision for semantic road inpainting tasks.
Provide a detailed description of the following dataset: MICC-SRI
KITTI-trajectory-prediction
KITTI is a well-established dataset in the computer vision community. It has often been used for trajectory prediction despite not having a well-defined split, generating non-comparable baselines in different works. This dataset aims to bridge this gap and proposes a well-defined split of the KITTI data. Samples are collected as 6-second chunks (2 seconds for the past and 4 for the future) in a sliding-window fashion from all trajectories in the dataset, including the ego-vehicle, as sketched below. There are a total of 8,613 top-view trajectories for training and 2,907 for testing. Since top-view maps are not provided by KITTI, semantic labels of static categories obtained with DeepLab-v3+ from all frames are projected into a common top-view map using the Velodyne 3D point cloud and IMU. The resulting maps have a spatial resolution of 0.5 meters and are provided along with the trajectories.
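A minimal sketch of the sliding-window chunking described above, assuming a 10 Hz sampling rate and a synthetic trajectory (both assumptions, for illustration only):

```python
import numpy as np

def make_chunks(trajectory, fps=10, past_s=2, future_s=4, stride=1):
    """Slice one trajectory into (past, future) pairs with a sliding window."""
    past_len, future_len = past_s * fps, future_s * fps
    window = past_len + future_len                        # 6-second chunk
    samples = []
    for start in range(0, len(trajectory) - window + 1, stride):
        past = trajectory[start:start + past_len]
        future = trajectory[start + past_len:start + window]
        samples.append((past, future))
    return samples

# Synthetic straight-line top-view (x, y) trajectory
traj = np.stack([np.linspace(0, 100, 200), np.zeros(200)], axis=1)
chunks = make_chunks(traj)
print(len(chunks), chunks[0][0].shape, chunks[0][1].shape)
```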
Provide a detailed description of the following dataset: KITTI-trajectory-prediction
EmoContext
EmoContext consists of three-turn English Tweets. The emotion labels include happiness, sadness, anger and other.
Provide a detailed description of the following dataset: EmoContext
Glint360K
Glint360K is one of the largest and cleanest face recognition datasets, containing **`17,091,657`** images of **`360,232`** individuals. Baseline models trained on Glint360K can easily achieve state-of-the-art performance.
Provide a detailed description of the following dataset: Glint360K
IndicCorp
IndicCorp is a large monolingual corpus with around 9 billion tokens covering 12 of the major Indian languages. It has been developed by discovering and scraping thousands of web sources - primarily news, magazines and books - over a duration of several months.

**Languages covered**: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu

**Corpus Format**: The corpus is a single large text file containing one sentence per line (see the reading sketch after the table below). The publicly released version is randomly shuffled, untokenized and deduplicated.

**Downloads**

| Language | \# News Articles* | Sentences | Tokens | Link |
| -------- | ----------------- | --------- | ------ | ---- |
| as | 0.60M | 1.39M | 32.6M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/as.tar.xz) |
| bn | 3.83M | 39.9M | 836M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/bn.tar.xz) |
| en | 3.49M | 54.3M | 1.22B | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/en.tar.xz) |
| gu | 2.63M | 41.1M | 719M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/gu.tar.xz) |
| hi | 4.95M | 63.1M | 1.86B | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/hi.tar.xz) |
| kn | 3.76M | 53.3M | 713M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/bn.tar.xz) |
| ml | 4.75M | 50.2M | 721M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/ml.tar.xz) |
| mr | 2.31M | 34.0M | 551M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/mr.tar.xz) |
| or | 0.69M | 6.94M | 107M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/or.tar.xz) |
| pa | 2.64M | 29.2M | 773M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/pa.tar.xz) |
| ta | 4.41M | 31.5M | 582M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/ta.tar.xz) |
| te | 3.98M | 47.9M | 674M | [link](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/indiccorp/te.tar.xz) |

\* Excluding articles obtained from the OSCAR corpus
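A minimal sketch for iterating over an extracted corpus file, given the one-sentence-per-line format above; the file path is a hypothetical example, not the archive's actual layout:

```python
from itertools import islice

def iter_sentences(path, limit=None):
    """Yield one non-empty sentence per line, per the corpus format above."""
    with open(path, encoding="utf-8") as f:
        for line in islice(f, limit):
            sentence = line.strip()
            if sentence:
                yield sentence

# Hypothetical path to an extracted monolingual file
for s in iter_sentences("indiccorp/hi/hi.txt", limit=5):
    print(s)
```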
Provide a detailed description of the following dataset: IndicCorp
RuFa
RuFa (Ruqaa-Farsi) dataset contains images of text written in one of two Arabic fonts: Ruqaa and Nastaliq (Farsi). The dataset contains 40,000 synthesized images and 516 real ones, 40,516 in total. Images are in RGB JPG format at 100×100 px. Text in the images has a varying number of words, position, size, and opacity. Real images were extracted from:

1. “The Rules of Arabic Calligraphy” by Hashem Al-Khatat - 1986.
2. “Ottman Fonts” by Muhammad Amin Osmanli Ketbkhana.

The synthesis process is described in detail [in this post](https://mhmoodlan.github.io/blog/arabic-font-classification).

Dataset folder structure:

**/rufa (40,516 images)**
* /real (516 images)
  * /ruqaa (260 images)
  * /farsi (256 images)
* /synth (40,000 images)
  * /ruqaa (20,000 images)
  * /farsi (20,000 images)
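A small sketch for indexing this folder layout, assuming the dataset has been extracted to a local `rufa/` directory and that the image files use a lowercase `.jpg` extension (both assumptions):

```python
from pathlib import Path

def index_rufa(root="rufa"):
    """Collect (image_path, source, font) tuples from rufa/{real,synth}/{ruqaa,farsi}/*.jpg."""
    samples = []
    for source in ("real", "synth"):
        for font in ("ruqaa", "farsi"):
            for img in sorted(Path(root, source, font).glob("*.jpg")):
                samples.append((img, source, font))
    return samples

samples = index_rufa()
print(len(samples), "images indexed")
```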
Provide a detailed description of the following dataset: RuFa
MERL-RAV
The MERL-RAV (MERL Reannotation of AFLW with Visibility) Dataset contains over 19,000 face images in a full range of head poses. Each face is manually labeled with the ground-truth locations of 68 landmarks, with the additional information of whether each landmark is unoccluded, self-occluded (due to extreme head poses), or externally occluded. The images were annotated by professional labelers, supervised by researchers at Mitsubishi Electric Research Laboratories (MERL).
Provide a detailed description of the following dataset: MERL-RAV
News Interactions on Globo.com
### Context

This large dataset of user interaction logs (page views) from a news portal was kindly provided by [Globo.com][1], the most popular news portal in Brazil, for reproducibility of the experiments with CHAMELEON, a meta-architecture for contextual hybrid session-based news recommender systems. The source code was made available at [GitHub][2].

The **first version (v1)** ([download][13]) of this dataset was released for reproducibility of the experiments presented in the following paper:

> Gabriel de Souza Pereira Moreira, Felipe Ferreira, and Adilson Marques da Cunha. 2018. [News Session-Based Recommendations using Deep Neural Networks][3]. In [3rd Workshop on Deep Learning for Recommender Systems (DLRS 2018)][4], October 6, 2018, Vancouver, BC, Canada. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3270323.3270328

A **second version (v2)** ([download][14]) of this dataset was made available for reproducibility of the experiments presented in the following paper. Compared to v1, the only differences are:

* Four additional user contextual attributes were included (click_os, click_country, click_region, click_referrer_type)
* Repeated clicks (clicks on the same articles) within sessions were removed. Sessions with fewer than two clicks (the minimum for the next-click prediction task) were removed

> Gabriel de Souza Pereira Moreira, Dietmar Jannach, and Adilson Marques da Cunha. 2019. [Contextual Hybrid Session-based News Recommendation with Recurrent Neural Networks][15]. arXiv preprint arXiv:1904.10367, 49 pages

You are not allowed to use this dataset for commercial purposes, only for academic objectives (such as education or research). **If used for research, please cite the above papers.**

### Content

The dataset contains a sample of user interactions (page views) in the [G1 news portal][5] from Oct. 1 to 16, 2017, including about 3 million clicks, distributed in more than 1 million sessions from 314,000 users who read more than 46,000 different news articles during that period. It is composed of three files/folders (see the loading sketch at the end of this description):

- **clicks.zip** - Folder with CSV files (one per hour), containing user session interactions in the news portal.
- **articles_metadata.csv** - CSV file with metadata information about all (364,047) published articles.
- **articles_embeddings.pickle** - Pickle (Python 3) of a NumPy matrix containing the Article Content Embeddings (250-dimensional vectors), trained upon articles' text and metadata by CHAMELEON's ACR module (see [paper][6] for details) for the 364,047 published articles.

P.s. The full text of news articles could not be provided due to license restrictions, but those embeddings can be used by neural networks to represent their content. See this [paper][7] for a t-SNE visualization of these embeddings, colored by category.

### Acknowledgements

I would like to acknowledge [Globo.com][8] for providing this dataset for this research and for the academic community, in particular [Felipe Ferreira][9] for preparing the original dataset at Globo.com.

### Inspiration

This dataset might be very useful if you want to implement and evaluate hybrid and contextual news recommender systems, using both user interactions and article content and metadata to provide recommendations. You might also use it for analytics, trying to understand how user interactions in a news portal are distributed by user, by article, or by category, for example.

If you are interested in a dataset of user interactions on articles with the full text provided, to experiment with different text representations using NLP, you might want to take a look at this smaller [dataset][12].

[1]: https://www.globo.com/
[2]: https://github.com/gabrielspmoreira/chameleon_recsys
[3]: https://arxiv.org/abs/1808.00076
[4]: https://recsys.acm.org/recsys18/dlrs/
[5]: http://g1.com.br/
[6]: https://arxiv.org/abs/1808.00076
[7]: https://arxiv.org/abs/1808.00076
[8]: https://www.globo.com/
[9]: https://www.linkedin.com/in/feliferr/
[12]: https://www.kaggle.com/gspmoreira/articles-sharing-reading-from-cit-deskdrop
[13]: https://www.kaggle.com/gspmoreira/news-portal-user-interactions-by-globocom/downloads/news-portal-user-interactions-by-globocom.zip/1
[14]: https://www.kaggle.com/gspmoreira/news-portal-user-interactions-by-globocom/downloads/news-portal-user-interactions-by-globocom.zip/2
[15]: https://arxiv.org/abs/1904.10367
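A hedged loading sketch for the three parts listed above, assuming the files have been downloaded from Kaggle and `clicks.zip` has been unpacked into a local `clicks/` folder; no column names are assumed:

```python
import glob
import pickle
import pandas as pd

# One CSV of user-session interactions per hour (from the unpacked clicks.zip)
clicks = pd.concat(
    (pd.read_csv(p) for p in sorted(glob.glob("clicks/*.csv"))),
    ignore_index=True,
)

# Metadata for the 364,047 published articles
articles = pd.read_csv("articles_metadata.csv")

# 250-dimensional article content embeddings (NumPy matrix), one row per article
with open("articles_embeddings.pickle", "rb") as f:
    embeddings = pickle.load(f)

print(clicks.shape, articles.shape, embeddings.shape)
```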
Provide a detailed description of the following dataset: News Interactions on Globo.com
Synbols
Synbols is a dataset generator designed for probing the behavior of learning algorithms. By defining the distribution over latent factors one can craft a dataset specifically tailored to answer specific questions about a given algorithm. Default versions of these datasets are also materialized and can serve as benchmarks.
Provide a detailed description of the following dataset: Synbols
C&Z
C&Z is one of the first datasets (if not the first) to highlight the importance of bias and diversity in the community, and it started a revolution afterwards. It was introduced in 2014 as an integral part of a Master of Science thesis [1,2] at Carnegie Mellon and City University of Hong Kong. It was later expanded by adding synthetic images generated by a GAN architecture at ETH Zürich (in HDCGAN by Curtó et al. 2017), making it not only a pioneer in discussing the importance of balanced datasets for learning and vision, but also the first GAN-augmented dataset of faces. The original description goes as follows: a bias-free dataset, containing human faces from different ethnic groups in a wide variety of illumination conditions and image resolutions. C&Z is enhanced with HDCGAN synthetic images, thus being the first GAN-augmented dataset of faces.

Dataset: [https://github.com/curto2/c](https://github.com/curto2/c)

Supplement (with scripts to handle the labels): [https://github.com/curto2/graphics](https://github.com/curto2/graphics)

[1] [https://www.curto.hk/c/decurto.pdf](https://www.curto.hk/c/decurto.pdf)

[2] [https://www.zarza.hk/z/dezarza.pdf](https://www.zarza.hk/z/dezarza.pdf)
Provide a detailed description of the following dataset: C&Z
GEM
Generation, Evaluation, and Metrics (GEM) is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics. GEM aims to:

- measure NLG progress across 13 datasets spanning many NLG tasks and languages.
- provide an in-depth analysis of data and models presented via data statements and challenge sets.
- develop standards for evaluation of generated text using both automated and human metrics.

It is our goal to regularly update GEM and to encourage more inclusive practices in dataset development by extending existing data or developing datasets for additional languages.
Provide a detailed description of the following dataset: GEM
ALFWorld
ALFWorld contains interactive TextWorld environments (Côté et al.) that parallel embodied worlds in the ALFRED dataset (Shridhar et al.). The aligned environments allow agents to reason and learn high-level policies in an abstract space before solving embodied tasks through low-level actuation.
Provide a detailed description of the following dataset: ALFWorld
HQ-WMCA
The High-Quality Wide Multi-Channel Attack (HQ-WMCA) database consists of 2,904 short multi-modal video recordings of both bona-fide and presentation attacks. There are 555 bona-fide presentations from 51 participants, and the remaining 2,349 are presentation attacks. The data is recorded from several channels including color, depth, thermal, infrared (spectra), and short-wave infrared (spectra).
Provide a detailed description of the following dataset: HQ-WMCA
The Best Sarcasm Annotated Dataset in Spanish
### Content

This dataset contains all utterances of two episodes of South Park (Latin American voices) and two episodes of Archer (Spanish voices). The order of the utterances is shuffled. Each utterance has been annotated based on whether it is sarcastic or not. Sarcastic expressions also contain further annotation based on different theories of sarcasm. This corpus is unique because it is annotated primarily from audiovisual media. It also contains a lot of negative examples of sentences that are meant to be humorous or outrageous, but not sarcastic. This data thus provides a closer-to-real-life benchmark for any sarcasm detection system.

### Cite

I annotated this data for my MA thesis, so please cite it if you use this data.

Hämäläinen, Mika (2016). [Reconocimiento automático del sarcasmo: ¡Esto va a funcionar bien!](https://www.researchgate.net/publication/339182029_Reconocimiento_automatico_del_sarcasmo_Esto_va_a_funcionar_bien). Helsinki: University of Helsinki, Department of Modern Languages.

### Inspiration

- Sarcasm detection
- Prediction of the theoretical categories of sarcasm
Provide a detailed description of the following dataset: The Best Sarcasm Annotated Dataset in Spanish
MIRACL-VC1
MIRACL-VC1 is a lip-reading dataset including both depth and color images. It can be used for diverse research fields like visual speech recognition, face detection, and biometrics. Fifteen speakers (five men and ten women) were positioned in the frustum of an MS Kinect sensor and uttered a set of ten words and ten phrases, ten times each. Each instance of the dataset consists of a synchronized sequence of color and depth images (both of 640x480 pixels). The MIRACL-VC1 dataset contains a total of 3,000 instances.
Provide a detailed description of the following dataset: MIRACL-VC1
XD-Violence
XD-Violence is a large-scale audio-visual dataset for violence detection in videos.
Provide a detailed description of the following dataset: XD-Violence
PatentMatch
We address the computer-assisted search for prior art by creating a training dataset for supervised machine learning called PatentMatch. It contains pairs of claims from patent applications and text passages from cited patent documents with different degrees of semantic correspondence. Each pair has been labeled by technically skilled patent examiners from the European Patent Office. Accordingly, the label indicates the degree of semantic correspondence (matching), i.e., whether the text passage is prejudicial to the novelty of the claimed invention or not.
Provide a detailed description of the following dataset: PatentMatch
A Dataset of Journalists' Interactions with Their Readership
We present a dataset of dialogs in which journalists of The Guardian replied to reader comments, and we identify the reasons why. Based on this data, we formulate the novel task of recommending to journalists reader comments that are worth reading or replying to, i.e., ranking comments in such a way that the top comments are most likely to require the journalists' reaction.
Provide a detailed description of the following dataset: A Dataset of Journalists' Interactions with Their Readership
Top Comment or Flop Comment?
This dataset comprises four files of IDs of either strongly or weakly engaging online news comments (please see the paper for details):

"Top comments" are
1) the top 10% of comments in the politics section of The Guardian with the largest relative number of *replies* received (3111 samples), and
2) the top 10% of comments in the politics section with the largest relative number of *upvotes* received (11081 samples).

"Flop comments" are
1) the flop 10% of comments in the politics section of The Guardian with the smallest relative number of *replies* received (3111 samples), and
2) the flop 10% of comments in the politics section with the smallest relative number of *upvotes* received (11081 samples).
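An illustrative sketch of this top/flop 10% construction, using a toy table with a hypothetical `replies` column (the released files contain only comment IDs):

```python
import pandas as pd

# Toy comments table; "replies" stands in for the relative reply count
comments = pd.DataFrame({
    "comment_id": range(10),
    "replies": [0, 1, 1, 2, 3, 5, 8, 13, 21, 34],
})

# 90th / 10th percentile thresholds define the "top" and "flop" groups
hi, lo = comments["replies"].quantile(0.9), comments["replies"].quantile(0.1)
top_ids = comments.loc[comments["replies"] >= hi, "comment_id"]
flop_ids = comments.loc[comments["replies"] <= lo, "comment_id"]
print(sorted(top_ids), sorted(flop_ids))
```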
Provide a detailed description of the following dataset: Top Comment or Flop Comment?
HeartSeg
The medaka (Oryzias latipes) and the zebrafish (Danio rerio) are used as model organisms for a variety of subjects in biomedical research. The presented work aims to study the potential of automated ventricular dimension estimation through heart segmentation in medaka; for more details, see the paper and the supplementary materials.

Paper: https://www.liebertpub.com/doi/10.1089/zeb.2019.1754

Demonstration of the algorithm and framework on the test set data: https://youtu.be/i5bX_XbwXq0

The raw data was provided by Dr. Jakob Gierten, affiliated with the Department of Pediatric Cardiology, University Hospital Heidelberg, Im Neuenheimer Feld 430, 69120 Heidelberg, Germany, and the Centre for Organismal Studies, Heidelberg University, Im Neuenheimer Feld 230, 69120 Heidelberg, Germany.

**Contributing**: We hope this work sparks additional research in this direction, either by contributing to this framework, deploying the framework, or reusing the annotated ground truth data. In any case, feel free to reach out and make sure to reference this work:

Schutera, M., Just, S., Gierten, J., Mikut, R., Reischl, M., & Pylatiuk, C. (2019). Machine learning methods for automated quantification of ventricular dimensions. Zebrafish, 16(6), 542-545.

Contact: mark.schutera@kit.edu and pylatiuk@kit.edu
Provide a detailed description of the following dataset: HeartSeg
DNS Challenge
The DNS Challenge at INTERSPEECH 2020 was intended to promote collaborative research in single-channel speech enhancement aimed at maximizing the perceptual quality and intelligibility of the enhanced speech. The challenge evaluated speech quality using the online subjective evaluation framework ITU-T P.808. The challenge provides large datasets for training noise suppressors.
Provide a detailed description of the following dataset: DNS Challenge
Interspeech 2021 Deep Noise Suppression Challenge
The Deep Noise Suppression (DNS) challenge is designed to foster innovation in the area of noise suppression to achieve superior perceptual speech quality. This challenge has two tracks:

**Track 1: Real-Time Denoising track for wide band scenario**

The noise suppressor must take less than the stride time Ts (in ms) to process a frame of size T (in ms) on an Intel Core i5 quad-core machine clocked at 2.4 GHz or an equivalent processor. For example, Ts = T/2 for 50% overlap between frames. The total algorithmic latency allowed, including the frame size T, stride time Ts, and any look-ahead, must be less than or equal to 40 ms. For example, for a real-time system that receives 20 ms audio chunks, if you use a frame length of 20 ms with a stride of 10 ms, resulting in an algorithmic latency of 30 ms, then you satisfy the latency requirements. If you use a frame of size 32 ms with a stride of 16 ms, resulting in an algorithmic latency of 48 ms, then your method does not satisfy the latency requirements, as the total algorithmic latency exceeds 40 ms. If your frame size plus stride, T1 = T + Ts, is less than 40 ms, then you can use up to (40 - T1) ms of future information.

**Track 2: Real-Time Denoising track for full band scenario**

Satisfy Track 1 requirements but at 48 kHz.
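To make the latency arithmetic above concrete, here is a minimal Python sketch (not part of the challenge toolkit) that reproduces the cases discussed in the track description:

```python
# Minimal sketch (not part of the challenge toolkit): checks whether a chosen
# frame size, stride, and look-ahead fit the 40 ms algorithmic-latency budget.
BUDGET_MS = 40.0

def algorithmic_latency_ms(frame_ms, stride_ms, lookahead_ms=0.0):
    # Total algorithmic latency = frame size T + stride Ts + any look-ahead.
    return frame_ms + stride_ms + lookahead_ms

def satisfies_budget(frame_ms, stride_ms, lookahead_ms=0.0):
    return algorithmic_latency_ms(frame_ms, stride_ms, lookahead_ms) <= BUDGET_MS

print(satisfies_budget(20, 10))      # True: 30 ms, as in the first example
print(satisfies_budget(32, 16))      # False: 48 ms, as in the second example
print(satisfies_budget(20, 10, 10))  # True: 40 ms, using the allowed (40 - T1) ms look-ahead
```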
Provide a detailed description of the following dataset: Interspeech 2021 Deep Noise Suppression Challenge
TRN
The Toulouse Road Network (TRN) dataset describes patches of road maps from the city of Toulouse, represented both as spatial graphs G = (A, X) and as grayscale segmentation images. The dataset contains 111,034 data points (map tiles), of which:
- 80,357 are in the training set (around 72.4%),
- 11,679 are in the validation set (around 10.5%),
- 18,998 are in the test set (around 17.1%).

Each tile represents a square region spanning 0.001 degrees of latitude and longitude on the map, which corresponds to a square of side around 110 meters. The semantic segmentation of each patch is represented as a 64 × 64 grayscale image. The dataset is generated from publicly available OpenStreetMap data. More details on the dataset characteristics and generation method are available in our [blogpost](https://davide-belli.github.io/toulouse-road-network.html).
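For readers who want to picture the G = (A, X) representation as code, here is a hedged sketch. The array layout, the assumed coordinate normalization, and the pairing with a 64 × 64 segmentation image are illustrative assumptions, not the dataset's exact on-disk format.

```python
import numpy as np

# Hedged sketch of the G = (A, X) representation described above; field names,
# normalization, and shapes are assumptions for illustration only.
num_nodes = 4
X = np.array([[0.1, 0.2],   # node coordinates within the tile (assumed normalized)
              [0.5, 0.3],
              [0.7, 0.8],
              [0.2, 0.9]], dtype=np.float32)

A = np.zeros((num_nodes, num_nodes), dtype=np.uint8)  # adjacency matrix of road segments
for i, j in [(0, 1), (1, 2), (1, 3)]:                 # example road segments
    A[i, j] = A[j, i] = 1

segmentation = np.zeros((64, 64), dtype=np.uint8)     # paired 64 x 64 grayscale segmentation
print(A.shape, X.shape, segmentation.shape)
```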
Provide a detailed description of the following dataset: TRN
WEB-FORUM-52
The WEB-FORUM-52 gold standard comprises (i) 13 web forums from the health domain, (ii) 15 forums obtained from a Wikipedia list of popular forums (https://en.wikipedia.org/wiki/List_of_Internet_forums), (iii) 13 forums mentioned on a list of popular German Web forums (https://www.beliebte-foren.de), (iv) nine forums obtained from WPressBlog (https://www.wpressblog.com/free-forum-posting-sites-list/) and (v) two additional forums. For most forums two web pages (from different threads) were used and stored together with gold standard annotations that have been manually created by domain experts and describe the post text, post date, post user and direct URL to the post.
Provide a detailed description of the following dataset: WEB-FORUM-52
KorQuAD
KorQuAD is a large-scale question-and-answer dataset constructed for Korean machine reading comprehension. The authors investigate the dataset to understand the distribution of answers and the types of reasoning required to answer the questions. The dataset benchmarks the data-generating process of SQuAD to meet the same standard.
Provide a detailed description of the following dataset: KorQuAD
MOBIO
The MOBIO database consists of bi-modal (audio and video) data taken from 152 people. The database has a female-male ratio of nearly 1:2 (100 males and 52 females) and was collected from August 2008 until July 2010 at six different sites in five different countries. This led to a diverse bi-modal database with both native and non-native English speakers. In total, 12 sessions were captured for each client: 6 sessions for Phase I and 6 sessions for Phase II. The Phase I data consists of 21 questions, with question types ranging over Short Response Questions, Short Response Free Speech, Set Speech, and Free Speech. The Phase II data consists of 11 questions, with question types ranging over Short Response Questions, Set Speech, and Free Speech. A more detailed description of the questions asked of the clients is provided below. The database was recorded using two mobile devices: a mobile phone and a laptop computer. The mobile phone used to capture the database was a NOKIA N93i, while the laptop computer was a standard 2008 MacBook. The laptop was only used to capture part of the first session; this first session consists of data captured on both the laptop and the mobile phone.
Provide a detailed description of the following dataset: MOBIO
FRLL-Morphs
FRLL-Morphs is a dataset of morphed faces based on images selected from the publicly available Face Research London Lab dataset [1]. We created the database by selecting similar-looking pairs of people and made 4 types of morphs for each pair using the following morphing tools: OpenCV [2], FaceMorpher [3], StyleGAN 2 [4], and WebMorpher.
* [1] https://figshare.com/articles/dataset/Face_Research_Lab_London_Set/5047666
* [2] https://www.learnopencv.com/face-morph-using-opencv-cpp-python
* [3] https://github.com/yaopang/FaceMorpher/tree/master/facemorpher
* [4] https://github.com/NVlabs/stylegan2
Provide a detailed description of the following dataset: FRLL-Morphs
VisualMRC
VisualMRC is a visual machine reading comprehension dataset that proposes the following task: given a question and a document image, a model produces an abstractive answer. You can find more details, analyses, and baseline results in the paper, VisualMRC: Machine Reading Comprehension on Document Images, AAAI 2021.

Statistics:
- 10,197 images
- 30,562 QA pairs
- 10.53 average question tokens (tokenized with the NLTK tokenizer)
- 9.53 average answer tokens (tokenized with the NLTK tokenizer)
- 151.46 average OCR tokens (tokenized with the NLTK tokenizer)
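As a small illustration of how average-token statistics like the ones above can be computed with the NLTK tokenizer, here is a hedged sketch; the example strings are placeholders, not actual VisualMRC data.

```python
import nltk
nltk.download("punkt", quiet=True)      # tokenizer models used by word_tokenize
nltk.download("punkt_tab", quiet=True)  # required by newer NLTK versions
from nltk.tokenize import word_tokenize

# Placeholder questions; the real statistics are computed over the 30,562 QA pairs.
questions = [
    "What is shown in the header of the document?",
    "Which year does the report cover?",
]

avg_question_tokens = sum(len(word_tokenize(q)) for q in questions) / len(questions)
print(f"average question tokens: {avg_question_tokens:.2f}")
```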
Provide a detailed description of the following dataset: VisualMRC
FERET-Morphs
FERET-Morphs is a dataset of morphed faces selected from the publicly available FERET dataset [1]. We created the database by selecting similar-looking pairs of people and made 3 types of morphs for each pair using the following morphing tools: OpenCV [2], FaceMorpher [3], and StyleGAN 2 [4].
* [1] https://www.nist.gov/itl/products-and-services/color-feret-database
* [2] https://www.learnopencv.com/face-morph-using-opencv-cpp-python
* [3] https://github.com/yaopang/FaceMorpher/tree/master/facemorpher
* [4] https://github.com/NVlabs/stylegan2
Provide a detailed description of the following dataset: FERET-Morphs
FRGC-Morphs
FRGC-Morphs is a dataset of morphed faces selected from the publicly available FRGC dataset [1]. We created the database by selecting similar-looking pairs of people and made 3 types of morphs for each pair using the following morphing tools: OpenCV [2], FaceMorpher [3], and StyleGAN 2 [4].
* [1] https://www.nist.gov/programs-projects/face-recognition-grand-challenge-frgc
* [2] https://www.learnopencv.com/face-morph-using-opencv-cpp-python
* [3] https://github.com/yaopang/FaceMorpher/tree/master/facemorpher
* [4] https://github.com/NVlabs/stylegan2
Provide a detailed description of the following dataset: FRGC-Morphs
NISP- A Multi-lingual Multi-accent Dataset for Speaker Profiling
We announce the release of a new multilingual speaker dataset, the NITK-IISc Multilingual Multi-accent Speaker Profiling (NISP) dataset. The dataset contains speech in six different languages -- five Indian languages along with Indian English -- from 345 bilingual speakers in India. Each speaker has contributed about 4-5 minutes of data that includes recordings in both English and their mother tongue. The transcripts are provided in UTF-8 format. For every speaker, the dataset contains speaker metadata such as L1, native place, medium of instruction, and current place of residence. In addition, the dataset contains physical parameters of the speakers such as age, height, shoulder size, and weight. We hope that the dataset is useful for a diverse set of research activities, including multilingual speaker recognition, language and accent recognition, and automatic speech recognition.
Provide a detailed description of the following dataset: NISP- A Multi-lingual Multi-accent Dataset for Speaker Profiling
NinaPro DB2
The second Ninapro database includes 40 intact subjects and is thoroughly described in the paper: "Manfredo Atzori, Arjan Gijsberts, Claudio Castellini, Barbara Caputo, Anne-Gabrielle Mittaz Hager, Simone Elsig, Giorgio Giatsidis, Franco Bassetto & Henning Müller. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Scientific Data, 2014" (http://www.nature.com/articles/sdata201453). Please cite this paper for any work related to the Ninapro database. Please also refer to the paper by Gijsberts et al., 2014 (http://publications.hevs.ch/index.php/publications/show/1629) for more information about the database.
Provide a detailed description of the following dataset: NinaPro DB2
BuzzFeed-Webis Fake News Corpus 2016
The BuzzFeed-Webis Fake News Corpus 2016 comprises the output of nine publishers in a week close to the 2016 US elections. Among the selected publishers are six prolific hyperpartisan publishers (three left-wing and three right-wing) and three mainstream publishers. All publishers earned Facebook's blue checkmark, indicating authenticity and an elevated status within the network. For seven weekdays (September 19 to 23 and September 26 and 27), every post and linked news article of the nine publishers was fact-checked by professional journalists at BuzzFeed. In total, 1,627 articles were checked: 826 mainstream, 256 left-wing, and 545 right-wing. The imbalance between categories results from differing publication frequencies.
Provide a detailed description of the following dataset: BuzzFeed-Webis Fake News Corpus 2016
POLIT-FALSE-n-LEGIT NEWS DB 2016-2017
The LiT.RL POLIT-FALSE-n-LEGIT NEWS DB 2016-2017 contains a total of 274 news articles about U.S. Politics, content-matched in pairs of legitimate and falsified news. The database is free and released under an open license for educational and research purposes.
Provide a detailed description of the following dataset: POLIT-FALSE-n-LEGIT NEWS DB 2016-2017
GQN rooms-ring-camera
GQN rooms-ring-camera consists of scenes of a variable number of random objects captured in a square room of size 7x7 units. Wall textures, floor textures as well as the shapes of the objects are randomly chosen within a fixed pool of discrete options. There are 5 possible wall textures (red, green, cerise, orange, yellow), 3 possible floor textures (yellow, white, blue) and 7 possible object shapes (box, sphere, cylinder, capsule, cone, icosahedron and triangle). Each scene contains 1, 2 or 3 objects. In this simplified version of the dataset, the camera only moves on a fixed ring and always faces the center of the room.
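As a hedged illustration of the random scene composition described above (not the original GQN generation code; the dictionary layout is invented for clarity), a minimal sketch:

```python
import random

# Illustrative sketch only: each scene draws textures, shapes and an object
# count from the fixed pools listed in the dataset description.
WALL_TEXTURES = ["red", "green", "cerise", "orange", "yellow"]
FLOOR_TEXTURES = ["yellow", "white", "blue"]
OBJECT_SHAPES = ["box", "sphere", "cylinder", "capsule", "cone", "icosahedron", "triangle"]

def sample_scene(rng=random):
    n_objects = rng.choice([1, 2, 3])
    return {
        "wall_texture": rng.choice(WALL_TEXTURES),
        "floor_texture": rng.choice(FLOOR_TEXTURES),
        "objects": [rng.choice(OBJECT_SHAPES) for _ in range(n_objects)],
    }

print(sample_scene())
```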
Provide a detailed description of the following dataset: GQN rooms-ring-camera
ISOT Fake News Dataset
The ISOT Fake News dataset is a compilation of several thousand fake news and truthful articles, obtained from different legitimate news sites and from sites flagged as unreliable by Politifact.com.
Provide a detailed description of the following dataset: ISOT Fake News Dataset
ObjectsRoom
The **ObjectsRoom** dataset is based on the MuJoCo environment used by the Generative Query Network [4] and is a multi-object extension of the 3d-shapes dataset. The training set contains 1M scenes with up to three objects. We also provide ~1K test examples for the following variants:
- Empty room: scenes consist of the sky, walls, and floor only.
- Six objects: exactly 6 objects are visible in each image.
- Identical color: 4-6 objects are placed in the room and have an identical, randomly sampled color.

Datapoints consist of an image and a fixed number of masks. The first four masks correspond to the sky, floor, and the two halves of the wall, respectively. The remaining masks correspond to the foreground objects.
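A hedged sketch of how a datapoint's masks could be split according to the layout above; the total mask count, image resolution, and array shapes are assumptions for illustration, not the dataset's exact specification.

```python
import numpy as np

# Illustrative sketch only: split per-datapoint masks into background and
# foreground following the ordering described above.
num_masks, H, W = 7, 64, 64                     # assumed shapes for illustration
image = np.zeros((H, W, 3), dtype=np.uint8)     # the scene image
masks = np.zeros((num_masks, H, W, 1), dtype=np.uint8)

background_masks = masks[:4]   # sky, floor, and the two wall halves
foreground_masks = masks[4:]   # one mask per foreground object

print(background_masks.shape, foreground_masks.shape)
```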
Provide a detailed description of the following dataset: ObjectsRoom
SVDC Fake News Dataset
A labeled dataset that presents fake news surrounding the conflict in Syria. The dataset consists of a set of articles/news labeled 0 (fake) or 1 (credible). The credibility of articles is computed with respect to ground-truth information obtained from the Syrian Violations Documentation Center (VDC). In particular, for each article, we crowdsource the information-extraction job (e.g., date, location, number of casualties) using the crowdsourcing platform Figure Eight (formerly CrowdFlower). Then, we match those articles against the VDC database to deduce whether an article is fake or not. The dataset can be used to train machine learning models to detect fake news.
Provide a detailed description of the following dataset: SVDC Fake News Dataset
FakeNewsAMT & Celebrity
**FakeNewsAMT & Celebrity** include two novel datasets for the task of fake news detection, covering seven different news domains.
Provide a detailed description of the following dataset: FakeNewsAMT & Celebrity