dataset_name | description | prompt |
|---|---|---|
Oxford105k | **Oxford105k** is the combination of the Oxford5k dataset and 99,782 negative (distractor) images crawled from Flickr using the 145 most popular tags. This dataset is used to evaluate large-scale search performance for object retrieval, reported as mAP. | Provide a detailed description of the following dataset: Oxford105k |
DispScenes | The **DispScenes** dataset was created to address the specific problem of disparate image matching. Its image pairs exhibit high levels of variation in illumination and viewpoint and also contain instances of occlusion. The dataset provides manual ground-truth keypoint correspondences for all images.
Source: [Matching Disparate Image Pairs Using Shape-Aware ConvNets](https://arxiv.org/abs/1811.09889) | Provide a detailed description of the following dataset: DispScenes |
Retrieval-SfM | The **Retrieval-SfM** dataset is used for instance-level image retrieval. It contains 28,559 images from 713 locations around the world, each labeled with the location it belongs to. Most locations are famous man-made structures, such as palaces and towers, which are relatively static and therefore well suited to visual place recognition. The training images exhibit various perceptual changes, including variations in viewing angle, occlusion and illumination conditions.
Source: [Localizing Discriminative Visual Landmarks for Place Recognition](https://arxiv.org/abs/1904.06635) | Provide a detailed description of the following dataset: Retrieval-SfM |
VGG Cell | The **VGG Cell** dataset (made up entirely of synthetic images) is the main public benchmark used to compare cell counting techniques.
Source: [People, Penguins and Petri Dishes: Adapting Object Counting Models To New Visual Domains And Object Types Without Forgetting](https://arxiv.org/abs/1711.05586)
Image Source: [https://www.robots.ox.ac.uk/~vgg/research/counting/index_org.html](https://www.robots.ox.ac.uk/~vgg/research/counting/index_org.html) | Provide a detailed description of the following dataset: VGG Cell |
Tiny Images | The **Tiny Images** dataset contains 80 million 32×32 images collected from the Internet by querying image search engines with the words in WordNet.
**The authors have decided to withdraw it because it contains offensive content, and have asked the community to stop using it.** | Provide a detailed description of the following dataset: Tiny Images |
Permuted MNIST | **Permuted MNIST** is an MNIST variant consisting of 70,000 images of handwritten digits from 0 to 9, with 60,000 images used for training and 10,000 for testing. It differs from the original MNIST in that each of its ten tasks is multi-class classification under a different fixed random permutation of the input pixels. | Provide a detailed description of the following dataset: Permuted MNIST |
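For concreteness, here is a minimal NumPy sketch of how permuted-MNIST tasks are typically constructed; the `train_images` array is an assumption standing in for MNIST digits loaded by any loader:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_permuted_task(images):
    """Apply one fixed random pixel permutation to a batch of MNIST images.

    `images`: array of shape (N, 28, 28). Each call draws a new
    permutation, i.e. defines a new task.
    """
    perm = rng.permutation(28 * 28)            # the permutation is the task identity
    flat = images.reshape(len(images), -1)     # (N, 784)
    return flat[:, perm].reshape(images.shape), perm

# Ten tasks = ten independent permutations of the same underlying digits:
# tasks = [make_permuted_task(train_images) for _ in range(10)]
```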
MNIST-8M | **MNIST-8M** (MNIST8M) is derived from the MNIST dataset by applying random deformations and translations to the original images. | Provide a detailed description of the following dataset: MNIST-8M |
SUN3D | **SUN3D** is a large-scale RGB-D video database composed of 415 sequences captured in 254 different spaces across 41 different buildings; 8 sequences are annotated, with each frame carrying a semantic segmentation of the objects in the scene and information about the camera pose. Moreover, some places have been captured multiple times at different moments of the day. | Provide a detailed description of the following dataset: SUN3D |
TUM RGB-D | **TUM RGB-D** is an RGB-D dataset. It contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. The data was recorded at full frame rate (30 Hz) and sensor resolution (640x480). The ground-truth trajectory was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras (100 Hz). | Provide a detailed description of the following dataset: TUM RGB-D |
SceneNet | **SceneNet** is a dataset of labelled synthetic indoor scenes. There are several labeled indoor scenes, including:
- 11 Bedroom scenes with 428 objects
- 15 Office scenes with 1,203 objects
- 11 Kitchen scenes with 797 objects
- 10 Living Room scenes with 715 objects
- 10 Bathrooms with 556 objects | Provide a detailed description of the following dataset: SceneNet |
SceneNet RGB-D | **SceneNet RGB-D** is a synthetic dataset containing large-scale photorealistic renderings of indoor scene trajectories with pixel-level annotations. Random sampling permits virtually unlimited scene configurations, and the dataset creators provide a set of 5M rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses. Each layout also has random lighting, camera trajectories, and textures. The scale of this dataset is well suited for pre-training data-driven computer vision techniques from scratch with RGB-D inputs, which previously has been limited by relatively small labelled datasets such as NYUv2 and SUN RGB-D. It also provides a basis for investigating 3D scene labelling tasks by providing perfect camera poses and depth data as a proxy for a SLAM system.
Source: [ViewAL: Active Learning with Viewpoint Entropy for Semantic Segmentation](https://arxiv.org/abs/1911.11789)
Image Source: [https://robotvault.bitbucket.io/scenenet-rgbd.html](https://robotvault.bitbucket.io/scenenet-rgbd.html) | Provide a detailed description of the following dataset: SceneNet RGB-D |
SUN Attribute | The **SUN Attribute** dataset consists of 14,340 images from 717 scene categories, and each category is annotated with a taxonomy of 102 discriminative attributes. The dataset can be used for high-level scene understanding and fine-grained scene recognition. | Provide a detailed description of the following dataset: SUN Attribute |
iSUN | **iSUN** is a ground truth of gaze traces on images from the SUN dataset. The collection is partitioned into 6,000 images for training, 926 for validation and 2,000 for test. | Provide a detailed description of the following dataset: iSUN |
BMS-26 | The **Berkeley Motion Segmentation** Dataset (**BMS-26**) is a dataset for motion segmentation, consisting of 26 video sequences with pixel-accurate segmentation annotation of moving objects. A total of 189 frames are annotated. 12 of the sequences are taken from the Hopkins 155 dataset, with new annotations added.
Source: [https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html](https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html)
Image Source: [https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html](https://lmb.informatik.uni-freiburg.de/resources/datasets/moseg.en.html) | Provide a detailed description of the following dataset: BMS-26 |
Freiburg Groceries | **Freiburg Groceries** is a groceries classification dataset consisting of 5000 images of size 256x256, divided into 25 categories. It has imbalanced class sizes ranging from 97 to 370 images per class. Images were taken in various aspect ratios and padded to squares.
Source: [XNAS: Neural Architecture Search with Expert Advice](https://arxiv.org/abs/1906.08031)
Image Source: [http://aisdatasets.informatik.uni-freiburg.de/freiburg_groceries_dataset/](http://aisdatasets.informatik.uni-freiburg.de/freiburg_groceries_dataset/) | Provide a detailed description of the following dataset: Freiburg Groceries |
Freiburg Spatial Relations | The **Freiburg Spatial Relations** dataset features 546 scenes each containing two out of 25 household objects. The depicted spatial relations can roughly be described as on top, on top on the corner, inside, inside and inclined, next to, and inclined. The dataset contains the 25 object models as textured .obj and .dae files, a low resolution .dae version for visualization in rviz, a scene description file containing the translation and rotation of the objects for each scene, a file with labels for each scene, the 15 splits used for cross validation, and a bash script to convert the models to pointclouds.
Source: [http://spatialrelations.cs.uni-freiburg.de/](http://spatialrelations.cs.uni-freiburg.de/)
Image Source: [http://spatialrelations.cs.uni-freiburg.de/](http://spatialrelations.cs.uni-freiburg.de/) | Provide a detailed description of the following dataset: Freiburg Spatial Relations |
Freiburg Street Crossing | The **Freiburg Street Crossing** dataset consists of data collected at three different street crossings in Freiburg, Germany: two traffic-light-regulated intersections and one zebra crossing without traffic lights. The data can be used to train agents to cross roads autonomously.
Source: [http://aisdatasets.informatik.uni-freiburg.de/streetcrossing/](http://aisdatasets.informatik.uni-freiburg.de/streetcrossing/)
Image Source: [http://aisdatasets.informatik.uni-freiburg.de/streetcrossing/](http://aisdatasets.informatik.uni-freiburg.de/streetcrossing/) | Provide a detailed description of the following dataset: Freiburg Street Crossing |
Freiburg Campus 3D Scan | The **Freiburg Campus 3D Scan** dataset consists of 3D area maps from the Freiburg campus that were scanned with 3D lasers. Areas include corridors, the outdoor campus, and some of the colleges and buildings.
Source: [http://ais.informatik.uni-freiburg.de/projects/datasets/octomap/](http://ais.informatik.uni-freiburg.de/projects/datasets/octomap/)
Image Source: [http://ais.informatik.uni-freiburg.de/projects/datasets/octomap/](http://ais.informatik.uni-freiburg.de/projects/datasets/octomap/) | Provide a detailed description of the following dataset: Freiburg Campus 3D Scan |
Plant Centroids | **Plant Centroids** is a dataset for stem emerging points (SEP) detection in RGB and NIR image data. The dataset is meant to aid the construction of agricultural robots, where detecting SEPs is an important perception task (to position weeding or fertilizing tools at the plant’s center and finding natural landmarks in the field environment). The dataset contains annotations for ~2000 image sets with a broad variance of plant species and growth stages.
Source: [http://plantcentroids.cs.uni-freiburg.de/](http://plantcentroids.cs.uni-freiburg.de/)
Image Source: [http://plantcentroids.cs.uni-freiburg.de/](http://plantcentroids.cs.uni-freiburg.de/) | Provide a detailed description of the following dataset: Plant Centroids |
Freiburg Across Seasons | **Freiburg Across Seasons** captures long-term perceptual changes across a span of 3 years. Image sequences were recorded with a forward-facing Bumblebee stereo camera mounted on a car; during summer the camera was mounted outside the car, whereas during winter it was inside the car. The image sequences are recorded at relatively low frame rates of 1 Hz and 4 Hz. All images have a resolution of 1024×768 (width×height) and are JPEG compressed. In total, there are ground-truth matches for 8,133 images for localization based on GPS position.
Source: [http://aisdatasets.informatik.uni-freiburg.de/freiburg_across_seasons/](http://aisdatasets.informatik.uni-freiburg.de/freiburg_across_seasons/)
Image Source: [http://aisdatasets.informatik.uni-freiburg.de/freiburg_across_seasons/](http://aisdatasets.informatik.uni-freiburg.de/freiburg_across_seasons/) | Provide a detailed description of the following dataset: Freiburg Across Seasons |
Freiburg Terrains | **Freiburg Terrains** consists of three parts: 3.7 hours of audio recordings from a microphone pointed at the robot's wheels, 24K RGB images from a camera mounted on top of the robot, and SLAM poses for each data-collection run. The dataset can be used for terrain classification, which is useful for agent navigation tasks.
Source: [http://deepterrain.cs.uni-freiburg.de/](http://deepterrain.cs.uni-freiburg.de/)
Image Source: [http://deepterrain.cs.uni-freiburg.de/](http://deepterrain.cs.uni-freiburg.de/) | Provide a detailed description of the following dataset: Freiburg Terrains |
Freiburg Block Tasks | **Freiburg Block Tasks** is a dataset for robot skill learning. It consists of two datasets.
The first dataset consists of three simulated robot tasks: stacking (A), color pushing (B) and color stacking (C), all simulated with PyBullet. It contains 300 multi-view demonstration videos per task; of these, 150 represent unsuccessful executions. The authors found it helpful to include unsuccessful demonstrations when training the embedding so that RL agents can be trained on it; without such examples, distances in the embedding space for states not seen during training might be noisy. The test set contains manipulations of blocks, while in the validation set the blocks are replaced by cylinders of different colors.
The second dataset includes real-world human executions of the simulated robot tasks (A, B and C), as well as demonstrations for a task where blocks must first be separated before they can be stacked (D). For each task there are 60 multi-view demonstration videos, corresponding to 24 minutes of interaction. In contrast to the simulated dataset, the real demonstrations contain no unsuccessful executions and are of varying length. The test set contains blocks of unseen sizes and textures, as well as unknown backgrounds. | Provide a detailed description of the following dataset: Freiburg Block Tasks |
Cityscapes-Motion | The **Cityscapes-Motion** dataset supplements the semantic annotations of the Cityscapes dataset, covering 2975 training images and 500 validation images. The dataset creators provide manually annotated motion labels for the car category. The images have a resolution of 2048×1024 pixels. The task is not only semantic segmentation but also predicting the motion status of objects.
Source: [http://deepmotion.cs.uni-freiburg.de/](http://deepmotion.cs.uni-freiburg.de/)
Image Source: [http://deepmotion.cs.uni-freiburg.de/](http://deepmotion.cs.uni-freiburg.de/) | Provide a detailed description of the following dataset: Cityscapes-Motion |
KITTI-Motion | The **KITTI-Motion** dataset contains pixel-wise semantic class labels and moving object annotations for 255 images taken from the KITTI Raw dataset. The images are of resolution 1280×384 pixels and contain scenes of freeways, residential areas and inner-cities. The task is not just to semantically segment objects but also to identify their motion status.
Source: [http://deepmotion.cs.uni-freiburg.de/](http://deepmotion.cs.uni-freiburg.de/)
Image Source: [http://deepmotion.cs.uni-freiburg.de/](http://deepmotion.cs.uni-freiburg.de/) | Provide a detailed description of the following dataset: KITTI-Motion |
MobilityAids | **MobilityAids** is a dataset for perceiving people and their mobility aids. The annotations cover five classes: pedestrian, person in wheelchair, pedestrian pushing a person in a wheelchair, person using crutches, and person using a walking frame. In total, the dataset contains over 17,000 annotated RGB-D images of people, categorized according to the mobility aids they use. The images were collected in the facilities of the Faculty of Engineering of the University of Freiburg and in a hospital in Frankfurt.
Source: [http://mobility-aids.informatik.uni-freiburg.de/](http://mobility-aids.informatik.uni-freiburg.de/)
Image Source: [http://mobility-aids.informatik.uni-freiburg.de/](http://mobility-aids.informatik.uni-freiburg.de/) | Provide a detailed description of the following dataset: MobilityAids |
RobotPush | **RobotPush** is a dataset for object singulation – the task of separating cluttered objects through physical interaction. The dataset contains 3456 labeled training images and 1024 labeled validation images. It consists of simulated and real-world data collected from a PR2 robot equipped with a Kinect 2 camera. The dataset also contains ground-truth instance segmentation masks for 110 images in the test set. | Provide a detailed description of the following dataset: RobotPush |
DeepLocCross | **DeepLocCross** is a localization dataset containing RGB-D stereo images captured at 1280×720 pixels at a rate of 20 Hz. The ground-truth pose labels are generated using a LiDAR-based SLAM system. In addition to the 6-DoF localization poses of the robot, the dataset contains tracked detections of the observable dynamic objects; each tracked object is identified by a unique track ID, spatial coordinates, velocity and orientation angle. Furthermore, as the dataset contains multiple pedestrian crossings, labels at each intersection indicate whether it is safe to cross.
The dataset consists of seven training sequences with a total of 2264 images, and three testing sequences with a total of 930 images. The dynamic nature of the surrounding environment renders the tasks of localization and visual odometry estimation extremely challenging, due to varying weather conditions, shadows, and motion blur caused by the movement of the robot platform. The presence of multiple dynamic objects often results in partial or full occlusion of the informative regions of the image, and the presence of repeated structures makes pose estimation more challenging still. Overall, the dataset covers a wide range of perception-related tasks such as loop closure detection, semantic segmentation, visual odometry estimation, global localization, scene flow estimation and behavior prediction. | Provide a detailed description of the following dataset: DeepLocCross |
DeepLoc | **DeepLoc** is a large-scale urban outdoor localization dataset. It is currently comprised of one scene spanning an area of 110 × 130 m, which a robot traverses multiple times with different driving patterns. The dataset creators use a LiDAR-based SLAM system with sub-centimeter and sub-degree accuracy to compute the pose labels that are provided as ground truth. Poses in the dataset are spaced approximately 0.5 m apart, twice as dense as other relocalization datasets.
Furthermore, for each image the dataset creators provide pixel-wise semantic segmentation annotations for ten categories: Background, Sky, Road, Sidewalk, Grass, Vegetation, Building, Poles & Fences, Dynamic and Void. The dataset is divided into train and test splits such that the train set comprises seven loops with alternating driving styles, amounting to 2737 images, while the test set comprises three loops with a total of 1173 images. The dataset also contains global GPS/INS data and LiDAR measurements.
This dataset can be very challenging for vision based applications such as global localization, camera relocalization, semantic segmentation, visual odometry and loop closure detection, as it contains substantial lighting, weather changes, repeating structures, reflective and transparent glass buildings. | Provide a detailed description of the following dataset: DeepLoc |
Freiburg Lighting Adaptable Map Tracking | **Freiburg Lighting Adaptable Map Tracking** is a dataset for camera trajectory estimation. The dataset consists of two subdatasets, each consisting of a Lighting Adaptable Map and three camera trajectories recorded under varying lighting conditions. The map meshes are stored in PLY format with custom properties and elements. The trajectories contain synchronized RGB-D images, exposure times and gains, ground-truth light settings and camera poses, as well as the camera tracking results presented in the paper.
Source: [http://tracklam.informatik.uni-freiburg.de/](http://tracklam.informatik.uni-freiburg.de/)
Image Source: [http://tracklam.informatik.uni-freiburg.de/](http://tracklam.informatik.uni-freiburg.de/) | Provide a detailed description of the following dataset: Freiburg Lighting Adaptable Map Tracking |
Freiburg Poking | The **Freiburg Poking** dataset is a dataset for learning intuitive physics from physical interaction. It consists of 40K interactions collected with a KUKA LBR iiwa manipulator and a fixed Azure Kinect RGB-D camera. The dataset creators built a styrofoam arena with walls that prevent objects from falling off. At any given time, 3-7 objects, randomly chosen from a set of 34 distinct objects, were present in the arena. The objects differed from each other in shape, appearance, material, mass and friction.
Source: [http://hind4sight.cs.uni-freiburg.de/](http://hind4sight.cs.uni-freiburg.de/)
Image Source: [http://hind4sight.cs.uni-freiburg.de/](http://hind4sight.cs.uni-freiburg.de/) | Provide a detailed description of the following dataset: Freiburg Poking |
7-Scenes | The **7-Scenes** dataset is a collection of tracked RGB-D camera frames. The dataset may be used for evaluation of methods for different applications such as dense tracking and mapping and relocalization techniques.
All scenes were recorded from a handheld Kinect RGB-D camera at 640×480 resolution. The dataset creators use an implementation of the KinectFusion system to obtain the ‘ground truth’ camera tracks, and a dense 3D model. Several sequences were recorded per scene by different users, and split into distinct training and testing sequence sets.
Source: [https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/](https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/)
Image Source: [https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/](https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/) | Provide a detailed description of the following dataset: 7-Scenes |
Cross-Dataset Testbed | The Cross-Dataset Testbed is a DeCAF7-based cross-dataset image classification dataset covering 40 categories of images from 3 domains: 3,847 images in Caltech256, 4,000 images in ImageNet, and 2,626 images in SUN. In total there are 10,473 images of 40 categories from these three domains.
Source: [Probability Weighted Compact Feature for Domain Adaptive Retrieval](https://arxiv.org/abs/2003.03293)
Image Source: [https://sites.google.com/site/crossdataset/](https://sites.google.com/site/crossdataset/) | Provide a detailed description of the following dataset: Cross-Dataset Testbed |
Washington RGB-D | **Washington RGB-D** is a widely used testbed in the robotics community, consisting of 41,877 RGB-D images organized into 300 instances divided into 51 classes of common indoor objects (e.g. scissors, cereal box, keyboard). Each object instance was positioned on a turntable and captured from three different viewpoints while rotating. | Provide a detailed description of the following dataset: Washington RGB-D |
TUM Kitchen | The **TUM Kitchen** dataset is an action recognition dataset that contains 20 video sequences captured by 4 cameras with overlapping views. The camera network captures the scene from four viewpoints at 25 fps, and every RGB frame has a resolution of 384×288 pixels. The action labels are frame-wise and provided separately for the left arm, the right arm and the torso.
Source: [Temporal Human Action Segmentation via Dynamic Clustering](https://arxiv.org/abs/1803.05790)
Image Source: [https://ias.in.tum.de/dokuwiki/software/kitchen-activity-data](https://ias.in.tum.de/dokuwiki/software/kitchen-activity-data) | Provide a detailed description of the following dataset: TUM Kitchen |
HIC | The Hands in Action (**HIC**) dataset contains RGB-D sequences of hands interacting with objects.
Source: [Learning joint reconstruction of hands and manipulated objects](https://arxiv.org/abs/1904.05767)
Image Source: [http://files.is.tue.mpg.de/dtzionas/Hand-Object-Capture/](http://files.is.tue.mpg.de/dtzionas/Hand-Object-Capture/) | Provide a detailed description of the following dataset: HIC |
George Washington | The **George Washington** dataset contains 20 pages of letters written by George Washington and his associates in 1755, and is therefore categorized as a historical document collection. The images are annotated at the word level and contain approximately 5,000 words. | Provide a detailed description of the following dataset: George Washington |
Watch-n-Patch | The **Watch-n-Patch** dataset was created with a focus on modeling human activities comprising multiple actions in a completely unsupervised setting. It was collected with a Microsoft Kinect One sensor for a total length of about 230 minutes, divided into 458 videos. 7 subjects perform daily activities in 8 offices and 5 kitchens with complex backgrounds. Moreover, skeleton data are provided as ground-truth annotations. | Provide a detailed description of the following dataset: Watch-n-Patch |
Parzival | The **Parzival** dataset consists of 47 pages by three writers, taken from a 13th-century medieval German manuscript containing the epic poem Parzival by Wolfram von Eschenbach. The image size is 2000 × 3000 pixels. 24 pages are used as the training set, 14 pages as the test set, and 2 pages as the validation set.
Source: [https://diuf.unifr.ch/main/hisdoc/divadia](https://diuf.unifr.ch/main/hisdoc/divadia)
Image Source: [https://diuf.unifr.ch/main/hisdoc/divadia](https://diuf.unifr.ch/main/hisdoc/divadia) | Provide a detailed description of the following dataset: Parzival |
CDTB | The **CDTB** (color-and-depth visual object tracking) dataset is recorded by several passive and active RGB-D setups and contains indoor as well as outdoor sequences acquired in direct sunlight. The sequences were recorded to contain significant object pose change, clutter, occlusion, and periods of long-term target absence to enable tracker evaluation under realistic conditions. Sequences are per-frame annotated with 13 visual attributes for detailed analysis. It contains around 100,000 samples.
Source: [https://www.vicos.si/Projects/CDTB](https://www.vicos.si/Projects/CDTB)
Image Source: [https://www.vicos.si/Projects/CDTB](https://www.vicos.si/Projects/CDTB) | Provide a detailed description of the following dataset: CDTB |
EgoDexter | The **EgoDexter** dataset provides both 2D and 3D pose annotations for 4 testing video sequences with 3190 frames. The videos are recorded with body-mounted camera from egocentric viewpoints and contain cluttered backgrounds, fast camera motion, and complex interactions with various objects. Fingertip positions were manually annotated for 1485 out of 3190 frames.
Source: [Hand Pose Estimation via Latent 2.5D Heatmap Regression](https://arxiv.org/abs/1804.09534)
Image Source: [https://handtracker.mpi-inf.mpg.de/projects/OccludedHands/EgoDexter.htm](https://handtracker.mpi-inf.mpg.de/projects/OccludedHands/EgoDexter.htm) | Provide a detailed description of the following dataset: EgoDexter |
SynthHands | The **SynthHands** dataset is a dataset for hand pose estimation which consists of real captured hand motion retargeted to a virtual hand with natural backgrounds and interactions with different objects. The dataset contains data for male and female hands, both with and without interaction with objects. While the hand and foreground object are synthetically generated using Unity, the motion was obtained from real performances as described in the accompanying paper. In addition, real object textures and background images (depth and color) were used. Ground-truth 3D positions are provided for 21 keypoints of the hand.
Source: [Egocentric 6-DoF Tracking of Small Handheld Objects](https://arxiv.org/abs/1804.05870)
Image Source: [https://handtracker.mpi-inf.mpg.de/projects/OccludedHands/SynthHands.htm](https://handtracker.mpi-inf.mpg.de/projects/OccludedHands/SynthHands.htm) | Provide a detailed description of the following dataset: SynthHands |
Washington RGB-D Scenes v2 | The RGB-D Scenes Dataset v2 consists of 14 scenes containing furniture (chair, coffee table, sofa, table) and a subset of the objects in the RGB-D Object Dataset (bowls, caps, cereal boxes, coffee mugs, and soda cans). Each scene is a point cloud created by aligning a set of video frames using Patch Volumes Mapping. | Provide a detailed description of the following dataset: Washington RGB-D Scenes v2 |
Washington RGB-D Scenes | The RGB-D Scenes Dataset contains 8 scenes annotated with objects that belong to the Washington RGB-D Object Dataset. Each scene is a single video sequence consisting of multiple RGB-D frames.
Source: [https://rgbd-dataset.cs.washington.edu/dataset/rgbd-scenes-v2/](https://rgbd-dataset.cs.washington.edu/dataset/rgbd-scenes-v2/)
Image Source: [https://arxiv.org/abs/1904.02530](https://arxiv.org/abs/1904.02530) | Provide a detailed description of the following dataset: Washington RGB-D Scenes |
MannequinChallenge | The **MannequinChallenge** Dataset (MQC) provides in-the-wild videos of people in static poses while a hand-held camera pans around the scene. The dataset consists of three splits for training, validation and testing. | Provide a detailed description of the following dataset: MannequinChallenge |
Freiburg RGB-D People | The **Freiburg RGB-D People** dataset contains 3000+ RGB-D frames acquired in a university hall from three vertically mounted Kinect sensors. The data contains mostly upright walking and standing persons seen from different orientations and with different levels of occlusions.
Source: [http://www2.informatik.uni-freiburg.de/~spinello/RGBD-dataset.html](http://www2.informatik.uni-freiburg.de/~spinello/RGBD-dataset.html)
Image Source: [http://www2.informatik.uni-freiburg.de/~spinello/RGBD-dataset.html](http://www2.informatik.uni-freiburg.de/~spinello/RGBD-dataset.html) | Provide a detailed description of the following dataset: Freiburg RGB-D People |
Fraunhofer IPA Bin-Picking | The **Fraunhofer IPA Bin-Picking** dataset is a large-scale dataset comprising both simulated and real-world scenes for various objects (potentially having symmetries), fully annotated with 6D poses. A physics simulation is used to create scenes of many parts in bulk by dropping objects in random positions and orientations above a bin. The dataset also extends the Siléane dataset by providing more samples, which makes it possible, e.g., to train deep neural networks and benchmark their performance on the public Siléane dataset.
Source: [https://www.bin-picking.ai/en/dataset.html](https://www.bin-picking.ai/en/dataset.html)
Image Source: [https://arxiv.org/abs/1912.12125](https://arxiv.org/abs/1912.12125) | Provide a detailed description of the following dataset: Fraunhofer IPA Bin-Picking |
PAVIS RGB-D | **PAVIS RGB-D** is a dataset for person re-identification using depth information. The main motivation is that techniques such as SDALF fail when individuals change their clothing, and therefore cannot be used for long-term video surveillance; depth information addresses this problem because it stays constant over a longer period of time. The dataset is composed of four groups of data collected using the Kinect. The first group was obtained by recording 79 people with a frontal view, walking slowly, avoiding occlusions and with stretched arms ("Collaborative"); this happened in an indoor scenario where the people were at least 2 meters away from the camera. The second ("Walking1") and third ("Walking2") groups are frontal recordings of the same 79 people walking normally while entering the lab where they work. The fourth group ("Backwards") is a back-view recording of the people walking away from the lab.
The dataset creators provide five synchronized pieces of information for each person: 1) a set of 5 RGB images, 2) the foreground masks, 3) the skeletons, 4) the 3D mesh (ply), and 5) the estimated floor. | Provide a detailed description of the following dataset: PAVIS RGB-D |
Couples Therapy | The **Couples Therapy** corpus contains audio, video recordings and manual transcriptions of conversations between 134 real-life couples attending marital therapy. In each session, one person selected a topic that was discussed over 10 minutes with the spouse. At the end of the session, both speakers were rated separately on 33 “behavior codes” by multiple annotators based on the Couples Interaction and Social Support Rating Systems. Each behavior was rated on a Likert scale from 1, indicating absence, to 9, indicating strong presence. A session-level rating was obtained for each speaker by averaging the annotator ratings. This process was repeated for the spouse, resulting in 2 sessions per couple at a time. The total number of sessions per couple varied between 2 and 6.
Source: [Modeling Interpersonal Influence of Verbal Behavior in Couples Therapy Dyadic Interactions](https://arxiv.org/abs/1805.09436) | Provide a detailed description of the following dataset: Couples Therapy |
Raider | The **Raider** dataset collects fMRI recordings of 1000 voxels from the ventral temporal cortex, for 10 healthy adult participants passively watching the full-length movie “Raiders of the Lost Ark”.
Source: [Time-Resolved fMRI Shared Response Model using Gaussian Process Factor Analysis](https://arxiv.org/abs/2006.05572)
Image Source: [https://arxiv.org/abs/1909.12537](https://arxiv.org/abs/1909.12537) | Provide a detailed description of the following dataset: Raider |
VizDoom | ViZDoom is an AI research platform based on the classic first-person shooter game Doom. The most popular game mode is probably the so-called Death Match, where several players join a maze and fight against each other. After a fixed time, the match ends and all players are ranked by their FRAG score, defined as kills minus suicides. During the game, each player can access various observations, including the first-person-view screen pixels, the corresponding depth map and segmentation map (pixel-wise object labels), the bird's-eye-view maze map, etc. The valid actions include almost all the keyboard strokes and mouse controls a human player can use, covering moving, turning, jumping, shooting, changing weapons, etc. ViZDoom can run a game either synchronously or asynchronously, i.e., the game core either waits until all players' actions are collected or runs at a constant frame rate without waiting. | Provide a detailed description of the following dataset: VizDoom |
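As a rough illustration of the interface described above, the following is a minimal random-agent loop against ViZDoom's Python API; the scenario config `basic.cfg` and the three-button action layout are assumptions for this sketch, not part of the dataset entry:

```python
import random
from vizdoom import DoomGame

game = DoomGame()
game.load_config("basic.cfg")  # assumed: a scenario config shipped with ViZDoom
game.init()

# One-hot actions over the buttons enabled by the scenario config.
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

game.new_episode()
while not game.is_episode_finished():
    state = game.get_state()  # screen buffer, optional depth/label buffers, game variables
    reward = game.make_action(random.choice(actions))
print("total reward:", game.get_total_reward())
game.close()
```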
StarCraft II Learning Environment | The **StarCraft II Learning Environment** (SC2LE) is a reinforcement learning environment based on the game StarCraft II. The environment consists of three sub-components: a Linux StarCraft II binary, the StarCraft II API and PySC2. The StarCraft II API allows programmatic control of StarCraft II. It can be used to start a game, get observations, take actions, and review replays. PySC2 is a Python environment that wraps the StarCraft II API to ease the interaction between Python reinforcement learning agents and StarCraft II. It defines an action and observation specification, and includes a random agent and a handful of rule-based agents as examples. It also includes some mini-games as challenges and visualization tools to understand what the agent can see and do.
Source: [https://github.com/deepmind/pysc2](https://github.com/deepmind/pysc2)
Image Source: [https://github.com/deepmind/pysc2](https://github.com/deepmind/pysc2) | Provide a detailed description of the following dataset: StarCraft II Learning Environment |
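To give a sense of the PySC2 interface, here is a sketch of running the bundled random agent; exact constructor arguments have varied across pysc2 releases, and the map name and feature sizes are illustrative choices:

```python
from absl import app
from pysc2.agents import random_agent
from pysc2.env import run_loop, sc2_env

def main(unused_argv):
    agent = random_agent.RandomAgent()
    with sc2_env.SC2Env(
        map_name="MoveToBeacon",
        players=[sc2_env.Agent(sc2_env.Race.terran)],
        agent_interface_format=sc2_env.AgentInterfaceFormat(
            feature_dimensions=sc2_env.Dimensions(screen=84, minimap=64)),
    ) as env:
        # Drives the observe/act loop for one episode.
        run_loop.run_loop([agent], env, max_episodes=1)

if __name__ == "__main__":
    app.run(main)
```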
AI2-THOR | AI2-THOR is an interactive environment for embodied AI. It contains four types of scenes (kitchen, living room, bedroom and bathroom), with 30 rooms per scene type; each room is unique in terms of furniture placement and item types. There are over 2000 unique objects for AI agents to interact with. | Provide a detailed description of the following dataset: AI2-THOR |
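A minimal sketch of interacting with AI2-THOR through its Python controller; the scene name `FloorPlan1` and the metadata keys shown follow the project's documented conventions, but treat the details as assumptions:

```python
from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan1")  # one of the kitchen scenes

event = controller.step(action="MoveAhead")  # every step returns an Event
print(event.metadata["agent"]["position"])   # agent pose after the action

# Interactable objects are listed in the event metadata.
visible = [obj["objectId"] for obj in event.metadata["objects"] if obj["visible"]]
print(visible[:5])

controller.stop()
```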
TORCS | **TORCS** (**The Open Racing Car Simulator**) is a driving simulator. It is capable of simulating the essential elements of vehicular dynamics such as mass, rotational inertia, collision, mechanics of suspensions, links and differentials, friction and aerodynamics. Physics simulation is simplified and is carried out through Euler integration of differential equations at a temporal discretization level of 0.002 seconds. The rendering pipeline is lightweight and based on OpenGL that can be turned off for faster training. TORCS offers a large variety of tracks and cars as free assets. It also provides a number of programmed robot cars with different levels of performance that can be used to benchmark the performance of human players and software driving agents. TORCS was built with the goal of developing Artificial Intelligence for vehicular control and has been used extensively by the machine learning community ever since its inception. | Provide a detailed description of the following dataset: TORCS |
DeepMind Control Suite | The **DeepMind Control Suite** (DMCS) is a set of simulated continuous control environments with a standardized structure and interpretable rewards. The tasks are written in Python and powered by the MuJoCo physics engine, making them easy to use and modify. Control Suite tasks include Pendulum, Acrobot, Cart-pole, Cart-k-pole, Ball in cup, Point-mass, Reacher, Finger, Hopper, Fish, Cheetah, Walker, Manipulator, Manipulator extra, Stacker, Swimmer, Humanoid, Humanoid_CMU and LQR. | Provide a detailed description of the following dataset: DeepMind Control Suite |
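A minimal random-policy loop against the suite, following dm_control's documented loading interface (the cartpole/swingup pair is one of the tasks listed above):

```python
import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
action_spec = env.action_spec()

time_step = env.reset()
while not time_step.last():
    # Sample a uniformly random action within the spec's bounds.
    action = np.random.uniform(action_spec.minimum,
                               action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)  # carries reward, discount and observations
```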
GVGAI | The **General Video Game AI** (**GVGAI**) framework is widely used in research and features a corpus of over 100 single-player games and 60 two-player games. These are fairly small games, each focusing on specific mechanics or skills the players should be able to demonstrate, including clones of classic arcade games such as Space Invaders, puzzle games like Sokoban, adventure games like Zelda, and game-theory problems such as the Iterated Prisoner's Dilemma. All games are real-time and require players to make decisions in only 40 ms at every game tick, although not all games explicitly reward or require fast reactions; in fact, some of the best game-playing approaches use the time at the beginning of the game to run Breadth-First Search in puzzle games in order to find an accurate solution. However, given the large variety of games (many of which are stochastic and difficult to predict accurately), scoring systems and termination conditions, all unknown to the players, highly adaptive general methods are needed to tackle the diverse challenges posed. | Provide a detailed description of the following dataset: GVGAI |
StarData | **StarData** is a StarCraft: Brood War replay dataset, with 65,646 games. The full dataset after compression is 365 GB, 1535 million frames, and 496 million player actions. The entire frame data was dumped out at 8 frames per second. | Provide a detailed description of the following dataset: StarData |
Atari-HEAD | **Atari-HEAD** is a dataset of human actions and eye movements recorded while playing Atari video games. For every game frame, the corresponding image frame, the human keystroke action, the reaction time for that action, the gaze positions, and the immediate reward returned by the environment were recorded. The gaze data was recorded using an EyeLink 1000 eye tracker at 1000 Hz. The human subjects are amateur players who are familiar with the games. Subjects were only allowed to play for 15 minutes and were required to rest for at least 15 minutes before the next trial. Data was collected from 4 subjects, 16 games, and 175 15-minute trials, for a total of 2.97 million frames/demonstrations.
Source: [https://zenodo.org/record/2587121](https://zenodo.org/record/2587121)
Image Source: [https://arxiv.org/abs/1903.06754](https://arxiv.org/abs/1903.06754) | Provide a detailed description of the following dataset: Atari-HEAD |
Mario AI | **Mario AI** was a benchmark environment for reinforcement learning. The gameplay in Mario AI, as in the original Nintendo’s version, consists in moving the controlled character, namely Mario, through two-dimensional levels, which are viewed sideways. Mario can walk and run to the right and left, jump, and (depending on which state he is in) shoot fireballs. Gravity acts on Mario, making it necessary to jump over cliffs to get past them. Mario can be in one of three states: Small, Big (can kill enemies by jumping onto them), and Fire (can shoot fireballs). | Provide a detailed description of the following dataset: Mario AI |
D4RL | **D4RL** is a collection of environments for offline reinforcement learning. These environments include Maze2D, AntMaze, Adroit, Gym, Flow, FrankaKitchen and CARLA. | Provide a detailed description of the following dataset: D4RL |
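A sketch of loading one of the offline datasets through D4RL's gym integration; the environment ID `maze2d-umaze-v1` is one example, and importing `d4rl` is what registers the environments:

```python
import gym
import d4rl  # noqa: F401 -- importing registers the offline-RL environments

env = gym.make("maze2d-umaze-v1")

# The raw logged data: a dict of numpy arrays keyed by
# "observations", "actions", "rewards", "terminals", ...
dataset = env.get_dataset()
print(dataset["observations"].shape, dataset["actions"].shape)

# Helper that re-chunks the data into (s, a, r, s') transitions
# for off-policy training.
transitions = d4rl.qlearning_dataset(env)
```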
AtariARI | The **AtariARI** (**Atari Annotated RAM Interface**) is an environment for representation learning. The Atari Arcade Learning Environment (ALE) does not explicitly expose any ground-truth state information. However, ALE does expose the RAM state (128 bytes per timestep), which game programmers use to store important state information such as the location of sprites, the state of the clock, or the current room the agent is in. To extract these variables, the dataset creators consulted commented disassemblies (or source code) of Atari 2600 games made available by Engelhardt and Jentzsch and CPUWIZ, and were able to find and verify important state variables for a total of 22 games. Combining this information with the ALE interface produced a wrapper that can automatically output a state label for every example frame generated from the game. The dataset creators make this available as an easy-to-use gym wrapper, which returns this information with no change to existing code using gym interfaces.
Source: [https://arxiv.org/pdf/1906.08226.pdf](https://arxiv.org/pdf/1906.08226.pdf)
Image Source: [https://github.com/mila-iqia/atari-representation-learning](https://github.com/mila-iqia/atari-representation-learning) | Provide a detailed description of the following dataset: AtariARI |
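Based on the repository's README, using the gym wrapper looks roughly like this (the environment ID is illustrative):

```python
import gym
from atariari.benchmark.wrapper import AtariARIWrapper

env = AtariARIWrapper(gym.make("MsPacmanNoFrameskip-v4"))
obs = env.reset()
obs, reward, done, info = env.step(1)

# RAM-derived ground-truth state variables, e.g. sprite x/y
# coordinates, are exposed per step under info["labels"].
print(info["labels"])
```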
Lani | **LANI** is a 3D navigation environment and corpus in which an agent navigates between landmarks. It contains 27,965 crowd-sourced instructions for navigation in an open environment. Each datapoint includes an instruction, a human-annotated ground-truth demonstration trajectory, and an environment with various landmarks and lakes. The train/dev/test split is 19,758/4,135/4,072. Each environment specification defines the placement of 6–13 landmarks within a square grass field of size 50m×50m.
Source: [Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction](https://arxiv.org/abs/1811.04179)
Image Source: [https://arxiv.org/pdf/1809.00786.pdf](https://arxiv.org/pdf/1809.00786.pdf) | Provide a detailed description of the following dataset: Lani |
CHALET | **CHALET** is a 3D house simulator with support for navigation and manipulation. Unlike existing systems, CHALET supports both a wide range of object manipulations and complex environment layouts consisting of multiple rooms. The range of object manipulations includes the ability to pick up and place objects, toggle the state of objects like taps or televisions, open or close containers, and insert or remove objects from these containers. In addition, the simulator comes with 58 rooms that can be combined to create houses, including 10 default house layouts. CHALET is therefore suitable for setting up challenging environments for various AI tasks that require complex language understanding and planning, such as navigation, manipulation, instruction following, and interactive question answering. | Provide a detailed description of the following dataset: CHALET |
Griddly | **Griddly** is an environment for grid-world based research. Griddly provides a highly optimized game state and rendering engine with a flexible high-level interface for configuring environments. It offers simple interfaces for single-player, multi-player and RTS games, as well as multiple methods of rendering, configurable partial observability and interfaces for procedural content generation. | Provide a detailed description of the following dataset: Griddly |
NomBank | **NomBank** is an annotation project at New York University related to the PropBank project at the University of Colorado. The goal is to mark the sets of arguments that co-occur with nouns in the PropBank Corpus (the Wall Street Journal Corpus of the Penn Treebank), just as PropBank records such information for verbs. As a side effect of the annotation process, the authors are producing a number of other resources, including various dictionaries as well as PropBank-style lexical entries called frame files. These resources help the user label the various arguments and adjuncts of the head nouns with roles (sets of argument labels for each sense of each noun). NYU and the University of Colorado are making a coordinated effort to ensure that, when possible, role definitions are consistent across parts of speech. For example, PropBank's frame file for the verb "decide" was used in the annotation of the noun "decision". | Provide a detailed description of the following dataset: NomBank |
QA-SRL | **QA-SRL** was proposed as an open schema for semantic roles, in which the relation between an argument and a predicate is expressed as a natural-language question containing the predicate (“Where was someone educated?”) whose answer is the argument (“Princeton”). The authors collected about 19,000 question-answer pairs from 3,200 sentences. | Provide a detailed description of the following dataset: QA-SRL |
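To make the QA-SRL schema concrete, here is the example from the description rendered as a hypothetical annotation record; the field names are illustrative, not the dataset's actual file format:

```python
# One QA-SRL annotation: each predicate in a sentence gets natural-language
# questions whose answers are spans of the same sentence.
annotation = {
    "sentence": "The physicist was educated at Princeton.",  # illustrative sentence
    "predicate": "educated",
    "qa_pairs": [
        {"question": "Who was educated somewhere?", "answer": "The physicist"},
        {"question": "Where was someone educated?", "answer": "Princeton"},
    ],
}
```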
SParC | **SParC** is a large-scale dataset for complex, cross-domain, and context-dependent (multi-turn) semantic parsing and text-to-SQL task (interactive natural language interfaces for relational databases). | Provide a detailed description of the following dataset: SParC |
CoNLL 2002 | The shared task of CoNLL-2002 concerns language-independent named entity recognition. The types of named entities include: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task were offered training and test data for at least two languages. Information sources other than the training data might have been used in this shared task. | Provide a detailed description of the following dataset: CoNLL 2002 |
Panlex | PanLex translates words in thousands of languages. Its database is panlingual (emphasizes coverage of every language) and lexical (focuses on words, not sentences). | Provide a detailed description of the following dataset: Panlex |
MCScript | **MCScript** is the official dataset of SemEval-2018 Task 11. It comprises a collection of text passages about daily-life activities and a series of questions referring to each passage, with each question equipped with two answer choices. MCScript contains 9,731, 1,411, and 2,797 questions in the training, development, and test sets, respectively. | Provide a detailed description of the following dataset: MCScript |
KP20k | **KP20k** is a large-scale scholarly articles dataset with 528K articles for training, 20K articles for validation and 20K articles for testing. | Provide a detailed description of the following dataset: KP20k |
Semantic Scholar | The **Semantic Scholar** corpus (S2) is composed of titles from scientific papers published in machine learning conferences and journals from 1985 to 2017, split by year (33 timesteps). | Provide a detailed description of the following dataset: Semantic Scholar |
EVALution | The **EVALution** dataset is evenly distributed among three classes (hypernyms, co-hyponyms and random) and involves three parts of speech (noun, verb, adjective). The full dataset contains a total of 4,263 distinct terms consisting of 2,380 nouns, 958 verbs and 972 adjectives. | Provide a detailed description of the following dataset: EVALution |
Senseval-2 | There are now many computer programs for automatically determining the sense of a word in context (Word Sense Disambiguation or WSD). The purpose of SENSEVAL is to evaluate the strengths and weaknesses of such programs with respect to different words, different varieties of language, and different languages. | Provide a detailed description of the following dataset: Senseval-2 |
RoboCup | **RoboCup** is an initiative in which research groups compete by enabling their robots to play football matches. Playing football requires solving several challenging tasks, such as vision, motion, and team coordination. Framing the research efforts onto football attracts public interest (and potential research funding) in robotics, which may otherwise be less entertaining to non-experts. | Provide a detailed description of the following dataset: RoboCup |
ShARC | **ShARC** is a conversational question answering dataset focusing on question answering from texts containing rules. | Provide a detailed description of the following dataset: ShARC |
SIQA | **Social Interaction QA (SIQA)** is a question-answering benchmark for testing social commonsense intelligence. Contrary to many prior benchmarks that focus on physical or taxonomic knowledge, Social IQa focuses on reasoning about people’s actions and their social implications. For example, given an action like "Jesse saw a concert" and a question like "Why did Jesse do this?", humans can easily infer that Jesse wanted "to see their favorite performer" or "to enjoy the music", and not "to see what's happening inside" or "to see if it works". The actions in Social IQa span a wide variety of social situations, and answer candidates contain both human-curated answers and adversarially-filtered machine-generated candidates. Social IQa contains over 37,000 QA pairs for evaluating models’ abilities to reason about the social implications of everyday events and situations. | Provide a detailed description of the following dataset: SIQA |
OLID | **OLID** is a hierarchical dataset for identifying the type and the target of offensive texts in social media. The dataset was collected on Twitter and is publicly available. There are 14,100 tweets in total, of which 13,240 are in the training set and 860 in the test set. Each tweet carries three levels of labels: (A) Offensive/Not-Offensive, (B) Targeted-Insult/Untargeted, (C) Individual/Group/Other. The relationship between them is hierarchical: if a tweet is offensive, it can have a target or no target, and if it is offensive towards a specific target, the target can be an individual, a group, or another entity. This dataset was used in the OffensEval-2019 competition at SemEval-2019. | Provide a detailed description of the following dataset: OLID |
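The three-level hierarchy can be pictured as a small nested structure; this is a sketch of the label space as described above, not OLID's actual file format:

```python
# Hierarchical label space of OLID: level B applies only to offensive
# tweets, and level C only to targeted insults.
OLID_LABELS = {
    "A": ["Offensive", "Not-Offensive"],
    "B": ["Targeted-Insult", "Untargeted"],  # only if A == "Offensive"
    "C": ["Individual", "Group", "Other"],   # only if B == "Targeted-Insult"
}
```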
Multi-News | **Multi-News** consists of news articles and human-written summaries of these articles from the site newser.com. Each summary is professionally written by editors and includes links to the original articles cited. | Provide a detailed description of the following dataset: Multi-News |
CLOTH | The Cloze Test by Teachers (**CLOTH**) benchmark is a collection of nearly 100,000 4-way multiple-choice cloze-style questions from middle- and high-school-level English language exams, where the answer fills a blank in a given text. Each question is labeled with the type of deep reasoning it involves, where the four possible types are grammar, short-term reasoning, matching/paraphrasing, and long-term reasoning, i.e., reasoning over multiple sentences.
Source: [Recent Advances in Natural Language Inference:A Survey of Benchmarks, Resources, and Approaches](https://arxiv.org/abs/1904.01172)
Image Source: [https://arxiv.org/pdf/1711.03225.pdf](https://arxiv.org/pdf/1711.03225.pdf) | Provide a detailed description of the following dataset: CLOTH |
CosmosQA | CosmosQA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people's everyday narratives, asking questions concerning the likely causes or effects of events, which require reasoning beyond the exact text spans in the context. | Provide a detailed description of the following dataset: CosmosQA |
WinoBias | **WinoBias** contains 3,160 sentences, split equally for development and test, created by researchers familiar with the project. Sentences were created to follow two prototypical templates but annotators were encouraged to come up with scenarios where entities could be interacting in plausible ways. Templates were selected to be challenging and designed to cover cases requiring semantics and syntax separately. | Provide a detailed description of the following dataset: WinoBias |
Spades | The **Spades** dataset contains 93,319 questions derived from ClueWeb09 sentences. Specifically, the questions were created by randomly removing an entity, thus producing sentence-denotation pairs.
Source: [Learning an Executable Neural Semantic Parser](https://arxiv.org/abs/1711.05066)
Image Source: [https://github.com/sivareddyg/graph-parser/blob/master/data/spades/results/graphparser-ccg-supervised-dev.txt](https://github.com/sivareddyg/graph-parser/blob/master/data/spades/results/graphparser-ccg-supervised-dev.txt) | Provide a detailed description of the following dataset: Spades |
WikiSum | **WikiSum** is a dataset based on English Wikipedia, suitable for the task of multi-document abstractive summarization. In each instance, the input comprises a Wikipedia topic (the title of an article) and a collection of non-Wikipedia reference documents, and the target is the Wikipedia article text. The dataset is restricted to articles with at least one crawlable citation. The official split divides the articles roughly 80/10/10 into train/development/test subsets, resulting in 1,865,750, 233,252, and 232,998 examples respectively. | Provide a detailed description of the following dataset: WikiSum |
DRCD | The Delta Reading Comprehension Dataset (DRCD) is an open-domain traditional Chinese machine reading comprehension (MRC) dataset. It aims to be a standard Chinese MRC dataset that can serve as a source dataset for transfer learning. The dataset contains 10,014 paragraphs from 2,108 Wikipedia articles and 30,000+ questions generated by annotators. | Provide a detailed description of the following dataset: DRCD |
EmotionLines | **EmotionLines** contains a total of 29245 labeled utterances from 2000 dialogues. Each utterance in dialogues is labeled with one of seven emotions, six Ekman’s basic emotions plus the neutral emotion. Each labeling was accomplished by 5 workers, and for each utterance in a label, the emotion category with the highest votes was set as the label of the utterance. Those utterances voted as more than two different emotions were put into the non-neutral category. Therefore the dataset has a total of 8 types of emotion labels, anger, disgust, fear, happiness, sadness, surprise, neutral, and non-neutral. | Provide a detailed description of the following dataset: EmotionLines |
Chinese Gigaword | The **Chinese Gigaword** corpus consists of 2.2M headline-document pairs of news stories spanning over 284 months, drawn from two Chinese newspapers, namely the Xinhua News Agency of China (XIN) and the Central News Agency of Taiwan (CNA). | Provide a detailed description of the following dataset: Chinese Gigaword |
CELEX | **CELEX** database comprises three different searchable lexical databases, Dutch, English and German. The lexical data contained in each database is divided into five categories: orthography, phonology, morphology, syntax (word class) and word frequency. | Provide a detailed description of the following dataset: CELEX |
MuST-C | **MuST-C** currently represents the largest publicly available multilingual corpus (one-to-many) for speech translation. It covers eight language directions, from English to German, Spanish, French, Italian, Dutch, Portuguese, Romanian and Russian. The corpus consists of audio, transcriptions and translations of English TED talks, and it comes with a predefined training, validation and test split. | Provide a detailed description of the following dataset: MuST-C |
Who-did-What | **Who-did-What** collects its corpus from news articles and provides answer options for its questions, similar to CBT. Each question is formed from two independent articles: one article is treated as the context to be read, and a separate article about the same event is used to form the query. | Provide a detailed description of the following dataset: Who-did-What |
MetaQA | The **MetaQA** dataset consists of a movie ontology derived from the WikiMovies Dataset and three sets of question-answer pairs written in natural language: 1-hop, 2-hop, and 3-hop queries. | Provide a detailed description of the following dataset: MetaQA |
FakeNewsNet | **FakeNewsNet** is collected from two fact-checking websites, GossipCop and PolitiFact, and contains news content with labels annotated by professional journalists and experts, along with social context information. | Provide a detailed description of the following dataset: FakeNewsNet |
STS 2014 | STS-2014 is from SemEval-2014, constructed from image descriptions, news headlines, tweet news, discussion forums, and OntoNotes. | Provide a detailed description of the following dataset: STS 2014 |
MEDIA | The **MEDIA** French corpus is dedicated to semantic extraction from speech in the context of human/machine dialogues. The corpus has manual transcriptions and conceptual annotations of dialogues from 250 speakers. It is split into three parts: (1) the training set (720 dialogues, 12K sentences), (2) the development set (79 dialogues, 1.3K sentences), and (3) the test set (200 dialogues, 3K sentences).
Source: [Dialogue history integration into end-to-end signal-to-concept spoken language understanding systems](https://arxiv.org/abs/2002.06012)
Image Source: [http://www.lrec-conf.org/proceedings/lrec2004/pdf/356.pdf](http://www.lrec-conf.org/proceedings/lrec2004/pdf/356.pdf) | Provide a detailed description of the following dataset: MEDIA |
ASPEC | **ASPEC**, the Asian Scientific Paper Excerpt Corpus, was constructed by the Japan Science and Technology Agency (JST) in collaboration with the National Institute of Information and Communications Technology (NICT). It consists of a Japanese-English paper abstract corpus of 3M parallel sentences (ASPEC-JE) and a Japanese-Chinese paper excerpt corpus of 680K parallel sentences (ASPEC-JC). The corpus is one of the achievements of the Japanese-Chinese machine translation project run in Japan from 2006 to 2010. | Provide a detailed description of the following dataset: ASPEC |
OMICS | **OMICS** is an extensive collection of knowledge for indoor service robots gathered from internet users. Currently, it contains 48 tables capturing different sorts of knowledge. Each tuple of the Help table maps a user desire to a task that may meet the desire (e.g., ⟨ “feel thirsty”, “by offering drink” ⟩). Each tuple of the Tasks/Steps table decomposes a task into several steps (e.g., ⟨ “serve a drink”, 0. “get a glass”, 1. “get a bottle”, 2. “fill glass from bottle”, 3. “give glass to person” ⟩). OMICS thus offers useful knowledge about the hierarchy of naturalistic instructions, where a high-level user request (e.g., “serve a drink”) can be reduced to lower-level tasks (e.g., “get a glass”, ⋯); a minimal sketch of this reduction appears after this entry. Another feature of OMICS is that the elements of any tuple in an OMICS table are semantically related according to a predefined template, which facilitates the semantic interpretation of OMICS tuples.
Source: [Understanding User Instructions by Utilizing Open Knowledge for Service Robots](https://arxiv.org/abs/1606.02877)
Image Source: [https://www.aaai.org/Papers/AAAI/2004/AAAI04-096.pdf](https://www.aaai.org/Papers/AAAI/2004/AAAI04-096.pdf) | Provide a detailed description of the following dataset: OMICS |
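As a concrete illustration, the sketch below encodes the two example tuples quoted above as plain Python mappings and reduces a high-level request to its steps; the table layout and function name are assumptions for illustration, not the actual OMICS schema.

```python
# Toy encodings of the two OMICS tables quoted above (the real corpus
# has 48 tables and many more tuples).
HELP = {
    "feel thirsty": "by offering drink",
}
TASK_STEPS = {
    "serve a drink": [
        "get a glass",
        "get a bottle",
        "fill glass from bottle",
        "give glass to person",
    ],
}

def reduce_request(task):
    """Expand a high-level task into its ordered lower-level steps,
    leaving tasks with no known decomposition as atomic actions."""
    return TASK_STEPS.get(task, [task])

print(reduce_request("serve a drink"))
# ['get a glass', 'get a bottle', 'fill glass from bottle', 'give glass to person']
```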
QUASAR | The Question Answering by Search And Reading (**QUASAR**) dataset is a large-scale dataset consisting of [QUASAR-S](quasar-s) and [QUASAR-T](quasar-t). Both are built to evaluate systems that must understand a natural language query, read a large corpus of text, and extract an answer to the question from that corpus. Specifically, QUASAR-S comprises 37,012 fill-in-the-gap questions collected from the popular website Stack Overflow using entity tags. QUASAR-T contains 43,012 open-domain questions collected from various internet sources. The candidate documents for each question in this dataset are retrieved from an Apache Lucene based search engine built on top of the ClueWeb09 dataset. | Provide a detailed description of the following dataset: QUASAR |
Dialogue State Tracking Challenge | The Dialog State Tracking Challenges 2 & 3 (DSTC2&3) were research challenges focused on improving the state of the art in tracking the state of spoken dialog systems. State tracking, sometimes called belief tracking, refers to accurately estimating the user's goal as a dialog progresses. Accurate state tracking is desirable because it provides robustness to errors in speech recognition and helps reduce the ambiguity inherent in language within a temporal process like dialog.
In these challenges, participants were given labelled corpora of dialogs to develop state tracking algorithms. The trackers were then evaluated on a common set of held-out dialogs, which were released, unlabelled, during a one-week period.
The corpus was collected using Amazon Mechanical Turk and consists of dialogs in two domains: restaurant information and tourist information. Tourist information subsumes restaurant information and includes bars, cafés, etc., as well as multiple new slots. There were two rounds of evaluation using this data:
DSTC 2 released a large number of training dialogs related to restaurant search. Compared to DSTC (which was in the bus timetables domain), DSTC 2 introduced changing user goals, tracking of 'requested slots', and the new restaurants domain. Results from DSTC 2 were presented at SIGDIAL 2014.
DSTC 3 addressed the problem of adaptation to a new domain: tourist information. DSTC 3 released a small amount of labelled data in the tourist information domain; participants used this data plus the restaurant data from DSTC 2 for training.
Dialogs used for training are fully labelled; user transcriptions, user dialog-act semantics, and dialog states are all annotated. (This corpus is therefore also suitable for studies in Spoken Language Understanding.) | Provide a detailed description of the following dataset: Dialogue State Tracking Challenge |
ISEAR | Over a period of many years during the 1990s, a large group of psychologists all over the world collected data in the **ISEAR** project, directed by Klaus R. Scherer and Harald Wallbott. Student respondents, both psychologists and non-psychologists, were asked to report situations in which they had experienced each of 7 major emotions (joy, fear, anger, sadness, disgust, shame, and guilt). In each case, the questions covered the way they had appraised the situation and how they reacted. The final data set thus contained reports on seven emotions each by close to 3,000 respondents in 37 countries on all 5 continents. | Provide a detailed description of the following dataset: ISEAR |
CMRC | **CMRC** is a dataset annotated by human experts with nearly 20,000 questions, as well as a challenging set composed of questions that require reasoning over multiple clues. | Provide a detailed description of the following dataset: CMRC |
PubMed RCT | **PubMed 200k RCT** is a new dataset based on PubMed for sequential sentence classification. The dataset consists of approximately 200,000 abstracts of randomized controlled trials, totaling 2.3 million sentences. Each sentence of each abstract is labeled with its role in the abstract using one of the following classes: background, objective, method, result, or conclusion. The purpose of releasing this dataset is twofold. First, the majority of datasets for sequential short-text classification (i.e., classification of short texts that appear in sequences) are small: the authors hope that releasing a new large dataset will help develop more accurate algorithms for this task. Second, from an application perspective, researchers need better tools to efficiently skim through the literature. Automatically classifying each sentence in an abstract would help researchers read abstracts more efficiently, especially in fields where abstracts may be long, such as the medical field. | Provide a detailed description of the following dataset: PubMed RCT |
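For orientation, here is a minimal parsing sketch, assuming the plain-text layout of the public release in which each abstract begins with a `###<id>` line followed by one `LABEL<tab>sentence` pair per line; verify the exact format against the copy you download.

```python
def parse_rct_file(path):
    """Parse a PubMed RCT-style file into a list of abstracts, each a
    dict with the abstract id and its (label, sentence) pairs.

    Assumed layout: a '###<id>' line starts an abstract; every
    following non-blank line is '<LABEL>\t<sentence>'.
    """
    abstracts, current = [], None
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("###"):
                current = {"id": line[3:], "sentences": []}
                abstracts.append(current)
            elif line and current is not None:
                label, sentence = line.split("\t", 1)
                current["sentences"].append((label, sentence))
    return abstracts
```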