| dataset_name | description | prompt |
|---|---|---|
MasakhaNER | MasakhaNER is a collection of Named Entity Recognition (NER) datasets for 10 different African languages. The languages forming this dataset are: Amharic, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian-Pidgin, Swahili, Wolof, and Yorùbá. | Provide a detailed description of the following dataset: MasakhaNER |
3D Vehicle Tracking Simulation Dataset | To collect the **3D Vehicle Tracking Simulation Dataset**, a driving simulation is used to obtain accurate 3D bounding box annotations at no cost in human effort. The data collection and annotation pipeline extends previous works such as VIPER and FSV, especially in terms of linking identities across frames. The simulation is based on Grand Theft Auto V, a modern game that simulates a functioning city and its surroundings in a photo-realistic three-dimensional world. Note that the pipeline is real-time, providing the potential for large-scale data collection, whereas VIPER requires expensive offline processing. | Provide a detailed description of the following dataset: 3D Vehicle Tracking Simulation Dataset |
UIT-ViCTSD | UIT-ViCTSD (Vietnamese Constructive and Toxic Speech Detection) is a dataset for constructive and toxic speech detection in Vietnamese. It consists of 10,000 human-annotated comments. | Provide a detailed description of the following dataset: UIT-ViCTSD |
ParaCrawl | ParaCrawl v.7.1 is a parallel dataset with 41 language pairs primarily aligned with English (39 out of 41) and mined using the parallel-data-crawling tool Bitextor which includes downloading documents, preprocessing and normalization, aligning documents and segments, and filtering noisy data via Bicleaner. ParaCrawl focuses on European languages, but also includes 9 lower-resource, non-European language pairs in v7.1. | Provide a detailed description of the following dataset: ParaCrawl |
mC4 | **mC4** is a multilingual variant of the [C4](https://paperswithcode.com/dataset/c4) dataset. It comprises natural text in 101 languages drawn from the public Common Crawl web scrape. | Provide a detailed description of the following dataset: mC4 |
Penson et al.'s dataset derived from the MSK-IMPACT dataset | The dataset is derived from the MSK-IMPACT dataset designed and published by [Zehir et al.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5461196/) using the code published by Penson et al. The derivation process is described in [Development of Genome-Derived Tumor Type Prediction to Inform Clinical Cancer Care](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6865333/). | Provide a detailed description of the following dataset: Penson et al.'s dataset derived from the MSK-IMPACT dataset |
Cross-Linguistic Polysemies | Data from: Using network approaches to enhance the analysis of cross-linguistic polysemies | Provide a detailed description of the following dataset: Cross-Linguistic Polysemies |
PRW | **PRW** is a large-scale dataset for end-to-end pedestrian detection and person recognition in raw video frames. PRW is introduced to evaluate Person Re-identification in the Wild, using videos acquired through six synchronized cameras. It contains 932 identities and 11,816 frames in which pedestrians are annotated with their bounding box positions and identities. | Provide a detailed description of the following dataset: PRW |
TICaM | TICaM is a Time-of-flight In-car Cabin Monitoring dataset for vehicle interior monitoring using a single wide-angle depth camera. This dataset addresses the deficiencies of other available in-car cabin datasets in terms of the ambit of labeled classes, recorded scenarios and provided annotations, all at the same time. It consists of an exhaustive list of actions performed while driving and multi-modal labeled images (depth, RGB and IR), with complete annotations for 2D and 3D object detection, instance and semantic segmentation as well as activity annotations for RGB frames. In addition to the real recordings, it also contains a synthetic dataset of in-car cabin images with the same multi-modality of images and annotations, providing a unique and extremely beneficial combination of synthetic and real data for effectively training cabin monitoring systems and evaluating domain adaptation approaches. | Provide a detailed description of the following dataset: TICaM |
F-SIOL-310 | F-SIOL-310 is a robotic dataset and benchmark for Few-Shot Incremental Object Learning, which is used to test incremental learning capabilities for robotic vision from a few examples.
A robot was used to actively capture household objects on a table. The dataset is specifically designed for FSIL with only a small set of training images and a larger set of test images per object category captured by the robot using its own camera and it considers various other robot vision challenges as well, such as different object sizes,
object transparency and a clear distinction between objects in the train and test sets. It contains images of 310 objects from 22 categories. | Provide a detailed description of the following dataset: F-SIOL-310 |
PESMOD | The PESMOD (PExels Small Moving Object Detection) dataset consists of high-resolution aerial images in which moving objects are labelled manually. It was created from videos selected from the Pexels website. The aim of this dataset is to provide a different and challenging benchmark for evaluating moving object detection methods. Each moving object is labelled in each frame in PASCAL VOC format in an XML file. The dataset consists of 8 different video sequences. | Provide a detailed description of the following dataset: PESMOD |
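The PASCAL VOC XML annotations mentioned for PESMOD above can be parsed with the Python standard library. The sketch below is a minimal example assuming the common VOC tag names (`object`, `name`, `bndbox`); the exact schema used by the PESMOD release may differ.
```python
# Minimal sketch: parse one PASCAL VOC-style XML annotation file and return
# its bounding boxes. Tag names follow the usual VOC convention; treat the
# exact PESMOD schema as an assumption.
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    """Return a list of (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name", default="moving_object")
        bb = obj.find("bndbox")
        boxes.append((
            name,
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return boxes
```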
SwissDial | SwissDial is an annotated parallel corpus of spoken Swiss German across 8 major dialects, plus a Standard German reference. It contains parallel spoken data for 8 different regions: Aargau (AG), Bern (BE), Basel (BS), Graubunden (GR), Luzern (LU), St. Gallen (SG), Wallis (VS) and Zurich (ZH). | Provide a detailed description of the following dataset: SwissDial |
Alsat-2B | Alsat-2B is a remote sensing dataset of low and high spatial resolution images (10 m and 2.5 m, respectively) for the single-image super-resolution task. The high-resolution images are obtained through pan-sharpening. The dataset has been created from 13 images captured by the Alsat-2B Earth observation satellite, where the images cover 13 different cities. | Provide a detailed description of the following dataset: Alsat-2B |
Autoencoder Paraphrase Dataset (AEPD) | This is a benchmark for neural paraphrase detection, to differentiate between original and machine-generated content.
#### Training:
1,474,230 aligned paragraphs (98,282 original, 1,375,948 paraphrased with 3 models and 5 hyperparameter configurations, each applied to the 98,282 originals) extracted from 4,012 English Wikipedia articles.
#### Testing:
```
BERT-large (cased):
arXiv - Original - 20,966; Paraphrased - 20,966;
Theses - Original - 5,226; Paraphrased - 5,226;
Wikipedia - Original - 39,241; Paraphrased - 39,241;
RoBERTa-large (cased):
arXiv - Original - 20,966; Paraphrased - 20,966;
Theses - Original - 5,226; Paraphrased - 5,226;
Wikipedia - Original - 39,241; Paraphrased - 39,241;
Longformer-large (uncased):
arXiv - Original - 20,966; Paraphrased - 20,966;
Theses - Original - 5,226; Paraphrased - 5,226;
Wikipedia - Original - 39,241; Paraphrased - 39,241;
``` | Provide a detailed description of the following dataset: Autoencoder Paraphrase Dataset (AEPD) |
Ascent KB | This dataset contains **8.9M commonsense assertions** extracted by the Ascent pipeline developed at the [Max Planck Institute for Informatics](https://mpi-inf.mpg.de). The focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc. The current version of Ascent KB (v1.0.0) is approximately **19 times larger than ConceptNet** (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded). | Provide a detailed description of the following dataset: Ascent KB |
AHP | The AHP dataset consists of 56,599 images in total which are collected from several large-scale instance segmentation and detection datasets, including COCO, VOC (w/ SBD), LIP, Objects365 and OpenImages. Each image is annotated with a pixel-level segmentation mask of a single integrated human.
The dataset is initially proposed to solve the task of human de-occlusion.
##### Data Splits
* Train: 56,302 images in total, with annotations of integrated humans.
* Valid: 297 images in total, consisting of synthesized occlusion cases.
* Test: 56 images in total, consisting of artificial occlusion cases. | Provide a detailed description of the following dataset: AHP |
VCAS-Motion | Video class agnostic segmentation (VCAS) is the task of segmenting objects without regard to their semantics, combining appearance, motion and geometry from monocular video sequences. The main motivation behind this is to account for unknown objects in the scene and to act as a redundant signal alongside the segmentation of known classes for better safety.
This VCAS benchmark is built from KITTI-MOTS and Cityscapes-VPS. | Provide a detailed description of the following dataset: VCAS-Motion |
Tatoeba Translation Challenge | The Tatoeba Translation Challenge is a benchmark for machine translation that provides training and test data for thousands of language pairs covering over 500 languages.
The Tatoeba translation challenge includes shuffled training data taken from OPUS, an open collection of parallel corpora, and test data from Tatoeba, a crowd-sourced collection of user-provided translations in a large number of languages.
The current release includes over 500GB of compressed data for 2,961 language pairs covering 555 languages. The data sets are released per language pair with the following structure (using `deu-eng` as an example):
```
data/deu-eng/
data/deu-eng/train.src.gz
data/deu-eng/train.trg.gz
data/deu-eng/train.id.gz
data/deu-eng/dev.id
data/deu-eng/dev.src
data/deu-eng/dev.trg
data/deu-eng/test.src
data/deu-eng/test.trg
data/deu-eng/test.id
``` | Provide a detailed description of the following dataset: Tatoeba Translation Challenge |
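As a rough illustration of how the per-language-pair files listed in the Tatoeba Translation Challenge entry above might be consumed, the sketch below pairs up lines from the gzipped source and target training files. The file names follow the listing, but the reading logic is an assumption rather than part of the official release tooling.
```python
# Minimal sketch: iterate over aligned training sentence pairs for one
# language pair, assuming the layout shown above (e.g. data/deu-eng/...).
import gzip
import itertools

def iter_train_pairs(pair_dir, limit=None):
    src_path = f"{pair_dir}/train.src.gz"
    trg_path = f"{pair_dir}/train.trg.gz"
    with gzip.open(src_path, "rt", encoding="utf-8") as src, \
         gzip.open(trg_path, "rt", encoding="utf-8") as trg:
        pairs = zip(src, trg)  # the two files are line-aligned
        if limit is not None:
            pairs = itertools.islice(pairs, limit)
        for s, t in pairs:
            yield s.rstrip("\n"), t.rstrip("\n")

# Example usage:
# for src_sent, trg_sent in iter_train_pairs("data/deu-eng", limit=5):
#     print(src_sent, "=>", trg_sent)
```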
MISAW | The MISAW data set is composed of 27 sequences of micro-surgical anastomosis on artificial blood vessels performed by 3 surgeons and 3 engineering students. The dataset contains video, kinematic, and procedural descriptions synchronized at 30 Hz. The procedural descriptions contain phases, steps, and activities performed by the participants. | Provide a detailed description of the following dataset: MISAW |
SNDZoo | The softwarised network data zoo (SNDZoo) is an open collection of software networking data sets aiming to streamline and ease machine learning research in the software networking domain. Most of the published data sets focus on, but are not limited to, the performance of virtualised network functions (VNFs). The data is collected using fully automated NFV benchmarking frameworks, such as tng-bench, developed by us or third party solutions like Gym. The collection of the presented data sets follows the general VNF benchmarking methodology described in. | Provide a detailed description of the following dataset: SNDZoo |
Us Vs. Them | The *Us vs. Them* dataset consists of 6,861 Reddit comments annotated for populist attitudes, accompanying the first large-scale computational models of this phenomenon. It covers the relationship between populist mindsets and social groups, as well as a range of emotions typically associated with these. | Provide a detailed description of the following dataset: Us Vs. Them |
TAS500 | TAS500 is a semantic segmentation dataset for autonomous driving in unstructured environments. TAS500 offers fine-grained vegetation and terrain classes to learn drivable surfaces and natural obstacles in outdoor scenes effectively. | Provide a detailed description of the following dataset: TAS500 |
CSFCube | CSFCube is an expert-annotated test collection to evaluate models trained to perform faceted Query by Example. This test collection consists of a diverse set of 50 query documents drawn from computational linguistics and machine learning venues. | Provide a detailed description of the following dataset: CSFCube |
Finnish Paraphrase Corpus | Finnish Paraphrase Corpus is a fully manually annotated paraphrase corpus for Finnish containing 53,572 paraphrase pairs harvested from alternative subtitles and news headings. Out of all paraphrase pairs in the corpus 98% are manually classified to be paraphrases at least in their given context, if not in all contexts. | Provide a detailed description of the following dataset: Finnish Paraphrase Corpus |
Rainbow | Rainbow is a multi-task benchmark for common-sense reasoning that uses different existing QA datasets: aNLI, Cosmos QA, HellaSWAG, Physical IQa, Social IQa, and WinoGrande. | Provide a detailed description of the following dataset: Rainbow |
Re-TACRED | The Re-TACRED dataset is a significantly improved version of the TACRED dataset for relation extraction. Using new crowd-sourced labels, Re-TACRED prunes poorly annotated sentences and addresses TACRED relation definition ambiguity, ultimately correcting 23.9% of TACRED labels. This dataset contains over 91 thousand sentences spread across 40 relations. Dataset presented at AAAI 2021.
Paper (arXiv): https://arxiv.org/abs/2104.08398 | Provide a detailed description of the following dataset: Re-TACRED |
ThreeDWorld Transport Challenge | ThreeDWorld Transport Challenge is a visually-guided and physics-driven task-and-motion planning benchmark. In this challenge, an embodied agent equipped with two 9-DOF articulated arms is spawned randomly in a simulated physical home environment. The agent is required to find a small set of objects scattered around the house, pick them up, and transport them to a desired final location. Several containers are positioned around the house that can be used as tools to assist with transporting objects efficiently. To complete the task, an embodied agent must plan a sequence of actions to change the state of a large number of objects in the face of realistic physical constraints.
This benchmark challenge has been built using the ThreeDWorld simulation: a virtual 3D environment where all objects respond to physics, and where agents can be controlled using a fully physics-driven navigation and interaction API. | Provide a detailed description of the following dataset: ThreeDWorld Transport Challenge |
USB | The Universal-Scale object detection Benchmark (USB) is a benchmark for object detection that has variations in object scales and image domains by incorporating COCO with the recently proposed Waymo Open Dataset and Manga109-s dataset. To enable fair comparison, USB establishes different protocols by defining multiple thresholds for training epochs and evaluation image resolutions. | Provide a detailed description of the following dataset: USB |
StyleKQC | StyleKQC is a style-variant paraphrase corpus for Korean questions and commands. It was built with a corpus construction scheme that simultaneously considers the core content and style of directives, namely intent and formality, for the Korean language. Utilizing manually generated natural language queries on six daily topics, the corpus was expanded to formal and informal sentences by human rewriting and transferring. | Provide a detailed description of the following dataset: StyleKQC |
ECtHR | ECtHR is a dataset comprising European Court of Human Rights cases, including annotations for paragraph-level rationales. This dataset comprises 11k ECtHR cases and can be viewed as an enriched version of the ECtHR dataset of Chalkidis et al. (2019), which did not provide ground truth for alleged article violations (articles discussed) and rationales. It is released with silver rationales obtained from references in court decisions, and gold rationales provided by ECHR-experienced lawyers. | Provide a detailed description of the following dataset: ECtHR |
BookingDataChallenge | The dataset contains anonymised hotel check-ins. It contains train and test parts; in the test part, the city of the last check-in is masked. The goal is to predict this masked check-in. | Provide a detailed description of the following dataset: BookingDataChallenge |
Multimodal Humor Dataset | A great number of situational comedies (sitcoms) are being regularly made, and the task of adding laughter tracks to these is a critical one. Being able to predict whether something will be humorous to the audience is also crucial. In this project, we aim to automate this task. Towards doing so, we annotate an existing sitcom ('The Big Bang Theory') and use the laughter cues present to obtain a manual annotation for this show. We provide detailed analysis of the dataset design and further evaluate various state-of-the-art baselines for solving this task. We observe that existing LSTM- and BERT-based networks on the text alone do not perform as well as joint text and video or video-only networks. Moreover, it is challenging to ascertain that the words attended to while predicting laughter are indeed humorous. Our dataset and the analysis provided through this paper are a valuable resource towards solving this interesting semantic and practical task. As an additional contribution, we have developed a novel multi-modal self-attention based model for solving this task that outperforms currently prevalent models. | Provide a detailed description of the following dataset: Multimodal Humor Dataset |
UJIIndoorLoc | The UJIIndoorLoc is a multi-building, multi-floor indoor localization database for testing indoor positioning systems that rely on WLAN/WiFi fingerprints. | Provide a detailed description of the following dataset: UJIIndoorLoc |
RC-49 | RC-49 is a benchmark dataset for generating images conditional on a continuous scalar variable. It is made by rendering 49 3-D chair models from ShapeNet individually. Each chair model is rendered at 899 yaw angles from $0.1^{\circ}$ to $89.9^{\circ}$ with a stepsize of $0.1^{\circ}$. This dataset contains 44,051 RGB images of size $64\times64$ with corresponding yaw angles as labels.
Note that in CcGAN, angles are used for training if their last digits are odd. Thus, there are 450 angles in the training set. Moreover, for these 450 training angles, only 25 images for each angle are used for the training. | Provide a detailed description of the following dataset: RC-49 |
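The training split rule quoted above (keep an angle when the last digit of its 0.1-degree step is odd, then keep 25 images per kept angle) can be written down directly; the sketch below only illustrates that rule, with a generic list of (image_path, angle) records standing in for the actual RC-49 file list.
```python
# Minimal sketch of the CcGAN-style RC-49 training split described above.
import random

def is_training_angle(angle_deg):
    """RC-49 yaw angles run from 0.1 to 89.9 degrees in 0.1-degree steps."""
    last_digit = round(angle_deg * 10) % 10
    return last_digit % 2 == 1

def make_training_subset(records, per_angle=25, seed=0):
    """records: iterable of (image_path, angle_deg); keeps 25 images per training angle."""
    rng = random.Random(seed)
    by_angle = {}
    for path, angle in records:
        if is_training_angle(angle):
            by_angle.setdefault(round(angle, 1), []).append(path)
    subset = []
    for angle, paths in sorted(by_angle.items()):
        rng.shuffle(paths)
        subset.extend((p, angle) for p in paths[:per_angle])
    return subset

# Sanity check: 450 of the 899 angles are training angles.
training_angles = [a / 10 for a in range(1, 900) if is_training_angle(a / 10)]
assert len(training_angles) == 450
```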
Cell-200 | Cell-200 is a dataset of synthetic fluorescence microscopy images with cell populations generated by SIMCEP. The Cell-200 dataset consists of 200,000 $64\times 64$ grayscale images. The number of cells per image ranges from 1 to 200 and there are 1,000 images for each cell count. However, only a subset of Cell-200 with only odd cell counts and 10 images per count (1,000 training images in total) is used for the GAN training. | Provide a detailed description of the following dataset: Cell-200 |
RealSRSet | 20 real low-resolution images selected from existing datasets or downloaded from the internet. | Provide a detailed description of the following dataset: RealSRSet |
L3CubeMahaSent | L3CubeMahaSent is a large publicly available Marathi sentiment analysis dataset. It consists of Marathi tweets which are manually labelled.
This dataset contains a total of 18,378 tweets which are classified into three classes - Positive (1), Negative (-1) and Neutral (0). All tweets are present in their original form, without any preprocessing.
Out of these, 15,864 tweets are considered for splitting them into train, test and validation datasets. This has been done to avoid class imbalance in the dataset.
The remaining 2,514 tweets are also provided in a separate sheet. | Provide a detailed description of the following dataset: L3CubeMahaSent |
ACRE | Abstract Causal REasoning (ACRE) is a dataset for the systematic evaluation of current vision systems in causal induction, i.e., identifying unobservable mechanisms that lead to the observable relations among variables.
Each split of the dataset is structured as follows:
```
config/
train.json
val.json
test.json
images/
ACRE_train_00*.png
ACRE_val_00*.png
ACRE_test_00*.png
scenes/
ACRE_train_00*.json
ACRE_val_00*.json
ACRE_test_00*.json
```
Each image file in the images folder has a corresponding scene description file in scenes with the same name (except for the extension).
Each ACRE problem is named following the pattern `ACRE_{train/val/test}_{6_digit_problem_idx}_{2_digit_panel_idx}`. | Provide a detailed description of the following dataset: ACRE |
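For the ACRE layout and naming pattern described above, a small helper can build the panel file stem and pair an image with its scene description. The directory names follow the listing for one split; everything else here is an assumption for illustration.
```python
# Minimal sketch: build a panel file stem from the naming pattern above and
# load the matching image path and scene JSON.
import json
import os

def panel_stem(split, problem_idx, panel_idx):
    """E.g. panel_stem('train', 12, 3) -> 'ACRE_train_000012_03'."""
    return f"ACRE_{split}_{problem_idx:06d}_{panel_idx:02d}"

def load_panel(root, split, problem_idx, panel_idx):
    stem = panel_stem(split, problem_idx, panel_idx)
    image_path = os.path.join(root, "images", stem + ".png")
    scene_path = os.path.join(root, "scenes", stem + ".json")
    with open(scene_path, encoding="utf-8") as f:
        scene = json.load(f)
    return image_path, scene
```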
Machine Paraphrase Corpus (MPC) | This dataset is used to train and evaluate models for the detection of machine-paraphrased text.
The training set consists of 200,767 paragraphs (98,282 original, 102,485 paraphrased) extracted from 8,024 Wikipedia (English) articles (4,012 original, 4,012 paraphrased using the SpinBot API).
The test set is divided into 3 subsets: one created from preprints of research papers on arXiv, one from graduation theses, and one from Wikipedia articles. Additionally, different machine-paraphrasing methods were used.
Test sets:
```
SpinBot:
arXiv - Original - 20,966; Spun - 20,867
Theses - Original - 5,226; Spun - 3,463
Wikipedia - Original - 39,241; Spun - 40,729
SpinnerChief-4W:
arXiv - Original - 20,966; Spun - 21,671
Theses - Original - 2,379; Spun - 2,941
Wikipedia - Original - 39,241; Spun - 39,618
SpinnerChief-2W:
arXiv - Original - 20,966; Spun - 21,719
Theses - Original - 2,379; Spun - 2,941
Wikipedia - Original - 39,241; Spun - 39,697
``` | Provide a detailed description of the following dataset: Machine Paraphrase Corpus (MPC) |
RAVDESS | The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 7,356 files (total size: 24.8 GB). The database contains 24 professional actors (12 female, 12 male), vocalizing two lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression. All conditions are available in three modality formats: Audio-only (16bit, 48kHz .wav), Audio-Video (720p H.264, AAC 48kHz, .mp4), and Video-only (no sound). Note, there are no song files for Actor_18.
Paper: [The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English](https://doi.org/10.1371/journal.pone.0196391)
Source: [Zenodo](https://zenodo.org/record/1188976#.YFZuJ0j7SL8) | Provide a detailed description of the following dataset: RAVDESS |
CaSiNo | CaSiNo is a dataset of 1030 negotiation dialogues in English. To create the dataset, two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements. This design keeps the task tractable, while still facilitating linguistically rich and personal conversations. | Provide a detailed description of the following dataset: CaSiNo |
SUTD-TrafficQA | SUTD-TrafficQA (Singapore University of Technology and Design - Traffic Question Answering) is a dataset which takes the form of video QA based on 10,080 in-the-wild videos and annotated 62,535 QA pairs, for benchmarking the cognitive capability of causal inference and event understanding models in complex traffic scenarios. Specifically, the dataset proposes 6 challenging reasoning tasks corresponding to various traffic scenarios, so as to evaluate the reasoning capability over different kinds of complex yet practical traffic events. | Provide a detailed description of the following dataset: SUTD-TrafficQA |
LaboroTVSpeech | LaboroTVSpeech is a large-scale Japanese speech corpus built from broadcast TV recordings and their subtitles. It contains over 2,000 hours of speech. | Provide a detailed description of the following dataset: LaboroTVSpeech |
OFDIW | OnFocus Detection In the Wild (OFDIW) is an onfocus detection dataset. It consists of 20,623 images in unconstrained capture conditions (thus called "in the wild") and contains individuals with diverse emotions, ages, facial characteristics, and rich interactions with surrounding objects and background scenes. The images are collected from the LFW dataset and the Oxford-IIIT Pet dataset. Onfocus detection aims at identifying whether the focus of the individual captured by a camera is on the camera or not. | Provide a detailed description of the following dataset: OFDIW |
P-OCT | The entire dataset consists of 61 different subjects, for each of which 12 radial OCT B-scans are collected at the Ophthalmology Department of Shanghai General Hospital by using DRI OCT-1 Atlantis (Topcon Corporation, Tokyo, Japan). The image size is 1024 × 992 pixels, corresponding to a field of view of 20.48 mm × 7.94 mm. For each subject, 2 radial OCT B-scans were randomly selected to ensure mutual exclusion. Two graders annotated these images manually through ITK-SNAP software into the optic disc and nine retinal layers under the supervision of a glaucoma specialist.
For more details, please refer to our [paper](https://www.osapublishing.org/boe/fulltext.cfm?uri=boe-12-4-2204). | Provide a detailed description of the following dataset: P-OCT |
N-MNIST | Brief Description
The Neuromorphic-MNIST (N-MNIST) dataset is a spiking version of the original frame-based MNIST dataset. It consists of the same 60 000 training and 10 000 testing samples as the original MNIST dataset, and is captured at the same visual scale as the original MNIST dataset (28x28 pixels). The N-MNIST dataset was captured by mounting the ATIS sensor on a motorized pan-tilt unit and having the sensor move while it views MNIST examples on an LCD monitor as shown in this video. A full description of the dataset and how it was created can be found in the paper below. Please cite this paper if you make use of the dataset.
Orchard, G.; Cohen, G.; Jayawant, A.; and Thakor, N. “Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades", Frontiers in Neuroscience, vol.9, no.437, Oct. 2015 | Provide a detailed description of the following dataset: N-MNIST |
SBCoseg | The SBCoseg dataset includes 889 groups of images and each group consists of 18 images with a common object, leading to 16,002 images in total. The whole dataset is divided into five subsets: with ECFB, with TR, with MH, with SD, and Normal (normal data). The five subsets contain 193, 251, 82, 83, and 280 image groups, respectively. Each original image is in JPG format with a pixel size of 360 × 360, and each ground-truth image is in PNG format. | Provide a detailed description of the following dataset: SBCoseg |
LReID | LReID is a benchmark for lifelong person re-identification. It has been built using existing datasets, and it consists of two subsets: LReID-Seen and LReID-Unseen.
LReID-Seen contains 40,459 training images of the 2,500 identities selected from the following datasets: [CUHK03](cuhk03), [Market-1501](market-1501), [MSMT17 V2](msmt17), [DukeMTMC-ReID](dukemtmc-reid), [CUHK-SYSU ReID](cuhk-sysu). This is used to test a model's performance on seen domains.
LReID-Unseen contains 9,854 images from 3,594 identities from the following datasets: [VIPeR](viper), [PRID](prid2011), [GRID](grid), i-LIDS, [CUHK01](cuhk01), [CUHK02](cuhk02), [SenseReID](sensereid). This subset is used to test the model's generalisation capabilities to unseen domains. | Provide a detailed description of the following dataset: LReID |
SenseReID | SenseReID is a person re-identification dataset for evaluating ReID models. It is captured from real surveillance cameras and the person bounding boxes are obtained from state-of-the-art detection algorithm. The dataset contains 1,717 identities in total. | Provide a detailed description of the following dataset: SenseReID |
DIP-IMU | Dataset consisting of IMU measurements and corresponding SMPL poses. Participants were wearing 17 IMU sensors and reference SMPL poses were obtained by running the SIP optimization with all 17 sensors. | Provide a detailed description of the following dataset: DIP-IMU |
Twitter Abusive Context | This dataset for abusive content detection on Twitter consists of two sets of annotations for the same set of tweets: one where the human annotators had access to the tweet's surrounding context and one where they did not. | Provide a detailed description of the following dataset: Twitter Abusive Context |
TCR-pMHC | 10x Genomics dataset of sequenced TCRs barcoded by a panel of pMHCs (arranged on a dextramer) | Provide a detailed description of the following dataset: TCR-pMHC |
TCR-CMV | Adaptive Biotechnologies' dataset of sequenced T cell repertoires labelled by patient age, HLA type, and CMV serostatus | Provide a detailed description of the following dataset: TCR-CMV |
GLUCOSE | **GLUCOSE** is a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. To construct GLUCOSE, we drew on cognitive psychology to identify ten dimensions of causal explanation, focusing on events, states, motivations, and emotions. Each GLUCOSE entry includes a story-specific causal statement paired with an inference rule generalized from the statement. | Provide a detailed description of the following dataset: GLUCOSE |
ArtDL | ArtDL is a novel painting data set for iconography classification composed of images collected from online sources. Most of the paintings are from the Renaissance period and depict scenes or characters of Christian art. The data set is annotated with classes representing specific characters belonging to the Iconclass classification system. | Provide a detailed description of the following dataset: ArtDL |
INSTRE | **INSTRE** is a benchmark for INSTance-level visual object REtrieval and REcognition. INSTRE has the following major properties: (1) balanced data scale,
(2) more diverse intra-class instance variations, (3) cluttered and less contextual backgrounds, (4) object localization annotation for each image, (5) well-manipulated double-labelled images for measuring the multiple-object (within one image) case.
The whole dataset is split into three disjoint subsets: INSTRE-S1 (for single object case 1), INSTRE-S2 (for single object case 2) and INSTRE-M (for the multiple object case). INSTRE-S1 and INSTRE-S2 are collected for measuring the single object case, and both have 100 object classes. INSTRE-S1 contains 11,011 images and INSTRE-S2 contains 12,059 images. | Provide a detailed description of the following dataset: INSTRE |
TrackML challenge Accuracy phase dataset | The dataset comprises multiple independent events, where each event contains simulated measurements (essentially 3D points) of particles generated in a collision between proton bunches at the Large Hadron Collider at CERN. The goal of the tracking machine learning challenge is to group the recorded measurements, or hits, for each event into tracks: sets of hits that belong to the same initial particle. A solution must uniquely associate each hit to one track. The training dataset contains the recorded hits, their ground-truth counterparts and their association to particles, and the initial parameters of those particles. The test dataset contains only the recorded hits.
The dataset was used for the Accuracy Phase of the Tracking Machine Learning challenge on Kaggle.
See the home page URL for more details. | Provide a detailed description of the following dataset: TrackML challenge Accuracy phase dataset |
fGn Traffic Traces | Fractional Gaussian noise (fGn) series used in the article to develop the simulations. | Provide a detailed description of the following dataset: fGn Traffic Traces |
PIE | PIE is a new dataset for studying pedestrian behavior in traffic. PIE contains over 6 hours of footage recorded in typical traffic scenes with an on-board camera. It also provides accurate vehicle information from an OBD sensor (vehicle speed, heading direction and GPS coordinates) synchronized with the video footage.
Rich spatial and behavioral annotations are available for pedestrians and vehicles that potentially interact with the ego-vehicle as well as for the relevant elements of infrastructure (traffic lights, signs and zebra crossings).
There are over 300K labeled video frames with 1842 pedestrian samples making this the largest publicly available dataset for studying pedestrian behavior in traffic. | Provide a detailed description of the following dataset: PIE |
HEV-I | Honda Egocentric View-Intersection Dataset (HEV-I) is introduced to enable research on traffic participants interaction modelling, future object localization, as well as learning driver action in challenging driving scenarios. The dataset includes 230 video clips of real human driving in different intersections from the San Francisco Bay Area, collected using an instrumented vehicle equipped with different sensors including cameras, GPS/IMU, and vehicle states signals. | Provide a detailed description of the following dataset: HEV-I |
Dry Bean Dataset | Seven different types of dry beans were used in this research, taking into account features such as form, shape, type, and structure according to the market situation. A computer vision system was developed to distinguish seven different registered varieties of dry beans with similar features in order to obtain uniform seed classification. For the classification model, images of 13,611 grains of 7 different registered dry beans were taken with a high-resolution camera. Bean images obtained by the computer vision system were subjected to segmentation and feature extraction stages, and a total of 16 features (12 dimensions and 4 shape forms) were obtained from the grains. | Provide a detailed description of the following dataset: Dry Bean Dataset |
3D AffordanceNet | 3D AffordanceNet is a dataset of 23k shapes for visual affordance. It consists of 56,307 well-defined affordance information annotations for 22,949 shapes covering 18 affordance classes and 23 semantic object categories. | Provide a detailed description of the following dataset: 3D AffordanceNet |
RUSS Dataset | RUSS (Rapid Universal Support Service) is a dataset that consists of a collection of 741 real-world step-by-step natural language instructions (raw and annotated) from the open web, and for each: its corresponding webpage DOM, ground-truth ThingTalk, and ground-truth actions. | Provide a detailed description of the following dataset: RUSS Dataset |
AGQA | Action Genome Question Answering (AGQA) is a benchmark for compositional spatio-temporal reasoning. AGQA contains 192M unbalanced question answer pairs for 9.6K videos. It also contains a balanced subset of 3.9M question answer pairs, 3 orders of magnitude larger than existing benchmarks, that minimizes bias by balancing the answer distributions and types of question structures.
AGQA introduces multiple training/test splits to test for various reasoning abilities, including generalization to novel compositions, to indirect references, and to more compositional steps. | Provide a detailed description of the following dataset: AGQA |
MSRB | MSRB is a benchmarking dataset for marine snow removal of underwater images. Marine snow is one of the main degradation sources of underwater images that are caused by small particles, e.g., organic matter and sand, between the underwater scene and photosensors. The dataset consists of large-scale pairs of ground-truth and degraded images to calculate objective qualities for marine snow removal and to train a deep neural network. We propose two marine snow removal tasks using the dataset and show the first benchmarking results of marine snow removal. | Provide a detailed description of the following dataset: MSRB |
RoomR | The task of Room Rearrangement consists of an agent exploring a room and recording objects' initial configurations. The agent is removed and the poses and states (e.g., open/closed) of some objects in the room are changed. The agent must then restore the initial configurations of all objects in the room.
RoomR includes 6,000 distinct rearrangement settings involving 72 different object types in 120 scenes. | Provide a detailed description of the following dataset: RoomR |
Libri-adhoc40 | Libri-adhoc40 is a synchronized speech corpus which collects Librispeech data replayed by loudspeakers and recorded by ad-hoc microphone arrays of 40 strongly synchronized distributed nodes in a real office environment. In addition, to provide the evaluation target for speech front-end processing and other applications, the authors also recorded the replayed speech in an anechoic chamber. | Provide a detailed description of the following dataset: Libri-adhoc40 |
Food2K | Food2K is a large food recognition dataset with 2,000 categories and over 1 million images. Compared with existing food recognition datasets, Food2K surpasses them in both categories and images by one order of magnitude, and thus establishes a new challenging benchmark to develop advanced models for food visual representation learning. Food2K can be further explored to benefit more food-relevant tasks, including emerging and more complex ones (e.g., nutritional understanding of food), and models trained on Food2K can be expected to serve as backbones that improve the performance of more food-relevant tasks. | Provide a detailed description of the following dataset: Food2K |
U.S. Broadband Coverage | The U.S. Broadband Coverage data set is a publicly available dataset that reports broadband coverage percentages at a zip-code level. The authors have used differential privacy to guarantee that the privacy of individual households is preserved. The data set also contains error range estimates, providing information on the expected error introduced by differential privacy per zip code. | Provide a detailed description of the following dataset: U.S. Broadband Coverage |
Win-Fail Action Understanding | First of its kind paired win-fail action understanding dataset with samples from the following domains: “General Stunts,” “Internet Wins-Fails,” “Trick Shots,” & “Party Games.” The task is to identify successful and failed attempts at various activities. Unlike existing action recognition datasets, intra-class variation is high making the task challenging, yet feasible. | Provide a detailed description of the following dataset: Win-Fail Action Understanding |
Multimodal PISA | Dataset for multimodal skills assessment focusing on assessing piano player’s skill level. Annotations include player's skills level, and song difficulty level. Bounding box annotations around pianists' hands are also provided. | Provide a detailed description of the following dataset: Multimodal PISA |
iNat2021 | iNat2021 is a large-scale image dataset collected and annotated by community scientists that contains over 2.7M images from 10k different species.
To make the dataset more accessible the authors have also created a "mini" training dataset with 50 examples per species for a total of 500K images. Each species has 10 validation images, for a total of 100k validation images. There are a total of 500,000 test images. In addition to its overall scale, the main distinguishing feature of iNat2021 is that it contains at least 152 images in the training set for each species. | Provide a detailed description of the following dataset: iNat2021 |
AMT Objects | AMT Objects is a large dataset of object centric videos suitable for training and benchmarking models for generating 3D models of objects from a small number of photos of the objects. The dataset consists of multiple views of a large collection of object instances.
The dataset contains 7 object categories from the MS COCO classes: apple, sandwich, orange, donut, banana, carrot and hydrant. For each class, annotators were asked to collect a video by looking ‘around’ a class instance, resulting in a turntable video. The dataset contains 169-457 videos per class. For each class, the videos were randomly split into training and testing videos in an 8:1 ratio. | Provide a detailed description of the following dataset: AMT Objects |
RepLab 2013 | RepLab 2013 dataset uses Twitter data in English and Spanish (more than 142,000 tweets). The balance between both languages depends on the availability of data for each of the entities included in the dataset. The corpus consists of a collection of tweets referring to a selected set of 61 entities from four domains: automotive, banking, universities and music/artists. The domain selection was done to offer a variety of scenarios for reputation studies.
Crawling was performed during the period from the 1st June 2012 till the 31st Dec 2012 using the entity’s canonical name as query. For each entity, at least 2,200 tweets are collected: at least 700 tweets at the beginning of the timeline are used as training set, and at least 1,500 last tweets are reserved for the test set. The corpus also comprises additional background tweets for each entity (up to 50,000 tweets, with a large variability across entities). This distribution was set in this way to obtain a temporal separation (ideally of several months) between the training and test data.
Note that the final amount of available tweets in these sets may be lower, since some posts may have been deleted by the users: in order to respect Twitter’s terms of service, we do not provide the contents of the tweets. The tweet identifiers can be used to retrieve the texts of the posts. We provide a download tool that is similar to the mechanism used in the TREC Microblog Track in 2011 and 2012.
For more information, please refer to the [RepLab 2013 Overview's paper](https://dl.acm.org/doi/10.1007/978-3-642-40802-1_31). | Provide a detailed description of the following dataset: RepLab 2013 |
ABCD | Action-Based Conversations Dataset (ABCD) is a goal-oriented dialogue fully-labeled dataset with over 10K human-to-human dialogues containing 55 distinct user intents requiring unique sequences of actions constrained by policies to achieve task success. The dataset is proposed to study customer service dialogue systems in more realistic settings. | Provide a detailed description of the following dataset: ABCD |
VidSitu | VidSitu is a dataset for the task of semantic role labeling in videos (VidSRL). It is a large-scale video understanding data source with 29K 10-second movie clips richly annotated with a verb and semantic-roles every 2 seconds. Entities are co-referenced across events within a movie clip and events are connected to each other via event-event relations. Clips in VidSitu are drawn from a large collection of movies (∼3K) and have been chosen to be both complex (∼4.2 unique verbs within a video) as well as diverse (∼200 verbs have more than 100 annotations each). | Provide a detailed description of the following dataset: VidSitu |
NaturalProofs | The NaturalProofs Dataset is a large-scale dataset for studying mathematical reasoning in natural language. NaturalProofs consists of roughly 20,000 theorem statements and proofs, 12,500 definitions, and 1,000 additional pages (e.g. axioms, corollaries) derived from ProofWiki, an online compendium of mathematical proofs written by a community of contributors. | Provide a detailed description of the following dataset: NaturalProofs |
Mirrored-Human | Mirrored-Human is a dataset for 3D pose estimation from a single view. It covers a large variety of human subjects, poses and backgrounds. The images are collected from the internet and consist of people in front of mirrors, where both the person and the reflected image are visible. Actions cover dancing, fitness, mirror installation and swing practice. | Provide a detailed description of the following dataset: Mirrored-Human |
Omniverse Object dataset | Omniverse Object is a large-scale synthetic dataset of 60,000 images including both transparent and opaque objects in different scenes. It is used for depth completion of transparent objects from a single RGB-D view. | Provide a detailed description of the following dataset: Omniverse Object dataset |
Auto-KWS | Auto-KWS is a dataset for customized keyword spotting, the task of detecting spoken keywords. The dataset closely resembles real-world scenarios, as each recorder is assigned a unique wake-up word and can freely choose their recording environment and familiar dialect.
All data is recorded by near-field mobile phones (located in front of the speakers at around 0.2 m distance). Each sample is recorded as a single-channel, 16-bit stream at a 16 kHz sampling rate. There are 4 datasets: a training dataset, a practice dataset, a feedback dataset, and a private dataset. The training dataset, recorded from around 100 recorders, is used for participants to develop Auto-KWS solutions. The practice dataset contains data from 5 speakers, each with 5 enrollment audio recordings and several test recordings. The practice dataset, together with the downloadable docker, provides an example of how the platform would call the participants' code. Both the training and practice datasets can be downloaded for local debugging. The feedback and private datasets have the same format as the practice dataset and are used for the final evaluation, and thus will be hidden from participants. | Provide a detailed description of the following dataset: Auto-KWS |
LemgoRL | LemgoRL is an open-source benchmark tool for traffic signal control designed to train reinforcement learning agents in a highly realistic simulation scenario with the aim of reducing the Sim2Real gap. In addition to the realistic simulation model, LemgoRL encompasses a traffic signal logic unit that ensures compliance with all regulatory and safety requirements. LemgoRL offers the same interface as the well-known OpenAI gym toolkit to enable easy deployment in existing research work. | Provide a detailed description of the following dataset: LemgoRL |
LSARS | In an active e-commerce environment, customers process a large number of reviews when deciding on whether to buy a product or not. Abstractive Multi-Review Summarization aims to assist users to efficiently consume the reviews that are the most relevant to them. We propose the first large-scale abstractive multi-review summarization dataset that leverages more than 17.9 billion raw reviews and uses novel aspect-alignment techniques based on aspect annotations. Furthermore, we demonstrate that one can generate higher-quality review summaries by using a novel aspect-alignment-based model. Results from both automatic and human evaluation show that the proposed dataset plus the innovative aspect-alignment model can generate high-quality and trustful review summaries.
Paper: [Large Scale Abstractive Multi-Review Summarization (LSARS) via Aspect Alignment](https://dl.acm.org/doi/abs/10.1145/3397271.3401439) | Provide a detailed description of the following dataset: LSARS |
speechocean762 | speechocean762 is an open-source speech corpus designed for pronunciation assessment, consisting of 5,000 English utterances from 250 non-native speakers, where half of the speakers are children. Five experts annotated each of the utterances at sentence level, word level and phoneme level. This corpus may be used freely for commercial and non-commercial purposes. To avoid subjective bias, each expert scores independently under the same metric. | Provide a detailed description of the following dataset: speechocean762 |
Timers and Such | Timers and Such is an open source dataset of spoken English commands for common voice control use cases involving numbers. The dataset has four intents, corresponding to four common offline voice assistant uses: SetTimer, SetAlarm, SimpleMath, and UnitConversion. The semantic label for each utterance is a dictionary with the intent and a number of slots.
All recordings were converted from their original formats to single-channel 16,000 Hz .wav files. | Provide a detailed description of the following dataset: Timers and Such |
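To make the label format described for Timers and Such above concrete, the sketch below shows what such an intent-plus-slots dictionary could look like. The slot names and value formats are illustrative assumptions, not a verified excerpt of the released annotations.
```python
# Illustrative sketch only: a semantic label of the kind described above,
# a dictionary holding the intent plus a number of slots.
example_label = {
    "intent": "SetTimer",
    "slots": {"number": 5, "unit": "minutes"},  # hypothetical slot names
}

def describe(label):
    slot_str = ", ".join(f"{k}={v}" for k, v in label["slots"].items())
    return f"{label['intent']}({slot_str})"

print(describe(example_label))  # SetTimer(number=5, unit=minutes)
```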
BS-RSCD | BS-RSCD is a dataset for rolling shutter correction and deblurring (RSCD). The dataset includes both ego-motion and object-motion in dynamic scenes. Real distorted and blurry videos with corresponding ground truth are recorded simultaneously via a beam-splitter-based acquisition system. | Provide a detailed description of the following dataset: BS-RSCD |
SPGISpeech | SPGISpeech (pronounced “speegie-speech”) is a large-scale transcription dataset, freely available for academic research. SPGISpeech is a collection of 5,000 hours of professionally-transcribed financial audio. Contrary to previous transcription datasets, SPGISpeech contains global English accents, strongly varying audio quality, as well as both spontaneous and presentation-style speech. The transcripts have each been cross-checked by multiple professional editors for high accuracy and are fully formatted, including sentence structure and capitalization.
SPGISpeech consists of 5,000 hours of recorded company earnings calls and associated manual transcription text. The original calls were split based on silences into slices ranging from 5 to 15 seconds to allow easy training of a speech recognition system. The format of each WAV file is single channel, 16kHz, 16 bit audio.
Transcription text represents the output of several stages of manual post-processing. As such, the text contains polished English orthography following a detailed style guide, including proper casing, punctuation, and denormalized non-standard words such as numbers or acronyms, making SPGISpeech suited for training fully formatted end-to-end models.
In general, the transcriptions aim at professional utility rather than linguistic fidelity, and the correspondence between verbatim speech and finalized text is therefore not exact, resulting in the occasional purposeful omission of meeting operator instructions or certain verbal pleasantries. | Provide a detailed description of the following dataset: SPGISpeech |
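Given the audio format stated in the SPGISpeech entry above (single-channel, 16 kHz, 16-bit WAV slices of 5 to 15 seconds), a quick sanity check can be written with the standard `wave` module. The file path below is hypothetical.
```python
# Minimal sketch: verify that a WAV slice matches the stated SPGISpeech format.
import wave

def check_slice(path):
    with wave.open(path, "rb") as wav:
        n_channels = wav.getnchannels()
        sample_rate = wav.getframerate()
        sample_width = wav.getsampwidth()          # bytes per sample (2 == 16-bit)
        duration = wav.getnframes() / sample_rate  # seconds
    assert n_channels == 1 and sample_rate == 16000 and sample_width == 2
    assert 5.0 <= duration <= 15.0
    return duration

# Example usage (hypothetical path):
# print(check_slice("spgispeech/train/example_slice.wav"))
```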
DogWhistle | Cant (also known as doublespeak, cryptolect, argot, anti-language or secret language) is important for understanding advertising, comedies and dog-whistle politics. DogWhistle is a large and diverse Chinese dataset for creating and understanding cant from a computational linguistics perspective. | Provide a detailed description of the following dataset: DogWhistle |
Summarizing Source Code using a Neural Attention Model | Presents a new dataset of code snippets with short descriptions, created using data gathered from Stackoverflow, a popular programming help website. Since access is open and unrestricted, the content is inherently noisy (ungrammatical, non-parsable, lacking content). | Provide a detailed description of the following dataset: Summarizing Source Code using a Neural Attention Model |
iPer | **iPer** is a new dataset, with diverse styles of clothes in videos, for the evaluation of human motion imitation, appearance transfer, and novel view synthesis. There are 30 subjects of different conditions of shape, height, and gender. Each subject wears different clothes and performs an A-pose video and a video with random actions. There are 103 clothes in total. The whole dataset contains 206 video sequences with 241,564 frames. | Provide a detailed description of the following dataset: iPer |
EasyCall | **EasyCall** is a new dysarthric speech command dataset in Italian. The dataset consists of 21,386 audio recordings from 24 healthy and 31 dysarthric speakers, whose individual degree of speech impairment was assessed by neurologists through the Therapy Outcome Measure. The corpus aims at providing a resource for the development of ASR-based assistive technologies for patients with dysarthria. In particular, it may be exploited to develop a voice-controlled contact application for commercial smartphones, aiming at improving dysarthric patients' ability to communicate with their family and caregivers. Before recording the dataset, participants were administered a survey to evaluate which commands are more likely to be employed by dysarthric individuals in a voice-controlled contact application. In addition, the dataset includes a list of non-commands (i.e., words near/inside commands or phonetically close to commands) that can be leveraged to build a more robust command recognition system. | Provide a detailed description of the following dataset: EasyCall |
UFO Cherry Tree Point Clouds | UFO Cherry Tree Point Clouds consists of a collection of 82 scanned Upright Fruiting Offshoot (UFO) cherry tree point clouds.
Paper: [Semantics-guided Skeletonization of Sweet Cherry Trees for Robotic Pruning](https://arxiv.org/pdf/2103.02833.pdf)
Image source: [Semantics-guided Skeletonization of Sweet Cherry Trees for Robotic Pruning](https://arxiv.org/pdf/2103.02833.pdf) | Provide a detailed description of the following dataset: UFO Cherry Tree Point Clouds |
PATS | PATS dataset consists of a diverse and large amount of aligned pose, audio and transcripts. With this dataset, we hope to provide a benchmark that would help develop technologies for virtual agents which generate natural and relevant gestures.
[Webpage](https://chahuja.com/pats)
[Scripts](https://github.com/chahuja/pats) | Provide a detailed description of the following dataset: PATS |
OCD | OCD (Out-of-Context Dataset) is a synthetic dataset with fine-grained control over scene context. The images are generated using a 3D simulation engine in the VirtualHome environment, which allows to control the gravity, object co-occurrences and relative sizes across 36 object categories in a virtual household environment. | Provide a detailed description of the following dataset: OCD |
VGG-SS | VGG-SS (VGG Sound Source) is a benchmark for evaluating sound source localisation in videos. The dataset consists of a new set of annotations for the recently-introduced [VGG-Sound dataset](vgg-sound), where the sound sources visible in each video clip are explicitly marked with bounding box annotations. This dataset is 20 times larger than analogous existing ones, contains 5K videos spanning over 200 categories, and, differently from Flickr SoundNet, is video-based. | Provide a detailed description of the following dataset: VGG-SS |
RadarScenes | RadarScenes is a real-world radar point cloud dataset for automotive applications.
It consists of measurements and point-wise annotations from more than four hours of driving collected by four series radar sensors mounted on one test vehicle. Individual detections of dynamic objects were manually grouped to clusters and labeled afterwards. The purpose of this data set is to enable the development of novel (machine learning-based) radar perception algorithms with the focus on moving road users. Images of the recorded sequences were captured using a documentary camera. | Provide a detailed description of the following dataset: RadarScenes |
BiasCorp | BiasCorp is a dataset for racism detection containing 139,090 comments and news segments from three specific sources: Fox News, BreitbartNews and YouTube. | Provide a detailed description of the following dataset: BiasCorp |
GovReport | GovReport is a dataset for long document summarization, with significantly longer documents and summaries. It consists of reports written by government research agencies including Congressional Research Service and U.S. Government Accountability Office.
Compared with other long document summarization datasets, the GovReport dataset has longer summaries and documents and requires reading more context to cover the salient words to be summarized. | Provide a detailed description of the following dataset: GovReport |
PartialSpoof | PartialSpoof is a dataset of partially-spoofed data to evaluate detection of partially-spoofed speech data. It has been built based on the ASVspoof 2019 LA database since the latter covers 17 types of spoofed data produced by advanced speech synthesizers, voice converters, and hybrids. The authors used the same set of bona fide data from the ASVspoof 2019 LA database but created partially spoofed audio from the ASVspoof 2019 LA data. | Provide a detailed description of the following dataset: PartialSpoof |
Movies and tropes, March 2020 | This dataset is a hash that uses as key the normalized movie name (for instance, `TheAvengers`) and as value an array of all the tropes used in that specific movie, as reported by TVTropes.org users.
Note: JSON scraped from tvtropes.org, containing the list of all movies and tropes used in them.
Image source: [Tropes in films: an initial analysis](https://arxiv.org/pdf/2006.05380.pdf) | Provide a detailed description of the following dataset: Movies and tropes, March 2020 |
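The hash structure described above (normalized movie name as key, array of trope names as value) maps directly onto a JSON object; the sketch below shows the assumed shape and how one might look up a single movie. The trope names and the file name are placeholders.
```python
# Minimal sketch of the assumed JSON structure: movie name -> list of tropes.
import json

example = {
    "TheAvengers": ["BigBad", "FiveManBand"],  # placeholder trope names
}

def tropes_for(path, movie):
    """Load the scraped JSON file and return the trope list for one movie."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return data.get(movie, [])

# Example usage (hypothetical file name):
# print(tropes_for("films_and_tropes_march_2020.json", "TheAvengers"))
```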
Casual Conversations | **Casual Conversations** dataset is designed to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones and ambient lighting conditions.
Casual Conversations is composed of over 45,000 videos (3,011 participants) and intended to be used for assessing the performance of already trained models in computer vision and audio applications for the purposes permitted in the data user agreement. The videos feature paid individuals who agreed to participate in the project and explicitly provided age and gender labels themselves. The videos were recorded in the U.S. with a diverse set of adults in various age, gender and apparent skin tone groups. A group of trained annotators labeled the participants’ apparent skin tone using the Fitzpatrick scale in addition to annotations of videos recorded in low ambient lighting conditions. | Provide a detailed description of the following dataset: Casual Conversations |