Columns: dataset_name (string, 2–128 chars), description (string, 1–9.7k chars), prompt (string, 59–185 chars)
DeepBeam
It contains 19 HDF5 files representing a data collection campaign run on the NI mmWave Transceiver System with four SiBeam 60 GHz radio heads and on two Pi-Radio digital 60 GHz radios. Please refer to the website deepbeam.net.
Provide a detailed description of the following dataset: DeepBeam
Amharic Error Corpus
Amharic Error Corpus is a manually annotated spelling error corpus for Amharic, the lingua franca of Ethiopia. The corpus is designed to be used for the evaluation of spelling error detection and correction. The misspellings are tagged as non-word and real-word errors. In addition, the contextual information available in the corpus makes it useful in dealing with both types of spelling errors.
Provide a detailed description of the following dataset: Amharic Error Corpus
BarkNet 1.0
23,000 cropped images of tree bark, covering 23 species of trees around Quebec City, Canada. The images were captured at a distance of 20-60 cm from the trunk. Labels include the individual tree ID, its species, and its DBH (diameter at breast height). Pictures were taken with four different devices: Nexus 5, Samsung Galaxy S5, Samsung Galaxy S7, and a Panasonic Lumix DMC-TS5 camera. The dataset is sufficiently large to train a deep network such as ResNet for species recognition.
Provide a detailed description of the following dataset: BarkNet 1.0
Epilepsy seizure prediction
The original dataset from the reference consists of 5 different folders, each with 100 files, each file representing a single subject/person. Each file is a recording of brain activity for 23.6 seconds. The corresponding time series is sampled into 4097 data points, each data point being the value of the EEG recording at a different point in time. So in total we have 500 individuals, each with 4097 data points over 23.6 seconds. We divided and shuffled every 4097 data points into 23 chunks; each chunk contains 178 data points for 1 second, and each data point is the value of the EEG recording at a different point in time. So now we have 23 x 500 = 11,500 pieces of information (rows), each containing 178 data points for 1 second (columns); the last column represents the label y in {1, 2, 3, 4, 5}. The response variable is y in column 179; the explanatory variables are X1, X2, ..., X178; y contains the category of the 178-dimensional input vector. Specifically, y in {1, 2, 3, 4, 5}: 5 - eyes open, meaning the EEG signal was recorded while the patient had their eyes open; 4 - eyes closed, meaning the EEG signal was recorded while the patient had their eyes closed; 3 - EEG activity recorded from the healthy brain area of patients for whom the region of the tumor had been identified; 2 - EEG recorded from the area where the tumor was located; 1 - recording of seizure activity. All subjects falling in classes 2, 3, 4, and 5 did not have an epileptic seizure; only subjects in class 1 have epileptic seizure. Our motivation for creating this version of the data was to simplify access to the data via a .csv version of it. Although there are 5 classes, most authors have done binary classification, namely class 1 (epileptic seizure) against the rest.
Provide a detailed description of the following dataset: Epilepsy seizure prediction
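The chunking and labeling scheme described above can be sketched as follows. This is a minimal illustration, not the official loader: the column names X1..X178 and y follow the description, but the data values here are synthetic.

```python
import numpy as np
import pandas as pd

# Build a tiny synthetic frame with the documented layout:
# 178 EEG samples per row (X1..X178) plus a label column y in {1..5}.
rng = np.random.default_rng(0)
cols = [f"X{i}" for i in range(1, 179)]
df = pd.DataFrame(rng.normal(size=(10, 178)), columns=cols)
df["y"] = rng.integers(1, 6, size=10)

# Binary task used by most authors: class 1 (seizure) vs. the rest.
X = df[cols].to_numpy()
y_binary = (df["y"] == 1).astype(int)
```

The same two lines apply unchanged to the real .csv once loaded with `pd.read_csv`.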
SymbolicData
This dataset is a collection of input-label pairs where each input is in the form of a numerical dataset, itself a set of input and output pairs {(x, y)}, and the corresponding label is a string encoding the symbolic expression governing the relationship between variables in the numerical dataset.
Provide a detailed description of the following dataset: SymbolicData
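One input-label pair of the kind described above can be sketched as follows; the expression, variable ranges, and record layout are illustrative assumptions, not the dataset's actual schema.

```python
import numpy as np

# A SymbolicData-style pair: the "input" is a numerical dataset {(x, y)}
# and the "label" is the string form of the expression that generated it.
rng = np.random.default_rng(42)
x = rng.uniform(-2.0, 2.0, size=50)
label = "y = x**2 + 3*x"          # symbolic expression as a string
y = x**2 + 3*x                    # numerical outputs governed by it
sample = {"input": list(zip(x, y)), "label": label}
```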
RadGraph
RadGraph is a dataset of entities and relations in radiology reports based on our novel information extraction schema, consisting of 600 reports with 30K radiologist annotations and 221K reports with 10.5M automatically generated annotations. We release a development dataset, which contains board-certified radiologist annotations for 500 radiology reports from the MIMIC-CXR dataset (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of board-certified radiologist annotations for 100 radiology reports split equally across the MIMIC-CXR and CheXpert datasets. We also release an inference dataset, which contains automatically generated annotations for 220,763 MIMIC-CXR reports (around 6 million entities and 4 million relations) and 500 CheXpert reports (13,783 entities and 9,908 relations) with mappings to associated chest radiographs.
Provide a detailed description of the following dataset: RadGraph
CASIA-Iris-Complex
# Introduction Iris is considered one of the most accurate and reliable biometric modalities. The iris is more stable and distinctive than fingerprint, face, voice, etc., and difficult to replicate for spoof attacks. Although an iris pattern is naturally an ideal identifier, developing a high-performance iris recognition algorithm and transferring it from the laboratory to field application is still a challenging task. In practical applications, an iris recognition system must face various unpredictable iris image degradations. For example, recognition of low-quality iris images, non-cooperative iris images, long-range iris images, and moving iris images are all major problems in iris recognition. We believe that the first step in solving these problems is to design and develop a database of iris images that includes all of these degradations. # Brief Descriptions and Statistics of the Database CASIA-Iris-Complex contains 22,932 images from 292 Asian subjects. It includes two subsets: CASIA-Iris-CX1 and CASIA-Iris-CX2. All images were collected under NIR illumination and two eyes were captured simultaneously.
Provide a detailed description of the following dataset: CASIA-Iris-Complex
Extended YouTube Faces (E-YTF)
The proposed Extended-YouTube Faces (E-YTF) is an extension of the well-known YouTube Faces (YTF) dataset, specifically designed to further push the challenges of face recognition by addressing the problem of open-set face identification from heterogeneous data, i.e., still images vs. videos.
Provide a detailed description of the following dataset: Extended YouTube Faces (E-YTF)
Amazon-PQA
**Amazon-PQA** is a product question-answer dataset. It includes questions and their answers published on the Amazon website, along with the public product information and category (Amazon Browse Node name). It contains more than 8M questions from 1M+ products.
Provide a detailed description of the following dataset: Amazon-PQA
SSL
This is a dataset to benchmark real-time embedded object detection models for RoboCup SSL (Small Size League).
Provide a detailed description of the following dataset: SSL
FilmStills
FilmStills is a dataset of stills taken from a variety of films and TV shows, each concatenated with a color-compressed (with a factor of 2.667) version of itself.
Provide a detailed description of the following dataset: FilmStills
LCO CR Dataset
Cosmic rays in the LCO CR dataset are labeled accurately and consistently across many diverse observations from various instruments. To the best of our knowledge, this is the largest dataset of its kind. It consists of over 4,500 scientific images from Las Cumbres Observatory global telescope network's 23 instruments. Each sample in our dataset is a multi-extension FITS file, including three images, three corresponding CR masks, and three ignore masks. Usage: https://github.com/cy-xu/cosmic-conn For technical questions regarding the Cosmic-CoNN LCO CR dataset, please contact cxu@ucsb.edu. ### Licensing of images and data All images and data derived from observations made at LCO facilities are made available for scientific or educational use, subject to the Creative Commons license BY-CC 2.0 and in the case of science data, subject to an additional proprietary period. This allows for all LCO data and images, which are not subject to a proprietary restriction, to be freely shared and redistributed on the condition that an appropriate attribution is made. Any use of LCO images and data not for scientific or educational purposes, e.g. for commercial purposes, is not permitted unless through an explicit arrangement. Please contact us at image_use@lco.global. ### Acknowledgments and Citations Any scientific publication which results from the use of LCO facilities should include an acknowledgment of this resource: "This work makes use of observations from the Las Cumbres Observatory global telescope network." ### Contact If you would like to use our images for commercial purposes, or if further information or assistance is needed, please contact image_use@lco.global.
Provide a detailed description of the following dataset: LCO CR Dataset
Message Content Rephrasing
We introduce a new task of rephrasing for a more natural virtual assistant. Currently, virtual assistants work in the paradigm of intent-slot tagging and the slot values are directly passed as-is to the execution engine. However, this setup fails in some scenarios such as messaging when the query given by the user needs to be changed before repeating it or sending it to another user. For example, for queries like 'ask my wife if she can pick up the kids' or 'remind me to take my pills', we need to rephrase the content to 'can you pick up the kids' and 'take your pills'. In this paper, we study the problem of rephrasing with messaging as a use case and release a dataset of 3000 pairs of original query and rephrased query. We show that BART, a pre-trained transformer-based masked language model with auto-regressive decoding, is a strong baseline for the task, and show improvements by adding a copy-pointer and copy loss to it. We analyze different trade-offs of BART-based and LSTM-based seq2seq models, and propose a distilled LSTM-based seq2seq as the best practical model.
Provide a detailed description of the following dataset: Message Content Rephrasing
TAU-NIGENS Spatial Sound Events 2021
The TAU-NIGENS Spatial Sound Events 2021 dataset contains multiple spatial sound-scene recordings, consisting of sound events of distinct categories integrated into a variety of acoustical spaces, and from multiple source directions and distances as seen from the recording position. The spatialization of all sound events is based on filtering through real spatial room impulse responses (RIRs), captured in multiple rooms of various shapes, sizes, and acoustical absorption properties. Furthermore, each scene recording is delivered in two spatial recording formats, a microphone array one (MIC), and a first-order Ambisonics one (FOA). The sound events are spatialized as either stationary sound sources in the room, or moving sound sources, in which case time-variant RIRs are used. Each sound event in the sound scene is associated with a single direction-of-arrival (DoA) if static, a trajectory of DoAs if moving, and a temporal onset and offset time. The isolated sound event recordings used for the synthesis of the sound scenes are obtained from the NIGENS general sound events database. These recordings serve as the development dataset for the DCASE 2021 Sound Event Localization and Detection Task of the DCASE 2021 Challenge.
Provide a detailed description of the following dataset: TAU-NIGENS Spatial Sound Events 2021
PAD
**PAD** (Purpose-driven Affordance Dataset) is a dataset for affordance detection, i.e., identifying the potential action possibilities of objects in an image, an important ability for robot perception and manipulation. The dataset consists of 4K images spanning 31 affordance and 72 object categories.
Provide a detailed description of the following dataset: PAD
XL-Sum
**XL-Sum** is a comprehensive and diverse dataset for abstractive summarization comprising 1 million professionally annotated article-summary pairs from BBC, extracted using a set of carefully designed heuristics. The dataset covers 44 languages ranging from low to high-resource, for many of which no public dataset is currently available. XL-Sum is highly abstractive, concise, and of high quality, as indicated by human and intrinsic evaluation.
Provide a detailed description of the following dataset: XL-Sum
TNCR Dataset
We present TNCR, a new table dataset with varying image quality collected from free open source websites. TNCR dataset can be used for table detection in scanned document images and their classification into 5 different classes. TNCR contains 9428 high-quality labeled images. In this paper, we have implemented state-of-the-art deep learning-based methods for table detection to create several strong baselines. Cascade Mask R-CNN with ResNeXt-101-64x4d Backbone Network achieves the highest performance compared to other methods with a precision of 79.7%, recall of 89.8%, and f1 score of 84.4% on the TNCR dataset. We have made TNCR open source in the hope of encouraging more deep learning approaches to table detection, classification and structure recognition. Image source: [https://github.com/abdoelsayed2016/TNCR_Dataset](https://github.com/abdoelsayed2016/TNCR_Dataset)
Provide a detailed description of the following dataset: TNCR Dataset
HKR
The database is written in Cyrillic; Kazakh and Russian share the same 33 characters, and the Kazakh alphabet also contains 9 additional specific characters. This dataset is a collection of forms. All forms in the dataset were generated with LaTeX and subsequently filled out by people in their own handwriting. The database consists of more than 1400 filled forms. There are approximately 63,000 sentences and more than 715,699 symbols produced by approximately 200 different writers. We utilized three different datasets, described as follows: handwritten samples (forms) of keywords in Kazakh and Russian (areas, cities, villages, etc.); the handwritten Kazakh and Russian alphabet in Cyrillic; handwritten samples (forms) of poems in Russian. Image source: [https://github.com/abdoelsayed2016/HKR_Dataset](https://github.com/abdoelsayed2016/HKR_Dataset)
Provide a detailed description of the following dataset: HKR
HT Docking
**HT Docking** is a dataset consisting of 200 million 3D complex structures and 2D structure scores across a consistent set of 13 million "in-stock" molecules over 15 receptors, or binding sites, across the SARS-CoV-2 proteome. It is used to study surrogate model accuracy for protein-ligand docking.
Provide a detailed description of the following dataset: HT Docking
PointQA
**PointQA** is a set of datasets for Visual Question Answering (VQA) that require a pointer to an object in the image to be answered correctly. The different datasets are: PointQA-Local, PointQA-LookTwice and PointQA-General.
Provide a detailed description of the following dataset: PointQA
TinyFace
**TinyFace** is a large-scale face recognition benchmark to facilitate the investigation of natively LRFR (Low Resolution Face Recognition) at large scales (large gallery population sizes) in deep learning. The TinyFace dataset consists of 5,139 labelled facial identities given by 169,403 native LR face images (average 20×16 pixels) designed for 1:N recognition tests. All the LR faces in TinyFace are collected from public web data across a large variety of imaging scenarios, captured under uncontrolled viewing conditions in pose, illumination, occlusion and background.
Provide a detailed description of the following dataset: TinyFace
FB15K237-Refined
FB15K237-Refined is a refined version of FB15k237 by KGRefiner.
Provide a detailed description of the following dataset: FB15K237-Refined
WN18RR Refined
WN18RR Refined is a refined version of [WN18RR](https://paperswithcode.com/dataset/wn18rr) by KGRefiner.
Provide a detailed description of the following dataset: WN18RR Refined
CHORD
CHORD is the first chorus recognition dataset containing 627 songs for public use.
Provide a detailed description of the following dataset: CHORD
TrajAir: A General Aviation Trajectory Dataset
This dataset contains aircraft trajectories in an untowered terminal airspace collected over 8 months surrounding the Pittsburgh-Butler Regional Airport [ICAO:KBTP], a single-runway GA airport, 10 miles north of the city of Pittsburgh, Pennsylvania. The trajectory data is recorded using an on-site setup that includes an ADS-B receiver. The trajectory data provided spans 18 Sept 2020 to 23 Apr 2021 and includes a total of 111 days of data, discounting downtime, repairs, and bad weather days with no traffic. Data is collected from 1:00 AM local time to 11:00 PM local time. The dataset uses an Automatic Dependent Surveillance-Broadcast (ADS-B) receiver placed within the airport premises to capture the trajectory data. The receiver listens to these broadcasts on both the 1090 MHz and 978 MHz frequencies. ADS-B uses satellite navigation to produce an accurate location and timestamp for the targets, which is recorded on-site using our custom setup. Weather data during the data collection period is also included for environmental context. The weather data is obtained post-hoc using the METeorological Aerodrome Reports (METAR) strings generated by the Automated Weather Observing System (AWOS) at KBTP. The raw METAR string is then appended to the raw trajectory data by matching the closest UTC timestamps. We also provide processed data that filters, interpolates and transforms data from a global frame to an airport-centred inertial frame. The inertial frame is centred at one end of the runway with the x-axis along the runway. Trajectories are filtered to aircraft below 6000 ft MSL and within roughly a 5 km radius of the airport origin. We also remove duplicates and interpolate data every second. The processed files also contain wind data, a crucial factor in decision-making, separated into components along and perpendicular to the runway direction.
Provide a detailed description of the following dataset: TrajAir: A General Aviation Trajectory Dataset
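The wind decomposition described above can be sketched with basic trigonometry. This is an illustrative reconstruction, not the dataset's processing code; the function name and sign conventions are assumptions.

```python
import math

def wind_components(speed, wind_dir_deg, runway_heading_deg):
    """Decompose a wind vector into components along and perpendicular
    to the runway direction (angles in degrees, speed in any unit).
    Sign conventions here are illustrative, not the dataset's."""
    rel = math.radians(wind_dir_deg - runway_heading_deg)
    along = speed * math.cos(rel)   # component along the runway
    perp = speed * math.sin(rel)    # crosswind component
    return along, perp

# Wind blowing straight down the runway: all along-track, no crosswind.
along, perp = wind_components(10.0, 90.0, 90.0)
```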
PathQuestion
PathQuestion (PQ) and PathQuestion-Large (PQL) adopt two subsets of Freebase (Bollacker et al., 2008) as knowledge bases. Paths are extracted between two entities spanning two hops (es → r1 → e1 → r2 → a, denoted by -2H) or three hops (es → r1 → e1 → r2 → e2 → r3 → a, denoted by -3H), and natural language questions are then generated with templates. To make the generated questions analogical to real-world questions, paraphrasing templates and synonyms for relations were gathered by searching the Internet and two real-world datasets, WebQuestions (Berant et al., 2013) and WikiAnswers (Fader et al., 2013). In this way, the syntactic structure and surface wording of the generated questions have been greatly enriched.
Provide a detailed description of the following dataset: PathQuestion
PELD
PELD is a text-based emotional dialog dataset with personality traits for speakers. The dialogues in PELD are merged from the emotional dialogues in MELD and EmoryNLP, together with the personality trait annotations from FriendsPersona. The personality traits in PELD are adopted from the personality annotations in 711 different dialogues in FriendsPersona. According to the annotations, a role may exhibit different aspects of its personality in different dialogues. We keep only the personality traits of the six main roles, for confidence, as their annotations are the most frequent. For each of the main roles, we average the annotated personality traits over all their dialogues by $P_n = \frac{1}{K}\sum_{i=1}^K{P_i}$ for simplification, where $K$ is the number of annotations.
Provide a detailed description of the following dataset: PELD
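The averaging step $P_n = \frac{1}{K}\sum_{i=1}^K{P_i}$ can be sketched as below; the trait vectors are made-up values for illustration, assuming each annotation is a vector of five trait scores.

```python
import numpy as np

# Each row is one dialogue's annotated trait vector for a role (K = 3 here);
# the role's final trait vector is the element-wise mean over its K annotations.
annotations = np.array([
    [0.6, 0.4, 0.7, 0.5, 0.3],   # traits annotated in dialogue 1
    [0.8, 0.2, 0.5, 0.5, 0.5],   # dialogue 2
    [0.7, 0.3, 0.6, 0.5, 0.4],   # dialogue 3
])
P_n = annotations.mean(axis=0)   # averaged personality trait vector
```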
VoxLingua107
VoxLingua107 is a dataset for spoken language recognition comprising 6628 hours of speech (62 hours per language on average), accompanied by an evaluation set of 1609 verified utterances.
Provide a detailed description of the following dataset: VoxLingua107
MultiSubs
MultiSubs is a dataset of multilingual subtitles gathered from [the OPUS OpenSubtitles dataset](https://opus.nlpl.eu/OpenSubtitles.php), which in turn was sourced from [opensubtitles.org](http://www.opensubtitles.org/). We have supplemented some text fragments (visually salient nouns in this release) within the subtitles with web images, where the word sense of the fragment has been disambiguated using a cross-lingual approach. We have introduced a fill-in-the-blank task and a lexical translation task to demonstrate the utility of the dataset. Please refer to [our paper](https://arxiv.org/abs/2103.01910) for a more detailed description of the dataset and tasks. MultiSubs will benefit research on visual grounding of words, especially in the context of free-form sentences. Josiah Wang, Pranava Madhyastha, Josiel Figueiredo, Chiraag Lala, Lucia Specia (2021). [MultiSubs: A Large-scale Multimodal and Multilingual Dataset](https://arxiv.org/abs/2103.01910). CoRR, abs/2103.01910. Available at: [https://arxiv.org/abs/2103.01910](https://arxiv.org/abs/2103.01910)
Provide a detailed description of the following dataset: MultiSubs
ISPRS Potsdam
The data set contains 38 patches (of the same size), each consisting of a true orthophoto (TOP) extracted from a larger TOP mosaic.
Provide a detailed description of the following dataset: ISPRS Potsdam
ISPRS Vaihingen
The data set contains 33 patches (of different sizes), each consisting of a true orthophoto (TOP) extracted from a larger TOP mosaic.
Provide a detailed description of the following dataset: ISPRS Vaihingen
BrazilDam Dataset
BrazilDAM is a multi-sensor and multitemporal dataset consisting of multispectral images of ore tailings dams throughout Brazil. Multispectral images captured by the Landsat 8 and Sentinel-2 satellites over the years 2016, 2017, 2018 and 2019 were used. The dataset contains samples collected in different regions, which increases the diversity and representativeness of the characteristics of the dams.
Provide a detailed description of the following dataset: BrazilDam Dataset
Brazilian Coffee Scenes Dataset
This dataset is a composition of scenes taken by the SPOT sensor in 2005 over four counties in the State of Minas Gerais, Brazil: Arceburgo, Guaranesia, Guaxupé and Monte Santo. It comprises multispectral high-resolution scenes of coffee crops and non-coffee areas. It shows high intraclass variance caused by different crop management techniques, as well as scenes with different plant ages and/or with spectral distortions caused by shadows.
Provide a detailed description of the following dataset: Brazilian Coffee Scenes Dataset
SinGAN-Seg-polyps
**SinGAN-Seg-polyps** is a synthetic dataset for polyp segmentation consisting of 10,000 synthetic polyps and masks.
Provide a detailed description of the following dataset: SinGAN-Seg-polyps
XWINO
XWINO is a multilingual collection of Winograd Schemas in six languages that can be used for evaluation of cross-lingual commonsense reasoning capabilities. The datasets that comprise XWINO are: * Source: The original [Winograd Schema Challenge](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WSCollection.xml) ([Levesque](http://www.cs.toronto.edu/~hector/Papers/winograd.pdf), 2012); * Source: Additional data from the [SuperGLUE](https://super.gluebenchmark.com/tasks/) WSC benchmark ([Wang et al.](https://papers.nips.cc/paper/2019/hash/4496bf24afe7fab6f046bf4923da8de6-Abstract.html), 2019); * Source: The [Definite Pronoun Resolution](http://www.hlt.utdallas.edu/~vince/data/emnlp12/) dataset ([Rahman and Ng](https://www.aclweb.org/anthology/D12-1071/), 2012) (accessed from https://github.com/Yre/wsc_naive); * Source: A collection of [French Winograd Schemas](http://www.llf.cnrs.fr/fr/winograd-fr) ([Amsili and Seminck](https://www.aclweb.org/anthology/W17-1504/), 2017); * Source: [Japanese translation](https://github.com/ku-nlp/Winograd-Schema-Challenge-Ja) of Winograd Schema Challenge ([柴田知秀 et al.](http://www.anlp.jp/proceedings/annual_meeting/2015/pdf_dir/E3-1.pdf), 2015); * Source: [Russian Winograd Schema Challenge](https://russiansuperglue.com/tasks/task_info/RWSD) ([Shavrina et al.](https://www.aclweb.org/anthology/2020.emnlp-main.381/), 2020); * Source: A collection of [Winograd Schemas in Chinese](https://cs.nyu.edu/~davise/papers/WinogradSchemas/WSChinese.html); * Source: Winograd Schemas [in Portuguese](https://github.com/gabimelo/portuguese_wsc) ([Melo et al.](https://www.teses.usp.br/teses/disponiveis/3/3141/tde-14012021-124730/es.php), 2019).
Provide a detailed description of the following dataset: XWINO
NNE
**NNE** is a dataset for Nested Named Entity Recognition in English newswire.
Provide a detailed description of the following dataset: NNE
Antibody Watch
**Antibody Watch** is a dataset of text snippets extracted from over 2000 PubMed articles with annotations denoting specificity of antibodies.
Provide a detailed description of the following dataset: Antibody Watch
COVID-19 & Election
These datasets were used in the paper 'Evaluation of Thematic Coherence in Microblogs' (ACL, 2021). The data is structured as follows: each file represents a cluster of tweets which contains the tweet IDs, the journalist annotations for quality evaluation and issue identification, as well as the metric evaluation scores. Note that a set of 50 clusters, equally split between COVID-19 and Election domains, is shared between the 3 annotators and thus contains 3 labels. Each cluster of tweets is evaluated for its thematic coherence quality (3-point scale) and for its issue identification (Intruded, Chained or Random). For more information about the annotation scheme, please refer to the complete annotation guidelines (available at https://doi.org/10.6084/m9.figshare.14703471) or the paper. Potential uses for these datasets are in the evaluation of thematic coherence, topic modelling and text summarisation fields.
Provide a detailed description of the following dataset: COVID-19 & Election
ZooScanNet
Plankton was sampled with various nets, from bottom or 500m depth to the surface, in many oceans of the world. Samples were imaged with a ZooScan. The full images were processed with ZooProcess which generated regions of interest (ROIs) around each individual object and a set of associated features measured on the object (see Gorsky et al 2010 for more information). The same objects were re-processed to compute features with the scikit-image toolbox (http://scikit-image.org). The 1,433,278 resulting objects were sorted by a limited number of operators, following a common taxonomic guide, into 93 taxa, using the web application EcoTaxa (http://ecotaxa.obs-vlfr.fr).
Provide a detailed description of the following dataset: ZooScanNet
iMiGUE
**iMiGUE** is an identity-free video dataset for Micro-Gesture Understanding and Emotion analysis, intended for emotional artificial intelligence research. Different from existing public datasets, iMiGUE focuses on nonverbal body gestures without using any identity information, while most existing research on emotion analysis concerns sensitive biometric data, like face and speech. Most importantly, iMiGUE focuses on micro-gestures (MGs), i.e., unintentional behaviors driven by inner feelings, which differ from the ordinary scope of gestures in other gesture datasets, where gestures are mostly performed intentionally for illustrative purposes. Furthermore, iMiGUE is designed to evaluate the ability of models to analyze emotional states by integrating information from recognized micro-gestures, rather than just recognizing prototypes in the sequences separately (or in isolation). The authors collected 359 videos of post-match press conferences of Grand Slam tournaments. The dataset contains 72 players from 28 countries and regions covering every continent, which enables MG analysis across diverse cultures. iMiGUE comprises 36 female and 36 male players whose ages are between 17 and 38.
Provide a detailed description of the following dataset: iMiGUE
MultiCite
**MultiCite** is a dataset of 12,653 citation contexts from over 1,200 computational linguistics papers, used for citation context analysis (CCA). MultiCite contains multi-sentence, multi-label citation contexts within full paper texts.
Provide a detailed description of the following dataset: MultiCite
CityNet
**CityNet** is a multi-modal urban dataset for urban computing and smart city research. It consists of 3 types of raw data (city layout, taxi, meteorology) collected from 7 cities.
Provide a detailed description of the following dataset: CityNet
CrowdSpeech
**CrowdSpeech** is a publicly available large-scale dataset of crowdsourced audio transcriptions. It contains annotations for more than 20 hours of English speech from more than 1,000 crowd workers.
Provide a detailed description of the following dataset: CrowdSpeech
Toloka Business ID Recognition
This dataset, commissioned by the Yandex Business Directory, contains 10,000 photos of organization information signs shot in the Russian Federation along with the INN (taxpayer ID) and OGRN (Primary State Registration Number) codes shown on these signs. Toloka was used for both capturing photos and recognizing INN and OGRN codes.
Provide a detailed description of the following dataset: Toloka Business ID Recognition
pd4ml
**pd4ml** is a collection of datasets from fundamental physics research -- including particle physics, astroparticle physics, and hadron- and nuclear physics -- for supervised machine learning studies. These datasets, containing hadronic top quarks, cosmic-ray induced air showers, phase transitions in hadronic matter, and generator-level histories, are made public to simplify future work on cross-disciplinary machine learning and transfer learning in fundamental physics. It currently consists of 5 datasets:
- Top Tagging Landscape (Classification)
  - Train/val/test: 1.2M/400k/400k
  - Structure: Four vectors
  - Dimension: 200 particles, 4 features/particle
- Smart Backgrounds (Classification)
  - Train/val/test: 157k/39k/84k
  - Structure: Decay Graph
  - Dimension: 100 particles, 9 features/particle
- Spinodal or Not (Classification)
  - Train/val/test: 16.3k/4k/8.7k
  - Structure: 2D Histogram
  - Dimension: 20x20 histogram of pion spectra
- EoS (Classification)
  - Train/val/test: 121k/25k/54k
  - Structure: 2D Histogram
  - Dimension: 24x24 histogram of pion spectra
- Air Showers (Regression)
  - Train/val/test: 56k/30k/14k
  - Structure: 81 1D Traces
  - Dimension: 81 stations, 80 signal bins + timing
Provide a detailed description of the following dataset: pd4ml
Toloka WaterMeters
This dataset contains 1244 images of hot and cold water meters, as well as their readings and the coordinates of the displays showing those readings. Each image contains exactly one water meter. The archive also includes pictures of the segmentation results with the masks and collages. Toloka was used for photo capturing, segmentation, and recognizing the readings.
Provide a detailed description of the following dataset: Toloka WaterMeters
RuADReCT
Created as part of the Social Media Mining for Health Applications (#SMM4H '20) shared tasks, this dataset consists of 9515 tweets describing health issues. Each tweet is labeled for whether it contains information about an adverse side effect that occurred when taking a drug. The dataset was a joint effort with the UPenn HLP Center and the Chemoinformatics and Molecular Modeling Research Laboratory at Kazan Federal University.
Provide a detailed description of the following dataset: RuADReCT
LRWC
This dataset contains the opinions of Russian native speakers about the relationship between a generic term (hypernym) and a specific instance of it (hyponym). Assembled by Dmitry Ustalov in 2017. A set of 300 most frequent nouns was extracted from the Russian National Corpus. Then each method or resource (including RuThes and RuWordNet) produced at most five hypernyms, if possible. This resulted in 10,600 unique non-empty subsumption pairs, which were annotated by seven different performers whose mother tongue is Russian and were at least 20 years old as of February 1, 2017. As a result, 4,576 out of 10,600 pairs were annotated as positive while the remaining 6,024 were annotated as negative. Interestingly, the performers were more confident in the negative answers than in the positive ones.
Provide a detailed description of the following dataset: LRWC
Human-Annotated Sense-Disambiguated Word Contexts for Russian
This dataset contains human-annotated sense identifiers for 2562 contexts of 20 words used in the RUSSE'2018 shared task on Word Sense Induction and Disambiguation for the Russian language. Assembled by Dmitry Ustalov in 2017. In particular, 80 pre-annotated contexts were used for training the human annotators, and 2562 contexts were annotated by humans such that each context was annotated by 9 different annotators. After the annotation, every context was additionally inspected (“curated”) by the organizers of the shared task.
Provide a detailed description of the following dataset: Human-Annotated Sense-Disambiguated Word Contexts for Russian
ScanBank
ScanBank is a benchmark dataset for figure extraction from scanned electronic theses and dissertations. It contains 10 thousand scanned page images, manually labeled by humans for the presence of the 3.3 thousand figures and tables found therein.
Provide a detailed description of the following dataset: ScanBank
Florence 3D actions dataset
The dataset, collected at the University of Florence during 2012, was captured using a Kinect camera. It includes 9 activities: wave, drink from a bottle, answer phone, clap, tight lace, sit down, stand up, read watch, bow. During acquisition, 10 subjects were asked to perform each of the above actions 2 to 3 times. This resulted in a total of 215 activity samples.
Provide a detailed description of the following dataset: Florence 3D actions dataset
Delaunay triangulation
Delaunay triangulation dataset for 5, 10, 15, and 20 points. Both random and sorted point sets are included. If you have any trouble using this dataset, contact hunnino10@gmail.com
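The dataset's exact file format isn't specified here, but the kind of instance it contains can be sketched with SciPy (an assumption for illustration only -- the dataset itself may have been generated with a different tool):

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical illustration: triangulate a random point set,
# mirroring the 5/10/15/20-point instances in this dataset.
rng = np.random.default_rng(0)
points = rng.random((10, 2))          # 10 random points in the unit square
tri = Delaunay(points)

# tri.simplices holds one row of three vertex indices per triangle.
print(tri.simplices.shape)

# A "sorted" variant could be derived by lexicographically ordering the input.
sorted_points = points[np.lexsort((points[:, 1], points[:, 0]))]
tri_sorted = Delaunay(sorted_points)
```

The triangulation itself is invariant to input order; only the vertex indexing changes between the random and sorted variants.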
Provide a detailed description of the following dataset: Delaunay triangulation
ExBAN
The ExBAN dataset: a corpus of NL explanations generated by crowd-sourced participants presented with the task of explaining simple Bayesian Network (BN) graphical representations. These explanations, in a separate collection effort, are rated for clarity and informativeness.
Provide a detailed description of the following dataset: ExBAN
ObMan-Ego
The ObMan-Ego is a large-scale synthetic hand dataset with egocentric scenes in which the simulated hands are provided by ObMan. The dataset is used for a hand segmentation task and its sim-to-real adaptation benchmark. The training, validation, and testing sets contain 150,000, 6,500, and 6,500 images, respectively.
Provide a detailed description of the following dataset: ObMan-Ego
CPTC-2018
Intrusion alert dataset captured through the Collegiate Penetration Testing Competition (CPTC) 2018. Contains alerts from 6 student teams. For details, see "A Cybersecurity Dataset Derived from the National Collegiate Penetration Testing Competition" by Nathan Munaiah et al.
Provide a detailed description of the following dataset: CPTC-2018
SURREALvols
Adds information about each subject's body height and the volumes of 14 individual body parts.
Provide a detailed description of the following dataset: SURREALvols
Fingerprint Dataset
This dataset includes all music sources, background noises, impulse-response (IR) samples, and conversational speech used in the work "Neural Audio Fingerprint for High-specific Audio Retrieval based on Contrastive Learning", ICASSP 2021 (https://arxiv.org/abs/2010.11910).
Provide a detailed description of the following dataset: Fingerprint Dataset
Steel Tube Dataset
A weld-defect dataset covering 8 kinds of weld defects in steel tubes.
Provide a detailed description of the following dataset: Steel Tube Dataset
Geography of Open Source Software
This dataset reports counts of active GitHub contributors (activity: 2019/2020) geolocated in early 2021. Counts are aggregated at the country level and at various regional scales. Besides countries, we report data on the EU NUTS2 level, and for Brazilian, Russian, Chinese, Japanese, Indian, and US-American subnational geographies. We used a pipeline approach, attempting to infer location first from the GitHub profile of a developer, then from linked Twitter accounts, then from email suffixes (country level only). Our data also reports the count of developers identified by each stage of the pipeline, in case one prefers, for instance, to use only the GitHub account information.
Provide a detailed description of the following dataset: Geography of Open Source Software
PDE solutions
In this folder, you will find solutions of the following partial differential equations:

- Burgers
- Korteweg-de Vries
- Newell-Whitehead
- Kuramoto-Sivashinsky

You will find more information about how these were generated in the supplementary material of the paper: https://arxiv.org/abs/2106.11936
Provide a detailed description of the following dataset: PDE solutions
PDEs
In this dataset, you will find solutions of the following partial differential equations:

- Burgers
- Korteweg-de Vries
- Newell-Whitehead
- Kuramoto-Sivashinsky

You will find more information about how these were generated in the supplementary material of the paper: https://arxiv.org/abs/2106.11936
Provide a detailed description of the following dataset: PDEs
IowaRain
**IowaRain** is a dataset of rainfall events for the state of Iowa (2016-2019), acquired from the National Weather Service Next Generation Weather Radar (NEXRAD) system and processed by a quantitative precipitation estimation system. The dataset could be used for better disaster monitoring, response, and recovery by paving the way for both predictive and prescriptive modeling.
Provide a detailed description of the following dataset: IowaRain
Kosp2e
**Kosp2e** (read as "kospi") is a corpus that allows Korean speech to be translated into English text in an end-to-end manner.
Provide a detailed description of the following dataset: Kosp2e
HumanoidRobotPose
The **HumanoidRobotPose** dataset is a dataset for real-time pose estimation of humanoid robots.
Provide a detailed description of the following dataset: HumanoidRobotPose
FaVIQ
**FaVIQ** (Fact Verification from Information-seeking Questions) is a challenging and realistic fact verification dataset that reflects confusions raised by real users. We use the ambiguity in information-seeking questions and their disambiguation, and automatically convert them to true and false claims. These claims are natural, and require a complete understanding of the evidence for verification. FaVIQ serves as a challenging benchmark for natural language understanding, and improves performance in professional fact checking.
Provide a detailed description of the following dataset: FaVIQ
OPA
Object Placement Assessment (OPA) is the task of verifying whether a composite image is plausible in terms of object placement: the foreground object should be placed at a reasonable location on the background, considering location, size, occlusion, semantics, etc. **OPA** is a synthesized dataset for Object Placement Assessment based on the COCO dataset. The authors selected unoccluded objects from multiple categories as candidate foreground objects. The foreground objects are pasted onto compatible background images with random sizes and locations to form composite images, which are then sent to human annotators for rationality labeling.
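A minimal sketch of the paste-at-random-location step described above, making no claim about the authors' actual pipeline -- the stand-in arrays below play the role of a COCO foreground crop, its alpha mask, and a background image:

```python
import numpy as np

# Stand-in data (hypothetical): a blank background and a white foreground crop.
rng = np.random.default_rng(42)
background = np.zeros((64, 64, 3), dtype=np.uint8)      # background image
foreground = np.full((16, 16, 3), 255, dtype=np.uint8)  # foreground object crop
mask = np.ones((16, 16), dtype=bool)                    # object's alpha mask

# Random placement: choose a top-left corner so the object stays in frame.
h, w = foreground.shape[:2]
y = int(rng.integers(0, background.shape[0] - h))
x = int(rng.integers(0, background.shape[1] - w))

# Composite: copy the background, then overwrite masked pixels in the region.
composite = background.copy()
region = composite[y:y + h, x:x + w]
region[mask] = foreground[mask]
```

Random scaling would be one extra resize of `foreground` and `mask` before the paste; the masked assignment keeps background pixels wherever the object's alpha mask is zero.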
Provide a detailed description of the following dataset: OPA
DPPIN
**DPPIN** is a collection of dynamic networks, which consists of twelve generated dynamic protein-protein interaction networks of yeast cells, stored in twelve folders.
Provide a detailed description of the following dataset: DPPIN
MineRL BASALT
**MineRL BASALT** is an RL competition on solving human-judged tasks. The tasks in this competition do not have a pre-defined reward function: the goal is to produce trajectories that are judged by real humans to be effective at solving a given task.
Provide a detailed description of the following dataset: MineRL BASALT
SBU-WSD-Corpus
**SBU-WSD-Corpus** is a corpus for Persian Word Sense Disambiguation (WSD). It is manually annotated with senses from the Persian WordNet (FarsNet) sense inventory. SBU-WSD-Corpus consists of 19 Persian documents in different domains such as Sports, Science, Arts, etc. It includes 5892 content words of Persian running text and 3371 manually sense annotated words (2073 nouns, 566 verbs, 610 adjectives, and 122 adverbs).
Provide a detailed description of the following dataset: SBU-WSD-Corpus
VinDr-RibCXR
**VinDr-RibCXR** is a benchmark dataset for automatic segmentation and labeling of individual ribs from chest X-ray (CXR) scans. The VinDr-RibCXR contains 245 CXRs with corresponding ground truth annotations provided by human experts.
Provide a detailed description of the following dataset: VinDr-RibCXR
Disaster
**Disaster** is a dataset that contains images collected from various sources for three different disaster types: fire, water, and land. It also contains images of infrastructure damaged by natural or man-made calamities, and of humans injured in wars or accidents. There are 13,720 manually annotated images in this dataset; each image is annotated by three individuals. The authors also provide discriminating image-class information, annotated manually with bounding boxes, for a set of 200 test images. Images were collected from different news portals, social media, and standard datasets made available by other researchers.
Provide a detailed description of the following dataset: Disaster
Google Landmarks
The **Google Landmarks** dataset contains 1,060,709 images from 12,894 landmarks, and 111,036 additional query images. The images in the dataset are captured at various locations in the world, and each image is associated with a GPS coordinate. This dataset is used to train and evaluate large-scale image retrieval models.
Provide a detailed description of the following dataset: Google Landmarks
KiTS19
The 2021 Kidney and Kidney Tumor Segmentation challenge (abbreviated KiTS21) is a competition in which teams compete to develop the best system for automatic semantic segmentation of renal tumors and surrounding anatomy.

- [The 2021 Kidney and Kidney Tumor Segmentation Challenge](https://kits21.kits-challenge.org/)
- [The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 Challenge](https://arxiv.org/abs/1912.01054)
Provide a detailed description of the following dataset: KiTS19
UrbanScene3D
UrbanScene3D is a large-scale urban scene dataset, associated with a handy simulator based on Unreal Engine 4 and AirSim, that consists of both man-made and real-world reconstructed scenes at different scales. The manually made scene models have compact structures, which are carefully constructed/designed by professional modelers according to the images and maps of target areas. In contrast, UrbanScene3D also offers dense, detailed scene models reconstructed from aerial images through multi-view stereo (MVS) techniques. These scenes have realistic textures and meticulous structures. The release also includes the originally captured aerial images that were used to reconstruct the 3D scene models, as well as a set of 4K video sequences that facilitate designing algorithms such as SLAM and MVS.
Provide a detailed description of the following dataset: UrbanScene3D
ChangeSim
**ChangeSim** is a dataset aimed at online scene change detection (SCD) and more. The data is collected in photo-realistic simulation environments with the presence of environmental non-targeted variations, such as air turbidity and light condition changes, as well as targeted object changes in industrial indoor environments. By collecting data in simulations, multi-modal sensor data and precise ground truth labels are obtainable such as the RGB image, depth image, semantic segmentation, change segmentation, camera poses, and 3D reconstructions. While the previous online SCD datasets evaluate models given well-aligned image pairs, ChangeSim also provides raw unpaired sequences that present an opportunity to develop an online SCD model in an end-to-end manner, considering both pairing and detection. Experiments show that even the latest pair-based SCD models suffer from the bottleneck of the pairing process, and it gets worse when the environment contains the non-targeted variations.
Provide a detailed description of the following dataset: ChangeSim
Red MiniImageNet 20% label noise
Part of the Controlled Noisy Web Labels Dataset.
Provide a detailed description of the following dataset: Red MiniImageNet 20% label noise
Red MiniImageNet 40% label noise
Part of the Controlled Noisy Web Labels Dataset.
Provide a detailed description of the following dataset: Red MiniImageNet 40% label noise
Red MiniImageNet 80% label noise
Part of the Controlled Noisy Web Labels Dataset.
Provide a detailed description of the following dataset: Red MiniImageNet 80% label noise
ISO17
### Description

The molecules were randomly drawn from the largest set of isomers in the QM9 dataset [1], which consists of molecules with a fixed composition of atoms (C7O2H10) arranged in different chemically valid structures. It is an extension of the isomer MD data used in [2]. The database was generated from molecular dynamics simulations using the Fritz-Haber Institute ab initio simulation package (FHI-aims) [3]. The simulations were carried out using the standard quantum chemistry computational method density functional theory (DFT) in the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional [4] and the Tkatchenko-Scheffler (TS) van der Waals correction method [5]. The database consists of 129 molecules, each containing 5,000 conformational geometries, energies and forces with a resolution of 1 femtosecond in the molecular dynamics trajectories.

### Format

The data is stored in ASE sqlite format, with the total energy (in eV) under the key `total_energy` and the atomic forces (in eV/Ang) under the key `atomic_forces`. The following Python snippet iterates over the first 10 entries of the dataset located at `path_to_db`:

```python
from ase.db import connect

with connect(path_to_db) as conn:
    for row in conn.select(limit=10):
        print(row.toatoms())
        print(row['total_energy'])
        print(row.data['atomic_forces'])
```

### Partitions

The data is partitioned as used in the SchNet paper [6]:

- reference.db - 80% of steps of 80% of MD trajectories
- reference_eq.db - equilibrium conformations of those molecules
- test_within.db - remaining 20% unseen steps of reference trajectories
- test_other.db - remaining 20% unseen MD trajectories
- test_eq.db - equilibrium conformations of test trajectories

In the paper, we split the reference data (reference.db) into 400k training examples and 4k validation examples. The indices are given in the files train_ids.txt and validation_idx.txt, respectively.

### Benchmarks

| Model | Energy (within) [eV] | Force (within) [eV/A] | Energy (other) [eV] | Force (other) [eV/A] |
|-------|----------------------|-----------------------|---------------------|----------------------|
| SchNet [6] | 0.016 | 0.043 | 0.104 | 0.095 |

### Download

Available here: data/iso17.tar.gz (799.7 MB)

### How to cite

When using this dataset, please make sure to cite the following papers:

- K. T. Schütt, P.-J. Kindermans, H. E. Sauceda, S. Chmiela, A. Tkatchenko, K.-R. Müller. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. Advances in Neural Information Processing Systems. 2017.
- K. T. Schütt, F. Arbabzadah, S. Chmiela, K.-R. Müller, A. Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8, 13890. 2017.
- R. Ramakrishnan, P. O. Dral, M. Rupp, and O. A. von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1, 2014.

References

[1] R. Ramakrishnan, P. O. Dral, M. Rupp, and O. A. von Lilienfeld. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 1, 2014.
[2] K. T. Schütt, F. Arbabzadah, S. Chmiela, K.-R. Müller, and A. Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nature Communications, 8, 13890. 2017.
[3] V. Blum, R. Gehrke, F. Hanke, P. Havu, V. Havu, X. Ren, K. Reuter, and M. Scheffler. Ab Initio Molecular Simulations with Numeric Atom-Centered Orbitals. Comput. Phys. Commun. 2009, 180 (11), 2175-2196.
[4] J. P. Perdew, K. Burke, and M. Ernzerhof. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 1996, 77 (18), 3865-3868.
[5] A. Tkatchenko and M. Scheffler. Accurate Molecular Van Der Waals Interactions from Ground-State Electron Density and Free-Atom Reference Data. Phys. Rev. Lett. 2009, 102 (7), 73005.
[6] K. T. Schütt, P.-J. Kindermans, H. E. Sauceda, S. Chmiela, A. Tkatchenko, and K.-R. Müller. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. Advances in Neural Information Processing Systems (accepted). 2017.
Provide a detailed description of the following dataset: ISO17
MCMD
A large-scale dataset covering multiple programming languages, with rich accompanying information.
Provide a detailed description of the following dataset: MCMD
BCOPA-CE
We provide the BCOPA-CE test set, which has a balanced token distribution in the correct and wrong alternatives and increases the difficulty of being aware of cause and effect.

### construction

1. For each premise of the 500 samples in the COPA test set, we manually generate one event which is a plausible answer to the opposite question type of the original sample.
2. Obtain 500 triplets of <*premise*, *cause*, *effect*>.
3. Construct 1000 samples by giving two different questions (**cause** or **effect**) to each triplet.
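The third construction step, expanding each triplet into two samples, can be sketched as follows; the field names, label convention, and example triplet are hypothetical illustrations, not the dataset's actual schema:

```python
# Sketch of step 3: each <premise, cause, effect> triplet yields two
# COPA-style samples, one asking for the cause and one for the effect.
def expand_triplets(triplets):
    samples = []
    for premise, cause, effect in triplets:
        # "cause" question: the cause alternative is the correct answer.
        samples.append({"premise": premise, "question": "cause",
                        "choices": [cause, effect], "label": 0})
        # "effect" question: the effect alternative is the correct answer.
        samples.append({"premise": premise, "question": "effect",
                        "choices": [cause, effect], "label": 1})
    return samples

triplets = [("The man felt tired.", "He worked all night.", "He went to bed.")]
samples = expand_triplets(triplets)
print(len(samples))  # two samples per triplet
```

Applied to the 500 triplets, this expansion yields the 1000 samples described above.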
Provide a detailed description of the following dataset: BCOPA-CE
Multiple Testing and Variable Selection along Least Angle Regression's path
Data used in the paper entitled "Multiple Testing and Variable Selection along Least Angle Regression's path". The Zenodo file contains the code of the paper (arXiv:1906.12072).
Provide a detailed description of the following dataset: Multiple Testing and Variable Selection along Least Angle Regression's path
HumanEval
This is an evaluation harness for the HumanEval problem-solving dataset described in the paper "Evaluating Large Language Models Trained on Code". It is used to measure functional correctness for synthesizing programs from docstrings. It consists of 164 original programming problems assessing language comprehension, algorithms, and simple mathematics, with some comparable to simple software-interview questions.
Provide a detailed description of the following dataset: HumanEval
Unbalance Classification Using Vibration Data
This dataset contains vibration data recorded on a rotating drive train. This drive train consists of an electronically commutated DC motor and a shaft driven by it, which passes through a roller bearing. With the help of a 3D-printed holder, unbalances with different weights and different radii were attached to the shaft. Besides the strength of the unbalances, the rotation speed of the motor was also varied. This dataset can be used to develop and test algorithms for the automatic detection of unbalances on drive trains. Datasets for 4 differently sized unbalances and for the unbalance-free case were recorded. The vibration data was recorded at a sampling rate of 4096 values per second. Datasets for development (ID "D[0-4]") as well as for evaluation (ID "E[0-4]") are available for each unbalance strength. The rotation speed was varied between approx. 630 and 2330 RPM in the development datasets and between approx. 1060 and 1900 RPM in the evaluation datasets. For each measurement of the development dataset there are approx. 107 min of continuous measurement data available, and approx. 28 min for each measurement of the evaluation dataset.
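As an illustration of working with data sampled this way (not the dataset's reference code), a simple spectral feature can be extracted from one second of signal at the stated 4096 Hz sampling rate; the stand-in signal below is synthetic, assuming a 30 Hz (~1800 RPM) rotation component:

```python
import numpy as np

FS = 4096                                # sampling rate stated in the dataset

# Synthetic stand-in for one second of vibration data:
# a 30 Hz rotation component plus measurement noise.
rng = np.random.default_rng(1)
t = np.arange(FS) / FS
signal = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(FS)

# Magnitude spectrum; with a 1 s window the bins are spaced 1 Hz apart.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(FS, d=1 / FS)

# The dominant low-frequency peak approximates the rotation frequency;
# unbalance typically shows up as increased energy at this 1x component.
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak_hz)
```

On real recordings, tracking the amplitude at the rotation frequency (which the dataset varies between roughly 630 and 2330 RPM) is one common starting point for unbalance detection.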
Provide a detailed description of the following dataset: Unbalance Classification Using Vibration Data
HYPE
HYPE Dataset - Version 1.0.0

REFERENCE PAPER
---------------
Morassi Sasso, A., Datta, S., Jeitler, M., Steckhan, N., Kessler, C. S., Michalsen, A., Arnrich, B., & Böttinger, E. (2020). HYPE: Predicting Blood Pressure from Photoplethysmograms in a Hypertensive Population. In M. Michalowski & R. Moskovitch (Eds.), Artificial Intelligence in Medicine. AIME 2020. Lecture Notes in Computer Science, volume 12299 (pp. 325–335). Springer, Cham. https://doi.org/10.1007/978-3-030-59137-3_29.

CONTENT
-------
- Sensor (PPG & blood pressure) and clinical data from 9 hypertensive subjects in two experiments (stress test and 24 hours).
- Empatica E4: photoplethysmography (PPG) data.
- Spacelabs (SL 90217): blood pressure data.
- Demographics and self-reported data during 24 hours (exercise, medication, etc.).

AVAILABILITY
------------
- Available to the scientific community through a data agreement.
- The requester must be affiliated with a research institution.
Provide a detailed description of the following dataset: HYPE
Lakh Pianoroll Dataset
The Lakh Pianoroll Dataset (LPD) is a collection of 174,154 [multitrack pianorolls](https://salu133445.github.io/lakh-pianoroll-dataset/representation) derived from the [Lakh MIDI Dataset](http://colinraffel.com/projects/lmd/) (LMD).

## Getting the dataset

We provide multiple subsets and versions of the dataset (see [here](https://salu133445.github.io/lakh-pianoroll-dataset/comparisons)). The dataset is available [here](https://salu133445.github.io/lakh-pianoroll-dataset/dataset).

## Using LPD

The multitrack pianorolls in LPD are stored in a special format for efficient I/O and to save space. We recommend loading the data with [Pypianoroll](https://salu133445.github.io/pypianoroll/) (the dataset is created using Pypianoroll v0.3.0). See [here](https://salu133445.github.io/pypianoroll/save_load.html) to learn how the data is stored and how to load it properly.

## License

Lakh Pianoroll Dataset is a derivative of [Lakh MIDI Dataset](http://colinraffel.com/projects/lmd/) by [Colin Raffel](http://colinraffel.com), used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). Lakh Pianoroll Dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) by [Hao-Wen Dong](https://salu133445.github.io) and [Wen-Yi Hsiao](https://github.com/wayne391).

Please cite the following papers if you use Lakh Pianoroll Dataset in a published work.

- Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, "__MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment__," in _Proceedings of the 32nd AAAI Conference on Artificial Intelligence_ (AAAI), 2018.
- Colin Raffel, "__Learning-Based Methods for Comparing Sequences, with Applications to Audio-to-MIDI Alignment and Matching__," _PhD Thesis_, 2016.

## Related projects

- [MuseGAN](https://salu133445.github.io/musegan/)
- [LeadSheetGAN](https://liuhaumin.github.io/LeadsheetArrangement/)
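For readers unfamiliar with the representation, a pianoroll is essentially a time x pitch matrix of note velocities. The sketch below is a conceptual illustration with assumed shapes and resolution, not Pypianoroll's actual storage format:

```python
import numpy as np

# Conceptual sketch (assumed resolution): four bars in 4/4 at
# 24 time steps per beat, over the full 128-pitch MIDI range.
TIME_STEPS = 24 * 4 * 4    # steps/beat * beats/bar * bars
N_PITCHES = 128

piano = np.zeros((TIME_STEPS, N_PITCHES), dtype=np.uint8)
piano[0:24, 60] = 100      # middle C held for one beat at velocity 100

# A multitrack pianoroll stacks several such matrices, one per instrument.
multitrack = {"piano": piano, "bass": np.zeros_like(piano)}
active_notes = int((piano > 0).sum())
print(active_notes)  # 24 active time steps for the held note
```

LPD stores such matrices in a sparse compressed form, which is why loading through Pypianoroll rather than by hand is recommended.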
Provide a detailed description of the following dataset: Lakh Pianoroll Dataset
Common Crawl
The Common Crawl corpus contains petabytes of data collected over 12 years of web crawling. The corpus contains raw web page data, metadata extracts and text extracts. Common Crawl data is stored on Amazon Web Services’ Public Data Sets and on multiple academic cloud platforms across the world.
Provide a detailed description of the following dataset: Common Crawl
CADSketchNet
CADSketchNet is an annotated collection of sketches of 3D CAD models. Dataset-A has 58,696 computer-generated sketches of 3D CAD models across 68 categories of MCB. Dataset-B has 801 hand-drawn sketches of 3D CAD models across 42 categories of ESB.
Provide a detailed description of the following dataset: CADSketchNet
AIP Environment
AI Playground (AIP) is an open-source, Unreal Engine-based tool for generating and labeling virtual image data. With AIP, it is trivial to capture the same image under different conditions (e.g., fidelity, lighting, etc.) and with different ground truths (e.g., depth or surface normal values). AIP is easily extendable and can be used with or without code.
Provide a detailed description of the following dataset: AIP Environment
Voice Conversion Challenge 2018
Voice conversion (VC) is a technique to transform the speaker identity in a source speech waveform into a different one while preserving the linguistic information of the source speech waveform. The Voice Conversion Challenge (VCC) 2016 was launched at Interspeech 2016. The objective of the 2016 challenge was to better understand different VC techniques built on a freely available common dataset toward a common goal, and to share views about unsolved problems and challenges faced by current VC techniques. The VCC 2016 focused on the most basic VC task: the construction of VC models that automatically transform the voice identity of a source speaker into that of a target speaker using a parallel clean training database, where source and target speakers read out the same set of utterances in a professional recording studio. 17 research groups participated in the 2016 challenge. The challenge was successful, and it established a new standard evaluation methodology and protocols for benchmarking the performance of VC systems.

The second edition of VCC, the VCC 2018, was launched in 2018. In this second edition, three aspects of the challenge were revised. First, the amount of speech data used for the construction of participants' VC systems was reduced to half. This is based on feedback from participants in the previous challenge, and it is also essential for practical applications. Second, in addition to a task similar to that of the 1st edition (which we call the Hub task), a more challenging task, referred to as the Spoke task, was introduced. In the Spoke task, participants need to build their VC systems using a non-parallel database in which source and target speakers read out different sets of utterances. Both parallel and non-parallel voice conversion systems are evaluated via the same large-scale crowdsourced listening test. Third, bridging the gap between the ASV and VC communities was also attempted. Since new VC systems developed for the VCC 2018 may be strong candidates for enhancing the ASVspoof 2015 database, the spoofing performance of the VC systems was assessed based on anti-spoofing scores.

Description from: https://datashare.ed.ac.uk/handle/10283/3061
Provide a detailed description of the following dataset: Voice Conversion Challenge 2018
SECBENCH
Dataset of 676 security vulnerability patches. In 2017, we mined the commit messages of 238 projects using regular expressions for each vulnerability type (cf. Patterns). In 2020, we classified the vulnerabilities using the CWE taxonomy. Some vulnerabilities include score and severity information (CVEs).
Provide a detailed description of the following dataset: SECBENCH
Sims4Action
* **The Sims4Action Dataset**: a videogame-based dataset for Synthetic→Real domain adaptation for human activity recognition.
* **Goal**: exploring the concept of constructing training examples for Activities of Daily Living (ADL) recognition by playing life-simulation video games.
* The ***Sims4Action* dataset** is created with the commercial game THE SIMS 4 by executing actions-of-interest within the game in a "top-down" manner. It features ten hours of video material of eight diverse characters and multiple environments. Ten actions are selected to have a direct correspondence to categories covered in the real-life dataset Toyota Smarthome [2], enabling research on Synthetic→Real transfer in action recognition.
* **Two benchmarks**: *Gaming→Gaming* (training and evaluation on Sims4Action) and *Gaming→Real* (training on Sims4Action, evaluation on the real Toyota Smarthome data [2]).
* **Main challenge: *Gaming→Real* domain adaptation.** While ADL recognition on gaming data is interesting from a theoretical perspective, the key challenge arises from transferring knowledge learned from simulated data to real-world applications. *Sims4Action* specifically provides a benchmark for this scenario, since it describes a *Gaming→Real* challenge that evaluates models on real videos derived from the existing Toyota Smarthome dataset.

# References

[1] [Let's Play for Action: Recognizing Activities of Daily Living by Learning from Life Simulation Video Games.](http://arxiv.org/abs/2107.05617) Alina Roitberg*, David Schneider*, Aulia Djamal, Constantin Seibold, Simon Reiß, Rainer Stiefelhagen. In *International Conference on Intelligent Robots and Systems (IROS)*, 2021. (* denotes equal contribution.)
[2] [Toyota Smarthome: Real-world activities of daily living.](https://arxiv.org/pdf/2010.14982.pdf) Srijan Das, Rui Dai, Michal Koperski, Luca Minciullo, Lorenzo Garattoni, Francois Bremond, Gianpiero Francesca. In *International Conference on Computer Vision (ICCV)*, 2019.
Provide a detailed description of the following dataset: Sims4Action
GLIB: image dataset
data/images:
- data/images/Base: 132 screenshots of game1 & game2 with UI display issues, from 466 test reports.
- data/images/Code: 9,412 screenshots of game1 & game2 with UI display issues, generated by our Code augmentation method.
- data/images/Normal: 7,750 screenshots of game1 & game2 without UI display issues, collected by randomly traversing the game scene.
- data/images/Rule(F): 7,750 screenshots of game1 & game2 with UI display issues, generated by our Rule(F) augmentation method.
- data/images/Rule(R): 7,750 screenshots of game1 & game2 with UI display issues, generated by our Rule(R) augmentation method.
- data/images/testDataSet: 192 screenshots with UI display issues from 466 test reports (excluding game1 & game2).

data/data_csv:
- data/data_csv/Base: dataset for the baseline method.
- data/data_csv/Code: dataset for our Code augmentation method.
- data/data_csv/Rule(F): dataset for our Rule(F) augmentation method.
- data/data_csv/Rule(R): dataset for our Rule(R) augmentation method.
- data/data_csv/Code_plus_Rule(F): dataset for our Code&Rule(F) augmentation method.
- data/data_csv/Code_plus_Rule(R): dataset for our Code&Rule(R) augmentation method.
- data/data_csv/testDataSet: test dataset (normal images and real glitch images from 466 test reports).
Provide a detailed description of the following dataset: GLIB: image dataset
Narvik Road Dataset
DIT4BEARs Internship Project (at UiT - The Arctic University of Norway) Dataset. The dataset contains 5 months of data, including weather conditions, friction coefficient, distance traveled, wind speed, surface temperature, air temperature, etc. This dataset was provided by DIT4BEARs for the Smart Road Internship Project at UiT - The Arctic University of Norway. It can be used for weather forecasting, road-friction forecasting, smart-road propositions, and minimization of accident rates in Barents Euro-Arctic regions.
Provide a detailed description of the following dataset: Narvik Road Dataset
PackIt
The ability to jointly understand the geometry of objects and plan actions for manipulating them is crucial for intelligent agents. This ability is referred to as geometric planning. Recently, many interactive environments have been proposed to evaluate intelligent agents on various skills; however, none of them caters to the needs of geometric planning. PackIt is a virtual environment to evaluate and potentially learn the ability to do geometric planning, where an agent needs to take a sequence of actions to pack a set of objects into a box with limited space.
Provide a detailed description of the following dataset: PackIt
EasyCom
The Easy Communications (EasyCom) dataset is a world-first dataset designed to help mitigate the cocktail party effect from an augmented-reality (AR) -motivated multi-sensor egocentric world view. The dataset contains AR glasses egocentric multi-channel microphone array audio, wide field-of-view RGB video, speech source pose, headset microphone audio, annotated voice activity, speech transcriptions, head and face bounding boxes and source identification labels. We have created and are releasing this dataset to facilitate research in multi-modal AR solutions to the cocktail party problem.
Provide a detailed description of the following dataset: EasyCom
SportSett
This resource is designed to allow for research into Natural Language Generation, in particular with neural data-to-text approaches, although it is not limited to these.
Provide a detailed description of the following dataset: SportSett
NucMM
**NucMM** is a dataset for segmenting 3D cell nuclei from microscopy image volumes that pushes the task forward to the sub-cubic millimeter scale. It consists of two fully annotated volumes: one electron microscopy (EM) volume containing nearly the entire zebrafish brain with around 170,000 nuclei; and one micro-CT (uCT) volume containing part of a mouse visual cortex with about 7,000 nuclei.
Provide a detailed description of the following dataset: NucMM
AxonEM
The **AxonEM** dataset consists of two 30x30x30 um^3 EM image volumes from the human and mouse cortex, respectively. It is used for 3D axon instance segmentation of brain cortical regions. The authors proofread over 18,000 axon instances to provide dense 3D axon instance segmentation, enabling large-scale evaluation of axon reconstruction methods. In addition, the authors also densely annotate nine ground truth subvolumes for training, per each data volume.
Provide a detailed description of the following dataset: AxonEM
MSJudge
This is a challenging dataset from real courtrooms for predicting legal judgments in a reasonably encyclopedic manner by leveraging the genuine inputs of a case: the plaintiff's claims and court debate data. From these, the case's facts are automatically recognized by comprehensively understanding the multi-role dialogues of the court debate, and the model then learns to discriminate the claims so as to reach the final judgment through multi-task learning.
Provide a detailed description of the following dataset: MSJudge