dataset_name stringlengths 2 128 | description stringlengths 1 9.7k | prompt stringlengths 59 185 |
|---|---|---|
Supplementary material for PyTea | Manuals and test set. | Provide a detailed description of the following dataset: Supplementary material for PyTea |
EEG Motor Movement/Imagery Dataset | This data set consists of over 1500 one- and two-minute EEG recordings, obtained from 109 volunteers. | Provide a detailed description of the following dataset: EEG Motor Movement/Imagery Dataset |
DeepCom-Java | The Java dataset introduced in DeepCom ([Deep Code Comment Generation](https://dl.acm.org/doi/10.1145/3196321.3196334)), commonly used to evaluate automated code summarization. | Provide a detailed description of the following dataset: DeepCom-Java |
ParallelCorpus-Python | The Python dataset introduced in the Parallel Corpus paper ([A Parallel Corpus of Python Functions and Documentation Strings for Automated Code Documentation and Code Generation](https://aclanthology.org/I17-2053.pdf)), commonly used for evaluating automated code summarization. | Provide a detailed description of the following dataset: ParallelCorpus-Python |
Java scripts | The Java dataset introduced in Hybrid-DeepCom ([Deep code comment generation with hybrid lexical and syntactical information](https://link.springer.com/article/10.1007%2Fs10664-019-09730-9)), commonly used to evaluate automated code summarization. It is essentially an updated version of [DeepCom-Java](https://paperswithcode.com/dataset/deepcom-java). | Provide a detailed description of the following dataset: Java scripts |
UFPR-ADMR-v2 | The UFPR-ADMR-v2 dataset contains 5,000 dial meter images obtained on-site by employees of the Energy Company of Paraná (Copel), which serves more than 4M consuming units in the Brazilian state of Paraná. The images were acquired with many different cameras and are available in the JPG format with 320×640 or 640×320 pixels (depending on the camera orientation). More details are available in our paper.
The dataset is split into three subsets: training (3,000 images), validation (1,000 images) and testing (1,000 images). Every image has the following annotations available in a .txt file: the counter’s corners (x1, y1), (x2, y2), (x3, y3), (x4, y4), which can be used to rectify the counter patch and represent, respectively, the top-left, top-right, bottom-right, and bottom-left corners; and, for each dial, its position (x, y, w, h) and the corresponding reading (the final reading as well as the approximate reading with one decimal place of precision). All counters in the dataset (regardless of meter type) have 4 or 5 dials; thus, 22,410 dials were manually annotated. | Provide a detailed description of the following dataset: UFPR-ADMR-v2 |
ICEWS | A repository that contains political events with specific timestamps. These political events relate entities (e.g., countries, presidents) to a number of other entities via logical predicates (e.g., 'Make a visit' or 'Express intent to meet or negotiate'). | Provide a detailed description of the following dataset: ICEWS |
XQLFW | An evaluation protocol for face verification focusing on a large intra-pair image quality difference.
Real-world face recognition applications often deal with suboptimal image quality or resolution due to different capturing conditions such as various subject-to-camera distances, poor camera settings, or motion blur. This characteristic has an unignorable effect on performance. Recent cross-resolution face recognition approaches used simple, arbitrary, and unrealistic down- and up-scaling techniques to measure robustness against real-world edge-cases in image quality. Thus, we propose a new standardized benchmark dataset and evaluation protocol derived from the famous Labeled Faces in the Wild (LFW). In contrast to previous derivatives, which focus on pose, age, similarity, and adversarial attacks, our Cross-Quality Labeled Faces in the Wild (XQLFW) maximizes the quality difference. It contains only more realistic synthetically degraded images when necessary. Our proposed dataset is then used to further investigate the influence of image quality on several state-of-the-art approaches. With XQLFW, we show that these models perform differently in cross-quality cases, and hence, the generalizing capability is not accurately predicted by their performance on LFW. Additionally, we report baseline accuracy with recent deep learning models explicitly trained for cross-resolution applications and evaluate the susceptibility to image quality. | Provide a detailed description of the following dataset: XQLFW |
PSI | The **IUPUI-CSRC Pedestrian Situated Intent** (**PSI**) benchmark dataset has two innovative labels besides comprehensive computer vision annotations. The first novel label is the dynamic intent changes for the pedestrians to cross in front of the ego-vehicle, achieved from 24 drivers with diverse backgrounds. The second one is the text-based explanations of the driver reasoning process when estimating pedestrian intents and predicting their behaviors during the interaction period. | Provide a detailed description of the following dataset: PSI |
E-scooter Rider Detection Benchmark Dataset | A small benchmark dataset for e-scooter rider detection task, and a trained model to support the detection of e-scooter riders from RGB images collected from natural road scenes. | Provide a detailed description of the following dataset: E-scooter Rider Detection Benchmark Dataset |
Klexikon | The dataset introduces document alignments between [German Wikipedia](https://de.wikipedia.org) and the children's lexicon [Klexikon](https://klexikon.zum.de).
The source texts in Wikipedia are written in more complex language than Klexikon and are also significantly longer, which makes this a suitable application for both summarization and simplification.
In fact, previous research has so far only focused on *either* of the two; they have not been comprehensively studied as a joint task. | Provide a detailed description of the following dataset: Klexikon |
H²O Interaction | H²O is an image dataset annotated for human-to-human-or-object interaction detection. H²O is composed of the images from the V-COCO dataset, to which are added images that mostly contain interactions between people. The dataset was introduced in this paper: Orcesi, A., Audigier, R., Toukam, F. P., & Luvison, B. (2021, December). Detecting Human-to-Human-or-Object (H2O) Interactions with DIABOLO. In 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021) (pp. 1-8). IEEE.
The annotations were made with Pixano, an open-source, smart annotation tool for computer vision applications: https://pixano.cea.fr/ | Provide a detailed description of the following dataset: H²O Interaction |
CLEAR | **CLEAR** is a continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world that spans a decade (2004-2014). CLEAR is built from existing large-scale image collections ([YFCC100M](/dataset/yfcc100m)) through a novel and scalable low-cost approach to visio-linguistic dataset curation. The pipeline makes use of pretrained vision language models (e.g. CLIP) to interactively build labeled datasets, which are further validated with crowd-sourcing to remove errors and even inappropriate images (hidden in original YFCC100M). The major strength of CLEAR over prior CL benchmarks is the smooth temporal evolution of visual concepts with real-world imagery, including both high-quality labeled data along with abundant unlabeled samples per time period for continual semi-supervised learning. | Provide a detailed description of the following dataset: CLEAR |
SMD | A dataset for time-series anomaly detection. | Provide a detailed description of the following dataset: SMD |
Common Phone | **Common Phone** is a gender-balanced, multilingual corpus recorded from more than 76,000 contributors via Mozilla's Common Voice project. It comprises around 116 hours of speech enriched with automatically generated phonetic segmentation. | Provide a detailed description of the following dataset: Common Phone |
TempQA-WD | **TempQA-WD** is a benchmark dataset for temporal reasoning designed to encourage research in extending the present approaches to target a more challenging set of complex reasoning tasks. Specifically, the benchmark is a temporal question answering dataset with the following advantages: (a) it is based on Wikidata, which is the most frequently curated, openly available knowledge base, (b) it includes intermediate SPARQL queries to facilitate the evaluation of semantic parsing based approaches for KBQA, and (c) it generalizes to multiple knowledge bases: Freebase and Wikidata. | Provide a detailed description of the following dataset: TempQA-WD |
BigDatasetGAN | **BigDatasetGAN** is a dataset for pixel-wise ImageNet segmentation. It consists of large synthetic datasets from BigGAN & VQGAN.
Image source: [https://arxiv.org/pdf/2201.04684v1.pdf](https://arxiv.org/pdf/2201.04684v1.pdf) | Provide a detailed description of the following dataset: BigDatasetGAN |
LaFAN1 | # Ubisoft La Forge Animation Dataset ("LAFAN1")
Ubisoft La Forge Animation dataset and accompanying code for the SIGGRAPH 2020 paper [Robust Motion In-betweening](https://montreal.ubisoft.com/en/automatic-in-betweening-for-faster-animation-authoring/).
Shot in May 2017.
This dataset can be used under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License (see license.txt).
If you use this dataset or transition benchmarking code, please consider citing the paper:
```
@article{harvey2020robust,
author = {Félix G. Harvey and Mike Yurick and Derek Nowrouzezahrai and Christopher Pal},
title = {Robust Motion In-Betweening},
journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH)},
publisher = {ACM},
volume = {39},
number = {4},
year = {2020}
}
```
You may also want to consider the following papers, as they also use this dataset (or parts of it):
* [Learned Motion Matching (Holden et al., 2020)](http://theorangeduck.com/media/uploads/other_stuff/Learned_Motion_Matching.pdf)
* [Subspace Neural Physics: Fast Data-Driven Interactive Simulation (Holden et al., 2019)](http://www.theorangeduck.com/media/uploads/other_stuff/deep-cloth-paper.pdf)
* [DReCon: Data-Driven Responsive Control of Physics-Based Characters (Bergamin et al., 2019)](https://static-wordpress.akamaized.net/montreal.ubisoft.com/wp-content/uploads/2019/11/13214229/DReCon.pdf)
* [Robust Solving of Optical Motion Capture Data by Denoising (Holden, 2018)](http://theorangeduck.com/media/uploads/other_stuff/neural_solver.pdf)
* [Recurrent Transition Networks for Character Locomotion (Harvey et al., 2018)](https://arxiv.org/pdf/1810.02363.pdf)
## Data
The animation data is contained in the lafan1.zip file.
All the animation sequences are in the BVH file format.
There are 5 subjects in the dataset, 77 sequences, and 496,672 motion frames at 30fps (~4.6 hours).
Every BVH file is named with the following convention: \[*theme*\]\[*take number*\]_\[*subject ID*\].bvh.
Any sequences sharing the same *theme* and *take_number* were recorded at the same time in the studio.
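As an illustration, here is a minimal sketch that groups files recorded together by parsing this convention (it assumes the archive is extracted to a `lafan1/` folder and that subject IDs literally look like `subject5`; both are assumptions, not part of the dataset documentation):
```python
import re
from collections import defaultdict
from pathlib import Path

# [theme][take number]_[subject ID].bvh, e.g. "walk1_subject5.bvh" (name shape assumed)
PATTERN = re.compile(r"(?P<theme>[A-Za-z]+)(?P<take>\d+)_(?P<subject>subject\d+)\.bvh")

takes = defaultdict(list)
for path in Path("lafan1").glob("*.bvh"):
    match = PATTERN.fullmatch(path.name)
    if match:
        # Files sharing the same theme and take number were recorded together.
        takes[(match["theme"], match["take"])].append(match["subject"])

for (theme, take), subjects in sorted(takes.items()):
    print(f"{theme}{take}: recorded with {subjects}")
```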
Themes are high level indicators of the actions in the sequences.
The following themes are present in the LaFAN1 dataset:
| Theme | Description |Number of sequences|
|:----------------|:------------------------------------------ |:-----------------:|
| Obstacles | Locomotion on uneven terrain |17 |
| Walk | Walking locomotion, with different styles |12 |
| Dance | Free dancing |8 |
| Fall and get up | Falling on the ground and getting back up |6 |
| Aiming | Locomotion while handling or aiming a gun |5 |
| Ground | Locomotion while crawling and crouching |5 |
| Multiple actions| Miscellaneous/multiple movements per sequence|4 |
| Run | Jogging/Running locomotion |4 |
| Fight | Various fight movements |3 |
| Jumps | Locomotion with one and two-leg jumps |3 |
| Fight and sports| Fight and sports movements |2 |
| Push and stumble| Pushing, stumbling and recovery |3 |
| Push and fall | Pushing, falling, and getting up |2 |
| Sprint | Sprinting locomotion |2 |
| Push | Pushing adversary |1 |
© [2018] Ubisoft Entertainment. All Rights Reserved | Provide a detailed description of the following dataset: LaFAN1 |
CI-AVSR | **Cantonese In-car Audio-Visual Speech Recognition** (**CI-AVSR**) is a dataset for in-car command recognition in the Cantonese language with both video and audio data. It consists of 4,984 samples (8.3 hours) of 200 in-car commands recorded by 30 native Cantonese speakers. Furthermore, the dataset is augmented using common in-car background noises to simulate real environments, producing a dataset 10 times larger than the collected one. | Provide a detailed description of the following dataset: CI-AVSR |
COLDataset | **COLDataset** is a dataset to facilitate Chinese offensive language detection and model evaluation. It contains 37k annotated sentences. | Provide a detailed description of the following dataset: COLDataset |
CPP simulated evaluation | In this repository you can find all the elaborate results that were used for the simulated evaluation of an innovative, optimized-for-real-life-use, STC-based, multi-robot Coverage Path Planning (mCPP) algorithm. For this evaluation, 20 ROIs of different shapes and areas, which may include obstacles, were introduced in "Apostolidis, S. D., Kapoutsis, P. C., Kapoutsis, A. C., & Kosmatopoulos, E. B. (2022). Cooperative multi-UAV coverage mission planning platform for remote sensing applications. Autonomous Robots, 1-28." These ROIs, along with some benchmark results, can be found here: https://github.com/savvas-ap/cpp-simulated-evaluations | Provide a detailed description of the following dataset: CPP simulated evaluation |
PerPaDa | **PerPaDa** is a Persian paraphrase dataset that is collected from users' input in a plagiarism detection system. | Provide a detailed description of the following dataset: PerPaDa |
RuMedBench | **RuMedBench** is a benchmark dataset for Russian medical language understanding. | Provide a detailed description of the following dataset: RuMedBench |
MuLVE | **Multi-Language Vocabulary Evaluation Data Set** (**MuLVE**) is a dataset consisting of vocabulary cards and real-life user answers, labeled indicating whether the user answer is correct or incorrect. | Provide a detailed description of the following dataset: MuLVE |
WebUAV-3M | WebUAV-3M is a new million-scale Unmanned Aerial Vehicle (UAV) tracking benchmark consisting of 4,485 videos with more than 3M frames collected from the Internet. An efficient and scalable Semi-Automatic Target Annotation (SATA) pipeline was devised to label every frame of the enormous WebUAV-3M. The densely bounding-box-annotated WebUAV-3M is one of the largest public UAV tracking benchmarks. | Provide a detailed description of the following dataset: WebUAV-3M |
Grep-BiasIR | **Grep-BiasIR** is a novel, thoroughly-audited dataset which aims to facilitate the study of gender bias in the results retrieved by IR systems. | Provide a detailed description of the following dataset: Grep-BiasIR |
FIG-Loneliness | **FIG-Loneliness** (**FIne-Grained Loneliness**) is a dataset collected by using Reddit posts in two young adult-focused forums and two loneliness related forums consisting of a diverse age group. Annotations by trained human annotators for binary and fine-grained loneliness classifications of the posts are provided. | Provide a detailed description of the following dataset: FIG-Loneliness |
IKEA Object State Dataset | **IKEA Object State Dataset** is a new dataset that contains IKEA furniture 3D models, RGBD video of the assembly process, the 6DoF pose of furniture parts and their bounding box. | Provide a detailed description of the following dataset: IKEA Object State Dataset |
KazNERD | **KazNERD** is a dataset for Kazakh named entity recognition. The dataset was built as there is a clear need for publicly available annotated corpora in Kazakh, as well as annotation guidelines containing straightforward--but rigorous--rules and examples. The dataset annotation, based on the IOB2 scheme, was carried out on television news text by two native Kazakh speakers under the supervision of the first author. The resulting dataset contains 112,702 sentences and 136,333 annotations for 25 entity classes. | Provide a detailed description of the following dataset: KazNERD |
PhoMT | **PhoMT** is a high-quality and large-scale Vietnamese-English parallel dataset of 3.02M sentence pairs for machine translation. | Provide a detailed description of the following dataset: PhoMT |
HS-BAN | **HS-BAN** is a binary-class hate speech (HS) dataset in the Bangla language consisting of more than 50,000 labeled comments, of which 40.17% are hate speech and the rest are non-hate speech. | Provide a detailed description of the following dataset: HS-BAN |
PASTRIE | **Prepositions Annotated with Supersense Tags in Reddit International English** (**PASTRIE**) is a new corpus containing manually annotated preposition supersenses of English data from presumed speakers of four L1s: English, French, German, and Spanish. | Provide a detailed description of the following dataset: PASTRIE |
COPA-SSE | **Semi-Structured Explanations for COPA** (**COPA-SSE**) is a new crowdsourced dataset of 9,747 semi-structured, English common sense explanations for COPA questions. The explanations are formatted as a set of triple-like common sense statements with ConceptNet relations but freely written concepts. This semi-structured format strikes a balance between the high quality but low coverage of structured data and the lower quality but high coverage of free-form crowdsourcing. Each explanation also includes a set of human-given quality ratings. With their familiar format, the explanations are geared towards commonsense reasoners operating on knowledge graphs and serve as a starting point for ongoing work on improving such systems. | Provide a detailed description of the following dataset: COPA-SSE |
The People’s Speech | **The People's Speech** is a free-to-download 30,000-hour and growing supervised conversational English speech recognition dataset licensed for academic and commercial usage under CC-BY-SA (with a CC-BY subset). The data is collected via searching the Internet for appropriately licensed audio data with existing transcriptions. | Provide a detailed description of the following dataset: The People’s Speech |
Korean Table Question Answering | The Korean tabular dataset is a collection of 1.4M tables with corresponding descriptions for unsupervised pre-training of language models. The Korean table question answering corpus consists of 70k pairs of questions and answers created by crowd-sourced workers. | Provide a detailed description of the following dataset: Korean Table Question Answering |
EventNarrative | **EventNarrative** is a knowledge graph-to-text dataset from publicly available open-world knowledge graphs. EventNarrative consists of approximately 230,000 graphs and their corresponding natural language text. | Provide a detailed description of the following dataset: EventNarrative |
Synthetic Visual Inspections | Synthetic visual inspection data of structural elements in bridges. The data is generated using the OpenIPDM toolbox "Generate Synthetic Dataset". For further details about the data generation and the properties of the dataset, refer to the software manual at [https://github.com/CivML-PolyMtl/OpenIPDM/blob/main/Help](https://github.com/CivML-PolyMtl/OpenIPDM/blob/main/Help) | Provide a detailed description of the following dataset: Synthetic Visual Inspections |
PP-HumanSeg14K | A large-scale video portrait dataset that contains 291 videos from 23 conference scenes with 14K frames. This dataset covers various teleconferencing scenes, various participant actions, interference from passers-by, and illumination changes. | Provide a detailed description of the following dataset: PP-HumanSeg14K |
MMAC Captions | We provide a dataset called MMAC Captions for sensor-augmented egocentric-video captioning. The dataset contains 5,002 activity descriptions created by extending the [CMU-MMAC dataset](http://kitchen.cs.cmu.edu/index.php). A number of activity description examples can be found on the homepage. | Provide a detailed description of the following dataset: MMAC Captions |
SLAKE-English | English subset of the SLAKE dataset, comprising 642 images and more than 7,000 question–answer pairs. | Provide a detailed description of the following dataset: SLAKE-English |
ARKitScenes | **ARKitScenes** is an RGB-D dataset captured with the widely available Apple LiDAR scanner. Along with the per-frame raw data (Wide Camera RGB, Ultra Wide Camera RGB, LiDAR scanner depth, IMU), the authors also provide the estimated ARKit camera pose and the ARKit scene reconstruction for each iPad Pro sequence. In addition to the raw and processed data from the mobile device, ARKitScenes includes high-resolution depth maps captured using a stationary laser scanner, as well as manually labeled 3D oriented bounding boxes for a large taxonomy of furniture. | Provide a detailed description of the following dataset: ARKitScenes |
LasHeR | **LasHeR** consists of 1,224 visible and thermal infrared video pairs with more than 730K frame pairs in total. Each frame pair is spatially aligned and manually annotated with a bounding box, making the dataset well and densely annotated. LasHeR is highly diverse, capturing a broad range of object categories, camera viewpoints, scene complexities and environmental factors across seasons, weather conditions, day and night. | Provide a detailed description of the following dataset: LasHeR |
Symmetry-OOD | **Symmetry-OOD** is a dataset for symmetry perception by deep neural networks. | Provide a detailed description of the following dataset: Symmetry-OOD |
Urdu Online Reviews | This corpus was constructed by collecting 10,008 reviews from various domains, including sports, food, software, politics, and entertainment. Human annotators manually tagged the reviews into positive (n = 3662), negative (n = 2619), and neutral (n = 3727) categories. | Provide a detailed description of the following dataset: Urdu Online Reviews |
CVGL Camera Calibration Dataset | The dataset has been generated using Town 1 and Town 2 of the CARLA Simulator. The dataset consists of 50 camera configurations, with each town having 25 configurations. The parameters modified for generating the configurations include fov, x, y, z, pitch, yaw, and roll. Here, fov is the field of view, (x, y, z) is the translation, and (pitch, yaw, roll) is the rotation between the cameras. The total number of image pairs is 123,017, of which 58,596 belong to Town 1 and 64,421 to Town 2; the difference in the number of images is due to the length of the tracks. | Provide a detailed description of the following dataset: CVGL Camera Calibration Dataset |
Tsinghua-Daimler Cyclist Benchmark | The Tsinghua-Daimler Cyclist Benchmark provides a benchmark dataset for cyclist detection. Bounding Box based labels are provided for the classes: ("pedestrian", "cyclist", "motorcyclist", "tricyclist", "wheelchairuser", "mopedrider"). | Provide a detailed description of the following dataset: Tsinghua-Daimler Cyclist Benchmark |
Data and Material for 'Interpersonal Conflicts During Code Review' | ## About the study
This study explored the landscape of interpersonal conflicts during code review in the following areas:
- what these conflicts look like
- what role they play in software development
- what their consequences are
- what factors play a role in their appearance and severity
- what strategies can be used to prevent and manage conflicts
## Methodology
The study collected 22 interviews with developers. Using qualitative thematic analysis, the authors analysed the anonymised interviews for the form and context of interpersonal conflicts during code review and potential strategies to prevent and manage them. The analysis was conducted in NVivo 12, a software package for qualitative analysis.
To support the validity of the analysis, it was revised through an "Audit Trail"-like process. The analysis, study methodology, and results were examined by a senior researcher within the team. The files included in the audit are included in this folder as well.
## Dataset Content
This folder contains online material for the study clarifying and extending information for reviewers and readers interested in the methodology.
- `README.txt` - information on the content and purpose of this online material
### The Analysis
- `conflicts_analysis.nvpx` - NVivo 12 file containing the complete results, transcripts, codings and definitions used and created throughout the analysis
- `definitions.pdf` - definitions of the final higher themes identified in the analysis
- `codes.pdf` - complete list of codes and themes coded during the analysis
### Supplementary documents:
- `sample_descriptives.pdf` - Description of individual participants and their characteristics
- `interview_structure.pdf` - Structure of the interviews conducted in the study
- `participant_consent_form.pdf` - Participant Consent that has been signed by the participants in the study
- `transcripts.pdf` - a file containing the complete set of anonymised transcripts
- `Audit` - Folder containing files submitted for the Audit Trail, except for files that remained constant throughout the study, such as the Participant Consent or the transcripts of the interviews.
## Audit Trail
An audit trail is a procedure to validate the results of a qualitative analysis. It requires the researcher to provide detailed information on how they conducted the analysis to auditors external to the analysis. The goal of a formal audit is to examine both the process and product of the inquiry, and to determine the trustworthiness of the findings. This folder contains the files needed to perform the audit trail for the study.
### Goal
The goal of the audit is:
- to get acquainted with the methodology and results of the analysis and related documentation
- to review whether the results of the analysis are a good representation of the data
- to review that the analysis does not contradict knowledge available to software engineering, unless well supported
- to review whether the results are useful for the community
- to control for issues and shortcomings of the analysis
### Files
The "Audit" folder contains following files (We recommend reviewing them in this order):
- analysisplan_results.pdf - text summarising background, methodology and results of the study
- results - folder with the latex files to generate analysisplan_results.pdf
- memos.pdf - researcher notes on issues and decision making during the analysis
- definitions_audit.pdf - definitions of higher level themes included in the results report
- codes_audit.pdf - extended definitions file containing all the codes and themes
- conflicts_analysis_audit.nvpx - NVivo file with complete coding | Provide a detailed description of the following dataset: Data and Material for 'Interpersonal Conflicts During Code Review' |
DKhate | A corpus for offensive language and hate speech detection in Danish.
The DKhate dataset contains 3,600 comments from the web, annotated for offensive language following the Zampieri et al. / OLID scheme.
Submissions and benchmarks for the OffensEval 2020 Danish track are also included. | Provide a detailed description of the following dataset: DKhate |
Broad Twitter Corpus | This paper introduces the Broad Twitter Corpus (BTC), which is not only significantly bigger, but sampled across different regions, temporal periods, and types of Twitter users. The gold-standard named entity annotations are made by a combination of NLP experts and crowd workers, which enables us to harness crowd recall while maintaining high quality. We also measure the entity drift observed in our dataset (i.e. how entity representation varies over time), and compare to newswire. | Provide a detailed description of the following dataset: Broad Twitter Corpus |
DanFEVER | We present a dataset, DANFEVER, intended for claim verification in Danish. The dataset builds upon the task framing of the FEVER fact extraction and verification challenge. DANFEVER can be used for creating models for detecting mis- & disinformation in Danish as well as for verification in multilingual settings. | Provide a detailed description of the following dataset: DanFEVER |
DAGW | It’s hard to develop good tools for processing Danish with computers when no large and wide-coverage dataset of Danish text is readily available. To address this, the Danish Gigaword Project (DAGW) maintains a corpus for Danish with over a billion words. The general goals are to create a dataset that is:
* representative;
* accessible;
* a suitable common starting point for Danish NLP models. | Provide a detailed description of the following dataset: DAGW |
NASA Crew Exploration Vehicle (CEV) Software Event Log | Extensible Event Stream (XES) software event log obtained through instrumenting the NASA CEV class using the tool available at {https://svn.win.tue.nl/repos/prom/XPort/}. This event log contains method-call level events describing a single run of an exhaustive unit test suite for the Crew Exploration Vehicle (CEV) example available and documented at {http://babelfish.arc.nasa.gov/trac/jpf/wiki/projects/jpf-statechart} (trac) {http://babelfish.arc.nasa.gov/hg/jpf/jpf-statechart} (mercurial repository). Note that the life-cycle information in this log corresponds to method call (start) and return (complete), and captures a method-call hierarchy. We attached a slightly preprocessed variant of this event log, where the execution of each unit test method is represented as a separate trace. | Provide a detailed description of the following dataset: NASA Crew Exploration Vehicle (CEV) Software Event Log |
FloorPlanCAD | **FloorPlanCAD** is a large-scale real-world CAD drawing dataset containing over 15,000 floor plans, ranging from residential to commercial buildings. | Provide a detailed description of the following dataset: FloorPlanCAD |
DABS | **DABS** is a domain-agnostic benchmark for self-supervised learning to encourage research and progress towards domain-agnostic methods. | Provide a detailed description of the following dataset: DABS |
CamGes | The size of the data set is about 1 GB.
The data set consists of 900 image sequences of 9 gesture classes, which are defined by 3 primitive hand shapes and 3 primitive motions. Therefore, the target task for this data set is to classify different shapes and different motions simultaneously. | Provide a detailed description of the following dataset: CamGes |
AWARE | The peer-reviewed paper on the AWARE dataset was published at ASEW 2021 and can be accessed at: http://doi.org/10.1109/ASEW52652.2021.00049. Kindly cite this paper when using the AWARE dataset.
Aspect-Based Sentiment Analysis (ABSA) aims to identify the opinion (sentiment) with respect to a specific aspect. Since there is a lack of smartphone apps reviews dataset that is annotated to support the ABSA task, we present AWARE: ABSA Warehouse of Apps REviews.
AWARE contains apps reviews from three different domains (Productivity, Social Networking, and Games), as each domain has its distinct functionalities and audience. Each sentence is annotated with three labels, as follows:
- Aspect Term: a term that exists in the sentence and describes an aspect of the app that is expressed by the sentiment. A term value of “N/A” means that the term is not explicitly mentioned in the sentence.
- Aspect Category: one of a pre-defined set of domain-specific categories that represent an aspect of the app (e.g., security, usability, etc.).
- Sentiment: positive or negative.
Note: the games domain does not contain aspect terms.
We provide a comprehensive dataset of 11,323 sentences from the three domains, where each sentence is additionally annotated with a Boolean value indicating whether the sentence expresses a positive/negative opinion. In addition, we provide three separate datasets, one for each domain, containing only sentences that express opinions. The file named “AWARE_metadata.csv” contains a description of the dataset’s columns.
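For example, a minimal sketch of slicing the data with pandas (the filename and column names below are assumptions made for illustration; consult “AWARE_metadata.csv” for the actual column layout):
```python
import pandas as pd

df = pd.read_csv("AWARE.csv")  # hypothetical filename for the combined dataset

# Column names are assumptions based on the labels described above.
opinions = df[df["is_opinion"]]                        # sentences expressing an opinion
implicit = opinions[opinions["aspect_term"] == "N/A"]  # aspect not explicitly mentioned
print(opinions["sentiment"].value_counts())            # positive vs. negative counts
```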
How can AWARE be used?
We designed AWARE such that it can serve various tasks. The tasks include, but are not limited to:
- Sentiment Analysis
- Aspect Term Extraction
- Aspect Category Classification
- Aspect Sentiment Analysis
- Explicit/Implicit Aspect Term Classification
- Opinion/Not-Opinion Classification
Furthermore, researchers can experiment with and investigate the effects of different domains on users' feedback. | Provide a detailed description of the following dataset: AWARE |
ManyTypes4TypeScript | [](https://doi.org/10.5281/zenodo.6336113) Type Inference dataset for TypeScript. Click on DOI tag for dataset files. | Provide a detailed description of the following dataset: ManyTypes4TypeScript |
Survey answers | Please see paper for questions. These are the answers to the surveys, processed and included in the paper via knitr | Provide a detailed description of the following dataset: Survey answers |
MARIDA | **MARIDA** (**Marine Debris Archive**) is the first dataset based on the multispectral Sentinel-2 (S2) satellite data, which distinguishes Marine Debris from various marine features that co-exist, including Sargassum macroalgae, Ships, Natural Organic Material, Waves, Wakes, Foam, dissimilar water types (i.e., Clear, Turbid Water, Sediment-Laden Water, Shallow Water), and Clouds. MARIDA is an open-access dataset which enables the research community to explore the spectral behaviour of certain floating materials, sea state features and water types, to develop and evaluate Marine Debris detection solutions based on artificial intelligence and deep learning architectures, as well as satellite pre-processing pipelines. Although it is designed to be beneficial for several machine learning tasks, it primarily aims to benchmark weakly supervised pixel-level semantic segmentation learning methods.
MARIDA can be downloaded from the repository Zenodo ([https://doi.org/10.5281/zenodo.5151941](https://zenodo.org/record/5151941#.YfFZ_PXP30o)). A quick start guide for all ML benchmarks and the detailed overview of the dataset are available at [https://marine-debris.github.io/](https://marine-debris.github.io/). | Provide a detailed description of the following dataset: MARIDA |
A ground-truth dataset to identify bots in GitHub | This is a ground-truth dataset used to identify bots in GitHub. Each account in this dataset was rated by at least 3 raters with high interrater agreement. | Provide a detailed description of the following dataset: A ground-truth dataset to identify bots in GitHub |
ISBNet | ISBNet is a dataset of images of recyclables. It was hand-collected by our group at the International School of Beijing. The trash in these images was gathered from trash bins around the school. ISBNet totals 889 images distributed across 5 classes: cans (74), landfill (410), paper (182), plastic (122), and tetra pak (101). The data acquisition process involved using a piece of black poster paper as a background; this creates enough contrast for trash belonging to the paper category. These pictures were taken with an iPhone 8 and an iPhone XS. We recorded the trash bin from which each piece of trash originated and any trash-generating landmarks nearby. Please refer to the paper (ThanosNet: A Novel Trash Classification Method Using Metadata) for more about the format of the metadata. | Provide a detailed description of the following dataset: ISBNet |
TUH EEG Seizure Corpus | Our goal is to enable deep learning research in neuroscience by releasing the largest publicly available unencumbered database of EEG recordings. This ongoing project currently includes over 30,000 EEGs spanning the years from 2002 to present. Data collected can be used for both research and commercialization purposes.
Iyad Obeid and Joseph Picone. The Temple University Hospital EEG data corpus. Frontiers in Neuroscience, 10:196, 2016. | Provide a detailed description of the following dataset: TUH EEG Seizure Corpus |
PPG Dalia | PPG-DaLiA is a publicly available dataset for PPG-based heart rate estimation. This multimodal dataset features physiological and motion data, recorded from both a wrist- and a chest-worn device, of 15 subjects while performing a wide range of activities under close to real-life conditions. The included ECG data provides heart rate ground truth. The included PPG- and 3D-accelerometer data can be used for heart rate estimation, while compensating for motion artefacts. | Provide a detailed description of the following dataset: PPG Dalia |
NPSC | The Norwegian Parliamentary Speech Corpus (NPSC) is a speech corpus made by the Norwegian Language Bank at the National Library of Norway in 2019-2021. The NPSC consists of recordings of speech from Stortinget, the Norwegian parliament, and corresponding orthographic transcriptions to Norwegian Bokmål and Norwegian Nynorsk. All transcriptions are done manually by trained linguists or philologists, and the manual transcriptions are subsequently proofread to ensure consistency and accuracy. Entire days of Parliamentary meetings are transcribed in the dataset. | Provide a detailed description of the following dataset: NPSC |
IGLUE | The **Image-Grounded Language Understanding Evaluation** (**IGLUE**) benchmark brings together—by both aggregating pre-existing datasets and creating new ones—visual question answering, cross-modal retrieval, grounded reasoning, and grounded entailment tasks across 20 diverse languages. The benchmark enables the evaluation of multilingual multimodal models for transfer learning, not only in a zero-shot setting, but also in newly defined few-shot learning setups. | Provide a detailed description of the following dataset: IGLUE |
ModelNet40-C | ModelNet40-C is a comprehensive dataset to benchmark the corruption robustness of 3D point cloud recognition.
We create ModelNet40-C based on the ModelNet40 validation set, with 15 corruption types and 5 severity levels per corruption type, covering density, noise, and transformation corruption patterns. Our dataset contains 185,000 distinct point clouds that help provide a comprehensive picture of model robustness. | Provide a detailed description of the following dataset: ModelNet40-C |
Expressive Gaussian mixture models for high-dimensional statistical modelling: simulated data and neural network model files | Neural network model files and Madgraph event generator outputs used as inputs to the results presented in the paper "Learning to discover: expressive Gaussian mixture models for multi-dimensional simulation and parameter inference in the physical sciences" arXiv:2108.11481; 2022 Mach. Learn.: Sci. Technol. 3 015021
Code and model files can be found at:
https://github.com/darrendavidprice/science-discovery/tree/master/expressive_gaussian_mixture_models | Provide a detailed description of the following dataset: Expressive Gaussian mixture models for high-dimensional statistical modelling: simulated data and neural network model files |
Simulated EM showers data | Simulated dataset of electromagnetic (EM) showers. The data contains 16,577 showers. For each tracklet, the data includes position coordinates, direction, and shower id; for each shower, it includes the shower id, the initial particle position and direction, and the shower energy.
Generation is done using the FairShip framework. Shower energies follow a gamma distribution with parameters alpha = 1.4, beta = 0.5. The polar angle is simulated using a log-normal distribution with parameters nu = 0.3, sigma = 0.7.
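A minimal sketch of sampling from these distributions with NumPy (assuming beta is a rate parameter, so scale = 1/beta, and that nu and sigma parameterize the underlying normal; both are assumptions, as the parameterization is not stated):
```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_showers = 16_577

# Gamma-distributed shower energies; beta = 0.5 is assumed to be a rate, so scale = 1 / beta.
energies = rng.gamma(shape=1.4, scale=1 / 0.5, size=n_showers)

# Log-normally distributed polar angles; nu = 0.3 and sigma = 0.7 describe the underlying normal.
polar_angles = rng.lognormal(mean=0.3, sigma=0.7, size=n_showers)

print(energies.mean(), polar_angles.mean())
```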
The data is in native CSV format. The total dataset size is 669.2 MB. | Provide a detailed description of the following dataset: Simulated EM showers data |
Validation Dataset | AlgorithmComparison:
Comparison of algorithms on benchmark test cases. Details are included in the paper. 10 cases for each algorithm / benchmark test. The Optimum.txt file includes the history of the best optimum, and the SamplePointsResults.txt file contains results for all the black-box function evaluations. The last column represents the objective value. GlobalOptimum.txt contains the global optimum for that specific test case.
AcquisitionFunctionComparison:
Comparison of acquisition functions within the MixMOBO framework. 10 cases for each acquisition function / benchmark test. The file structure is similar to AlgorithmComparison, except for ZDT6, where the Optimum.txt file represents the current Pareto-optimal solution.
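A minimal sketch for reading one of the result files (whitespace-delimited columns are an assumption; only the last-column convention is stated above):
```python
import numpy as np

# One row per black-box function evaluation; the last column holds the objective value.
results = np.loadtxt("SamplePointsResults.txt")
objectives = results[:, -1]

# min() assumes a minimization problem; use max() for maximization benchmarks.
print("Best objective found:", objectives.min())
```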
ArchitectedMaterialOptimization:
The optimization progress history of the architected material optimization is given in the Optimum.txt file. -1, 0, 1, 2 represent unit cells A, B, C, D. The last value represents the buckling load, -1*P_c. | Provide a detailed description of the following dataset: Validation Dataset |
QALD-9-Plus | # QALD-9-Plus Dataset Description
[QALD-9-Plus](https://github.com/Perevalov/qald_9_plus) is the dataset for Knowledge Graph Question Answering (KGQA) based on well-known [QALD-9](https://github.com/ag-sc/QALD/tree/master/9/data).
QALD-9-Plus makes it possible to train and test KGQA systems over DBpedia and Wikidata using questions in 9 different languages: English, German, Russian, French, Armenian, Belarusian, Lithuanian, Bashkir, and Ukrainian.
Some of the questions have several alternative formulations in particular languages, which makes it possible to evaluate the robustness of KGQA systems and to train paraphrasing models.
As the questions' translations were provided by native speakers, they are considered a "gold standard"; therefore, machine translation tools can be trained and evaluated on the dataset.
# Dataset Statistics
| | en | de | fr | ru | uk | lt | be | ba | hy | # questions DBpedia | # questions Wikidata |
|-------|:---:|:---:|:--:|:----:|:---:|:---:|:---:|:---:|:--:|:-----------:|:-----------:|
| Train | 408 | 543 | 260 | 1203 | 447 | 468 | 441 | 284 | 80 | 408 | 371 |
| Test | 150 | 176 | 26 | 348 | 176 | 186 | 155 | 117 | 20 | 150 | 136 |
Given the numbers, it is obvious that some of the languages are covered more than once, i.e., there is more than one translation for a particular question.
For example, there are 1203 Russian translations available while only 408 unique questions exist in the training subset (i.e., 2.9 Russian translations per question).
The availability of such parallel corpora enables researchers, developers and other dataset users to address the paraphrasing task.
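For instance, a minimal sketch that reproduces the per-language counts, assuming the files follow the standard QALD JSON layout (a top-level `questions` list whose entries carry per-language `question` strings; the filename is hypothetical):
```python
import json
from collections import Counter

with open("qald_9_plus_train_wikidata.json", encoding="utf-8") as f:  # hypothetical filename
    data = json.load(f)

counts = Counter(
    variant["language"]
    for question in data["questions"]
    for variant in question["question"]
)
print(counts)  # expected to roughly match the table above, e.g. ru: 1203, en: 408
```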
| Provide a detailed description of the following dataset: QALD-9-Plus |
SuperMUDI | The Super-resolution of Multi-Dimensional Diffusion MRI (Super MUDI) dataset contains the data of four healthy human subjects with ages ranging between 19 and 46 years. For each subject, 1,344 MRI volumes are provided. The imaging device was a clinical 3T Philips Achieva scanner (Best, Netherlands) with a 32-channel adult head coil.
The Super MUDI Challenge comprises two tasks: isotropic and anisotropic super-resolution. The names of these tasks were derived from the acquisition strategies of the low-resolution MRI data. The objective of using two down-sampling strategies is to compare the combinations of down-sampling methods and super-resolution approaches that can best be used in a clinical scheme to obtain simulated high-quality and high-fidelity MRI images while reducing the acquisition time. In the anisotropic subsampling the volume has high in-plane resolution (2.5 mm × 2.5 mm) but a thick axial slice (5 mm), while in the isotropic subsampling the volume has low resolution (5 mm) in all directions.
For our experiments, we use one subject each for training and validation, and two for testing.
Reference: Marco Pizzolato, Marco Palombo, Jana Hutter, Vishwesh Nash, Fan Zhang, and Noemi Gyori, “Super-resolution of Multi Dimensional Diffusion MRI data,” Mar. 2020 | Provide a detailed description of the following dataset: SuperMUDI |
Violent-Flows | Crowd Violence / Non-violence Database and benchmark: a database of real-world video footage of crowd violence, along with standard benchmark protocols designed to test both violent/non-violent classification and violence outbreak detection. The data set contains 246 videos. All the videos were downloaded from YouTube. The shortest clip duration is 1.04 seconds, the longest clip is 6.52 seconds, and the average length of a video clip is 3.60 seconds.
Introduced in:
Tal Hassner, Yossi Itcher, and Orit Kliper-Gross, Violent Flows: Real-Time Detection of Violent Crowd Behavior, 3rd IEEE International Workshop on Socially Intelligent Surveillance and Monitoring (SISM) at the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Rhode Island, June 2012. | Provide a detailed description of the following dataset: Violent-Flows |
WikiConvert | Wiki-Convert is a dataset of more than 900,000 sentences with precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of the {{Convert}} template. | Provide a detailed description of the following dataset: WikiConvert |
CVIT PIB | We present sentence-aligned parallel corpora across 10 Indian languages - Hindi, Telugu, Tamil, Malayalam, Gujarati, Urdu, Bengali, Oriya, Marathi, and Punjabi - and English; many of these are categorized as low resource. The corpora are compiled from online sources which have content shared across languages. The corpora presented significantly extend existing resources, which are either not large enough or are restricted to a specific domain (such as health). We also provide a separate test corpus, compiled from an independent online source, that can be used for validating performance in the 10 Indian languages. Alongside, we report on the methods of constructing such corpora using tools enabled by recent advances in machine translation and cross-lingual retrieval using deep neural network based methods. | Provide a detailed description of the following dataset: CVIT PIB |
AMR3.0 | Abstract Meaning Representation (AMR) Annotation Release 3.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University of Colorado's Computational Language and Educational Research group and the Information Sciences Institute at the University of Southern California. It contains a sembank (semantic treebank) of 59,255 English natural language sentences from broadcast conversations, newswire, weblogs, web discussion forums, fiction and web text. | Provide a detailed description of the following dataset: AMR3.0 |
ShapeNetCore | ShapeNetCore is a subset of the full ShapeNet dataset with single clean 3D models and manually verified category and alignment annotations. It covers 55 common object categories with about 51,300 unique 3D models. The 12 object categories of PASCAL 3D+, a popular computer vision 3D benchmark dataset, are all covered by ShapeNetCore. | Provide a detailed description of the following dataset: ShapeNetCore |
AbdomenCT-1K | - We present a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases.
- Furthermore, we conduct a large-scale study for liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of the SOTA methods, such as the limited generalization ability on distinct medical centers, phases, and unseen diseases.
- To advance the unsolved problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging and active research topics. Accordingly, we develop a simple and effective method for each benchmark, which can be used as out-of-the-box methods and strong baselines.
- We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinically applicable abdominal organ segmentation methods. | Provide a detailed description of the following dataset: AbdomenCT-1K |
CodeContests | CodeContests is a competitive programming dataset for machine learning. This dataset was used when training [AlphaCode](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode).
It consists of programming problems, from a variety of sources.
Problems include test cases in the form of paired inputs and outputs, as well as both correct and incorrect human solutions in a variety of languages. | Provide a detailed description of the following dataset: CodeContests |
NR-HCPI | NR-HCPI (Non-redundant Human CPI dataset) | Provide a detailed description of the following dataset: NR-HCPI |
MedVidQA | The MedVidQA dataset contains a collection of 3,010 manually created health-related questions and timestamps as visual answers to those questions, from trusted video sources such as accredited medical schools with an established reputation, health institutes, health education, and medical practitioners. | Provide a detailed description of the following dataset: MedVidQA |
MedVidCL (Medical Video Classification) | The MedVidCL dataset contains a collection of 6,617 videos annotated into ‘medical instructional’, ‘medical non-instructional’ and ‘non-medical’ classes. A two-step approach is used to construct the MedVidCL dataset. In the first step, the videos annotated by health informatics experts are used to train a machine learning model that predicts the given video to one of the three aforementioned classes. In the second step, only the high-confidence videos are used and health informatics experts assess the model’s predicted video category and update the category wherever needed. | Provide a detailed description of the following dataset: MedVidCL (Medical Video Classification) |
Illness-dataset | A dataset for evaluating text classification, domain adaptation, and active learning models. The dataset consists of 22,660 documents (tweets) collected in 2018 and 2019. It spans four domains: Alzheimer's, Parkinson's, Cancer, and Diabetes. | Provide a detailed description of the following dataset: Illness-dataset |
IEEE-CIS 3rd Technical Challenge | The IEEE Computational Intelligence Society ran a competition from July to November 2021 for predicting and optimizing based on renewable energy data.
The data was from six buildings and six solar installations at Monash University's Clayton campus in Melbourne, Victoria, Australia.
The data was at 15 minute resolution and from the years 2016 to 2020.
Competitors had to predict solar generation and building electricity usage for the months of October and November 2020, given perfect weather forecasts from the Australian BOM and European ECMWF.
Then, they had to submit a schedule for classes to minimize electricity cost based on peak demand and (known) electricity prices for the month.
Citation: C. Bergmeir, F. de Nijs et al. "IEEE-CIS technical challenge on predict+optimize for renewable energy scheduling," 2021. [Online]. Available: https://dx.doi.org/10.21227/1x9c-0161 | Provide a detailed description of the following dataset: IEEE-CIS 3rd Technical Challenge |
LARa | LARa is the first freely accessible logistics dataset for human activity recognition. In the 'Innovationlab Hybrid Services in Logistics' at TU Dortmund University, two picking and one packing scenarios with 14 subjects were recorded using OMoCap, IMUs, and an RGB camera. 758 minutes of recordings were labeled by 12 annotators in 474 person-hours. The subsequent revision was carried out by 4 revisers in 143 person-hours. All the given data have been labeled and categorised into 8 activity classes and 19 binary coarse-semantic descriptions, also called attributes. | Provide a detailed description of the following dataset: LARa |
Topic modeling topic coverage dataset | A prevalent use case of topic models is that of topic discovery. However, most topic model evaluation methods rely on abstract metrics such as perplexity or topic coherence. The topic coverage approach is to measure the models' performance by matching model-generated topics to topics discovered by humans. This way, the models are evaluated in the context of their use, by essentially simulating topic modeling in a fixed setting defined by a text collection and a set of reference topics. Reference topics represent a ground truth that can be used to evaluate both topic models and other measures of model performance. The coverage approach enables large-scale automatic evaluation of both existing and future topic models.
The topic coverage dataset consists of two text collections and two sets of reference topics. These two sub-datasets correspond to two domains (news text and biological text) where topic models are used for topic discovery in large text collections. The reference topics consist of model-generated topics inspected, selected, and curated by humans. Each dataset contains a corpus of preprocessed (tokenized) texts and a set of reference topics, each represented by a list of words and text documents.
The dataset details, including the instructions for the use of the data and supporting code, are here: https://github.com/dkorenci/topic_coverage/blob/main/data.readme.txt
The coverage measures that can be used to evaluate topic models are described in the accompanying paper, whereas the code and the instructions can be found in the GitHub repo. | Provide a detailed description of the following dataset: Topic modeling topic coverage dataset |
OpenML-CC18 | We advocate the use of curated, comprehensive benchmark suites of machine learning datasets, backed by standardized OpenML-based interfaces and complementary software toolkits written in Python, Java and R. Major distinguishing features of OpenML benchmark suites are (i) ease of use through standardized data formats, APIs, and existing client libraries; (ii) machine-readable meta-information regarding the contents of the suite; and (iii) online sharing of results, enabling large scale comparisons. As a first such suite, we propose the OpenML-CC18, a machine learning benchmark suite of 72 classification datasets carefully curated from the thousands of datasets on OpenML.
The inclusion criteria are:
* classification tasks on dense data sets
* independent observations
* number of classes >= 2, each class with at least 20 observations and ratio of minority to majority class must exceed 5%
* 500 <= number of observations <= 100000
* number of features after one-hot-encoding < 5000
* no artificial data sets
* no subsets of larger data sets nor binarizations of other data sets
* no data sets which are perfectly predictable by using a single feature or by using a simple decision tree
* source or reference available
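The suite can also be retrieved programmatically; a minimal sketch with the `openml` Python client (API calls as documented by OpenML; downloading tasks requires network access):
```python
import openml

# Fetch the curated suite by its alias and iterate over its 72 classification tasks.
suite = openml.study.get_suite("OpenML-CC18")
for task_id in suite.tasks:
    task = openml.tasks.get_task(task_id)
    X, y = task.get_X_and_y()  # features and labels of the underlying dataset
    print(task_id, X.shape)
```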
If you use this benchmarking suite, please cite:
Bernd Bischl, Giuseppe Casalicchio, Matthias Feurer, Frank Hutter, Michel Lang, Rafael G. Mantovani, Jan N. van Rijn and Joaquin Vanschoren. “OpenML Benchmarking Suites” arXiv:1708.03731v2 [stat.ML] (2019).
```
@article{oml-benchmarking-suites,
title={OpenML Benchmarking Suites},
author={Bernd Bischl and Giuseppe Casalicchio and Matthias Feurer and Frank Hutter and Michel Lang and Rafael G. Mantovani and Jan N. van Rijn and Joaquin Vanschoren},
year={2019},
journal={arXiv:1708.03731v2 [stat.ML]}
}
``` | Provide a detailed description of the following dataset: OpenML-CC18 |
NDPSID - WACV 2019 | This database offers iris images (with and without contact lenses) of the same eyes captured shortly one after another with illumination coming from two different locations. 5,796 iris images in total were acquired by the LG IrisAccess 4000 sensor from 119 subjects. This set is divided into four subsets used in the experiments: (a) 1,800 images of irises wearing regular (with dot-like pattern) textured contact lenses, as shown in Fig. 6a in the WACV 2019 paper; (b) 864 images of irises wearing irregular (without dot-like pattern) textured contact lenses, as shown in Fig. 6b in the WACV 2019 paper; (c) 1,728 images of irises wearing clear contact lenses (without any visible pattern), and (d) 1,404 images of authentic irises without any contact lenses. | Provide a detailed description of the following dataset: NDPSID - WACV 2019 |
CAT: Context Adjustment Training | CAT is a specialized dataset for co-saliency detection. This dataset is intended for both helping to assess the performance of vision algorithms and supporting research that aims to exploit large volumes of annotated data, e.g., for training deep neural networks.
Scale & Features
- A total of 33,500 image samples.
- 280 semantic groups affiliated with 15 superclasses.
- High-quality mask annotations.
- Diverse visual context with multiple foreground objects. | Provide a detailed description of the following dataset: CAT: Context Adjustment Training |
Colored-MNIST(with spurious correlation) | This is a dataset with spurious correlations that can be used to evaluate machine learning methods for out-of-distribution generalization, causal inference, and related fields (a construction sketch follows this entry). | Provide a detailed description of the following dataset: Colored-MNIST(with spurious correlation) |
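A hypothetical construction sketch, modeled on the setup popularized by the Invariant Risk Minimization (IRM) paper; the binarized labels, 25% label noise, and per-environment color correlations below are illustrative assumptions, not a specification of this particular dataset:
```
# Hypothetical Colored-MNIST-style construction (numpy only). Assumed
# recipe: binarize the digit label, flip it with probability
# `label_noise`, then pick a red/green color that agrees with the
# noisy label with probability `color_corr`.
import numpy as np

def make_colored_mnist(images, labels, color_corr, label_noise=0.25, seed=0):
    rng = np.random.default_rng(seed)
    y = (labels < 5).astype(np.float64)                  # binarize digit label
    y = np.abs(y - (rng.random(len(y)) < label_noise))   # inject label noise
    # color agrees with the noisy label with probability color_corr
    color = np.abs(y - (rng.random(len(y)) < 1.0 - color_corr))
    out = np.stack([images, images], axis=1).astype(np.float64)  # (N, 2, H, W)
    out[np.arange(len(y)), (1 - color).astype(int)] = 0.0        # mute one channel
    return out, y

# e.g., two training environments with a strong spurious correlation and
# a test environment where it is reversed:
# envs = [make_colored_mnist(imgs, lbls, c) for c in (0.9, 0.8, 0.1)]
```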
JaQuAD | **JaQuAD** (Japanese Question Answering Dataset) is a question answering dataset in Japanese that consists of 39,696 extractive question-answer pairs on Japanese Wikipedia articles. | Provide a detailed description of the following dataset: JaQuAD |
Tool clustering dataset | # Tool Database for image-set clustering
This database was generated to evaluate a robotic application dealing with image-set clustering. The goal is to sort and store tools lying on a table in an unsupervised way, from pixel inputs. The pictures contain objects that can be found on a shop floor, and each picture contains only one object. There are five different conditions; the lighting and background change between conditions, and within each condition four pictures of each object are taken under different orientations. | Provide a detailed description of the following dataset: Tool clustering dataset |
EmoFilm | EmoFilm is a multilingual emotional speech corpus comprising 1,115 audio instances produced in the English, Italian, and Spanish languages. The audio clips (with a mean length of 3.5 sec. and std of 1.2 sec.) were extracted in wave format (uncompressed, mono, 48 kHz sample rate, 16-bit) from 43 films (originals in English and their dubbed Italian and Spanish versions). The genres comedy, drama, horror, and thriller were considered; the emotional states anger, contempt, happiness, fear, and sadness were taken into account. EmoFilm was presented at Interspeech 2018:
Emilia Parada-Cabaleiro, Giovanni Costantini, Anton Batliner, Alice Baird, and Björn Schuller (2018), Categorical vs Dimensional Perception of Italian Emotional Speech, in Proc. of Interspeech, Hyderabad, India, pp. 3638-3642. | Provide a detailed description of the following dataset: EmoFilm |
SES | Currently, an essential point in speech synthesis is addressing the variability of human speech. One of the main sources of this diversity is the emotional state of the speaker. Most of the recent work in this area has focused on the prosodic aspects of speech and on rule-based formant synthesis experiments. Even when adopting an improved voice source, we cannot achieve a smiling happy voice or the menacing quality of cold anger. For this reason, we have performed two experiments aimed at developing a concatenative emotional synthesiser, i.e., a synthesiser that can copy the quality of an emotional voice without an explicit mathematical model. | Provide a detailed description of the following dataset: SES |
AESI | The Athens Emotional States Inventory (AESI) addresses the development of ecologically valid procedures for collecting reliable and unbiased emotional data for computer interfaces with social and affective intelligence targeting patients with mental disorders. AESI covers the design, recording and validation of an audiovisual database for five emotional states: anger, fear, joy, sadness and neutral. The items of the AESI consist of sentences, each having content indicative of the corresponding emotion. Emotional content was assessed through a survey of 40 young participants with a questionnaire following a Latin-square design. The emotional sentences that were correctly identified by 85% of the participants were recorded in a soundproof room with microphones and cameras. A preliminary validation of AESI is performed through automatic emotion recognition experiments on speech. The resulting database contains 696 utterances in the Greek language recorded by 20 native speakers, with a total duration of approximately 28 minutes. Speech classification yields accuracy of up to 75.15% for automatically recognizing the emotions in AESI. These results indicate the usefulness of our approach for collecting emotional data with reliable content, balanced across classes and with reduced environmental variability. | Provide a detailed description of the following dataset: AESI |
Yeast colony morphologies | Data for the paper entitled Quantifying yeast colony morphologies with feature engineering from time-lapse photography by A. Goldschmidt et al. (https://arxiv.org/abs/2201.05259)
This project is a collaboration between Dudley Lab at the Pacific NW Research Institute and the J. Nathan Kutz group at the University of Washington.
Summary: Baker's yeast (Saccharomyces cerevisiae) is a model organism for studying the morphology that emerges at the scale of multi-cell colonies. To look at how morphology develops, we collect a dataset of time-lapse photographs of the growth of different strains of S. cerevisiae. | Provide a detailed description of the following dataset: Yeast colony morphologies |
ChEMBL v.27 | The standardised ChEMBL v.27 data set, originally taken from https://www.ebi.ac.uk/chembl/. The standardisation procedure is described in the HyFactor article, doi:10.26434/chemrxiv-2021-18x0d | Provide a detailed description of the following dataset: ChEMBL v.27 |
MOSES | The set is based on the ZINC Clean Leads collection. It contains 4,591,276 molecules in total, filtered by molecular weight in the range from 250 to 350 Daltons, a number of rotatable bonds not greater than 7, and XlogP less than or equal to 3.5. We removed molecules containing charged atoms or atoms besides C, N, S, O, F, Cl, Br, H or cycles longer than 8 atoms. The molecules were filtered via medicinal chemistry filters (MCFs) and PAINS filters.
After filtering, the dataset contains 1,936,962 molecular structures. For experiments, we split the dataset into training, test, and scaffold test sets containing around 1.6M, 176k, and 176k molecules, respectively. The scaffold test set contains unique Bemis-Murcko scaffolds that were not present in the training and test sets. We use this set to assess how well the model can generate previously unobserved scaffolds. | Provide a detailed description of the following dataset: MOSES |
BDD100K-weather(OOD Setting) | BDD100K-weather is a dataset derived from BDD100K using its image attribute labels, intended for out-of-distribution object detection. All images in BDD100K are categorized into six weather domains: clear, overcast, foggy, partly cloudy, rainy and snowy. Clear and overcast are used for training while the remaining four are used for testing; at most 1.5k images are sampled per training domain and at most 0.5k images per testing domain. The result is BDD100K-weather (paper is under review). | Provide a detailed description of the following dataset: BDD100K-weather(OOD Setting) |
CoAuthor | **CoAuthor** is a dataset designed to reveal GPT-3's capabilities in assisting with creative and argumentative writing. CoAuthor captures rich interactions between 63 writers and four instances of GPT-3 across 1,445 writing sessions. | Provide a detailed description of the following dataset: CoAuthor |
Met | The **Met** dataset is a large-scale dataset for Instance-Level Recognition (ILR) in the artwork domain. It relies on the open access collection from the Metropolitan Museum of Art (The Met) in New York to form the training set, which consists of about 400k images from more than 224k classes, with artworks of world-level geographic coverage and chronological periods dating back to the Paleolithic period. Each museum exhibit corresponds to a unique artwork, and defines its own class. The training set exhibits a long-tail distribution with more than half of the classes represented by a single image, making it a special case of few-shot learning. | Provide a detailed description of the following dataset: Met |