dataset_name stringlengths 2 128 | description stringlengths 1 9.7k | prompt stringlengths 59 185 |
|---|---|---|
PropSegmEnt | **PropSegmEnt** is a corpus of over 35K propositions annotated by expert human raters. The dataset is structured around the tasks of (1) segmenting sentences within a document into sets of propositions, and (2) classifying the entailment relation of each proposition with respect to a different yet topically aligned document, i.e. documents describing the same event or entity. | Provide a detailed description of the following dataset: PropSegmEnt |
LiPC | **LiPC (LiDAR Point Cloud Clustering Benchmark Suite)** is a benchmark suite for point cloud clustering algorithms based on open-source software and open datasets. It aims to provide the community with a collection of methods and datasets that are easy to use and comparable, and whose experimental results are traceable and reproducible. | Provide a detailed description of the following dataset: LiPC |
EU Long-term Dataset with Multiple Sensors for Autonomous Driving | **EU Long-term Dataset with Multiple Sensors for Autonomous Driving** was collected with a robocar, equipped with eleven heterogeneous sensors, in the downtown and suburban areas of Montbéliard in France. The vehicle speed was limited to 50 km/h following the French traffic rules. For the long-term data, the driving distance is about 5.0 km (containing a small and a big road loop for loop-closure purpose) and the length of recorded data is about 16 minutes for each collection round. For the roundabout data, the driving distance is about 4.2 km (containing 10 roundabouts with various sizes) and the length of recorded data is about 12 minutes for each collection round. In addition to enjoying the typical scenery of eastern France, users can also feel the daily and seasonal changes in the city. | Provide a detailed description of the following dataset: EU Long-term Dataset with Multiple Sensors for Autonomous Driving |
L-CAS 3D Point Cloud People Dataset | **L-CAS 3D Point Cloud People Dataset** contains 28,002 Velodyne scan frames acquired in one of the main buildings (Minerva Building) of the University of Lincoln, UK. Total length of the recorded data is about 49 minutes. Data were grouped into two classes according to whether the robot was stationary or moving. | Provide a detailed description of the following dataset: L-CAS 3D Point Cloud People Dataset |
Light field RGB Dataset | We created this robust and custom light field dataset in order to assist light field researchers in using SOTA machine learning algorithms for a variety of light field tasks such as depth estimation, synthetic aperture imaging, and more.
The dataset contains five folders holding 6 sub-datasets with the following numbers of light field scene snapshots: 18, 18, 40, 100, 250, and 500.
View our kaggle dataset at this link: https://www.kaggle.com/datasets/julesh7/rgb-light-field-dataset | Provide a detailed description of the following dataset: Light field RGB Dataset |
DISE 2021 Dataset | Datasets are built upon three other datasets: DISEC 2013, RVL-CDIP, RDCL 2017. Please respect their LICENSE. | Provide a detailed description of the following dataset: DISE 2021 Dataset |
a test dataset for reuse | a | Provide a detailed description of the following dataset: a test dataset for reuse |
RoFT | RoFT is a dataset of 21,000 human annotations of generated text. The task is "Boundary detection" i.e. given a passage that starts off as human written, determine when the text transitions to being machine generated. The dataset also includes error annotations using the taxonomy introduced in the paper. The data can be used to train automatic detection systems, train automatic error correction, analyze visibility of model errors, and compare performance across models. Data was collected using http://roft.io.
Models: GPT2, GPT2-XL, CTRL, GPT3 "Davinci"
Genres: News, Stories, Recipes, Speeches | Provide a detailed description of the following dataset: RoFT |
Skit-S2I | This dataset for Intent classification from human speech covers 14 coarse-grained intents from the Banking domain. This work is inspired by a similar release in the Minds-14 dataset - here, we restrict ourselves to Indian English but with a much larger training set. The data was generated by 11 (Indian English) speakers and recorded over a telephony line. We also provide access to anonymized speaker information - like gender, languages spoken, and native language - to allow more structured discussions around robustness and bias in the models you train. | Provide a detailed description of the following dataset: Skit-S2I |
Minsk2019 ALS database | **Minsk2019 ALS database** is a dataset collected in Republican Research and Clinical Center of Neurology and Neurosurgery (Minsk, Belarus). A total of 54 speakers were recorded, with 39 healthy speakers (23 males, 16 females) and 15 ALS patients with signs of bulbar dysfunction (6 males, 9 females). It is designed for the task of ALS Detection. | Provide a detailed description of the following dataset: Minsk2019 ALS database |
Distress Analysis Interview Corpus/Wizard-of-Oz set (DAIC-WOZ) | The Distress Analysis Interview Corpus/Wizard-of-Oz set (DAIC-WOZ) dataset [24, 25] comprises voice and text samples from 189 interviewed participants, both distressed individuals and healthy controls, together with their PHQ-8 depression questionnaires. This dataset is commonly used in research on text-based detection, voice-based detection, and multi-modal architectures. | Provide a detailed description of the following dataset: Distress Analysis Interview Corpus/Wizard-of-Oz set (DAIC-WOZ) |
SUSY | This is a classification problem to distinguish between a signal process which produces supersymmetric particles and a background process which does not. | Provide a detailed description of the following dataset: SUSY |
ISLES 2017 | A medical image segmentation challenge at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017. On the SMIR, you can register for the challenge, download the test data and submit your results. For more information, visit the official ISLES homepage under www.isles-challenge.org. | Provide a detailed description of the following dataset: ISLES 2017 |
Dusha | **Dusha** is a dataset for speech emotion recognition (SER) tasks. The corpus contains approximately 350 hours of data, more than 300 000 audio recordings with Russian speech and their transcripts. It is annotated using a crowd-sourcing platform and includes two subsets: acted and real-life. | Provide a detailed description of the following dataset: Dusha |
TextBox 2.0 | **TextBox 2.0** is a comprehensive and unified library for text generation, focusing on the use of pre-trained language models (PLMs). The library covers 13 common text generation tasks and their corresponding 83 datasets and further incorporates 45 PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. | Provide a detailed description of the following dataset: TextBox 2.0 |
XAlign | It consists of an extensive, high-quality cross-lingual fact-to-text dataset in 11 languages: Assamese (as), Bengali (bn), Gujarati (gu), Hindi (hi), Kannada (kn), Malayalam (ml), Marathi (mr), Oriya (or), Punjabi (pa), Tamil (ta), and Telugu (te), plus a monolingual dataset in English (en). This is the Wikipedia text <--> Wikidata KG aligned corpus used to train the data-to-text generation model. The train and validation splits are created using distant supervision methods, and the test data is generated through human annotation.
## Data Format
The dataset is publicly available [here](https://github.com/tushar117/XAlign). Each directory contains a language-specific dataset (referred to by its ISO language code) and contains three files:
- train.jsonl
- test.jsonl
- val.jsonl
Data in the above files are stored in JSON Lines (jsonl) format.
### Record structure (JSON structure)
Each record consists of the following entries:
- `sentence` (string) : native-language Wikipedia sentence (non-native-language strings were removed).
- `facts` (List[Dict]) : list of facts associated with the sentence, where each fact is stored as a dictionary.
- `language` (string) : language identifier.
The `facts` key contains a list of facts, where each fact is stored as a dictionary. A single record within the fact list contains the following entries:
- `subject` (string) : the central entity.
- `object` (string) : an entity or a piece of information about the subject.
- `predicate` (string) : the relationship that connects the subject and the object.
- `qualifiers` (List[Dict]) : additional information about the fact, stored as a list of qualifiers, where each qualifier is a dictionary with two keys: `qualifier_predicate`, representing the property of the qualifier, and `qualifier_object`, storing the value for the qualifier's predicate.
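As an illustrative sketch (the file-loading helper and the flat "subject | predicate | object" linearization are assumptions for illustration, not part of the release), records in these jsonl files can be read and flattened like this:

```python
import json

def load_xalign(path):
    """Read an XAlign-style JSON Lines file (path is hypothetical)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def linearize_fact(fact):
    """Flatten one fact dict into a pipe-separated string, appending qualifiers."""
    parts = [fact["subject"], fact["predicate"], fact["object"]]
    for q in fact.get("qualifiers", []):
        parts.append(f'{q["qualifier_predicate"]}: {q["qualifier_object"]}')
    return " | ".join(parts)

record = {
    "sentence": "Mark Paul Briers (born 21 April 1968) is a former English cricketer.",
    "facts": [{"subject": "Mark Briers", "predicate": "date of birth",
               "object": "21 April 1968", "qualifiers": []}],
    "language": "en",
}
print(linearize_fact(record["facts"][0]))
# Mark Briers | date of birth | 21 April 1968
```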
### Examples
Example from English dataset
```
{
"sentence": "Mark Paul Briers (born 21 April 1968) is a former English cricketer.",
"facts": [
{
"subject": "Mark Briers",
"predicate": "date of birth",
"object": "21 April 1968",
"qualifiers": []
},
{
"subject": "Mark Briers",
"predicate": "occupation",
"object": "cricketer",
"qualifiers": []
},
{
"subject": "Mark Briers",
"predicate": "country of citizenship",
"object": "United Kingdom",
"qualifiers": []
}
],
"language": "en"
}
```
Example from one of the low-resource languages (e.g. Hindi)
```
{
"sentence": "बोरिस पास्तेरनाक १९५८ में साहित्य के क्षेत्र में नोबेल पुरस्कार विजेता रहे हैं।",
"facts": [
{
"subject": "Boris Pasternak",
"predicate": "nominated for",
"object": "Nobel Prize in Literature",
"qualifiers": [
{
"qualifier_predicate": "point in time",
"qualifier_subject": "1958"
}
]
}
],
"language": "hi"
}
``` | Provide a detailed description of the following dataset: XAlign |
MultiSpider | **MultiSpider** is a large multilingual text-to-SQL dataset which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). | Provide a detailed description of the following dataset: MultiSpider |
CORBEL | The dataset includes measurements of static tension under a 2 kg load at different points of the CB, as well as measurements in dynamic conditions. The latter covered linear belt speeds in the range between nu_1 = 0.5 m/s and nu_max = 1.7 m/s. A unified sampling frequency of 400 Hz was used for the experiments, corresponding to 140 samples.
The signal is divided into fixed-length (P points) segments of 0.2 s (80 points), 0.4 s (160 points), 0.8 s (320 points), 1.6 s (640 points), 3.2 s (1280 points), and 5.0 s (2000 points).
For each segment length, the equivalent numbers of data points available are:
| Signal length (s) | Time points (#) | Samples (#) |
|---|---|---|
| 0.2 | 80 | 384200 |
| 0.4 | 160 | 368200 |
| 0.8 | 320 | 336200 |
| 1.6 | 640 | 272200 |
| 3.2 | 1280 | 144200 |
| 5.0 | 2000 | 200 |
A binary classification task (loaded vs. no load) is investigated. | Provide a detailed description of the following dataset: CORBEL |
MusicNetEM | New refined labels for the MusicNet dataset obtained by the EM process as described in the paper: Ben Maman and Amit Bermano, "Unaligned Supervision for Automatic Music Transcription in The Wild" | Provide a detailed description of the following dataset: MusicNetEM |
MENYO-20k | MENYO-20k is the first multi-domain parallel corpus with a special focus on clean orthography for Yorùbá--English with standardized train-test splits for benchmarking. | Provide a detailed description of the following dataset: MENYO-20k |
Harmonized US National Health and Nutrition Examination Survey (NHANES) 1988-2018 | The National Health and Nutrition Examination Survey (NHANES) provides data on the health and environmental exposure of the non-institutionalized US population. Such data have considerable potential for understanding how the environment and behaviors impact human health. These data are also currently leveraged to answer public health questions such as the prevalence of disease. However, these data need to first be processed before new insights can be derived through large-scale analyses. NHANES data are stored across hundreds of files with multiple inconsistencies. Correcting such inconsistencies takes systematic cross-examination and considerable effort but is required for accurately and reproducibly characterizing the associations between the exposome and diseases (e.g., cancer mortality outcomes). Thus, we developed a set of curated and unified datasets and accompanying code by merging 614 separate files and harmonizing unrestricted data across NHANES III (1988-1994) and Continuous (1999-2018), totaling 134,310 participants and 4,740 variables. The variables convey 1) demographic information, 2) dietary consumption, 3) physical examination results, 4) occupation, 5) questionnaire items (e.g., physical activity, general health status, medical conditions), 6) medications, 7) mortality status linked from the National Death Index, 8) survey weights, 9) environmental exposure biomarker measurements, and 10) chemical comments that indicate which measurements are below or above the lower limit of detection. We also provide a data dictionary listing the variables and their descriptions to help researchers browse the data, as well as R Markdown files with example code for calculating summary statistics and running regression models, to help accelerate high-throughput analysis of the exposome and secular trends in cancer mortality.
| Provide a detailed description of the following dataset: Harmonized US National Health and Nutrition Examination Survey (NHANES) 1988-2018 |
GNMC | We present the Gracenote Multi-Crop (GNMC) dataset, to further research in algorithms for aesthetic image cropping. The dataset consists of a diverse collection of 10K images, each cropped in five different aspect ratios by experienced editors. GNMC is larger than existing datasets commonly used to benchmark image cropping approaches such as FCDB (1743 images) and FLMS (500 images). This dataset can enable aesthetic cropping algorithms as described in "[An Experience-Based Direct Generation Approach to Automatic Image Cropping](https://ieeexplore.ieee.org/document/9500226/)" by Christensen and Vartakavi. | Provide a detailed description of the following dataset: GNMC |
HBW | Human Bodies in the Wild (HBW) is a validation and test set for body shape estimation. It consists of images taken in the wild and ground-truth 3D body scans in SMPL-X topology. To create HBW, we collect body scans of 35 participants and register the SMPL-X model to the scans. Furthermore, each participant is photographed in various outfits and poses in front of a white background, and additionally uploads full-body photos of themselves taken in the wild. The validation and test set images are released; the ground-truth shape is released only for the validation set. | Provide a detailed description of the following dataset: HBW |
QoEVAVE | **Quality of Experience Evaluation of Interactive Virtual Environments with Audiovisual Scenes (QoEVAVE)** provides an initial audiovisual database consisting of 12 sequences capturing real-life nature and urban scenes. The maximum video resolution is 7680x3840 (8K) at 60 frames per second, with 4th-order Ambisonics spatial audio (4OA). All video sequences are recorded with a minimum target duration of 60 seconds and designed to represent real-life settings for systematically evaluating various dimensions of uni-/multimodal perception, cognition, behavior, and quality of experience (QoE) in a controlled virtual environment. This database serves as novel high-quality reference material with an equal focus on auditory and visual sensory information within the QoE community. | Provide a detailed description of the following dataset: QoEVAVE |
HPointLoc | **HPointLoc** is a dataset designed for exploring capabilities of visual place recognition in indoor environment and loop detection in simultaneous localization and mapping. It is based on the popular Habitat simulator from 49 photorealistic indoor scenes from the Matterport3D dataset and contains 76,000 frames. | Provide a detailed description of the following dataset: HPointLoc |
FGVD | **Fine-Grained Vehicle Detection** (**FGVD**) is a dataset for fine-grained vehicle detection captured from a moving camera mounted on a car. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions.
It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes of different kinds of cars, FGVD introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. | Provide a detailed description of the following dataset: FGVD |
Merger Agreement Understanding Dataset (MAUD) | MAUD is an expert-annotated merger agreement reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points study, where lawyers and law students answered 92 questions about 152 merger agreements.
With over 39,000 examples and 47,000 total annotations, it is the largest expert-annotated legal reading comprehension dataset in the English language, as well as the first expert-annotated merger agreement dataset. | Provide a detailed description of the following dataset: Merger Agreement Understanding Dataset (MAUD) |
MTNeuro | **MTNeuro** is a multi-task neuroimaging benchmark built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions.
This dataset provides some key features for the neuroinformatics processing community:
- Three Dimensional Multi-Scale Annotated Dataset: The 3D x-ray microtomography dataset spans multiple brain areas and includes region of interest (ROI) annotations, densely annotated 3D cutouts, and semantic interpretable features.
- Multi-Level Benchmark Tasks: Benchmark tasks feature both microscopic and macroscopic classification objectives.
- Evaluation of Model Baselines: Both 2D and 3D training regimes are considered when training supervised and unsupervised models.
The data are derived from a unique 3D X-ray microtomography dataset covering areas of mouse cortex and thalamus. At 1.17 µm isotropic resolution for each voxel, both microstructure (blood vessels, cell bodies, white matter) and macrostructure labels are available. | Provide a detailed description of the following dataset: MTNeuro |
CamNuvem Dataset | This dataset focuses only on the robbery category, presenting a new weakly labelled dataset that contains 486 new real-world robbery surveillance videos acquired from public sources. | Provide a detailed description of the following dataset: CamNuvem Dataset |
Argoverse 2 Sensor | The **Argoverse 2 Sensor** Dataset is a collection of 1,000 scenarios with 3D object tracking annotations. Each sequence in our training and validation sets includes annotations for all objects within five meters of the “drivable area” — the area in which it is possible for a vehicle to drive. The HD map for each scenario specifies the driveable area. | Provide a detailed description of the following dataset: Argoverse 2 Sensor |
Argoverse 2 Lidar | The **Argoverse 2 Lidar** Dataset is a collection of 20,000 scenarios with lidar sensor data, HD maps, and ego-vehicle pose. It does not include imagery or 3D annotations. The dataset is designed to support research into self-supervised learning in the lidar domain, as well as point cloud forecasting.
The dataset is divided into train, validation, and test sets of 16,000, 2,000, and 2,000 scenarios. This supports a point cloud forecasting task in which the future frames of the test set serve as the ground truth. Nonetheless, we encourage the community to use the dataset broadly for other tasks, such as self-supervised learning and map automation.
All Argoverse datasets contain lidar data from two out-of-phase 32-beam sensors rotating at 10 Hz. While this can be aggregated into 64-beam frames at 10 Hz, it is also reasonable to think of it as 32-beam frames at 20 Hz. Furthermore, all Argoverse datasets contain raw lidar returns with per-point timestamps, so the data need not be interpreted in quantized frames. | Provide a detailed description of the following dataset: Argoverse 2 Lidar |
Argoverse 2 Map Change | The **Argoverse 2 Map Change** Dataset is a collection of 1,000 scenarios with ring camera imagery, lidar, and HD maps. Two hundred of the scenarios include changes in the real-world environment that are not yet reflected in the HD map, such as new crosswalks or repainted lanes. By sharing a map dataset that labels the instances in which there are discrepancies with sensor data, we encourage the development of novel methods for detecting out-of-date map regions.
The Map Change Dataset does not include 3D object annotations (which is a point of differentiation from the Argoverse 2 Sensor Dataset). Instead, it includes temporal annotations that indicate whether there is a map change within 30 meters of the autonomous vehicle at a particular timestamp. Additionally, the scenarios tend to be longer than the scenarios in the Sensor Dataset. To avoid making the dataset excessively large, the bitrate of the imagery is reduced. | Provide a detailed description of the following dataset: Argoverse 2 Map Change |
arXiv-10 | Benchmark dataset for abstracts and titles of 100,000 ArXiv scientific papers.
This dataset contains 10 classes and is balanced (exactly 10,000 per class).
The classes include subcategories of computer science, physics, and math.
• Direct link: [Download](https://github.com/ashfarhangi/Protoformer/raw/main/data/ArXiv-10.zip)
• Citation:
```
@inproceedings{farhangi2022protoformer,
title={Protoformer: Embedding Prototypes for Transformers},
author={Farhangi, Ashkan and Sui, Ning and Hua, Nan and Bai, Haiyan and Huang, Arthur and Guo, Zhishan},
booktitle={Advances in Knowledge Discovery and Data Mining: 26th Pacific-Asia Conference, PAKDD 2022, Chengdu, China, May 16--19, 2022, Proceedings, Part I},
pages={447--458},
year={2022}
}
``` | Provide a detailed description of the following dataset: arXiv-10 |
HarveyNER | HarveyNER is a dataset for fine-grained location name extraction from disaster-related tweets. | Provide a detailed description of the following dataset: HarveyNER |
JGLUE | JGLUE, Japanese General Language Understanding Evaluation, is built to measure the general NLU ability in Japanese. | Provide a detailed description of the following dataset: JGLUE |
MyStone: Preprocessed expelled kidney stones | The dataset of images is built upon a collection of 454 samples kindly provided by the urology department of the Hospital Universitari de Bellvitge (Barcelona, Spain) over a time span of several years. They cover all 9 main classes except cystine, for which only 4 samples were available, so we discarded this class as mentioned above. For the rest, we tried to keep all second-scheme classes balanced and, at the same time, to record as many examples as possible to account for intra-class variability. | Provide a detailed description of the following dataset: MyStone: Preprocessed expelled kidney stones |
USPTO-190 | A chemical synthesis route dataset constructed from the USPTO reaction dataset (1976-Sep 2016) and a list of commercially available building blocks from eMolecules (~23.1M molecules). After processing, the dataset has 299,202 training routes, 65,274 validation routes, 190 test routes, and the corresponding target molecules. | Provide a detailed description of the following dataset: USPTO-190 |
BEAT | BEAT provides i) 76 hours of high-quality, multi-modal data captured from 30 speakers talking with eight different emotions and in four different languages, and ii) 32 million frame-level emotion and semantic relevance annotations.
Our statistical analysis on BEAT demonstrates the correlation of conversational gestures with *facial expressions*, *emotions*, and *semantics*, in addition to the known correlation with *audio*, *text*, and *speaker identity*.
Based on this observation, we propose a baseline model, **Ca**scaded **M**otion **N**etwork (**CaMN**), which consists of the above six modalities modeled in a cascaded architecture for gesture synthesis. To evaluate semantic relevancy, we introduce a metric, Semantic Relevance Gesture Recall (**SRGR**).
Qualitative and quantitative experiments demonstrate the metrics' validity, the ground-truth data quality, and the baseline's state-of-the-art performance.
To the best of our knowledge, BEAT is the largest motion capture dataset for investigating human gestures, which may contribute to a number of different research fields, including controllable gesture synthesis, cross-modality analysis, and emotional gesture recognition. The data, code and model are available at https://pantomatrix.github.io/BEAT/. | Provide a detailed description of the following dataset: BEAT |
Probability words NLI | This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP), e.g. words like "probably", "maybe", "surely", "impossible".
We used probabilistic soft logic to combine probabilistic statements expressed with WEP (WEP-Reasoning), and we also used the UNLI dataset (https://nlp.jhu.edu/unli/) to directly check whether models can match WEP to human-annotated probabilities. The dataset can be used as natural language inference data (context, premise, label) or as multiple-choice question answering (context, valid_hypothesis, invalid_hypothesis). | Provide a detailed description of the following dataset: Probability words NLI |
PACO | **Parts and Attributes of Common Objects (PACO)** is a detection dataset that goes beyond traditional object boxes and masks and provides richer annotations such as part masks and attributes. It spans 75 object categories, 456 object-part categories and 55 attributes across image (LVIS) and video (Ego4D) datasets. The dataset contains 641K part masks annotated across 260K object boxes, with half of them exhaustively annotated with attributes as well. | Provide a detailed description of the following dataset: PACO |
GeoDE | **GeoDE** is a geographically diverse dataset with 61,940 images from 40 classes and 6 world regions, and no personally identifiable information, collected through crowd-sourcing. | Provide a detailed description of the following dataset: GeoDE |
Aachen Day-Night v1.1 Benchmark | Aachen Day-Night v1.1 dataset is an extended version of the original *[Aachen Day-Night dataset](https://paperswithcode.com/dataset/aachen-day-night)*. Besides the original query images, the Aachen Day-Night v1.1 dataset contains an additional 93 nighttime queries. In addition, it uses a larger 3D model containing additional images. These additional images were extracted from video sequences captured with different
cameras. Please refer to *[Reference Pose Generation for Long-term Visual Localization via Learned Features and View Synthesis](https://arxiv.org/abs/2005.05179)* for more information. | Provide a detailed description of the following dataset: Aachen Day-Night v1.1 Benchmark |
iV2V and iV2I+ | This dataset provides wireless measurements from two industrial testbeds: iV2V (industrial Vehicle-to-Vehicle) and iV2I+ (industrial Vehicular-to-Infrastructure plus sensor).
iV2V covers 10h of sidelink communication scenarios between 3 Automated Guided Vehicles (AGVs), while iV2I+ was conducted for around 16h at an industrial site where an autonomous cleaning robot is connected to a private cellular network.
The data includes information on physical layer parameters (such as signal strength and signal quality), wireless Quality of Service (QoS) like delay and throughput, and positioning information.
The datasets are labelled and pre-filtered for fast onboarding and applicability. The measurement methodology common to both datasets targets Machine Learning (ML) applications such as fingerprinting, line-of-sight detection, QoS prediction, and link selection, among others. | Provide a detailed description of the following dataset: iV2V and iV2I+ |
abc_cc | ## Dataset Summary
The dataset used to train and evaluate [TunesFormer](https://huggingface.co/sander-wood/tunesformer) is collected from two sources: [The Session](https://thesession.org) and [ABCnotation.com](https://abcnotation.com). The Session is a community website focused on Irish traditional music, while ABCnotation.com is a website that provides a standard for folk and traditional music notation in the form of ASCII text files. The combined dataset consists of 285,449 ABC tunes, with 99% (282,595) of the tunes used as the training set and the remaining 1% (2,854) used as the evaluation set.
Control codes are symbols that are added to the ABC notation representation to indicate the desired musical form of the generated melodies. We add the following control codes to each ABC tune in the dataset through an automated process to indicate its musical form:
- Number of Bars (NB): controls the number of bars in a section of the melody. For example, users can specify that they want a section to contain 8 bars, and TunesFormer will generate a section that fits within that structure. It is counted on the bar symbol ***|***.
- Number of Sections (NS): controls the number of sections in the entire melody. This can be used to create a sense of structure and coherence within the melody, as different sections can be used to create musical themes or motifs. It is counted on several symbols that are commonly used in ABC notation to represent section boundaries: ***\[|***, ***||***, ***|\]***, ***|:***, ***::***, and ***:|***.
- Edit Distance Similarity (EDS): controls the similarity level between the current section and a previous section of the melody.
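The bar-counting and similarity controls above can be sketched as follows (difflib's ratio stands in for the paper's edit-distance normalization, which is an assumption here, as is the simple substring count for bars):

```python
import difflib

def count_bars(abc_section):
    """NB control code sketch: count bars on the bar symbol '|'."""
    return abc_section.count("|")

def edit_distance_similarity(a, b):
    """EDS sketch: similarity in [0, 1] between two sections."""
    return difflib.SequenceMatcher(None, a, b).ratio()

section_a = "|G2 AB c2 BA|G2 AB c2 e2|"
section_b = "|G2 AB c2 BA|G2 AB d2 e2|"
print(count_bars(section_a))  # 3
print(edit_distance_similarity(section_a, section_a))  # 1.0
```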
To ensure consistency and standardization among the ABC tunes in the dataset, we first converted them all into MusicXML format and then re-converted them back into ABC notation. In order to focus solely on the musical content, we removed any natural language elements (such as titles, composers, and lyrics) and unnecessary information (such as reference numbers and sources). | Provide a detailed description of the following dataset: abc_cc |
AviationQA | AviationQA is introduced in the paper titled "There is No Big Brother or Small Brother: Knowledge Infusion in Language Models for Link Prediction and Question Answering".
The paper was accepted in the main conference of ICON 2022.
We create a synthetic dataset, AviationQA, a set of 1 million factoid QA pairs from 12,000 National Transportation Safety Board (NTSB) reports using templates. These QA pairs contain questions such that answers to them are entities occurring in the AviationKG (Agarwal et al., 2022). AviationQA will be helpful to researchers in finding insights into aircraft accidents and their prevention.
Examples from dataset:
1. What was the Aircraft Damage of the accident no. ERA22LA162? Answer: Substantial
2. Where was the Destination of the accident no. ERA22LA162? Answer: Naples, GA (APH) | Provide a detailed description of the following dataset: AviationQA |
Binette's 2022 Inventors Benchmark | Hand-disambiguation of a sample of U.S. patent inventor mentions from PatentsView.org.
Inventors were selected indirectly by sampling inventor mentions uniformly at random. This results in inventors being sampled with probability proportional to their number of granted patents.
The time period considered is from 1976 to December 31, 2021, corresponding to the disambiguation labeled "disamb_inventor_id_20211230" in PatentsView's bulk data downloads "g_persistent_inventor.tsv" file (https://patentsview.org/download/data-download-tables). That is, the benchmark disambiguation intends to contain all inventor mentions for the sampled inventors from that time period. Note that the benchmark disambiguation contains a few extraneous mentions to patents granted outside of that time period. These should be ignored for evaluation purposes.
The methodology used for the hand-disambiguation is described in Binette et al. (2022) (https://arxiv.org/abs/2210.01230). We used one disambiguation of 200 inventors from Binette et al. (2022), as well as an additional disambiguation of 200 inventors provided by an additional staff member. The two disambiguations were reviewed and validated. However, they should be expected to contain errors due to the ambiguous nature of inventor disambiguation. Furthermore, given the use of the December 30, 2021, disambiguation from PatentsView as a starting point for the hand-labeling, a bias towards this disambiguation should be expected. | Provide a detailed description of the following dataset: Binette's 2022 Inventors Benchmark
MN-DS | **Multilabeled News Dataset** (**MN-DS**) is a dataset for news classification. It consists of 10,917 articles in 17 first-level and 109 second-level categories from 215 media sources. | Provide a detailed description of the following dataset: MN-DS |
HaDes | **HaDes** is a token-level, reference-free hallucination detection dataset named HAllucination DEtection dataSet. To create this dataset, a large number of text segments extracted from English language Wikipedia are perturbed, and then verified these with crowd-sourced annotations. | Provide a detailed description of the following dataset: HaDes |
FaithDial | **FaithDial** is a new benchmark for hallucination-free dialogues, created by editing hallucinated responses in the Wizard of Wikipedia (WoW) benchmark.
FaithDial contains around 50K turns across 5.5K conversations. If trained on FaithDial, state-of-the-art dialogue models are significantly more faithful while also enhancing other dialogue aspects like cooperativeness, creativity and engagement. | Provide a detailed description of the following dataset: FaithDial |
MSI | Article: A novel hierarchical model based on different emotion induction modalities for EEG emotion recognition | Provide a detailed description of the following dataset: MSI |
Compositional Visual Reasoning (CVR) | A fundamental component of human vision is our ability to parse complex visual scenes and judge the relations between their constituent objects. AI benchmarks for visual reasoning have driven rapid progress in recent years with state-of-the-art systems now reaching human accuracy on some of these benchmarks. Yet, there remains a major gap between humans and AI systems in terms of the sample efficiency with which they learn new visual reasoning tasks. Humans' remarkable efficiency at learning has been at least partially attributed to their ability to harness compositionality -- allowing them to efficiently take advantage of previously gained knowledge when learning new tasks. Here, we introduce a novel visual reasoning benchmark, *Compositional Visual Relations* (CVR), to drive progress towards the development of more data-efficient learning algorithms. We take inspiration from fluidic intelligence and non-verbal reasoning tests and describe a novel method for creating compositions of abstract rules and generating image datasets corresponding to these rules at scale. Our proposed benchmark includes measures of sample efficiency, generalization, compositionality, and transfer across task rules. We systematically evaluate modern neural architectures and find that convolutional architectures surpass transformer-based architectures across all performance measures in most data regimes. However, all computational models are much less data efficient than humans, even after learning informative visual representations using self-supervision. Overall, we hope our challenge will spur interest in developing neural architectures that can learn to harness compositionality for more efficient learning. | Provide a detailed description of the following dataset: Compositional Visual Reasoning (CVR) |
Fake News Detection | TICNN dataset | Provide a detailed description of the following dataset: Fake News Detection |
KAIST multi-spectral Day/Night 2018 | We introduce the **KAIST multi-spectral** dataset, which covers a greater range of drivable regions, from urban to residential, for autonomous systems. Our dataset provides different perspectives of the world captured in coarse time slots (day and night) in addition to fine time slots (sunrise, morning, afternoon, sunset, night and dawn). For all-day perception of autonomous systems, we propose the use of a different spectral sensor, i.e., a thermal imaging camera. Toward this goal, we develop a multi-sensor platform, which supports the use of a co-aligned RGB/Thermal camera, RGB stereo, 3D LiDAR and inertial sensors (GPS/IMU) and a related calibration technique. We design a wide range of visual perception tasks including the object detection, drivable region detection, localization, image enhancement, depth estimation and colorization using a single/multi-spectral approach. In this paper, we provide a description of our benchmark with the recording platform, data format, development toolkits, and lessons about the progress of capturing datasets. | Provide a detailed description of the following dataset: KAIST multi-spectral Day/Night 2018 |
Causal Triplet | **Causal Triplet** is a causal representation learning benchmark featuring not only visually more complex scenes, but also two crucial desiderata commonly overlooked in previous works:
1) An actionable counterfactual setting, where only certain object-level variables allow for counterfactual observations whereas others do not.
2) An interventional downstream task with an emphasis on out-of-distribution robustness from the independent causal mechanisms principle. | Provide a detailed description of the following dataset: Causal Triplet |
AstroVision | AstroVision is a large-scale dataset comprised of 115,970 densely annotated, real images of 16 different small bodies from both legacy and ongoing deep space missions to facilitate the study of deep learning for autonomous navigation in the vicinity of a small body. | Provide a detailed description of the following dataset: AstroVision |
SFDDD | We've all been there: a light turns green and the car in front of you doesn't budge. Or, a previously unremarkable vehicle suddenly slows and starts swerving from side-to-side.
When you pass the offending driver, what do you expect to see? You certainly aren't surprised when you spot a driver who is texting, seemingly enraptured by social media, or in a lively hand-held conversation on their phone.
According to the CDC motor vehicle safety division, one in five car accidents is caused by a distracted driver. Sadly, this translates to 425,000 people injured and 3,000 people killed by distracted driving every year.
State Farm hopes to improve these alarming statistics, and better insure their customers, by testing whether dashboard cameras can automatically detect drivers engaging in distracted behaviors. Given a dataset of 2D dashboard camera images, State Farm is challenging Kagglers to classify each driver's behavior. Are they driving attentively, wearing their seatbelt, or taking a selfie with their friends in the backseat? | Provide a detailed description of the following dataset: SFDDD |
ParagraphOrdreing | We have prepared a dataset, ParagraphOrdreing, which consists of around 300,000 paragraph pairs collected from Project Gutenberg. We wrote an API for gathering and pre-processing the data into the appropriate format for the defined task. Each example contains two paragraphs and a label that indicates whether the second paragraph really comes after the first paragraph (true order, with label 1) or the order has been reversed.
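The pair-labeling scheme can be sketched in a few lines. The helper below is hypothetical (the authors' actual API gathers text from Project Gutenberg) and it assumes, as an illustration, that reversed pairs carry label 0:

```python
# Minimal sketch of the pair-labeling scheme: consecutive paragraphs form a
# positive pair (label 1); swapping them forms a reversed pair (assumed
# label 0 here for illustration).

def make_pairs(paragraphs):
    """Return (para_a, para_b, label) tuples: 1 for true order, 0 for reversed."""
    examples = []
    for first, second in zip(paragraphs, paragraphs[1:]):
        examples.append((first, second, 1))  # true order
        examples.append((second, first, 0))  # reversed order
    return examples

paras = ["Paragraph one.", "Paragraph two.", "Paragraph three."]
print(make_pairs(paras))
```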
Data Statistics:
- #Train Samples 294,265
- #Test Samples 32,697
- Unique Paragraphs 239,803
- Average Number of Tokens 160.39
- Average Number of Sentences 9.31 | Provide a detailed description of the following dataset: ParagraphOrdreing |
BA-2motifs | A synthetic dataset containing 1,000 graphs divided into two classes according to the motif they contain: either a “house” or a five-node cycle. | Provide a detailed description of the following dataset: BA-2motifs
File S1 | -Tab 1 (Carboxylase table):
This expanded table contains additional information for carboxylase classes and splits them into individual examples.
Notes:
Primary direction is carboxylating if we could find a single natural or even artificial example that runs primarily in that direction.
Thermodynamic data is missing for many classes; eQuilibrator has a hard time with reducing agents. Values for TPP-containing enzymes may not be very reliable.
-Tab 2 (Enolate table):
This is a subset of carboxylases that act on substrates with the general structure shown in the table.
This table specifies the R and X groups for each of the “enolate” enzymes. | Provide a detailed description of the following dataset: File S1 |
File S2 | -Tab 1 Rubisco forms:
This excel sheet contains one row for every rubisco form considered in this review (some forms like IAq and IAc from [31] are not considered separately because they are only phylogenetically separated in the small subunits). For each form we include the pathway in which it functions, the chemical reaction catalyzed - for known types of Form IV RLPs (rubisco-like proteins), an example sequence of the protein and a citation. We also include an example structure if available.
-Tab 2 Reference sequences:
This list of rubisco sequences is the list of references used in this paper to annotate rubisco forms. These sequences were taken from [2,6]. | Provide a detailed description of the following dataset: File S2 |
File S3 | This .csv file contains all of the sequences used in the phylogenetic analysis (see above). For form annotation, sequences under 360 aa were excluded and no upper limit was set. Sequences removed by trimAL (using a gap threshold of 0.1) are labeled as “Unannotated” - many of them may not be actual rubiscos. Sequences that are too short are labeled as such. Some sequences will have a form indicated but not a subform; for instance, some sequences are labeled as Form III but with no indicated subform because they do not fit into an established subclade. We used a tree made from a 65% identity dereplication (using CD-HIT with standard parameters, File S4). The tree was produced as described above using IQTree with the following parameters: -bb 1000 -m MFP -safe. The tree was rooted just past the Form IIIA clade so that all bona fide rubiscos form one clade and all RLPs form another clade. There are a few branches in between that we consider to be RLPs.
There is a beta-hairpin sequence in most bona fide rubiscos that is absent from the Form IVs (DEAQGPFYR in R. rubrum). Erb and Zarzycki 2018 [5] implicate this structural feature in their argument regarding whether RLPs or Form III rubiscos came first. We find that this sequence is absent in Form IIIA sequences and is present in some but not all Form IIIC. This sequence is quite divergent between clades (e.g. Form II/III has an extended hairpin) and structurally these hairpins seem to vary a fair amount. There may be useful information in the phylogenetic distribution of this hairpin that may inform the placement of the root of the rubisco tree.
With one exception, clades were assigned by grouping together all branches that share a common ancestor with the reference sequences (see tab 2 of file S2). Then we assigned the same form to all sequences that were clustered together with the CD-HIT algorithm.
The A. fulgidus clade could not be assigned in this way. This paraphyly is apparent in [6]. In order to overcome this obstacle we remade the tree at 70% identity dereplication (File S5). We also used 5 sequences from figures S12 and S13 of Erb et al. 2012 in order to pinpoint the A. fulgidus clade. Four of the five clustered into a single clade while the fifth (ZP_08130208.1, sister to WP_025656390.1) branches much closer to the YkrW clade. In order to avoid mislabeling we have omitted the clade containing that reference and assign the A. fulgidus clade with just the other four references (WP_010879084.1, ZP_09117828.1, MBQ5951511.1, and WP_012813926.1).
Form IE rubiscos may be paraphyletic, sequences were chosen based on clades containing reference sequences from File S2 tab2.
Sequences that diverge before or after the Form IEs are labeled as Form I with no subform specified. Similarly, there are a few outgroups to the Form IA and IB that are not assigned a subform.
Form ID emerges from within Form IC: the distinction is taxonomic, not phylogenetic; Form ID rubiscos are eukaryotic while IC rubiscos are prokaryotic.
We include some additional sequences in the III-like clade because they branch very closely to the remainder of the III-like and far from everything else.
Form I is defined as all sequences in a clade containing the Form I alphas. This excludes one sequence that is between the Form I alphas and the Form III transaldolase variants: MCA9846407.1
Form I'' can diverge before or after Form I' depending on the tree model used. In Schulz et al. 2022 [32] they constrain it to diverge after I' but with a bootstrap of just 23 (they chose model LG using a best-fit approach in RAxML). This constraint was imposed because of a short insertion common to all Form I sequences and Form I’’ that is absent in Form I’ and Iα - parsimony would clearly indicate that Form I’’ is more derived (see Schulz et al. 2022 supplementary text).
When we make the tree using the model from West-Roberts et al. 2021 [16] (LG+F+G, File S6) we get a similar result to Schulz et al., with I'' diverging after I' and a bootstrap of 29. When we used model finder in IQTree it chooses LG+R8 and we get a bootstrap of 99 with I' diverging before I'' (File S7). The placement of the I'' clade is therefore sensitive to the exact sequences used and the alignment model. With advances in metagenomics and the discovery of new sequences of this enigmatic group, the resolution of the position in the tree may improve. | Provide a detailed description of the following dataset: File S3 |
File S4 | This tree was generated as indicated above in the methods. The model chosen by the algorithm was LG+F+R10. | Provide a detailed description of the following dataset: File S4
File S5 | This tree was generated as indicated above in the methods. The model chosen by the algorithm was LG+F+R8. | Provide a detailed description of the following dataset: File S5 |
File S6 | This tree was generated as indicated above in the text for file S3 using model LG+F+G. | Provide a detailed description of the following dataset: File S6 |
File S7 | This tree was generated as indicated above in the methods. The model chosen by the algorithm was LG+R8. | Provide a detailed description of the following dataset: File S7 |
EGFxSet | EGFxSet (Electric Guitar Effects dataset) features recordings for all clean tones in a 22-fret Stratocaster, recorded with 5 different pickup configurations, also processed through 12 popular guitar effects. Our dataset was recorded in real hardware, making it relevant for music information retrieval tasks on real music. We also include annotations for parameter settings of the effects we used.
EGFxSet is a dataset of 8,970 audio files with a 5-second duration each, summing a total time of 12 hours and 28 minutes.
All 138 possible notes of a standard-tuning, 22-fret guitar were recorded in each of the 5 pickup configurations, giving a total of 690 clean-tone audio files (58 min).
The 690 clean audio files (58 min) were processed through 12 different audio effects using actual guitar gear (no VST emulations were used), yielding a total of 8,280 processed audio files (11 hours 30 min).
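The file counts above can be sanity-checked arithmetically; this sketch only reproduces the numbers stated in the description:

```python
# Reproducing the EGFxSet file counts. These are pure arithmetic checks on
# the numbers given in the description, not derived from the data itself.

notes = 6 * (22 + 1)     # 6 strings x (22 fretted positions + open string)
clean = notes * 5        # one clean recording per pickup configuration
processed = clean * 12   # each clean file through 12 effects
total = clean + processed

print(notes, clean, processed, total)  # 138 690 8280 8970
print(total * 5 / 60, "minutes of audio")  # 747.5 (about 12 h 28 min)
```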
The effects employed were divided into four categories, and each category comprised three different effects. Some gear used supported the generation of more than one effect setting (but only one was recorded at a time).
Categories, Models and Effects: Distortion (Blues Driver, Tube Screamer, RAT2 Distortion), Modulation (Chorus, Phaser, Flanger), Delays (Digital Delay, Tape Echo, Sweep Echo), Reverb (Plate Reverb, Hall Reverb, Spring Reverb).
Annotations are labeled by a trained electric guitar musician. For each tone, we provide: guitar string number, fret number, guitar pickup configuration, effect name, effect type, hardware modes, knob names, knob types, and knob settings.
The dataset website is: https://egfxset.github.io/
The data can be accessed here: https://zenodo.org/record/7044411#.YxKdSWzMKEI
An ISMIR extended abstract was presented in 2022: https://ismir2022.ismir.net/program/lbd/
This dataset was conceived during Iran Roman's "Deep Learning for Music Information Retrieval" course imparted in the postgraduate studies in music technology at the UNAM (Universidad Nacional Autónoma de México). The result is a combined effort between two UNAM postgraduate students (Hegel Pedroza and Gerardo Meza) and Iran Roman (NYU). | Provide a detailed description of the following dataset: EGFxSet |
VTC | VTC is a large-scale multimodal dataset containing video-caption pairs (~300k) alongside comments that can be used for multimodal representation learning. | Provide a detailed description of the following dataset: VTC |
Duke Breast Cancer MRI | Breast MRI scans of 922 cancer patients from Duke University, with tumor bounding box annotations, clinical, imaging, and many other features. | Provide a detailed description of the following dataset: Duke Breast Cancer MRI
SymphonyNet | First large-scale symphony generation dataset. | Provide a detailed description of the following dataset: SymphonyNet |
RxRx1 | **RxRx1** is a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. | Provide a detailed description of the following dataset: RxRx1 |
ValNov Subtask A | Binary labels for Validity and for Novelty are given for each conclusion. | Provide a detailed description of the following dataset: ValNov Subtask A
ValNov Subtask B | Validity and Novelty are determined in a comparative setting between two conclusions at a time. For each of Validity and Novelty, the possible labels are "Conclusion 1 is better", "tie" and "Conclusion 2 is better". | Provide a detailed description of the following dataset: ValNov Subtask B
InstructPix2Pix Image Editing Dataset | 
A dataset for image editing containing *>450k* samples of:
1. input image (with corresponding text caption describing the image)
2. text-based edit instruction
3. edited image (with corresponding text caption describing the image)
This dataset is automatically generated using a combination of GPT-3 (for generating the text edits) and StableDiffusion+Prompt-To-Prompt (for generating the input & edited images).
A full description of the dataset can be found in the paper: https://www.timothybrooks.com/instruct-pix2pix/ | Provide a detailed description of the following dataset: InstructPix2Pix Image Editing Dataset
AlexMI | # Alex Motor Imagery dataset.
## Dataset summary
Motor imagery dataset from the PhD dissertation of A. Barachant.
This dataset contains EEG recordings from 8 subjects performing two
motor imagination tasks (right hand and feet) plus rest trials. Data were
recorded at 512 Hz with 16 wet electrodes (Fpz, F7, F3, Fz, F4, F8, T7,
C3, Cz, C4, T8, P7, P3, Pz, P4, P8) with a g.tec g.USBamp EEG
amplifier.
Files are provided in MNE raw file format. A stimulation channel
encodes the timing of the motor imagination. The start of a trial is
encoded as 1, then the actual start of the motor imagination is
encoded with 2 for imagination of a right hand movement, 3 for
imagination of both feet movement and 4 with a rest trial.
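A minimal sketch of decoding that stimulation channel (trial start = 1, right hand = 2, feet = 3, rest = 4). The helper and the synthetic stim values are illustrative; real data would be read from the MNE raw files:

```python
# Decode motor-imagery onsets from a stim channel using the event codes
# given in the description. The stim samples here are synthetic.

EVENT_ID = {2: "right_hand", 3: "feet", 4: "rest"}

def decode_stim(stim):
    """Return (sample_index, label) for each motor-imagery onset marker."""
    events = []
    prev = 0
    for i, v in enumerate(stim):
        if v != prev and v in EVENT_ID:  # rising edge into an MI code
            events.append((i, EVENT_ID[v]))
        prev = v
    return events

fake_stim = [0, 1, 0, 2, 0, 0, 1, 0, 4, 0]
print(decode_stim(fake_stim))  # [(3, 'right_hand'), (8, 'rest')]
```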
The duration of each trial is 3 seconds. There are 20 trials of each
class. | Provide a detailed description of the following dataset: AlexMI
BNCI 2014-001 Motor Imagery dataset. | ## BNCI 2014-001 Motor Imagery dataset
Dataset IIa from BCI Competition 4 [1].
### Dataset Description
This data set consists of EEG data from 9 subjects. The cue-based BCI paradigm consisted of four different motor imagery tasks, namely the imagination of movement of the left hand (class 1), right hand (class 2), both feet (class 3), and tongue (class 4). Two sessions on different days were recorded for each subject. Each session is comprised of 6 runs separated by short breaks. One run consists of 48 trials (12 for each of the four possible classes), yielding a total of 288 trials per session.
The subjects were sitting in a comfortable armchair in front of a computer screen. At the beginning of a trial (t = 0 s), a fixation cross appeared on the black screen. In addition, a short acoustic warning tone was presented. After two seconds (t = 2 s), a cue in the form of an arrow pointing either to the left, right, down or up (corresponding to one of the four classes left hand, right hand, foot or tongue) appeared and stayed on the screen for 1.25 s. This prompted the subjects to perform the desired motor imagery task. No feedback was provided. The subjects were asked to carry out the motor imagery task until the fixation cross disappeared from the screen at t = 6 s.
Twenty-two Ag/AgCl electrodes (with inter-electrode distances of 3.5 cm) were used to record the EEG; the montage is shown in Figure 3 left. All signals were recorded monopolarly with the left mastoid serving as reference and the right mastoid as ground. The signals were sampled at 250 Hz and bandpass-filtered between 0.5 Hz and 100 Hz. The sensitivity of the amplifier was set to 100 μV. An additional 50 Hz notch filter was enabled to suppress line noise.
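The trial timing described above (cue at t = 2 s, imagery until t = 6 s) together with the 250 Hz sampling rate can be turned into epoch sample indices. Epoching on the cue-to-end window is an assumed choice for illustration, not one prescribed by the dataset:

```python
# Convert the stated trial timing into epoch sample indices. The window
# choice (cue onset to end of imagery) is an illustrative assumption.

FS = 250                   # sampling rate in Hz
CUE_ON, MI_END = 2.0, 6.0  # cue onset and end of imagery, in seconds

start, stop = int(CUE_ON * FS), int(MI_END * FS)
print(start, stop, stop - start)  # 500 1500 1000 samples per epoch

# Trial bookkeeping from the description: 6 runs x 48 trials per session.
trials_per_session = 6 * 48
print(trials_per_session)  # 288
```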
### References
[1] Tangermann, M., Müller, K.R., Aertsen, A., Birbaumer, N., Braun, C., Brunner, C., Leeb, R., Mehring, C., Miller, K.J., Mueller-Putz, G. and Nolte, G., 2012. Review of the BCI competition IV. Frontiers in neuroscience, 6, p.55. | Provide a detailed description of the following dataset: BNCI 2014-001 Motor Imagery dataset. |
BNCI 2014-004 Motor Imagery dataset. | **Dataset description**
This data set consists of EEG data from 9 subjects of a study published in
[1]_. The subjects were right-handed, had normal or corrected-to-normal
vision and were paid for participating in the experiments.
All volunteers were sitting in an armchair, watching a flat screen monitor
placed approximately 1 m away at eye level. For each subject 5 sessions
are provided, whereby the first two sessions contain training data without
feedback (screening), and the last three sessions were recorded with
feedback.
Three bipolar recordings (C3, Cz, and C4) were recorded with a sampling
frequency of 250 Hz. They were bandpass-filtered between 0.5 Hz and 100 Hz,
and a notch filter at 50 Hz was enabled. The placement of the three
bipolar recordings (large or small distances, more anterior or posterior)
were slightly different for each subject (for more details see [1]).
The electrode position Fz served as EEG ground. In addition to the EEG
channels, the electrooculogram (EOG) was recorded with three monopolar
electrodes.
The cue-based screening paradigm consisted of two classes,
namely the motor imagery (MI) of left hand (class 1) and right hand
(class 2).
Each subject participated in two screening sessions without feedback
recorded on two different days within two weeks.
Each session consisted of six runs with ten trials each and two classes of
imagery. This resulted in 20 trials per run and 120 trials per session.
Data of 120 repetitions of each MI class were available for each person in
total. Prior to the first motor imagery training the subject executed
and imagined different movements for each body part and selected the one
which they could imagine best (e. g., squeezing a ball or pulling a brake).
Each trial started with a fixation cross and an additional short acoustic
warning tone (1 kHz, 70 ms). Some seconds later a visual cue was presented
for 1.25 seconds. Afterwards the subjects had to imagine the corresponding
hand movement over a period of 4 seconds. Each trial was followed by a
short break of at least 1.5 seconds. A randomized time of up to 1 second
was added to the break to avoid adaptation.
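The screening-session bookkeeping described above can be sketched as follows; the trial counts come directly from the text, while the jitter helper is an illustrative assumption about how the randomized break could be drawn:

```python
import random

# Trial counts from the description: six runs, ten trials per class, two
# classes of imagery. The break helper is an illustrative assumption.

runs, trials_per_class, classes = 6, 10, 2
trials_per_run = trials_per_class * classes
trials_per_session = runs * trials_per_run
print(trials_per_run, trials_per_session)  # 20 120

def break_duration(rng=random):
    """Inter-trial break: at least 1.5 s plus up to 1 s of random jitter."""
    return 1.5 + rng.random()

print(1.5 <= break_duration() < 2.5)  # True
```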
For the three online feedback sessions four runs with smiley feedback
were recorded, whereby each run consisted of twenty trials for each type of
motor imagery. At the beginning of each trial (second 0) the feedback (a
gray smiley) was centered on the screen. At second 2, a short warning beep
(1 kHz, 70 ms) was given. The cue was presented from second 3 to 7.5. At
second 7.5 the screen went blank and a random interval between 1.0 and 2.0
seconds was added to the trial. | Provide a detailed description of the following dataset: BNCI 2014-004 Motor Imagery dataset. |
Motor Imagery dataset from Cho et al 2017. | **Dataset Description**
We conducted a BCI experiment for motor imagery movement (MI movement)
of the left and right hands with 52 subjects (19 females, mean age ±
SD age = 24.8 ± 3.86 years). Each subject took part in the same
experiment, and subject ID was denoted and indexed as s1, s2, ...,
s52. Subjects s20 and s33 were both-handed, and the other 50 subjects
were right-handed.
EEG data were collected using 64 Ag/AgCl active electrodes. A
64-channel montage based on the international 10-10 system was used to
record the EEG signals with 512 Hz sampling rates. The EEG device used
in this experiment was the Biosemi ActiveTwo system. The BCI2000
system 3.0.2 was used to collect EEG data and present instructions
(left hand or right hand MI). Furthermore, we recorded EMG as well as
EEG simultaneously with the same system and sampling rate to check
actual hand movements. Two EMG electrodes were attached to the flexor
digitorum profundus and extensor digitorum on each arm.
Subjects were asked to imagine the hand movement depending on the
instruction given. Five or six runs were performed during the MI
experiment. After each run, we calculated the classification accuracy
over one run and gave the subject feedback to increase motivation.
Between each run, a maximum 4-minute break was given depending on the
subject's demands. | Provide a detailed description of the following dataset: Motor Imagery dataset from Cho et al 2017.
Physionet Motor Imagery dataset. | ## Physionet MI dataset: https://physionet.org/pn4/eegmmidb/
This data set consists of over 1500 one- and two-minute EEG recordings,
obtained from 109 volunteers [2]_.
Subjects performed different motor/imagery tasks while 64-channel EEG were
recorded using the BCI2000 system (http://www.bci2000.org) [1]_.
Each subject performed 14 experimental runs: two one-minute baseline runs
(one with eyes open, one with eyes closed), and three two-minute runs of
each of the four following tasks:
1. A target appears on either the left or the right side of the screen.
The subject opens and closes the corresponding fist until the target
disappears. Then the subject relaxes.
2. A target appears on either the left or the right side of the screen.
The subject imagines opening and closing the corresponding fist until
the target disappears. Then the subject relaxes.
3. A target appears on either the top or the bottom of the screen.
The subject opens and closes either both fists (if the target is on top)
or both feet (if the target is on the bottom) until the target
disappears. Then the subject relaxes.
4. A target appears on either the top or the bottom of the screen.
The subject imagines opening and closing either both fists
(if the target is on top) or both feet (if the target is on the bottom)
until the target disappears. Then the subject relaxes.
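Assuming the conventional PhysioNet run ordering (runs 1-2 are the baselines, runs 3-14 cycle through tasks 1-4 three times), the task for a given run can be recovered arithmetically. This ordering is an assumption consistent with, but not stated explicitly in, the description above:

```python
# Map each of the 14 runs to baseline or one of the four tasks, assuming
# the conventional PhysioNet run ordering (an assumption, see lead-in).

def run_to_task(run):
    """Map a run number (1-14) to 'baseline' or a task number 1-4."""
    if run in (1, 2):
        return "baseline"
    if 3 <= run <= 14:
        return (run - 3) % 4 + 1
    raise ValueError("run must be in 1..14")

print([run_to_task(r) for r in range(1, 15)])
# ['baseline', 'baseline', 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4]
```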
## References
[1] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N. and
Wolpaw, J.R., 2004. BCI2000: a general-purpose brain-computer
interface (BCI) system. IEEE Transactions on biomedical engineering,
51(6), pp.1034-1043.
[2] Goldberger, A.L., Amaral, L.A., Glass, L., Hausdorff, J.M., Ivanov,
P.C., Mark, R.G., Mietus, J.E., Moody, G.B., Peng, C.K., Stanley,
H.E. and PhysioBank, P., PhysioNet: components of a new research
resource for complex physiologic signals Circulation 2000 Volume
101 Issue 23 pp. E215–E220. | Provide a detailed description of the following dataset: Physionet Motor Imagery dataset. |
High-gamma dataset discribed in Schirrmeister et al. 2017 | High-gamma dataset described in Schirrmeister et al. 2017
Our “High-Gamma Dataset” is a 128-electrode dataset (of which we later only
use 44 sensors covering the motor cortex; see Section 2.7.1), obtained from 14
healthy subjects (6 female, 2 left-handed, age 27.2 ± 3.6 (mean ± std)) with
roughly 1000 (963.1 ± 150.9, mean ± std) four-second trials of executed
movements divided into 13 runs per subject. The four classes of movements were
movements of either the left hand, the right hand, both feet, and rest (no
movement, but same type of visual cue as for the other classes). The training
set consists of the approx. 880 trials of all runs except the last two runs,
the test set of the approx. 160 trials of the last 2 runs. This dataset was
acquired in an EEG lab optimized for non-invasive detection of high- frequency
movement-related EEG components (Ball et al., 2008; Darvas et al., 2010).
Depending on the direction of a gray arrow that was shown on a black
background, the subjects had to repetitively clench their toes (downward arrow),
perform sequential finger-tapping of their left (leftward arrow) or right
(rightward arrow) hand, or relax (upward arrow). The movements were selected
to require little proximal muscular activity while still being complex enough
to keep subjects involved. Within the 4-s trials, the subjects performed the
repetitive movements at their own pace, which had to be maintained as long as
the arrow was showing. Per run, 80 arrows were displayed for 4 s each, with 3
to 4 s of continuous random inter-trial interval. The order of presentation
was pseudo-randomized, with all four arrows being shown every four trials.
Ideally 13 runs were performed to collect 260 trials of each movement and rest.
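The ideal per-subject trial counts implied by the description (13 runs of 80 trials, the last two runs held out as the test set) work out as:

```python
# Ideal per-subject counts from the description. Real counts vary
# (963.1 +/- 150.9 trials per subject), so these are the ideal numbers only.

runs, trials_per_run, test_runs = 13, 80, 2

total = runs * trials_per_run      # 1040 trials, "roughly 1000"
test = test_runs * trials_per_run  # approx. 160 test trials
train = total - test               # approx. 880 training trials
per_class = total // 4             # 260 trials of each movement and rest

print(train, test, total, per_class)  # 880 160 1040 260
```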
The stimuli were presented and the data recorded with BCI2000 (Schalk et al.,
2004). The experiment was approved by the ethical committee of the University
of Freiburg.
## References
[1] Schirrmeister, Robin Tibor, et al. "Deep learning with convolutional
neural networks for EEG decoding and visualization." Human brain mapping 38.11
(2017): 5391-5420. | Provide a detailed description of the following dataset: High-gamma dataset discribed in Schirrmeister et al. 2017 |
Motor Imagery Dataset from Shin et al 2017 | ## Data Acquisition
EEG and NIRS data was collected in an ordinary bright room. EEG data was
recorded by a multichannel BrainAmp EEG amplifier with thirty active
electrodes (Brain Products GmbH, Gilching, Germany) with linked mastoids
reference at 1000 Hz sampling rate. The EEG amplifier was also used to
measure the electrooculogram (EOG), electrocardiogram (ECG) and respiration
with a piezo based breathing belt. Thirty EEG electrodes were placed on a
custom-made stretchy fabric cap (EASYCAP GmbH, Herrsching am Ammersee,
Germany) and placed according to the international 10-5 system (AFp1, AFp2,
AFF1h, AFF2h, AFF5h, AFF6h, F3, F4, F7, F8, FCC3h, FCC4h, FCC5h, FCC6h, T7,
T8, Cz, CCP3h, CCP4h, CCP5h, CCP6h, Pz, P3, P4, P7, P8, PPO1h, PPO2h, POO1,
POO2 and Fz for ground electrode).
NIRS data was collected by NIRScout (NIRx GmbH, Berlin, Germany) at 12.5 Hz
sampling rate. Each adjacent source-detector pair creates one physiological
NIRS channel. Fourteen sources and sixteen detectors, resulting in
thirty-six physiological channels, were placed at frontal (nine channels
around Fp1,
Fp2, and Fpz), motor (twelve channels around C3 and C4, respectively) and
visual areas (three channels around Oz). The inter-optode distance was 30
mm. NIRS optodes were fixed on the same cap as the EEG electrodes. Ambient
lights were sufficiently blocked by a firm contact between NIRS optodes and
scalp and use of an opaque cap.
EOG was recorded using two vertical (above and below left eye) and two
horizontal (outer canthus of each eye) electrodes. ECG was recorded based
on Einthoven triangle derivations I and II, and respiration was measured
using a respiration belt on the lower chest. EOG, ECG and respiration were
sampled at the same rate as the EEG. ECG and respiration data were not
analyzed in this study, but are provided along with the other signals.
## Experimental Procedure
The subjects sat on a comfortable armchair in front of a 50-inch white
screen. The distance between their heads and the screen was 1.6 m. They
were asked not to move any part of the body during the data recording. The
experiment consisted of three sessions of left and right hand MI (dataset
A) and MA and baseline tasks (taking a rest without any thought) (dataset
B) each. Each session comprised a 1 min pre-experiment resting period, 20
repetitions of the given task and a 1 min post-experiment resting period.
The task started with 2 s of a visual introduction of the task, followed
by a 10 s task period and a resting period whose duration was chosen
randomly between 15 and 17 s. At the beginning and end of the task period,
a short beep (250 ms) was played. All instructions were displayed on the
white screen by a video projector. MI and MA tasks were performed in
separate sessions but in alternating order (i.e., sessions 1, 3 and 5 for
MI (dataset A) and sessions 2, 4 and 6 for MA (dataset B)). Fig. 2 shows
the schematic diagram of the experimental paradigm. Five sorts of motion
artifacts induced by eye and head movements (dataset C) were measured. The
motion artifacts were recorded after all MI and MA task recordings; these
recordings did not include the pre- and post-experiment resting periods.
## Motor Imagery (Dataset A)
For motor imagery, subjects were instructed to perform haptic motor imagery
(i.e. to imagine the feeling of opening and closing their hands as if they
were grabbing a ball) to ensure that actual motor imagery, not visual
imagery, was performed. All subjects were naive to the MI experiment. For
the visual instruction, a black arrow pointing to either the left or right
side appeared at the center of the screen for 2 s. The arrow disappeared
with a short beep sound and then a black fixation cross was displayed
during the task period. The subjects were asked to imagine hand gripping
(opening and closing their hands) at a 1 Hz pace. This pace was shown to
and repeated by the subjects by performing real hand gripping before the
experiment. Motor imagery was performed continuously over the task period.
The task period finished with a short beep sound and a 'STOP' displayed
for 1 s on the screen. The fixation cross was displayed again during the
rest period and the subjects were asked to gaze at it to minimize their
eye movements. This process was repeated twenty times in a single session
(ten trials per condition per session; thirty trials per condition over
all sessions). In a single session, motor imagery tasks were performed in
ten consecutive blocks, each randomly consisting of one of two orders:
either first left and then right hand motor imagery, or vice versa.
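As a rough sketch of how the 10 s task periods might be cut out of the continuous 1000 Hz EEG: the sampling rate and task length come from the text, while the synthetic data and onset times below are illustrative assumptions, not part of the dataset.

```python
import numpy as np

fs = 1000                                         # EEG sampling rate (Hz)
n_channels = 30
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_channels, 60 * fs))  # 60 s of fake continuous EEG

task_onsets_s = [5.0, 32.0]     # hypothetical task-period start times (s)
task_samples = 10 * fs          # 10 s task period

# Slice one epoch per onset and stack them into (trials, channels, samples).
epochs = np.stack([eeg[:, int(t * fs):int(t * fs) + task_samples]
                   for t in task_onsets_s])
print(epochs.shape)             # (2, 30, 10000)
```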
## References
[1] Shin, J., von Lühmann, A., Blankertz, B., Kim, D.W., Jeong, J.,
Hwang, H.J. and Müller, K.R., 2017. Open access dataset for EEG+NIRS
single-trial classification. IEEE Transactions on Neural Systems
and Rehabilitation Engineering, 25(10), pp.1735-1745.
[2] GNU General Public License, Version 3
`<https://www.gnu.org/licenses/gpl-3.0.txt>`_ | Provide a detailed description of the following dataset: Motor Imagery Dataset from Shin et al 2017
Mental Arithmetic Dataset from Shin et al 2017 | **Data Acquisition**
EEG and NIRS data was collected in an ordinary bright room. EEG data was
recorded by a multichannel BrainAmp EEG amplifier with thirty active
electrodes (Brain Products GmbH, Gilching, Germany) with linked mastoids
reference at 1000 Hz sampling rate. The EEG amplifier was also used to
measure the electrooculogram (EOG), electrocardiogram (ECG) and respiration
with a piezo-based breathing belt. Thirty EEG electrodes were mounted on a
custom-made stretchy fabric cap (EASYCAP GmbH, Herrsching am Ammersee,
Germany) and placed according to the international 10-5 system (AFp1, AFp2,
AFF1h, AFF2h, AFF5h, AFF6h, F3, F4, F7, F8, FCC3h, FCC4h, FCC5h, FCC6h, T7,
T8, Cz, CCP3h, CCP4h, CCP5h, CCP6h, Pz, P3, P4, P7, P8, PPO1h, PPO2h, POO1,
POO2 and Fz for ground electrode).
NIRS data was collected by NIRScout (NIRx GmbH, Berlin, Germany) at 12.5 Hz
sampling rate. Each adjacent source-detector pair creates one physiological
NIRS channel. Fourteen sources and sixteen detectors, resulting in
thirty-six physiological channels, were placed at frontal (nine channels
around Fp1,
Fp2, and Fpz), motor (twelve channels around C3 and C4, respectively) and
visual areas (three channels around Oz). The inter-optode distance was 30
mm. NIRS optodes were fixed on the same cap as the EEG electrodes. Ambient
lights were sufficiently blocked by a firm contact between NIRS optodes and
scalp and use of an opaque cap.
EOG was recorded using two vertical (above and below left eye) and two
horizontal (outer canthus of each eye) electrodes. ECG was recorded based
on Einthoven triangle derivations I and II, and respiration was measured
using a respiration belt on the lower chest. EOG, ECG and respiration were
sampled at the same rate as the EEG. ECG and respiration data were not
analyzed in this study, but are provided along with the other signals.
**Experimental Procedure**
The subjects sat on a comfortable armchair in front of a 50-inch white
screen. The distance between their heads and the screen was 1.6 m. They
were asked not to move any part of the body during the data recording. The
experiment consisted of three sessions of left and right hand MI (dataset
A) and MA and baseline tasks (taking a rest without any thought) (dataset
B) each. Each session comprised a 1 min pre-experiment resting period, 20
repetitions of the given task and a 1 min post-experiment resting period.
The task started with 2 s of a visual introduction of the task, followed
by a 10 s task period and a resting period whose duration was chosen
randomly between 15 and 17 s. At the beginning and end of the task period,
a short beep (250 ms) was played. All instructions were displayed on the
white screen by a video projector. MI and MA tasks were performed in
separate sessions but in alternating order (i.e., sessions 1, 3 and 5 for
MI (dataset A) and sessions 2, 4 and 6 for MA (dataset B)). Fig. 2 shows
the schematic diagram of the experimental paradigm. Five sorts of motion
artifacts induced by eye and head movements (dataset C) were measured. The
motion artifacts were recorded after all MI and MA task recordings; these
recordings did not include the pre- and post-experiment resting periods.
**Mental Arithmetic (Dataset B)**
For the visual instruction of the MA task, an initial subtraction such as
'three-digit number minus one-digit number' (e.g., 384-8) appeared at the
center of the screen for 2 s. The subjects were instructed to memorize the
numbers while the initial subtraction was displayed on the screen. The
initial subtraction disappeared with a short beep sound and a black
fixation cross was displayed during the task period, in which the subjects
were asked to repeatedly subtract the one-digit number from the result of
the previous subtraction. For the baseline task, nothing but the black
fixation cross was displayed on the screen, and the subjects were
instructed to take a rest. Note that there were additional rest periods
between the MA and baseline task periods, just as in the MI paradigm. Both
task periods finished with a short beep sound and a 'STOP' displayed for
1 s on the screen. The fixation cross was displayed again during the rest
period. MA and baseline trials were randomized in the same way as in MI.
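The repeated-subtraction task can be illustrated with a tiny generator; 384 - 8 is the example from the text, while the helper name and the number of steps are made up for illustration.

```python
# Illustrative sequence for the mental-arithmetic task: starting from an
# initial subtraction such as 384 - 8, the subject keeps subtracting the
# same one-digit number from the previous result.
def ma_sequence(start, step, n_steps):
    seq = [start]
    for _ in range(n_steps):
        seq.append(seq[-1] - step)
    return seq

print(ma_sequence(384, 8, 5))  # [384, 376, 368, 360, 352, 344]
```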
References
----------
[1] Shin, J., von Lühmann, A., Blankertz, B., Kim, D.W., Jeong, J.,
Hwang, H.J. and Müller, K.R., 2017. Open access dataset for EEG+NIRS
single-trial classification. IEEE Transactions on Neural Systems
and Rehabilitation Engineering, 25(10), pp.1735-1745.
https://ieeexplore.ieee.org/document/7742400/
[2] GNU General Public License, Version 3
`<https://www.gnu.org/licenses/gpl-3.0.txt>`_ | Provide a detailed description of the following dataset: Mental Arithmetic Dataset from Shin et al 2017 |
Motor Imagery dataset from Weibo et al 2014. | Dataset from the article *Evaluation of EEG oscillatory patterns and
cognitive process during simple and compound limb motor imagery* [1]_.
It contains data recorded on 10 subjects, with 60 electrodes.
This dataset was used to investigate the differences of the EEG patterns
between simple limb motor imagery and compound limb motor
imagery. Seven kinds of mental tasks have been designed, involving three
tasks of simple limb motor imagery (left hand, right hand, feet), three
tasks of compound limb motor imagery combining hand with hand/foot
(both hands, left hand combined with right foot, right hand combined with
left foot) and rest state.
At the beginning of each trial (8 seconds), a white circle appeared at the
center of the monitor. After 2 seconds, a red circle (preparation cue)
appeared for 1 second to remind the subjects to pay attention to the
upcoming character indication. Then the red circle disappeared and a
character indication (‘Left Hand’, ‘Left Hand & Right Foot’, etc.) was
presented on the screen for 4 seconds, during which the participants were
asked to perform kinesthetic motor imagery rather than a visual type of
imagery while avoiding any muscle movement. After 7 seconds, ‘Rest’ was
presented for 1 second before the next trial (Fig. 1(a)). The experiments
were divided into 9 sections: 8 sections of 60 trials each for the six
kinds of MI tasks (10 trials for each MI task per section) and one section
of 80 trials for the rest state. The sequence of the six MI tasks was
randomized. Breaks between sections lasted about 5 to 10 minutes.
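The 8 s trial structure and the section counts described above can be laid out explicitly (a sketch; all durations and counts come from the text):

```python
# 8 s trial timeline reconstructed from the description.
timeline = [("white circle (fixation)", 2.0),
            ("red circle (preparation cue)", 1.0),
            ("character cue + kinesthetic MI", 4.0),
            ("'Rest'", 1.0)]
trial_len = sum(d for _, d in timeline)

# Section bookkeeping: 8 MI sections of 60 trials + 1 rest section of 80,
# with 10 trials per MI task in each MI section.
mi_trials = 8 * 60
total_trials = mi_trials + 80
per_mi_task = 8 * 10

print(trial_len, total_trials, per_mi_task)  # 8.0 560 80
```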
References
-----------
[1] Yi, Weibo, et al. "Evaluation of EEG oscillatory patterns and
cognitive process during simple and compound limb motor imagery."
PloS one 9.12 (2014). https://doi.org/10.1371/journal.pone.0114853 | Provide a detailed description of the following dataset: Motor Imagery dataset from Weibo et al 2014. |
Motor Imagery dataset from Zhou et al 2016. | Dataset from the article *A Fully Automated Trial Selection Method for
Optimization of Motor Imagery Based Brain-Computer Interface* [1]_.
This dataset contains data recorded from 4 subjects performing 3 types of
motor imagery: left hand, right hand and feet.
Every subject went through three sessions, each of which contained two
consecutive runs with an inter-run break of several minutes, and each run
comprised 75 trials (25 trials per class). The intervals between two
sessions varied from several days to several months.
A trial started with a short beep indicating 1 s of preparation time,
followed by a red arrow pointing randomly in one of three directions
(left, right, or bottom) lasting for 5 s, and then a black screen for 4 s.
The subject was instructed to immediately perform the imagination task of
left hand, right hand or foot movement according to the cue direction, and
to try to relax during the black screen.
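A quick check of the resulting per-subject trial counts (a sketch; numbers from the text):

```python
# 3 sessions x 2 runs x 75 trials per subject, balanced over 3 classes.
sessions, runs_per_session, trials_per_run, n_classes = 3, 2, 75, 3
total = sessions * runs_per_session * trials_per_run
print(total, total // n_classes)  # 450 150
```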
References
----------
[1] Zhou B, Wu X, Lv Z, Zhang L, Guo X (2016) A Fully Automated
Trial Selection Method for Optimization of Motor Imagery Based
Brain-Computer Interface. PLoS ONE 11(9).
https://doi.org/10.1371/journal.pone.0162657 | Provide a detailed description of the following dataset: Motor Imagery dataset from Zhou et al 2016. |
BNCI 2014-002 Motor Imagery dataset | **Dataset Description**
This data set consists of EEG data from 9 subjects. The cue-based BCI
paradigm consisted of four different motor imagery tasks, namely the
imagination of movement of the left hand (class 1), right hand (class 2),
both feet (class 3), and tongue (class 4). Two sessions on different days
were recorded for each subject. Each session comprises 6 runs separated
by short breaks. One run consists of 48 trials (12 for each of the four
possible classes), yielding a total of 288 trials per session.
The subjects were sitting in a comfortable armchair in front of a computer
screen. At the beginning of a trial ( t = 0 s), a fixation cross appeared
on the black screen. In addition, a short acoustic warning tone was
presented. After two seconds ( t = 2 s), a cue in the form of an arrow
pointing either to the left, right, down or up (corresponding to one of the
four classes left hand, right hand, foot or tongue) appeared and stayed on
the screen for 1.25 s. This prompted the subjects to perform the desired
motor imagery task. No feedback was provided. The subjects were asked to
carry out the motor imagery task until the fixation cross disappeared from
the screen at t = 6 s.
Twenty-two Ag/AgCl electrodes (with inter-electrode distances of 3.5 cm)
were used to record the EEG; the montage is shown in Figure 3 left. All
signals were recorded monopolarly with the left mastoid serving as
reference and the right mastoid as ground. The signals were sampled at
250 Hz and bandpass-filtered between 0.5 Hz and 100 Hz. The sensitivity of
the amplifier was set to 100 μV. An additional 50 Hz notch filter was
enabled to suppress line noise.
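The recording filters described above (0.5-100 Hz band-pass plus a 50 Hz notch at a 250 Hz sampling rate) could be reproduced offline roughly as follows; the filter order and notch Q are assumptions, and the data is synthetic.

```python
import numpy as np
from scipy import signal

fs = 250                                       # sampling rate from the text
b_bp, a_bp = signal.butter(4, [0.5, 100], btype="bandpass", fs=fs)
b_n, a_n = signal.iirnotch(50, Q=30, fs=fs)    # 50 Hz line-noise notch

x = np.random.randn(22, 10 * fs)               # 22 channels, 10 s of fake EEG
y = signal.filtfilt(b_bp, a_bp, x, axis=-1)    # 0.5-100 Hz band-pass
y = signal.filtfilt(b_n, a_n, y, axis=-1)      # notch out line noise

print(y.shape)  # (22, 2500)
```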
References
----------
[1] Tangermann, M., Müller, K.R., Aertsen, A., Birbaumer, N., Braun, C.,
Brunner, C., Leeb, R., Mehring, C., Miller, K.J., Mueller-Putz, G.
and Nolte, G., 2012. Review of the BCI competition IV.
Frontiers in neuroscience, 6, p.55. | Provide a detailed description of the following dataset: BNCI 2014-002 Motor Imagery dataset |
BNCI 2015-001 Motor Imagery dataset | **Dataset description**
We acquired the EEG from three Laplacian derivations, 3.5 cm
(center-to-center) around the electrode positions (according to the
International 10-20 System of Electrode Placement) C3 (FC3, C5, CP3 and
C1), Cz (FCz, C1, CPz and C2) and C4 (FC4, C2, CP4 and C6). The
acquisition hardware was a g.GAMMAsys active electrode system along with a
g.USBamp amplifier (g.tec, Guger Technologies OEG, Graz, Austria). The
system sampled at 512 Hz, with a bandpass filter between 0.5 and 100 Hz
and a notch filter at 50 Hz.
The order of the channels in the data is FC3, FCz, FC4, C5, C3, C1, Cz, C2,
C4, C6, CP3, CPz, CP4.
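The Laplacian derivations above amount to subtracting, from each center electrode, the mean of its four surrounding electrodes; a minimal sketch with made-up single-sample voltages (only the channel names and neighbour sets come from the text):

```python
import numpy as np

# Channel order and neighbour sets as listed above.
order = ["FC3", "FCz", "FC4", "C5", "C3", "C1", "Cz", "C2",
         "C4", "C6", "CP3", "CPz", "CP4"]
neighbours = {"C3": ["FC3", "C5", "CP3", "C1"],
              "Cz": ["FCz", "C1", "CPz", "C2"],
              "C4": ["FC4", "C2", "CP4", "C6"]}

x = {ch: float(i) for i, ch in enumerate(order)}   # fake voltages (illustrative)
lap = {c: x[c] - np.mean([x[n] for n in ns]) for c, ns in neighbours.items()}
print(lap)  # e.g. Cz: 6.0 - mean(1, 5, 11, 7) = 0.0
```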
The task for the user was to perform sustained right hand versus both feet
movement imagery starting from the cue (second 3) to the end of the cross
period (second 8). A trial started with 3 s of reference period, followed
by a brisk audible cue and a visual cue (arrow right for right hand, arrow
down for both feet) from second 3 to 4.25. The activity period, where the
users received feedback, lasted from second 4 to 8. There was a random
2 to 3 s pause between the trials.
References
----------
[1] J. Faller, C. Vidaurre, T. Solis-Escalante, C. Neuper and R.
Scherer (2012). Autocalibration and recurrent adaptation: Towards a
plug and play online ERD- BCI. IEEE Transactions on Neural Systems
and Rehabilitation Engineering, 20(3), 313-319. | Provide a detailed description of the following dataset: BNCI 2015-001 Motor Imagery dataset |
BNCI 2015-004 Motor Imagery dataset | **Dataset description**
We provide EEG data recorded from nine users with disability (spinal cord
injury and stroke) on two different days (sessions). Users performed,
following a cue-guided experimental paradigm, five distinct mental tasks
(MT). MTs include mental word association (condition WORD), mental
subtraction (SUB), spatial navigation (NAV), right hand motor imagery
(HAND) and feet motor imagery (FEET). Details on the experimental paradigm
are summarized in Figure 1. The session for a single subject consisted of
8 runs, resulting in 40 trials of each class per day. One single
experimental run consisted of 25 cues, with 5 of each mental task. Cues
were presented in random order.
EEG was recorded from 30 electrode channels placed on the scalp according
to the international 10-20 system. Electrode positions included channels
AFz, F7, F3, Fz, F4, F8, FC3, FCz, FC4, T3, C3, Cz, C4, T4, CP3, CPz,CP4,
P7, P5, P3, P1, Pz, P2, P4, P6, P8, PO3, PO4, O1, and O2. Reference and
ground were placed at the left and right mastoid, respectively. The g.tec
GAMMAsys system with g.LADYbird active electrodes and two g.USBamp
biosignal amplifiers (Guger Technologies, Graz, Austria) was used for
recording. EEG was band-pass filtered 0.5-100 Hz (notch filter at 50 Hz)
and sampled at a rate of 256 Hz.
The duration of a single imagery trial is 10 s. At t = 0 s, a cross was
presented in the middle of the screen. Participants were asked to relax
and fixate on the cross to avoid eye movements. At t = 3 s, a beep was
sounded to get the participant's attention. The cue indicating the
requested imagery task, one out of five graphical symbols, was presented
from t = 3 s to t = 4.25 s. At t = 10 s, a second beep was sounded and the
fixation cross disappeared, which indicated the end of the trial. A
variable break (inter-trial interval, ITI) lasting between 2.5 s and 3.5 s
occurred before the start of the next trial. Participants were asked to
avoid movements during the imagery period, and to move and blink during
the ITI. Experimental runs began and ended with a blank screen (duration
4 s).
References
----------
[1] Scherer R, Faller J, Friedrich EVC, Opisso E, Costa U, Kübler A, et
al. (2015) Individually Adapted Imagery Improves Brain-Computer
Interface Performance in End-Users with Disability. PLoS ONE 10(5).
https://doi.org/10.1371/journal.pone.0123727 | Provide a detailed description of the following dataset: BNCI 2015-004 Motor Imagery dataset |
BMI/OpenBMI dataset for MI. | BMI/OpenBMI dataset for MI.
Dataset from Lee et al 2019 [1].
### Dataset Description
EEG signals were recorded with a sampling rate of 1,000 Hz and collected with 62 Ag/AgCl electrodes. The EEG amplifier used in the experiment was a BrainAmp (Brain Products; Munich, Germany). The channels were nasion-referenced and grounded to electrode AFz. Additionally, an EMG electrode recorded from each flexor digitorum profundus muscle with the olecranon used as reference. The EEG/EMG channel configuration and indexing numbers are described in Fig. 1. The impedances of the EEG electrodes were maintained below 10 kΩ during the entire experiment.
**MI paradigm**: The MI paradigm was designed following a well-established system protocol. For all blocks, the first 3 s of each trial began with a black fixation cross that appeared at the center of the monitor to prepare subjects for the MI task. Afterwards, the subject performed the imagery task of grasping with the appropriate hand for 4 s when the right or left arrow appeared as a visual cue. After each task, the screen remained blank for 6 s (± 1.5 s). The experiment consisted of training and test phases; each phase had 100 trials with balanced right and left hand imagery tasks. During the online test phase, the fixation cross appeared at the center of the monitor and moved right or left, according to the real-time classifier output of the EEG signal.
### References
[1] Lee, M. H., Kwon, O. Y., Kim, Y. J., Kim, H. K., Lee, Y. E., Williamson, J., … Lee, S. W. (2019). EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience, 8(5), 1–16. https://doi.org/10.1093/gigascience/giz002 | Provide a detailed description of the following dataset: BMI/OpenBMI dataset for MI. |
Munich Motor Imagery dataset | Motor imagery dataset from Grosse-Wentrup et al. 2009 [1].
### Dataset Description
A trial started with the central display of a white fixation cross. After 3 s, a white arrow was superimposed on the fixation cross, either pointing to the left or the right. Subjects were instructed to perform haptic motor imagery of the left or the right hand during display of the arrow, as indicated by the direction of the arrow. After another 7 s, the arrow was removed, indicating the end of the trial and start of the next trial. While subjects were explicitly instructed to perform haptic motor imagery with the specified hand, i.e., to imagine feeling instead of visualizing how their hands moved, the exact choice of which type of imaginary movement, i.e., moving the fingers up and down, gripping an object, etc., was left unspecified. A total of 150 trials per condition were carried out by each subject, with trials presented in pseudorandomized order.
Ten healthy subjects (S1--S10) participated in the experimental evaluation. Of these, two were females, eight were right handed, and their average age was 25.6 years with a standard deviation of 2.5 years. Subject S3 had already participated twice in a BCI experiment, while all other subjects were naive to BCIs. EEG was recorded at M=128 electrodes placed according to the extended 10--20 system. Data were recorded at 500 Hz with electrode Cz as reference. Four BrainAmp amplifiers were used for this purpose, using a temporal analog high-pass filter with a time constant of 10 s. The data were re-referenced to common average reference offline. Electrode impedances were below 10 kΩ for all electrodes and subjects at the beginning of each recording session. No trials were rejected and no artifact correction was performed. For each subject, the locations of the 128 electrodes were measured in three dimensions using a Zebris ultrasound tracking system and stored for further offline analysis.
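The offline re-referencing to common average reference mentioned above simply subtracts the instantaneous mean over all channels; a sketch on synthetic 128-channel data (the data itself is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((128, 500))          # channels x samples, Cz-referenced
car = eeg - eeg.mean(axis=0, keepdims=True)    # common average reference

print(bool(np.allclose(car.mean(axis=0), 0)))  # channel mean is zero per sample
```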
### References
[1] Grosse-Wentrup, Moritz, et al. "Beamforming in noninvasive brain–computer interfaces." IEEE Transactions on Biomedical Engineering 56.4 (2009): 1209-1219. | Provide a detailed description of the following dataset: Munich Motor Imagery dataset |
Motor Imagery dataset from Ofner et al 2017 | ### Dataset description
We recruited 15 healthy subjects aged between 22 and 40 years with a mean
age of 27 years (standard deviation 5 years). Nine subjects were female,
and all the subjects except s1 were right-handed.
We measured each subject in two sessions on two different days, which were
not separated by more than one week. The subjects performed ME in the
first session and MI in the second. They performed six movement types,
which were the same in both sessions and comprised elbow
flexion/extension, forearm supination/pronation and hand open/close, all
with the right upper limb. All movements started at a neutral position:
the hand half open, the lower arm extended to 120 degrees and in a neutral
rotation, i.e. thumb on the inner side.
In addition to the movement classes, a rest class was recorded in which
subjects were instructed to avoid any movement and to stay in the starting
position. In the ME session, we instructed subjects to execute sustained
movements. In the MI session, we asked subjects to perform kinesthetic MI
of the movements done in the ME session (subjects performed one ME run
immediately before the MI session to support kinesthetic MI).
The paradigm was trial-based and cues were displayed on a computer screen
in front of the subjects, Fig 2 shows the sequence of the paradigm.
At second 0, a beep sounded and a cross popped up on the computer screen
(subjects were instructed to fixate their gaze on the cross). Afterwards,
at second 2, a cue was presented on the computer screen, indicating the
required task (one out of six movements or rest) to the subjects. At the
end of the trial, subjects moved back to the starting position. In every
session, we recorded 10 runs with 42 trials per run. We presented 6
movement classes and a rest class and recorded 60 trials per class in a
session.
### References
[1] Ofner, P., Schwarz, A., Pereira, J. and Müller-Putz, G.R., 2017. Upper limb movements can be decoded from the time-domain of low-frequency EEG. PloS one, 12(8), p.e0182578. | Provide a detailed description of the following dataset: Motor Imagery dataset from Ofner et al 2017 |
OLKAVS | The dataset contains 1,150 hours of transcribed audio from 1,107 Korean speakers in a studio setup with nine different viewpoints and various noise situations. We also provide the pre-trained baseline models for two tasks, audio-visual speech recognition and lip reading. | Provide a detailed description of the following dataset: OLKAVS |
RGB Arabic Alphabet Sign Language (AASL) dataset | RGB Arabic Alphabet Sign Language (AASL) dataset | Provide a detailed description of the following dataset: RGB Arabic Alphabet Sign Language (AASL) dataset |
BB | The Bacteria Biotope (BB) Task is part of the [BioNLP Open Shared Tasks](http://2019.bionlp-ost.org) and meets the BioNLP-OST standards of quality, originality and data formats.
Manually annotated data is provided for training, development and evaluation of information extraction methods. Tools for the detailed evaluation of system outputs are available. Support for linguistic processing is provided in the form of analyses created by various state-of-the-art tools on the dataset texts.
## Motivation
Biology and bioinformatics projects produce huge amounts of heterogeneous information about the microbial strains that have been experimentally identified in a given environment (habitat), and their properties (phenotype). These projects span the applied microbiology domain (food safety), health sciences and waste processing. Knowledge about microbial diversity is critical for in-depth study of the microbiome and of the interaction mechanisms of bacteria with their environment from genetic, phylogenetic and ecological perspectives.
A large part of the information is expressed in free text in large sets of scientific papers, web pages or databases. Thus, automatic systems are needed to extract the relevant information. The BB task aims to encourage the development of such systems.
## BB Task Goal
The BB Task is an information extraction task involving entity recognition, entity normalization and relation extraction.
The BB Task consists in recognizing mentions of microorganisms and microbial biotopes and phenotypes in scientific and textbook text, normalizing these mentions according to domain knowledge resources (a taxonomy and an ontology), and extracting relations between them.
It is the new edition of the Bacteria Biotope task previously run at BioNLP Shared Task [2016](https://2016.bionlp-st.org/tasks/bb3), [2013](https://2013.bionlp-st.org/tasks/bacteria-biotopes-bb) and [2011](https://2011.bionlp-st.org/bionlp-shared-task-2011/bacteria-biotopes). The task has been extended to include new entity and relation types and new documents. | Provide a detailed description of the following dataset: BB |
BB-norm-habitat | In the BB-norm modality of this task, participant systems had to normalize textual entity mentions according to the OntoBiotope ontology for habitats.
See [BB-dataset](https://paperswithcode.com/dataset/bb) for more information. | Provide a detailed description of the following dataset: BB-norm-habitat |
BB-norm-phenotype | In the BB-norm modality of this task, participant systems had to normalize textual entity mentions according to the OntoBiotope ontology for phenotypes. See [BB-dataset](https://paperswithcode.com/dataset/bb) for more information. | Provide a detailed description of the following dataset: BB-norm-phenotype |
ChemDisGene | ChemDisGene, a new dataset for training and evaluating multi-class multi-label biomedical relation extraction models. | Provide a detailed description of the following dataset: ChemDisGene |
Traffic | **Abstract**: The task for this dataset is to forecast the spatio-temporal traffic volume based on the historical traffic volume and other features in neighboring locations.
| Data Set Characteristics | Number of Instances | Area | Attribute Characteristics | Number of Attributes | Date Donated | Associated Tasks | Missing Values |
| ------------------------ | ------------------- | -------- | ------------------------- | -------------------- | ------------ | ---------------- | -------------- |
| Multivariate | 2101 | Computer | Real | 47 | 2020-11-17 | Regression | N/A |
### Source:
Liang Zhao, liang.zhao '@' emory.edu, Emory University.
### Data Set Information:
The task for this dataset is to forecast the spatio-temporal traffic volume based on the historical traffic volume and other features in neighboring locations. Specifically, the traffic volume is measured every 15 minutes at 36 sensor locations along two major highways in Northern Virginia/Washington D.C. capital region. The 47 features include: 1) the historical sequence of traffic volume sensed during the 10 most recent sample points (10 features), 2) week day (7 features), 3) hour of day (24 features), 4) road direction (4 features), 5) number of lanes (1 feature), and 6) name of the road (1 feature). The goal is to predict the traffic volume 15 minutes into the future for all sensor locations. With a given road network, we know the spatial connectivity between sensor locations. For the detailed data information, please refer to the file README.docx.
### Attribute Information:
The 47 features include: (1) the historical sequence of traffic volume sensed during the 10 most recent sample points (10 features), (2) week day (7 features), (3) hour of day (24 features), (4) road direction (4 features), (5) number of lanes (1 feature), and (6) name of the road (1 feature).
### Relevant Papers:
Liang Zhao, Olga Gkountouna, and Dieter Pfoser. 2019. Spatial Auto-regressive Dependency Interpretable Learning Based on Spatial Topological Constraints. ACM Trans. Spatial Algorithms Syst. 5, 3, Article 19 (August 2019), 28 pages. DOI:[[Web Link](https://doi.org/10.1145/3339823)]
### Citation Request:
To use these datasets, please cite the papers:
Liang Zhao, Olga Gkountouna, and Dieter Pfoser. 2019. Spatial Auto-regressive Dependency Interpretable Learning Based on Spatial Topological Constraints. ACM Trans. Spatial Algorithms Syst. 5, 3, Article 19 (August 2019), 28 pages. DOI:[[Web Link](https://doi.org/10.1145/3339823)] | Provide a detailed description of the following dataset: Traffic |
Sales | Forecast Sales using ARIMA and SARIMA | Provide a detailed description of the following dataset: Sales |
Bochrum movement data | The linked repository holds data from a controlled single-obstacle avoidance experiment recorded in a motion laboratory. If you are using this data, please cite the following sources for the data
B. Grimme, Analysis and identification of elementary invariants as building blocks of human arm movements. PhD thesis, International Graduate School of Biosciences, Ruhr-Universität Bochum. (In German)
L.L. Raket, B. Grimme, G. Schöner, C. Igel, and B. Markussen, "Separating timing, movement conditions and individual differences in the analysis of human movement ," PLOS Computational Biology, 2016. | Provide a detailed description of the following dataset: Bochrum movement data |
The tourism forecasting competition | > The data we use include 366 monthly series, 427 quarterly series and 518 yearly series. They were supplied by both tourism bodies (such as Tourism Australia, the Hong Kong Tourism Board and Tourism New Zealand) and various academics, who had used them in previous tourism forecasting studies (please refer to the acknowledgements and details of the data sources and availability).
from [the paper](https://www.sciencedirect.com/science/article/pii/S016920701000107X) | Provide a detailed description of the following dataset: The tourism forecasting competition |
HOI-SDC | In order to avoid the training process of the model being influenced by a portion of HOI classes with a very small number of instances, we remove some of the HOI classes containing very few instances, as well as HOI classes with no interaction, from the training **S**et for the **D**ouble **C**hallenge. Finally, there are in total 321 HOI classes, 74 object classes and 93 action classes. The training and testing sets contain 37,155 and 9,666 images, respectively. | Provide a detailed description of the following dataset: HOI-SDC