| dataset_name | description | prompt |
|---|---|---|
VFITex | To test interpolation performance on various texture types, we developed a new test set, VFITex, which contains twenty 100-frame UHD or HD videos at 24, 30 or 50 FPS, collected from the Xiph, Mitch Martinez Free 4K Stock Footage, UVG database and pexels.com. This dataset covers diverse textured scenes, including crowds, flags, foliage, animals, water, leaves, fire and smoke. HD patches were center-cropped from the UHD sequences, preserving the original UHD characteristics. All frames in each sequence were used for evaluation, totaling 940 quintuplets. | Provide a detailed description of the following dataset: VFITex |
HONEST | The HONEST dataset is a template-based corpus for testing the hurtfulness of sentence completions produced by language models (e.g., BERT) in six languages (English, Italian, French, Portuguese, Romanian, and Spanish). HONEST comprises 420 instances per language, generated from 28 identity terms (14 male and 14 female) and 15 templates. It uses identity terms in singular and plural (e.g., woman, women, girl, boys) and a series of predicates (e.g., “works as [MASK]”, “is known for [MASK]”). The objective is to have a language model fill in the sentence; the hurtfulness of the completion is then evaluated. | Provide a detailed description of the following dataset: HONEST |
SID | The See-in-the-Dark (SID) dataset contains 5094 raw short-exposure images, each with a corresponding long-exposure reference image.
Images were captured using two cameras: Sony α7SII and Fujifilm X-T2. | Provide a detailed description of the following dataset: SID |
ELD | The Extreme Low-light Denoising (ELD) dataset covers 10 indoor scenes and 4 camera devices from multiple brands (SonyA7S2, NikonD850, CanonEOS70D, CanonEOS700D).
It uses three ISO levels (800, 1600, 3200) and two low-light factors (100, 200) for the noisy images, resulting in 240 (3×2×10×4) raw image pairs in total. | Provide a detailed description of the following dataset: ELD |
DoodleUINet | The Doodle to UI dataset contains 11 thousand drawings from 16 categories. | Provide a detailed description of the following dataset: DoodleUINet |
CLEVR-X | **CLEVR-X** is a dataset that extends the [CLEVR](/dataset/clevr) dataset with natural language explanations in the context of VQA. It consists of 3.6 million natural language explanations for 850k question-image pairs.
For each image-question pair in the CLEVR dataset, CLEVR-X contains multiple structured textual explanations which are derived from the original scene graphs. By construction, the CLEVR-X explanations are correct and describe the reasoning and visual information that is necessary to answer a given question.
The CLEVR-X dataset consists of:
- A training set of 2,401,275 natural language explanations for 70,000 images.
- A validation set of 599,711 natural language explanations for 14,000 images.
- A test set of 644,151 natural language explanations for 15,000 images. | Provide a detailed description of the following dataset: CLEVR-X |
BirdClef 2020 (Pruned) | Due to the highly variable sample sizes in the original BirdClef2020 dataset and the reproducibility issues this presents, we propose a pruned version of the set in which samples longer than 180 s are removed, along with classes having fewer than 50 samples. This processing brings it more in line with other complex audio datasets and allows for experimentation on more consumer-friendly hardware. | Provide a detailed description of the following dataset: BirdClef 2020 (Pruned) |
Cellcycle Funcat | Hierarchical multi-label classification dataset for functional genomics | Provide a detailed description of the following dataset: Cellcycle Funcat |
Derisi Funcat | Hierarchical-multilabel classification dataset for functional genomics | Provide a detailed description of the following dataset: Derisi Funcat |
Eisen Funcat | Hierarchical-multilabel classification dataset for functional genomics | Provide a detailed description of the following dataset: Eisen Funcat |
Expr Funcat | Hierarchical-multilabel classification dataset for functional genomics | Provide a detailed description of the following dataset: Expr Funcat |
Gasch1 Funcat | Hierarchical-multilabel classification dataset for functional genomics | Provide a detailed description of the following dataset: Gasch1 Funcat |
Gasch2 Funcat | Hierarchical-multilabel classification dataset for functional genomics | Provide a detailed description of the following dataset: Gasch2 Funcat |
Seq Funcat | Hierarchical-multilabel classification dataset for functional genomics | Provide a detailed description of the following dataset: Seq Funcat |
Spo Funcat | Hierarchical-multilabel classification dataset for functional genomics | Provide a detailed description of the following dataset: Spo Funcat |
IJB-S | [Paper Abstract](http://biometrics.cse.msu.edu/Publications/Face/Kalkaetal_IJBSIARPPAJanusSurveillanceVideoBenchmark_BTAS2018.pdf)
We present the IJB–S dataset, an open-source IARPA Janus Surveillance Video Benchmark and associated protocols. The dataset consists of images and surveillance video collected from 202 subjects at a Department of Defense (DoD) training facility. Surveillance video was captured across multiple vignettes representative of a variety of real-world surveillance use cases that are particularly of interest to law enforcement and national security communities. Each video was annotated by human subject matter experts in order to generate ground truth identity and bounding box face labels. In total, over 10 million annotations were collected for the dataset. | Provide a detailed description of the following dataset: IJB-S |
EVICAN | Deep learning use for quantitative image analysis is exponentially increasing. However, training accurate, widely deployable deep learning algorithms requires a plethora of annotated (ground truth) data. Image collections must contain not only thousands of images to provide sufficient example objects (i.e. cells), but also contain an adequate degree of image heterogeneity. We present a new dataset, EVICAN-Expert visual cell annotation, comprising partially annotated grayscale images of 30 different cell lines from multiple microscopes, contrast mechanisms and magnifications that is readily usable as training data for computer vision applications. With 4600 images and ∼26 000 segmented cells, our collection offers an unparalleled heterogeneous training dataset for cell biology deep learning application development. The dataset is freely available (https://edmond.mpdl.mpg.de/imeji/collection/l45s16atmi6Aa4sI?q=). | Provide a detailed description of the following dataset: EVICAN |
SR-RAW | Raw sensor dataset where each sequence captures 7 images (a few contain 6), both RAW and JPG, taken at different focal lengths. | Provide a detailed description of the following dataset: SR-RAW |
bladderbatch | Microarray gene expression data on 57 bladder samples from 5 batches. | Provide a detailed description of the following dataset: bladderbatch |
HRSOD | Several datasets exist for saliency detection, but none of them is specifically designed for high-resolution salient object detection. The High-Resolution Salient Object Detection (HRSOD) dataset contains 1610 training images and 400 test images. The 2010 images in total were collected from Flickr under Creative Commons licenses. Pixel-level ground truths were manually annotated by 40 subjects. The shortest edge of each image in HRSOD is more than 1200 pixels. | Provide a detailed description of the following dataset: HRSOD |
RASFF | In today's globalized world, transporting goods between countries is routine. Because quality and safety protocols vary from one country to another, there is a risk that products not complying with a country's legislation will cross its border. For edible products, avoiding this kind of situation is even more important. Since 1979, European Union members have been obligated to register any risk to public health related to the food and feed traded across the territory. This information is registered in a portal called the Rapid Alert System for Food and Feed (RASFF). This dataset provides a deep description of a set of records spanning September 1979 to September 2019, both included. Each record represents an issue registered by RASFF workers and contains a set of generic features common to all issues, plus a set of features considered details of the issue. All of these features are categorical except one, a string of characters corresponding to a subject. All the data was downloaded using an automatic scraper, then cleaned and transformed so it could be stored as a .csv file. Potential uses of the dataset include feature engineering, prediction, and the search for behavior patterns.
Description from [Mendeley](https://data.mendeley.com/datasets/yxkm4gs7zf/2) | Provide a detailed description of the following dataset: RASFF |
DAVIS-585 | A dataset for interactive segmentation with simulated initial masks. | Provide a detailed description of the following dataset: DAVIS-585 |
CNN Filter DB-Robust | Dataset for the Paper "Adversarial Robustness through the Lens of Convolutional Filters". | Provide a detailed description of the following dataset: CNN Filter DB-Robust |
CVL-DataBase | The CVL Database is a public database for writer retrieval, writer identification and word spotting. The database consists of 7 different handwritten texts (1 German and 6 English). In total, 310 writers participated in the dataset: 27 of them wrote all 7 texts, while 283 wrote 5 texts. For each text, an RGB color image (300 dpi) comprising the handwritten text and the printed text sample is available, as well as a cropped version (handwritten text only). A unique ID identifies the writer, and the bounding boxes for each single word are stored in an XML file.
The CVL database consists of images with cursively handwritten German and English texts chosen from literary works. All pages have a unique writer ID and the text number (separated by a dash) at the upper right corner, followed by the printed sample text. The text is placed between two horizontal separators. Beneath the printed text, individuals were asked to write the text using a ruled undersheet to prevent curled text lines. The layout follows the style of the IAM database. The database was updated on 12/09/2013, since one writer ID (265/266) was wrong; the version number was changed to 1.1.
Samples of the following texts have been used:
Edwin A. Abbot – Flatland: A Romance of Many Dimension (92 words).
William Shakespeare – Macbeth (49 words).
Wikipedia – Mailüfterl (73 words, under CC Attribution-ShareALike License).
Charles Darwin – Origin of Species (52 words).
Johann Wolfgang von Goethe – Faust. Eine Tragödie (50 words).
Oscar Wilde – The Picture of Dorian Gray (66 words).
Edgar Allan Poe – The Fall of the House of Usher (78 words). | Provide a detailed description of the following dataset: CVL-DataBase |
Gait3D | Gait3D is a large-scale 3D representation-based gait recognition dataset. It contains 4,000 subjects and over 25,000 sequences extracted from 39 cameras in an unconstrained indoor scene. | Provide a detailed description of the following dataset: Gait3D |
Korean UnSmile Dataset | 1.9K Korean online hate speech comments for multilabel classification, annotated by three independent labelers per instance. | Provide a detailed description of the following dataset: Korean UnSmile Dataset |
HateScore | 2.2K neutral sentences from Wikipedia
1.7K additionally labeled sentences generated by the Human-in-the-Loop procedure (based on Korean Unsmile Dataset Base Model)
7.1K rule-generated neutral sentences | Provide a detailed description of the following dataset: HateScore |
The Little Prince | This corpus is an annotation of the novel The Little Prince by Antoine de Saint-Exupéry, published in 1943. We were inspired by the UNL project to include this novel, so that different groups could compare representations on the same text. | Provide a detailed description of the following dataset: The Little Prince |
Bio | This corpus includes annotations of cancer-related PubMed articles, covering 3 full papers (PMID:24651010, PMID:11777939, PMID:15630473) as well as the result sections of 46 additional PubMed papers. The corpus also includes about 1000 sentences each from the BEL BioCreative training corpus and the Chicago Corpus. | Provide a detailed description of the following dataset: Bio |
New3 | New3 is a set of 527 instances from AMR 3.0 whose original source was the DARPA LORELEI project. Not included in the AMR 2.0 training set, it consists of excerpts from newswires and online forums. | Provide a detailed description of the following dataset: New3 |
RepCount | Counting repetitive actions is common in human activities such as physical exercise. Existing methods focus on repetitive action counting in short videos and struggle with longer videos in more realistic scenarios. In the data-driven era, this degradation in generalization capability is mainly attributed to the lack of long video datasets. To fill this gap, we introduce a new large-scale repetitive action counting dataset called RepCount, covering a wide variety of video lengths along with more realistic situations where action interruptions or action inconsistencies occur in the video. Besides, we also provide fine-grained annotation of the action cycles rather than just a numerical count. The dataset contains **1451** videos with about **20000** annotations, which makes it more challenging. Furthermore, the dataset consists of two subsets, Part-A and Part-B. The videos in Part-A are fetched from YouTube, while those in Part-B record simulated physical examinations by junior school students and teachers. | Provide a detailed description of the following dataset: RepCount |
Multispectral Image Database | We present a database of multispectral images that were used to emulate the GAP camera. The images are of a wide variety of real-world materials and objects. We are making this database available to the research community. Details of the database can be found in the following publication:
"Generalized Assorted Pixel Camera: Post-Capture Control of Resolution, Dynamic Range and Spectrum,"
F. Yasuma, T. Mitsunaga, D. Iso, and S.K. Nayar,
Technical Report, Department of Computer Science, Columbia University CUCS-061-08,
Nov. 2008. | Provide a detailed description of the following dataset: Multispectral Image Database |
RS-Haze | A large-scale non-homogeneous remote sensing image dehazing dataset | Provide a detailed description of the following dataset: RS-Haze |
CP2A dataset | We present a new simulated dataset for pedestrian action anticipation collected using the CARLA simulator.
To generate this dataset, we place a camera sensor on the ego-vehicle in the CARLA environment and set its parameters to those of the camera used to record the PIE dataset (i.e., 1920x1080, 110° FOV). We then compute bounding boxes for each pedestrian interacting with the ego-vehicle as seen through the camera's field of view. We generated the data in two urban environments available in the CARLA simulator: Town02 and Town03.
The total number of simulated pedestrians is nearly 55k, equivalent to 14M bounding box samples. The critical point for each pedestrian is their first point of crossing the street (if they eventually cross) or the last bounding box coordinates of their path otherwise. Crossing behavior represents 25% of the total pedestrians. We balanced the training split of the dataset to obtain labeled crossing/non-crossing sequences in equal parts: we used sequence-flipping to augment the minority class (i.e., crossing behavior in our case) and then undersampled the rest of the dataset. The result is a total of nearly 50k pedestrian sequences.
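The balancing procedure above can be sketched as follows; the bounding-box layout (x1, y1, x2, y2) and the function names are illustrative assumptions, not the authors' code, and only the 1920-pixel frame width comes from the text:

```python
FRAME_WIDTH = 1920  # camera width used for the dataset, per the text above

def flip_sequence(seq):
    """Horizontally mirror a pedestrian bounding-box sequence.

    Mirroring maps x -> FRAME_WIDTH - x and swaps the left/right
    edges so that x1 <= x2 still holds after the flip.
    """
    return [(FRAME_WIDTH - x2, y1, FRAME_WIDTH - x1, y2)
            for (x1, y1, x2, y2) in seq]

def balance(minority, majority):
    """Augment the minority class by flipping its sequences, then
    undersample the majority so both classes end up the same size
    (a sketch of the balancing step described above)."""
    minority = minority + [flip_sequence(s) for s in minority]
    n = min(len(minority), len(majority))
    return minority[:n], majority[:n]
```

Flipping preserves the crossing/non-crossing label while producing a geometrically plausible new trajectory, which is why it is a natural choice for augmenting the minority class.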
Next, the pedestrian trajectory sequences were transformed into observation sequences of equal length (i.e., 0.5 seconds) with a 60% overlap for the training splits. The TTE length is between 30 and 60 frames. It resulted in a total of nearly 220k observation sequences. | Provide a detailed description of the following dataset: CP2A dataset |
Digits-Five | Digits-Five is a collection of five of the most popular digit datasets: MNIST (mt) (55000 samples), MNIST-M (mm) (55000 samples), Synthetic Digits (syn) (25000 samples), SVHN (sv) (73257 samples), and USPS (up) (7438 samples). Each digit dataset includes a different style of 0–9 digit images. | Provide a detailed description of the following dataset: Digits-Five |
Amazon Review | Amazon Review is a dataset to tackle the task of identifying whether the sentiment of a product review is positive or negative. This dataset includes reviews from four different merchandise categories: Books (B) (2834 samples), DVDs (D) (1199 samples), Electronics (E) (1883 samples), and Kitchen and housewares (K) (1755 samples). | Provide a detailed description of the following dataset: Amazon Review |
DIBCO 2019 | DIBCO 2019 is the international Competition on Document Image Binarization organized in conjunction with the ICDAR 2019 conference. The general objective of the contest is to identify current advances in document image binarization of machine-printed and handwritten document images using performance evaluation measures that are motivated by document image analysis and recognition requirements. | Provide a detailed description of the following dataset: DIBCO 2019 |
OSS for Social Good Project List | # Leaving My Fingerprints: Motivations and Challenges of Contributing to OSS for Social Good
-> ICSE 2021 <-
### Authors
+ Yu Huang
+ Denae Ford
+ Thomas Zimmermann
### Abstract
When inspiring software developers to contribute to open source software, the act is often referenced as an opportunity to build tools to support the developer community. However, that is not the only charge that propels contributions—growing interest in open source has also been attributed to software developers deciding to use their technical skills to benefit a common societal good. To understand how developers identify these projects, their motivations for contributing, and challenges they face, we conducted 21 semi-structured interviews with OSS for Social Good (OSS4SG) contributors. From our interview analysis, we identified themes of contribution styles that we wanted to understand at scale by deploying a survey to over 5765 OSS and Open Source Software for Social Good contributors. From our quantitative analysis of 517 responses, we find that the majority of contributors demonstrate a distinction between OSS4SG and OSS. Likewise, contributors described definitions based on what societal issue the project was to mitigate and who the outcomes of the project were going to benefit. In addition, we find that OSS4SG contributors focus less on benefiting themselves by padding their resume with new technology skills and are more interested in leaving their mark on society at statistically significant levels. We also find that OSS4SG contributors evaluate the owners of the project significantly more than OSS contributors. These findings inform implications to help contributors identify high societal impact projects, help project maintainers reduce barriers to entry, and help organizations understand why contributors are drawn to these projects to sustain active participation.
### Package
* The replication package containing the following:
* interview protocol
* codebook generated from our thematic analysis of interview data presented in this paper
* survey distributed to both OSS and OSS4SG contributors
* a list of OSS project repositories
* a list of OSS4SG project repositories | Provide a detailed description of the following dataset: OSS for Social Good Project List |
Chart-to-text | **Chart-to-text** is a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. | Provide a detailed description of the following dataset: Chart-to-text |
RELiC | **RELiC** is a large-scale dataset of 79k excerpts of literary scholarship, each containing a quotation from a primary source and the surrounding critical analysis. 79 public domain primary sources and over 8,836 secondary sources are represented in RELiC. | Provide a detailed description of the following dataset: RELiC |
PhysioNet Challenge 2021 | # Data Description
The training data contain twelve-lead ECGs. The validation and test data contain twelve-lead, six-lead, four-lead, three-lead, and two-lead ECGs:
1. Twelve leads: I, II, III, aVR, aVL, aVF, V1, V2, V3, V4, V5, V6
2. Six leads: I, II, III, aVR, aVL, aVF
3. Four leads: I, II, III, V2
4. Three leads: I, II, V2
5. Two leads: I, II
Each ECG recording has one or more labels that describe cardiac abnormalities (and/or a normal sinus rhythm). We mapped the labels for each recording to [SNOMED-CT codes](http://bioportal.bioontology.org/ontologies/SNOMEDCT). The lists of [scored labels](https://github.com/physionetchallenges/evaluation-2021/blob/main/dx_mapping_scored.csv) and [unscored labels](https://github.com/physionetchallenges/evaluation-2021/blob/main/dx_mapping_unscored.csv) are given with the [evaluation code](https://github.com/physionetchallenges/evaluation-2021); see the [scoring section](https://physionet.org/content/challenge-2021/1.0.2/#scoring) for details.
# Data Sources
The Challenge data include recordings from last year’s Challenge and many new recordings for this year’s Challenge:
1. CPSC Database and CPSC-Extra Database
2. INCART Database
3. PTB and PTB-XL Database
4. The Georgia 12-lead ECG Challenge (G12EC) Database
5. Augmented Undisclosed Database
6. Chapman-Shaoxing and Ningbo Database
7. The University of Michigan (UMich) Database
The Challenge data include annotated twelve-lead ECG recordings from seven sources in four countries across three continents. These databases include over 100,000 twelve-lead ECG recordings with over 88,000 ECGs shared publicly as training data, 6,630 ECGs retained privately as validation data, and 36,266 ECGs retained privately as test data.
- The first source is the China Physiological Signal Challenge in 2018 (CPSC 2018), which was held during the 7th International Conference on Biomedical Engineering and Biotechnology in Nanjing, China. This source contains two databases: the data from CPSC 2018 (the CPSC Database) and unused data from CPSC 2018 (the CPSC-Extra Database). Together, these databases contain 13,256 ECGs (10,330 ECGs shared as training data, 1,463 retained as validation data, and 1,463 retained as test data). We shared the training set and an unused dataset from CPSC 2018 as training data, and we split the test set from CPSC 2018 into validation and test sets. Each recording is between 6 and 144 seconds long with a sampling frequency of 500 Hz.
- The second source is the St Petersburg INCART 12-lead Arrhythmia Database. This source contains 74 annotated ECGs (all shared as training data) extracted from 32 Holter monitor recordings. Each recording is 30 minutes long with a sampling frequency of 257 Hz.
- The third source is the Physikalisch-Technische Bundesanstalt (PTB) and includes two public datasets: the PTB and the PTB-XL databases. The source contains 22,353 ECGs (all shared as training data). Each recording is between 10 and 120 seconds long with a sampling frequency of either 500 or 1,000 Hz.
- The fourth source is a Georgia database which represents a unique demographic of the Southeastern United States. This source contains 20,672 ECGs (10,344 ECGs shared as training data, 5,167 retained as validation data, and 5,161 retained as test data). Each recording is between 5 and 10 seconds long with a sampling frequency of 500 Hz.
- The fifth source is an undisclosed American database that is geographically distinct from the Georgia database. This source contains 10,000 ECGs (all retained as test data).
- The sixth source is the Chapman University, Shaoxing People’s Hospital (Chapman-Shaoxing) and Ningbo First Hospital (Ningbo) database. This source contains 45,152 ECGs (all shared as training data). Each recording is 10 seconds long with a sampling frequency of 500 Hz.
- The seventh source is UMich Database from the University of Michigan. This source contains 19,642 ECGs (all retained as test data). Each recording is 10 seconds long with a sampling frequency of either 250 Hz or 500 Hz.
Like other real-world datasets, different databases may have different proportions of cardiac abnormalities, but all of the labels in the validation or test data are represented in the training data. Moreover, while this is a curated dataset, some of the data and labels are likely to have errors, and an important part of the Challenge is to work out these issues. In particular, some of the databases have human-overread machine labels with single or multiple human readers, so the quality of the labels varies between databases. You can find more information about the label mappings of the Challenge training data in this [table](https://docs.google.com/spreadsheets/d/1Q4m9axOlE1rEb7Fi2t4fPbvpw8JPvikLBO_j-lQcuuE/edit?usp=sharing).
The six-lead, four-lead, three-lead, and two-lead validation data are reduced-lead versions of the twelve-lead validation data: the same recordings with the same header data but only with signal data for the relevant leads.
We are not planning to release the test data at any point, including after the end of the Challenge. Requests for the test data will not receive a response. We do not release test data to prevent overfitting on the test data and claims or publications of inflated performances. We will entertain requests to run code on the test data after the Challenge on a limited basis based on publication necessity and capacity. (The Challenge is largely staged by volunteers.)
# Data Format
All data was formatted in WFDB format. Each ECG recording uses a binary MATLAB v4 file for the ECG signal data and a plain text file in WFDB header format for the recording and patient attributes, including the diagnosis, i.e., the labels for the recording. The binary files can be read using the load function in MATLAB and the scipy.io.loadmat function in Python; see our [MATLAB](https://github.com/physionetchallenges/matlab-classifier-2021) and [Python](https://github.com/physionetchallenges/python-classifier-2021) example code for working examples. The first line of the header provides information about the total number of leads and the total number of samples or time points per lead, the following lines describe how each lead was encoded, and the last lines provide information on the demographics and diagnosis of the patient.
For example, a header file A0001.hea may have the following contents:
```
A0001 12 500 7500 05-Feb-2020 11:39:16
A0001.mat 16+24 1000/mV 16 0 28 -1716 0 I
A0001.mat 16+24 1000/mV 16 0 7 2029 0 II
A0001.mat 16+24 1000/mV 16 0 -21 3745 0 III
A0001.mat 16+24 1000/mV 16 0 -17 3680 0 aVR
A0001.mat 16+24 1000/mV 16 0 24 -2664 0 aVL
A0001.mat 16+24 1000/mV 16 0 -7 -1499 0 aVF
A0001.mat 16+24 1000/mV 16 0 -290 390 0 V1
A0001.mat 16+24 1000/mV 16 0 -204 157 0 V2
A0001.mat 16+24 1000/mV 16 0 -96 -2555 0 V3
A0001.mat 16+24 1000/mV 16 0 -112 49 0 V4
A0001.mat 16+24 1000/mV 16 0 -596 -321 0 V5
A0001.mat 16+24 1000/mV 16 0 -16 -3112 0 V6
#Age: 74
#Sex: Male
#Dx: 426783006
#Rx: Unknown
#Hx: Unknown
#Sx: Unknown
```
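As a rough illustration, such a header can be parsed with a few lines of Python. The field interpretation here is an assumption based on this section's description of the format; real projects should use the linked MATLAB/Python example code or the `wfdb` package.

```python
# Sketch: parse the example header file shown above (assumed field
# meanings; prefer the official example code or the wfdb package).
HEADER = """\
A0001 12 500 7500 05-Feb-2020 11:39:16
A0001.mat 16+24 1000/mV 16 0 28 -1716 0 I
A0001.mat 16+24 1000/mV 16 0 7 2029 0 II
A0001.mat 16+24 1000/mV 16 0 -21 3745 0 III
A0001.mat 16+24 1000/mV 16 0 -17 3680 0 aVR
A0001.mat 16+24 1000/mV 16 0 24 -2664 0 aVL
A0001.mat 16+24 1000/mV 16 0 -7 -1499 0 aVF
A0001.mat 16+24 1000/mV 16 0 -290 390 0 V1
A0001.mat 16+24 1000/mV 16 0 -204 157 0 V2
A0001.mat 16+24 1000/mV 16 0 -96 -2555 0 V3
A0001.mat 16+24 1000/mV 16 0 -112 49 0 V4
A0001.mat 16+24 1000/mV 16 0 -596 -321 0 V5
A0001.mat 16+24 1000/mV 16 0 -16 -3112 0 V6
#Age: 74
#Sex: Male
#Dx: 426783006
#Rx: Unknown
#Hx: Unknown
#Sx: Unknown
"""

def parse_header(text):
    lines = text.strip().splitlines()
    record, n_leads, fs, n_samples = lines[0].split()[:4]
    n_leads, fs, n_samples = int(n_leads), int(fs), int(n_samples)
    leads, gains = [], []
    for line in lines[1:1 + n_leads]:
        fields = line.split()
        gains.append(float(fields[2].split("/")[0]))  # ADC units per mV
        leads.append(fields[-1])                      # lead name
    meta = {}
    for line in lines[1 + n_leads:]:
        key, _, value = line.lstrip("#").partition(":")
        meta[key.strip()] = value.strip()
    return {"record": record, "n_leads": n_leads, "fs": fs,
            "n_samples": n_samples, "leads": leads, "gains": gains,
            "meta": meta}

hdr = parse_header(HEADER)
# With baseline 0 and gain 1000 ADC units per mV, a raw digital value
# converts to physical units as (digital - baseline) / gain:
mv = (-1716 - 0) / hdr["gains"][0]  # first sample of lead I, in mV
```

The `.mat` signal itself can then be loaded with `scipy.io.loadmat` and scaled with the same per-lead gain and baseline.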
From the first line of the file, we see that the recording number is A0001, and the recording file is A0001.mat. The recording has 12 leads, each recorded at a 500 Hz sampling frequency, and contains 7500 samples. From the next 12 lines of the file (one for each lead), we see that each signal was written at 16 bits with an offset of 24 bits, the floating point number (analog-to-digital converter (ADC) units per physical unit) is 1000/mV, the resolution of the analog-to-digital converter (ADC) used to digitize the signal is 16 bits, and the baseline value corresponding to 0 physical units is 0. The first value of the signal (-1716, etc.), the checksum (0, etc.), and the lead name (I, etc.) are the last three entries of each of these lines. From the final 6 lines, we see that the patient is a 74-year-old male with a diagnosis (Dx) of 426783006, which is the SNOMED-CT code for sinus rhythm. The medical prescription (Rx), history (Hx), and symptom or surgery (Sx) are unknown. Please visit WFDB header format for more information on the header file and variables. | Provide a detailed description of the following dataset: PhysioNet Challenge 2021 |
LUDB | # Abstract
Lobachevsky University Electrocardiography Database (LUDB) is an ECG signal database with marked boundaries and peaks of P, T waves and QRS complexes. The database consists of 200 10-second 12-lead ECG signal records representing different morphologies of the ECG signal. The ECGs were collected from healthy volunteers and patients of the Nizhny Novgorod City Hospital No 5 in 2017–2018. The patients had various cardiovascular diseases while some of them had pacemakers. The boundaries of P, T waves and QRS complexes were manually annotated by cardiologists for all 200 records. Also, each record is annotated with the corresponding diagnosis. The database can be used for educational purposes as well as for training and testing algorithms for ECG delineation, i.e. for automatic detection of boundaries and peaks of P, T waves and QRS complexes.
# Background
Validating ECG delineation algorithms requires standardized databases with complexes and waves manually annotated by specialists. Several collections are currently available: the MIT-BIH Arrhythmia Database [1], the European ST-T Database [2], and the QT Database [3]; however, their annotation is not exhaustive. For example, the MIT-BIH Arrhythmia Database and the European ST-T Database have markup only for QRS complexes. The QT Database contains annotations for P, QRS and T waves, but several complexes are unmarked. By assembling a new ECG database at Lobachevsky University (LUDB), we sought to eliminate these shortcomings.
# Methods
ECG 10-second records were obtained with a Schiller Cardiovit AT-101 cardiograph using the conventional 12 leads (i, ii, iii, avr, avl, avf, v1, v2, v3, v4, v5, v6). Signals are digitized at 500 samples per second. The boundaries and peaks of the P and T waves and the QRS complexes were determined by certified cardiologists by visual inspection of each ECG signal, independently for each of the 12 leads. The records were made by specialized medical staff (functional diagnostics nurses). All volunteers provided informed written consent before data collection. The research was approved by the Lobachevsky University IRB (#23; 19 October 2017).
# Data Description
The database consists of 200 10-second 12-lead ECG signal records collected from 2017 to 2018: in total, 16797 P waves, 21966 QRS complexes, and 19666 T waves (58429 annotated waves overall). The ages of the volunteers ranged from a minimum of 11 years to a maximum of over 89 years, with an average of 52 years; the distribution by gender was 85 women and 115 men.
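As a quick consistency check, the per-wave counts sum to the stated total:

```python
p_waves, qrs_complexes, t_waves = 16797, 21966, 19666
total = p_waves + qrs_complexes + t_waves
print(total)  # 58429, matching the overall count reported above
```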
The number of records with specified heart rate types in the dataset:
| Rhythms |Number of ECGs|
|------|--------|
|Sinus rhythm |143|
|Sinus tachycardia| 4|
|Sinus bradycardia| 25|
|Sinus arrhythmia| 8|
|Irregular sinus rhythm| 2|
|Abnormal rhythm| 19|
The number of records with specified types of the position of the electrical axis of the heart:
|Electric axis of the heart| Number of ECGs|
|------|--------|
|Normal |75|
|Left axis deviation |66|
|Vertical |26|
|Horizontal| 20|
|Right axis deviation| 3|
|Undetermined| 10|
The number of records with specified types of conduction abnormalities:
|Conduction abnormalities| Number of ECGs|
|------|--------|
|Sinoatrial blockade, undetermined |1|
|I degree AV block |10|
|III degree AV-block |5|
|Incomplete right bundle branch block| 29|
|Incomplete left bundle branch block |6|
|Left anterior hemiblock |16|
|Complete right bundle branch block |4|
|Complete left bundle branch block |4|
|Non-specific intraventricular conduction delay| 4|
The number of records with specified types of extrasystoles:
|Extrasystoles |Number of ECGs|
|------|--------|
|Atrial extrasystole, undetermined |2|
|Atrial extrasystole, low atrial |1|
|Atrial extrasystole, left atrial |2|
|Atrial extrasystole, SA-nodal extrasystole |3|
|Atrial extrasystole, type: single PAC |4|
|Atrial extrasystole, type: bigemini |1|
|Atrial extrasystole, type: quadrigemini |1|
|Atrial extrasystole, type: allorhythmic pattern |1|
|Ventricular extrasystole, morphology: polymorphic |2|
|Ventricular extrasystole, localisation: RVOT, anterior wall |3|
|Ventricular extrasystole, localisation: RVOT, antero-septal part |1|
|Ventricular extrasystole, localisation: IVS, middle part |1|
|Ventricular extrasystole, localisation: LVOT, LVS |2|
|Ventricular extrasystole, localisation: LV, undefined |1|
|Ventricular extrasystole, type: single PVC |6|
|Ventricular extrasystole, type: intercalary PVC |2|
|Ventricular extrasystole, type: couplet |2|
The number of records with specified types of hypertrophies:
|Hypertrophies |Number of ECGs|
|------|--------|
|Right atrial hypertrophy |1|
|Left atrial hypertrophy |102|
|Right atrial overload |17|
|Left atrial overload |11|
|Left ventricular hypertrophy |108|
|Right ventricular hypertrophy |3|
|Left ventricular overload |11|
The number of records with cardiac pacing:
|Cardiac pacing |Number of ECGs|
|------|--------|
|UNIpolar atrial pacing |1|
|UNIpolar ventricular pacing |6|
|BIpolar ventricular pacing |2|
|Biventricular pacing |1|
|P-synchrony |2|
The number of records with ischemia:
|Ischemia |Number of ECGs|
|------|--------|
|STEMI: anterior wall |8|
|STEMI: lateral wall |7|
|STEMI: septal |8|
|STEMI: inferior wall |1|
|STEMI: apical |5|
|Ischemia: anterior wall |5|
|Ischemia: lateral wall |8|
|Ischemia: septal |4|
|Ischemia: inferior wall |10|
|Ischemia: posterior wall |2|
|Ischemia: apical |6|
|Scar formation: lateral wall |3|
|Scar formation: septal |9|
|Scar formation: inferior wall |3|
|Scar formation: posterior wall |6|
|Scar formation: apical |5|
|Undefined ischemia/scar/supp.NSTEMI: anterior wall |12|
|Undefined ischemia/scar/supp.NSTEMI: lateral wall |16|
|Undefined ischemia/scar/supp.NSTEMI: septal |5|
|Undefined ischemia/scar/supp.NSTEMI: inferior wall |3|
|Undefined ischemia/scar/supp.NSTEMI: posterior wall| 4|
|Undefined ischemia/scar/supp.NSTEMI: apical |11|
The number of records with non-specific repolarization abnormalities:
|Non-specific repolarization abnormalities |Number of ECGs|
|------|--------|
|Anterior wall |18|
|Lateral wall |13|
|Septal |15|
|Inferior wall |19|
|Posterior wall |9|
|Apical |11|
The number of records with other cases:
|Other states |Number of ECGs|
|------|--------|
|Early repolarization syndrome |9| | Provide a detailed description of the following dataset: LUDB |
CPSC2019 | # Introduction
The China Physiological Signal Challenge 2019 (CPSC 2019) aims to encourage the development of algorithms for challenging QRS detection and heart rate (HR) estimation from short-term single-lead ECG recordings usually with low signal quality and/or abnormal rhythm waveforms.
The ECG signal plays an important role in non-invasive monitoring and clinical diagnosis of cardiovascular disease (CVD). Detection of the QRS complex is an essential step in ECG signal processing and benefits subsequent HR calculation and abnormality analysis. Although QRS detection methods have been studied extensively over the last several decades, accurate QRS location and HR estimation remain challenging in noisy signal episodes or abnormal rhythm waveforms, especially when the ECG recordings come from wearable dynamic ECG acquisition. Many of the developed QRS detection algorithms achieve high accuracy (over 99% in sensitivity and positive predictivity) when tested on standard ECG databases such as the MIT-BIH Arrhythmia Database or the AHA Database [1]. However, these algorithms may not perform well in daily-life environments, where severe noise significantly reduces signal quality. A recent study confirmed that none of the common QRS algorithms can reach 80% detection accuracy when tested on a common dynamic noisy ECG database [2]. Thus, in this challenge, we provide a new ECG database containing noisy ECG episodes and/or signals with different arrhythmia patterns, encouraging participants to develop more efficient and robust algorithms for QRS detection and HR estimation. In addition, it is worth noting that, although HR can be calculated from the detected QRS complexes, it can also be estimated without a QRS detection step [3,4].
# Challenge Data
The training data consist of 2,000 single-lead ECG recordings collected from patients with cardiovascular disease (CVD); each recording lasts 10 s. The test set contains similar ECG recordings of the same length; it is unavailable to the public and will remain private for scoring purposes for the duration of the Challenge and for some period afterwards. The ECG recordings were obtained from multiple sources using a variety of instrumentation, although in all cases they are presented here at a 500 Hz sample rate. All recordings are provided in MATLAB format (each comprising two .mat files: one with the ECG data and one with the corresponding QRS annotations). The Pan-Tompkins (P&T) algorithm [5,6] is also provided as a benchmark for comparison.
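Given the QRS annotations that accompany each record, the mean HR of a 10 s recording follows directly from the R-R intervals at the 500 Hz sample rate; a minimal sketch (the helper below is our own illustration, not challenge code):

```python
def heart_rate_bpm(qrs_samples, fs=500):
    """Estimate mean heart rate (beats per minute) from QRS sample
    indices via the average R-R interval."""
    if len(qrs_samples) < 2:
        raise ValueError("need at least two QRS locations")
    # successive differences in seconds
    rr = [(b - a) / fs for a, b in zip(qrs_samples, qrs_samples[1:])]
    return 60.0 / (sum(rr) / len(rr))
```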
Although QRS detection and HR estimation have been widely studied for many years, accurate detection remains genuinely challenging in this Challenge due to QRS amplitude variation, QRS morphological variation, intense variability in the intervals between beats, different arrhythmias, and noise.
# Reference
1. G.B. Moody, R.G. Mark, The impact of the MIT-BIH arrhythmia database, IEEE Engineering in Medicine & Biology Magazine the Quarterly Magazine of the Engineering in Medicine & Biology Society, 20 (2001) 45-50.
2. Liu, F.F.; Wei, S.S.; Li, Y.B.; Jiang, X.E.; Zhang, Z.M.; Liu, C.Y., Performance analysis of ten common qrs detectors on different ecg application cases. Journal of Healthcare Engineering 2018, 2018, ID 9050812.
3. J.J. Gieraltowski, K. Ciuchcinski, I. Grzegorczyk, K. Kosna, Heart rate variability discovery: Algorithm for detection of heart rate from noisy, multimodal recordings, Computing in Cardiology, 2014, pp. 253-256.
4. J. Gieraltowski, K. Ciuchcinski, I. Grzegorczyk, K. Kosna, M. Solinski, P.Podziemski, RS slope detection algorithm for extraction of heart rate from noisy, multimodal recordings, Physiological Measurement, 36 (2015) 1743-1761.
5. P.S. Hamilton, W.J. Tompkins, Quantitative investigation of QRS detection rules using the MIT/BIH arrhythmia database, Biomedical Engineering, IEEE Transactions on, (1986) 1157-1165.
6. J. Pan, W.J. Tompkins, A real-time QRS detection algorithm, Biomedical Engineering, IEEE Transactions on, (1985) 230-236.
7. ANSI-AAMI (1998). Testing and reporting performance results of cardiac rhythm and st segment measurement algorithms, ANSI-AAMI:EC57. | Provide a detailed description of the following dataset: CPSC2019 |
CPSC2020 | # Introduction
Abnormalities of the cardiac conduction system can induce arrhythmia. An abnormal heart rhythm can lead to other cardiac diseases and complications, and can be life-threatening [1]. There are various types of arrhythmias, each associated with a pattern, and as such each can be identified. Arrhythmias fall into two major categories: those formed by a single irregular heartbeat in the electrocardiogram (ECG), herein called morphological arrhythmias, and those formed by a set of irregular heartbeats, herein called rhythmic arrhythmias [2]. Dynamic electrocardiography (DCG), such as ECG Holter monitoring, provides an important way to monitor the incidence of arrhythmias in daily life, enabling doctors to check the total number and distribution of arrhythmias over a long period and thus provide the required therapy to prevent further problems.
The 3rd China Physiological Signal Challenge 2020 (CPSC 2020) aims to encourage the development of algorithms for locating premature ventricular contractions (PVC) and supraventricular premature beats (SPB) in 24-hour dynamic single-lead ECG recordings, usually with low signal quality and/or abnormal rhythm waveforms. As in the previous efforts of CPSC 2018 [3] and CPSC 2019 [4], accurate localization of abnormal heartbeats is a critical issue put forward here for further discussion.
The ECG signal plays an important role in non-invasive monitoring and clinical diagnosis of cardiovascular disease (CVD). Arrhythmia detection is one of the ultimate goals of routine ECG monitoring, and PVC and SPB are the two most common arrhythmias. An increase in these beats may be a precursor to stroke or sudden cardiac death [5]. Although their detection methods have been studied extensively over the last several decades, accurate and robust detection remains challenging in noisy or low-signal-quality environments, especially for daily monitored ECG waveforms. Many of the developed PVC and SPB detection algorithms achieve high accuracy (over 96% in sensitivity and positive predictivity) when tested on standard ECG databases such as the MIT-BIH Arrhythmia Database or the AHA Database [6]. However, these algorithms may fail in noisy environments; even basic QRS detection can become invalid in low-signal-quality ECG analysis [7]. A recent study confirmed that none of the common QRS detection algorithms can reach 80% detection accuracy when tested on a dynamic noisy ECG database. In this year's challenge, we provide a new ECG database containing long-term noisy ECG recordings from clinical arrhythmia patients, to encourage participants to develop more efficient and robust algorithms for PVC and SPB detection.
# Challenge Data
The training data consist of 10 single-lead ECG recordings collected from arrhythmia patients, each lasting about 24 hours (shown in Table 1). Table 1 also indicates whether each patient has atrial fibrillation (AF). The test set contains similar ECG recordings; it is unavailable to the public and will remain private for scoring purposes for the duration of the Challenge and for some period afterwards. All data were collected with a unified wearable ECG device at a sampling frequency of 400 Hz and are provided in MATLAB format (each recording comprising three *.mat files: one with the ECG data and two with the corresponding PVC and SPB annotations, respectively).
Detailed information of training data.
|Recordings | AF patient ? |Length (h) |\# N beats |\# V beats |\# S beats |\# Total beats|
|-----|-----|-----|-----|-----|-----|-----|
|A01 |No |25.89 |109,062 |0 |24 |109,086|
|A02 |Yes| 22.83 |98,936| 4,554| 0 |103,490|
|A03 |Yes| 24.70 |137,249| 382| 0| 137,631|
|A04 |No |24.51| 77,812 |19,024| 3,466| 100,302|
|A05 |No| 23.57| 94,614 |1| 25| 94,640|
|A06 |No| 24.59| 77,621| 0| 6 |77,627|
|A07 |No |23.11| 73,325 |15,150 |3,481 |91,956|
|A08 |Yes| 25.46| 115,518| 2,793| 0| 118,311|
|A09 |No| 25.84| 88,229| 2| 1,462| 89,693|
|A10 |No |23.64| 72,821| 169| 9,071| 82,061|
# Reference
[1] S. L. Oh, E. Y. Ng, R. San Tan, and U. R. Acharya, "Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats," Computers in biology and medicine, vol. 102, pp. 278-287, 2018.
[2] E. J. D. S. Luz, W. R. Schwartz, G. Cámara-Chávez, and D. Menotti, "ECG-based heartbeat classification for arrhythmia detection: A survey," Computer methods and programs in biomedicine, vol. 127, pp. 144-164, 2016.
[3] F. Liu, C. Liu, L. Zhao, X. Zhang, X. Wu, X. Xu, Y. Liu, C. Ma, S. Wei, Z. He, J. Li, and E. Y. K. Ng, "An open access database for evaluating the algorithms of electrocardiogram rhythm and morphology abnormality detection," Journal of Medical Imaging and Health Informatics, vol. 8, pp. 1368-1373, 2018.
[4] H. Gao, C. Liu, X. Wang, L. Zhao, Q. Shen, E. Y. K. Ng, and J. Li, "An Open-Access ECG Database for Algorithm Evaluation of QRS Detection and Heart Rate Estimation," Journal of Medical Imaging and Health Informatics, vol. 9, pp. 1853-1858, 2019.
[5] J. Oster, J. Behar, O. Sayadi, S. Nemati, A. E. Johnson, and G. D. Clifford, "Semisupervised ECG ventricular beat classification with novelty detection based on switching Kalman filters," IEEE Transactions on Biomedical Engineering, vol. 62, pp. 2125-2134, 2015.
[6] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. Peng, and H. E. Stanley, "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals," Circulation, vol. 101, pp. e215-e220, 2000.
[7] F. Liu, C. Liu, X. Jiang, Z. Zhang, Y. Zhang, J. Li, and S. Wei, "Performance analysis of ten common QRS detectors on different ECG application cases," Journal of Healthcare Engineering, vol. 2018, pp. 9050812(1)-9050812(8), 2018.
[8] ANSI/AAMI EC57, "1998 / (R) 2008-Testing and reporting performance results of cardiac rhythm and ST segment measurement algorithms", Arlington, VA, USA, 2008. | Provide a detailed description of the following dataset: CPSC2020 |
CPSC2021 | # Introduction
The 4th China Physiological Signal Challenge 2021 (CPSC 2021) aims to encourage the development of algorithms for searching the paroxysmal atrial fibrillation (PAF) events from dynamic ECG recordings.
The ECG signal plays an important role in non-invasive monitoring and clinical diagnosis of cardiovascular disease (CVD). AF is the most frequent arrhythmia, but PAF often remains unrecognized [1, 2]. Early screening and early detection of paroxysmal AF are therefore particularly important, and of great value for AF surgery options, drug intervention, and the diagnosis and treatment of various clinical complications [3].
Although accurate detection of paroxysmal AF is very important, no current algorithm can efficiently measure the onsets and offsets of AF events in dynamic or wearable ECGs [4]. Previous AF detection algorithms usually focus on classifying the AF rhythm, for example with entropy-feature-based [5, 6] or machine-learning-based methods [7, 8], without locating the onsets and offsets of AF events; their clinical significance for the personalized treatment and management of AF patients is therefore limited. In clinical applications, other abnormal rhythms can significantly hinder accurate identification of the AF rhythm. In this year's challenge, we focus on detecting paroxysmal AF events in dynamic ECGs. A new dynamic ECG database containing episodes with totally or partly AF rhythm, or non-AF rhythm, was constructed to encourage the development of more efficient and robust algorithms for paroxysmal AF event detection.
# Challenge Data
Data are recorded from 12-lead Holter or 3-lead wearable ECG monitoring devices. The challenge data comprise variable-length ECG record fragments extracted from lead I and lead II of the long-term dynamic ECGs, each sampled at 200 Hz. To avoid ambiguity in annotation, an AF event must contain no fewer than 5 heartbeats.
The training set in the 1st stage consists of 730 records, extracted from the Holter records from 10 AF patients (5 PAF patients) and 39 non-AF patients (usually including other abnormal and normal rhythms).
The training set in the 2nd stage consists of 706 records from 37 AF patients (18 PAF patients) and 14 non-AF patients.
The test set comprises data from the same source as the training set as well as from different sources. We ensure that at least one test subset was collected by a different ECG monitoring system than the training set. As in previous years, we are not planning to release the test set at any point.
All data are provided in WFDB format and the annotations are standardized according to PhysioBank Annotations (link: https://archive.physionet.org/physiobank/annotations.shtml). The annotation includes beat annotations (R-peak location and beat type), rhythm annotations (rhythm change flag and rhythm type), and the diagnosis of the global rhythm. Please refer to the example code entry (link: https://github.com/CPSC-Committee/cpsc2021-python-entry ) of the challenge for the specific data and label loading functions. Note that atrial fibrillation and atrial flutter ('AFIB' and 'AFL') in the annotated information are treated as the same type when scoring a method.
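To illustrate how rhythm-change annotations can be turned into AF event boundaries for scoring, here is a minimal sketch (the label strings and the helper are assumptions of this sketch; the official code entry repository contains the actual load and score functions):

```python
def af_episodes(rhythm_changes, record_len):
    """Convert a sequence of (sample, rhythm_label) change flags into
    (onset, offset) sample pairs for AF episodes. 'AFIB' and 'AFL'
    are treated as the same type, as in the challenge scoring."""
    AF = {"AFIB", "AFL"}
    episodes, onset = [], None
    for sample, label in sorted(rhythm_changes):
        if label in AF and onset is None:
            onset = sample                      # AF episode starts
        elif label not in AF and onset is not None:
            episodes.append((onset, sample))    # AF episode ends
            onset = None
    if onset is not None:                       # record ends during AF
        episodes.append((onset, record_len))
    return episodes
```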
Please download the training data from here ( [Training Set I](https://opensz.oss-cn-beijing.aliyuncs.com/icbeb2021/file/trainingI.zip) and [Training Set II](https://opensz.oss-cn-beijing.aliyuncs.com/icbeb2021/file/trainingII.zip)). | Provide a detailed description of the following dataset: CPSC2021 |
AnthroProtect | For a detailed description, we refer to Section 3 in our research article. | Provide a detailed description of the following dataset: AnthroProtect |
SSD_PHONE | SSD (Sub-slot Dialog) dataset: This is the dataset for the ACL 2022 paper "A Slot Is Not Built in One Utterance: Spoken Language Dialogs with Sub-Slots". | Provide a detailed description of the following dataset: SSD_PHONE |
SSD_ID | SSD (Sub-slot Dialog) dataset: This is the dataset for the ACL 2022 paper "A Slot Is Not Built in One Utterance: Spoken Language Dialogs with Sub-Slots". | Provide a detailed description of the following dataset: SSD_ID |
SSD_NAME | SSD (Sub-slot Dialog) dataset: This is the dataset for the ACL 2022 paper "A Slot Is Not Built in One Utterance: Spoken Language Dialogs with Sub-Slots". | Provide a detailed description of the following dataset: SSD_NAME |
SSD_PLATE | SSD (Sub-slot Dialog) dataset: This is the dataset for the ACL 2022 paper "A Slot Is Not Built in One Utterance: Spoken Language Dialogs with Sub-Slots". | Provide a detailed description of the following dataset: SSD_PLATE |
TUSC | Tweets from US and Canada (TUSC) is a large dataset of more than 45 million geo-located tweets posted between 2015 and 2021 from the US and Canada, specially curated for natural language analysis | Provide a detailed description of the following dataset: TUSC |
SF-XL | Large scale dataset for visual geo-localization / visual place recognition.
It provides images from the city of San Francisco, labeled with GPS coordinates and heading. | Provide a detailed description of the following dataset: SF-XL |
SF-XL test v1 | Test set version 1 for the San Francisco eXtra Large dataset | Provide a detailed description of the following dataset: SF-XL test v1 |
SF-XL test v2 | Test set version 2 for the San Francisco eXtra Large dataset | Provide a detailed description of the following dataset: SF-XL test v2 |
AmsterTime | The **AmsterTime** dataset offers a collection of 2,500 well-curated images matching street-view scenes to historical archival image data from the city of Amsterdam. The image pairs capture the same place with different cameras, viewpoints, and appearances. Unlike existing benchmark datasets, AmsterTime is directly crowdsourced in a GIS navigation platform (Mapillary). In turn, all matching pairs are verified by a human expert to confirm the correct matches and to evaluate human competence in the Visual Place Recognition (VPR) task for further reference.
The properties of the dataset are summarized as:
- 1200+ license-free images from the Amsterdam City Archive, representing urban places in the city of Amsterdam, captured in the past century by many photographers.
- All archival queries are matched with street view images from Mapillary.
- All matches are verified by architectural historians and Amsterdam inhabitants.
- Image pairs are archival and street views capturing the same place with different cameras, time lags, structural changes, occlusion, viewpoint, appearance, and illuminations.
- The dataset exhibits a domain shift between query and gallery due to significant differences between scanned archival and street view images.
Two sub-tasks are created on the dataset:
- **Verification** is a binary classification (auxiliary) task to detect whether a pair of archival and street-view images depicts the same place. All crowdsourced image pairs are labeled positive, and an equal number of negative samples is generated by randomly pairing archival and street-view images, for a total of 2,462 pairs in the verification task.
- **Retrieval** is the main task corresponding to VPR, in which a given query image is matched with a set of gallery images. For the retrieval task, AmsterTime dataset offers 1231 query images where the leave-one-out set serves as the gallery images for each query. | Provide a detailed description of the following dataset: AmsterTime |
ChAII - Hindi and Tamil Question Answering | The dataset covers Hindi and Tamil, collected without the use of translation. It provides a realistic information-seeking task with questions written by native-speaking expert data annotators. | Provide a detailed description of the following dataset: ChAII - Hindi and Tamil Question Answering |
MC_GRID | Here we release the dataset (Multi_Channel_Grid, abbreviated as **MC_Grid**) used in our paper [LiMuSE: Lightweight Multi-modal Speaker Extraction](https://arxiv.org/abs/2111.04063).
MC_Grid, which is based on the [GRID](http://spandh.dcs.shef.ac.uk/gridcorpus/) dataset, includes multi-channel audio, extracted voiceprints, and visual features. The method of feature extraction will be introduced below.
MC_Grid is specially prepared for speaker extraction task, and our code is available at [aispeech-lab/LiMuSE](https://github.com/aispeech-lab/LiMuSE). Feel free to contact us if you have any questions or suggestions. | Provide a detailed description of the following dataset: MC_GRID |
PDNC | An annotated dataset of quotations and within-quotation mentions in 22 full-length English novels. | Provide a detailed description of the following dataset: PDNC |
Korean Hate Speech Evaluation Datasets | APEACH is the first crowd-generated Korean evaluation dataset for hate speech detection. The sentences of the dataset were created by anonymous participants using the online crowdsourcing platform DeepNatural AI. | Provide a detailed description of the following dataset: Korean Hate Speech Evaluation Datasets |
Forest CoverType | Predicting forest cover type from cartographic variables only (no remotely sensed data). The actual forest cover type for a given observation (30 x 30 meter cell) was determined from US Forest Service (USFS) Region 2 Resource Information System (RIS) data. Independent variables were derived from data originally obtained from US Geological Survey (USGS) and USFS data. Data is in raw form (not scaled) and contains binary (0 or 1) columns of data for qualitative independent variables (wilderness areas and soil types).
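Since the qualitative variables arrive as runs of binary indicator columns, a consumer often collapses each run back into a single categorical code before analysis. A minimal sketch (the helper and the column offsets in the example are illustrative, not the dataset's documented layout):

```python
def decode_onehot(row, start, width):
    """Recover a 1-based category index from a run of 0/1 indicator
    columns (e.g. the wilderness-area or soil-type blocks)."""
    block = row[start:start + width]
    if sum(block) != 1:
        raise ValueError("exactly one indicator must be set")
    return block.index(1) + 1
```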
This study area includes four wilderness areas located in the Roosevelt National Forest of northern Colorado. These areas represent forests with minimal human-caused disturbances, so that existing forest cover types are more a result of ecological processes rather than forest management practices.
Some background information for these four wilderness areas: Neota (area 2) probably has the highest mean elevational value of the 4 wilderness areas. Rawah (area 1) and Comanche Peak (area 3) would have a lower mean elevational value, while Cache la Poudre (area 4) would have the lowest mean elevational value.
As for primary major tree species in these areas, Neota would have spruce/fir (type 1), while Rawah and Comanche Peak would probably have lodgepole pine (type 2) as their primary species, followed by spruce/fir and aspen (type 5). Cache la Poudre would tend to have Ponderosa pine (type 3), Douglas-fir (type 6), and cottonwood/willow (type 4).
The Rawah and Comanche Peak areas would tend to be more typical of the overall dataset than either the Neota or Cache la Poudre, due to their assortment of tree species and range of predictive variable values (elevation, etc.) Cache la Poudre would probably be more unique than the others, due to its relatively low elevation range and species composition. | Provide a detailed description of the following dataset: Forest CoverType |
Casino Reviews | This dataset contains online reviews written by North American casino users, gathered from Google Reviews.
It can be used to study user experience and related research directions such as cultural impacts on latency of aspects, domain importance, sentiment analysis, opinion mining, aspect-based sentiment analysis, etc. | Provide a detailed description of the following dataset: Casino Reviews |
WHAMR_ext | WHAMR_ext is an extension to the WHAMR corpus with larger RT60 values (between 1s and 3s) | Provide a detailed description of the following dataset: WHAMR_ext |
60k Stack Overflow Questions | The dataset contains 60,000 Stack Overflow questions from 2016-2020, classified into three categories:
1. HQ: High-quality posts without a single edit.
2. LQ_EDIT: Low-quality posts with a negative score and multiple community edits; they nonetheless remain open after those changes.
3. LQ_CLOSE: Low-quality posts that were closed by the community without a single edit.
## Notes
- Questions are sorted according to Question Id.
- Question body is in HTML format.
- All dates are in UTC format.
- The dataset is also accessible at https://www.kaggle.com/imoore/60k-stack-overflow-questions-with-quality-rate
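Because the question body is stored as HTML, a plain-text view can be recovered with the standard-library parser; a rough sketch (our own illustration, not part of the dataset tooling):

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects only the text nodes of an HTML fragment; tags are
    dropped and character references are converted by the parser."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

def strip_html(body):
    """Reduce an HTML-formatted question body to plain text."""
    extractor = _TextExtractor()
    extractor.feed(body)
    extractor.close()
    return "".join(extractor.parts)
```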
## How to cite
This is an original dataset, published under the MIT License. Please cite it as follows:
```
@article{annamoradnejad2022multiview,
title={Multi-View Approach to Suggest Moderation Actions in Community Question Answering Sites},
author={Annamoradnejad, Issa and Habibi, Jafar and Fazli, Mohammadamin},
journal = {Information Sciences},
volume = {600},
pages = {144-154},
year = {2022},
issn = {0020-0255},
doi = {https://doi.org/10.1016/j.ins.2022.03.085},
url = {https://www.sciencedirect.com/science/article/pii/S0020025522003127}
}
``` | Provide a detailed description of the following dataset: 60k Stack Overflow Questions |
Wiki-ZSL | The Wiki-ZSL (Wiki Zero-Shot Learning) dataset contains 113 relations and 94,383 instances from Wikipedia. The dataset is divided into three subsets: training set (98 relations), validation set (5 relations) and test set (10 relations). | Provide a detailed description of the following dataset: Wiki-ZSL |
Tweet IDs - Academic API Experiments | The lists of Tweet IDs for the experiments of the article: This Sample seems to be good enough! Assessing Coverage and Temporal Reliability of Twitter's Academic API by Juergen Pfeffer, Angelina Mooseder, Luca Hammer, Oliver Stritzel, David Garcia. | Provide a detailed description of the following dataset: Tweet IDs - Academic API Experiments |
Anshita | Potential cases | Provide a detailed description of the following dataset: Anshita |
KITTI-360 | KITTI-360 is a large-scale dataset that contains rich sensory information and full annotations. It is the successor of the popular KITTI dataset, providing more comprehensive semantic/instance labels in 2D and 3D, richer 360 degree sensory information (fisheye images and pushbroom laser scans), very accurate and geo-localized vehicle and camera poses, and a series of new challenging benchmarks. | Provide a detailed description of the following dataset: KITTI-360 |
BDD100K-Subsets | Subsets of the BDD100K dataset used in "Object Detection Under Rainy Conditions for Autonomous Vehicles: A Review of State-of-the-Art and Emerging Techniques" | Provide a detailed description of the following dataset: BDD100K-Subsets |
CholecT45 | CholecT45 is a subset of CholecT50 consisting of 45 videos from the Cholec80 dataset.
It is the first public release of part of the CholecT50 dataset.
CholecT50 is a dataset of 50 endoscopic videos of laparoscopic cholecystectomy surgery introduced to enable research on fine-grained action recognition in laparoscopic surgery.
It is annotated with 100 triplet classes in the form of <instrument, verb, target>.
- See [CholecT50](https://paperswithcode.com/dataset/cholect50) for more information.
- The dataset split is given [here](https://arxiv.org/abs/2204.05235). | Provide a detailed description of the following dataset: CholecT45 |
tdcommons | Therapeutics Data Commons is an open-science initiative with AI/ML-ready datasets and AI/ML tasks for therapeutics, spanning the discovery and development of safe and effective medicines. TDC provides an ecosystem of tools, libraries, leaderboards, and community resources, including data functions, strategies for systematic model evaluation, meaningful data splits, data processors, and molecule generation oracles. All resources are integrated via an open Python library. | Provide a detailed description of the following dataset: tdcommons |
Amazon Men | This dataset is a subset of the Amazon reviews dataset containing men-related products | Provide a detailed description of the following dataset: Amazon Men |
Amazon Fashion | This dataset is a subset of the Amazon reviews dataset containing fashion-related products | Provide a detailed description of the following dataset: Amazon Fashion |
TCR | A dataset of Joint Reasoning for Temporal and Causal Relations | Provide a detailed description of the following dataset: TCR |
Four Shapes | This dataset contains 16,000 images of four shapes: square, star, circle, and triangle. Each image is 200x200 pixels. | Provide a detailed description of the following dataset: Four Shapes |
RainCityscapes | A dataset for rain removal with scene depth information.
Compared with previous datasets, this one contains only outdoor photos, each with a depth map, and the rain images exhibit different degrees of rain and fog. | Provide a detailed description of the following dataset: RainCityscapes |
OUMVLP-Pose | The OU-ISIR Gait Database, Multi-View Large Population Database with Pose Sequence (OUMVLP-Pose) is meant to aid research efforts in the general area of developing, testing and evaluating algorithms for model-based gait recognition.
This data set was built upon OU-MVLP. It contains 10,307 subjects of round-trip walking sequences captured by seven network cameras at intervals of 15° (this sums to 14 views by considering the round trip on the same walking course) with an image size of 1,280 x 980 pixels and a frame-rate of 25 fps. | Provide a detailed description of the following dataset: OUMVLP-Pose |
PhC-C2DH-U373 | Glioblastoma-astrocytoma U373 cells on a polyacrylamide substrate
Dr. S. Kumar. Department of Bioengineering, University of California at Berkeley, Berkeley CA (USA) | Provide a detailed description of the following dataset: PhC-C2DH-U373 |
DIC-C2DH-HeLa | HeLa cells on a flat glass
Dr. G. van Cappellen. Erasmus Medical Center, Rotterdam, The Netherlands | Provide a detailed description of the following dataset: DIC-C2DH-HeLa |
Fluo-N2DH-SIM+ | Simulated nuclei of HL60 cells stained with Hoechst
Dr. V. Ulman and Dr. D. Svoboda. Centre for Biomedical Image Analysis (CBIA),
Masaryk University, Brno, Czech Republic (Created using MitoGen, part of Cytopacq) | Provide a detailed description of the following dataset: Fluo-N2DH-SIM+ |
Fluo-N2DH-GOWT1 | GFP-GOWT1 mouse stem cells
Dr. E. Bártová. Institute of Biophysics, Academy of Sciences of the Czech Republic, Brno, Czech Republic | Provide a detailed description of the following dataset: Fluo-N2DH-GOWT1 |
Fluo-N2DL-HeLa | HeLa cells stably expressing H2b-GFP
Mitocheck Consortium | Provide a detailed description of the following dataset: Fluo-N2DL-HeLa |
Fluo-N3DL-TRIC | Developing Tribolium Castaneum embryo (3D cartographic projection)
Dr. A. Jain. Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany | Provide a detailed description of the following dataset: Fluo-N3DL-TRIC |
Fluo-C3DH-A549-SIM | Simulated GFP-actin-stained A549 Lung Cancer cells embedded in a Matrigel matrix
Dr. M. Maška and Dr. D. V. Sorokin. Centre for Biomedical Image Analysis (CBIA),
Masaryk University, Brno, Czech Republic (Created using FiloGen, part of Cytopacq) | Provide a detailed description of the following dataset: Fluo-C3DH-A549-SIM |
Fluo-C3DL-MDA231 | MDA231 human breast carcinoma cells infected with a pMSCV vector including the GFP sequence, embedded in a collagen matrix
Dr. R. Kamm. Dept. of Biological Engineering, Massachusetts Institute of Technology, Cambridge MA (USA) | Provide a detailed description of the following dataset: Fluo-C3DL-MDA231 |
Dataset: Relationship extraction for knowledge graph creation from biomedical literature (Gene-Disease relationships) | This is the dataset used for classifying Gene-Disease relationship types from sentences. The dataset consists of 3 files:
* manually_annotated_set.xlsx - a set of 2000 manually annotated sentences with entities
* Unbalanced_dataset.xlsx - a set of 12000 sentences, out of which 2000 are from the first set (manually annotated) and the rest were added using a rule-based method, selecting sentences where extraction had confidence 1.
* Balanced_dataset_SUB_PRED.xlsx - a balanced dataset generated by taking the 2000 manually annotated sentences and then adding sentences from the rule-based method with confidence 1 in such a way that each relationship class had at least 1400 sentences (for biomarkers, we could obtain only 1243 sentences with confidence 1 from the portion of the data processed at the time of building the dataset). | Provide a detailed description of the following dataset: Dataset: Relationship extraction for knowledge graph creation from biomedical literature (Gene-Disease relationships)
VISUELLE2.0 | Visuelle 2.0 is a dataset containing real data for 5355 clothing products of the Italian fast-fashion retail company Nuna Lie. Specifically, Visuelle 2.0 provides data from 6 fashion seasons (partitioned into Autumn-Winter and Spring-Summer) from 2017-2019, right before the Covid-19 pandemic. Each product is accompanied by an HD image, textual tags and more. The time series data are disaggregated at the shop level and include sales, inventory stock, max-normalized prices (for the sake of confidentiality) and discounts. Exogenous time series data is also provided, in the form of Google Trends based on the textual tags and multivariate weather conditions of the stores' locations. Finally, we also provide purchase data for 667K customers whose identity has been anonymized, to capture personal preferences. With these data, Visuelle 2.0 makes it possible to address several problems that characterize the activity of a fast-fashion company: new product demand forecasting, short-observation new product sales forecasting, and product recommendation. | Provide a detailed description of the following dataset: VISUELLE2.0
CB-ToF | # Cornell-Box Dataset
## Download
The CornellBox Dataset can be downloaded from this URL
>https://viscom.datasets.uni-ulm.de/radu/dataset.zip
## Dataset
The dataset contains correlation measurements, ToF depth images and ground truth depth images in `.hdr` format.
The script `simulate_noise_on_correlations.py` can be used to simulate shot noise on the correlation images using the default arguments.
## Citing this work
If you use this data in your work, please kindly cite the following paper:
```
@InProceedings{schelling2022radu,
author = {Schelling, Michael and Hermosilla, Pedro and Ropinski, Timo},
title = {{RADU} - Ray-Aligned Depth Update Convolutions for {ToF} Data Denoising},
booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022}
}
```
## References
The data was generated using the transient renderer of Jarabo et al. [1].
[1] Jarabo, A., Marco, J., Muñoz, A., Buisan, R., Jarosz, W., Gutierrez, D.: "A framework for transient rendering" ACM Transactions on Graphics, SIGGRAPH ASIA, (2014). | Provide a detailed description of the following dataset: CB-ToF |
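The shot-noise simulation performed by `simulate_noise_on_correlations.py` might conceptually resemble the following sketch; this is an illustrative approximation, not the script's actual code, and the photon-scale parameter is a hypothetical stand-in for its default arguments.

```python
import numpy as np

def add_shot_noise(correlations, photons_per_unit=1000.0, rng=None):
    """Approximate shot noise on correlation images.

    Shot noise is Poisson-distributed in the photon-count domain, so we
    scale the (non-negative) correlation values to pseudo photon counts,
    sample from a Poisson distribution, and scale back.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.clip(correlations, 0.0, None) * photons_per_unit
    noisy = rng.poisson(counts).astype(np.float64) / photons_per_unit
    return noisy
```

Larger `photons_per_unit` values correspond to brighter scenes and therefore weaker relative noise, since the per-pixel signal-to-noise ratio of a Poisson process grows with the square root of the count.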
FLAT | FLAT is a synthetic dataset of 2000 ToF measurements that captures common sensor nonidealities and can be used to simulate different hardware | Provide a detailed description of the following dataset: FLAT
DeePore | DeePore is a deep learning workflow for rapid estimation of a wide range of porous material properties based on binarized micro-tomography images. By combining naturally occurring porous textures, we generated 17,700 semi-real 3-D micro-structures of porous geo-materials with a size of $256^3$ voxels, and 30 physical properties of each sample were calculated using physical simulations on the corresponding pore network models.
In the related paper, a purpose-designed feed-forward convolutional neural network (CNN) is trained on the dataset to estimate several morphological, hydraulic, electrical, and mechanical characteristics of the porous material in a fraction of a second. To fine-tune the CNN design, we tested 9 different training scenarios and selected the one with the highest average coefficient of determination (R2), equal to 0.885 over 1418 testing samples. Additionally, 3 independent synthetic images as well as 3 realistic tomography images were tested using the proposed method, and the results were compared with pore network modelling and experimental data, respectively. The tested absolute permeabilities had around 13% relative error compared to the experimental data, which is notable considering the accuracy of direct numerical simulation methods such as Lattice Boltzmann and Finite Volume. The workflow is compatible with any physical image size thanks to its dimensionless approach and can be used to characterize large-scale 3-D images by averaging the model outputs for a sliding window that scans the whole geometry. | Provide a detailed description of the following dataset: DeePore
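The sliding-window characterization of large volumes mentioned above could be sketched as follows; `model` stands in for the trained CNN (a hypothetical callable interface), and the window and stride values are illustrative rather than the paper's settings.

```python
import numpy as np

def sliding_window_average(volume, model, window=256, stride=128):
    """Characterize a large binary 3-D image by averaging model outputs
    over a sliding cubic window that scans the whole geometry.

    `model` is any callable mapping a (window, window, window) sub-volume
    to a vector of predicted physical properties.
    """
    preds = []
    zmax, ymax, xmax = (s - window for s in volume.shape)
    for z in range(0, zmax + 1, stride):
        for y in range(0, ymax + 1, stride):
            for x in range(0, xmax + 1, stride):
                sub = volume[z:z + window, y:y + window, x:x + window]
                preds.append(model(sub))
    # average the per-window predictions into one property vector
    return np.mean(preds, axis=0)
```

Because each window is processed independently, the approach inherits the dimensionless character of the workflow: the same trained model applies regardless of the physical size of the full image.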
MTHS | The MTHS dataset contains 30 Hz PPG signals obtained from 62 patients, including 35 men and 27 women. The ground-truth data includes heart rate and oxygen saturation levels sampled at 1 Hz. The HR and SpO2 measurements were obtained using a pulse oximeter (M70), and an iPhone 5s was used to record the PPG signals at 30 fps. | Provide a detailed description of the following dataset: MTHS
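As a rough illustration of how a 30 Hz PPG trace relates to heart-rate ground truth, a naive peak-counting estimator might look like the sketch below. This is not part of the dataset's reference pipeline; real systems band-pass filter and validate the signal first.

```python
import numpy as np

def estimate_heart_rate(ppg, fs=30.0):
    """Estimate heart rate in BPM from a PPG trace by counting local
    maxima that lie above the signal mean (minimal illustrative sketch)."""
    x = np.asarray(ppg, dtype=np.float64)
    above = x > x.mean()
    # a peak: strictly larger than the left neighbour, at least as large
    # as the right neighbour, and above the mean level
    peaks = (x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:]) & above[1:-1]
    duration_s = len(x) / fs
    return 60.0 * peaks.sum() / duration_s
```

Counting one pulse peak per cardiac cycle over the recording length converts directly to beats per minute; the mean-level gate suppresses small local maxima in the troughs.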
MUGEN | **MUGEN** is a large-scale video-audio-text dataset collected using the open-sourced platform game CoinRun. MUGEN can help progress research on many tasks in multimodal understanding and generation. | Provide a detailed description of the following dataset: MUGEN
Distinctions-646 | Distinctions-646 is composed of 646 foreground images with manually annotated alpha mattes | Provide a detailed description of the following dataset: Distinctions-646
SMC Text Corpus | Contents (As on March 4, 2019)
--------
The text corpus contains running text from various free licensed sources.
- The whole content of Malayalam Wikipedia extracted on January 1, 2019
- News/articles from various sources (sources mentioned in the respective files):
- 251 MB
- 8,60,159 lines
- 98,15,533 words
- 10,11,11,885 characters
The word corpus contains
- Classified lexicon prepared for the [Malayalam Morphology Analyser project](https://gitlab.com/smc/mlmorph)
- Unique words extracted from Malayalam Wikipedia, Wiktionary, etc.
- 14,27,392 words | Provide a detailed description of the following dataset: SMC Text Corpus |
SILVR | We present _SILVR_, a dataset of light field images for six-degrees-of-freedom
navigation in large fully-immersive volumes. The _SILVR_ dataset is short for
_"**S**ynthetic **I**mmersive **L**arge-**V**olume **R**ay"_ dataset.
## Properties
Our dataset exhibits the following properties:
- **synthetic**: Rendered using Blender 3.0 with Cycles, the images are
perfect and do not need any calibration. Camera positions and lens
configurations are known exactly and provided in the corresponding JSON
files.
- **large interpolation volume**: The camera configurations span a
relatively large volume (a couple of meters in diameter).
- **large field of view**: In order to maximize the _interpolation volume_
(a.k.a: the walkable volume of light), the images are rendered using fisheye
lenses with a field of view of 180°.
- **immersive**: Thanks to the large field of view and positioning of the
viewpoints, every point within the interpolation volume has a full panoramic
field of view of light information available.
- **realism**: The selected scenes have reasonable realism.
- **depth maps**: As the images are computer-generated renders, we provide
depth maps for every image.
- **specularities** and **reflections**: The scenes exhibit some specularities
or reflections, including mirrors. Reflections and mirrors always have the
depth of the surface, and not the apparent depth of the reflections.
- **volumetrics**: Some volumetrics are also present (fire, smoke, fog) in the
`garden` scene.
- **densely rendered**: The camera setup is rather dense (around 10cm spacing
  between cameras). | Provide a detailed description of the following dataset: SILVR
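The 180° fisheye rendering described above can be illustrated with the generic equidistant model r = f·θ. Blender's actual fisheye model may differ, so this is only a sketch of how a ray direction maps to normalized image coordinates.

```python
import numpy as np

def equidistant_fisheye_project(direction, fov_deg=180.0):
    """Project a 3-D ray direction (camera looking along +z) onto
    normalized fisheye image coordinates in [-1, 1] using the
    equidistant model r = f * theta."""
    d = np.asarray(direction, dtype=np.float64)
    d = d / np.linalg.norm(d)
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))   # angle from the optical axis
    r = theta / np.radians(fov_deg / 2.0)         # normalized radial distance
    phi = np.arctan2(d[1], d[0])                  # azimuth around the axis
    return r * np.cos(phi), r * np.sin(phi)
```

With a 180° field of view, any ray up to 90° off-axis lands inside the unit image circle, which is what lets every point in the interpolation volume see a full panorama of light information.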
MSU Super-Resolution for Video Compression | This is a dataset for a super-resolution task. The dataset contains 480x270 videos that were decoded with 6 different bitrates (100 - 4000 kbps) using 5 different codecs (H.264, H.265, H.266, AV1, and AVS3 standards). The dataset contains indoor and outdoor videos as well as animation. All videos have low SI/TI values and simple textures. It was made to minimize compression artifacts that may occur to make restoration of details possible. | Provide a detailed description of the following dataset: MSU Super-Resolution for Video Compression |
MASSIVE | MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks of intent prediction and slot annotation. Utterances span 60 intents and include 55 slot types. MASSIVE was created by localizing the SLURP dataset, composed of general Intelligent Voice Assistant single-shot interactions. | Provide a detailed description of the following dataset: MASSIVE |
MICCAI 2015 Head and Neck Challenge | This database is provided and maintained by Dr. Gregory C Sharp (Harvard Medical School – MGH, Boston) and his group.
The data here provided have been used for the “Head and Neck Auto Segmentation MICCAI Challenge (2015)”.
To cite the challenge or the data, please refer to:
Raudaschl, P. F., Zaffino, P., Sharp, G. C., Spadea, M. F., Chen, A., Dawant, B. M., … & Jung, F. (2017).
Evaluation of segmentation methods on head and neck CT: Auto‐segmentation challenge 2015.
Medical Physics, 44(5), 2020-2036.
PDDCA version 1.4.1 comprises 48 patient CT images from the Radiation Therapy Oncology Group (RTOG) 0522 study (a multi-institutional clinical trial led by Dr Kian Ang), together with manual segmentation of left and right parotid glands, brainstem, optic chiasm, optic nerves (both left and right), mandible, submandibular glands (both left and right) and manual identification of bony landmarks.
We give this data to the community in the hopes that it will be helpful. Any errors in delineation and markup are ours, and are not the fault of participating doctors.
Please see pddca.odt for complete information.
For practical reasons the database is split in 3 zipped files.
Part 3 contains images with some of the above listed structures missing.
Details regarding the Challenge Train/Test Splits can be found in the dataset description | Provide a detailed description of the following dataset: MICCAI 2015 Head and Neck Challenge |
NR2R | To form the collection of nighttime RAW samples, we first selected a total of 150 images at a spatial resolution of 3464×5202 from the training and validation sets provided by the night image challenge. These RAW images were then pre-processed with a well-known CNN-based denoiser to best produce noise-free samples, because nighttime imaging faces a very challenging situation, with heavy noise incurred by high ISO settings under poor illumination conditions (e.g., underexposure).
We applied a two-stage process to derive the corresponding RGB image for each RAW input. First, we used a simple ISP comprising linear demosaicing, gray-world white balance, color correction, and gamma correction to convert each denoised RAW input to RGB for ground-truth illumination estimation. To this end, we mark the "White Patch" in each converted RGB image, where the patch appears in neutral gray and its RGB channels are approximately equal. Since a gray surface presumably reflects all incoming light radiation, it can be used to represent the ground-truth illumination of the RAW image. We then perform the second-stage labeling using the illumination from the first stage. Specifically, we first obtain the color-correct image through a series of operations including linear demosaicing, white balance using the labeled white-balance values, and color correction with the camera's internal color correction matrix (CCM). The brightness adjustment consists of local and global tone mapping jointly. Since local tone mapping requires fine-grained adjustment of each small patch in the scene, it is difficult to annotate manually; we therefore use a pre-trained local tone-mapping model for this task. Because the pre-trained tone-mapping network was trained on daytime images, it handles local adjustment well but fails to control global brightness. We save the model output in a 16-bit intermediate PNG format and then import it into the Lightroom app to manually adjust global exposure, brightness, shadows, and contrast for final high-quality RGB rendering, emulating the image-rendering knowledge of professional photographers. We thereby obtain a high-resolution nighttime RAW-RGB image dataset. | Provide a detailed description of the following dataset: NR2R
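The gray-world white-balance step of the simple ISP described above can be sketched as follows. This is an illustrative implementation under the stated gray-world assumption, not the authors' code.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world white balance: assume the average scene color is
    neutral gray, so scale each channel to match the global mean
    intensity of the image."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gains = channel_means.mean() / channel_means     # gain per channel
    return rgb * gains
```

After this correction the three channel means coincide, which is exactly the property the "White Patch" annotation exploits: on a truly neutral surface the R, G and B responses become approximately equal.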
160_subset | The 160x160 subset of the GasHisSDB dataset. | Provide a detailed description of the following dataset: 160_subset
Basketball Ballistic raw sequences | Ballistic trajectories | Provide a detailed description of the following dataset: Basketball Ballistic raw sequences |