| dataset_name | description | prompt |
|---|---|---|
NuCLS | The NuCLS dataset contains over 220,000 labeled nuclei from breast cancer images from TCGA. These nuclei were annotated through the collaborative effort of pathologists, pathology residents, and medical students using the Digital Slide Archive. These data can be used in several ways to develop and validate algorithms for nuclear detection, classification, and segmentation, or as a resource to develop and evaluate methods for interrater analysis.
Data from both single-rater and multi-rater studies are provided. For single-rater data we provide both pathologist-reviewed and uncorrected annotations. For multi-rater datasets we provide annotations generated with and without suggestions from weak segmentation and classification algorithms. | Provide a detailed description of the following dataset: NuCLS |
K-Hairstyle | K-hairstyle is a novel large-scale Korean hairstyle dataset with 256,679 high-resolution images. In addition, K-hairstyle contains various hair attributes annotated by Korean expert hair stylists and hair segmentation masks. | Provide a detailed description of the following dataset: K-Hairstyle |
CC12M | Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training. | Provide a detailed description of the following dataset: CC12M |
ACDC | The goal of the **Automated Cardiac Diagnosis Challenge (ACDC)** is to:
- compare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium, as well as the right ventricular endocardium, for both end-diastolic and end-systolic phase instances;
- compare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).
The overall **ACDC** dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject group) as described below. Furthermore, each patient comes with the following additional information: weight, height, as well as the diastolic and systolic phase instants.
The database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided in NIfTI format. | Provide a detailed description of the following dataset: ACDC |
MoNuSeg | The dataset for this challenge was obtained by carefully annotating tissue images of several patients with tumors of different organs who were diagnosed at multiple hospitals. This dataset was created by downloading H&E stained tissue images captured at 40x magnification from the TCGA archive. H&E staining is a routine protocol to enhance the contrast of a tissue section and is commonly used for tumor assessment (grading, staging, etc.). Given the diversity of nuclei appearances across multiple organs and patients, and the richness of staining protocols adopted at multiple hospitals, the training dataset will enable the development of robust and generalizable nuclei segmentation techniques that will work right out of the box. | Provide a detailed description of the following dataset: MoNuSeg |
GlaS | The dataset used in this challenge consists of 165 images derived from 16 H&E stained histological sections of stage T3 or T4 colorectal adenocarcinoma. Each section belongs to a different patient, and sections were processed in the laboratory on different occasions. Thus, the dataset exhibits high inter-subject variability in both stain distribution and tissue architecture. The digitization of these histological sections into whole-slide images (WSIs) was accomplished using a Zeiss MIRAX MIDI Slide Scanner with a pixel resolution of 0.465 µm. | Provide a detailed description of the following dataset: GlaS |
Brain US | This brain anatomy segmentation dataset has 1300 2D US scans for training and 329 for testing. A total of 1629 in vivo B-mode US images were obtained from 20 different subjects (aged < 1 year) who were treated between 2010 and 2016. The dataset contained subjects with IVH and without (healthy subjects but at risk of developing IVH). The US scans were collected using a Philips US machine with a C8-5 broadband curved array transducer using coronal and sagittal scan planes. For every collected image, the ventricles and septum pellucidum were manually segmented by an expert ultrasonographer. We split these images randomly into 1300 training images and 329 testing images for experiments. Note that these images are of size 512 × 512. | Provide a detailed description of the following dataset: Brain US |
PieAPP dataset | The PieAPP dataset is a large-scale dataset used for training and testing perceptually-consistent image-error prediction algorithms.
The dataset can be downloaded from: [server containing a zip file with all data](https://web.ece.ucsb.edu/~ekta/projects/PieAPPv0.1/all_data_PieAPP_dataset_CVPR_2018.zip) (2.2GB) or [Google Drive](https://drive.google.com/drive/folders/10RmBhfZFHESCXhhWq0b3BkO5z8ryw85p?usp=sharing) (ideal for quick browsing).
The dataset contains undistorted high-quality reference images and several distorted versions of these reference images. Pairs of distorted images corresponding to a reference image are labeled with **probability of preference** labels.
These labels indicate the fraction of human population that considers one image to be visually closer to the reference over another in the pair.
To ensure reliable pairwise probability of preference labels, we query 40 human subjects via Amazon Mechanical Turk for each image pair.
We then obtain the percentage of people who selected image A over B as the ground-truth label for this pair, which is the probability of preference of A over B (the [supplementary document](https://openaccess.thecvf.com/content_cvpr_2018/Supplemental/3483-supp.pdf) explains the choice of using 40 human subjects to capture accurate probabilities).
This approach is more robust because it is easier to identify the visually closer image than to assign quality scores, and does not suffer from set-dependency or scalability issues like Swiss tournaments since we never label the images with per-image quality scores (see the associated [paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Prashnani_PieAPP_Perceptual_Image-Error_CVPR_2018_paper.pdf) and [supplementary document](https://openaccess.thecvf.com/content_cvpr_2018/Supplemental/3483-supp.pdf) for issues with such existing labeling schemes).
A pairwise learning framework, discussed in the paper, can be used to train image error predictors on the PieAPP dataset.
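As a rough illustration, one common way to exploit such probability-of-preference labels is a Bradley-Terry-style pairwise loss, sketched below in plain NumPy. This is only a minimal sketch of the idea, not the actual PieAPP architecture; the scores stand in for the outputs of any learned image-error predictor (lower predicted error means "closer to the reference").

```python
import numpy as np

def pairwise_preference_loss(error_A, error_B, p_AB):
    """Binary cross-entropy between the human probability p_AB of preferring
    image A over image B and the model's predicted preference, obtained by
    passing the difference of predicted errors through a logistic function."""
    p_hat = 1.0 / (1.0 + np.exp(error_A - error_B))  # predicted Pr(A preferred over B)
    p_hat = np.clip(p_hat, 1e-7, 1.0 - 1e-7)
    return -(p_AB * np.log(p_hat) + (1.0 - p_AB) * np.log(1.0 - p_hat))

# Example: 32 of 40 subjects preferred distorted image A over distorted image B.
print(pairwise_preference_loss(error_A=0.3, error_B=0.9, p_AB=32 / 40))
```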
## Dataset statistics
We make this dataset available for non-commercial and educational purposes only.
The dataset contains a total of 200 undistorted reference images, divided into train / validation / test splits.
These reference images are derived from the [Waterloo Exploration Dataset](https://ece.uwaterloo.ca/~k29ma/exploration/). We release the subset of 200 reference images used in PieAPP from the Waterloo Exploration Dataset with permissions for non-commercial, educational, use from the authors.
The users of the PieAPP dataset are requested to cite the Waterloo Exploration Dataset for the reference images, along with PieAPP dataset, as mentioned [here](https://github.com/prashnani/PerceptualImageError/blob/master/dataset/dataset_README.md#terms-of-usage-and-how-to-cite-this-dataset).
The training and validation sets together contain 160 reference images, and the test set contains 40 reference images.
A total of 19,680 distorted images are generated for the train/val set and pairwise probability of preference labels for 77,280 image pairs are made available (derived from querying 40 human subjects for a pairwise comparison + max-likelihood estimation of some missing pairs).
For the test set, 15 distorted images per reference (600 distorted images in total) are created and **all possible** pairwise comparisons (4200 in total) are performed to label **each** image pair with a probability of preference derived from 40 human subjects' votes.
Overall, the PieAPP dataset provides a total of 20,280 distorted images derived from 200 reference images, and 81,480 pairwise probability-of-preference labels.
More details of dataset collection can be found in Sec.4 of the [paper](https://openaccess.thecvf.com/content_cvpr_2018/papers/Prashnani_PieAPP_Perceptual_Image-Error_CVPR_2018_paper.pdf) and [supplementary document](https://openaccess.thecvf.com/content_cvpr_2018/Supplemental/3483-supp.pdf). | Provide a detailed description of the following dataset: PieAPP dataset |
AbstRCT - Neoplasm | The AbstRCT dataset consists of randomized controlled trials retrieved from the MEDLINE database via PubMed search. The trials are annotated with argument components and argumentative relations.
Paper: [Transformer-Based Argument Mining for Healthcare Applications](https://hal.archives-ouvertes.fr/hal-02879293/) | Provide a detailed description of the following dataset: AbstRCT - Neoplasm |
CDCP | The Cornell eRulemaking Corpus – CDCP is an argument mining corpus annotated with argumentative structure information capturing the evaluability of arguments. The corpus consists of 731 user comments on Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB); the resulting dataset contains 4931 elementary unit and 1221 support relation annotations. It is a resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify what additional information is necessary
for readers to understand and evaluate a given argument. Immediate applications include providing real-time feedback to commenters, specifying which types of support for which propositions can be added to construct better-formed arguments. | Provide a detailed description of the following dataset: CDCP |
DRI Corpus | The **Dr. Inventor Multi-Layer Scientific Corpus** (**DRI Corpus**) includes 40 Computer Graphics papers, selected by domain experts. Each paper of the Corpus has been annotated by three annotators by providing the following layers of annotations, each one characterizing a core aspect of scientific publications:
* Scientific discourse: each sentence has been associated with a specific scientific discourse category (Background, Approach, Challenge, Future Work, etc.).
* Subjective statements and novelty: each sentence has been characterized with respect to advantages, disadvantages and novel aspects presented.
* Citation purpose: each citation has been assigned a purpose specifying why the authors of the paper cited that specific piece of research.
* Summary relevance of sentences and hand-written summaries: each sentence of the paper has been given an integer score from 1 to 5 indicating its relevance for inclusion in a summary of the paper. Sentences rated 5 are the most relevant ones for summarizing a paper. For each paper, three hand-written summaries (max 250 words) are provided. | Provide a detailed description of the following dataset: DRI Corpus |
PIPAL | The PIPAL training set contains 200 reference images, 40 distortion types, 23k distorted images, and more than one million human ratings. Notably, we include GAN-based algorithms' outputs as a new distortion type. We employ the Elo rating system to assign the Mean Opinion Scores (MOS). | Provide a detailed description of the following dataset: PIPAL |
PWDB | # Overview
This database of simulated arterial pulse waves is designed to be representative of a sample of pulse waves measured from healthy adults. It contains pulse waves for 4,374 virtual subjects, aged from 25 to 75 years (in 10-year increments). The database contains a baseline set of pulse waves for each of the six age groups, created using cardiovascular properties (such as heart rate and arterial stiffness) which are representative of healthy subjects at each age group. It also contains 728 further virtual subjects at each age group, in which each of the cardiovascular properties is varied within normal ranges. This allows for extensive in silico analyses of haemodynamics and the performance of pulse wave analysis algorithms.
# Data Description
The database contains the following [pulse waves](https://github.com/peterhcharlton/pwdb/wiki/pwdb_data.mat#datawaves), sampled at 500 Hz:
- arterial flow velocity (U),
- luminal area (A),
- pressure (P), and
- photoplethysmogram (PPG).
These pulse waves are provided at a range of [measurement sites](https://github.com/peterhcharlton/pwdb/wiki/pwdb_data.mat#datawaves), including:
- aorta (ascending and descending)
- carotid artery
- brachial artery
- radial artery
- finger
- femoral artery
The database also contains numerous [reference variables](https://github.com/peterhcharlton/pwdb/wiki/pwdb_data.mat#datahaemods), mostly relating to cardiovascular properties, such as:
- heart rate
- cardiac output
- blood pressure
- pulse wave velocity
- age
The data are available in three formats: Matlab, CSV and WaveForm Database (WFDB) format. Further details of the formatting and contents of each file are available [here](https://github.com/peterhcharlton/pwdb/wiki/Using-the-Pulse-Wave-Database).
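For instance, the WFDB copy can be read with the open-source `wfdb` Python package; the sketch below makes this concrete, but the record path is purely hypothetical, so consult the documentation linked above for the actual file layout.

```python
import wfdb  # pip install wfdb

# Hypothetical record name; see the database documentation for real paths.
record = wfdb.rdrecord("pwdb_wfdb/virtual_subject_0001")

print(record.sig_name)        # channel names (e.g. pressure, flow velocity, PPG)
print(record.fs)              # sampling frequency (500 Hz for these pulse waves)
print(record.p_signal.shape)  # samples x channels array of the pulse waves
```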
# Accompanying Publication
The database is described in the following publication:
[Charlton P.H., Mariscal Harana, J., Vennin, S., Li, Y., Chowienczyk, P. & Alastruey, J., **“Modelling arterial pulse waves in healthy ageing: a database for in silico evaluation of haemodynamics and pulse wave indices,”** AJP Hear. Circ. Physiol., 317(5), pp.H1062-H1085, 2019. https://doi.org/10.1152/ajpheart.00218.2019](https://doi.org/10.1152/ajpheart.00218.2019)
Please cite this publication when using the database.
# Further Information
Further information on the Pulse Wave Database project can be found at [the project homepage](https://peterhcharlton.github.io/pwdb/). In particular, an accompanying [Wiki](https://github.com/peterhcharlton/pwdb/wiki) provides:
- An introduction to the dataset [here](https://github.com/peterhcharlton/pwdb/wiki)
- The methods used to create and analyse the dataset [here](https://github.com/peterhcharlton/pwdb/wiki/Reproducing-the-Pulse-Wave-Database)
- An explanation of each of the variables in the dataset [here](https://github.com/peterhcharlton/pwdb/wiki)
- Case studies of analyses conducted on the dataset in Matlab [here](https://github.com/peterhcharlton/pwdb/wiki/Case-Studies) | Provide a detailed description of the following dataset: PWDB |
ReCAM | Tasks
Our shared task has three subtasks. Subtasks 1 and 2 focus on evaluating machine learning models' performance with regard to two definitions of abstractness (Spreen and Schulz, 1966; Changizi, 2008), which we call imperceptibility and nonspecificity, respectively. Subtask 3 aims to provide some insights into their relationship.
• Subtask 1: ReCAM-Imperceptibility
Concrete words refer to things, events, and properties that we can perceive directly with our senses (Spreen and Schulz, 1966; Coltheart 1981; Turney et al., 2011), e.g., donut, trees, and red. In contrast, abstract words refer to ideas and concepts that are distant from immediate perception. Examples include objective, culture, and economy. In subtask 1, the participating systems are required to perform reading comprehension of abstract meaning for imperceptible concepts.
Given a passage and a question, a model needs to choose from the five candidates the best one for replacing @placeholder.
• Subtask 2: ReCAM-Nonspecificity
Subtask 2 focuses on a different type of definition. Compared to concrete concepts like groundhog and whale, hypernyms such as vertebrate are regarded as more abstract (Changizi, 2008).
• Subtask 3: ReCAM-Intersection
Subtask 3 aims to provide more insight into the relationship between the two views of abstractness. In this subtask, we test the performance of a system that is trained on one definition and evaluated on the other. | Provide a detailed description of the following dataset: ReCAM |
VQA-E | VQA-E is a dataset for Visual Question Answering with Explanation, where models are required to generate an explanation along with the predicted answer. The VQA-E dataset is automatically derived from the VQA v2 dataset by synthesizing a textual explanation for each image-question-answer triple.
Image Source: [VQA-E: Explaining, Elaborating, and Enhancing Your Answers for Visual Questions](https://arxiv.org/abs/1803.07464) | Provide a detailed description of the following dataset: VQA-E |
RSPECT | **The RSNA Pulmonary Embolism CT** (**RSPECT**) Dataset is composed of CT pulmonary angiogram images and annotations related to pulmonary embolism. It's part of the 2020 RSNA Pulmonary Embolism Detection Challenge which invited researchers to develop machine-learning algorithms to detect and characterize instances of pulmonary embolism (PE) on chest CT studies. The competition, conducted in collaboration with the Society of Thoracic Radiology (STR), involved creating the largest publicly available annotated PE dataset, comprised of more than 12,000 CT studies. Imaging data was contributed by five international research centers and labeled with detailed clinical annotations by a group of more than 80 expert thoracic radiologists. For the first time in an RSNA data challenge, the rules required competitors to submit and run their code in a standard shared environment, producing simpler, more readily usable models. | Provide a detailed description of the following dataset: RSPECT |
SEP-28k | Stuttering Events in Podcasts (SEP-28k) is a dataset containing over 28k clips labeled with five event types including blocks, prolongations, sound repetitions, word repetitions, and interjections. Audio comes from public podcasts largely consisting of people who stutter interviewing other people who stutter. | Provide a detailed description of the following dataset: SEP-28k |
FluencyBank | **FluencyBank** is a shared database for the study of fluency development. Participants include typically-developing monolingual and bilingual children, children and adults who stutter (C/AWS) or who clutter (C/AWC), and second language learners.
Image Source: [FluencyBank](https://fluency.talkbank.org/) | Provide a detailed description of the following dataset: FluencyBank |
MHIST | The **m**inimalist **hist**opathology image analysis dataset (**MHIST**) is a binary classification dataset of 3,152 fixed-size images of colorectal polyps, each with a gold-standard label determined by the majority vote of seven board-certified gastrointestinal pathologists. MHIST also includes each image's annotator agreement level. As a minimalist dataset, MHIST occupies less than 400 MB of disk space, and a ResNet-18 baseline can be trained to convergence on MHIST in just 6 minutes using approximately 3.5 GB of memory on an NVIDIA RTX 3090. As example use cases, the authors use MHIST to study natural questions that arise in histopathology image classification such as how dataset size, network depth, transfer learning, and high-disagreement examples affect model performance. | Provide a detailed description of the following dataset: MHIST |
CC-News | **CommonCrawl News** is a dataset containing news articles from news sites all over the world. The dataset is available in the form of Web ARChive (WARC) files that are released on a daily basis. | Provide a detailed description of the following dataset: CC-News |
MalNet | MalNet is a large public graph database, representing a large-scale ontology of software function call graphs. MalNet contains over 1.2 million graphs, averaging over 17k nodes and 39k edges per graph, across a hierarchy of 47 types and 696 families.
Image Source: [Explore MalNet](https://mal-net.org/explore) | Provide a detailed description of the following dataset: MalNet |
IBM-Rank-30k | The IBM-Rank-30k is a dataset for the task of argument quality ranking. It is a corpus of 30,497 arguments carefully annotated for point-wise quality.
Image Source: [A Large-scale Dataset for Argument Quality Ranking: Construction and Analysis](https://arxiv.org/pdf/1911.11408v1.pdf) | Provide a detailed description of the following dataset: IBM-Rank-30k |
OTEANNv3 | This dataset contains orthographic samples of words in 19 languages (ar, br, de, en, eno, ent, eo, es, fi, fr, fro, it, ko, nl, pt, ru, sh, tr, zh). Each sample contains two text features: a Word (the textual representation of the word according to its orthography) and a Pronunciation (the highest-surface IPA pronunciation of the word as pronounced in its language). | Provide a detailed description of the following dataset: OTEANNv3 |
Maintenance of Wakefulness Test (MWT) recordings | Maintenance of Wakefulness Test (MWT) is a dataset of recordings with microsleep episodes and drowsiness.
Cite as:
Hertig-Godeschalk Anneke, Skorucak Jelena, Malafeev Alexander, Achermann Peter, Mathis Johannes, & Schreier David R. (2019). Maintenance of Wakefulness Test (MWT) recordings (Version v1) [Data set]. Zenodo. http://doi.org/10.5281/zenodo.325171
Each file contains an MWT trial (the first trial after noon) recording of a patient. The data contain occipital EEG and EOG channels. All signals were bandpass filtered between 0.5 and 45 Hz.
In each file, the data is structured as the following:
fs: sampling rate.
eeg_O1: EEG channel O1-M2 where M2 is the mastoid electrode on the opposite side.
eeg_O2: EEG channel O2-M1 where M1 is the mastoid electrode on the opposite side.
E1 and E2: EOG channels for left and right eye, both referenced to M1.
labels_O1 and labels_O2: arrays with expert scoring (0-wake, 1-MSE, 2-MSEc, 3-ED, according to the BERN scoring criteria published in Hertig-Godeschalk et al. doi:10.1093/sleep/zsz163.); length of the arrays is the same as for other signals, i.e. there is a label per sample.
prec: number of signal samples per label, in this case 1. The variables prec and half_prec were not used.
num_Labels: length of the signal in samples.
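As an illustration, assuming each record is distributed as a MATLAB file readable with `scipy.io.loadmat` (an HDF5-based v7.3 file would need `h5py` instead), the fields described above can be inspected roughly as follows; the file name is hypothetical.

```python
import numpy as np
from scipy.io import loadmat

rec = loadmat("mwt_trial_example.mat")  # hypothetical file name

fs = float(np.squeeze(rec["fs"]))
eeg_o1 = np.squeeze(rec["eeg_O1"])
labels_o1 = np.squeeze(rec["labels_O1"])  # one label per sample: 0-wake, 1-MSE, 2-MSEc, 3-ED

# Fraction of the trial scored as microsleep episodes (MSE) on channel O1
mse_fraction = float(np.mean(labels_o1 == 1))
print(f"{len(eeg_o1) / fs:.1f} s of data, {100 * mse_fraction:.1f}% scored as MSE")
```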
Further descriptions, details, and outcomes can be found in the related studies. The published studies which are based on this data and address the borderland between wakefulness and sleep, i.e. microsleep episodes, are listed under related/alternative identifiers. | Provide a detailed description of the following dataset: Maintenance of Wakefulness Test (MWT) recordings |
darpa_sd2_perovskites | Included in this content:
* 0045.perovskitedata.csv - main dataset used in this article. A more detailed description can be found in the “dataset overview” section below
* Chemical Inventory.csv - the hand curated file of all chemicals used in the construction of the perovskite dataset. This file includes identifiers, chemical properties, and other information.
* ExcessMolarVolumeData.xlsx - record of experimental data, computations, and final dataset used in the generation of the excess molar volume plots.
* MLModelMetrics.xlsx - all of the ML metrics organized in one place (excludes reactant set specific breakdown, see ML_Logs.zip for those files).
* OrganoammoniumDensityDataset.xlsx - complete set of the data used to generate the density values. Example calculations included.
* model_matchup_main.py - python pipeline used to generate all of the ML runs associated with the article. More detailed instructions on the operation of this code are included in the “ML Code” Section below. This file is also hosted on
* GIT: https://github.com/ipendlet/MLScripts/blob/master/temp_densityconc/model_matchup_main_20191231.py
* SolutionVolumeDataset - complete set of 219 solutions in the perovskite dataset. Tabs include the automatically generated reagent information from ESCALATE, hand curated reagent information from early runs, and the generation of the dataset used in the creation of Figure 5.
* error_auditing.zip - code and historical datasets used for reporting the dataset auditing.
* “AllCode.zip” which contains:
* model_matchup_main_20191231.py - python pipeline used to generate all of the ML runs associated with the article. More detailed instructions on the operation of this code is included in the “ML Code” Section below. This file is also hosted on
* GIT: https://github.com/ipendlet/MLScripts/blob/master/temp_densityconc/0045.perovskitedata.csv
* VmE_CurveFitandPlot.py - python code for generating the third order polynomial fit to the VmE vs mole fraction of FAH included in the main text. Requires the ‘MolFractionResults.csv’ to function (also included).
* Calculation_Vm_Ve_CURVEFITTING.nb - mathematica code for generating the third order polynomial fit to the VmE vs mole fraction of FAH included in the main text.
* Covariance_Analysis.py - python code for ingesting and plotting the covariance of features and volumes in the perovskite dataset. Includes renaming dictionaries used for the publication.
* FeatureComparison_Plotting.py - python code for reading in and plotting features for the ‘GBT’ and ‘OHGBT’ folders in this directory. The code parses the contents of these folders and generates feature comparison metrics used for Figure 9 and the associated Figure S8. Some assembly required.
* Requirements.txt - all of the packages used in the generation of this paper
* 0045.perovskitedata.csv - the main dataset described throughout the article. This file is required to run some of the code and is therefore kept near the code.
* “ML_Logs.zip” which contains:
* A folder describing every model generated for this article. In each folder there are a number of files:
* Features_named_important.csv and features_value_importance.csv - these files are linked together and describe the weighted feature contributions from features (only present for GBT models)
* AnalysisLog.txt - Log file of the run including all options, data curation and model training summaries
* LeaveOneOut_Summary.csv - Results of the leave-one-reactant set-out studies on the model (if performed)
* LOOModelInfo.txt - Hyperparameter information for each model in the study (associated with the given dataset, sometimes includes duplicate runs).
* STTSModelInfo.txt - Hyperparameter information for each model in the study (associated with the given dataset, sometimes includes duplicate runs).
* StandardTestTrain_Summary.csv - Results of the 6 fold cross validation ML performance (for the hold out case)
* LeaveOneOut_FullDataset_ByAmine.csv - Results of the leave-one-reactant set-out studies performed on the full dataset (all experiments) specified by reactant set (delineated by the amine)
* LeaveOneOut_StratifiedData_ByAmine.csv - Results of the leave-one-reactant set-out studies performed on a random stratified sample (96 random experiments) specified by reactant set (delineated by the amine)
* model_matchup_main_*.py - code used to generate all of the runs contained in a particular folder. The code is exactly what was used at run time to generate a given dataset (requires 0045.perovskitedata.csv file to run). | Provide a detailed description of the following dataset: darpa_sd2_perovskites |
Decagon | Bio-decagon is a dataset for the polypharmacy side effect identification problem, framed as a multirelational link prediction problem in a two-layer multimodal graph/network with two node types: drugs and proteins. The protein-protein interaction network describes relationships between proteins. The drug-drug interaction network contains 964 different types of edges (one for each side effect type) and describes which drug pairs lead to which side effects. Lastly, drug-protein links describe the proteins targeted by a given drug.
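A minimal sketch of how such a two-layer multimodal graph could be held in memory with plain Python containers is shown below; the identifiers are illustrative only, and the graph convolutional model used by Decagon itself is not shown.

```python
# Protein-protein interaction edges (undirected pairs of protein IDs)
ppi_edges = {("protein_1", "protein_2"), ("protein_2", "protein_3")}

# Drug-protein links: proteins targeted by each drug
drug_targets = {"drug_A": {"protein_1"}, "drug_B": {"protein_3"}}

# Drug-drug edges keyed by side-effect type (964 relation types in the full data)
drug_drug = {"side_effect_1": {("drug_A", "drug_B")}}

# Polypharmacy side-effect prediction is then multirelational link prediction:
# decide whether the edge (drug_i, drug_j) exists under relation r.
def has_side_effect(drug_i, drug_j, r):
    pair = tuple(sorted((drug_i, drug_j)))
    return pair in {tuple(sorted(e)) for e in drug_drug.get(r, set())}

print(has_side_effect("drug_B", "drug_A", "side_effect_1"))  # True
```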
The final network after linking entity vocabularies used by different databases has 645 drug and 19,085 protein nodes connected by 715,612 protein-protein, 4,651,131 drug-drug, and 18,596 drug-protein edges. | Provide a detailed description of the following dataset: Decagon |
TREC-10 | A question type classification dataset with 6 classes for questions about a person, location, numeric information, etc. The test split has 500 questions, and the training split has 5452 questions.
Paper: [Learning Question Classifiers](https://www.aclweb.org/anthology/C02-1150/) | Provide a detailed description of the following dataset: TREC-10 |
Deep Thermal Imaging Dataset | The **Deep Thermal Imaging dataset** consists of two main datasets:
- **DeepTherm I** (Indoor materials) - 15 indoor materials were used to create the DeepTherm I dataset, which consists of 14,860 processed thermal images (average count of data for each individual class: 990.7, SD=425.9; 400-600 images of each material per variable). The dataset was created by recording thermal image sequences in a room with different lighting levels (bright/dark), with/without air-conditioning, in different places (on a floor or a desk) and from different perspectives. The spatial temperature patterns were collected from different angles and different distances (between 10 and 50 cm, from the camera lens to the material). The data was collected five times over about 3 weeks.
- **DeepTherm II** (Outdoor materials) - 17 outdoor materials were targeted. The data collection process produced the DeepTherm II dataset which includes 26,584 labelled thermal images. The average number of collected spatial thermal patterns from each material was 1563.8 (SD=295.3; about 300-500 images of each material per each condition).
Image source: [Cho et al.](https://arxiv.org/pdf/1803.02310v1.pdf) | Provide a detailed description of the following dataset: Deep Thermal Imaging Dataset |
Fluent Speech Commands | Fluent Speech Commands is an open source audio dataset for spoken language understanding (SLU) experiments. Each utterance is labeled with "action", "object", and "location" values; for example, "turn the lights on in the kitchen" has the label {"action": "activate", "object": "lights", "location": "kitchen"}. A model must predict each of these values, and a prediction for an utterance is deemed to be correct only if all values are correct.
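The all-slots-correct rule described above amounts to exact-match accuracy over the three slots; a minimal sketch (not the official evaluation script) is given below.

```python
SLOTS = ("action", "object", "location")

def utterance_correct(pred, gold):
    """A prediction counts as correct only if every slot value matches."""
    return all(pred.get(slot) == gold.get(slot) for slot in SLOTS)

gold = {"action": "activate", "object": "lights", "location": "kitchen"}
pred = {"action": "activate", "object": "lights", "location": "bedroom"}
print(utterance_correct(pred, gold))  # False: one wrong slot makes the whole utterance wrong
```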
The task is very simple, but the dataset is large and flexible to allow for many types of experiments: for instance, one can vary the number of speakers, or remove all instances of a particular sentence and test whether a model trained on the remaining sentences can generalize. | Provide a detailed description of the following dataset: Fluent Speech Commands |
Endotect Polyp Segmentation Challenge Dataset | A challenge that consists of three tasks, each targeting a different requirement for in-clinic use. The first task involves classifying images from the GI tract into 23 distinct classes. The second task focuses on efficient classification measured by the amount of time spent processing each image. The last task relates to automatically segmenting polyps. | Provide a detailed description of the following dataset: Endotect Polyp Segmentation Challenge Dataset |
Medico automatic polyp segmentation challenge (dataset) | The “Medico automatic polyp segmentation challenge” aims to develop computer-aided diagnosis systems for automatic polyp segmentation to detect all types of polyps (for example, irregular polyp, smaller or flat polyps) with high efficiency and accuracy. The main goal of the challenge is to benchmark semantic segmentation algorithms on a publicly available dataset, emphasizing robustness, speed, and generalization.
Medico Multimedia Task at MediaEval 2020: Automatic Polyp Segmentation (https://arxiv.org/pdf/2012.15244.pdf) | Provide a detailed description of the following dataset: Medico automatic polyp segmentation challenge (dataset) |
WIT | **Wikipedia-based Image Text** (**WIT**) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its size enables WIT to be used as a pretraining dataset for multimodal machine learning models.
**Key Advantages**
A few unique advantages of WIT:
- The largest multimodal dataset (at the time of writing) by the number of image-text examples.
- Massively multilingual (the first of its kind), with coverage for 100+ languages.
- A collection of a diverse set of concepts and real-world entities.
- Brings forth challenging real-world test sets. | Provide a detailed description of the following dataset: WIT |
Unsplash Dataset | The Unsplash Dataset is created by over 200,000 contributing photographers and billions of searches across thousands of applications, uses, and contexts. It contains over 2M Unsplash images. | Provide a detailed description of the following dataset: Unsplash Dataset |
IDRiD | Indian Diabetic Retinopathy Image Dataset (IDRiD) dataset consists of typical diabetic retinopathy lesions and normal retinal structures annotated at a pixel level. This dataset also provides information on the disease severity of diabetic retinopathy and diabetic macular edema for each image. This dataset is perfect for the development and evaluation of image analysis algorithms for early detection of diabetic retinopathy. | Provide a detailed description of the following dataset: IDRiD |
ReDWeb | The ReDWeb dataset consists of 3600 RGB-RD image pairs collected from the Web. This dataset covers a wide range of scenes and features various non-rigid objects. | Provide a detailed description of the following dataset: ReDWeb |
HRWSI | The HRWSI dataset consists of about 21K diverse high-resolution RGB-D image pairs derived from the Web stereo images. Also, it provides sky segmentation masks, instance segmentation masks as well as invalid pixel masks. | Provide a detailed description of the following dataset: HRWSI |
Fongbe audio | Fongbe Data collected by Fréjus A. A LALEYE
This dataset contains Fongbe speech corpus with audio data and transcriptions. | Provide a detailed description of the following dataset: Fongbe audio |
DeepFluoroLabeling-IPCAI2020 | This collection contains data and code associated with the IPCAI/IJCARS 2020 paper “Automatic Annotation of Hip Anatomy in Fluoroscopy for Robust and Efficient 2D/3D Registration.” The data hosted here consists of annotated datasets of actual hip fluoroscopy, CT and derived data from six lower torso cadaveric specimens. Documentation and examples for using the dataset and Python code for training and testing the proposed models are also included. Higher-level information, including clinical motivations, prior works, algorithmic details, applications to 2D/3D registration, and experimental details, may be found in the companion paper which is available at [https://arxiv.org/abs/1911.07042](https://arxiv.org/abs/1911.07042) or [https://doi.org/10.1007/s11548-020-02162-7](https://doi.org/10.1007/s11548-020-02162-7). We hope that this code and data will be useful in the development of new computer-assisted capabilities that leverage fluoroscopy. | Provide a detailed description of the following dataset: DeepFluoroLabeling-IPCAI2020 |
Lens Flare Dataset | The Lens Flare dataset is an internal dataset for Flare Spot detection used in the paper "Automatic Flare Spot Artifact Detection and Removal in Photographs" by Patricia Vitoria and Coloma Ballester.
The dataset consists of 405 natural images in which a minimum of one flare spot artifact appears. The sources of light can be the sun, light bulbs or specular surfaces, among others. The images have been captured by different cameras with different technical specifications. | Provide a detailed description of the following dataset: Lens Flare Dataset |
SARA motion | Sara motion is a 3D motion dataset, named Synthetic Actors and Real Actions (SARA), for training a model to produce motion embeddings suitable for reasoning about motion similarity.
The motion sequence data for this dataset was generated by combining 18 different actors (i.e., action performing characters). The characters were rendered in a skeleton shape with Adobe Fuse software. Four action categories were selected (Combat, Adventure, Sport, and Dance) comprising a number of motion variations, where each action has a frame length of 32 or more. There are 4,428 base motions (e.g., dancing, jumping) in the SARA dataset. | Provide a detailed description of the following dataset: SARA motion |
NTU RGB+D 120 motion similarity | Motion similarity annotations for [NTU RGB+D 120 dataset](https://paperswithcode.com/dataset/ntu-rgb-d-120) to evaluate motion similarity in the real world. | Provide a detailed description of the following dataset: NTU RGB+D 120 motion similarity |
BU-BIL | **BU-BIL** is an image library which includes six datasets that represent three imaging modalities and six object types. Providers of the datasets were instructed to choose images that capture the various environmental conditions and imaging noise that arose in their studies. These experts were then asked to select objects from those images that reflect the natural diversity of shape and appearance that these objects can exhibit. The image subregions containing the identified objects were cropped to create the image library. The outcome was a library with 305 objects from 235 images. The authors verified by visual inspection that the image library includes a variety of object appearances, backgrounds, and properties distinguishing objects from the background.
Paper: [How to Collect Segmentations for Biomedical Images? A Benchmark Evaluating the Performance of Experts, Crowdsourced Non-Experts, and Algorithms](https://www.cs.bu.edu/fac/betke/papers/Gurari-etal-WACV-2015.pdf)
Image source: [How to Collect Segmentations for Biomedical Images? A Benchmark Evaluating the Performance of Experts, Crowdsourced Non-Experts, and Algorithms](https://www.cs.bu.edu/fac/betke/papers/Gurari-etal-WACV-2015.pdf) | Provide a detailed description of the following dataset: BU-BIL |
MTA-KDD'19 | Malware Traffic Analysis Knowledge Dataset 2019 (MTA-KDD'19) is an updated and refined dataset specifically tailored to train and evaluate machine learning based malware traffic analysis algorithms. To generate it, the authors started from the largest databases of network traffic captures available online, deriving a dataset with a set of widely-applicable features and then cleaning and preprocessing it to remove noise, handle missing data and keep its size as small as possible. The resulting dataset is not biased by any specific application (although specifically addressed to machine learning algorithms), and the entire process can run automatically to keep it updated. | Provide a detailed description of the following dataset: MTA-KDD'19 |
Cuff-Less Blood Pressure Estimation | ##Data Set Information:
The main goal of this data set is to provide clean and valid signals for designing cuff-less blood pressure estimation algorithms. The raw electrocardiogram (ECG), photoplethysmograph (PPG), and arterial blood pressure (ABP) signals were originally collected from physionet.org, and then some preprocessing and validation were performed on them. (For more information about the process please refer to our paper.)
##Attribute Information:
This database consists of a cell array of matrices, each cell is one record part.
In each matrix each row corresponds to one signal channel:
1: PPG signal, FS=125Hz; photoplethysmograph from fingertip
2: ABP signal, FS=125Hz; invasive arterial blood pressure (mmHg)
3: ECG signal, FS=125Hz; electrocardiogram from channel II
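A minimal loading sketch is given below, assuming a part file stored in a pre-v7.3 MATLAB format readable with `scipy.io.loadmat`; the file and variable names are hypothetical, and a v7.3 file would need `h5py` instead.

```python
from scipy.io import loadmat

mat = loadmat("part_1.mat")        # hypothetical file name
records = mat["p"].ravel()         # hypothetical variable name holding the cell array

rec = records[0]                   # one record: each row is one signal channel
ppg, abp, ecg = rec[0], rec[1], rec[2]   # PPG, ABP (mmHg), ECG; all sampled at 125 Hz
print(ppg.shape, abp.shape, ecg.shape)
```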
Note: the dataset is split into multiple parts to make it easier to load on machines with low memory. Each cell is a record. There might be more than one record per patient (which is not possible to distinguish). However, records of the same patient appear next to each other. N-fold cross testing and training is suggested to reduce the chance of the training set being contaminated by test patients. | Provide a detailed description of the following dataset: Cuff-Less Blood Pressure Estimation |
POTUS Corpus | The **POTUS Corpus** is a Database of Weekly Addresses for the Study of Stance in Politics and Virtual Agents.
One of the main challenges in the field of Embodied Conversational Agent (ECA) is to generate socially believable agents. The common strategy for agent behaviour synthesis is to rely on dedicated corpus analysis. Such a corpus is composed of multimedia files of socio-emotional behaviors which have been annotated by external observers. The underlying idea is to identify interaction information for the agent’s socio-emotional behavior by checking whether the intended socio-emotional behavior is actually perceived by humans. Then, the annotations can be used as learning classes for machine learning algorithms applied to the social signals. This paper introduces the POTUS Corpus composed of high-quality audio-video files of political addresses to the American people. Two protagonists are present in this database. First, it includes speeches of former president Barack Obama to the American people. Secondly, it provides videos of these same speeches given by a virtual agent named Rodrigue. The ECA reproduces the original address as closely as possible using social signals automatically extracted from the original one. Both are annotated for social attitudes, providing information about the stance observed in each file. It also provides the social signals automatically extracted from Obama’s addresses used to generate Rodrigue’s ones. | Provide a detailed description of the following dataset: POTUS Corpus |
ImageNet VIPriors subset | The training and validation data are subsets of the training split of the Imagenet 2012. The test set is taken from the validation split of the Imagenet 2012 dataset. Each data set includes 50 images per class. | Provide a detailed description of the following dataset: ImageNet VIPriors subset |
BiRD | **Bigram Relatedness Dataset** (**BiRD**) is a large, fine-grained, bigram relatedness dataset, using a comparative annotation technique called Best Worst Scaling. Each of BiRD's 3,345 English term pairs involves at least one bigram. BiRD is made freely available to foster further research on how meaning can be represented and how meaning can be composed.
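For context, Best-Worst Scaling scores are conventionally obtained by a simple counting procedure over the annotations; the sketch below illustrates that general method and is not necessarily the exact script used to build BiRD.

```python
from collections import Counter

def bws_scores(annotations):
    """annotations: list of (tuple_of_items, best_item, worst_item).
    Conventional BWS score per item: %times chosen best minus %times chosen worst,
    yielding a value in [-1, 1]."""
    best, worst, seen = Counter(), Counter(), Counter()
    for items, best_item, worst_item in annotations:
        best[best_item] += 1
        worst[worst_item] += 1
        for item in items:
            seen[item] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

anns = [(("pair_1", "pair_2", "pair_3", "pair_4"), "pair_1", "pair_4"),
        (("pair_1", "pair_2", "pair_3", "pair_4"), "pair_1", "pair_3")]
print(bws_scores(anns))  # e.g. pair_1 -> 1.0 (most related), pair_4 -> -0.5
```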
Image source: [http://saifmohammad.com/WebPages/BiRD.html](http://saifmohammad.com/WebPages/BiRD.html) | Provide a detailed description of the following dataset: BiRD |
Shiny dataset | The shiny folder contains 8 scenes with challenging view-dependent effects used in our paper. We also provide additional scenes in the shiny_extended folder.
The test images for each scene used in our paper consist of one of every eight images in alphabetical order.
Each scene contains the following directory structure:
```
scene/
dense/
cameras.bin
images.bin
points3D.bin
project.ini
images/
image_name1.png
image_name2.png
...
image_nameN.png
images_distort/
image_name1.png
image_name2.png
...
image_nameN.png
sparse/
cameras.bin
images.bin
points3D.bin
project.ini
database.db
hwf_cxcy.npy
planes.txt
poses_bounds.npy
```
- dense/ folder contains COLMAP's output [1] after the input images are undistorted.
- images/ folder contains undistorted images. (We use these images in our experiments.)
- images_distort/ folder contains raw images taken from a smartphone.
- sparse/ folder contains COLMAP's sparse reconstruction output [1].
Our poses_bounds.npy is similar to the LLFF[2] file format with a slight modification. This file stores a Nx14 numpy array, where N is the number of cameras. Each row in this array is split into two parts of sizes 12 and 2. The first part, when reshaped into 3x4, represents the camera extrinsic (camera-to-world transformation), and the second part with two dimensions stores the distances from that point of view to the first and last planes (near, far). These distances are computed automatically based on the scene’s statistics using LLFF’s code. (For details on how these are computed, see [this code](https://git.io/JqLKF))
hwf_cxcy.npy stores the camera intrinsic (height, width, focal length, principal point x, principal point y) in a 1x5 numpy array.
planes.txt stores information about the MPI planes. The first two numbers are the distances from a reference camera to the first and last planes (near, far). The third number tells whether the planes are placed equidistantly in the depth space (0) or inverse depth space (1). The last number is the padding size in pixels on all four sides of each of the MPI planes. I.e., the total dimension of each plane is (H + 2 * padding, W + 2 * padding).
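A minimal sketch of reading these camera files with NumPy, following the layout described above, is shown below; the scene path is hypothetical, and planes.txt is assumed to be whitespace-separated.

```python
import numpy as np

scene = "shiny/scene"  # hypothetical path to one scene directory

poses_bounds = np.load(f"{scene}/poses_bounds.npy")   # shape (N, 14)
extrinsics = poses_bounds[:, :12].reshape(-1, 3, 4)   # camera-to-world transforms
near_far = poses_bounds[:, 12:]                       # per-view near/far plane distances

h, w, focal, cx, cy = np.load(f"{scene}/hwf_cxcy.npy").ravel()  # camera intrinsics

with open(f"{scene}/planes.txt") as f:
    near, far, invdepth_flag, padding = f.read().split()[:4]

print(extrinsics.shape, near_far.shape, (h, w, focal, cx, cy))
```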
References:
- [1]: [COLMAP structure from motion (Schönberger and Frahm, 2016)](https://colmap.github.io/).
- [2]: [Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines (Mildenhall et al., 2019)](https://arxiv.org/abs/1905.00889). | Provide a detailed description of the following dataset: Shiny dataset |
MATH | MATH is a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. | Provide a detailed description of the following dataset: MATH |
PhysioNet Challenge 2016 | Introduction
The 2016 PhysioNet/CinC Challenge aims to encourage the development of algorithms to classify heart sound recordings collected from a variety of clinical or nonclinical (such as in-home visits) environments. The aim is to identify, from a single short recording (10-60s) from a single precordial location, whether the subject of the recording should be referred on for an expert diagnosis.
During the cardiac cycle, the heart firstly generates the electrical activity and then the electrical activity causes atrial and ventricular contractions. This in turn forces blood between the chambers of the heart and around the body. The opening and closure of the heart valves is associated with accelerations-decelerations of blood, giving rise to vibrations of the entire cardiac structure (the heart sounds and murmurs) [1]. These vibrations are audible at the chest wall, and listening for specific heart sounds can give an indication of the health of the heart. The phonocardiogram (PCG) is the graphical representation of a heart sound recording. Figure 1 illustrates a short section of a PCG recording. | Provide a detailed description of the following dataset: PhysioNet Challenge 2016 |
IXI | **IXI Dataset** is a collection of 600 MR brain images from normal, healthy subjects. The MR image acquisition protocol for each subject includes:
* T1, T2 and PD-weighted images
* MRA images
* Diffusion-weighted images (15 directions)
The data has been collected at three different hospitals in London:
* Hammersmith Hospital using a Philips 3T system (details of scanner parameters)
* Guy’s Hospital using a Philips 1.5T system (details of scanner parameters)
* Institute of Psychiatry using a GE 1.5T system (details of the scan parameters not available at the moment)
The data has been collected as part of the project:
* IXI – Information eXtraction from Images (EPSRC GR/S21533/02)
The images in NIfTI format can be downloaded from [here](https://brain-development.org/ixi-dataset/).
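For example, the volumes can be loaded with the `nibabel` package; the file name below is hypothetical.

```python
import nibabel as nib  # pip install nibabel

img = nib.load("IXI-T1/example_subject-T1.nii.gz")  # hypothetical file name
volume = img.get_fdata()                            # 3D voxel array
print(volume.shape, img.header.get_zooms())         # matrix size and voxel spacing
```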
This data is made available under the Creative Commons CC BY-SA 3.0 license. If you use the IXI data please acknowledge the source of the IXI data. | Provide a detailed description of the following dataset: IXI |
LIFULL HOME'S | The National Institute of Informatics provides LIFULL HOME'S Dataset to researchers, which was offered by [LIFULL Co., Ltd.](https://lifull.com/en/) for promoting research in informatics and the related fields.
The dataset contains the data of [LIFULL HOME'S](https://www.homes.co.jp/), a Real Estate Information Service in Japan.
1. Snapshot Data of Rentals (snapshot of 2015-09)
Rental data (5.33 million all over Japan): rental fee, area, location, age, floor plan, structure, facilities, etc.; approx. 1.6GB .tsv format files.
Image data (83 million files): floor plan image, room view, etc. of all the above items; approx. 210GB .jpg format files, max size: 120x120.
2. High Resolution Floor Plan Image Data
High resolution version data of floor plan image (5.31 million files) included in Snapshot Data of Rentals; approx. 140GB .jpg format files. Additional application required to use this data (see Application section below).
3. Monthly Data of Rentals and Sales (2015-07 - 2017-06, 24 months)
Property data of rentals and sales (5.33 million all over Japan): rental fee/price, area, location, age, floor plan, structure, facilities, etc.; approx. 1.7 - 4.5GB .tsv format files for respective months.
In addition, LIFULL Co., Ltd. provides a sample script for classifying image types via Github:
https://github.com/Littel-Laboratory/homes-dataset-tools | Provide a detailed description of the following dataset: LIFULL HOME'S |
CosmoFlow | The latest CosmoFlow dataset includes around 10,000 cosmological N-body dark matter simulations. The simulations are run using MUSIC to generate the initial conditions, and are evolved with pyCOLA, a multithreaded Python/Cython N-body code. The output of these simulations is then binned into a 3D histogram of particle counts in a cube of size 512x512x512, which is sampled at 4 different redshifts. | Provide a detailed description of the following dataset: CosmoFlow |
Sketch2aia (Mobile User Interface Sketches) | Dataset of 374 photos of hand-drawn sketches of App Inventor apps used for development of the Sketch2aia model for automatic generation of App Inventor wireframes from hand-drawn sketches.
Data format
Training: 237 images in JPG (.jpg) format with 720×1280 pixels, each accompanied by a JSON (.json) file with manually attributed bounding box annotation for 10 different classes of UI elements (Screen, Label, Button, Switch, Slider, TextBox, CheckBox, ListPicker, Image and Map), used to train the Sketch2aia model.
Validation: 42 images in JPG (.jpg) format with 720×1280 pixels, each accompanied by a JSON (.json) file with manually attributed bounding box annotation for 10 different classes of UI elements (Screen, Label, Button, Switch, Slider, TextBox, CheckBox, ListPicker, Image and Map), used to test the Sketch2aia model.
Additional Images: 95 images in JPG (.jpg) format with 720×1280 pixels. Some images are accompanied by a JSON (.json) file with manually attributed bounding box annotation for 10 different classes of UI elements (Screen, Label, Button, Switch, Slider, TextBox, CheckBox, ListPicker, Image and Map), while others have not yet been labeled. This portion of the dataset was collected during user evaluation of the Sketch2aia model, and have not been directly used to train or test the object detection model. | Provide a detailed description of the following dataset: Sketch2aia (Mobile User Interface Sketches) |
An Amharic News Text classification Dataset | In NLP, text classification is one of the primary problems we try to solve, and its uses in language analyses are indisputable. The lack of labeled training data has made it harder to do these tasks in low-resource languages like Amharic. The task of collecting, labeling, annotating, and making this kind of data valuable will encourage junior researchers, schools, and machine learning practitioners to implement existing classification models in their language. In this short paper, we aim to introduce the Amharic text classification dataset that consists of more than 50k news articles that were categorized into 6 classes. This dataset is made available with easy baseline performances to encourage studies and better performance experiments. | Provide a detailed description of the following dataset: An Amharic News Text classification Dataset |
PHOENIX14T | Over a period of three years (2009 - 2011), the daily news and weather forecast airings of the German public TV station PHOENIX featuring sign language interpretation were recorded, and the weather forecasts of a subset of 386 editions were transcribed using gloss notation. Furthermore, we used automatic speech recognition with manual cleaning to transcribe the original German speech. As such, this corpus allows training end-to-end sign language translation systems from sign language video input to spoken language.
The signing is recorded by a stationary color camera placed in front of the sign language interpreters. Interpreters wear dark clothes in front of an artificial grey background with color transition. All recorded videos are at 25 frames per second and the size of the frames is 210 by 260 pixels. Each frame shows the interpreter box only. | Provide a detailed description of the following dataset: PHOENIX14T |
CUAD | **Contract Understanding Atticus Dataset** (**CUAD**) is a dataset for legal contract review. CUAD was created with dozens of legal experts from The Atticus Project
and consists of over 13,000 annotations. The task is to highlight salient portions of a contract that are important for a human to review. | Provide a detailed description of the following dataset: CUAD |
BIKED | **BIKED** is a dataset comprised of 4500 individually designed bicycle models sourced from hundreds of designers. BIKED enables a variety of data-driven design applications for bicycles and generally supports the development of data-driven design methods. The dataset is comprised of a variety of design information including assembly images, component images, numerical design parameters, and class labels. | Provide a detailed description of the following dataset: BIKED |
THEOStereo | THEOStereo is a dataset providing synthetic stereo image pairs and their corresponding scene depth and will be published along with [1]. All images follow the omnidirectional camera model. In total, there are *31,250* omnidirectional image pairs. The training set contains *25,000* image pairs. For validation and testing there are *3,125* image pairs each. For each pair, there is a ground truth depth map describing the pixel-wise distance of the object along the left camera's z-axis. The virtual omnidirectional cameras exhibit a FOV of *180* degrees and can be described using Kannala's camera model [2]. The distortion parameters are *k_1 = 1* and *k_2 = k_3 = k_4 = k_5 = 0*. The length of the stereo camera's baseline was *0.3* AU (approx. *15* cm, not *30* cm!). Please do not forget to cite [1] if you use the dataset in your work. Thank you.
## Structure of the Dataset
```
.
├── README.md
├── test
│ ├── depth_exr_abs
│ ├── img_stereo_webp
│ └── img_webp
├── train
│ ├── depth_exr_abs
│ ├── img_stereo_webp
│ └── img_webp
└── valid
├── depth_exr_abs
├── img_stereo_webp
└── img_webp
```
The `depth_exr_abs` directories contain the depth maps given in meters. The depth maps are referenced to the left camera's image. All images of the left camera are stored in `img_webp`; the right camera's images can be found in `img_stereo_webp`.
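A minimal loading sketch for one sample is shown below; the file names are hypothetical, and reading the .exr depth maps with OpenCV assumes a build with OpenEXR support (recent versions additionally require the `OPENCV_IO_ENABLE_OPENEXR` environment variable).

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2 for .exr support
import cv2

sample = "000000"  # hypothetical sample name

left = cv2.imread(f"train/img_webp/{sample}.webp")            # left omnidirectional view
right = cv2.imread(f"train/img_stereo_webp/{sample}.webp")    # right omnidirectional view
depth = cv2.imread(f"train/depth_exr_abs/{sample}.exr",
                   cv2.IMREAD_ANYDEPTH)                       # depth in meters, left camera frame
print(left.shape, right.shape, depth.shape)
```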
## License
This dataset is licensed under CC BY 4.0.
For details, please visit <https://creativecommons.org/licenses/by/4.0/>.
[](https://creativecommons.org/licenses/by/4.0/)
## Conference paper
The conference paper can be downloaded from [here](https://www.scitepress.org/Papers/2021/103248/103248.pdf).
## BibTex
If you use the dataset in your work, we would kindly ask you to cite [1].
You might want to use the following BibTex entry:
```bibtex
@inproceedings{seuffert_study_2021,
address = {Online Conference},
title = {A {Study} on the {Influence} of {Omnidirectional} {Distortion} on {CNN}-based {Stereo} {Vision}},
isbn = {978-989-758-488-6},
doi = {10.5220/0010324808090816},
booktitle = {Proceedings of the 16th {International} {Joint} {Conference} on {Computer} {Vision}, {Imaging} and {Computer} {Graphics} {Theory} and {Applications}, {VISIGRAPP} 2021, {Volume} 5: {VISAPP}},
publisher = {SciTePress},
author = {Seuffert, Julian Bruno and Perez Grassi, Ana Cecilia and Scheck, Tobias and Hirtz, Gangolf},
year = {2021},
month = {2},
pages = {809--816}
}
```
## References
[1] J. B. Seuffert, A. C. Perez Grassi, T. Scheck, and G. Hirtz, “A Study on the Influence of Omnidirectional Distortion on CNN-based Stereo Vision,” in *Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2021, Volume 5: VISAPP*, Online Conference, Feb. 2021, pp. 809–816, doi: 10.5220/0010324808090816.
[2] J. Kannala, J. Heikkilä, and S. S. Brandt, “Geometric Camera Calibration,” in *Wiley Encyclopedia of Computer Science and Engineering*, B. W. Wah, Ed. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2008. | Provide a detailed description of the following dataset: THEOStereo
PCD | The Arabic dataset is scraped mainly from الموسوعة الشعرية and الديوان. After merging both, the total number of verses is 1,831,770 poetic verses. Each verse is labeled with its meter, the poet who wrote it, and the age in which it was written. There are 22 meters, 3,701 poets, and 11 ages: Pre-Islamic, Islamic, Umayyad, Mamluk, Abbasid, Ayyubid, Ottoman, Andalusian, the era between Umayyad and Abbasid, Fatimid, and finally the modern age. We are only interested in the 16 classic meters attributed to Al-Farahidi, which comprise the majority of the dataset with a total of around 1.7M verses. Note that the diacritization of the verses is not consistent: a verse can carry full diacritics, partial diacritics, or none at all. | Provide a detailed description of the following dataset: PCD
ARCH | **ARCH** is a computational pathology (CP) multiple instance captioning dataset to facilitate dense supervision of CP tasks. Existing CP datasets focus on narrow tasks; ARCH on the other hand contains dense diagnostic and morphological descriptions for a range of stains, tissue types and pathologies. | Provide a detailed description of the following dataset: ARCH |
UASOL | UASOL is an RGB-D stereo dataset that contains 160,902 frames, filmed at 33 different scenes, each with between 2k and 10k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc. The images were extracted from video files recorded at 15 fps in HD2K resolution, with a size of 2280 × 1282 pixels. The dataset also provides a GPS geolocalization tag for each second of the sequences and reflects different climatological conditions. Up to 4 different persons filmed the dataset at different moments of the day.
We propose a [train, validation and test split](https://www.nature.com/articles/s41597-019-0168-5/tables/4) to train the network.
Additionally, we introduce a subset of [676 pairs of RGB Stereo images and their respective depth](https://osf.io/64532/files/), which we extracted randomly from the entire dataset. This given test set is introduced to make comparability possible between the different methods trained with the dataset. | Provide a detailed description of the following dataset: UASOL |
SUM | SUM is a new benchmark dataset of semantic urban meshes which covers about 4 km² in Helsinki (Finland), with six classes: Ground, Vegetation, Building, Water, Vehicle, and Boat.
The authors used Helsinki 3D textured meshes as input and annotated them as a benchmark dataset of semantic urban meshes. Helsinki's raw dataset covers about 12 km² and was generated in 2017 from oblique aerial images with about a 7.5 cm ground sampling distance (GSD), using off-the-shelf commercial software, namely ContextCapture.
The entire region of Helsinki is split into tiles, each covering about 250 m².
Image source: [Gao et al.](https://arxiv.org/pdf/2103.00355v1.pdf) | Provide a detailed description of the following dataset: SUM |
BLURB | **BLURB** is a collection of resources for biomedical natural language processing. In general domains such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models such as BERTs provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.
Inspired by prior efforts toward this direction (e.g., BLUE), BLURB (short for Biomedical Language Understanding and Reasoning Benchmark) was created. BLURB comprises a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact. | Provide a detailed description of the following dataset: BLURB
GAD | **GAD**, or **Gene Associations Database**, is a corpus of gene-disease associations curated from genetic association studies. | Provide a detailed description of the following dataset: GAD |
BC2GM | Created by Smith et al. in 2008, the BioCreative II Gene Mention Recognition (BC2GM) dataset contains data where participants are asked to identify a gene mention in a sentence by giving its start and end characters. The training set consists of a set of sentences, and for each sentence a set of gene mentions (GENE annotations). The dataset is in English, and registration is required for access. | Provide a detailed description of the following dataset: BC2GM
Kaggle EyePACS | Diabetic retinopathy is the leading cause of blindness in the working-age population of the developed world. It is estimated to affect over 93 million people.
The US Center for Disease Control and Prevention estimates that 29.1 million people in the US have diabetes and the World Health Organization estimates that 347 million people have the disease worldwide. Diabetic Retinopathy (DR) is an eye disease associated with long-standing diabetes. Around 40% to 45% of Americans with diabetes have some stage of the disease. Progression to vision impairment can be slowed or averted if DR is detected in time, however this can be difficult as the disease often shows few symptoms until it is too late to provide effective treatment.
Currently, detecting DR is a time-consuming and manual process that requires a trained clinician to examine and evaluate digital color fundus photographs of the retina. By the time human readers submit their reviews, often a day or two later, the delayed results lead to lost follow up, miscommunication, and delayed treatment.
Clinicians can identify DR by the presence of lesions associated with the vascular abnormalities caused by the disease. While this approach is effective, its resource demands are high. The expertise and equipment required are often lacking in areas where the rate of diabetes in local populations is high and DR detection is most needed. As the number of individuals with diabetes continues to grow, the infrastructure needed to prevent blindness due to DR will become even more insufficient.
The need for a comprehensive and automated method of DR screening has long been recognized, and previous efforts have made good progress using image classification, pattern recognition, and machine learning. With color fundus photography as input, the goal of this competition is to push an automated detection system to the limit of what is possible – ideally resulting in models with realistic clinical potential. The winning models will be open sourced to maximize the impact such a model can have on improving DR detection.
Acknowledgements
This competition is sponsored by the California Healthcare Foundation.

Retinal images were provided by EyePACS, a free platform for retinopathy screening.
 | Provide a detailed description of the following dataset: Kaggle EyePACS |
THFOOD-50 | Fine-Grained Thai Food Image Classification Datasets
THFOOD-50 contains 15,770 images of 50 famous Thai dishes. | Provide a detailed description of the following dataset: THFOOD-50
SRD | SRD is a dataset for shadow removal that contains 3088 shadow and shadow-free image pairs. | Provide a detailed description of the following dataset: SRD |
BL30K | BL30K is a synthetic dataset rendered using Blender with ShapeNet's data. We break the dataset into six segments, each with approximately 5K videos. The videos are organized in a similar format as DAVIS and YouTubeVOS, so dataloaders for those datasets can be used directly. Each video is 160 frames long, and each frame has a resolution of 768×512. There are 3-5 objects per video, and each object has a random smooth trajectory -- we tried to optimize the trajectories in a greedy fashion to minimize object intersection (not guaranteed), with occlusions still possible (they happen a lot in reality). See [MiVOS](https://github.com/hkchengrex/MiVOS) for details. | Provide a detailed description of the following dataset: BL30K
BIG | A high-resolution semantic segmentation dataset with 50 validation and 100 test objects. Image resolution in BIG ranges from 2048×1600 to 5000×3600. Every image in the dataset has been carefully labeled by a professional while keeping the same guidelines as PASCAL VOC 2012 without the void region. | Provide a detailed description of the following dataset: BIG |
COCO Object Detection VIPriors subset | The training and validation data are subsets of the training split of the MS COCO dataset (2017 release, bounding boxes only). The test set is taken from the validation split of the MS COCO dataset. | Provide a detailed description of the following dataset: COCO Object Detection VIPriors subset |
Cityscapes VIPriors subset | The training and validation data are subsets of the training split of the Cityscapes dataset. The test set is taken from the validation split of the Cityscapes dataset. | Provide a detailed description of the following dataset: Cityscapes VIPriors subset |
UCF-101 VIPriors subset | The VIPriors Action Recognition Challenge uses a subset of the UCF101 action recognition dataset:
Train set: ~4.8K clips.
Validation set: ~4.7K clips.
Test set: ~3.8K clips. | Provide a detailed description of the following dataset: UCF-101 VIPriors subset |
CLEVR-Hans | The CLEVR-Hans data set is a novel confounded visual scene data set, which captures complex compositions of different objects. This data set consists of CLEVR images divided into several classes.
The membership of a class is based on combinations of objects’ attributes and relations. Additionally, certain classes within the data set are confounded. Thus, within the data set, consisting of train, validation, and test splits, all train, and validation images of confounded classes will be confounded with a specific attribute or combination of attributes.
Each class is represented by 3000 training images, 750 validation images, and 750 test images. The training, validation, and test set splits contain 9000, 2250, and 2250 samples, respectively, for CLEVR-Hans3 and 21000, 5250, and 5250 samples for CLEVR-Hans7. The class distribution is balanced for all data splits.
For CLEVR-Hans classes for which class rules contain more than three objects, the number of objects to be placed per scene was randomly chosen between the minimal required number of objects for that class and ten, rather than between three and ten, as in the original CLEVR data set.
Finally, the images were created such that the exact combinations of the class rules did not occur in images of other classes. It is possible that a subset of objects from one class rule occur in an image of another class. However, it is not possible that more than one complete class rule is contained in an image. | Provide a detailed description of the following dataset: CLEVR-Hans |
Tsinghua Dogs | Tsinghua Dogs is a fine-grained classification dataset for dogs, over 65% of whose images are collected from people's real life. Each dog breed in the dataset contains at least 200 images and a maximum of 7,449 images, basically in proportion to their frequency of occurrence in China, so it significantly increases the diversity for each breed over existing datasets. Furthermore, Tsinghua Dogs provides annotated bounding boxes of the dog's whole body and head in each image, which can be used for supervising the training of learning algorithms as well as testing them. | Provide a detailed description of the following dataset: Tsinghua Dogs
ADAM | ADAM is organized as a half day Challenge, a Satellite Event of the ISBI 2020 conference in Iowa City, Iowa, USA.
The ADAM challenge focuses on the investigation and development of algorithms associated with the diagnosis of Age-related Macular degeneration (AMD) and segmentation of lesions in fundus photos from AMD patients. The goal of the challenge is to evaluate and compare automated algorithms for the detection of AMD on a common dataset of retinal fundus images. We invite the medical image analysis community to participate by developing and testing existing and novel automated fundus classification and segmentation methods.
Instructions:
ADAM: Automatic Detection challenge on Age-related Macular degeneration
Link: https://amd.grand-challenge.org
Age-related macular degeneration, abbreviated as AMD, is a degenerative disorder in the macular region. It mainly occurs in people older than 45 years, and its incidence rate is even higher than that of diabetic retinopathy in the elderly.
The etiology of AMD is not fully understood, which could be related to multiple factors, including genetics, chronic photodestruction effect, and nutritional disorder. AMD is classified into Dry AMD and Wet AMD. Dry AMD (also called nonexudative AMD) is not neovascular. It is characterized by progressive atrophy of retinal pigment epithelium (RPE). In the late stage, drusen and the large area of atrophy could be observed under ophthalmoscopy. Wet AMD (also called neovascular or exudative AMD), is characterized by active neovascularization under RPE, subsequently causing exudation, hemorrhage, and scarring, and will eventually cause irreversible damage to the photoreceptors and rapid vision loss if left untreated.
An early diagnosis of AMD is crucial to treatment and prognosis. Fundus photo is one of the basic examinations. The current dataset is composed of AMD and non-AMD (myopia, normal control, etc.) photos. Typical signs of AMD that can be found in these photos include drusen, exudation, hemorrhage, etc.
The ADAM challenge has 4 tasks:
Task 1: Classification of AMD and non-AMD fundus images.
Task 2: Detection and segmentation of optic disc.
Task 3: Localization of fovea.
Task 4: Detection and Segmentation of lesions from fundus images. | Provide a detailed description of the following dataset: ADAM |
DiCOVA | The DiCOVA Challenge dataset is derived from the Coswara dataset, a crowd-sourced dataset of sound recordings from COVID-19 positive and non-COVID-19 individuals. The Coswara data is collected using a web application, launched in April 2020 and accessible through the internet by anyone around the globe. The volunteering subjects are advised to record their respiratory sounds in a quiet environment.
Each subject provides 9 audio recordings, namely, (a) shallow and deep breathing (2 nos.), (b) shallow and heavy cough (2 nos.), (c) sustained phonation of vowels [æ] (as in bat), [i] (as in beet), and [u] (as in boot) (3 nos.), and (d) fast and normal pace 1 to 20 number counting (2 nos.).
The DiCOVA Challenge has two tracks. The participants also provided metadata corresponding to their current health status (including COVID-19 status, any other respiratory ailments, and symptoms), demographic information, age, and gender. From the Coswara dataset, two track datasets have been created:
(a) Track-1 dataset: composed of cough sound recordings from 1,040 subjects.
(b) Track-2 dataset: composed of deep breathing, vowel [i], and number counting (normal pace) speech recordings from 1,199 subjects. | Provide a detailed description of the following dataset: DiCOVA
Digital Peter | Digital Peter is a dataset of Peter the Great's manuscripts annotated for segmentation and text recognition. The dataset may be useful for researchers training handwriting text recognition models and as a benchmark for comparing different models. It consists of 9,694 images and text files corresponding to lines in historical documents. The dataset includes Peter's handwritten materials covering the period from 1709 to 1713.
An open machine learning competition, Digital Peter, was held based on this dataset. | Provide a detailed description of the following dataset: Digital Peter
OGB-LSC | OGB Large-Scale Challenge (OGB-LSC) is a collection of three real-world datasets for advancing the state-of-the-art in large-scale graph ML. OGB-LSC provides graph datasets that are orders of magnitude larger than existing ones and covers three core graph learning tasks -- link prediction, graph regression, and node classification.
OGB-LSC consists of three datasets: MAG240M-LSC, WikiKG90M-LSC, and PCQM4M-LSC. Each dataset offers an independent task.
* MAG240M-LSC is a heterogeneous academic graph, and the task is to predict the subject areas of papers situated in the heterogeneous graph (node classification).
* WikiKG90M-LSC is a knowledge graph, and the task is to impute missing triplets (link prediction).
* PCQM4M-LSC is a quantum chemistry dataset, and the task is to predict an important molecular property, the HOMO-LUMO gap, of a given molecule (graph regression). A hedged loading sketch is given below.
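Below is a minimal, hedged sketch of loading one of the datasets with the `ogb` Python package; the class name and return values follow the package documentation for the original LSC release and may differ in later versions.

```python
# Hedged sketch, assuming the `ogb` package (pip install ogb); the data are
# downloaded on first use, and API details may vary between ogb versions.
from ogb.lsc import PCQM4MDataset

dataset = PCQM4MDataset(root="dataset/", only_smiles=True)  # raw SMILES strings + targets
split = dataset.get_idx_split()       # dict with 'train' / 'valid' / 'test' index arrays

i = int(split["train"][0])
smiles, homo_lumo_gap = dataset[i]    # molecule and its HOMO-LUMO gap (the regression target)
print(smiles, homo_lumo_gap)
```

The same module also provides loaders for MAG240M-LSC and WikiKG90M-LSC. | Provide a detailed description of the following dataset: OGB-LSC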
TeachMyAgent | TeachMyAgent (TA) is a benchmark for Automatic Curriculum Learning (ACL) algorithms leveraging procedural task generation. It includes 1) challenge-specific unit-tests using variants of a procedural Box2D bipedal walker environment, and 2) a new procedural Parkour environment combining most ACL challenges, making it ideal for global performance assessment. | Provide a detailed description of the following dataset: TeachMyAgent |
L1000 | The **L1000** dataset consists of ~1,400,000 gene-expression profiles on the responses of ~50 human cell lines to one of ~20,000 compounds across a range of concentrations. The L1000 dataset and its normalized versions have recently been widely used in drug repurposing and discovery.
Description from: [A deep learning framework for high-throughput mechanism-driven phenotype compound screening and its application to COVID-19 drug repurposing](https://www.nature.com/articles/s42256-020-00285-9)
Publication introducing the dataset: [A Next Generation Connectivity Map: L1000 Platform and the First 1,000,000 Profiles](https://pubmed.ncbi.nlm.nih.gov/29195078/) | Provide a detailed description of the following dataset: L1000 |
DSBEC | The data set consists of 6257 labeled images of Bose-Einstein condensates (BECs) with and without solitonic excitations, including kink solitons and solitonic vortices. Each element of the data set contains a masked image (132x164 pixels) of 2D atomic density used to train the machine learning model used in the paper "Machine-learning enhanced dark soliton detection in Bose-Einstein condensates," (https://arxiv.org/abs/2101.05404), and a label indicating the class a given image belongs to (0 indicates no solitons, 1 indicates a single soliton, and 2 indicates other excitations). The data structure file and project description are included with the data.
This data set was used to train a deep convolutional neural network to automatically recognize whether or not a lone dark soliton has been created in BECs that was then implemented within an automated soliton detection and positioning system (see https://arxiv.org/abs/2101.05404 for details). | Provide a detailed description of the following dataset: DSBEC |
ConScenD | The ConScenD dataset consists of over 340 scenarios extracted from the naturalistic highway dataset highD. These scenarios can be used for testing related to the introduction of Level 3 Automated Lane Keeping Systems according to the UNECE R157 ALKS Regulation. | Provide a detailed description of the following dataset: ConScenD
LDC2020T02 | Abstract Meaning Representation (AMR) Annotation Release 3.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University of Colorado's Computational Language and Educational Research group and the Information Sciences Institute at the University of Southern California. It contains a sembank (semantic treebank) of over 59,255 English natural language sentences from broadcast conversations, newswire, weblogs, web discussion forums, fiction and web text. This release adds new data to, and updates material contained in, Abstract Meaning Representation 2.0 (LDC2017T10), specifically: more annotations on new and prior data, new or improved PropBank-style frames, enhanced quality control, and multi-sentence annotations.
AMR captures "who is doing what to whom" in a sentence. Each sentence is paired with a graph that represents its whole-sentence meaning in a tree-structure. AMR utilizes PropBank frames, non-core semantic roles, within-sentence coreference, named entity annotation, modality, negation, questions, quantities, and so on to represent the semantic structure of a sentence largely independent of its syntax. | Provide a detailed description of the following dataset: LDC2020T02 |
KoDF | The Korean DeepFake Detection Dataset (KoDF) is a large-scale collection of synthesized and real videos focused on Korean subjects, used for the task of deepfake detection.
The dataset consists of 62,166 real videos and 175,776 fake videos from 403 subjects. The fake videos are created using 6 different methods: FaceSwap, DeepFaceLab, FSGAN, FOMM, ATFHP and Wav2Lip. | Provide a detailed description of the following dataset: KoDF |
HDA Facial Tattoo and Painting Database | The Hochschule Darmstadt (HDA) facial tattoo and paintings database contains 500 pairs of facial images of individuals with and without facial tattoos or paintings. The database was collected from multiple online sources. | Provide a detailed description of the following dataset: HDA Facial Tattoo and Painting Database |
Gowalla | Gowalla is a location-based social networking website where users share their locations by checking in. The friendship network is undirected and was collected using their public API, and consists of 196,591 nodes and 950,327 edges. We have collected a total of 6,442,890 check-ins of these users over the period of Feb. 2009 - Oct. 2010. | Provide a detailed description of the following dataset: Gowalla
DODa | Darija Open Dataset (**DODa**) is an open-source project for the Moroccan dialect. With more than 10,000 entries DODa is arguably the largest open-source collaborative project for Darija-English translation built for Natural Language Processing purposes. In fact, besides semantic categorization, DODa also adopts a syntactic one, presents words under different spellings, offers verb-to-noun and masculine-to-feminine correspondences, contains the conjugation of hundreds of verbs in different tenses, and many other subsets to help researchers better understand and study Moroccan dialect. | Provide a detailed description of the following dataset: DODa |
LeT-Mi | Levantine Twitter dataset for Misogynistic language (LeT-Mi) is an Arabic Levantine Twitter dataset for misogynistic language, intended to be the first benchmark dataset for Arabic misogyny.
⚠️ Note: To be made publicly available on Github | Provide a detailed description of the following dataset: LeT-Mi |
SVT | **The Street View Text** (**SVT**) dataset was harvested from Google Street View. Image text in this data exhibits high variability and often has low resolution. In dealing with outdoor street level imagery, we note two characteristics. (1) Image text often comes from business signage and (2) business names are easily available through geographic business searches. These factors make the SVT set uniquely suited for word spotting in the wild: given a street view image, the goal is to identify words from nearby businesses.
Note: the dataset has undergone revision since the time it was evaluated in this publication. Please consult the [ICCV2011 paper](http://vision.ucsd.edu/~kai/pubs/wang_iccv2011.pdf) for most up-to-date results. | Provide a detailed description of the following dataset: SVT |
RETWEET | **RETWEET** is a dataset of tweets and overall predominant sentiment of their replies.
SUMMARY
------
**WHAT:** Message-level Polarity Classification.
**GOAL:** To predict the predominant sentiment among (potential) first-order replies to a given tweet.
**IDEA:** Mitigate the problem of lacking labeled training data by treating the unsupervised nature of the problem as a supervised learning case.
### APPROACH:
1. Train a tweet classifier.
2. Automatically label the replies using the classifier trained in the first part.
3. Choose a final label representing the general predominant sentiment of the replies of every tweet (a toy sketch of this aggregation step is shown below).
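The following sketch illustrates only the aggregation idea in step 3; the paper's actual selection procedure (Algorithm 1) is not reproduced here, and a plain majority vote over the automatically labeled replies is used as a stand-in assumption.

```python
# Toy stand-in for step 3 (assumption): take the majority label among the
# automatically labeled replies; the paper's actual Algorithm 1 may differ.
from collections import Counter

def predominant_sentiment(reply_labels):
    """reply_labels: list of 'positive' / 'neutral' / 'negative' strings, one per reply."""
    label, _ = Counter(reply_labels).most_common(1)[0]
    return label

print(predominant_sentiment(["negative", "positive", "negative", "neutral"]))  # -> negative
```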
### DATA COLLECTION
To download all of the replies to a tweet, the Search API should be used. However, the Search API is limited to 75000 requests per hour, which causes the mining and downloading process to be slow.
Furthermore, using the Twitter API, there is no possibility of downloading absolute random data. Therefore, we try to make the procedure as random as possible by utilizing two different strategies for data downloading and using them in an intermixed manner.
1. Our first strategy is based on a sample of English tweets obtained by filtering the Twitter stream via [a list of cultural keywords](https://www.wiley.com/en-us/New+Keywords%3A+A+Revised+Vocabulary+of+Culture+and+Society-p-9780631225690). This list consists of 147 words that are deemed to play a "pivotal role in discussions of culture and society", covering diverse words such as *aesthetics*, *environment*, *feminism*, *power*, *tourism*, or *youth*. We extracted all tweets in 2019 that have a minimum of 20 first-order replies in the dataset. The data come with an obvious caveat: both the source tweet and all the replies must contain at least one word from the list of keywords. As a result, it is highly unlikely that the list of replies for any given source is exhaustive, i.e., there might be many more first-order replies to the source tweet that are not in the dataset.
2. As our second approach, we use the [GetOldTweets3](https://github.com/Mottl/GetOldTweets3/tree/master/GetOldTweets3) library to download all the replies corresponding to every tweet. We define a few restrictions to add randomization to the process. First, every tweet and every reply should contain at least 20 strings. This is due to the fact that our automatic tweet classifier, explained in the paper, is optimized based on the message-level classification paradigm; therefore, it operates optimally when the input contains a sufficient number of words. The second constraint is that every tweet should contain at least 20 first-order replies. To increase randomness, in this strategy, instead of referencing a list of keywords, we manually choose keywords that are most likely to spark long discussions, such as *Coronavirus* and *football*, or that are most likely to attract strong opinions, such as *birthday*, *war*, or *racism*, in order to account for the easy-to-guess examples.
### MANUAL ANNOTATIONS FOR THE RETWEET (TEST GOLD DATASET)
5,015 tweets with their corresponding replies, collected through a combination of the two collection strategies, were given to three different students. Each of them had to read all the replies corresponding to every tweet, without seeing the original tweet in order to avoid prior knowledge, and decide on ONE final sentiment for the replies. The assigned sentiment can only be one of the positive, negative, or neutral labels.
Considering that this is a really challenging task for the machine, and to prevent human mistakes, we correlated the results of the three annotators and only kept the tweets on which all annotators agreed on the label as the final gold-standard test data. We therefore ended up with a test set of 1,519 human-labeled tweets, where the labels reflect the sentiment of a tweet's replies and not the tweet itself.
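A toy sketch of the unanimous-agreement filter just described (the data structure below is an assumption made for illustration):

```python
# Toy sketch: keep a tweet only if all three annotators assigned the same label.
annotations = {
    "tweet_001": ["positive", "positive", "positive"],  # kept
    "tweet_002": ["negative", "neutral", "negative"],   # discarded
}
gold = {tid: labels[0] for tid, labels in annotations.items() if len(set(labels)) == 1}
print(gold)  # {'tweet_001': 'positive'}
```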
DATASET CONTENTS
---
**1. Training raw dataset**: *34,953 unique tweets* in total and individual automatic labels for all of their corresponding replies (*1,519,504 total replies*). Including,
- `./RETWEET_data/train_reply_labels_set1.txt`
- `./RETWEET_data/train_reply_labels_set2.txt`
**2. Training automatically-labeled dataset**: *34,953 unique tweets* and ONE final *automatic* label (chosen based on Algorithm 1 of our paper) for every tweet. Including,
- `./RETWEET_data/train_final_label.txt`
**3. Gold standard test dataset (RETWEET)**: *1,519 unique tweets* with their *manual* labels for replies. ONE final label, which states the predominant overall polarity of all its replies, is assigned to every tweet. Including,
- `./RETWEET_data/test_gold.txt`
NOTES
---
1. Please note that by downloading the Twitter data you agree to abide by the [Twitter terms of service](https://twitter.com/tos), and in particular you agree not to redistribute the data and to delete tweets that are marked deleted in the future.
2. The "neutral" label in the annotations stands for objective or neutral.
3. The distribution consists of a set of Twitter unique tweet IDs with annotations (overall polarity of replies). For data privacy, the texts of the tweets and replies are not distributed. However, since all resources used in this dataset are taken from public tweets, you can use the tweet IDs to download each tweet and its replies.
You can use the SemEval Twitter data downloading script to obtain the corresponding tweets:
https://github.com/seirasto/twitter_download/
4. The dataset URL:
https://kaggle.com/soroosharasteh/retweet/
LICENSE
---
The accompanying dataset is released under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
SOURCE CODE
---
The official source code of the paper: https://github.com/starasteh/retweet
### In case you use this dataset, please cite the original paper:
S. Tayebi Arasteh, M. Monajem, V. Christlein, P. Heinrich, A. Nicolaou, H.N. Boldaji, M. Lotfinia, S. Evert. "*How Will Your Tweet Be Received? Predicting the Sentiment Polarity of Tweet Replies*". Proceedings of the 2021 IEEE 15th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, January 2021.
### BibTex
```bibtex
@inproceedings{RETWEET,
title = "How Will Your Tweet Be Received? Predicting the Sentiment Polarity of Tweet Replies",
author = "Tayebi Arasteh, Soroosh and Monajem, Mehrpad and Christlein, Vincent and
Heinrich, Philipp and Nicolaou, Anguelos and Naderi Boldaji, Hamidreza and Lotfinia, Mahshad and Evert, Stefan",
booktitle = "Proceedings of the 2021 IEEE 15th International Conference on Semantic Computing (ICSC)",
address = "Laguna Hills, CA, USA",
pages = "370-373",
doi = "10.1109/ICSC50631.2021.00068",
url = "https://ieeexplore.ieee.org/document/9364527/",
month = "01",
year = "2021"
}
```
* Dataset DOI: 10.34740/kaggle/ds/736988
* Paper: https://ieeexplore.ieee.org/document/9364527
* Paper DOI: 10.1109/ICSC50631.2021.00068
CONTACT
---
E-mail: soroosh.arasteh@fau.de
DATA FORMAT FOR ALL THE FILES
---
label TAB id
where, "label" can be positive, neutral or negative, corresponding to the overall message-level polarity of the replies of the tweet and "id" corresponds to the Twitter unique ID for the tweets. | Provide a detailed description of the following dataset: RETWEET |
TRANCE | TRANCE extends CLEVR by asking a uniform question, i.e. what is the transformation between two given images, to test the ability of transformation reasoning. TRANCE includes three levels of settings, i.e. Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views). Detailed information can be found in [https://hongxin2019.github.io/TVR](https://hongxin2019.github.io/TVR). | Provide a detailed description of the following dataset: TRANCE |
Sewer-ML | Sewer-ML is a sewer defect dataset. It contains 1.3 million images, from 75,618 videos collected from three Danish water utility companies over nine years. All videos have been annotated by licensed sewer inspectors following the Danish sewer inspection standard, Fotomanualen. This leads to consistent and reliable annotations, and a total of 17 annotated defect classes. | Provide a detailed description of the following dataset: Sewer-ML |
HW-NAS-Bench | HW-NAS-Bench is a dataset for HardWare-aware Neural Architecture Search (HW-NAS). It is the first dataset for HW-NAS research aiming to democratize HW-NAS research to non-hardware experts and facilitate a unified benchmark for HW-NAS to make HW-NAS research more reproducible and accessible, covering two SOTA NAS search spaces including NAS-Bench-201 and FBNet | Provide a detailed description of the following dataset: HW-NAS-Bench |
MMKG | MMKG is a collection of three knowledge graphs for link prediction and entity matching research. Contrary to other knowledge graph datasets, these knowledge graphs contain both numerical features and images for all entities as well as entity alignments between pairs of KGs. While MMKG is intended to perform relational reasoning across different entities and images, previous resources are intended to perform visual reasoning within the same image.
The three knowledge graphs augmented with numerical features and images are called FB15k, YAGO15k, and DBPEDIA15k. | Provide a detailed description of the following dataset: MMKG |
UBI-Fights | UBI-Fights - Concerning a specific anomaly detection and still providing a wide diversity in fighting scenarios, the UBI-Fights dataset is a unique new large-scale dataset of 80 hours of video fully annotated at the frame level. Consisting of 1000 videos, where 216 videos contain a fight event, and 784 are normal daily life situations. All unnecessary video segments (e.g., video introductions, news, etc.) that could disturb the learning process were removed. | Provide a detailed description of the following dataset: UBI-Fights |
SKAB | SKAB is designed for evaluating algorithms for anomaly detection. The benchmark currently includes 30+ datasets plus Python modules for algorithms’ evaluation. Each dataset represents a multivariate time series collected from the sensors installed on the testbed. All instances are labeled for evaluating the results of solving outlier detection and changepoint detection problems. | Provide a detailed description of the following dataset: SKAB |
DF20 | Danish Fungi 2020 (DF20) is a fine-grained dataset and benchmark. The dataset, constructed from observations submitted to the Danish Fungal Atlas, is unique in its taxonomy-accurate class labels, small number of errors, highly unbalanced long-tailed class distribution, rich observation metadata, and well-defined class hierarchy. DF20 has zero overlap with ImageNet, allowing unbiased comparison of models fine-tuned from publicly available ImageNet checkpoints.
The dataset has 1,604 different classes, with 248,466 training images and 27,608 test images.
Image Source: [Danish Fungi 2020 - Not Just Another Image Recognition Dataset](https://arxiv.org/abs/2103.10107v4) | Provide a detailed description of the following dataset: DF20 |
DF20 - Mini | Danish Fungi 2020 (DF20) is a novel fine-grained dataset and benchmark. The dataset, constructed from observations submitted to the Danish Fungal Atlas, is unique in its taxonomy-accurate class labels, small number of errors, highly unbalanced long-tailed class distribution, rich observation metadata, and well-defined class hierarchy. DF20 has zero overlap with ImageNet, allowing unbiased comparison of models fine-tuned from publicly available ImageNet checkpoints. | Provide a detailed description of the following dataset: DF20 - Mini |