| column | type | min length | max length |
| --- | --- | --- | --- |
| dataset_name | string | 2 | 128 |
| description | string | 1 | 9.7k |
| prompt | string | 59 | 185 |
CIFAR-100N
This work presents two new benchmark datasets (CIFAR-10N, CIFAR-100N), equipping the training datasets of CIFAR-10 and CIFAR-100 with human-annotated, real-world noisy labels collected from Amazon Mechanical Turk.
Provide a detailed description of the following dataset: CIFAR-100N
MedMNIST v2
MedMNIST v2 is a large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. All images are pre-processed into 28 x 28 (2D) or 28 x 28 x 28 (3D) with the corresponding classification labels, so that no background knowledge is required for users. Covering primary data modalities in biomedical images, MedMNIST v2 is designed to perform classification on lightweight 2D and 3D images with various data scales (from 100 to 100,000) and diverse tasks (binary/multi-class, ordinal regression and multi-label). The resulting dataset, consisting of 708,069 2D images and 10,214 3D images in total, could support numerous research / educational purposes in biomedical image analysis, computer vision and machine learning. Description and image from: [MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification](https://paperswithcode.com/paper/medmnist-v2-a-large-scale-lightweight) Each subset keeps the same license as that of the source dataset. Please also cite the corresponding paper of source data if you use any subset of MedMNIST.
Provide a detailed description of the following dataset: MedMNIST v2
OpenBMAT
Open Broadcast Media Audio from TV (OpenBMAT) is an open, annotated dataset for the task of music detection that contains over 27 hours of TV broadcast audio from 4 countries, distributed over 1,647 one-minute-long excerpts. It is designed to encompass several essential features for any music detection dataset and is the first one to include annotations about the loudness of music in relation to other simultaneous non-music sounds. OpenBMAT has been cross-annotated by 3 annotators, obtaining high inter-annotator agreement percentages, which validates the annotation methodology and ensures the reliability of the annotations.
Provide a detailed description of the following dataset: OpenBMAT
IndoNLG
IndoNLG is a benchmark to measure natural language generation (NLG) progress in three low-resource—yet widely spoken—languages of Indonesia: Indonesian, Javanese, and Sundanese. Altogether, these languages are spoken by more than 100 million native speakers, and hence constitute an important use case of NLG systems today. Concretely, IndoNLG covers six tasks: summarization, question answering, chit-chat, and three different pairs of machine translation (MT) tasks.
Provide a detailed description of the following dataset: IndoNLG
Continual World
Continual World is a benchmark consisting of realistic and meaningfully diverse robotic tasks built on top of Meta-World as a testbed.
Provide a detailed description of the following dataset: Continual World
Natural Instructions
Natural-Instructions is a dataset of 61 distinct tasks, their human-authored instructions, and 193k task instances. The instructions were obtained from the crowdsourcing instructions used to create existing NLP datasets and were mapped to a unified schema.
Provide a detailed description of the following dataset: Natural Instructions
Pyxis
Pyxis is a performance dataset for specialized accelerators on sparse data. Pyxis collects accelerator designs and real execution performance statistics. Currently, there are 73.8 K instances in Pyxis.
Provide a detailed description of the following dataset: Pyxis
OPERAnet
**OPERAnet** is a multimodal activity recognition dataset acquired from radio-frequency and vision-based sensors. Approximately 8 hours of annotated measurements are provided, collected across two different rooms from 6 participants performing 6 activities: sitting down on a chair, standing up from sitting, lying down on the ground, standing up from the floor, walking, and body rotation. The dataset was acquired from four synchronized modalities for the purpose of passive Human Activity Recognition (HAR) as well as localization and crowd counting.
Provide a detailed description of the following dataset: OPERAnet
AI-TOD
AI-TOD comes with 700,621 object instances for eight categories across 28,036 aerial images. Compared to existing object detection datasets in aerial images, the mean size of objects in AI-TOD is about 12.8 pixels, which is much smaller than others.
Provide a detailed description of the following dataset: AI-TOD
URLB
URLB consists of two phases: reward-free pre-training and downstream task adaptation with extrinsic rewards. Building on the DeepMind Control Suite, it provides twelve continuous control tasks from three domains for evaluation.
Provide a detailed description of the following dataset: URLB
CoVA
We labeled _7,740_ webpage screenshots spanning _408_ domains (Amazon, Walmart, Target, etc.). Each of these webpages contains exactly one labeled price, title, and image. All other web elements are labeled as background. On average, there are _90_ web elements in a webpage. Webpage screenshots and bounding boxes can be obtained [here](https://drive.google.com/drive/folders/1LQPXGhDVh40bIT2-LZfo498M93tidABe?usp=sharing).

### Train-Val-Test split

We create a cross-domain split which ensures that each of the train, val and test sets contains webpages from different domains. Specifically, we construct a 3 : 1 : 1 split based on the number of distinct domains. We observed that the top-5 domains (by number of samples) were Amazon, eBay, Walmart, Etsy, and Target, so we created 5 different splits for 5-fold cross-validation such that each of the major domains appears in the test data of one of the 5 splits.
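A cross-domain split of this kind can be sketched with a group-aware splitter. The snippet below is only an illustration; the `webpages.csv` index file and its `domain` column are hypothetical and not files shipped with CoVA.

```python
# Minimal sketch of a cross-domain 5-fold split, assuming a hypothetical
# webpages.csv with one row per screenshot and a "domain" column.
import pandas as pd
from sklearn.model_selection import GroupKFold

pages = pd.read_csv("webpages.csv")     # hypothetical index file
splitter = GroupKFold(n_splits=5)       # folds never share a domain

for fold, (train_idx, test_idx) in enumerate(
        splitter.split(pages, groups=pages["domain"])):
    train_domains = set(pages.iloc[train_idx]["domain"])
    test_domains = set(pages.iloc[test_idx]["domain"])
    assert train_domains.isdisjoint(test_domains)  # cross-domain guarantee
    print(f"fold {fold}: {len(train_idx)} train pages, {len(test_idx)} test pages")
```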
Provide a detailed description of the following dataset: CoVA
Persian Reverse Dictionary Dataset
The Persian Reverse Dictionary Dataset is a collection of 855,217 words along with the phrases describing them. The phrases were extracted from the top three most well-known Persian dictionaries (Amid, Moeen, and Dehkhoda), Persian Wikipedia, and a Persian WordNet (called FarsNet).
Provide a detailed description of the following dataset: Persian Reverse Dictionary Dataset
CADB
To the best of our knowledge, there is no prior dataset specifically constructed for composition assessment. To support research on this task, we build a dataset upon the existing AADB dataset, from which we collect a total of 9,958 real-world photos. We adopt a composition rating scale from 1 to 5, where a larger score indicates better composition. We prepare annotation guidelines for composition quality rating and train five individual raters who specialize in fine art, so for each image we obtain five composition scores ranging from 1 to 5. Given the subjective nature of human aesthetic activity, we perform a sanity check and a consistency analysis. We use 240 additional "sanity check" images during annotation to roughly verify the validity of our annotations. We also examine the consistency of the composition ratings provided by the five individual raters (see Supplementary). We average the composition scores to obtain the ground-truth mean composition score for each image. More details about our CADB dataset are elaborated in the Supplementary. In addition, we observe a content bias in our CADB dataset, that is, there are some biased categories whose score distributions are concentrated in a very narrow interval. After removing 461 biased images, we split the remaining images into 8,547 training images and 950 test images, where the test set is made less biased for better evaluation (see Supplementary).

*Cited from "Image Composition Assessment with Saliency-augmented Multi-pattern Pooling", Zhang, Bo and Niu, Li and Zhang, Liqing*

## Citation

    @article{zhang2021image,
      title={Image Composition Assessment with Saliency-augmented Multi-pattern Pooling},
      author={Zhang, Bo and Niu, Li and Zhang, Liqing},
      journal={arXiv preprint arXiv:2104.03133},
      year={2021}
    }
Provide a detailed description of the following dataset: CADB
RWanda Built-up Region Segmentation
We create the Rwanda built-up regions dataset, which is different in nature from, and more versatile than, previously available datasets. The varying structure sizes and formations, irregular construction patterns, buildings in forests and deserts, and the existence of mud houses make it very challenging. A total of 787 satellite images of size 256 × 256 were collected at a high resolution (HR) of 1.193 meters per pixel and hand-tagged for built-up region segmentation using the online tool Label-Box.
Provide a detailed description of the following dataset: RWanda Built-up Region Segmentation
Genome-wide miRNA detection
We've made available several genome-wide datasets, which can be used for training microRNA (miRNA) classifiers. The hairpin sequences available are from the genomes of: Homo sapiens, Arabidopsis thaliana, Anopheles gambiae, Caenorhabditis elegans and Drosophila melanogaster. Hairpins are small RNA sequences that naturally fold into a hairpin structure. However, not all hairpins have a clear function (i.e., they are not all miRNAs). Each dataset provides the genome data divided into sequences and a set of computed features for prediction. Each sequence has one label: i) "positive", meaning that it is a well-known pre-miRNA according to miRBase v21; or ii) "unlabeled", indicating that the sequence does not (yet) have a known function and could be a candidate novel pre-miRNA. Because selecting an informative feature set is very important for a good pre-miRNA classifier, a representative feature set with large discriminative power has been computed and is provided for each genome. This feature set contains typical information about sequence, topology and structure.
Provide a detailed description of the following dataset: Genome-wide miRNA detection
GSM8K
GSM8K is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems created by human problem writers. The dataset is segmented into 7.5K training problems and 1K test problems. These problems take between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+, −, ×, ÷) to reach the final answer. A bright middle school student should be able to solve every problem. It can be used for multi-step mathematical reasoning. Image source: [https://arxiv.org/pdf/2110.14168v1.pdf](https://arxiv.org/pdf/2110.14168v1.pdf)
Provide a detailed description of the following dataset: GSM8K
Inter4K
A video dataset for benchmarking upsampling methods. Inter4K contains 1,000 ultra-high-resolution videos with 60 frames per second (fps) from online resources. The dataset provides standardized video resolutions at ultra-high definition (UHD/4K), quad high definition (QHD/2K), full high definition (FHD/1080p), (standard) high definition (HD/720p), one quarter of full HD (qHD/540p) and one ninth of full HD (nHD/360p). We use frame rates of 60, 50, 30, 24 and 15 fps for each resolution. Based on this standardization, both super-resolution and frame-interpolation tests can be performed for different scaling factors ($\times 2$, $\times 3$ and $\times 4$). In the paper, we use Inter4K to address frame upsampling and interpolation. Inter4K provides both standardized UHD resolution and 60 fps for all videos, while also containing a diverse set of 1,000 five-second videos. Differences between scenes originate from the equipment (e.g., professional 4K cameras or phones), lighting conditions, and variations in movements, actions or objects. The dataset is divided into 800 videos for training, 100 videos for validation and 100 videos for testing.
Provide a detailed description of the following dataset: Inter4K
TUDA
- Overall duration per microphone: about 36 hours (31 h train / 2.5 h dev / 2.5 h test)
- Number of microphones: 3 (Microsoft Kinect, Yamaha, Samson)
- Number of wave files per microphone: about 14,500
- Total number of participants: 180 (130 male / 50 female)
Provide a detailed description of the following dataset: TUDA
Market-1501-C
**Market-1501-C** is an evaluation set that consists of algorithmically generated corruptions applied to the Market-1501 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital: contrast, elastic, pixel, JPEG compression, and saturate. Each corruption has five severity levels, resulting in 100 distinct corruptions.
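As an illustration of how severity-graded corruptions of this kind are typically generated, the sketch below implements a Gaussian-noise corruption with five severity levels. This is a generic example, not the benchmark's reference implementation; the severity schedule is an assumption.

```python
# Illustrative sketch of a severity-graded corruption (Gaussian noise);
# the benchmark's reference code may use different constants.
import numpy as np

def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    """Apply Gaussian noise to a uint8 HxWxC image at severity 1..5."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]  # assumed schedule
    x = image.astype(np.float32) / 255.0
    x = x + np.random.normal(scale=sigma, size=x.shape)
    return (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)
```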
Provide a detailed description of the following dataset: Market-1501-C
MSMT17-C
**MSMT17-C** is an evaluation set that consists of algorithmically generated corruptions applied to the MSMT17 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital: contrast, elastic, pixel, JPEG compression, and saturate. Each corruption has five severity levels, resulting in 100 distinct corruptions.
Provide a detailed description of the following dataset: MSMT17-C
CUHK03-C
**CUHK03-C** is an evaluation set that consists of algorithmically generated corruptions applied to the CUHK03 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital: contrast, elastic, pixel, JPEG compression, and saturate. Each corruption has five severity levels, resulting in 100 distinct corruptions.
Provide a detailed description of the following dataset: CUHK03-C
SYSU-MM01-C
**SYSU-MM01-C** is an evaluation set that consists of algorithmically generated corruptions applied to the SYSU-MM01 test-set. These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital: contrast, elastic, pixel, JPEG compression, and saturate. Each corruption has five severity levels, resulting in 100 distinct corruptions.
Provide a detailed description of the following dataset: SYSU-MM01-C
Supporting data for "Multi-Stage Malaria Parasites Recognition by Deep Learning"
Malaria, a mosquito-borne infectious disease affecting humans and other animals, is widespread in tropical and subtropical regions. Microscopy is the most common method for diagnosing the malaria parasite from stained blood smears. However, this procedure is time-consuming, error-prone, and requires a well-trained professional. Moreover, recognizing a malaria parasite through a microscope is still a challenging process, especially when distinguishing multiple stages of parasites. Here is a large-scale dataset of unseen malaria parasites for a multi-stage malaria recognition experiment. It includes test and training images for each of the following classes: parasitized cells, leukocytes, gametocytes, uninfected cells, red blood cells, ring cells, schizont cells, and trophozoite cells. Related P. vivax (malaria)-infected human blood smear data are available in the BBBC repository under accession No. BBBC041: https://bbbc.broadinstitute.org/BBBC041 A related large-scale malaria dataset consisting of 13,780 test images of both malaria parasites and RBCs is available in the National Library of Medicine (NLM) repository under accession No. PUB9932: https://lhncbc.nlm.nih.gov/LHC-publications/pubs/MalariaDatasets.html
Provide a detailed description of the following dataset: Supporting data for "Multi-Stage Malaria Parasites Recognition by Deep Learning"
Mouse Grooming Behavior
This dataset was generated to characterize mouse grooming behavior. Mouse grooming serves many adaptive functions such as coat and body care, stress reduction, de-arousal, social functions, thermoregulation, and nociception, among others. Alteration of this behavior is measured and used in mouse pre-clinical models of human psychiatric illnesses. Grooming behavior in mice contains a variety of visually diverse syntaxes, including but not limited to paw licking, face washing, and flank licking. Additionally, this dataset includes visually diverse mice: 157 individual mice spanning 60 different inbred and F1 hybrid mouse strains. This is a stark difference from most other mouse behavior datasets, which typically include only 1-2 inbred strains. The dataset includes 1,253 video clips of mice behaving in an open field, imaged from a top-down perspective. Each video clip contains a 112x112 video tubelet cropped around the center of mass of the mouse as it walks around the open field arena. Video clips are of variable length, totaling 2,637,363 frames. Annotators were required to provide a "Grooming" or "Not Grooming" annotation for each frame. The frames on which annotators disagree are also provided.
Provide a detailed description of the following dataset: Mouse Grooming Behavior
PQ-decaNLP
Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark where question answering is used to frame 10 natural language understanding tasks in a single model. PQ-decaNLP is a crowd-sourced corpus of paraphrased questions, annotated with paraphrase phenomena. This enables analysis of how transformations such as swapping the class labels and changing the sentence modality lead to a large performance degradation.
Provide a detailed description of the following dataset: PQ-decaNLP
map2seq
7,672 human written natural language navigation instructions for routes in OpenStreetMap with a focus on visual landmarks. Validated in Street View.
Provide a detailed description of the following dataset: map2seq
DrugProt
The DrugProt corpus, in which domain experts have exhaustively labeled: (a) all chemical and gene mentions, and (b) all binary relationships between them corresponding to a specific set of biologically relevant relation types (DrugProt relation classes).
Provide a detailed description of the following dataset: DrugProt
RegDB-C
RegDB-C is an evaluation set that consists of algorithmically generated corruptions applied to the RegDB test-set (color images). These corruptions consist of Noise: Gaussian, shot, impulse, and speckle; Blur: defocus, frosted glass, motion, zoom, and Gaussian; Weather: snow, frost, fog, brightness, spatter, and rain; Digital: contrast, elastic, pixel, JPEG compression, and saturate. Each corruption has five severity levels, resulting in 100 distinct corruptions.
Provide a detailed description of the following dataset: RegDB-C
Building air quality and pandemic risk simulation
The original paper contains a high-level explanation of the dataset characteristics and potential use cases of the dataset. ArchABM can help to quantify the impact of some of these building- and company-policy-related measures.

**Baseline experiment**

A baseline case with no measures and reduced ventilation is studied first. A schedule is set for each event type, along with its minimum and maximum duration $\tau$ and the number of repetitions. Masks are not used anywhere ($m_e = 0$). Meetings and the lunch activity are considered collective events. A place's capacity refers to the maximum number of people that can be present in that space. A low natural ventilation rate is established ($\lambda_a = 1.5$, and $\lambda_a = 0.5$ for poorly ventilated rooms) and there is no mechanical ventilation ($\lambda_r = 0$).

**Building-related experiments**

- **Larger building**: each room's area (and thus each room's volume) is increased by 20%. This measure needs to take into account the increase in costs, which would mean an increase of almost 20% in the final construction costs as well.
- **Separate workspaces**: the open office is divided into three identical offices, each one with 110 $m^2$, 16 people (48/3), and a capacity of 20 (60/3).
- **Better natural ventilation**: windows are opened everywhere except in restrooms for better outdoor air supply. $\lambda_a$ is increased up to 5 $h^{-1}$.
- **Better mechanical ventilation**: the flow rate $Q_{AC}$ of the AC system is increased, assuming a 20% filter efficiency $\varepsilon_{filter}$, a 10% removal in ducts $\varepsilon_{ducts}$, and no additional removal measures $\varepsilon_{extra}$. Adding AC to the building would mean an increase of 14% in the overall building costs.

**Policy-related experiments**

- **Shifts between workers**: this implies a reduction in the number of people present in each room. For this experiment, the population is reduced by 40%, resulting in 29 people in the open office, 4 in the IT office, and 1 in each chief office, summing up to 36 people. This measure also entails a non-quantifiable cost to the company.
- **Limit duration of events**: the duration of meetings is limited to a maximum of 30 minutes, setting $\tau = [0.\hat{3} - 0.5]\ h$. The duration of coffee breaks is limited to 5 minutes, meaning $\tau = 0.08\hat{3}\ h$, and lunch to 20 minutes, $\tau = 0.\hat{3}\ h$.
- **Use of masks**: in this case, mask use is mandatory, meaning that $m_f = 1$ and the mask efficiency $m_e$ is set to 0.75 in the offices and meeting rooms, 0.5 in the restrooms, 0.3 for coffee breaks, and 0 for lunch breaks, representing the absence of masks while eating.
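To make the parameter space above concrete, the sketch below collects some of the baseline quantities into a plain Python structure. The field names are purely illustrative and are not ArchABM's actual configuration schema; values not stated in the text (event durations, repetitions, restroom capacity) are placeholders.

```python
# Hypothetical parameter sketch mirroring the baseline experiment above.
# Field names are illustrative only, NOT ArchABM's real config schema;
# values not given in the description are placeholders.
baseline = {
    "events": [
        {"name": "meeting",      "duration_h": [0.5, 1.5], "repetitions": [1, 3],
         "collective": True, "mask_efficiency": 0.0},
        {"name": "coffee_break", "duration_h": [0.1, 0.3], "repetitions": [1, 4],
         "collective": False, "mask_efficiency": 0.0},
        {"name": "lunch",        "duration_h": [0.5, 1.0], "repetitions": [1, 1],
         "collective": True, "mask_efficiency": 0.0},
    ],
    "places": [
        {"name": "open_office", "capacity": 60, "natural_ventilation_h": 1.5},
        {"name": "restroom",    "capacity": 3,  "natural_ventilation_h": 0.5},
    ],
    "mechanical_ventilation_h": 0.0,  # lambda_r = 0 in the baseline
}
print(baseline["places"][0])
```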
Provide a detailed description of the following dataset: Building air quality and pandemic risk simulation
LSVTD
**LSVTD** is a large-scale video text dataset for promoting the video text spotting community. It contains 100 text videos from 22 different real-life scenarios, covering a wide range of 13 indoor (e.g., bookstore, shopping mall) and 9 outdoor scenarios, more than 3 times the diversity of IC15.
Provide a detailed description of the following dataset: LSVTD
DriverMHG
**Driver Micro Hand Gestures** (**DriverMHG**) is a dataset for dynamic recognition of driver micro hand gestures, which consists of RGB, depth and infrared modalities.
Provide a detailed description of the following dataset: DriverMHG
BCI Competition Datasets
The goal of the "BCI Competition" is to validate signal processing and classification methods for Brain-Computer Interfaces (BCIs).
Provide a detailed description of the following dataset: BCI Competition Datasets
UQuAD
A large-scale machine reading comprehension dataset in the Urdu language.
Provide a detailed description of the following dataset: UQuAD
Adaptiope
Adaptiope is a domain adaptation dataset with 123 classes in three domains: synthetic, product, and real life. One of the main goals of Adaptiope is to offer a clean and well-curated set of images for domain adaptation, as many other common datasets in the area suffer from label noise and low-quality images. Additionally, Adaptiope's class set was chosen to minimize the overlap with the class set of the commonly used ImageNet pretraining, thereby preventing information leakage in a domain adaptation setup.
Provide a detailed description of the following dataset: Adaptiope
Modern Office-31
Modern Office-31 is a refurbished version of the commonly used [Office-31](https://paperswithcode.com/dataset/office-31) dataset. Modern Office-31 rectifies many of the annotation errors and low quality images in the Amazon domain of the original Office-31 dataset. Additionally, this dataset adds another synthetic domain based on the [Adaptiope](https://paperswithcode.com/dataset/adaptiope) dataset.
Provide a detailed description of the following dataset: Modern Office-31
AVASpeech-SMAD
We propose a dataset, AVASpeech-SMAD, to assist speech and music activity detection research. With frame-level music labels, the proposed dataset extends the existing AVASpeech dataset, which originally consists of 45 hours of audio and speech activity labels. To the best of our knowledge, the proposed AVASpeech-SMAD is the first open-source dataset that features strong polyphonic labels for both music and speech. The dataset was manually annotated and verified via an iterative cross-checking process. A simple automatic examination was also implemented to further improve the quality of the labels. Evaluation results from two state-of-the-art SMAD systems are also provided as a benchmark for future reference.
Provide a detailed description of the following dataset: AVASpeech-SMAD
Ballroom
This data set includes beat and bar annotations of the ballroom dataset, introduced by Gouyon et al. [1]. [1] Gouyon F., A. Klapuri, S. Dixon, M. Alonso, G. Tzanetakis, C. Uhle, and P. Cano. An experimental comparison of audio tempo induction algorithms. Transactions on Audio, Speech and Language Processing 14(5), pp.1832-1844, 2006.
Provide a detailed description of the following dataset: Ballroom
Beatles
This dataset includes the beat and downbeat annotations for the Beatles albums. The annotations are provided by M. E. P. Davies et al. [1]. [1] M. E. P. Davies, N. Degara, and M. D. Plumbley, "Evaluation methods for musical audio beat tracking algorithms," Technical Report C4DM-TR-09-06, Centre for Digital Music, Queen Mary University of London, 2009.
Provide a detailed description of the following dataset: Beatles
Rock Corpus
This dataset contains 200 famous songs from different genres (mostly rock); the beat and downbeat annotations are provided by T. de Clercq and D. Temperley [1]. [1] T. de Clercq and D. Temperley, "A corpus analysis of rock harmony," Popular Music, vol. 30, no. 1, pp. 47–70, 2011.
Provide a detailed description of the following dataset: Rock Corpus
Carnatic
This dataset includes musical timing information, i.e., beat, bar, and meter annotations, for the Indian Carnatic music dataset. The dataset was gathered by A. Srinivasamurthy and X. Serra [1]. [1] A. Srinivasamurthy and X. Serra, "A supervised approach to hierarchical metrical cycle tracking from audio music recordings," in Proc. of the IEEE Int. Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2014.
Provide a detailed description of the following dataset: Carnatic
SINGA:PURA
This repository contains the SINGA:PURA dataset, a strongly-labelled polyphonic urban sound dataset with spatiotemporal context. The data were collected via a number of recording units deployed across Singapore as a part of a wireless acoustic sensor network. These recordings were made as part of a project to identify and mitigate noise sources in Singapore, but also possess a wider applicability to sound event detection, classification, and localization. The taxonomy we used for the labels in this dataset has been designed to be compatible with other existing datasets for urban sound tagging while also able to capture sound events unique to the Singaporean context. Please refer to our conference paper published in APSIPA 2021 (which is found in this repository as the file "APSIPA.pdf") or download the readme ("Readme.md") for more details regarding the data collection, annotation, and processing methodologies for the creation of the dataset.
Provide a detailed description of the following dataset: SINGA:PURA
ACAV100M
ACAV100M processes 140 million full-length videos (total duration 1,030 years) which are used to produce a dataset of 100 million 10-second clips (31 years) with high audio-visual correspondence. This is two orders of magnitude larger than the current largest video dataset used in the audio-visual learning literature, i.e., AudioSet (8 months), and twice as large as the largest video dataset in the literature, i.e., HowTo100M (15 years).
Provide a detailed description of the following dataset: ACAV100M
LAION-400M
**LAION-400M** is a dataset with CLIP-filtered 400 million image-text pairs, their CLIP embeddings and kNN indices that allow efficient similarity search.

#### ⚠️ Disclaimer & Content Warning (from the authors)

*Our filtering protocol only removed NSFW images detected as illegal, but the dataset still has NSFW content accordingly marked in the metadata. When freely navigating through the dataset, keep in mind that it is a large-scale, non-curated set crawled from the internet for research purposes, such that collected links may lead to discomforting and disturbing content. Therefore, please use the demo links with caution. You can extract a "safe" subset by filtering out samples drawn with NSFW or via stricter CLIP filtering.*

*There is a certain degree of duplication because we used URL+text as deduplication criteria. The same image with the same caption may sit at different URLs, causing duplicates. The same image with other captions is not, however, considered duplicated.*

*Using KNN clustering should make it easy to further deduplicate by image content.*
Provide a detailed description of the following dataset: LAION-400M
DeepNets-1M
The DeepNets-1M dataset is composed of neural network architectures represented as graphs, where nodes are operations (convolution, pooling, etc.) and edges correspond to the forward-pass flow of data through the network. DeepNets-1M has 1 million training architectures and 1,402 in-distribution (ID) and out-of-distribution (OOD) evaluation architectures: 500 validation and 500 test ID architectures, 100 wide OOD architectures, 100 deep OOD architectures, 100 dense OOD architectures, 100 OOD architectures without batch normalization, and 2 predefined architectures (ResNet-50 and a 12-layer Vision Transformer). For the 1,402 evaluation architectures, DeepNets-1M includes the accuracies of the networks on CIFAR-10 and ImageNet after training them with stochastic gradient descent (SGD). Besides accuracy, other properties of the evaluation architectures are included: accuracy on noisy images, inference time, and convergence time. These properties can enable training neural architecture search models. DeepNets-1M is used to train and evaluate parameter prediction models such as Graph HyperNetworks. These models can predict all parameters for a given network (graph) in a single forward pass, and the results can be compared to optimizing the parameters with SGD.
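A toy sketch of the graph encoding described above (nodes as operations, directed edges as the forward-pass data flow) can be built with `networkx`. This is illustrative only and is not the dataset's actual loader or graph format.

```python
# Toy sketch: represent a tiny sequential network as an operation graph.
import networkx as nx

g = nx.DiGraph()
ops = ["input", "conv3x3", "batchnorm", "relu", "maxpool", "linear"]
g.add_nodes_from((i, {"op": op}) for i, op in enumerate(ops))
g.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)])  # forward flow

for node, data in g.nodes(data=True):
    print(node, data["op"], "->", list(g.successors(node)))
```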
Provide a detailed description of the following dataset: DeepNets-1M
RLV
We provide video observations of humans performing two simple tasks in natural environments. The tasks are pushing and drawer opening.
Provide a detailed description of the following dataset: RLV
Earth’s Mantle Convection
The dataset, generated from a scientific simulation, consists of a time series (251 steps) of 3D scalar fields on a spherical 180x201x360 grid covering 500 Myr of geological time. Each time step is 2 Myr, and the fields are:

* temperature [degrees K],
* three Cartesian velocity components [m/s],
* thermal conductivity anomaly [Watt/m/K],
* thermal expansivity anomaly [1/K],
* temperature anomaly [degrees K], and
* spin transition-induced density anomaly [kg/m^3].

The simulation was performed in double precision; however, to reduce download time, we provide the data in single precision. Each file was saved in the NetCDF Climate and Forecast (CF) convention format, with each 3D scalar field being a function of latitude [degrees north], longitude [degrees east], and radius [km]. The model's inner and outer radii are 3485 km and 6371 km, respectively.
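A minimal sketch for inspecting one time step with `xarray` is shown below; the file name and variable names are placeholders and may not match the released NetCDF files exactly.

```python
# Minimal sketch for peeking at one time step of the CF-convention files.
import xarray as xr

ds = xr.open_dataset("mantle_step_0001.nc")   # hypothetical file name
print(ds)                                     # coords: lat, lon, radius (assumed)
temperature = ds["temperature"]               # assumed variable name, in K
print(float(temperature.mean()))              # quick sanity check
```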
Provide a detailed description of the following dataset: Earth’s Mantle Convection
FEAFA+
**FEAFA+** is a dataset for facial expression analysis and 3D facial animation. It includes 150 video sequences from FEAFA and DISFA, with a total of 230,184 frames manually annotated with floating-point intensity values for 24 redefined AUs using the Expression Quantitative Tool.
Provide a detailed description of the following dataset: FEAFA+
GO21
GO21 is a biomedical knowledge graph that models genes, proteins, drugs, and the hierarchy of the biological processes they participate in. It consists of 806,136 triples with 21 relations and 89,127 entities. GO21 can be used for knowledge graph completion tasks (link prediction) as well as hierarchical reasoning tasks, such as the ancestor-descendant prediction task proposed in the paper.
Provide a detailed description of the following dataset: GO21
A Datacube for the analysis of wildfires in Greece
This dataset is meant to be used to develop models for next-day fire hazard forecasting in Greece. It contains data from 2009 to 2020 on a 1 km × 1 km spatial grid with daily temporal resolution. Check the [Jupyter notebook](https://github.com/DeepCube-org/uc3-public-notebooks/blob/main/1_UC3_Datacube_Access_and_Plotting.ipynb) for an example showing how to access the dataset.
Provide a detailed description of the following dataset: A Datacube for the analysis of wildfires in Greece
CLUES
CLUES (Constrained Language Understanding Evaluation Standard) is a benchmark for evaluating the few-shot learning capabilities of NLU models.
Provide a detailed description of the following dataset: CLUES
Only Time Will Tell
Simulation results of the time-respecting and time-ignoring horizons of the code review network at Microsoft, provided as JSON. For further details, please see https://github.com/michaeldorner/only-time-will-tell
Provide a detailed description of the following dataset: Only Time Will Tell
LRA
Long-Range Arena (LRA) is an effort toward systematic evaluation of efficient transformer models. The project aims at establishing benchmark tasks/datasets with which transformer-based models can be evaluated in a systematic way, by assessing their generalization power, computational efficiency, memory footprint, etc. Long-Range Arena is specifically focused on evaluating model quality under long-context scenarios. The benchmark is a suite of tasks consisting of sequences ranging from 1K to 16K tokens, encompassing a wide range of data types and modalities such as text, natural and synthetic images, and mathematical expressions requiring similarity, structural, and visual-spatial reasoning. Description from: [Long Range Arena : A Benchmark for Efficient Transformers](https://arxiv.org/pdf/2011.04006v1.pdf)
Provide a detailed description of the following dataset: LRA
AdvGLUE
Adversarial GLUE (AdvGLUE) is a new multi-task benchmark to quantitatively and thoroughly explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks. In particular, we systematically apply 14 textual adversarial attack methods to [GLUE](/dataset/glue) tasks to construct AdvGLUE, which is further validated by humans for reliable annotations. Description from: [Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models](https://paperswithcode.com/paper/adversarial-glue-a-multi-task-benchmark-for)
Provide a detailed description of the following dataset: AdvGLUE
CoDEx Medium
CoDEx comprises a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph completion benchmarks in scope and level of difficulty. CoDEx comprises three knowledge graphs varying in size and structure, multilingual descriptions of entities and relations, and tens of thousands of hard negative triples that are plausible but verified to be false.
Provide a detailed description of the following dataset: CoDEx Medium
CoDEx Large
CoDEx comprises a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph completion benchmarks in scope and level of difficulty. CoDEx comprises three knowledge graphs varying in size and structure, multilingual descriptions of entities and relations, and tens of thousands of hard negative triples that are plausible but verified to be false.
Provide a detailed description of the following dataset: CoDEx Large
IDDA
**IDDA** is a large-scale synthetic dataset for semantic segmentation with more than 100 different source visual domains. The dataset was created to explicitly address the challenges of domain shift between training and test data under various weather and viewpoint conditions, in seven different city types.
Provide a detailed description of the following dataset: IDDA
SyRIP
**SyRIP** is a hybrid synthetic and real infant pose dataset containing a small yet diverse set of real infant images as well as generated synthetic infant poses. It was introduced together with a multi-stage invariant representation learning strategy that transfers knowledge from the adjacent domains of adult poses and synthetic infant images into a fine-tuned, domain-adapted infant pose estimation model.
Provide a detailed description of the following dataset: SyRIP
RobustBench
**RobustBench** is a benchmark of adversarial robustness, which as accurately as possible reflects the robustness of the considered models within a reasonable computational budget. To this end, we start by considering the image classification task and introduce restrictions (possibly loosened in the future) on the allowed models.
Provide a detailed description of the following dataset: RobustBench
WaveFake
WaveFake is a dataset for audio deepfake detection, consisting of over 100K generated audio clips.
Provide a detailed description of the following dataset: WaveFake
WWU DUNEuro reference data set
The provided dataset consists of high-quality realistic head models and combined EEG/MEG data which can be used for state-of-the-art methods in brain research, such as modern finite element methods (FEM) to compute the EEG/MEG forward problems using the software toolbox DUNEuro ([http://duneuro.org](http://duneuro.org)). For further details see [DOI: 10.5281/zenodo.3888380](https://doi.org/10.5281/zenodo.3888380).
Provide a detailed description of the following dataset: WWU DUNEuro reference data set
VSLID
VSLID stands for Very Small Lego Image Dataset. It contains just over 1,800 images of piles of LEGO bricks of 85 different types, with between 1 and 10 bricks per image. Backgrounds and lighting conditions vary. All images are annotated with a list of the visible bricks. The images come in two resolutions, so rescaling them before use is recommended. There are three folders, each containing two subfolders named Renders and Sets. The Renders subfolder contains the photos in PNG format, while the Sets subfolder contains a brick_sets.csv file recording the identifiers of the bricks present in each image, and a labels.csv file that additionally records the time of day each photo was taken and its background.
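A hedged sketch for pairing images with their annotations might look like the following; the folder name and CSV column names are assumptions, since the exact schema of `brick_sets.csv` and `labels.csv` is not spelled out above.

```python
# Hedged sketch for walking the Renders/Sets layout described above;
# the folder name and the CSV column names are assumptions.
from pathlib import Path
import pandas as pd

root = Path("VSLID/folder_1")                        # hypothetical folder name
labels = pd.read_csv(root / "Sets" / "labels.csv")   # per-image metadata
bricks = pd.read_csv(root / "Sets" / "brick_sets.csv")

for _, row in labels.head(3).iterrows():
    image_path = root / "Renders" / row["image"]     # assumed column name
    print(image_path, row.get("time_of_day"), row.get("background"))
```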
Provide a detailed description of the following dataset: VSLID
WORD
**WORD** is a dataset for organ semantic segmentation that contains 150 abdominal CT volumes (30,495 slices) and each volume has 16 organs with fine pixel-level annotations and scribble-based sparse annotation, which may be the largest dataset with whole abdominal organs annotation.
Provide a detailed description of the following dataset: WORD
REAL-M
**Real-M** is a crowd-sourced speech-separation corpus of real-life mixtures. The mixtures are recorded in different acoustic environments using a wide variety of recording devices such as laptops and smartphones, thus reflecting more closely potential application scenarios.
Provide a detailed description of the following dataset: REAL-M
RIKEN Microstructural Imaging Metadatabase
The **RIKEN Microstructural Imaging Metadatabase** is a semantic web-based imaging database in which image metadata are described using the Resource Description Framework (RDF) and detailed biological properties observed in the images can be represented as Linked Open Data. The metadata are used to develop a large-scale imaging viewer that provides a straightforward graphical user interface to visualise a large microstructural tiling image at the gigabyte level.
Provide a detailed description of the following dataset: RIKEN Microstructural Imaging Metadatabase
BOBSL
BOBSL is a large-scale dataset of British Sign Language (BSL). It comprises 1,962 episodes (approximately 1,400 hours) of BSL-interpreted BBC broadcast footage accompanied by written English subtitles. From horror, period and medical dramas, history, nature and science documentaries, sitcoms, children’s shows and programs covering cooking, beauty, business and travel, BOBSL covers a wide range of topics. The dataset features a total of 39 signers. Distinct signers appear in the training, validation and test sets for signer-independent evaluation. Description from: [BOBSL: BBC-Oxford British Sign Language Dataset](https://www.robots.ox.ac.uk/~vgg/data/bobsl/)
Provide a detailed description of the following dataset: BOBSL
AdobeVFR syn
Subset of AdobeVFR. The dataset contains images depicting English text and consists of 1000 synthetic images for training and 100 for testing, for each of 2383 font classes. The training and test sets are called *VFR_syn_train* and *VFR_syn_val*, respectively. The other part of AdobeVFR consists of "real-world text images".
Provide a detailed description of the following dataset: AdobeVFR syn
Explor_all
The Explor_all font image dataset. Download: https://drive.google.com/file/d/1P2DbNbVw4Q__WcV1YdzE7zsDKilmd3pO/view
Provide a detailed description of the following dataset: Explor_all
SDSS Galaxies
This is a dataset of 306,006 galaxies whose coordinates are taken from the Sloan Digital Sky Survey Data Release 7 and a modified catalogue from Brinchmann+2003 and Wilman+2010. This volume complete sample has an r-band absolute magnitude limit of $M_r\leq-20$ and a redshift limit of $z\leq0.08$. See Arora+2019 for details. This catalogue covers a wide range of environments from clusters to groups and field systems. The galaxy images are taken from the Dark Energy Spectroscopic Instrument, and contain $g$, $r$, and $z$ bands.
Provide a detailed description of the following dataset: SDSS Galaxies
VFR-447
A synthetic dataset containing 447 typefaces with only one font variation for each typeface, created for visual font recognition. > Each class in VFR-447 and VFR-2420 has 1,000 synthetic word images, which are evenly split into 500 training and 500 testing. There are no common words between the training and testing images. > To model the realistic use cases, we add moderate distortions and noise to the synthetic data.
Provide a detailed description of the following dataset: VFR-447
VFR-2420
A synthetic dataset containing word images of 447 typefaces with font variations for each typeface, created for visual font recognition. > We collect in total 447 typefaces, each with different number of variations resulting from combinations of different styles, e.g., regular, semibold, bold, black, and italic, leading to 2,420 font classes in the end. > Each class in VFR-447 and VFR-2420 has 1,000 synthetic word images, which are evenly split into 500 training and 500 testing. There are no common words between the training and testing images. > To model the realistic use cases, we add moderate distortions and noise to the synthetic data.
Provide a detailed description of the following dataset: VFR-2420
VFR-Wild
325 word images intended for font recognition, whose fonts are included in [VFR-447] (and [VFR-2420]). > (...) 325 real world test images for the font classes we have in the training set. These images were collected from typography forums, such as myfonts.com, where people post these images seeking help from experts to identify the fonts. Compared with the synthetic data, these images typically have much larger appearance variations caused by scale, background, lighting, noise, perspective distortions, and compression artifacts. We manually cropped the texts from these images with a bounding box to normalize the text size approximately to the same scale as the synthetic data. [VFR-447]: https://paperswithcode.com/dataset/vfr-447 [VFR-2420]: https://paperswithcode.com/dataset/vfr-2420
Provide a detailed description of the following dataset: VFR-Wild
AdobeVFR real
Subset of AdobeVFR. The dataset contains "real-world text images". > We collected 201,780 text images from various typography forums, where people post these images seeking help from experts to identify the fonts. Most of them come with hand-annotated font labels which may be inaccurate. (...) Finally, we obtain 4,384 real-world test images with reliable labels, covering 617 classes (out of 2,383). (...) Removing the 4,384 labeled images from the full set, we are left with 197,396 unlabeled real- world images which we denote as *VFR_real_u*. The labeled images form *VFR_real_test*. The other part of AdobeVFR consists of synthetic data (with 2383 classes).
Provide a detailed description of the following dataset: AdobeVFR real
Federated Stack Overflow
This dataset is derived from the Stack Overflow Data hosted by kaggle.com and available to query through Kernels using the BigQuery API: https://www.kaggle.com/stackoverflow/stackoverflow
Provide a detailed description of the following dataset: Federated Stack Overflow
BPCIS
BPCIS is a collection of 364 bacterial phase contrast images and corresponding label matrices for instance segmentation. Labels were made according to fluorescence channels where possible. Prior to manual annotation, images were automatically cropped into microcolonies and tiled into ensemble images to reduce the empty (non-cell) image regions for training and testing. Subsequent to annotation, we performed non-rigid registration of phase contrast to cell masks. Species include *Escherichia coli, Shigella flexneri, Francisella tularensis subsp. novicida, Acinetobacter baylyi, Burkholderia thailandensis, Helicobacter pylori, Caulobacter crescentus, Streptomyces pristinaespiralis, Vibrio cholerae, Serratia proteamaculans, Pseudomonas aeruginosa, Staphylococcus aureus,* and *Bacillus subtilis*. *E. coli* mutant CS703-1 and *H. pylori* were treated with Aztreonam. Included are independent treatments of *S. flexneri* with cephalexin and A22. Also included are mixtures of *E. coli* and *S. proteamaculans*, mixtures of *P. aeruginosa* and *S. aureus*, and mixtures of *P. aeruginosa*, *S. aureus*, *V. cholerae*, and *B. subtilis*. This dataset represents a wide range of morphological and optical phenotypes both common and uncommon in bacterial microscopy. All manual annotation was performed by Kevin J. Cutler. Micrographs were captured by Kevin J. Cutler, Teresa Lo, Paul A. Wiggins, and Maxime Jacq.
Provide a detailed description of the following dataset: BPCIS
Audio demo files
Audio files that supplement "Treatise on Hearing: The Temporal Auditory Imaging Theory Inspired by Optics and Communication".
Provide a detailed description of the following dataset: Audio demo files
SustainBench
SustainBench is a collection of 15 benchmark tasks across 7 sustainable development goals (SDGs), including tasks related to economic development, agriculture, health, education, water and sanitation, climate action, and life on land. The goals for SustainBench are to:

- lower the barriers to entry for the machine learning community to contribute to measuring and achieving the SDGs;
- provide standard benchmarks for evaluating machine learning models on tasks across a variety of SDGs; and
- encourage the development of novel machine learning methods where improved model performance facilitates progress towards the SDGs.
Provide a detailed description of the following dataset: SustainBench
HC18
A dataset for automated measurement of fetal head circumference using 2D ultrasound images.
Provide a detailed description of the following dataset: HC18
BCSS
The BCSS dataset contains over 20,000 segmentation annotations of tissue regions from breast cancer images from The Cancer Genome Atlas (TCGA). This large-scale dataset was annotated through the collaborative effort of pathologists, pathology residents, and medical students using the Digital Slide Archive. It enables the generation of highly accurate machine-learning models for tissue segmentation.
Provide a detailed description of the following dataset: BCSS
unarXive
A scholarly data set with publications' full-text, annotated in-text citations, and links to metadata. The unarXive data set contains:

* One million papers in plain text
* 63 million citation contexts
* 39 million reference strings
* A citation network of 16 million connections

The data is generated from all LaTeX sources on [arXiv](https://arxiv.org/) from 1991–2020/07 and therefore of higher quality than data generated from PDF files. Furthermore, as all citing papers are available in full text, citation contexts of arbitrary size can be extracted. Typical uses of the data set are approaches in:

* Citation recommendation
* Citation context analysis
* Reference string parsing

The code for generating the data set is [publicly available](https://github.com/IllDepence/unarXive).
Provide a detailed description of the following dataset: unarXive
Next2You data and results dataset
This record serves as an index to the other dataset releases that are part of the paper "Next2You: Robust Copresence Detection Based on Channel State Information" by Mikhail Fomichev, Luis F. Abanto-Leon, Max Stiegler, Alejandro Molina, Jakob Link, Matthias Hollick, in ACM Transactions on Internet of Things (2021).
Provide a detailed description of the following dataset: Next2You data and results dataset
GRB
**Graph Robustness Benchmark** (**GRB**) provides scalable, unified, modular, and reproducible evaluation on the adversarial robustness of graph machine learning models. GRB has elaborated datasets, unified evaluation pipeline, modular coding framework, and reproducible leaderboards, which facilitate the developments of graph adversarial learning, summarizing existing progress and generating insights into future research. GitHub: [https://github.com/thudm/grb](https://github.com/thudm/grb)
Provide a detailed description of the following dataset: GRB
NAO
**Natural Adversarial Objects** (**NAO**) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence.
Provide a detailed description of the following dataset: NAO
Retinal-Lesions
- Over 1.5K images selected from the public Kaggle DR Detection dataset;
- Five DR grades (DR0 / DR1 / DR2 / DR3 / DR4), re-labeled by a panel of 45 experienced ophthalmologists;
- Eight retinal lesion classes, including microaneurysm, intraretinal hemorrhage, hard exudate, cotton-wool spot, vitreous hemorrhage, preretinal hemorrhage, neovascularization and fibrous proliferation;
- Over 34K expert-labeled pixel-level lesion segments;
- Multi-task, i.e., lesion segmentation, lesion classification, and DR grading.
Provide a detailed description of the following dataset: Retinal-Lesions
DSurVD
The Distorted Surveillance Video Database (DSurVD) is a large-scale dataset. It can be downloaded from: https://sites.google.com/site/sorsyuanyuan/home/dsurvd
Provide a detailed description of the following dataset: DSurVD
WildReceipt
WildReceipt is a collection of receipts. For each photo, it contains a list of OCR results, each with a bounding box, text, and class. It contains 1,765 photos, 25 classes, and 50,000 text boxes. The goal is to benchmark "key information extraction", i.e., extracting key information from documents. There are two different modalities, textual and visual features, which makes it an interesting problem. Potential uses: extracting information from documents. *The dataset is pending release.*
Provide a detailed description of the following dataset: WildReceipt
ParsTwiner
ParsTwiner is an open, broad-coverage corpus for informal Persian named entity recognition, collected from Twitter.
Provide a detailed description of the following dataset: ParsTwiner
ESC50
The ESC-50 dataset is a labeled collection of 2,000 environmental audio recordings suitable for benchmarking methods of environmental sound classification. The dataset consists of 5-second-long recordings organized into 50 semantic classes (with 40 examples per class) loosely arranged into 5 major categories. Reference: [https://dl.acm.org/doi/10.1145/2733373.2806390](https://dl.acm.org/doi/10.1145/2733373.2806390)
Provide a detailed description of the following dataset: ESC50
Kinetics-Sound
This is a subset of Kinetics-400, introduced in Look, Listen and Learn by Relja Arandjelovic and Andrew Zisserman.
Provide a detailed description of the following dataset: Kinetics-Sound
PhysioNet Challenge 2018
Data for this challenge were contributed by the Massachusetts General Hospital's (MGH) Computational Clinical Neurophysiology Laboratory (CCNL) and the Clinical Data Animation Laboratory (CDAC). The dataset includes 1,985 subjects who were monitored at an MGH sleep laboratory for the diagnosis of sleep disorders. The data were partitioned into balanced training (n = 994) and test (n = 989) sets. The sleep stages of the subjects were annotated by clinical staff at the MGH according to the American Academy of Sleep Medicine (AASM) manual for the scoring of sleep. More specifically, the following six sleep stages were annotated in 30-second contiguous intervals: wakefulness, stage 1, stage 2, stage 3, rapid eye movement (REM), and undefined. Certified sleep technologists at the MGH also annotated waveforms for the presence of arousals that interrupted the sleep of the subjects. The annotated arousals were classified as either: spontaneous arousals, respiratory effort related arousals (RERA), bruxisms, hypoventilations, hypopneas, apneas (central, obstructive and mixed), vocalizations, snores, periodic leg movements, Cheyne-Stokes breathing, or partial airway obstructions. The subjects had a variety of physiological signals recorded as they slept through the night, including electroencephalography (EEG), electrooculography (EOG), electromyography (EMG), electrocardiography (EKG), and oxygen saturation (SaO2). Excluding SaO2, all signals were sampled at 200 Hz and measured in microvolts. For analytic convenience, SaO2 was resampled to 200 Hz and is measured as a percentage.
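Since stages are scored in contiguous 30-second intervals at 200 Hz, each scoring epoch spans 6,000 samples. A minimal sketch of this segmentation, using a synthetic stand-in signal, is shown below.

```python
# Sketch of slicing a 200 Hz channel into 30-second scoring epochs.
import numpy as np

FS = 200                                 # sampling rate in Hz
EPOCH_SECONDS = 30
samples_per_epoch = FS * EPOCH_SECONDS   # 6,000 samples per scored epoch

signal = np.random.randn(FS * 3600)      # stand-in for one hour of EEG
n_epochs = len(signal) // samples_per_epoch
epochs = signal[: n_epochs * samples_per_epoch].reshape(n_epochs, samples_per_epoch)
print(epochs.shape)                      # (120, 6000) for one hour
```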
Provide a detailed description of the following dataset: PhysioNet Challenge 2018
MoviePlotEvents
A version of the CMU Movie Summary Corpus (http://www.cs.cmu.edu/~ark/personas/), which was originally scraped from plot summaries from Wikipedia, with some cleaning and sentences turned into events & sorted into "genres" (via LDA).
Provide a detailed description of the following dataset: MoviePlotEvents
CMU Movie Summary Corpus
Dataset [46 M] and readme: 42,306 movie plot summaries extracted from Wikipedia + aligned metadata extracted from Freebase, including:

* Movie box office revenue, genre, release date, runtime, and language
* Character names and aligned information about the actors who portray them, including gender and estimated age at the time of the movie's release

Supplement: Stanford CoreNLP-processed summaries [628 M]. All of the plot summaries from above, run through the Stanford CoreNLP pipeline (tagging, parsing, NER and coref).
Provide a detailed description of the following dataset: CMU Movie Summary Corpus
Scifi TV Shows
A collection of long-running (80+ episodes) science fiction TV show synopses, scraped from Fandom.com wikis. Collected Nov 2017. Each episode is considered a "story". Contains plot summaries from:

* Babylon 5 (https://babylon5.fandom.com/wiki/Main_Page) - 84 stories
* Doctor Who (https://tardis.fandom.com/wiki/Doctor_Who_Wiki) - 311 stories
* Doctor Who spin-offs - 95 stories
* Farscape (https://farscape.fandom.com/wiki/Farscape_Encyclopedia_Project:Main_Page) - 90 stories
* Fringe (https://fringe.fandom.com/wiki/FringeWiki) - 87 stories
* Futurama (https://futurama.fandom.com/wiki/Futurama_Wiki) - 87 stories
* Stargate (https://stargate.fandom.com/wiki/Stargate_Wiki) - 351 stories
* Star Trek (https://memory-alpha.fandom.com/wiki/Star_Trek) - 701 stories
* Star Wars books (https://starwars.fandom.com/wiki/Main_Page) - 205 stories
* Star Wars Rebels - 65 stories
* X-Files (https://x-files.fandom.com/wiki/Main_Page) - 200 stories

Total: 2,276 stories

The dataset is "eventified" and generalized (see _LJ Martin, P Ammanabrolu, X Wang, W Hancock, S Singh, B Harrison, and MO Riedl. Event Representations for Automated Story Generation with Deep Neural Nets, Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018._ for details on these processes) and split into train-test-validation sets for converting events into full sentences.
Provide a detailed description of the following dataset: Scifi TV Shows
Embrapa ADD 256
[![DOI](https://zenodo.org/badge/419452503.svg)](https://zenodo.org/badge/latestdoi/419452503)

This is a detailed description of the dataset, a data sheet for the dataset as proposed by [Gebru *et al.*](https://arxiv.org/abs/1803.09010)

Motivation for Dataset Creation
-------------------------------

### Why was the dataset created?

Embrapa ADD 256 (*Apples by Drones Detection Dataset — 256 × 256*) was created to provide images and annotation for research on *apple detection in orchards* for UAV-based monitoring in apple production.

### What (other) tasks could the dataset be used for?

Apple detection in *low-resolution scenarios*, similar to the aerial images employed here.

### Who funded the creation of the dataset?

The building of the ADD256 dataset was supported by the Embrapa SEG Project 01.14.09.001.05.04, *Image-based metrology for Precision Agriculture and Phenotyping*, and [FAPESP](https://fapesp.br/) under grant 2017/19282-7.

Dataset Composition
-------------------

### What are the instances?

Each instance consists of an RGB image and an annotation describing apple locations as _circular markers_ (i.e., presenting **center and radius**).

### How many instances of each type are there?

The dataset consists of 1,139 images containing 2,471 apples.

### What data does each instance consist of?

Each instance contains an 8-bit RGB image. Its corresponding annotation is found in the JSON files: each apple marker is composed of its center (cx, cy) and its radius (in pixels), as seen below:

    "gebler-003-06.jpg": [
        { "cx": 116, "cy": 117, "r": 10 },
        { "cx": 134, "cy": 113, "r": 10 },
        { "cx": 221, "cy": 95, "r": 11 },
        { "cx": 206, "cy": 61, "r": 11 },
        { "cx": 92, "cy": 1, "r": 10 }
    ],

`Dataset.ipynb` is a Jupyter Notebook presenting a code example for reading the data as a PyTorch Dataset (it should be straightforward to adapt the code for other frameworks such as Keras/TensorFlow, fastai/PyTorch, Scikit-learn, etc.). A minimal PyTorch sketch is also included at the end of this data sheet.

### Is everything included or does the data rely on external resources?

Everything is included in the dataset.

### Are there recommended data splits or evaluation measures?

The dataset comes with specified train/test splits. The splits are found in lists stored as JSON files.

| | Number of images | Number of annotated apples |
| --- | --- | --- |
| Training | 1,025 | 2,204 |
| Test | 114 | 267 |
| Total | 1,139 | 2,471 |

*Dataset recommended split.*

Standard measures from the information retrieval and computer vision literature should be employed: precision and recall, *F1-score* and average precision as seen in [COCO](http://cocodataset.org) and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC).

### What experiments were initially run on this dataset?

The first experiments run on this dataset are described in [*A methodology for detection and location of fruits in apples orchards from aerial images*](https://arxiv.org/abs/2110.12331) by Santos & Gebler (2021).

Data Collection Process
-----------------------

### How was the data collected?

The data employed in the development of the methodology came from two plots located at the Embrapa’s Temperate Climate Fruit Growing Experimental Station at Vacaria-RS (28°30’58.2”S, 50°52’52.2”W). Plants of the varieties _Fuji_ and _Gala_ are present in the dataset, in equal proportions. The images were taken on December 13, 2018, by a UAV (DJI Phantom 4 Pro) that flew over the rows of the field at a height of 12 m. The images mix nadir and non-nadir views, allowing a more extensive view of the canopies.

A subset of the images was randomly selected and 256 × 256 pixel *patches* were extracted.

### Who was involved in the data collection process?

T. T. Santos and L. Gebler captured the images in the field. T. T. Santos performed the annotation.

### How was the data associated with each instance acquired?

The circular markers were annotated using the [VGG Image Annotator (VIA)](https://www.robots.ox.ac.uk/~vgg/software/via/).

**WARNING**: Finding non-ripe apples in low-resolution images of orchards is a challenging task *even for humans*. ADD256 was annotated by a single annotator, so users of this dataset should consider it a *noisy dataset*.

Data Preprocessing
------------------

### What preprocessing/cleaning was done?

No preprocessing was applied.

Dataset Distribution
--------------------

### How is the dataset distributed?

The dataset is [available at GitHub](https://github.com/thsant/add256).

### When will the dataset be released/first distributed?

The dataset was released in October 2021.

### What license (if any) is it distributed under?

The data is released under [**Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license)**](https://creativecommons.org/licenses/by-nc/4.0/). There is a request to cite the corresponding paper if the dataset is used. For commercial use, contact the Embrapa Agricultural Informatics business office.

### Are there any fees or access/export restrictions?

There are no fees or restrictions. For commercial use, contact the Embrapa Agricultural Informatics business office.

Dataset Maintenance
-------------------

### Who is supporting/hosting/maintaining the dataset?

The dataset is hosted at Embrapa Agricultural Informatics and all comments or requests can be sent to [Thiago T. Santos](https://github.com/thsant) (maintainer).

### Will the dataset be updated?

There are no scheduled updates.

### If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?

Contributors should contact the maintainer by e-mail.

### No warranty

The maintainers and their institutions are *exempt from any liability, judicial or extrajudicial, for any losses or damages arising from the use of the data contained in the image database*.
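In the spirit of what `Dataset.ipynb` provides, the following is a minimal sketch of reading the circular-marker annotations as a PyTorch `Dataset`. The file name `annotations.json` and the directory layout are assumptions for illustration, not the actual repository structure.

```python
import json
from pathlib import Path

from PIL import Image
import torch
from torch.utils.data import Dataset


class ADD256Dataset(Dataset):
    """Sketch: yields (image, markers) pairs from ADD256-style JSON annotations.

    `ann_path` and `img_dir` are hypothetical paths; the actual layout is
    documented in the official repository (github.com/thsant/add256).
    """

    def __init__(self, ann_path, img_dir):
        with open(ann_path) as f:
            # {"<image>.jpg": [{"cx": ..., "cy": ..., "r": ...}, ...], ...}
            self.annotations = json.load(f)
        self.img_dir = Path(img_dir)
        self.names = sorted(self.annotations)

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(self.img_dir / name).convert("RGB")
        # One row per apple: (cx, cy, r) in pixels.
        markers = torch.tensor(
            [[m["cx"], m["cy"], m["r"]] for m in self.annotations[name]],
            dtype=torch.float32,
        )
        return image, markers


# Hypothetical usage:
# ds = ADD256Dataset("annotations.json", "images/")
# image, markers = ds[0]
```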
Provide a detailed description of the following dataset: Embrapa ADD 256
Cryptics
Official dataset of *Decrypting Cryptic Crosswords: Semantically Complex Wordplay Puzzles as a Target for NLP*. See github.com/jsrozner/decrypt and https://doi.org/10.5061/dryad.n02v6wwzp
Provide a detailed description of the following dataset: Cryptics
mini-ImageNet-LT
mini-ImageNet was proposed in *Matching Networks for One Shot Learning* for few-shot learning evaluation, in an attempt to have an ImageNet-like dataset while requiring fewer resources. Following the statistics of CIFAR-100-LT with an imbalance factor of 100, we construct a long-tailed variant of mini-ImageNet that features all 100 classes and an imbalanced training set with $N_1 = 500$ and $N_K = 5$ images. For evaluation, both the validation and test sets are balanced and contain 10K images, i.e., 100 samples for each of the 100 categories.
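The per-class training counts can be generated with the exponential long-tail profile commonly used for CIFAR-100-LT; the snippet below is a sketch under that assumption, not necessarily the exact construction used here.

```python
# Assumed exponential long-tail profile: class k (0-indexed) keeps
# N_1 * (N_K / N_1) ** (k / (K - 1)) training images, so counts decay
# from 500 down to 5 and the imbalance factor N_1 / N_K equals 100.
K, N_1, N_K = 100, 500, 5
counts = [round(N_1 * (N_K / N_1) ** (k / (K - 1))) for k in range(K)]

print(counts[0], counts[-1])     # 500 5
print(counts[0] / counts[-1])    # 100.0, the imbalance factor
print(sum(counts))               # total number of training images under this profile
```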
Provide a detailed description of the following dataset: mini-ImageNet-LT
SentiMix
Sentiment analysis of code-mixed tweets.
Provide a detailed description of the following dataset: SentiMix
fNIRS2MW
The Tufts fNIRS to Mental Workload (fNIRS2MW) open-access dataset is a new dataset for building machine learning classifiers that can consume a short window (30 seconds) of multivariate fNIRS recordings and predict the mental workload intensity of the user during that window.

You can use this dataset for tasks like:

- time series classification using sliding windows (a minimal windowing sketch is given at the end of this entry)
- domain adaptation or domain generalization (how well does your classifier generalize to a new subject?)
- fairness of time series classifiers (does performance of your classifier vary by subject race or gender?)

**Useful Links:**

* Project Website (and data download links): <https://tufts-hci-lab.github.io/code_and_datasets/fNIRS2MW.html>
* Code for benchmarks: <https://github.com/tufts-ml/fNIRS-mental-workload-classifiers>
* DataSheet documentation: <https://github.com/tufts-ml/fNIRS-mental-workload-classifiers/blob/main/Datasheet-Tufts-fNIRS2MW.pdf>
* Academic Paper (published at NeurIPS Datasets & Benchmarks '21): <https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/bd686fd640be98efaae0091fa301e613-Paper-round2.pdf>

## Motivation

We are interested in building brain computer interfaces (BCIs) that would help out everyday computer users working at a desktop or laptop. In our target future use case, a user would actively use a keyboard and mouse as usual, but also wear a non-intrusive headband sensor that would passively provide real-time measurements of brain activity to the computer. Based on moment-to-moment estimates of mental workload, the computer could adjust the interface to support the user. Functional near-infrared spectroscopy (fNIRS) is a promising sensor technology for achieving this goal of "everyday BCI", compared to alternatives like EEG or fMRI. We have developed a prototype fNIRS probe mounted on a headband that we used to collect this dataset (see our paper for details).

## Dataset Overview

For a complete dataset summary, see our public [DataSheet PDF](https://github.com/tufts-ml/fNIRS-mental-workload-classifiers/blob/main/Datasheet-Tufts-fNIRS2MW.pdf).

For each participant (68 recommended; 87 total), the dataset contains the following records obtained during one 30-60 minute experimental session. Each subject contributes just over 21 minutes of fNIRS data from the desired n-back experimental conditions, with the remaining time related to rest or instruction periods.

- *fNIRS recordings*
  - Multivariate (D=8) time-series representing brain activity throughout the session, recorded by a sensor probe placed on the forehead and secured via headband.
  - All measurements are recorded at a regular sampling rate of 5.2 Hz.
  - At each timestep, we record 8 real-valued measurements, one for each combination of:
    - 2 blood chemical concentration changes (oxygenated hemoglobin and deoxygenated hemoglobin)
    - 2 optical data types used for the measurement (intensity and phase)
    - 2 spatial locations on the forehead
  - The units of each measurement are micro-moles of (oxy-/deoxy-)hemoglobin per liter of tissue.
- *Activity labels*
  - Annotations of the experimental task activity the subject performed throughout the session, including instruction, rest, and active experiment segments.
  - We label each segment of the active experiment as one of four possible n-back working memory intensity levels (0-back, 1-back, 2-back, or 3-back). Increased intensity levels are intended to induce an increased level of cognitive workload.
  - For all experiments reported in the paper, we focus on a binary task (0-back vs. 2-back).
- *Demographics*
  - The participant’s age, gender, race, handedness, and other attributes. This lets us measure and audit performance by subpopulation (e.g., how does the classifier perform on white subjects vs. black subjects?).

## Publications

*The Tufts fNIRS Mental Workload Dataset & Benchmark for Brain-Computer Interfaces that Generalize*
Zhe Huang, Liang Wang, Giles Blaney, Christopher Slaughter, Devon McKeon, Ziyu Zhou, Robert Jacob, and Michael C. Hughes
To appear in the Proceedings of Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks, 2021
Link to Paper PDF: <https://openreview.net/pdf?id=QzNHE7QHhut>
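As a rough illustration of the sliding-window task mentioned above, the sketch below cuts a session-long recording into 30-second windows at the 5.2 Hz sampling rate; the array names and shapes are assumptions for illustration, not the official benchmark code.

```python
import numpy as np

# Minimal sketch: cutting a session-long fNIRS recording into
# 30-second windows for classification.
FS = 5.2                              # sampling rate in Hz
WINDOW_SEC = 30
WINDOW_LEN = int(FS * WINDOW_SEC)     # 156 samples per window

# `session` is a hypothetical (T, 8) array: T timesteps x 8 fNIRS measurements
# (2 chromophores x 2 optical data types x 2 forehead locations).
session = np.random.randn(int(FS * 60 * 21), 8)   # stand-in for ~21 minutes of data

stride = WINDOW_LEN                   # non-overlapping; a smaller stride gives overlapping windows
starts = range(0, session.shape[0] - WINDOW_LEN + 1, stride)
windows = np.stack([session[s:s + WINDOW_LEN] for s in starts])
print(windows.shape)                  # (n_windows, 156, 8)
```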
Provide a detailed description of the following dataset: fNIRS2MW
Multilingual Terms of Service
The first annotated corpus for multilingual analysis of potentially unfair clauses in online Terms of Service. The dataset comprises a total of 100 contracts, obtained from 25 Terms of Service documents each annotated in four different languages: English, German, Italian, and Polish. For each contract, clauses that are potentially unfair to the consumer are annotated, according to nine different unfairness categories.
Provide a detailed description of the following dataset: Multilingual Terms of Service
Archival bundle of the data used for "Predictive Auto-scaling with OpenStack Monasca" (UCC 2021)
Follow the instructions provided in the [companion repo](https://github.com/giacomolanciano/UCC2021-predictive-auto-scaling-openstack) to automatically download and decompress the archive. The following files are included:

| File | Description |
| :------------------------------------------------------ | :------------------------------------------------------------- |
| `amphora-x64-haproxy.qcow2` | Image used to create Octavia amphorae |
| `distwalk-{lin,mlp,rnn,stc}-<INCREMENTAL-ID>.log` | `distwalk` run log |
| `distwalk-{lin,mlp,rnn,stc}-<INCREMENTAL-ID>-pred.json` | Predictive metric data exported from Monasca DB |
| `distwalk-{lin,mlp,rnn,stc}-<INCREMENTAL-ID>-real.json` | Actual metric data exported from Monasca DB |
| `distwalk-{lin,mlp,rnn,stc}-<INCREMENTAL-ID>-times.csv` | Client-side response time for each request sent during a run |
| `model_dumps/*` | Dumps of the models and data scalers used for the validation |
| `predictor.log` | `monasca-predictor` log |
| `predictor-times.log` | `monasca-predictor` log (timing info only) |
| `predictor-times-{lin,mlp,rnn}.{csv,log}` | `monasca-predictor` log (timing info only, grouped by predictor) |
| `super_steep_behavior.csv` | Dataset used to train MLP and RNN models |
| `test_behavior_02_distwalk-6t_last100.dat` | `distwalk` load trace |
| `ubuntu-20.04-min-distwalk.img` | Image used to create Nova instances for the scaling group |
Provide a detailed description of the following dataset: Archival bundle of the data used for "Predictive Auto-scaling with OpenStack Monasca" (UCC 2021)
MONK's Problems
There are three MONK's problems. The domains for all MONK's problems are the same (described below). One of the MONK's problems has noise added. For each problem, the domain has been partitioned into a train and test set.
Provide a detailed description of the following dataset: MONK's Problems