Columns: dataset_name · description · prompt
Enoch Oluwumi
Briefly describe the dataset. Provide: * a high-level explanation of the dataset characteristics * explain motivations and summary of its content * potential use cases of the dataset If the description or image is from a different paper, please refer to it as follows: Source: [title](url) Image Source: [title](url)
Provide a detailed description of the following dataset: Enoch Oluwumi
Visuomotor affordance learning (VAL) robot interaction dataset
This data contains about 2500 trajectories (with images and actions) of a Sawyer robot interacting with various objects. Examples from the dataset are shown in the adjacent video. We provide two versions of the VAL dataset: one with low-res images (1.4 GB) and one with high-res images (162 GB). The data quantity and format are the same between the two versions; only the image observation quality differs. The smaller dataset, with 48x48x3 images that can be used for, e.g., offline RL, is available for direct download: <https://drive.google.com/file/d/1UuWANkVtWLg4egIK2LB_YCKuF87rMQ1H/view?usp=sharing> The larger dataset, with 480x640x3 images that might be preferred for, e.g., representation learning, is available in this Google Drive folder: <https://drive.google.com/drive/folders/1kD9kyP7-RlIrSnuN7rpEASAGWp5qnNov?usp=sharing> To download the larger dataset, we suggest using https://rclone.org/

The data is sorted into several folders. There are a total of 300 files and 2500 trajectories.

- fixed_drawer - Human-controlled demonstration data opening and closing drawers. (~10%)
- fixed_pnp - Human-controlled demonstration data picking up objects. (~10%)
- fixed_pot - Human-controlled demonstration data interacting with a pot and a lid. (~10%)
- fixed_tray - Human-controlled demonstration data picking up objects and placing them in a tray. (~10%)
- general - Further human-controlled demonstration data collected with the most diversity and variation. (~40%)
- onpolicy_eval - Evaluation data collected by an RL policy. (~10%)
- onpolicy_expl - Exploration data collected by an RL policy. (~10%)
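A minimal sketch of fetching the smaller version programmatically (the `gdown` package and the output file name are assumptions; rclone, as suggested above, works as well for the larger folder):

```python
# Hedged sketch: download the low-res VAL archive from the Google Drive link above.
import gdown

url = "https://drive.google.com/file/d/1UuWANkVtWLg4egIK2LB_YCKuF87rMQ1H/view?usp=sharing"
# output name is hypothetical; fuzzy=True lets gdown parse the share URL directly
gdown.download(url, output="val_lowres_data", fuzzy=True)
```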
Provide a detailed description of the following dataset: Visuomotor affordance learning (VAL) robot interaction dataset
XFUND
XFUND is a multilingual form understanding benchmark dataset that includes human-labeled forms with key-value pairs in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese).
Provide a detailed description of the following dataset: XFUND
NTIRE 2021 HDR
The **NTIRE 2021 HDR** dataset was built for the first challenge on high-dynamic-range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021. The challenge aims at estimating an HDR image from one or multiple low-dynamic-range (LDR) observations, which might suffer from under- or over-exposed regions and different sources of noise. The challenge is composed of two tracks: in Track 1 only a single LDR image is provided as input, whereas in Track 2 three differently exposed LDR images with inter-frame motion are available. In both tracks, the ultimate goal is to achieve the best objective HDR reconstruction in terms of PSNR with respect to a ground-truth image, evaluated both directly and after a canonical tone-mapping operation.
Provide a detailed description of the following dataset: NTIRE 2021 HDR
BAAI-VANJEE
**BAAI-VANJEE** is a dataset for benchmarking and training various computer vision tasks such as 2D/3D object detection and multi-sensor fusion. The BAAI-VANJEE roadside dataset consists of LiDAR data and RGB images collected by a VANJEE smart base station placed on the roadside about 4.5 m high. The dataset contains 2500 frames of LiDAR data and 5000 frames of RGB images, of which 20% were collected at the same time. It also covers 12 classes of objects, with 74K 3D object annotations and 105K 2D object annotations.
Provide a detailed description of the following dataset: BAAI-VANJEE
PS5k
We introduce a new data set containing 5000 scientific papers and their slides crawled from conference proceeding websites such as aclweb and usenix.
Provide a detailed description of the following dataset: PS5k
SemEval-2013 Task 2
The **SemEval-2013 Task 2** dataset contains data for two subtasks: A, an expression-level subtask, and B, a message-level subtask. Crowdsourcing was used to label a large Twitter training dataset along with additional test sets of Twitter and SMS messages for both subtasks.
Provide a detailed description of the following dataset: SemEval-2013 Task 2
MLQuestions
**MLQuestions** is a domain-adaptation dataset for the machine learning domain containing 50K unaligned passages and 35K unaligned questions, and 3K aligned passage and question pairs.
Provide a detailed description of the following dataset: MLQuestions
iWildCam 2021
**iWildCam 2021** is a dataset for counting the number of animals of each species that appear in sequences of images captured with camera traps. The training data and test data are from different cameras spread across the globe. The set of species seen in each camera overlap but are not identical. The challenge is to categorize species and count the number of individuals across image bursts.
Provide a detailed description of the following dataset: iWildCam 2021
NeoRL
- **NeoRL** is a collection of environments and datasets for offline reinforcement learning with a special focus on real-world applications. The design follows real-world properties such as conservative behavior policies, limited amounts of data, high-dimensional state and action spaces, and the highly stochastic nature of the environments.
- The datasets cover robotics, industrial control, finance trading, and city management tasks with real-world properties, and come in three dataset sizes and three data-quality levels to mimic the datasets encountered in offline RL scenarios.
- Users can use the datasets to evaluate offline RL algorithms under conditions close to real-world applications.
Provide a detailed description of the following dataset: NeoRL
ARC Ukiyo-e Faces
**ARC Ukiyo-e Faces** is a large-scale (>10k paintings, >20k faces) Ukiyo-e dataset with coherent semantic labels and geometric annotations through augmenting and organizing existing datasets with automatic detection.
Provide a detailed description of the following dataset: ARC Ukiyo-e Faces
IBims-1
iBims-1 (independent Benchmark images and matched scans - version 1) is a new high-quality RGB-D dataset, especially designed for testing single-image depth estimation (SIDE) methods. A customized acquisition setup, composed of a digital single-lens reflex (DSLR) camera and a high-precision laser scanner, was used to acquire high-resolution images and highly accurate depth maps of diverse indoor scenarios. Compared to related RGB-D datasets, iBims-1 stands out due to a very low noise level, sharp depth transitions, no occlusions, and high depth ranges.

Our dataset consists of the following components:

Core dataset:
- 100 RGB-D image pairs of various indoor scenes in high and low resolution
- Masks for invalid, transparent and planar regions (tables, floors, walls)
- Masks for distinct depth transitions
- Camera calibration parameters

Auxiliary dataset:
- 56 different color and geometric augmentations for each image of the core dataset
- Additional hand-held images for testing MVS methods
- Images of printed patterns and photos posted on a wall to assess performance on textured planar surfaces
- Several RGB-D image sequences of static scenes with varying illumination
Provide a detailed description of the following dataset: IBims-1
UIT-ViSFD
UIT-ViSFD (Vietnamese Smartphone Feedback Dataset) is a new benchmark corpus built with a strict annotation scheme for evaluating aspect-based sentiment analysis. It consists of 11,122 human-annotated comments on mobile e-commerce and is freely available for research purposes.
Provide a detailed description of the following dataset: UIT-ViSFD
ToyADMOS2
**ToyADMOS2** is a dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions.
Provide a detailed description of the following dataset: ToyADMOS2
Colored MNIST
Colored MNIST is a synthetic binary classification task derived from [MNIST](/dataset/mnist).
Provide a detailed description of the following dataset: Colored MNIST
Stanford Schema2QA Dataset
Schema2QA is the first large question answering dataset over real-world Schema.org data. It covers 6 common domains: restaurants, hotels, people, movies, books, and music, based on Schema.org metadata crawled from 6 different websites (Yelp, Hyatt, LinkedIn, IMDb, Goodreads, and last.fm). In total, there are **over 2,000,000 examples for training**, consisting of both augmented human paraphrase data and high-quality synthetic data generated by Genie. All questions are annotated with the executable virtual assistant programming language ThingTalk.

Schema2QA includes challenging evaluation questions collected from crowd workers. Workers are prompted with only what the domain is and what properties are supported. Thus, the sentences are natural and diverse. They also contain entities unseen during training. The collected sentences are manually annotated with ThingTalk by the authors. In total there are **over 5,000 examples for dev and test**.

An example of an evaluation question and its ThingTalk annotation is shown below: "What are the highest ranked burger joints in the 40 mile area around Asheville NC?"

```
sort(aggregateRating.ratingValue desc of @org.schema.Restaurant.Restaurant() filter distance(geo, new Location("asheville nc" )) <= 40 mi && servesCuisine =~ "burger")[1] ;
```
Provide a detailed description of the following dataset: Stanford Schema2QA Dataset
CBC
The complete blood count (CBC) dataset contains 360 blood smear images along with their annotation files, split into training, testing, and validation sets. The training folder contains 300 images with annotations; the testing and validation folders each contain 60 images with annotations. We made some modifications to the original dataset to prepare this CBC dataset: some of the image annotation files list far fewer red blood cells (RBCs) than are actually present, and one annotation file does not include any RBC at all although the cell smear image contains RBCs. So, we cleaned up all the fallacious files and split the dataset into three parts. Among the 360 smear images, 300 blood cell images with annotations are used as the training set first, and then the remaining 60 images with annotations are used as the testing set. Due to the shortage of data, a subset of the training set is used to prepare the validation set, which contains 60 images with annotations.
Provide a detailed description of the following dataset: CBC
Quo Vadis, Open Source?
This is a complete set of the data we collected and analyzed in our study "Quo Vadis, Open Source? The Limits of Open Source Growth". Please see our GitHub repository for details and the tool chain.
Provide a detailed description of the following dataset: Quo Vadis, Open Source?
X4K1000FPS
A dataset of high-resolution (4096×2160), high-fps (1000 fps) video frames with extreme motion. X-TEST consists of 15 video clips, each a sequence of 33 consecutive 4K 1000-fps frames. X-TRAIN consists of 4,408 clips from 110 scenes of various types; each clip is a sequence of 65 consecutive 1000-fps frames.
Provide a detailed description of the following dataset: X4K1000FPS
Webis-ConcluGen-21
**Webis-ConcluGen-21** is a large-scale corpus of 136,996 samples of argumentative texts and their conclusions used for the task of generating informative conclusions.
Provide a detailed description of the following dataset: Webis-ConcluGen-21
CTFW
**CTFW** is a large annotated procedural text dataset in the cybersecurity domain (3154 documents). It is used to generate flow graphs from procedural texts.
Provide a detailed description of the following dataset: CTFW
Herbarium 2021 Half–Earth
The **Herbarium Half-Earth** dataset is a large and diverse dataset of herbarium specimens for automatic taxon recognition. The Herbarium 2021: Half-Earth Challenge dataset includes more than 2.5M images representing nearly 65,000 species from the Americas and Oceania that have been aligned to a standardized plant list. This dataset has a long tail; there is a minimum of 3 images per species, but some species are represented by more than 100 images. The dataset only includes vascular land plants, i.e., lycophytes, ferns, gymnosperms, and flowering plants. The extinct forms of lycophytes are the major component of coal deposits, ferns are indicators of ecosystem health, gymnosperms provide major habitats for animals, and flowering plants provide almost all of our crops, vegetables, and fruits.
Provide a detailed description of the following dataset: Herbarium 2021 Half–Earth
Dark Machines Anomaly Score
This dataset is the outcome of a data challenge conducted as part of the Dark Machines Initiative and the Les Houches 2019 workshop on Physics at TeV colliders. The challenge aims at detecting signals of new physics at the LHC using unsupervised machine learning algorithms. It consists of a large benchmark dataset of >1 billion simulated LHC events, corresponding to 10 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV.
Provide a detailed description of the following dataset: Dark Machines Anomaly Score
PROST
The **PROST** (Physical Reasoning about Objects Through Space and Time) dataset contains 18,736 multiple-choice questions made from 14 manually curated templates, covering 10 physical reasoning concepts. All questions are designed to probe both causal and masked language models in a zero-shot setting.
Provide a detailed description of the following dataset: PROST
COVID-Fact
**COVID-Fact** is a FEVER-like dataset of claims concerning the COVID-19 pandemic. The dataset contains claims, evidence for the claims, and contradictory claims refuted by the evidence.
Provide a detailed description of the following dataset: COVID-Fact
AppleScabLDs
The dataset contains images of apple leaves infected by scab. The images are grouped in two folders: "Healthy" and "Scab". The collection of digital images was carried out in different locations of Latvia. Digital images with characteristic scab symptoms on leaves were collected by the Institute of Horticulture (LatHort) under project "lzp-2019/1-0094 Application of deep learning and datamining for the study of plant-pathogen interaction: the case of apple and pear scab", with the goal of creating a mobile application for apple scab detection using convolutional neural networks. Devices: smartphone cameras (12 MP, 13 MP, 48 MP) and a digital compact camera (10 MP). The collection of images was carried out in field conditions, in orchards. The images were taken at three different times of day - in the morning (9:00-10:00), around noon (12:00-14:00), and in the evening (16:00-17:00) - to provide a variety of natural light conditions. The images were also taken on both sunny and overcast days to provide different types of light (soft light and hard light). The leaves were framed so that they occupied as much of the image area as possible and were in the center of the image, with the focal point on the object. The object may have had other leaves or fruits in the background. The same object was photographed from multiple viewpoints.
Provide a detailed description of the following dataset: AppleScabLDs
AppleScabFDs
The dataset contains images of apples infected by scab. The images are grouped in two folders: "Healthy" and "Scab". The collection of digital images was carried out in different locations of Latvia. Digital images with characteristic scab symptoms on fruits were collected by the Institute of Horticulture (LatHort) under project "lzp-2019/1-0094 Application of deep learning and datamining for the study of plant-pathogen interaction: the case of apple and pear scab", with the goal of creating a mobile application for apple scab detection using convolutional neural networks. Devices: smartphone cameras (12 MP, 13 MP, 48 MP) and a digital compact camera (10 MP). The collection of images was carried out in field conditions, in orchards. The images were taken at three different times of day - in the morning (9:00-10:00), around noon (12:00-14:00), and in the evening (16:00-17:00) - to provide a variety of natural light conditions. The images were also taken on both sunny and overcast days to provide different types of light (soft light and hard light). The objects were framed so that they occupied as much of the image area as possible and were in the center of the image, with the focal point on the object. The object may have had other leaves or fruits in the background. The same object was photographed from multiple viewpoints.
Provide a detailed description of the following dataset: AppleScabFDs
LSEC
The **LSEC** (Live Stream E-Commerce) dataset has two subsets: LSEC-Small and LSEC-Large. It is a dataset for studying e-commerce transactions in the context of live streams, where the streamers are talking about products while interacting with their audience. The dataset consists of interaction information among streamers, users, and products.
Provide a detailed description of the following dataset: LSEC
BiToD
**BiToD** is a bilingual multi-domain dataset for end-to-end task-oriented dialogue modeling. BiToD contains over 7k multi-domain dialogues (144k utterances) with a large and realistic bilingual knowledge base. It serves as an effective benchmark for evaluating bilingual ToD systems and cross-lingual transfer learning approaches.
Provide a detailed description of the following dataset: BiToD
CoSQA
CoSQA (Code Search and Question Answering) includes 20,604 labels for pairs of natural language queries and code, each annotated by at least 3 human annotators.
Provide a detailed description of the following dataset: CoSQA
TikTok Dataset
We learn high-fidelity human depths by leveraging a collection of social media dance videos scraped from the [TikTok mobile social networking application](https://www.tiktok.com/). It is by far one of the most popular video sharing applications across generations, featuring short videos (10-15 seconds) of diverse dance challenges as shown above. We manually found more than 300 dance videos that capture a single person performing dance moves, drawn from TikTok dance challenge compilations across months, performers, and types of dance, favoring moderate movements that do not generate excessive motion blur. For each video, we extract RGB images at 30 frames per second, resulting in more than 100K images. We segmented these images using the [Removebg](https://www.remove.bg/) application and computed the UV coordinates with [DensePose](http://densepose.org/).

Download TikTok Dataset:

* Please use the dataset only for research purposes.
* The dataset can be viewed and downloaded from the [Kaggle page](https://www.kaggle.com/yasaminjafarian/tiktokdataset). (You need to make an account on Kaggle to be able to download the data. It is free!)
* The dataset can also be downloaded from [here](https://drive.google.com/file/d/1dmZh6I3kvh__nB-nWL8VT3jIn3mDL1VA/view) (42 GB). The dataset resolution is 1080 x 604.
* The original YouTube videos corresponding to each sequence and the dance names can be downloaded from [here](https://drive.google.com/file/d/1YvRkJmiO_rN-_a2qhpptwCWIY0oqh-dP/view) (2.6 GB).
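As a rough illustration of the 30 fps frame-dumping step described above, here is a minimal sketch using OpenCV (the file and folder names are hypothetical; the authors' actual pipeline is not part of this description):

```python
# Hedged sketch: dump every frame of a downloaded dance clip to numbered PNGs.
import os
import cv2

def extract_frames(video_path: str, out_dir: str) -> int:
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()           # frames are returned in BGR order
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{count:05d}.png"), frame)
        count += 1
    cap.release()
    return count

# extract_frames("dance_clip.mp4", "frames/dance_clip")  # hypothetical paths
```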
Provide a detailed description of the following dataset: TikTok Dataset
Unsplash2K
Unsplash2K is a high-resolution image dataset with 2K resolution, crawled from Unsplash. It contains 498 high-resolution images and corresponding low-resolution images obtained by bicubic downsampling at x2, x4, and x8 scales. Unsplash2K covers diverse content such as animals, architecture, and flowers.
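A minimal sketch of the kind of bicubic downsampling described above (assumed preprocessing with Pillow; the file paths are hypothetical, and the authors' exact script may differ):

```python
# Hedged sketch: generate x2/x4/x8 bicubic low-resolution counterparts of an HR image.
from PIL import Image

def bicubic_lr(hr_path: str, scale: int) -> Image.Image:
    hr = Image.open(hr_path)
    w, h = hr.size
    return hr.resize((w // scale, h // scale), Image.BICUBIC)

# for s in (2, 4, 8):                                      # hypothetical paths
#     bicubic_lr("unsplash2k/0001.png", s).save(f"lr_x{s}/0001.png")
```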
Provide a detailed description of the following dataset: Unsplash2K
DIPS-Plus
How and where proteins interface with one another can ultimately impact the proteins' functions along with a range of other biological processes. As such, precise computational methods for protein interface prediction (PIP) are highly sought after, as they could yield significant advances in drug discovery and design as well as protein function analysis. However, the traditional benchmark dataset for this task, Docking Benchmark 5 (DB5), contains only a paltry 230 complexes for training, validating, and testing different machine learning algorithms. In this work, we expand on a dataset recently introduced for this task, the Database of Interacting Protein Structures (DIPS), to present DIPS-Plus, an enhanced, feature-rich dataset of 42,112 complexes for geometric deep learning of protein interfaces. The previous version of DIPS contains only the Cartesian coordinates and types of the atoms comprising a given protein complex, whereas DIPS-Plus now includes a plethora of new residue-level features including protrusion indices, half-sphere amino acid compositions, and new profile hidden Markov model (HMM)-based sequence features for each amino acid, giving researchers a large, well-curated feature bank for training protein interface prediction methods.
Provide a detailed description of the following dataset: DIPS-Plus
Disfl-QA
**Disfl-QA** is a targeted dataset for contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages. Disfl-QA builds upon the [SQuAD-v2](squad) dataset, where each question in the dev set is annotated to add a contextual disfluency using the paragraph as a source of distractors. The final dataset consists of ~12k (disfluent question, answer) pairs. Over 90% of the disfluencies are corrections or restarts, making it a much harder test set for disfluency correction. Disfl-QA aims to fill a major gap between speech and NLP research community. We hope the dataset can serve as a benchmark dataset for testing robustness of models against disfluent inputs.
Provide a detailed description of the following dataset: Disfl-QA
TimeDial
**TimeDial** presents a crowdsourced English challenge set for temporal commonsense reasoning, formulated as a multiple-choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from [DailyDialog](dailydialog), a multi-turn dialog corpus. The TimeDial dataset consists of 1,104 dialog instances, each with 2 correct and 2 incorrect options.
Provide a detailed description of the following dataset: TimeDial
CFD
CrackForest Dataset is an annotated road crack image database which can reflect urban road surface condition in general.

1. Citation. If you use this crack image dataset, we appreciate it if you cite an appropriate subset of the following papers:

```
@article{shi2016automatic,
  title={Automatic road crack detection using random structured forests},
  author={Shi, Yong and Cui, Limeng and Qi, Zhiquan and Meng, Fan and Chen, Zhensong},
  journal={IEEE Transactions on Intelligent Transportation Systems},
  volume={17},
  number={12},
  pages={3434--3445},
  year={2016},
  publisher={IEEE}
}

@inproceedings{cui2015pavement,
  title={Pavement Distress Detection Using Random Decision Forests},
  author={Cui, Limeng and Qi, Zhiquan and Chen, Zhensong and Meng, Fan and Shi, Yong},
  booktitle={International Conference on Data Science},
  pages={95--102},
  year={2015},
  organization={Springer}
}
```

2. License. The dataset is made available for non-commercial research purposes only.

3. History. Version 1.0 (2015/09/29) initial version.
Provide a detailed description of the following dataset: CFD
Rel3D
Understanding spatial relations (e.g., “laptop on table”) in visual input is important for both humans and robots. Existing datasets are insufficient as they lack large-scale, high-quality 3D ground truth information, which is critical for learning spatial relations. In this paper, we fill this gap by constructing Rel3D: the first large-scale, human-annotated dataset for grounding spatial relations in 3D. Rel3D enables quantifying the effectiveness of 3D information in predicting spatial relations on large-scale human data. Moreover, we propose minimally contrastive data collection—a novel crowdsourcing method for reducing dataset bias. The 3D scenes in our dataset come in minimally contrastive pairs: two scenes in a pair are almost identical, but a spatial relation holds in one and fails in the other. We empirically validate that minimally contrastive examples can diagnose issues with current relation detection models as well as lead to sample-efficient training. Code and data are available at https://github.com/princeton-vl/Rel3D.
Provide a detailed description of the following dataset: Rel3D
Topo-boundary
**Topo-boundary** is a benchmark dataset for off-line topological road-boundary detection. The dataset contains 21,556 4-channel aerial images of size 1000 x 1000. Each image is provided with 8 training labels for different sub-tasks. Image source: [https://github.com/TonyXuQAQ/Topo-boundary](https://github.com/TonyXuQAQ/Topo-boundary)
Provide a detailed description of the following dataset: Topo-boundary
Swords
**Swords** (Stanford Word Substitution) is a benchmark for lexical substitution, the task of finding appropriate substitutes for a target word in a context. Swords is composed of context, target word, and substitute triples (c, w, w'), each of which has a score that indicates the appropriateness of the substitute.
Provide a detailed description of the following dataset: Swords
Emol news articles and comments
The dataset provides news articles obtained from emol.cl, including their content, title, and all the comments they received, in JSON format.
Provide a detailed description of the following dataset: Emol news articles and comments
JFT-3B
**JFT-3B** is an internal Google dataset and a larger version of the JFT-300M dataset. It consists of nearly 3 billion images, annotated with a class-hierarchy of around 30k labels via a semi-automatic pipeline. In other words, the data and associated labels are noisy.
Provide a detailed description of the following dataset: JFT-3B
VOID
The dataset was collected using the Intel RealSense D435i camera, which was configured to produce synchronized accelerometer and gyroscope measurements at 400 Hz, along with synchronized VGA-size (640 x 480) RGB and depth streams at 30 Hz. The depth frames are acquired using active stereo and are aligned to the RGB frame using the sensor factory calibration. All the measurements are timestamped. The dataset contains 56 sequences in total, both indoor and outdoor, with challenging motion. Typical scenes include classrooms, offices, stairwells, laboratories, and gardens. Of the 56 sequences, 48 sequences (approximately 47K frames) are designated for training and 8 sequences for testing, from which we sampled 800 frames to construct the testing set. Each sequence contains sparse depth maps at three density levels - 1500, 500, and 150 points - corresponding to 0.5%, 0.15%, and 0.05% of VGA size.
Provide a detailed description of the following dataset: VOID
FastZIP Data
# Structure of code/data folders and how to use them

#### fastzip-code

* Contains the codebase to generate results in the *fastzip-results* folder
* Individual notebooks contain comments on their functionality/how to use them
  * *FastZIP-Resample.ipynb* (optional)
    * Resamples the collected sensor data to desired sampling rates (eliminates the effect of sampling rate instability/drift)
    * Input: *fastzip-data/exp-X/raw*
    * Output: *fastzip-data/exp-X/adv* and *fastzip-data/exp-X/non-adv*
  * *FastZIP-Process.ipynb*
    * Computes error rates for adversarial and benign devices in different configurations, generates binary fingerprints (see comments inside the notebook)
    * Input: *fastzip-data/exp-X/adv* and *fastzip-data/exp-X/non-adv*
    * Output: *fastzip-results/logs* and *fastzip-results/fps*
  * *FastZIP-Results.ipynb*
    * Parses and caches results generated by the *FastZIP-Process.ipynb* notebook to be used for plotting and data analysis
    * Input: *fastzip-results/logs* and *fastzip-results/fps*
    * Output: *fastzip-results/cache*
* The system paths that are used by all notebooks are set in *fastzip-code/const/globconst.py* --> **adjust them** before running the code!
* Uses folder *fastzip-data* as input
* Uses folder *fastzip-results* as output
* The above notebooks were run using Python 3.6.5 on Ubuntu 18.04.5 LTS bionic (x64)
* The list of Python packages with versions installed on the test machine is in *fastzip-code/python3-packages.txt*

#### fastzip-data

* Contains sensor data collected from multiple devices in running/stationary cars in various conditions (e.g., in a city, etc.) from three experiments: *exp-3*, *exp-4*, and *exp-5*
* Each *exp-X* folder has the same structure:
  * *adv* - sensor data collected when two cars drive one after another
  * *non-adv* - sensor data collected when two cars drive the same route but not one after another
  * In *adv* and *non-adv* the sensor data is resampled (see description of *FastZIP-Resample.ipynb*)
  * *raw* - sensor data collected in the experiment before resampling or splitting into *adv* and *non-adv*

#### fastzip-min_entropy

* Contains input and evaluation results for min-entropy estimation of our generated fingerprints using *NIST SP 800 90B tests*
* See README inside the folder for more detail

#### fastzip-results

* Results in *json* or *json.gz* formats generated by *FastZIP-Process.ipynb* and *FastZIP-Results.ipynb*
* See comments in these notebooks for the type of results stored in folders inside *fastzip-results*

#### fpake

* Contains the implementation of the fPAKE protocol as well as input and output for its benchmarking
* See README inside the folder for more detail
Provide a detailed description of the following dataset: FastZIP Data
TESTIMAGES
A collection of photographic and synthetic images intended for analysis of image processing techniques and quality assessment of displays. Image source: [https://testimages.org/](https://testimages.org/)
Provide a detailed description of the following dataset: TESTIMAGES
SEDE
**SEDE** is a dataset comprised of 12,023 complex and diverse SQL queries and their natural language titles and descriptions, written by real users of the Stack Exchange Data Explorer out of natural interaction. These pairs contain a variety of real-world challenges which were rarely reflected so far in any other semantic parsing dataset. The goal of this dataset is to take a significant step towards evaluation of Text-to-SQL models in a real-world setting. Compared to other Text-to-SQL datasets, SEDE contains at least 10 times more SQL query templates (queries after canonization and anonymization of values) than other datasets, and has the most diverse set of utterances and SQL queries (in terms of 3-grams) out of all single-domain datasets. SEDE introduces real-world challenges such as under-specification, usage of parameters in queries, date manipulation, and more.
Provide a detailed description of the following dataset: SEDE
CoNaLa
The **CMU CoNaLa, the Code/Natural Language Challenge** dataset is a joint project from the Carnegie Mellon University [NeuLab](http://www.cs.cmu.edu/~neulab/) and [Strudel](https://cmustrudel.github.io/) labs. Its purpose is to test the generation of code snippets from natural language. The data comes from StackOverflow questions. There are 2379 training and 500 test examples that were manually annotated. Every example has a natural language *intent* and its corresponding python *snippet*. In addition to the manually annotated dataset, there are also 598,237 mined intent-snippet pairs. These examples are similar to the hand-annotated ones except that they contain a probability that the pair is valid.
Provide a detailed description of the following dataset: CoNaLa
SIPaKMeD
* a high-level explanation of the dataset characteristics * explain motivations and summary of its content * potential use cases of the dataset
Provide a detailed description of the following dataset: SIPaKMeD
CoNaLa-Ext
The **CoNaLa Extended With Question Text** is an extension to the original [CoNaLa Dataset](https://conala-corpus.github.io/) ([Papers With Code Link](https://paperswithcode.com/dataset/conala)) proposed in the NLP4Prog workshop paper "[Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation](https://arxiv.org/abs/2106.04447)". The key addition is that every example now has the full question body from its respective StackOverflow question. **IMPORTANT** If you use this dataset, you MUST cite the [original CoNaLa dataset paper](https://arxiv.org/abs/1805.08949).
Provide a detailed description of the following dataset: CoNaLa-Ext
VALUE
**VALUE** is a Video-And-Language Understanding Evaluation benchmark to test models that are generalizable to diverse tasks, domains, and datasets. It is an assemblage of 11 VidL (video-and-language) datasets over 3 popular tasks: (i) text-to-video retrieval; (ii) video question answering; and (iii) video captioning. VALUE benchmark aims to cover a broad range of video genres, video lengths, data volumes, and task difficulty levels. Rather than focusing on single-channel videos with visual information only, VALUE promotes models that leverage information from both video frames and their associated subtitles, as well as models that share knowledge across multiple tasks. The datasets used for the VALUE benchmark are: [TVQA](tvqa), [TVR](tvr), [TVC](tvc), [How2R](how2r), [How2QA](how2qa), [VIOLIN](violin), [VLEP](vlep), [YouCook2](youcook2) (YC2C, YC2R), [VATEX](vatex)
Provide a detailed description of the following dataset: VALUE
Itihasa
Itihasa is a large-scale corpus for Sanskrit to English translation containing 93,000 pairs of Sanskrit shlokas and their English translations. The shlokas are extracted from two Indian epics viz., The Ramayana and The Mahabharata.
Provide a detailed description of the following dataset: Itihasa
5k_presetation_slides
We crawled 5000 paper-slide pairs from conference proceedings websites (e.g., acl.org and usenix.org).
Provide a detailed description of the following dataset: 5k_presetation_slides
Notre-Dame Cathedral Fire
**Number of images:** 1,657 images during or after the fire

If you use the dataset, please cite the following work:

> Padilha, Rafael and Andaló, Fernanda A. and Rocha, Anderson. “Improving the chronological sorting of images through occlusion: A study on the Notre-Dame cathedral fire,” in 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020.

### Description of the event and data collection

On April 15th, 2019, large parts of Notre-Dame Cathedral's structure and spire were devastated by a fire. People worldwide followed the tragic event through images and videos that were shared by the media and citizens. From the generated imagery, we collected a total of 23,683 images posted on Twitter during and on the day after the fire. Even though most of them were related to the event, several were memes, cartoons, compositions and artwork, while some depicted the cathedral before the fire. As we focus on learning how the fire and the appearance of the cathedral evolved during the event, we removed them, reducing our set to 5,206 relevant images. Among these, several examples were duplicates or near-duplicates of other images. Considering their little contribution to the training process, we removed them as well, leaving 1,657 distinct images related to the event. The cleaning process involved methods such as locality-sensitive hashing for filtering near-duplicates, and semi-supervised approaches based on Optimum-path Forest theory to mine for relevant and non-relevant imagery of the event.

By analyzing the event's description, four main sub-events can be defined: spire on fire, spire collapsing, fire continues on roof, and fire extinguished. Each sub-event contains specific visual clues (e.g., the absence of the central spire) that can be leveraged to estimate the temporal position of an image. Each image in the dataset was manually labeled as being captured in one of these sub-events. We also consider an unknown category for images that do not contain any hint of the sub-event in which they were captured, such as zoom-ins of the cathedral's facades. Besides that, each image was annotated with respect to the intercardinal direction of the cathedral's facade depicted in the image (north, northeast, east, southeast, south, southwest, west, northwest).

Image source: [Improving the chronological sorting of images through occlusion: A study on the Notre-Dame cathedral fire](https://ieeexplore.ieee.org/document/9054120)
Provide a detailed description of the following dataset: Notre-Dame Cathedral Fire
CLINC-Single-Domain-OOS
A dataset with two separate domains, i.e., the "Banking" domain and the "Credit cards" domain, with both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where ID-OOS queries are intents/queries semantically similar to in-scope intents. Each domain in CLINC150 originally includes 15 intents. In this dataset, each domain includes ten in-scope intents, and the ID-OOS queries are built up based on five held-out in-scope intents. The dataset can be used to conduct intent detection with and without OOD-OOS and ID-OOS queries.
Provide a detailed description of the following dataset: CLINC-Single-Domain-OOS
BANKING77-OOS
A dataset with a single banking domain that includes both general Out-of-Scope (OOD-OOS) queries and In-Domain but Out-of-Scope (ID-OOS) queries, where ID-OOS queries are intents/queries semantically similar to in-scope intents. BANKING77 originally includes 77 intents. BANKING77-OOS includes 50 in-scope intents, and the ID-OOS queries are built up based on 27 held-out in-scope intents. The dataset can be used to conduct intent detection with and without OOD-OOS and ID-OOS queries.
Provide a detailed description of the following dataset: BANKING77-OOS
ZeroWaste
**ZeroWaste** is a dataset for automatic waste detection and segmentation. This dataset contains over 1,800 fully segmented video frames collected from a real waste sorting plant along with waste material labels for training and evaluation of the segmentation methods, as well as over 6,000 unlabeled frames that can be further used for semi-supervised and self-supervised learning techniques. ZeroWaste also provides frames of the conveyor belt before and after the sorting process, comprising a novel setup that can be used for weakly-supervised segmentation.
Provide a detailed description of the following dataset: ZeroWaste
ILDC
The **ILDC** dataset (Indian Legal Documents Corpus) is a large corpus of 35k Indian Supreme Court cases annotated with original court decisions. A portion of the corpus (a separate test set) is annotated with gold standard explanations by legal experts. The dataset is used for Court Judgment Prediction and Explanation (CJPE). The task requires an automated system to predict an explainable outcome of a case.
Provide a detailed description of the following dataset: ILDC
CiteWorth
**CiteWorth** is a large, contextualized, rigorously cleaned, labelled dataset for cite-worthiness detection, built from a massive corpus of extracted plain-text scientific documents.
Provide a detailed description of the following dataset: CiteWorth
IndiaPoliceEvents
**IndiaPoliceEvents** is a corpus of 21,391 sentences from 1,257 English-language Times of India articles about events in the state of Gujarat during March 2002. This dataset is used for automated event extraction.
Provide a detailed description of the following dataset: IndiaPoliceEvents
Multilingual TOP
**Multilingual TOP** is a dataset for multilingual semantic parsing with human-written sentences as opposed to machine translated ones. The dataset sentences are in English, Italian and Japanese and it is based on the Facebook Task Oriented Parsing (TOP) dataset.
Provide a detailed description of the following dataset: Multilingual TOP
MultiOpEd
**MultiOpEd** is a corpus of multi-perspective news editorials. It is an open-domain news editorial corpus that supports various tasks pertaining to the argumentation structure in news editorials, focusing on automatic perspective discovery. News editorial is a genre of persuasive text, where the argumentation structure is usually implicit. However, the arguments presented in an editorial typically center around a concise, focused thesis, which we refer to as their perspective. MultiOpEd aims at supporting the study of multiple tasks relevant to automatic perspective discovery, where a system is expected to produce a single-sentence thesis statement summarizing the arguments presented.
Provide a detailed description of the following dataset: MultiOpEd
S_B_D
100,000 LR synthetic barcode images along with their corresponding bounding-box ground-truth masks, and 100,000 UHR synthetic barcode images along with their corresponding bounding-box ground-truth masks.
Provide a detailed description of the following dataset: S_B_D
EMOTyDA
EMOTyDA is a multimodal Emotion aware Dialogue Act dataset collected from open-sourced dialogue datasets.
Provide a detailed description of the following dataset: EMOTyDA
Rent3D++
Rent3D++ is an extension of the Rent3D floorplans + photos dataset. The floorplans are annotated with room outline polygons, doors/windows as line segments, object icons as axis-aligned bounding boxes, room-door-room connectivity graphs, and photo-room assignments. We have extracted rectified surface crops from architectural surfaces in photos, and these can drive interior texturing/material modeling tasks. This dataset can be used with our paper Plan2Scene to generate textured 3D mesh models of houses using floorplans and photos.

The complete list of improvements we made to the Rent3D dataset is as follows:

- Fixed incorrectly categorized rooms and added wall outlines and categories for missing rooms.
- Expanded the room category set {reception, bedroom, kitchen, bathroom, outdoor} by adding another 7 common room types: {closet, entrance, corridor, staircase, balcony, terrace, unknown}.
- Generated room-door-room connectivity graphs for floorplans.
- Annotated all windows, doors, and other wall openings, and associated them with corresponding rooms.
- Defined a new 60/20/20% (129/43/43 houses) training, validation, test split (cf. original 100/30/85 house split), giving more samples to training and validation.
- Extracted rectified surface crops from architectural surfaces seen in photos (floors, walls, ceilings).
- Annotated axis-aligned bounding boxes for fixed object icons indicated on the test set floorplans.
Provide a detailed description of the following dataset: Rent3D++
Date Estimation in the Wild
~1M Flickr images from the 20th century, dated from the 1910s to the 1990s. The dataset was introduced by Müller et al. and can be found at https://www.radar-service.eu/radar/en/dataset/tJzxrsYUkvPklBOw
Provide a detailed description of the following dataset: Date Estimation in the Wild
Dirty-MNIST
DirtyMNIST is a concatenation of MNIST + AmbiguousMNIST, with 60k samples each in the training set. AmbiguousMNIST contains additional ambiguous digits with varying ambiguity. The AmbiguousMNIST test set contains 60k ambiguous samples as well.

## Additional Guidance

1. DirtyMNIST is a concatenation of MNIST + AmbiguousMNIST, with 60k samples each in the training set.
2. The current AmbiguousMNIST contains 6k unique samples with 10 labels each. This multi-label dataset gets flattened to 60k samples. The assumption is that ambiguous samples have multiple "valid" labels as they are ambiguous. MNIST samples are intentionally undersampled (in comparison), which benefits AL acquisition functions that can select unambiguous samples.
3. Pick your initial training samples (for warm-starting Active Learning) from the MNIST half of DirtyMNIST to avoid starting training with potentially very ambiguous samples, which might add a lot of variance to your experiments.
4. Make sure to pick your validation set from the MNIST half as well, for the same reason as above.
5. Make sure that your batch acquisition size is >= 10 (probably), given that there are 10 multi-labels per sample in Ambiguous-MNIST.
6. By default, Gaussian noise with stddev 0.05 is added to each sample to prevent acquisition functions (in Active Learning) from cheating by discarding "duplicates".
7. If you want to split Ambiguous-MNIST into subsets (or Dirty-MNIST within the second, ambiguous half), make sure to split by multiples of 10 to avoid splits within a flattened multi-label sample; see the sketch below.
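A minimal sketch of point 7 (it assumes `ambiguous_mnist` is already loaded as the flattened 60k-sample torch Dataset; the loading code itself is not shown and the helper name is hypothetical):

```python
# Hedged sketch: split the flattened Ambiguous-MNIST half on multiples of 10 so
# the 10 multi-labels belonging to one underlying ambiguous image stay together.
from torch.utils.data import Dataset, Subset

def split_by_groups(dataset: Dataset, n_first_groups: int, group_size: int = 10):
    cut = n_first_groups * group_size          # always a multiple of group_size
    first = Subset(dataset, range(cut))
    rest = Subset(dataset, range(cut, len(dataset)))
    return first, rest

# first_part, rest = split_by_groups(ambiguous_mnist, n_first_groups=1000)
```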
Provide a detailed description of the following dataset: Dirty-MNIST
Symmetric Solids
This is a pose estimation dataset, consisting of symmetric 3D shapes where multiple orientations are visually indistinguishable. The challenge is to predict all equivalent orientations when only one orientation is paired with each image during training (as is the scenario for most pose estimation datasets). In contrast to most pose estimation datasets, the full set of equivalent orientations is available for evaluation.

There are eight shapes total, each rendered from 50,000 viewpoints distributed uniformly at random over the full space of 3D rotations. Five of the shapes are featureless: tetrahedron, cube, icosahedron, cone, and cylinder. Of those, the three Platonic solids (tetrahedron, cube, icosahedron) are annotated with their 12, 24, and 60 discrete symmetries, respectively. The cone and cylinder are annotated with their continuous symmetries discretized at 1-degree intervals. These symmetries are provided for evaluation; the intended supervision is only a single rotation with each image.

The remaining three shapes are marked with a distinguishing feature. There is a tetrahedron with one red-colored face, a cylinder with an off-center dot, and a sphere with an X capped by a dot. Whether or not the distinguishing feature is visible, the space of possible orientations is reduced. We do not provide the set of equivalent rotations for these shapes.

Each example contains:

- the 224x224 RGB image
- a shape index so that the dataset may be filtered by shape. The indices correspond to: 0 = tetrahedron, 1 = cube, 2 = icosahedron, 3 = cone, 4 = cylinder, 5 = marked tetrahedron, 6 = marked cylinder, 7 = marked sphere
- the rotation used in the rendering process, represented as a 3x3 rotation matrix
- the set of known equivalent rotations under symmetry, for evaluation. In the case of the three marked shapes, this is only the rendering rotation.
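Since each example carries both a single rendering rotation and the full set of equivalent rotations, evaluation typically scores a prediction against the closest equivalent. A minimal sketch of that idea (an assumption about usage, not official benchmark code):

```python
# Hedged sketch: smallest geodesic angle (degrees) between a predicted rotation
# and any ground-truth-equivalent rotation of a symmetric shape.
import numpy as np

def min_angular_error_deg(pred: np.ndarray, equivalents: np.ndarray) -> float:
    """pred: (3, 3) rotation matrix; equivalents: (K, 3, 3) equivalent rotations."""
    rel = equivalents @ pred.T                        # relative rotations R_gt R_pred^T
    traces = np.trace(rel, axis1=1, axis2=2)
    cos = np.clip((traces - 1.0) / 2.0, -1.0, 1.0)    # rotation angle from the trace
    return float(np.degrees(np.arccos(cos)).min())
```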
Provide a detailed description of the following dataset: Symmetric Solids
Evidence-based Factual Error Correction
Intermediate annotations from the FEVER dataset that describe original facts extracted from Wikipedia and the mutations that were applied, yielding the claims in FEVER.
Provide a detailed description of the following dataset: Evidence-based Factual Error Correction
TI1K Dataset
Thumb Index 1000 (TI1K) is a dataset of 1000 hand images annotated with the hand bounding box and the thumb and index fingertip positions. The dataset includes natural movements of the thumb and index fingers, making it suitable for mixed reality (MR) applications. The dataset contains 640x480 images showing only the thumb and index fingers of both hands. All the annotations of the training and test images are in the "label.txt" file in the Annotation folder.
Provide a detailed description of the following dataset: TI1K Dataset
Bus Trajectory Dataset
This dataset contains bus trajectories collected by 6 volunteers who were asked to travel across the sub-urban city of Durgapur, India, on intra-city buses (route name: 54 Feet). During the travel, the volunteers captured sensor logs through an Android application installed on COTS smartphones.
Provide a detailed description of the following dataset: Bus Trajectory Dataset
MARS-DL
MARS dataset processed with our re-Detect and Link (DL) module. More information: [https://github.com/jackie840129/CF-AAN](https://github.com/jackie840129/CF-AAN)
Provide a detailed description of the following dataset: MARS-DL
DukeMTMC-VideoReID-DL
DukeMTMC-VideoReID-DL processed with our re-Detect and Link (DL) module.
Provide a detailed description of the following dataset: DukeMTMC-VideoReID-DL
PHASE
PHASE is a dataset of physically-grounded abstract social events, that resemble a wide range of real-life social interactions by including social concepts such as helping another agent. PHASE consists of 2D animations of pairs of agents moving in a continuous space generated procedurally using a physics engine and a hierarchical planner. Agents have a limited field of view, and can interact with multiple objects, in an environment that has multiple landmarks and obstacles. Using PHASE, we design a social recognition task and a social prediction task. PHASE is validated with human experiments demonstrating that humans perceive rich interactions in the social events, and that the simulated agents behave similarly to humans.
Provide a detailed description of the following dataset: PHASE
AGENT
Inspired by cognitive development studies on intuitive psychology, we present a benchmark consisting of a large dataset of procedurally generated 3D animations, AGENT (Action, Goal, Efficiency, coNstraint, uTility), structured around four scenarios (goal preferences, action efficiency, unobserved constraints, and cost-reward trade-offs) that probe key concepts of core intuitive psychology.
Provide a detailed description of the following dataset: AGENT
TCIA Brain-Tumor-Progression
This collection includes datasets from 20 subjects with primary newly diagnosed glioblastoma who were treated with surgery and standard concomitant chemo-radiation therapy (CRT) followed by adjuvant chemotherapy. Two MRI exams are included for each patient: within 90 days following CRT completion and at progression (determined clinically, based on a combination of clinical performance and/or imaging findings, and punctuated by a change in treatment or intervention). All image sets are in DICOM format and contain T1w (pre- and post-contrast agent), FLAIR, T2w, ADC, normalized cerebral blood flow, normalized relative cerebral blood volume, standardized relative cerebral blood volume, and binary tumor masks (generated using T1w images). The perfusion images were generated from dynamic susceptibility contrast (GRE-EPI DSC) imaging following a preload of contrast agent. All of the series are co-registered with the T1+C images. The intent of this dataset is to assess deep learning algorithm performance in predicting tumor progression.

### Data Citation

```
Schmainda KM, Prah M (2018). Data from Brain-Tumor-Progression. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2018.15quzvnb
```
Provide a detailed description of the following dataset: TCIA Brain-Tumor-Progression
Ruddit
Ruddit is a dataset of English-language Reddit comments that has fine-grained, real-valued scores for offensive language detection, ranging from -1 (maximally supportive) to 1 (maximally offensive). The dataset was annotated using Best-Worst Scaling, a form of comparative annotation that has been shown to alleviate known biases of using rating scales.
Provide a detailed description of the following dataset: Ruddit
HuRDL
The **Human-Robot Dialogue Learning (HuRDL) Corpus** is a dataset about asking questions in situated task-based interactions. It is a dialogue corpus collected in an online interactive virtual environment in which human participants play the role of a robot performing a collaborative tool-organization task.
Provide a detailed description of the following dataset: HuRDL
HPO-B
HPO-B is a benchmark for assessing the performance of HPO (Hyperparameter optimization) algorithms.
Provide a detailed description of the following dataset: HPO-B
DUO
**DUO** is a dataset for underwater object detection for robot picking. The dataset contains a collection of diverse underwater images with more rational annotations.
Provide a detailed description of the following dataset: DUO
SciCo
**SciCo** is an expert-annotated dataset for hierarchical CDCR (cross-document coreference resolution) for concepts in scientific papers, with the goal of jointly inferring coreference clusters and hierarchy between them.
Provide a detailed description of the following dataset: SciCo
Python Programming Puzzles (P3)
Python Programming Puzzles (P3) is an open-source dataset where each puzzle is defined by a short Python program, and the goal is to find an input that makes the program return True. The puzzles are objective in that each one is specified entirely by the source code of its verifier, so evaluating the verifier is all that is needed to test a candidate solution. They do not require an answer key or input/output examples, nor do they depend on natural language understanding. The dataset is comprehensive in that it spans problems of a range of difficulties and domains, ranging from trivial string manipulation problems that are immediately obvious to human programmers (but not necessarily to AI), to classic programming puzzles (e.g., Towers of Hanoi), to interview/competitive-programming problems (e.g., dynamic programming), to longstanding open problems in algorithms and mathematics (e.g., factoring). The objective nature of P3 readily supports self-supervised bootstrapping.
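To illustrate the format, here is a small puzzle in the spirit of P3 (this particular puzzle is an illustrative assumption, not necessarily one that appears in the dataset): the verifier `f` fully specifies the problem, and a candidate answer is checked just by calling `f` on it.

```python
# Hedged sketch: a P3-style puzzle is any function f that returns True for a
# correct answer; checking a solution requires nothing but evaluating f.
def f(x: str) -> bool:
    """Find a string of length 10 that contains exactly three 'a' characters."""
    return len(x) == 10 and x.count("a") == 3

candidate = "aaabbbbbbb"   # one possible solution
assert f(candidate)        # evaluating the verifier is all that is needed
```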
Provide a detailed description of the following dataset: Python Programming Puzzles (P3)
2021 Hotel-ID
**2021 Hotel-ID** is a dataset for hotel recognition to help raise awareness of human trafficking and generate novel approaches. The dataset consists of hotel room images that have been crowd-sourced and uploaded through the TraffickCam mobile application.
Provide a detailed description of the following dataset: 2021 Hotel-ID
FEVEROUS
FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) is a fact verification dataset which consists of 87,026 verified claims. Each claim is annotated with evidence in the form of sentences and/or cells from tables in Wikipedia, as well as a label indicating whether this evidence supports, refutes, or does not provide enough information to reach a verdict.
Provide a detailed description of the following dataset: FEVEROUS
FetReg
Fetoscopic Placental Vessel Segmentation and Registration (**FetReg**) is a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment with a focus on creating drift-free mosaics from long duration fetoscopy videos.
Provide a detailed description of the following dataset: FetReg
Replication Data for: Online Learning with Optimism and Delay
The model forecasts for the sub-seasonal forecasting application considered in the experiments of the "Online Learning with Optimism and Delay" paper. This dataset consists of a single ZIP archive (919 MB) that contains 1) a "models" folder that contains, for each model, the forecasts for the Precip. 3-4w, Precip. 5-6w, Temp. 3-4w, and Temp. 5-6w tasks on the western United States geography, and 2) a "data" folder that contains supporting geographic data. The data should be used to reproduce the PoolD experiments in https://github.com/geflaspohler/poold as described in the README. (2021-06-10)
Provide a detailed description of the following dataset: Replication Data for: Online Learning with Optimism and Delay
Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
This data accompanies the Mis2-KDD 2021 paper (under review): "Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China". We present a dataset that focuses on propaganda techniques in Mandarin, based on a state-linked information operations dataset from the PRC released by Twitter in July 2019. The dataset consists of multi-label propaganda techniques of the sampled tweets. In total, we have 9,950 labeled tweets covering 21 different propaganda techniques.
Provide a detailed description of the following dataset: Dataset of Propaganda Techniques of the State-Sponsored Information Operation of the People's Republic of China
GitHub-Python
Repair AST parse (syntax) errors in Python code
Provide a detailed description of the following dataset: GitHub-Python
Artificial signal data for signal alignment testing
This is a set of signal pairs, univariate and multivariate, that can be used to test alignment algorithms. The signals are morphologically different. The signal data is synchronized, but the provided timestamps are shifted with small time-jumps.
Provide a detailed description of the following dataset: Artificial signal data for signal alignment testing
BBBC005
Since robust foreground/background separation and segmentation of cellular objects (i.e., identification of which pixels belong to which objects) strongly depends on image quality, focus artifacts are detrimental to data quality. This image set provides examples of in- and out-of-focus synthetic images, which can be used for validation of focus metrics. Image source: [https://bbbc.broadinstitute.org/BBBC005](https://bbbc.broadinstitute.org/BBBC005)
Provide a detailed description of the following dataset: BBBC005
BBBC039
This image set is part of a high-throughput chemical screen on U2OS cells, with examples of 200 bioactive compounds. The effect of the treatments was originally imaged using the Cell Painting assay (fluorescence microscopy). This data set only includes the DNA channel of a single field of view per compound. These images present a variety of nuclear phenotypes, representative of high-throughput chemical perturbations. The main use of this data set is the study of segmentation algorithms that can separate individual nucleus instances in an accurate way, regardless of their shape and cell density. The collection has around 23,000 single nuclei manually annotated to establish a ground truth collection for segmentation evaluation. This data set has a total of 200 fields of view of nuclei captured with fluorescence microscopy using the Hoechst stain. These images are a sample of the larger BBBC022 chemical screen. The images are stored as TIFF files with 520x696 pixels at 16 bits.
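A minimal loading sketch for one field of view is shown below; the file name is a placeholder and scikit-image is only one of several libraries that can read 16-bit TIFFs.

```python
import numpy as np
from skimage import io

# Load one 16-bit fluorescence field of view (file name is a placeholder).
img = io.imread("example_field.tif")
print(img.shape, img.dtype)  # expected: (520, 696) uint16

# Simple min-max normalization to [0, 1] for visualization or model input.
img_norm = (img.astype(np.float32) - img.min()) / (img.max() - img.min() + 1e-8)
```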
Provide a detailed description of the following dataset: BBBC039
TNBC
The dataset involves a large number of annotated cells, including normal epithelial and myoepithelial breast cells (localized in ducts and lobules), invasive carcinomatous cells, fibroblasts, endothelial cells, adipocytes, macrophages and inflammatory cells (lymphocytes and plasmocytes). In total, our data set consists of 50 images with a total of 4022 annotated cells; the maximum number of cells in one sample is 293 and the minimum is 5, with an average of 80 cells per sample and a high standard deviation of 58. The annotation was performed by three experts: an expert pathologist and two trained research fellows. Each sample was annotated by one of the annotators and checked by another; in case of disagreement, a consensus was established by discussion among the three experts.
Provide a detailed description of the following dataset: TNBC
alpha-matte MFIF dataset
A large-scale training dataset exhibiting the defocus spread effect (DSE) is synthesized by applying an $\alpha$-matte boundary defocus model to the VOC 2012 dataset. Motivation: Due to the lack of large-scale datasets of multi-focus images, several data generation methods based on public natural image datasets have been adopted in many deep learning (DL)-based multi-focus image fusion algorithms. However, the DSE is neglected in all of these generated datasets, and such unrealistic training data may limit the performance of these algorithms. Application: training DL-based multi-focus image fusion algorithms.
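The sketch below illustrates the general idea of alpha-matte boundary compositing for synthesizing a multi-focus pair from an all-in-focus image and a foreground mask; the Gaussian blur and the softened matte are illustrative assumptions, not the exact defocus model used to build the dataset.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_pair(img: np.ndarray, mask: np.ndarray, sigma: float = 3.0):
    """Composite an all-in-focus image (H, W, C in [0, 1]) into two
    partially focused images using a soft alpha matte (H, W)."""
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))
    # Soften the mask boundary so the defocus spreads across the edge,
    # mimicking the defocus spread effect (DSE).
    alpha = gaussian_filter(mask.astype(float), sigma=sigma)[..., None]
    near_focused = alpha * img + (1 - alpha) * blurred   # foreground sharp
    far_focused = alpha * blurred + (1 - alpha) * img    # background sharp
    return near_focused, far_focused
```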
Provide a detailed description of the following dataset: alpha-matte MFIF dataset
Dataset of Context information for Zero Interaction Security
We release both the processed data and evaluation results from our own experiments, and the underlying raw data that can be used for future experiments and schemes in the domain of Zero-Interaction Security. Find more details in the dataset description on Zenodo.
Provide a detailed description of the following dataset: Dataset of Context information for Zero Interaction Security
GitTables
GitTables is a corpus of currently 1M relational tables extracted from CSV files in GitHub, covering 96 topics. Table columns in GitTables have been annotated with more than 2K different semantic types from Schema.org and DBpedia. The column annotations consist of semantic types, hierarchical relations, range types, table domain and descriptions. The tables were annotated using two methods: a semantic method and a syntactic one. This leads to two kinds of annotations, which in the metadata of the tables are referred to as syntactic and semantic annotations. The first method annotated 888,678 tables with Schema.org semantic types and 875,630 with DBpedia, while the second method annotated 1,161,117 tables with Schema.org and 1,156,601 with DBpedia semantic types. Some statistics about the tables are provided in the table below, "Columns" referring to the number of annotated columns and "Classes" to the number of unique DBpedia or Schema.org semantic types used for annotation.

|                      | Columns    | Classes |
|----------------------|------------|---------|
| Syntactic-DBpedia    | 3,441,251  | 834     |
| Syntactic-Schema.org | 2,671,588  | 677     |
| Semantic-DBpedia     | 10,757,184 | 2,380   |
| Semantic-Schema.org  | 10,475,155 | 2,407   |
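Assuming a table has been downloaded as a Parquet file with its annotations embedded in the table metadata, a minimal inspection sketch might look as follows; the file name is a placeholder and the exact metadata keys are not spelled out here.

```python
import pyarrow.parquet as pq

# Placeholder path to one downloaded GitTables table.
table = pq.read_table("some_table.parquet")

# The cell values themselves, as a pandas DataFrame.
df = table.to_pandas()
print(df.shape)

# Table-level metadata (bytes keys and values) may carry the syntactic
# and semantic column annotations described above.
metadata = table.schema.metadata or {}
for key, value in list(metadata.items())[:5]:
    print(key.decode(errors="replace"), "->", value.decode(errors="replace")[:80])
```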
Provide a detailed description of the following dataset: GitTables
PartialSpoof_v1
All existing databases of spoofed speech contain attack data that is spoofed in its entirety. In practice, it is entirely plausible that successful attacks can be mounted with utterances that are only partially spoofed. By definition, partially-spoofed utterances contain a mix of both spoofed and bona fide segments, which will likely degrade the performance of countermeasures trained with entirely spoofed utterances. This hypothesis raises the obvious question: 'Can we detect partially spoofed audio?' To help address this question, a new database of partially-spoofed data, named PartialSpoof, was introduced. This new database makes it possible to investigate and compare the performance of countermeasures on both utterance-level and segment-level labels.
Provide a detailed description of the following dataset: PartialSpoof_v1
SurfaceGrid
The SurfaceGrid dataset contains nearly a million 512x512 images for use in training neural networks on the shape-from-surface-contours task.
Provide a detailed description of the following dataset: SurfaceGrid
WNUT 2020
The training and development dataset for our task was taken from previous work on the wet lab corpus (Kulkarni et al., 2018), which consists of 623 protocols. We excluded the eight duplicate protocols from this dataset and then re-annotated the 615 unique protocols in BRAT (Stenetorp et al., 2012).
Provide a detailed description of the following dataset: WNUT 2020
selfie2anime
The selfie dataset contains 46,836 selfie images annotated with 36 different attributes. We only use photos of females as training data and test data. The size of the training dataset is 3400 and that of the test dataset is 100, with an image size of 256 x 256. For the anime dataset, we first retrieved 69,926 animation character images from Anime-Planet. Among those images, 27,023 face images were extracted using an anime-face detector. After selecting only female character images and removing monochrome images manually, we collected two sets of female anime face images, with sizes of 3400 and 100 for training and test data respectively, the same numbers as for the selfie dataset. Finally, all anime face images are resized to 256 x 256 by applying a CNN-based image super-resolution algorithm.
Provide a detailed description of the following dataset: selfie2anime
WNUT-2020 Task 2
Briefly describe the dataset. Provide: * a high-level explanation of the dataset characteristics * explain motivations and summary of its content * potential use cases of the dataset If the description or image is from a different paper, please refer to it as follows: Source: [title](url) Image Source: [title](url)
Provide a detailed description of the following dataset: WNUT-2020 Task 2
DIR-LAB COPDgene
Inspiratory and expiratory breath-hold CT image pairs acquired from the National Heart Lung Blood Institute COPDgene study archive.
Provide a detailed description of the following dataset: DIR-LAB COPDgene
Children's Song Dataset
Children's Song Dataset is an open-source dataset for singing voice research. This dataset contains 50 Korean and 50 English songs sung by one Korean female professional pop singer. Each song is recorded in two separate keys, resulting in a total of 200 audio recordings. Each audio recording is paired with a MIDI transcription and lyrics annotations at both grapheme level and phoneme level.

### Dataset Structure

The entire dataset splits into Korean and English, and each language splits into 'wav', 'mid', 'lyric', 'txt' and 'csv' folders. Each song has the identical file name for each format. Each format represents the following information. Additional information such as original song name, tempo and time signature for each song can be found in 'metadata.json'.

* 'wav': Vocal recordings in 44.1kHz 16bit wav format
* 'mid': Score information in MIDI format
* 'lyric': Lyric information in grapheme-level
* 'txt': Lyric information in syllable and phoneme-level
* 'csv': Note onsets and offsets and syllable timings in comma-separated value (CSV) format
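Because every song keeps the same file name across the format folders, the modalities can be paired by stem; a minimal traversal sketch is given below (the root path and file extensions are assumptions).

```python
from pathlib import Path

root = Path("CSD/english")   # or "CSD/korean"; the root path is a placeholder

# Pair each vocal recording with its MIDI score and timing annotations.
for wav in sorted((root / "wav").glob("*.wav")):
    song_id = wav.stem
    midi = root / "mid" / f"{song_id}.mid"
    timing = root / "csv" / f"{song_id}.csv"
    print(song_id, midi.exists(), timing.exists())
```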
Provide a detailed description of the following dataset: Children's Song Dataset