| dataset_name | description | prompt |
|---|---|---|
ChineseLP | The ChineseLP dataset contains 411 vehicle images (mostly of passenger cars) with Chinese license plates (LPs). It consists of 252 images captured by the authors and 159 images
downloaded from the internet. The images exhibit great variation in resolution (from 143 × 107 to 2048 × 1536 pixels), illumination, and background. | Provide a detailed description of the following dataset: ChineseLP |
UFPR-ADMR-v1 | This dataset contains 2,000 dial meter images obtained on-site by employees of the Energy Company of Paraná (Copel), which serves more than 4 million consuming units in the Brazilian state of Paraná. The images were acquired with many different cameras and are available in the JPG format with 320×640 or 640×320 pixels (depending on the camera orientation).
The dataset is split into three sets: training (1,200 images), validation (400 images), and testing (400 images). Every image has the following annotations available in a .txt file: the counter's corners (x1, y1), (x2, y2), (x3, y3), (x4, y4), which represent, respectively, the top-left, top-right, bottom-right, and bottom-left corners and can be used to rectify the counter patch; and, for each dial, its position (x, y, w, h) and the corresponding reading (pointed values and final reading). All counters in the dataset (regardless of meter type) have 4 or 5 dials; in total, 9,097 dials were manually annotated.
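As an illustration, the four annotated corners can be used to rectify the counter patch with a standard perspective warp. Below is a minimal sketch using OpenCV; the output size and the corner-parsing convention are assumptions for illustration, not part of the dataset's tooling.
```python
import cv2
import numpy as np

def rectify_counter(image, corners, out_w=640, out_h=160):
    """Warp the counter patch to a fronto-parallel view.

    corners: [(x1, y1), (x2, y2), (x3, y3), (x4, y4)] in the annotated order
    top-left, top-right, bottom-right, bottom-left. The output size is an
    arbitrary illustrative choice.
    """
    src = np.array(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))
```
| Provide a detailed description of the following dataset: UFPR-ADMR-v1 |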
LFM-BeyMS | This dataset is based on the LFM-1b [1] and the Cultural LFM-1b [2] datasets. LFM-BeyMS includes equally sized groups of both beyond-mainstream and mainstream music listeners and can thus be used for studying the characteristics of beyond-mainstream music listeners in recommendation experiments. For more details, we refer to our publication.
LFM-BeyMS contains
* 4,148 users
* 1,084,922 tracks
* 110,898 artists
* 16,687,363 listening events | Provide a detailed description of the following dataset: LFM-BeyMS |
GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Mono Continuous) | GUITAR-FX-DIST is a dataset of electric guitar recordings processed with overdrive, distortion, and fuzz audio effects. It was developed for research in guitar effects detection, classification, and parameter estimation. The dataset is also useful for research on automatic music transcription, intelligent music production, signal processing, or effects modelling. It contains both unprocessed and processed recordings.
The dataset is split into 4 sub-datasets: Mono Continuous, Mono Discrete, Poly Continuous, Poly Discrete
Authors:
Marco Comunità - Centre for Digital Music, Queen Mary University of London
Reference:
If you make use of GUITAR-FX-DIST, please cite the following publication:
```
@article{comunita2020guitar,
title={Guitar Effects Recognition and Parameter Estimation with Convolutional Neural Networks},
author={Comunit{\`a}, Marco and Stowell, Dan and Reiss, Joshua D},
journal={arXiv preprint arXiv:2012.03216},
year={2020}
}
``` | Provide a detailed description of the following dataset: GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Mono Continuous) |
GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Mono Discrete) | GUITAR-FX-DIST is a dataset of electric guitar recordings processed with overdrive, distortion and fuzz audio effects. It was developed for research in guitar effects detection, classification and parameter estimation. The dataset is also useful for research on automatic music transcription, intelligent music production, signal processing or effects modelling. It contains both unprocessed and processed recordings.
The dataset is split into 4 sub-datasets: Mono Continuous, Mono Discrete, Poly Continuous, Poly Discrete
Authors:
Marco Comunità - Centre for Digital Music, Queen Mary University of London
Reference:
If you make use of GUITAR-FX-DIST, please cite the following publication:
```
@article{comunita2020guitar,
title={Guitar Effects Recognition and Parameter Estimation with Convolutional Neural Networks},
author={Comunit{\`a}, Marco and Stowell, Dan and Reiss, Joshua D},
journal={arXiv preprint arXiv:2012.03216},
year={2020}
}
``` | Provide a detailed description of the following dataset: GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Mono Discrete) |
GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Poly Discrete) | GUITAR-FX-DIST is a dataset of electric guitar recordings processed with overdrive, distortion and fuzz audio effects. It was developed for research in guitar effects detection, classification and parameter estimation. The dataset is also useful for research on automatic music transcription, intelligent music production, signal processing or effects modelling. It contains both unprocessed and processed recordings.
The dataset is split into 4 sub-datasets: Mono Continuous, Mono Discrete, Poly Continuous, Poly Discrete
Authors:
Marco Comunità - Centre for Digital Music, Queen Mary University of London
Reference:
If you make use of GUITAR-FX-DIST, please cite the following publication:
```
@article{comunita2020guitar,
title={Guitar Effects Recognition and Parameter Estimation with Convolutional Neural Networks},
author={Comunit{\`a}, Marco and Stowell, Dan and Reiss, Joshua D},
journal={arXiv preprint arXiv:2012.03216},
year={2020}
}
``` | Provide a detailed description of the following dataset: GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Poly Discrete) |
GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Poly Continuous) | GUITAR-FX-DIST is a dataset of electric guitar recordings processed with overdrive, distortion and fuzz audio effects. It was developed for research in guitar effects detection, classification and parameter estimation. The dataset is also useful for research on automatic music transcription, intelligent music production, signal processing or effects modelling. It contains both unprocessed and processed recordings.
The dataset is split into 4 sub-datasets: Mono Continuous, Mono Discrete, Poly Continuous, Poly Discrete
Authors:
Marco Comunità - Centre for Digital Music, Queen Mary University of London
Reference:
If you make use of GUITAR-FX-DIST, please cite the following publication:
```
@article{comunita2020guitar,
title={Guitar Effects Recognition and Parameter Estimation with Convolutional Neural Networks},
author={Comunit{\`a}, Marco and Stowell, Dan and Reiss, Joshua D},
journal={arXiv preprint arXiv:2012.03216},
year={2020}
}
``` | Provide a detailed description of the following dataset: GUITAR-FX-DIST: A Dataset of Processed Guitar Recordings for Music Research - (Poly Continuous) |
METAR | Weather reports from 57 stations on the east coast. | Provide a detailed description of the following dataset: METAR |
Netzschleuder | This is a catalogue and repository of network datasets with the aim of aiding scientific research.
This website is meant to be browsed by humans and machines alike, and can also be accessed via a convenient JSON API, or via the [graph-tool](https://graph-tool.skewed.de/static/doc/collection.html#graph_tool.collection.ns) library. The network datasets themselves are available in several machine-readable formats, in particular gt, GraphML, GML and CSV.
The upstream origin of each dataset is meant to be as transparent as possible. Each dataset contains its own publicly available extraction and parsing script, accessible via a git repository, which also includes the entire code for this website, released as Free Software under the AGPLv3.
Users are encouraged to inspect the entire pipeline from original upstream data publication, downloading, parsing and format conversion.
Users are also welcome to report problems or omissions with the datasets, as well as suggest new ones, either by opening an issue, or simply by forking the git repository and proposing a merge request. | Provide a detailed description of the following dataset: Netzschleuder |
Darpa OpTC | Operationally Transparent Cyber (OpTC) was a technology transition pilot study funded under Boston Fusion Corp.'s Cyber APT Scenarios for Enterprise Systems (CASES) project. Its primary objective was to determine if DARPA Transparent Computing (TC) program technologies could scale without loss of detection performance to address cyber defense capability gaps identified in USTRANSCOM's Joint Deployment Distribution Enterprise (JDDE) solicitation for the government fiscal years 2019-2023. Boston Fusion along with two performers from the TC program (Five Directions providing endpoint telemetry (TA1) and BAE providing analysis over the data (TA2)) worked to scale their systems from two machines to one thousand machines. The OpTC team conducted scaling and detection tests in the fall of 2019. A third performer (Provatek), not originally associated with the TC program, acted as a red team and test coordinator. This data set represents a subset of that activity.
The OpTC system architecture is based on one used in TC program evaluations. Kafka, an open-source stream-processing server, is used to pass information among system components. Each Windows 10 endpoint is equipped with an endpoint sensor that monitors host events, packs them into JSON records, and sends them to Kafka. As these records flow into Kafka, a translation server aggregates them into new data records in a format called eCAR that are then pushed back to Kafka. As the translation server pushes eCAR records to Kafka, a data analytics component integrates them into a graph data structure for analysis and visualization.
OpTC took TC system components that worked well on two hosts in TC program tests and scaled them up to work with one thousand hosts. This scaled-up system was evaluated over two weeks in a highly instrumented environment, and this collection contains approximately a terabyte of data in a compressed, JSON-compatible format from that evaluation. The evaluation started with a period of benign record generation, followed by the injection of malware by a red team. Benign traffic ran continuously during red team activity. Due to constraints on collection space during the evaluation, data were collected from five hundred hosts rather than from the full set of one thousand hosts. | Provide a detailed description of the following dataset: Darpa OpTC |
Home Action Genome | Home Action Genome is a large-scale multi-view video database of indoor daily activities. Every activity is captured by synchronized multi-view cameras, including an egocentric view.
There are 30 hours of video with 70 classes of daily activities and 453 classes of atomic actions. | Provide a detailed description of the following dataset: Home Action Genome |
OVIS | OVIS is a new large-scale benchmark dataset for the video instance segmentation task. It is designed with the philosophy of perceiving object occlusions in videos, which reveal the complexity and diversity of real-world scenes. OVIS consists of:
* 296k high-quality instance masks
* 25 commonly seen semantic categories
* 901 videos with severe object occlusions
* 5,223 unique instances
Source: [http://songbai.site/ovis/](http://songbai.site/ovis/) | Provide a detailed description of the following dataset: OVIS |
SyntheticFur | **SyntheticFur** is a dataset for neural rendering. Collecting and generating high-quality fur images is an expensive and difficult process that requires content specialists. Releasing this unique dataset, with high-quality lighting simulation via ray tracing, can save time for researchers seeking to advance studies of fur rendering and simulation, without having to recreate this laborious process.
The dataset was used for neural rendering research at Google that takes advantage of rasterized image buffers and converts them into high quality raytraced fur renders. We believe that this dataset can contribute to the computer graphics and machine learning community to develop more advanced techniques with fur rendering.
It contains approximately 140,000 procedurally generated images and 15 simulations made with Houdini. The images show fur groomed on different skin primitives, moving with various motions in a predefined set of lighting environments. | Provide a detailed description of the following dataset: SyntheticFur |
TabLeX | **TabLeX** is a large-scale benchmark dataset comprising table images generated from scientific articles. TabLeX consists of two subsets, one for table structure extraction and the other for table content extraction. Each table image is accompanied by its corresponding LaTeX source code. To facilitate the development of robust table information extraction (IE) tools, TabLeX contains images in different aspect ratios and in a variety of fonts. | Provide a detailed description of the following dataset: TabLeX |
PeMSD7 | PeMSD7 contains traffic data from District 7 of California: the traffic speed of 228 sensors, recorded from May to June 2012 (weekdays only) at a 5-minute interval. This dataset is a popular benchmark for traffic forecasting models. | Provide a detailed description of the following dataset: PeMSD7 |
PeMSD4 | The dataset contains traffic speed data from the San Francisco Bay Area, covering 307 sensors on 29 roads. The time span of the dataset is January to February 2018. It is a popular benchmark for traffic forecasting. | Provide a detailed description of the following dataset: PeMSD4 |
PeMSD8 | This dataset contains traffic data from San Bernardino from July to August 2016, with 170 detectors on 8 roads and a time interval of 5 minutes. It is a popular benchmark traffic forecasting dataset. | Provide a detailed description of the following dataset: PeMSD8 |
SaRoCo | **SaRoCo** is a dataset for detecting satire in Romanian news, containing 55,608 news articles from multiple real and satirical news sources, of which 27,980 are regular and 27,628 are satirical news reports. We provide the data in CSV format, in three files following the train/validation/test splits. | Provide a detailed description of the following dataset: SaRoCo |
CHAOS | The CHAOS challenge targets the segmentation of abdominal organs (liver, kidneys, and spleen) from CT and MRI data. The on-site section of CHAOS was held at the IEEE International Symposium on Biomedical Imaging (ISBI) on April 11, 2019, in Venice, Italy. Online submissions are still welcome!
**Challenge Description**
Understanding the prerequisites of complicated medical procedures plays an important role in the success of operations. To enrich the level of understanding, physicians use advanced tools such as three-dimensional visualization and printing, which require extraction of the object(s) of interest from DICOM images. Accordingly, the precise segmentation of abdominal organs (i.e. liver, kidney(s) and spleen) has critical importance for several clinical procedures, including but not limited to pre-evaluation of the liver for living-donor-based transplantation surgery, or detailed analysis of abdominal organs to determine the vessels arising from and entering them for correct positioning of a graft prior to abdominal aortic surgery. This motivates ongoing research to achieve better segmentation results and to overcome countless challenges originating from both the highly flexible anatomical properties of the abdomen and the limitations of modalities reflected in image characteristics. In this context, the proposed challenge has two separate but related aims:
1) Segmentation of liver from computed tomography (CT) data sets, which are acquired at portal phase after contrast agent injection for pre-evaluation of living donated liver transplantation donors.
2) Segmentation of four abdominal organs (i.e. liver, spleen, right and left kidneys) from magnetic resonance imaging (MRI) data sets acquired with two different sequences (T1-DUAL and T2-SPIR).
The CHAOS tasks contain combinations of these organs' segmentation.
**Tasks**
There are five competition categories in which the participating teams can take place and submit their result(s):
1) Liver Segmentation (CT & MRI): This is also called "cross-modality" [1] segmentation and is simply based on using a single system that can segment the liver from both CT and MRI. For instance, the training and test sets of a machine learning approach would have images from both modalities without the model being explicitly fed the corresponding modality information. A unique study on this topic is referenced below [1], and this task is one of the most interesting of the challenge. Keep in mind that the fusion of individual systems for different modalities (i.e. two models, one working on CT and the other on MRI) would not be valid for this category. They can be evaluated as individual systems in Tasks 2 and 3. On the other hand, in this task, the fusion of individual systems between MR sequences (i.e. two models, one working on T1-DUAL and the other on T2-SPIR) is allowed.
2) Liver Segmentation (CT only): This is mostly a regular task of liver segmentation from CT (such as SLIVER07). This task is easier than SLIVER07 as it only contains healthy livers aligned in the same direction and patient position. However, the challenging part is the enhanced vascular structures (portal phase) due to the contrast injection.
3) Liver Segmentation (MRI only): Similar to Task 2, this is also a regular task of liver segmentation from MRI. It includes two different pulse sequences: T1-DUAL and T2-SPIR. Moreover, T1-DUAL has two forms (in-phase and out-of-phase). The developed system should work on both sequences. In this task, the fusion of individual systems between MR sequences (i.e. two models, one working on T1-DUAL and the other on T2-SPIR) is allowed.
4) Segmentation of abdominal organs (CT & MRI): This task is an extension of Task 1 to the kidneys and spleen in MRI data. The interesting part of this task is that the CT datasets have only the liver annotated, while the MRI datasets have four annotated abdominal organs (liver, kidneys, spleen). Keep in mind that the fusion of individual systems for different modalities (i.e. two models, one working on CT and the other on MRI) would not be valid for this category. On the other hand, in this task, the fusion of individual systems between MR sequences (i.e. two models, one working on T1-DUAL and the other on T2-SPIR) is allowed.
5) Segmentation of abdominal organs (MRI only): The same task given in Task 3 but extended to four abdominal organs: liver, kidneys, and spleen. In this task, the ensemble or fusion of individual systems between MR sequences (i.e. two models, one working on T1-DUAL and the other on T2-SPIR) is allowed.
[1] Valindria, V. et al. (2018, March). Multi-modal learning from unpaired images: Application to multi-organ segmentation in CT and MRI. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 547-556). IEEE. https://doi.ieeecomputersociety.org/10.1109/WACV.2018.00066 | Provide a detailed description of the following dataset: CHAOS |
TrackML challenge Throughput phase dataset | The dataset comprises multiple independent events, where each event contains simulated measurements (essentially 3D points) of particles generated in a collision between proton bunches at the Large Hadron Collider at CERN. The goal of the tracking machine learning challenge is to group the recorded measurements, or hits, of each event into tracks: sets of hits that belong to the same initial particle. A solution must uniquely associate each hit to one track. The training dataset contains the recorded hits, their ground-truth counterparts and their association to particles, and the initial parameters of those particles. The test dataset contains only the recorded hits.
The dataset was used for the Throughput Phase of the Tracking Machine Learning challenge on Codalab.
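For orientation, a minimal sketch of the hit-to-track association a solution must produce, assuming CSV-style columns `event_id`, `hit_id`, and `track_id` (the exact submission convention should be checked against the challenge documentation):
```python
import pandas as pd

# Hypothetical output of a tracking algorithm: one unique track id per hit.
# Column names here are an assumption for illustration.
submission = pd.DataFrame({
    "event_id": [1000, 1000, 1000, 1000],
    "hit_id":   [1, 2, 3, 4],
    "track_id": [7, 7, 12, 12],  # hits 1-2 and 3-4 grouped into two tracks
})
submission.to_csv("submission.csv", index=False)
```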
See the home page URL for more details. | Provide a detailed description of the following dataset: TrackML challenge Throughput phase dataset |
Scroll Readability Dataset | Scroll Readability Dataset contains scroll interactions of 598 participants reading advanced and elementary texts from the OneStopEnglish corpus. | Provide a detailed description of the following dataset: Scroll Readability Dataset |
AID | AID is a new large-scale aerial image dataset built by collecting sample images from Google Earth imagery. Note that although the Google Earth images are post-processed using RGB renderings from the original optical aerial images, it has been proven that there is no significant difference between Google Earth images and real optical aerial images, even in pixel-level land use/cover mapping. Thus, Google Earth images can also be used as aerial images for evaluating scene classification algorithms.
The new dataset is made up of the following 30 aerial scene types: airport, bare land, baseball field, beach, bridge, center, church, commercial, dense residential, desert, farmland, forest, industrial, meadow, medium residential, mountain, park, parking, playground, pond, port, railway station, resort, river, school, sparse residential, square, stadium, storage tanks and viaduct. All the images were labelled by specialists in the field of remote sensing image interpretation. In all, the AID dataset has 10,000 images within 30 classes.
The images in AID are actually multi-source, as Google Earth images come from different remote imaging sensors. This brings more challenges for scene classification than single-source datasets like the UC-Merced dataset. Moreover, all the sample images per class in AID were carefully chosen from different countries and regions around the world, mainly China, the United States, England, France, Italy, Japan, Germany, etc., and they were extracted at different times and seasons under different imaging conditions, which increases the intra-class diversity of the data. | Provide a detailed description of the following dataset: AID |
GID | Gaofen Image Dataset (GID) is a large-scale land-cover dataset constructed with Gaofen-2 (GF-2) satellite images. This dataset has superiorities over the existing land-cover dataset because of its large coverage, wide distribution, and high spatial resolution. It contains 150 GF-2 images annotated at the pixel level for 5 categories: built-up, farmland, forest, meadow, and water. | Provide a detailed description of the following dataset: GID |
WHU-RS19 | WHU-RS19 is a set of satellite images exported from Google Earth, which provides high-resolution satellite images up to 0.5 m. It contains 19 classes of meaningful scenes in high-resolution satellite imagery, including airport, beach, bridge, commercial, desert, farmland, footballfield, forest, industrial, meadow, mountain, park, parking, pond, port, railwaystation, residential, river, and viaduct. For each class, there are about 50 samples. It is worth noticing that the image samples of the same class are collected from different regions in satellite images of different resolutions and thus might have different scales, orientations and illuminations. | Provide a detailed description of the following dataset: WHU-RS19 |
SECOND | SECOND is a well-annotated semantic change detection dataset. To ensure data diversity, we first collected 4,662 pairs of aerial images from several platforms and sensors. These pairs of images are distributed over cities such as Hangzhou, Chengdu, and Shanghai. Each image has size 512 x 512 and is annotated at the pixel level. The annotation of SECOND was carried out by an expert group in earth vision applications, which guarantees high label accuracy. For the change categories in the SECOND dataset, we focus on 6 main land-cover classes, i.e. non-vegetated ground surface, tree, low vegetation, water, buildings and playgrounds, that are frequently involved in natural and man-made geographical changes. It is worth noticing that, in the new dataset, non-vegetated ground surface (n.v.g. surface for short) mainly corresponds to impervious surfaces and bare land. In summary, these 6 selected land-cover categories result in 30 common change categories (including non-change). Through the random selection of image pairs, SECOND reflects the real distributions of land-cover categories when changes occur. | Provide a detailed description of the following dataset: SECOND |
Shellcode_IA32 | Shellcode_IA32 is a dataset containing 20 years of shellcodes from a variety of sources; it is the largest collection of shellcodes in assembly available to date.
This dataset consists of 3,200 examples of instructions in assembly language for IA-32 (the 32-bit version of the x86 Intel Architecture) from publicly available security exploits. We collected assembly programs used to generate shellcode from exploit-db and from shell-storm. We enriched the dataset by adding examples of assembly programs for the IA-32 architecture from popular tutorials and books. This allowed us to understand how different authors and assembly experts comment and, thus, how to deal with the ambiguity of natural language in this specific context. Our dataset consists of 10% of instructions collected from books and guidelines, and the rest from real shellcodes.
Our focus is on Linux, the most common OS for security-critical network services. Accordingly, we added assembly instructions written with Netwide Assembler (NASM) for Linux.
Each line of the Shellcode_IA32 dataset represents a snippet-intent pair. The snippet is a line, or a combination of multiple lines, of assembly code, built following the NASM syntax. The intent is a comment in the English language.
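A purely hypothetical example of such a pair (illustrative only; the dataset's actual file layout and wording may differ):
```python
# One snippet-intent pair of the kind described above (not taken from the dataset).
snippet = "xor eax, eax"                  # NASM assembly for IA-32
intent = "Set the eax register to zero."  # English-language intent
```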
Further statistics on the dataset and a set of preliminary experiments performed with a neural machine translation (NMT) model are described in the following paper: [Shellcode_IA32: A Dataset for Automatic Shellcode Generation](https://aclanthology.org/2021.nlp4prog-1.7). | Provide a detailed description of the following dataset: Shellcode_IA32 |
SILICONE Benchmark | The Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE) benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems specifically designed for spoken language. All datasets are in the English language and cover a large variety of domains (e.g. daily life, scripted scenarios, joint task completion, phone call conversations, and television dialogue). Some datasets additionally include emotion and/or sentiment labels. | Provide a detailed description of the following dataset: SILICONE Benchmark |
FKD | The football keyword dataset (FKD) is a new keyword spotting dataset in Persian, collected via crowdsourcing. It contains nearly 31,000 samples in 18 classes. | Provide a detailed description of the following dataset: FKD |
SDCNL (Suicide vs Depression Classification) | We develop a primary dataset based on our task of suicide versus depression classification. This dataset is web-scraped from Reddit. We collect our data from subreddits using the Python Reddit API. We specifically scrape from two subreddits, r/SuicideWatch and r/Depression. The dataset contains 1,895 total posts. We utilize two fields from the scraped data: the original text of the post as our inputs, and the subreddit it belongs to as labels. Posts from r/SuicideWatch are labeled as suicidal, and posts from r/Depression are labeled as depressed. We make this dataset and the web-scraping script available in our code. | Provide a detailed description of the following dataset: SDCNL (Suicide vs Depression Classification) |
Reddit C-SSRS | The C-SSRS dataset contains 500 Reddit posts from the subreddit r/depression. These posts are labeled by psychologists on a five-point scale according to guidelines established in the Columbia Suicide Severity Rating Scale, which progresses according to the severity of depression. As this dataset is clinically verified and labeled, it is an adequate dataset to validate the label correction method, especially since it is from the same domain of mental health. | Provide a detailed description of the following dataset: Reddit C-SSRS |
SHHS | The Sleep Heart Health Study (SHHS) is a multi-center cohort study implemented by the National Heart Lung & Blood Institute to determine the cardiovascular and other consequences of sleep-disordered breathing. It tests whether sleep-related breathing is associated with an increased risk of coronary heart disease, stroke, all cause mortality, and hypertension. In all, 6,441 men and women aged 40 years and older were enrolled between November 1, 1995 and January 31, 1998 to take part in SHHS Visit 1. During exam cycle 3 (January 2001- June 2003), a second polysomnogram (SHHS Visit 2) was obtained in 3,295 of the participants. CVD Outcomes data were monitored and adjudicated by parent cohorts between baseline and 2011. More than 130 manuscripts have been published investigating predictors and outcomes of sleep disorders. | Provide a detailed description of the following dataset: SHHS |
AraCOVID19-MFH | AraCOVID19-MFH is a manually annotated multi-label Arabic COVID-19 fake news and hate speech detection dataset. The dataset contains 10,828 Arabic tweets annotated with 10 different labels. | Provide a detailed description of the following dataset: AraCOVID19-MFH |
UAVVaste | The UAVVaste dataset consists, to date, of 772 images and 3,716 annotations. The main motivation for the creation of the dataset was the lack of domain-specific data in the datasets widely used for object detection benchmarking. The dataset is made publicly available and is intended to be expanded.
| **Date** | **Images count** | **Annotations count** |
|------------ |:------------: |:-----------------: |
| 14.11.2020 | 772 | 3716 | | Provide a detailed description of the following dataset: UAVVaste |
AvaSym | Global Symmetry Ground-truth for AVA dataset.
Release Date: 2016.
Users of this software are encouraged to cite the following article:
Elawady, Mohamed, Cécile Barat, Christophe Ducottet, and Philippe Colantoni. "Global Bilateral Symmetry Detection Using Multiscale Mirror Histograms." In International Conference on Advanced Concepts for Intelligent Vision Systems, pp. 14-24. Springer International Publishing, 2016.
Contents:
GT_AVA/AVA_GT.mat : list of image names and axis ground truth (x1, y1, x2, y2).
DwnImgs.m : MATLAB code file to download images from the DpChallenge website and to show how to use the ground truth; the M-code will create two directories ('Imgs', 'ImgsGT').
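For users who prefer Python to the provided MATLAB tooling, the ground-truth file can presumably be inspected as follows (a sketch; the variable names stored inside AVA_GT.mat are not documented here, so treat them as unknowns):
```python
from scipy.io import loadmat

# Load the symmetry-axis ground truth; per the description above it holds
# image names plus axis endpoints (x1, y1, x2, y2).
gt = loadmat("GT_AVA/AVA_GT.mat")
print(gt.keys())  # inspect the actual variable names stored in the file
```
| Provide a detailed description of the following dataset: AvaSym |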
ConvQuestions | ConvQuestions is the first realistic benchmark for conversational question answering over knowledge graphs. It contains 11,200 conversations which can be evaluated over Wikidata. They are compiled from the inputs of 70 Master crowdworkers on Amazon Mechanical Turk, with conversations from five domains: Books, Movies, Soccer, Music, and TV Series. The questions feature a variety of complex question phenomena like comparisons, aggregations, compositionality, and temporal reasoning. Answers are grounded in Wikidata entities to enable fair comparison across diverse methods. The data gathering setup was kept as natural as possible, with the annotators selecting entities of their choice from each of the five domains, and formulating the entire conversation in one session. All questions in a conversation are from the same Turker, who also provided gold answers to the questions. For suitability to knowledge graphs, questions were constrained to be objective or factoid in nature, but no other restrictive guidelines were set. A notable property of ConvQuestions is that several questions are not answerable by Wikidata alone (as of September 2019), but the required facts can, for example, be found in the open Web or in Wikipedia. For details, please refer to our CIKM 2019 full paper. | Provide a detailed description of the following dataset: ConvQuestions |
Drinking Waste Classification | ## About the Dataset:
4 classes of drinking waste: Aluminium Cans, Glass bottles, PET (plastic) bottles and HDPE (plastic) Milk bottles.
rawimgs - images of 4 classes of waste
YOLO_imgs - images of 4 classes of waste with corresponding txt file (annotations for YOLO framework)
labels.txt - labels of the classes
## Story
This dataset was manually labelled and collected as part of a final-year Individual Project at University College London. Pictures were taken with a 12 MP phone camera. I created a real-time waste detection and identification system powered by the YOLO framework. Use it as you like; if you could cite me in your work, it would be much appreciated. Please reach out to me if this dataset actually helped you with your project.
Arkadiy Serezhkin - arkadiyhacks@gmail.com
## Acknowledgements
The dataset used parts of manually collected dataset of Gary Thung and Mindy Yang. I would like to thank them for collecting their dataset as this is not a fun thing to do (from my own experience). You can find their repository [here](https://github.com/garythung/trashnet). | Provide a detailed description of the following dataset: Drinking Waste Classification |
R2VQ | R2VQ is a dataset designed for testing competence-based comprehension of machines over a multimodal recipe collection, which contains text-video aligned recipes.
A total of 51,331 cooking events are annotated, which contain 19,201 explicit ingredients, 16,338 implicit ingredients, 12,316 explicit props, and 11,868 implicit props. | Provide a detailed description of the following dataset: R2VQ |
ionosphere | The original ionosphere dataset from the UCI machine learning repository is a binary classification dataset with dimensionality 34. One attribute has all-zero values and is discarded, so the total number of dimensions is 33. The 'bad' class is considered the outlier class and the 'good' class the inlier class.
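A minimal preprocessing sketch along these lines, assuming the copy of the dataset hosted on OpenML (hosting, version, and the 'b'/'g' label encoding are assumptions):
```python
from sklearn.datasets import fetch_openml

# Fetch the UCI ionosphere data via its OpenML copy (an assumption here).
X, y = fetch_openml("ionosphere", version=1, return_X_y=True, as_frame=False)

X = X.astype(float)
X = X[:, X.std(axis=0) > 0]      # drop the all-zero attribute: 34 -> 33 dims
labels = (y == "b").astype(int)  # 'b' (bad) -> outlier, 'g' (good) -> inlier
print(X.shape, labels.mean())
```
| Provide a detailed description of the following dataset: ionosphere |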
GeoLifeCLEF 2020 | The GeoLifeCLEF 2020 dataset is a large-scale remote sensing dataset. More specifically, it consists of 1.9 million species observations from the community science platform iNaturalist, each of which is paired with high-resolution covariates (RGB-IR imagery, land cover, and altitude). The dataset is roughly evenly split between the US and France, and covers over 31k plant and animal species. | Provide a detailed description of the following dataset: GeoLifeCLEF 2020 |
DBATES | DBATES is a database of multimodal communication features extracted from debate speeches in the 2019 North American Universities Debate Championships (NAUDC).
**Author's note:** If you want to access the dataset for research purposes, please email the authors.
Image source: [https://arxiv.org/pdf/2103.14189v1.pdf](https://arxiv.org/pdf/2103.14189v1.pdf) | Provide a detailed description of the following dataset: DBATES |
Boombox | **Boombox** is a multi-modal dataset for visual reconstruction from acoustic vibrations. It involves dropping objects into a box and capturing the resulting images and vibrations. It is used for training ML systems that predict images from vibration.
**Potential application domain:** Computer Vision, Multimodal Perception, Vision and Sound, Sight from Sound, Robotics, Deep Learning, and Machine Learning. | Provide a detailed description of the following dataset: Boombox |
ARC-100 | The **ARC-100** dataset was collected as part of a prototype retail checkout system titled ARC (Automatic Retail Checkout). It consists of 31,000 $640\times480$ RGB images of 100 commonly found retail items in Lahore, Pakistan. Each retail item has 310 images captured at various *logical* orientations (on a black, matte finish conveyor belt) by a Logitech C310 webcam, under a wooden hood frame illuminated by LED strips (luminance set to approximately $70lx$). In the proposed setup, images were pre-processed and standardized before feeding into a Convolutional Neural Network for identification.
Links:
- [ARC paper](https://arxiv.org/abs/2104.02832)
- [ARC-100 dataset](https://drive.google.com/drive/folders/1joDBa30_k_TegLDXZ2g5J11iLzNS3Py6) | Provide a detailed description of the following dataset: ARC-100 |
ImageNet-O | ImageNet-O consists of images from classes that are not found in the ImageNet-1k dataset. It is used to test the robustness of vision models to out-of-distribution samples. Performance is reported using the AUPR metric. | Provide a detailed description of the following dataset: ImageNet-O |
ImageNet-9 | ImageNet-9 consists of images with different amounts of background and foreground signal, which you can use to measure the extent to which your models rely on image backgrounds. This dataset is helpful in testing the robustness of vision models with respect to their dependence on the backgrounds of images. | Provide a detailed description of the following dataset: ImageNet-9 |
xSID | xSID, a new evaluation benchmark for cross-lingual (X) Slot and Intent Detection in 13 languages from 6 language families, including a very low-resource dialect, covering Arabic (ar), Chinese (zh), Danish (da), Dutch (nl), English (en), German (de), Indonesian (id), Italian (it), Japanese (ja), Kazakh (kk), Serbian (sr), Turkish (tr) and an Austro-Bavarian German dialect, South Tyrolean (de-st). | Provide a detailed description of the following dataset: xSID |
Few-NERD | Few-NERD is a large-scale, fine-grained manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities, and 4,601,223 tokens. Three benchmark tasks are built, one is supervised (Few-NERD (SUP)) and the other two are few-shot (Few-NERD (INTRA) and Few-NERD (INTER)). | Provide a detailed description of the following dataset: Few-NERD |
DL-HARD | Deep Learning Hard (**DL-HARD**) is an annotated dataset designed to more effectively evaluate neural ranking models on complex topics. It builds on TREC Deep Learning (DL) questions extensively annotated with query intent categories, answer types, wikified entities, topic categories, and result type metadata from a leading web search engine.
DL-HARD contains 50 queries from the official 2019/2020 evaluation benchmark, half of which are newly and independently assessed. Overall, DL-HARD is a new resource that promotes research on neural ranking methods by focusing on challenging and complex queries. | Provide a detailed description of the following dataset: DL-HARD |
SciDuet | **SciDuet** is a dataset for training and benchmarking models for automated document-to-slides generation. It consists of pairs of papers and their corresponding slide decks from recent years' NLP and ML conferences (e.g., ACL). This dataset contains 1,088 papers and 10,034 slides. | Provide a detailed description of the following dataset: SciDuet |
Flat Real World Simulink Models | This dataset contains:
(1) Slforge Generated Simulink Models: synthetic Simulink models
(2) Source of Real World Simulink Models
The `.txt` file is a combined text file that contains all the real-world Simulink models based on SLGPT's experimental setup. | Provide a detailed description of the following dataset: Flat Real World Simulink Models |
PhotoShape | The PhotoShape dataset consists of photorealistic, relightable 3D shapes produced by the method proposed in [Park et al.](https://paperswithcode.com/paper/photoshape-photorealistic-materials-for-large). | Provide a detailed description of the following dataset: PhotoShape |
Fruits 360 | ## Fruits 360 dataset: A dataset of images containing fruits and vegetables
## Version: 2020.05.18.0
### Content
The following fruits and vegetables are included:
Apples (different varieties: Crimson Snow, Golden, Golden-Red, Granny Smith, Pink Lady, Red, Red Delicious), Apricot, Avocado, Avocado ripe, Banana (Yellow, Red, Lady Finger), Beetroot Red, Blueberry, Cactus fruit, Cantaloupe (2 varieties), Carambula, Cauliflower, Cherry (different varieties, Rainier), Cherry Wax (Yellow, Red, Black), Chestnut, Clementine, Cocos, Corn (with husk), Cucumber (ripened), Dates, Eggplant, Fig, Ginger Root, Granadilla, Grape (Blue, Pink, White (different varieties)), Grapefruit (Pink, White), Guava, Hazelnut, Huckleberry, Kiwi, Kaki, Kohlrabi, Kumsquats, Lemon (normal, Meyer), Lime, Lychee, Mandarine, Mango (Green, Red), Mangostan, Maracuja, Melon Piel de Sapo, Mulberry, Nectarine (Regular, Flat), Nut (Forest, Pecan), Onion (Red, White), Orange, Papaya, Passion fruit, Peach (different varieties), Pepino, Pear (different varieties, Abate, Forelle, Kaiser, Monster, Red, Stone, Williams), Pepper (Red, Green, Orange, Yellow), Physalis (normal, with Husk), Pineapple (normal, Mini), Pitahaya Red, Plum (different varieties), Pomegranate, Pomelo Sweetie, Potato (Red, Sweet, White), Quince, Rambutan, Raspberry, Redcurrant, Salak, Strawberry (normal, Wedge), Tamarillo, Tangelo, Tomato (different varieties, Maroon, Cherry Red, Yellow, not ripened, Heart), Walnut, Watermelon.
### Dataset properties ###
Total number of images: 90483.
Training set size: 67692 images (one fruit or vegetable per image).
Test set size: 22688 images (one fruit or vegetable per image).
Number of classes: 131 (fruits and vegetables).
Image size: 100x100 pixels.
Filename format: image_index_100.jpg (e.g. 32_100.jpg) or r_image_index_100.jpg (e.g. r_32_100.jpg) or r2_image_index_100.jpg or r3_image_index_100.jpg. "r" stands for rotated fruit. "r2" means that the fruit was rotated around the 3rd axis. "100" comes from image size (100x100 pixels).
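A small sketch for parsing this naming scheme (the regular expression is reconstructed from the examples above, so treat it as an assumption):
```python
import re

# Matches "32_100.jpg", "r_32_100.jpg", "r2_32_100.jpg", "r3_32_100.jpg".
pattern = re.compile(r"^(r\d?)?_?(\d+)_100\.jpg$")

for name in ["32_100.jpg", "r_32_100.jpg", "r2_32_100.jpg"]:
    rotation, index = pattern.match(name).groups()
    print(name, "-> rotation:", rotation or "none", "index:", index)
```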
Different varieties of the same fruit (apple for instance) are stored as belonging to different classes.
### How we made it
Fruits and vegetables were planted in the shaft of a low-speed motor (3 rpm) and a short movie of 20 seconds was recorded.
A Logitech C920 camera was used for filming the fruits. This is one of the best webcams available.
Behind the fruits, we placed a white sheet of paper as background.
However, due to the variations in the lighting conditions, the background was not uniform and we wrote a dedicated algorithm that extracts the fruit from the background. This algorithm is of flood fill type: we start from each edge of the image and we mark all pixels there, then we mark all pixels found in the neighborhood of the already marked pixels for which the distance between colors is less than a prescribed value. We repeat the previous step until no more pixels can be marked.
All marked pixels are considered as being background (which is then filled with white) and the rest of the pixels are considered as belonging to the object.
The maximum value for the distance between 2 neighbor pixels is a parameter of the algorithm and is set (by trial and error) for each movie.
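A rough Python sketch of the described flood-fill idea (not the authors' exact implementation; the color-distance threshold is the per-movie parameter mentioned above):
```python
from collections import deque
import numpy as np

def extract_background_mask(img, max_dist=25.0):
    """Edge-seeded flood fill: mark background pixels reachable from the
    image border through small color steps (a sketch of the described idea)."""
    h, w, _ = img.shape
    img = img.astype(float)
    marked = np.zeros((h, w), dtype=bool)
    queue = deque()
    # Seed from every border pixel.
    for x in range(w):
        queue.extend([(0, x), (h - 1, x)])
    for y in range(h):
        queue.extend([(y, 0), (y, w - 1)])
    for y, x in queue:
        marked[y, x] = True
    # Grow the background region while neighboring colors stay close.
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not marked[ny, nx]:
                if np.linalg.norm(img[ny, nx] - img[y, x]) < max_dist:
                    marked[ny, nx] = True
                    queue.append((ny, nx))
    return marked  # True = background (to be filled with white)
```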
Pictures from the test-multiple_fruits folder were taken with a Nexus 5X phone.
### Research papers
Horea Muresan, [Mihai Oltean](https://mihaioltean.github.io), [Fruit recognition from images using deep learning](https://www.researchgate.net/publication/321475443_Fruit_recognition_from_images_using_deep_learning), Acta Univ. Sapientiae, Informatica Vol. 10, Issue 1, pp. 26-42, 2018.
The paper introduces the dataset and implementation of a Neural Network trained to recognize the fruits in the dataset.
### Alternate download
This dataset is also available for download from GitHub: [Fruits-360 dataset](https://github.com/Horea94/Fruit-Images-Dataset)
### History ###
Fruits were filmed at the dates given below (YYYY.MM.DD):
2017.02.25 - Apple (golden).
2017.02.28 - Apple (red-yellow, red, golden2), Kiwi, Pear, Grapefruit, Lemon, Orange, Strawberry, Banana.
2017.03.05 - Apple (golden3, Braeburn, Granny Smith, red2).
2017.03.07 - Apple (red3).
2017.05.10 - Plum, Peach, Peach flat, Apricot, Nectarine, Pomegranate.
2017.05.27 - Avocado, Papaya, Grape, Cherry.
2017.12.25 - Carambula, Cactus fruit, Granadilla, Kaki, Kumsquats, Passion fruit, Avocado ripe, Quince.
2017.12.28 - Clementine, Cocos, Mango, Lime, Lychee.
2017.12.31 - Apple Red Delicious, Pear Monster, Grape White.
2018.01.14 - Ananas, Grapefruit Pink, Mandarine, Pineapple, Tangelo.
2018.01.19 - Huckleberry, Raspberry.
2018.01.26 - Dates, Maracuja, Plum 2, Salak, Tamarillo.
2018.02.05 - Guava, Grape White 2, Lemon Meyer
2018.02.07 - Banana Red, Pepino, Pitahaya Red.
2018.02.08 - Pear Abate, Pear Williams.
2018.05.22 - Lemon rotated, Pomegranate rotated.
2018.05.24 - Cherry Rainier, Cherry 2, Strawberry Wedge.
2018.05.26 - Cantaloupe (2 varieties).
2018.05.31 - Melon Piel de Sapo.
2018.06.05 - Pineapple Mini, Physalis, Physalis with Husk, Rambutan.
2018.06.08 - Mulberry, Redcurrant.
2018.06.16 - Hazelnut, Walnut, Tomato, Cherry Red.
2018.06.17 - Cherry Wax (Yellow, Red, Black).
2018.08.19 - Apple Red Yellow 2, Grape Blue, Grape White 3-4, Peach 2, Plum 3, Tomato Maroon, Tomato 1-4 .
2018.12.20 - Nut Pecan, Pear Kaiser, Tomato Yellow.
2018.12.21 - Banana Lady Finger, Chestnut, Mangostan.
2018.12.22 - Pomelo Sweetie.
2019.04.21 - Apple Crimson Snow, Apple Pink Lady, Blueberry, Kohlrabi, Mango Red, Pear Red, Pepper (Red, Yellow, Green).
2019.06.18 - Beetroot Red, Corn, Ginger Root, Nectarine Flat, Nut Forest, Onion Red, Onion Red Peeled, Onion White, Potato Red, Potato Red Washed, Potato Sweet, Potato White.
2019.07.07 - Cauliflower, Eggplant, Pear Forelle, Pepper Orange, Tomato Heart.
2019.09.22 - Corn Husk, Cucumber Ripe, Fig, Pear 2, Pear Stone, Tomato not Ripened, Watermelon.
## License ##
MIT License
Copyright (c) 2017-2021 [Mihai Oltean](https://mihaioltean.github.io)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | Provide a detailed description of the following dataset: Fruits 360 |
WMT 2021 Ge'ez-Amharic | **WMT 2021 Ge'ez-Amharic** is a Ge'ez-Amharic dataset prepared for NMT tasks of the 6th Workshop on NLP at Debre Berhan University, Ethiopia. The corpus has been collected from:
* The old Bible of the Ethiopian Orthodox Church (from ethiopianorthodox.org), the Anaphora, praises of St. Virgin Mary, praises of Lord Jesus, and other Church books.
* Ge'ez teaching books,
* Websites and other internet sources such as www.geez.org and www.debelo.org.
The dataset has about 15,454 parallel Ge'ez-Amharic sentences for training, 1,001 parallel sentences for testing, and 1,001 parallel sentences for validation. | Provide a detailed description of the following dataset: WMT 2021 Ge'ez-Amharic |
PubMed Term, Abstract, Conclusion, Title Dataset | This dataset gathers three types of pairs from PubMed: Title-to-Abstract (Training: 22,811 / Development: 2,095 / Test: 2,095), Abstract-to-Conclusion and Future Work (Training: 22,811 / Development: 2,095 / Test: 2,095), and Conclusion and Future Work-to-Title (Training: 15,902 / Development: 2,095 / Test: 2,095). Each pair contains an input and an output, as well as the corresponding terms (from the original KB and link prediction results). | Provide a detailed description of the following dataset: PubMed Term, Abstract, Conclusion, Title Dataset |
PubMed Paper Reading Dataset | This dataset gathers 14,857 entities, 133 relations, and the tokenized text corresponding to the entities from PubMed. It contains 875,698 training pairs, 109,462 development pairs, and 109,462 test pairs. | Provide a detailed description of the following dataset: PubMed Paper Reading Dataset |
ReviewRobot Dataset | # ReviewRobot Dataset
## Overview
This repository contains data for paper ReviewRobot: Explainable Paper Review Generation based on Knowledge Synthesis. [[Dataset]](https://drive.google.com/file/d/1NclEwGEVcHCrSWk8s3lDjvEbMlWXQoXM/view?usp=sharing)
## Dataset
There are three folders: `Raw_data`, `IE_result`, and `KGs`.
### Raw_data folder
The `Raw_data` has two parts: `Background` Corpus and `Paper-review` Corpus.
We create the `Background Corpus` by selecting machine-learning-related papers from the [Semantic Scholar Open Research Corpus](http://s2-public-api-prod.us-west-2.elasticbeanstalk.com/corpus/). It contains papers with their titles and abstracts published from the year 1965 to 2019 (inclusive).
The `Paper-review Corpus` contains parsed paper PDFs and their corresponding reviews. The paper-review pairs in the `acl_2017` and `iclr_2017` folders come from the [PeerRead dataset](https://github.com/allenai/PeerRead). We fetched the rest from [OpenReview](https://openreview.net/) and [NeurIPS](https://papers.nips.cc/). We parsed those PDFs using [GROBID](https://github.com/kermitt2/grobid). In each folder, `metadata.txt` contains all human reviews, and the `txt/` folder contains all processed papers.
### IE_result folder
The `IE_result` folder contains information extraction results from [SciIE](https://bitbucket.org/luanyi/scierc/src/master/). In each group, the `*_json/` contains tokenized texts, and the `*_output/` contains IE results of tokenized texts.
The `Background_IE` contains two folders from one group for all paper abstracts from 1965 to 2019.
The `Paper-review_IE` contains four folders from two groups. The first group, `iclrnipsabs_json` and `iclrnipsabs_output`, contains IE results for abstracts of the `Paper-review Corpus`. The second group, `iclrnips_json` and `iclrnips_output`, contains IE results for the rest of the papers in the `Paper-review Corpus`.
### KGs
The `KGs` folder contains the knowledge graphs built on the `IE_result`.
#### back_kg
The `back_kg` contains the background KGs built up to a certain year. For each year, there are three files.
Take 2012 as an example:
* `2012.pkl` contains the background knowledge graph up to (and including) 2012. It contains a dictionary of 6 fields: `num_doc` is the number of papers up to that year, `cluster2entity` is a mapping from the entity to its mentions, `entity2cluster` is a mapping from the mention to its corresponding entity, `cluster2type` is a mapping from the entity to its type, `entity` refers to all mentions in the current KG, and `relations` refers to all relations in the current KG.
* `2012_key.pkl` contains the mappings from knowledge elements to paper ids. It has two fields: `cluster` is the mapping from an entity to its corresponding paper ids, and `relation` is the mapping from a relation to the corresponding paper ids.
* `2012_paper` contains the mappings from paper ids to paper titles.
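For example, these files can be loaded with Python's `pickle` module (a minimal sketch; paths are assumed to be relative to the `KGs/back_kg` folder, and field names are taken from the description above):
```python
import pickle

# Load the background KG up to (and including) 2012.
with open("2012.pkl", "rb") as f:
    kg = pickle.load(f)
print(kg["num_doc"])        # number of papers up to 2012
print(len(kg["relations"])) # relations in the current KG

# Load the mappings from knowledge elements to paper ids.
with open("2012_key.pkl", "rb") as f:
    keys = pickle.load(f)
# keys["cluster"] maps an entity to paper ids; keys["relation"] maps a
# relation to paper ids.
```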
#### idea_kg
The `idea_kg` folder contains idea KGs constructed from paper abstracts and conclusions. Each line is a paper in the venue and has the following fields: `id` for the paper id, `abs_num` for the number of abstract sentences, `sent` for all sentences related to `idea_kg`, `entity` for all mentions in the current KG, `cluster2sent` for the corresponding sentence ids for a specific entity, `entity2num` for the occurrence of a specific mention, `relation2num` for the occurrence of a specific relation, `cluster2entity` for a mapping from the entity to its mentions, `entity2type` for a mapping from the mention to the type, `relations` for all relations in the current KG, `relation2sent` for the corresponding sentence ids for a specific relation, and `entity2cluster` for a mapping from the mention to its corresponding entity.
#### related_kg
The `related_kg` contains related KGs constructed from related work for each venue. It is of the same structure as `idea_kg`.
#### contribute_kg
The `contribute_kg` contains contribute KGs constructed from the paper contribution section (under the introduction section) and the experiment section. It contains a dictionary of 4 fields: `id` for the paper id, `total` for the number of entities covered in the contribution section, `covered` for the number of entities covered in the experiment section, and `sents` for related sentences that cover those entities from both sections.
#### future_kg
The `future_kg` contains future KGs constructed from future work for each venue. It is of the same structure as `idea_kg`.
### Review-annotation
The `Review-annotation` folder contains human annotations for review category and paper-review sentence pairs. The `review.txt` contains annotation for review category including 236 sentences for "SUMMARY", 33 sentences for "NOVELTY", 174 sentences for "SOUNDNESS_CORRECTNESS", 16 sentences for "MEANINGFUL_COMPARISON", and 14 sentences for "IMPACT". The `pair.txt` contains 2,535 review-paper pairs. For each pair, the first slot is the review sentence; the second slot is the paper sentence, the third slot is the label where 0 indicates two sentences are not related and 1 indicates they are related.
## License
Creative Commons — Attribution 4.0 International — CC BY 4.0 | Provide a detailed description of the following dataset: ReviewRobot Dataset |
FlyingThings3D | **FlyingThings3D** is a synthetic dataset for optical flow, disparity and scene flow estimation. It consists of everyday objects flying along randomized 3D trajectories. We generated about 25,000 stereo frames with ground truth data. Instead of focusing on a particular task (like KITTI) or enforcing strict naturalism (like Sintel), we rely on randomness and a large pool of rendering assets to generate orders of magnitude more data than any existing option, without running a risk of repetition or saturation. | Provide a detailed description of the following dataset: FlyingThings3D |
ATD-12K | **ATD-12K** is a large-scale animation triplet dataset comprising 12,000 manually inspected triplets (10k train, 2k test). The 2k test triplets carry rich annotations, including levels of difficulty, the Regions of Interest (RoIs) on movements, and tags on motion categories.
The dataset was collected from 30 series of movies (in-the-wild and modern) made by diversified producers, with a total duration of 25+ hours, for a total of 101 clips in two resolutions (i.e., 1920×1080 and 1280×720).
* Note that some triplets have subtitle and watermarking issues | Provide a detailed description of the following dataset: ATD-12K |
Project CodeNet | **Project CodeNet** is a large-scale dataset with approximately 14 million code samples, each of which is an intended solution to one of 4000 coding problems. The code samples are written in over 50 programming languages (although the dominant languages are C++, C, Python, and Java) and they are annotated with a rich set of information, such as its code size, memory footprint, cpu run time, and status, which indicates acceptance or error types. The dataset is accompanied by a [repository](https://github.com/IBM/Project_CodeNet), where we provide a set of [tools](https://github.com/IBM/Project_CodeNet/tree/main/tools) to aggregate codes samples based on user criteria and to transform code samples into token sequences, simplified parse trees and other code graphs. A detailed discussion of Project CodeNet is available in this [paper](https://github.com/IBM/Project_CodeNet/blob/main/ProjectCodeNet_NeurIPS2021.pdf).
The rich annotation of Project CodeNet enables research in code search, code completion, code-code translation, and a myriad of other use cases. We also extracted several benchmarks in Python, Java and C++ to drive innovation in deep learning and machine learning models in code classification and code similarity.
#### Citation
```
@inproceedings{puri2021codenet,
author = {Ruchir Puri and David Kung and Geert Janssen and Wei Zhang and Giacomo Domeniconi and Vladmir Zolotov and Julian Dolby and Jie Chen and Mihir Choudhury and Lindsey Decker and Veronika Thost and Luca Buratti and Saurabh Pujar and Ulrich Finkler},
title = {Project CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks},
year = {2021},
}
``` | Provide a detailed description of the following dataset: Project CodeNet |
DanbooRegion | **DanbooRegion** is a dataset consists of 5377 in-the-wild illustration downloaded from the Danbooru2018 and region segment map annotation pairs
Samples are provided as 1024 px 8-bit RGB images, and region segment maps as int-32 index images. | Provide a detailed description of the following dataset: DanbooRegion |
Voice Navigation | **Voice Navigation** is a large-scale dataset of Chinese speech for slot filling, containing more than 830,000 samples. | Provide a detailed description of the following dataset: Voice Navigation |
Active Terahertz | This is a public dataset for evaluating multi-object detection algorithms in active Terahertz imaging resolution 5 mm by 5 mm. | Provide a detailed description of the following dataset: Active Terahertz |
BookSum | **BookSum** is a collection of datasets for long-form narrative summarization. This dataset covers source documents from the literature domain, such as novels, plays and stories, and includes highly abstractive, human written summaries on three levels of granularity of increasing difficulty: paragraph-, chapter-, and book-level. The domain and structure of this dataset poses a unique set of challenges for summarization systems, which include: processing very long documents, non-trivial causal and temporal dependencies, and rich discourse structures.
**BookSum** contains summaries for 142,753 paragraphs, 12,293 chapters and 436 books. | Provide a detailed description of the following dataset: BookSum |
SPI dataset | The **SPI dataset** consists of force-controlled industrial robot data for training shadow program inversion (SPI) models. | Provide a detailed description of the following dataset: SPI dataset |
QAConv | **QAConv** is a new question answering (QA) dataset that uses conversations as a knowledge source. We focus on informative conversations including business emails, panel discussions, and work channels. Unlike opendomain and task-oriented dialogues, these conversations are usually long, complex, asynchronous, and involve strong domain knowledge. In total, we collect 34,204 QA pairs, including span-based, free-form, and unanswerable questions, from 10,259 selected conversations with both human-written and machine-generated questions. We segment long conversations into chunks, and use a question generator and dialogue summarizer as auxiliary tools to collect multi-hop questions. The dataset has two testing scenarios, chunk mode and full mode, depending on whether the grounded chunk is provided or retrieved from a large conversational pool. | Provide a detailed description of the following dataset: QAConv |
Fetoscopy Placenta Data | The fetoscopy placenta dataset is associated with our MICCAI2020 publication titled [“Deep Placental Vessel Segmentation for Fetoscopic Mosaicking”](https://arxiv.org/pdf/2007.04349.pdf). The dataset contains 483 frames with ground-truth vessel segmentation annotations taken from six different in vivo fetoscopic procedure videos. The dataset also includes six unannotated in vivo continuous fetoscopic video clips (950 frames) with predicted vessel segmentation maps obtained from the leave-one-out cross-validation of our method.
For ground-truth vessel annotation, we selected the non-occluded (no fetus or tool presence) frames through a separate frame-level fetoscopic event identification approach [Bano:IJCARS2020](https://link.springer.com/content/pdf/10.1007/s11548-020-02169-0.pdf). We annotated a binary mask for vessel segmentation using the Pixel Annotation Tool. | Provide a detailed description of the following dataset: Fetoscopy Placenta Data |
Fusion-DHL | Fusion-DHL is a multimodal sensor dataset with ground-truth positions. | Provide a detailed description of the following dataset: Fusion-DHL |
seeds | The examined group comprised kernels belonging to three different varieties of wheat: Kama, Rosa and Canadian, 70 elements each, randomly selected for the experiment. High quality visualization of the internal kernel structure was detected using a soft X-ray technique. It is non-destructive and considerably cheaper than other more sophisticated imaging techniques like scanning microscopy or laser technology. The images were recorded on 13x18 cm X-ray KODAK plates. Studies were conducted using combine harvested wheat grain originating from experimental fields, explored at the Institute of Agrophysics of the Polish Academy of Sciences in Lublin.
The data set can be used for the tasks of classification and cluster analysis. | Provide a detailed description of the following dataset: seeds |
97 synthetic datasets | 97 synthetic datasets consists of 97 datasets (as illustrated in the figure) and can be used to test graph-based clustering algorithms.
https://github.com/deric/clustering-benchmark | Provide a detailed description of the following dataset: 97 synthetic datasets |
PPR10K | **PPR10K** is a dataset for portrait photo retouching (PPR), which aims to enhance the visual quality of a collection of flat-looking portrait photos. The Portrait Photo Retouching dataset (PPR10K) is a large-scale and diverse dataset that contains:
* 11,161 high-quality raw portrait photos (resolutions from 4K to 8K) in 1,681 groups;
* 3 versions of manual retouched targets of all photos given by 3 expert retouchers;
* full resolution human-region masks of all photos. | Provide a detailed description of the following dataset: PPR10K |
SoftAttributes | The dataset consists of sets of movie titles, with each set annotated with a single English soft attribute (subjective descriptive property, such as 'confusing' or 'romantic') and a reference movie. For each set, a crowd worker has placed the movies into three sets: more, equally, and less than the reference movie. There are 5,991 such sets, from which one can infer approximately 250,000 pairwise preferences over movies for the 60 distinct soft attributes studied. | Provide a detailed description of the following dataset: SoftAttributes |
Ali-CCP | This data set is provided by Alimama | Provide a detailed description of the following dataset: Ali-CCP |
Essay-BR | This repository contains essays written by high school Brazilian students. These essays were graded by humans professionals following the criteria of the ENEM exam. | Provide a detailed description of the following dataset: Essay-BR |
OpenMEVA | OpenMEVA is a benchmark for evaluating open-ended story generation metrics. OpenMEVA provides a comprehensive test suite to assess the capabilities of metrics, including (a) the correlation with human judgments, (b) the generalization to different model outputs and datasets, (c) the ability to judge story coherence, and (d) the robustness to perturbations. To this end, OpenMEVA includes both manually annotated stories and auto-constructed test examples. | Provide a detailed description of the following dataset: OpenMEVA |
RITEyes | Deep neural networks for video based eye tracking have demonstrated resilience to noisy environments, stray reflections and low resolution. However, to train these networks, a large number of manually annotated images are required. To alleviate the cumbersome process of manual labeling, computer graphics rendering is employed to automatically generate a large corpus of annotated eye images under various conditions. In this work, we introduce RIT-Eyes, a novel synthetic eye image generation platform which improves upon previous work by adding features such as retinal retro-reflection, realistic blinks, an active deformable iris and an aspherical cornea. We add various external influences which potentially degrade eye tracking such as corrective eye-wear with varying refractive indices. To demonstrate the utility of RIT-Eyes, we generate and publicly share a large dataset of images with a variety of eye poses and viewing conditions. | Provide a detailed description of the following dataset: RITEyes |
NAVER LABS Localization Datasets | The NAVER LABS localization datasets are 5 new indoor datasets for visual localization in challenging real-world environments. They were captured in a large shopping mall and a large metro station in Seoul, South Korea, using a dedicated mapping platform consisting of 10 cameras and 2 laser scanners. In order to obtain accurate ground truth camera poses, we used a robust LiDAR SLAM which provides initial poses that are then refined using a novel structure-from-motion based optimization.
The datasets are provided in the [kapture](https://github.com/naver/kapture) format and contain about 130k images as well as 6DoF camera poses for training and validation. We also provide sparse Lidar-based depth maps for the training images. The poses of the test set are withheld to not bias the benchmark. | Provide a detailed description of the following dataset: NAVER LABS Localization Datasets |
MIT-Adobe FiveK | The **MIT-Adobe FiveK** dataset consists of 5,000 photographs taken with SLR cameras by a set of different photographers. They are all in RAW format; that is, all the information recorded by the camera sensor is preserved. We made sure that these photographs cover a broad range of scenes, subjects, and lighting conditions. We then hired five photography students in an art school to adjust the tone of the photos. Each of them retouched all the 5,000 photos using a software dedicated to photo adjustment (Adobe Lightroom) on which they were extensively trained. We asked the retouchers to achieve visually pleasing renditions, akin to a postcard. The retouchers were compensated for their work.
This dataset was collected for our project on learning photographic adjustments. When using images from this dataset, please cite this dataset using the following BibTeX:
```
@inproceedings{fivek,
author = "Vladimir Bychkovsky and Sylvain Paris and Eric Chan and Fr{\'e}do Durand",
title = "Learning Photographic Global Tonal Adjustment with a Database of Input / Output Image Pairs",
booktitle = "The Twenty-Fourth IEEE Conference on Computer Vision and Pattern Recognition",
year = "2011"
}
``` | Provide a detailed description of the following dataset: MIT-Adobe FiveK |
behavioral observation data entry apps | In this repository, we provide the set-up files and output files of 5 behavioral observation data entry applications. These applications allow observers to collect animal behavior data on a handheld computer (phone/tablet). | Provide a detailed description of the following dataset: behavioral observation data entry apps |
GazeCapture | From scientific research to commercial applications, eye tracking is an important tool across many domains. Despite its range of applications, eye tracking has yet to become a pervasive technology. We believe that we can put the power of eye tracking in everyone's palm by building eye tracking software that works on commodity hardware such as mobile phones and tablets, without the need for additional sensors or devices. We tackle this problem by introducing GazeCapture, the first large-scale dataset for eye tracking, containing data from over 1450 people consisting of almost $2.5M$ frames. Using GazeCapture, we train iTracker, a convolutional neural network for eye tracking, which achieves a significant reduction in error over previous approaches while running in real time (10 - 15fps) on a modern mobile device. Our model achieves a prediction error of 1.7cm and 2.5cm without calibration on mobile phones and tablets respectively. With calibration, this is reduced to 1.3cm and 2.1cm. Further, we demonstrate that the features learned by iTracker generalize well to other datasets, achieving state-of-the-art results.
Image source: [Eye Tracking for Everyone](https://gazecapture.csail.mit.edu/cvpr2016_gazecapture.pdf) | Provide a detailed description of the following dataset: GazeCapture |
Gaze360 | Understanding where people are looking is an informative social cue. In this work, we present Gaze360, a large-scale gaze-tracking dataset and method for robust 3D gaze estimation in unconstrained images. Our dataset consists of 238 subjects in indoor and outdoor environments with labelled 3D gaze across a wide range of head poses and distances. It is the largest publicly available dataset of its kind by both subject and variety, made possible by a simple and efficient collection method. Our proposed 3D gaze model extends existing models to include temporal information and to directly output an estimate of gaze uncertainty. We demonstrate the benefits of our model via an ablation study, and show its generalization performance via a cross-dataset evaluation against other recent gaze benchmark datasets. We furthermore propose a simple self-supervised approach to improve cross-dataset domain adaptation. Finally, we demonstrate an application of our model for estimating customer attention in a supermarket setting.
Image source: [Gaze360: Physically Unconstrained Gaze Estimation in the Wild](https://arxiv.org/pdf/1910.10088v1.pdf) | Provide a detailed description of the following dataset: Gaze360 |
Rare Diseases Mentions in MIMIC-III | ## Data annotation
The 1,073 full rare disease mention annotations (from 312 MIMIC-III **discharge summaries**) are in [`full_set_RD_ann_MIMIC_III_disch.csv`](https://github.com/acadTags/Rare-disease-identification/blob/main/data%20annotation/full_set_RD_ann_MIMIC_III_disch.csv).
The data split (a loading sketch follows this list):
* the first 400 rows are used for validation, [`validation_set_RD_ann_MIMIC_III_disch.csv`](https://github.com/acadTags/Rare-disease-identification/blob/main/data%20annotation/validation_set_RD_ann_MIMIC_III_disch.csv), and
* the last 673 rows are used for testing, [`test_set_RD_ann_MIMIC_III_disch.csv`](https://github.com/acadTags/Rare-disease-identification/blob/main/data%20annotation/test_set_RD_ann_MIMIC_III_disch.csv).
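A minimal sketch reproducing this split from the full set, assuming a plain pandas read of the CSV:

```
import pandas as pd

full = pd.read_csv("full_set_RD_ann_MIMIC_III_disch.csv")
val, test = full.iloc[:400], full.iloc[400:]  # first 400 validation, last 673 test
assert len(val) == 400 and len(test) == 673
```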
The 198 rare disease mention annotations (from 145 MIMIC-III **radiology reports**) are in [`test_set_RD_ann_MIMIC_III_rad.csv`](https://github.com/acadTags/Rare-disease-identification/blob/main/data%20annotation/test_set_RD_ann_MIMIC_III_rad.csv). Note that radiology reports were only used for testing and not for validation.
**To note**: a row can be considered a true phenotype of the patient only when the value of the column **gold mention-to-ORDO label** is 1.
## Data sampling and annotation procedure
* (i) Randomly sampled 500 discharge summaries (and 1000 radiology reports) from MIMIC-III
* (ii) 312 of the 500 discharge summaries (and 145 of the 1000 radiology reports) have at least one positive UMLS mention linked to ORDO, as identified by SemEHR; altogether there are 1,073 UMLS/ORDO mentions in discharge summaries (and 198 in radiology reports).
* (iii) 3 medical informatics researchers (staff or PhD students) annotated the 1,073 mentions (and 2 medical informatics researchers annotated the 198 mentions in radiology reports), judging whether each is a correct patient phenotype matched to UMLS and ORDO. Disagreements in the annotations were then resolved by another research staff member with a biomedical background.
## Data dictionary
| Column Name | Description |
|----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ROW_ID | Identifier unique to each row, see [`https://mimic.physionet.org/mimictables/noteevents/`](https://mimic.physionet.org/mimictables/noteevents/) |
| SUBJECT_ID | Identifier unique to a patient, see [`https://mimic.physionet.org/mimictables/noteevents/`](https://mimic.physionet.org/mimictables/noteevents/) |
| HADM_ID | Identifier unique to a patient hospital stay, see [`https://mimic.physionet.org/mimictables/noteevents/`](https://mimic.physionet.org/mimictables/noteevents/) |
| document structure name | The document structure name of the mention. The document structure name is identified by SemEHR (only for discharge summaries). |
| document structure offset in full document | The start and ending offsets of the document structure texts (or template) in the whole discharge summary. The document structure is parsed by SemEHR with regular expressions (only for discharge summaries). |
| mention | The mention identified by SemEHR. |
| mention offset in document structure | The start and ending offsets of the mention in the document structure (only for discharge summaries). |
| mention offset in full document | The start and ending offsets of the mention in the whole discharge summary. They can be calculated by `document structure offset in full document` and `mention offset in document structure` (see the sketch after this table). |
| UMLS with desc | The UMLS identified by SemEHR, corresponding to the mention. |
| ORDO with desc | The ORDO matched to the UMLS, using the linkage in the ORDO ontology, see [`https://www.ebi.ac.uk/ols/ontologies/ordo/terms?iri=http%3A%2F%2Fwww.orpha.net%2FORDO%2FOrphanet_3325`](https://www.ebi.ac.uk/ols/ontologies/ordo/terms?iri=http%3A%2F%2Fwww.orpha.net%2FORDO%2FOrphanet_3325) as an example. |
| gold mention-to-UMLS label | Whether the mention-UMLS pair indicate a correct phenotype of the patient (i.e. a positive mention that correctly matches to the UMLS concept), 1 if correct, 0 if not. |
| gold UMLS-to-ORDO label | Whether the matching is correct from the UMLS concept to the ORDO concept, 1 if correct, 0 if not. |
| gold mention-to-ORDO label | Whether the mention-ORDO triple indicates a correct phenotype of the patient, 1 if correct, 0 if not. This column is 1 if both the mention-to-UMLS label and the UMLS-to-ORDO label are 1, otherwise 0. |
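A tiny sketch of the offset arithmetic noted in the table above, with illustrative values (not taken from the data):

```
doc_struct_start = 1042            # illustrative: structure start in the full document
mention_in_struct = (15, 18)       # illustrative: mention span within the structure
mention_in_doc = tuple(doc_struct_start + o for o in mention_in_struct)
print(mention_in_doc)              # (1057, 1060)
```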
**Note:**
* These manual annotations are by no means perfect. There are hypothetical mentions for which it was difficult for the annotators to make a decision (see some notes in the raw annotations). Also, the annotations are based on the output of [`SemEHR`](https://github.com/CogStack/CogStack-SemEHR), which does not have 100% recall, so they may not cover all rare disease mentions in the sampled discharge summaries.
* In row 323 of the full set or the validation set, the mention `nph` is not in the document structure (due to an error in mention extraction), thus the `gold mention-to-UMLS label` is `-1`.
## Raw annotations (with model predictions)
The two excel workbooks,
* [`for validation - SemEHR ori (MIMIC-III-DS, free text removed, with predictions).xlsx`](https://github.com/acadTags/Rare-disease-identification/blob/main/data%20annotation/raw%20annotations%20(with%20model%20predictions)/for%20validation%20-%20SemEHR%20ori%20(MIMIC-III-DS%2C%20free%20text%20removed%2C%20with%20predictions).xlsx) (annotations starting from column `CX` and also in the third sheet, `distinct umls-ordo`), and
* [`for validation - 1000 docs - ori - MIMIC-III-rad (free text removed, with predictions).xlsx`](https://github.com/acadTags/Rare-disease-identification/blob/main/data%20annotation/raw%20annotations%20(with%20model%20predictions)/for%20validation%20-%201000%20docs%20-%20ori%20-%20MIMIC-III-rad%20(free%20text%20removed%2C%20with%20predictions).xlsx) (annotations starting from column `Z`),
show the raw annotations, including each annotator's results and notes, and the predictions of all baseline approaches/tools. The predictions were not available to the annotators when the annotations were made. Free text of clinical notes was removed before the publication of the data. | Provide a detailed description of the following dataset: Rare Diseases Mentions in MIMIC-III |
APPS | The APPS dataset consists of problems collected from different open-access coding websites such as Codeforces, Kattis, and more. The APPS benchmark attempts to mirror how humans programmers are evaluated by posing coding problems in unrestricted natural language and evaluating the correctness of solutions. The problems range in difficulty from introductory to collegiate competition level and measure coding ability as well as problem-solving.
The Automated Programming Progress Standard, abbreviated APPS, consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. In the test set, every problem has multiple test cases, and the average number of test cases is 21.2. Each test case is specifically designed for the corresponding problem, enabling us to rigorously evaluate program functionality. | Provide a detailed description of the following dataset: APPS |
BigCQ | **BigCQ** is a dataset of Competency Question templates paired with SPARQL-OWL query templates. These represent templates of ontology requirements formalizations which are then translated into SPARQL-OWL query language used to query T-Box level of ontologies. Thus, such a dataset can be used in various scenarios regarding ontology authoring:
- Provide a large scale dataset for automatization of CQ involving tasks (automatic extraction of Glossary of Terms from requirements, automatic translation of CQs into queries to check how mature given ontology is).
- Allow to understand better the relation between human-language and ontology constructs.
- Make Competency Question driven ontology authoring more popular, since, although CQs are suggested in many ontology design methodologies, there is very limited set of CQs made publicly available.
- Provide guidelines on how CQs can be constructed to target given modelling styles. | Provide a detailed description of the following dataset: BigCQ |
KLUE | Korean Language Understanding Evaluation (**KLUE**) benchmark is a series of datasets to evaluate natural language understanding capability of Korean language models. KLUE consists of 8 diverse and representative tasks, which are accessible to anyone without any restrictions. With ethical considerations in mind, we deliberately design annotation guidelines to obtain unambiguous annotations for all datasets. Furthermore, we build an evaluation system and carefully choose evaluations metrics for every task, thus establishing fair comparison across Korean language models.
KLUE benchmark is composed of 8 tasks:
- Topic Classification (TC)
- Semantic Textual Similarity (STS)
- Natural Language Inference (NLI)
- Named Entity Recognition (NER)
- Relation Extraction (RE)
- (Part-Of-Speech) + Dependency Parsing (DP)
- Machine Reading Comprehension (MRC)
- Dialogue State Tracking (DST) | Provide a detailed description of the following dataset: KLUE |
GBSG2 | The German Breast Cancer Study Group (GBSG2) dataset studies the effects of hormone treatment on recurrence-free survival time.
The event of interest is the time to cancer recurrence.
This data frame contains the observations of 686 women:
* horTh: hormonal therapy, a factor at two levels (yes and no).
* age: age of the patients in years.
* menostat: menopausal status, a factor at two levels pre (premenopausal) and post (postmenopausal).
* tsize: tumor size (in mm).
* tgrade: tumor grade, an ordered factor with levels I < II < III.
* pnodes: number of positive nodes.
* progrec: progesterone receptor (in fmol).
* estrec: estrogen receptor (in fmol).
* time: recurrence free survival time (in days).
* cens: censoring indicator (0- censored, 1- event).
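A minimal survival-modelling sketch on the variables above, assuming the copy of GBSG2 bundled with scikit-survival (any CSV export with these columns would work equally well):

```
from sksurv.datasets import load_gbsg2
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.preprocessing import OneHotEncoder

X, y = load_gbsg2()                    # y packs (cens, time) per woman
X = X.drop(columns=["tgrade"])         # ordered factor; dropped to keep the sketch short
Xt = OneHotEncoder().fit_transform(X)  # horTh / menostat become 0-1 columns
model = CoxPHSurvivalAnalysis().fit(Xt, y)
print(dict(zip(Xt.columns, model.coef_)))
```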
**References**
* W. Sauerbrei and P. Royston (1999). Building multivariable prognostic and diagnostic models: transformation of the predictors by using fractional polynomials. Journal of the Royal Statistical Society Series A, Volume 162(1), 71–94
* M. Schumacher, G. Basert, H. Bojar, K. Huebner, M. Olschewski, W. Sauerbrei, C. Schmoor, C. Beyerle, R.L.A. Neumann and H.F. Rauschecker for the German Breast Cancer Study Group (1994), Randomized 2 × 2 trial evaluating hormonal treatment and the duration of chemotherapy in node-positive breast cancer patients. Journal of Clinical Oncology, 12, 2086–2093
PBC | Primary sclerosing cholangitis is an autoimmune disease leading to destruction of the small bile ducts in the liver. Progression is slow but inexhortable, eventually leading to cirrhosis and liver decompensation. The condition has been recognised since at least 1851 and was named "primary biliary cirrhosis" in 1949. Because cirrhosis is a feature only of advanced disease, a change of its name to "primary biliary cholangitis" was proposed by patient advocacy groups in 2014.
This data is from the Mayo Clinic trial in PBC conducted between 1974 and 1984. A total of 424 PBC patients, referred to Mayo Clinic during that ten-year interval, met eligibility criteria for the randomized placebo controlled trial of the drug D-penicillamine. The first 312 cases in the data set participated in the randomized trial and contain largely complete data. The additional 112 cases did not participate in the clinical trial, but consented to have basic measurements recorded and to be followed for survival. Six of those cases were lost to follow-up shortly after diagnosis, so the data here are on an additional 106 cases as well as the 312 randomized participants.
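A minimal sketch of working with the endpoint coding, assuming a CSV export of the table with the variables listed below (the file name `pbc.csv` is hypothetical):

```
import pandas as pd

pbc = pd.read_csv("pbc.csv")
# Collapse the 3-level endpoint into a death indicator (status: 0/1/2, see below).
pbc["death"] = (pbc["status"] == 2).astype(int)
# Keep the 312 randomized participants (trt is NA for the rest).
randomized = pbc[pbc["trt"].notna()]
print(randomized.groupby("trt")["death"].mean())
```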
* age: in years
* albumin: serum albumin (g/dl)
* alk.phos: alkaline phosphatase (U/liter)
* ascites: presence of ascites
* ast: aspartate aminotransferase, once called SGOT (U/ml)
* bili: serum bilirubin (mg/dl)
* chol: serum cholesterol (mg/dl)
* copper: urine copper (ug/day)
* edema: 0 no edema, 0.5 untreated or successfully treated, 1 edema despite diuretic therapy
* hepato: presence of hepatomegaly or enlarged liver
* id: case number
* platelet: platelet count
* protime: standardised blood clotting time
* sex: m/f
* spiders: blood vessel malformations in the skin
* stage: histologic stage of disease (needs biopsy)
* status: status at endpoint, 0/1/2 for censored, transplant, dead
* time: number of days between registration and the earlier of death, transplantation, or study analysis in July 1986
* trt: 1/2/NA for D-penicillamine, placebo, not randomised
* trig: triglycerides (mg/dl) | Provide a detailed description of the following dataset: PBC |
Toyota Smarthome dataset | Toyota Smarthome Trimmed has been designed for the activity classification task of 31 activities. The videos were clipped per activity, resulting in a total of 16,115 short RGB+D video samples. activities were performed in a natural manner. As a result, the dataset poses a unique combination of challenges: high intra-class variation, high-class imbalance, and activities with similar motion and high duration variance. Activities were annotated with both coarse and fine-grained labels. These characteristics differentiate Toyota Smarthome Trimmed from other datasets for activity classification. | Provide a detailed description of the following dataset: Toyota Smarthome dataset |
VPCD | **VPCD** contains multi-modal annotations (face, body and voice) for all primary and secondary characters from a range of diverse TV-shows and movies. It is used for evaluating multi-modal person-clustering. It contains body-tracks for each annotated character, face-tracks when visible, and voice-tracks when speaking, with their associated features.
It consists of more than 30,000 face and body tracks of 300+ characters, from over 23 hours of video. | Provide a detailed description of the following dataset: VPCD |
MNIST Large Scale dataset | The **MNIST Large Scale dataset** is based on the classic [MNIST dataset](/dataset/mnist), but contains large scale variations up to a factor of 16. The motivation behind creating this dataset was to enable testing the ability of different algorithms to learn in the presence of large scale variability and specifically the ability to generalise to new scales not present in the training set over wide scale ranges.
The dataset contains training data for each one of the relative size factors 1, 2 and 4 relative to the original MNIST dataset and testing data for relative scaling factors between 1/2 and 8, with a ratio of $\sqrt[4]{2}$ between adjacent scales. | Provide a detailed description of the following dataset: MNIST Large Scale dataset |
NewsTSC | **NewsTSC** is a dataset for target-dependent sentiment classification (TSC), to investigate TSC in news articles, a much less researched domain, despite the importance of news as an essential information source in individual and societal decision making. | Provide a detailed description of the following dataset: NewsTSC |
DeepCAD | **DeepCAD** is a CAD dataset consisting of 179,133 models and their CAD construction sequences. It can be used to train generative models of 3D shapes. | Provide a detailed description of the following dataset: DeepCAD |
UIT-ViWikiQA | The UIT-ViWikiQA is a dataset for evaluating sentence extraction-based machine reading comprehension in the Vietnamese language. The UIT-ViWikiQA dataset is converted from the UIT-ViQuAD dataset, consisting of 23,074 question-answers based on 5,109 passages of 174 Vietnamese articles from Wikipedia. | Provide a detailed description of the following dataset: UIT-ViWikiQA |
ZuBuD | The goal of the ZuBuD Image Database is to share image data sets with researcheres around the world. To facilitate this, we have created this site, which contains over 1005 images about Zurich city building. The detail information about the database can be found on our Technical Report:TR-260. | Provide a detailed description of the following dataset: ZuBuD |
A Billion Ways to Grasp | Robot grasping is often formulated as a learning problem. With the increasing speed and quality of physics simulations, generating large-scale grasping data sets that feed learning algorithms is becoming more and more popular. An often overlooked question is how to generate the grasps that make up these data sets. In this paper, we review, classify, and compare different grasp sampling strategies. Our evaluation is based on a fine-grained discretization of SE(3) and uses physics-based simulation to evaluate the quality and robustness of the corresponding parallel-jaw grasps. Specifically, we consider more than 1 billion grasps for each of the 21 objects from the YCB data set. This dense data set lets us evaluate existing sampling schemes w.r.t. their bias and efficiency. Our experiments show that some popular sampling schemes contain significant bias and do not cover all possible ways an object can be grasped. | Provide a detailed description of the following dataset: A Billion Ways to Grasp |
The RBO Dataset of Articulated Objects and Interactions | The RBO dataset of articulated objects and interactions is a collection of 358 RGB-D video sequences (67:18 minutes) of humans manipulating 14 articulated objects under varying conditions (light, perspective, background, interaction). All sequences are annotated with ground truth of the poses of the rigid parts and the kinematic state of the articulated object (joint states) obtained with a motion capture system. We also provide complete kinematic models of these objects (kinematic structure and three-dimensional textured shape models). In 78 sequences the contact wrenches during the manipulation are also provided. | Provide a detailed description of the following dataset: The RBO Dataset of Articulated Objects and Interactions |
Clarkson Fingerprint Generator | Clarkson Fingerprint Generator consists of a dataset of 50K synthetically generated fingerprints. | Provide a detailed description of the following dataset: Clarkson Fingerprint Generator |
ReactionGIF | ReactionGIF is an affective dataset of 30K tweets which can be used for tasks like induced sentiment prediction and multilabel classification of induced emotions. | Provide a detailed description of the following dataset: ReactionGIF |
scb_name_length_data_Sweden_Stockholm_2019 | Appendix A in this paper contains a real-world name length data for the whole of Sweden as well as Stockholm Municipality (Swedish: Stockholms kommun) as of 31 December 2019. It excludes names that either belong to people with protected identities or are suspiciously incorrect due to errors in petition. But these excluded numbers are low and should not matter for statistical purposes.
The data are in the forms first name || last name (fl) and first name || maiden name || last name (fml). The name lengths are counted straight off with no spaces between different parts of the name.
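A toy sketch of the message-expansion analysis suggested below; all lengths and the block size are illustrative, not taken from Appendix A:

```
import math

lengths = [9, 12, 17, 23]   # illustrative fl name lengths
block = 16                  # illustrative padding block size
padded = [math.ceil(n / block) * block for n in lengths]
print(padded, round(sum(padded) / sum(lengths), 3))  # expansion factor
```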
It could be useful for security and privacy research, e.g., to evaluate message expansion of different padding schemes. | Provide a detailed description of the following dataset: scb_name_length_data_Sweden_Stockholm_2019 |
DIBCO and H_DIBCO | The contest of binarization using a popular document database was organized called as Document Image Binarization Contest (DIBCO) from 2009 to 2019, except for 2015. | Provide a detailed description of the following dataset: DIBCO and H_DIBCO |
EPISURG | EPISURG is a clinical dataset of $T_1$-weighted magnetic resonance images (MRI) from 430 epileptic patients who underwent resective brain surgery at the National Hospital of Neurology and Neurosurgery (Queen Square, London, United Kingdom) between 1990 and 2018.
The NIfTI files are anonymised and the images have been defaced to further protect the patients' identity.
The dataset comprises 430 postoperative MRIs. The corresponding preoperative MRI is available for 269 subjects.
Three human raters segmented the resection cavity on partially overlapping subsets of EPISURG:
- Rater 1: 133 subjects (researcher in neuroimaging)
- Rater 2: 34 subjects (clinical research fellow)
- Rater 3: 33 subjects (neurologist)
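A minimal sketch for inspecting one of the defaced NIfTI volumes with nibabel; the file name is hypothetical:

```
import nibabel as nib

img = nib.load("sub-0001_postop_t1.nii.gz")  # hypothetical file name
print(img.shape, img.header.get_zooms())     # voxel grid and spacing (mm)
data = img.get_fdata()                       # image intensities as a NumPy array
```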
## Acknowledgements
If you use this dataset for your research please cite the following publications:
Pérez-García F., Rodionov R., Alim-Marvasti A., Sparks R., Duncan J.S., Ourselin S. (2020) Simulation of Brain Resection for Cavity Segmentation Using Self-supervised and Semi-supervised Learning. In: Martel A.L. et al. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2020. Lecture Notes in Computer Science, vol 12263. Springer, Cham. [https://doi.org/10.1007/978-3-030-59716-0_12](https://doi.org/10.1007/978-3-030-59716-0_12)
Pérez-García F., Rodionov R., Alim-Marvasti A., Sparks R., Duncan J.S., Ourselin S. EPISURG: MRI dataset for quantitative analysis of resective neurosurgery for refractory epilepsy. University College London (2020). [DOI 10.5522/04/9996158.v1](https://rdr.ucl.ac.uk/articles/dataset/EPISURG_a_dataset_of_postoperative_magnetic_resonance_images_MRI_for_quantitative_analysis_of_resection_neurosurgery_for_refractory_epilepsy/9996158)
## Graphical user interface (GUI)
The [3D Slicer extension EPISURG](https://github.com/fepegar/SlicerEPISURG) may be used to visualise the dataset.
## Data use agreement
The EPISURG data are distributed to the greater scientific community under the following terms:
1. You will not attempt to establish the identity or to make contact with any of the included subjects.
2. You will acknowledge the use of EPISURG data and data derived from EPISURG data when publicly presenting any results or algorithms that benefitted from their use. Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from EPISURG data should cite the publications listed above.
3. You will not further disclose these data beyond the uses outlined in this agreement and understand that redistribution of data in any manner is prohibited.
4. You will require anyone on your team who uses these data, or anyone with whom you share these data to comply with this data use agreement. | Provide a detailed description of the following dataset: EPISURG |
SICAPv2 | **SICAPv2** is a database containing prostate histology whole slide images with both annotations of global Gleason scores and path-level Gleason grades.
Data associated with the paper:
Silva-Rodríguez, J., Colomer, A., Sales, M. A., Molina, R., & Naranjo, V. (2020). Going deeper through the Gleason scoring scale : An automatic end-to-end system for histology prostate grading and cribriform pattern detection. Computer Methods and Programs in Biomedicine, 195. https://doi.org/10.1016/j.cmpb.2020.105637 | Provide a detailed description of the following dataset: SICAPv2 |