dataset_name | description | prompt |
|---|---|---|
KDD Cup 1999 | This is the data set used for the Third International Knowledge Discovery and Data Mining Tools Competition, held in conjunction with KDD-99, the Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of distinguishing between "bad" connections, called intrusions or attacks, and "good" normal connections. This database contains a standard set of data to be audited, which includes a wide variety of intrusions simulated in a military network environment. | Provide a detailed description of the following dataset: KDD Cup 1999 |
Arcene | ARCENE was obtained by merging three mass-spectrometry datasets to obtain enough training and test data for a benchmark. The original features indicate the abundance of proteins in human sera having a given mass value. Based on those features one must separate cancer patients from healthy patients. We added a number of distractor features called 'probes' with no predictive power. The order of the features and patterns was randomized. | Provide a detailed description of the following dataset: Arcene |
DukeMTMC-VideoReID | The DukeMTMC-VideoReID (Duke Multi-Tracking Multi-Camera Video-based ReIDentification) dataset is a subset of DukeMTMC for video-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian video datasets in which images are cropped by hand-drawn bounding boxes. The dataset consists of 4,832 tracklets of 1,812 identities in total, and each tracklet has 168 frames on average.
**NOTE**: This dataset [has been retracted](https://exposing.ai/duke_mtmc/). | Provide a detailed description of the following dataset: DukeMTMC-VideoReID |
MTOP | A multilingual task-oriented semantic parsing dataset covering 6 languages and 11 domains. | Provide a detailed description of the following dataset: MTOP |
Emotional Dialogue Acts | Emotional Dialogue Acts data contains dialogue act labels for existing emotion multi-modal conversational datasets.
We chose two popular multimodal emotion datasets: Multimodal EmotionLines Dataset (MELD) and Interactive Emotional dyadic MOtion CAPture database (IEMOCAP).
EDAs reveal associations between dialogue acts and emotional states in natural conversational language: for example, Accept/Agree dialogue acts often occur with the Joy emotion, Apology with Sadness, and Thanking with Joy. | Provide a detailed description of the following dataset: Emotional Dialogue Acts |
Santesteban VTO | Physics-based simulated garments on top of SMPL bodies. The data is generated using a modified version of ARCSim and sequences from the CMU Motion Capture Database converted to SMPL format in SURREAL. Each simulated sequence is stored as a .pkl file. | Provide a detailed description of the following dataset: Santesteban VTO |
Lemons quality control dataset | The lemon dataset has been prepared to investigate approaches to fruit quality control. It contains 2,690 annotated images (1056 x 1056 pixels). Raw lemon images were captured using the procedure described in the accompanying blogpost and manually annotated using CVAT. | Provide a detailed description of the following dataset: Lemons quality control dataset |
Douban Conversation Corpus | We release the Douban Conversation Corpus, comprising a training set, a development set, and a test set for retrieval-based chatbots. The statistics of the Douban Conversation Corpus are shown in the following table.
| |Train|Val| Test |
| ------------- |:-------------:|:-------------:|:-------------:|
| Session-response pairs | 1M | 50K | 10K |
| Avg. positive responses per session | 1 | 1 | 1.18 |
| Fleiss' Kappa | N/A | N/A | 0.41 |
| Min turns per session | 3 | 3 | 3 |
| Max turns per session | 98 | 91 | 45 |
| Avg. turns per session | 6.69 | 6.75 | 5.95 |
| Avg. words per utterance | 18.56 | 18.50 | 20.74 |
The test data contains 1,000 dialogue contexts, and for each context we create 10 candidate responses. We recruited three labelers to judge whether a candidate is a proper response to the session. A proper response means the response can naturally reply to the message given the context. Each pair received three labels and the majority label was taken as the final decision.
As far as we know, this is the first human-labeled test set for retrieval-based chatbots. The entire corpus is available at https://www.dropbox.com/s/90t0qtji9ow20ca/DoubanConversaionCorpus.zip?dl=0
## Data template
label \t conversation utterances (split by \t) \t response | Provide a detailed description of the following dataset: Douban Conversation Corpus |
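A minimal sketch for parsing the Douban (and E-commerce) line format above; the filename `train.txt` is a hypothetical placeholder for a file from the extracted archive:
```python
# Minimal sketch for the template above: each line is
# "label \t utt_1 \t ... \t utt_n \t response".

def parse_line(line):
    fields = line.rstrip("\n").split("\t")
    label = int(fields[0])      # 1 = proper response, 0 = improper
    context = fields[1:-1]      # conversation utterances, one turn per field
    response = fields[-1]       # candidate response
    return label, context, response

# "train.txt" is a hypothetical placeholder filename.
with open("train.txt", encoding="utf-8") as f:
    examples = [parse_line(line) for line in f]
```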
E-commerce | We release the E-commerce Dialogue Corpus, comprising a training set, a development set, and a test set for retrieval-based chatbots. The statistics of the E-commerce Conversation Corpus are shown in the following table.
| |Train|Val| Test |
| ------------- |:-------------:|:-------------:|:-------------:|
| Session-response pairs | 1M | 10K | 10K |
| Avg. positive responses per session | 1 | 1 | 1 |
| Min turns per session | 3 | 3 | 3 |
| Max turns per session | 10 | 10 | 10 |
| Avg. turns per session | 5.51 | 5.48 | 5.64 |
| Avg. words per utterance | 7.02 | 6.99 | 7.11 |
The full corpus can be downloaded from https://drive.google.com/file/d/154J-neBo20ABtSmJDvm7DK0eTuieAuvw/view?usp=sharing. | Provide a detailed description of the following dataset: E-commerce |
RRS | | | Train | Validation | Test | Ranking Test |
| --------- | ----- | ---------- | ------- | ------------ |
| size | 0.4M | 50K | 5K | 800 |
| pos:neg | 1:1 | 1:9 | 1.2:8.8 | - |
| avg turns | 5.0 | 5.0 | 5.0 | 5.0 |
The ranking test set contains high-quality responses selected by several baselines; their relevance to the conversation context was carefully annotated by 8 professional annotators (the average annotation scores are kept for ranking). For the ranking test set, the metrics should be NDCG@3 and NDCG@5, since graded annotation scores are provided. More details are available in the Appendix of the paper. | Provide a detailed description of the following dataset: RRS |
RRS Ranking Test | | | Train | Validation | Test | Ranking Test |
| --------- | ----- | ---------- | ------- | ------------ |
| size | 0.4M | 50K | 5K | 800 |
| pos:neg | 1:1 | 1:9 | 1.2:8.8 | - |
| avg turns | 5.0 | 5.0 | 5.0 | 5.0 |
The ranking test set contains high-quality responses selected by several baselines; their relevance to the conversation context was carefully annotated by 8 professional annotators (the average annotation scores are kept for ranking). For the ranking test set, the metrics should be NDCG@3 and NDCG@5, since graded annotation scores are provided. More details are available in the Appendix of the paper. | Provide a detailed description of the following dataset: RRS Ranking Test |
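The RRS entries above call for NDCG@3 and NDCG@5 over the averaged annotation scores. A minimal sketch of the standard NDCG definition (the paper's appendix may fix details such as the gain function differently):
```python
import math

def dcg_at_k(scores, k):
    # scores: annotation scores of candidates, in the order the model ranked them
    return sum(s / math.log2(i + 2) for i, s in enumerate(scores[:k]))

def ndcg_at_k(scores, k):
    ideal = dcg_at_k(sorted(scores, reverse=True), k)
    return dcg_at_k(scores, k) / ideal if ideal > 0 else 0.0

# Example: a model ranking of 5 candidates with averaged annotator scores
print(ndcg_at_k([0.6, 1.0, 0.2, 0.8, 0.0], k=3))
```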
Duolingo STAPLE Shared Task | This is the dataset for the 2020 Duolingo shared task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Sentence prompts, along with automatic translations and high-coverage sets of translation paraphrases weighted by user response, are provided in 5 language pairs. Starter code for this task can be found here: github.com/duolingo/duolingo-sharedtask-2020/. More details on the data set and task are available at: sharedtask.duolingo.com | Provide a detailed description of the following dataset: Duolingo STAPLE Shared Task |
Duolingo Bandit Notifications | Replication datasets (200 million rows) used in experiments by Yancey & Settles (2020). (2019-06-11) | Provide a detailed description of the following dataset: Duolingo Bandit Notifications |
Duolingo SLAM Shared Task | This repository contains gzipped files containing more than 2 million tokens (words) from answers submitted by more than 6,000 students over the course of their first 30 days of using Duolingo. It also contains baseline starter code written in Python. There are three data sets, corresponding to three different language courses. More details on the data set and task are available at: http://sharedtask.duolingo.com. (2018-01-10) | Provide a detailed description of the following dataset: Duolingo SLAM Shared Task |
Duolingo Spaced Repetition Data | This is a gzipped CSV file containing the 13 million Duolingo student learning traces used in experiments by Settles & Meeder (2016). For more details and replication source code, visit: https://github.com/duolingo/halflife-regression (2016-06-07) | Provide a detailed description of the following dataset: Duolingo Spaced Repetition Data |
SubSumE | # SubSumE Dataset
This repository contains the SubSumE dataset for subjective document summarization. See [the paper](https://aclanthology.org/2021.newsum-1.14/) and the [talk](https://www.youtube.com/watch?v=0vyUQArRrvY) for details on dataset creation. Also check out our work [SuDocu](http://sudocu.cs.umass.edu/) on example-based document summarization.
## Dataset Files
Download the dataset from [here](https://drive.google.com/file/d/1tEDDHzZM_idnv-_PfRE5BmJU5E8yKLRH/view).
The dataset contains:
* Simplified text from 48 Wikipedia pages of the states in the US. Additionally, all the sentences in these documents
are put together in a single file `processed_state_sentences.csv` and are assigned a unique sentence id that
is used in summary json files.
* Intent-based summaries created by human annotators.
Each datapoint file in the directory `user_summary_jsons` contains a JSON with summaries of the Wikipedia pages of eight states, with the following keys (a loading sketch follows this entry):
* **intent**: Summarization intent provided to human annotators for generating the summary
* **summaries**: List of summary JSONs for the eight states assigned to the annotator. Each JSON in the list contains the following keys:
* **state_name**: Name of the state
* **sentence_ids**: Global ids of sentences (wrt `processed_state_sentences.csv`) present in the summary
* **sentences**: List of sentences present in the summary
* **use_keywords**: Keywords used by the annotator to search the document when creating summaries | Provide a detailed description of the following dataset: SubSumE |
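A minimal sketch for iterating over the SubSumE annotator files described above; the directory name follows the README, while the `*.json` glob pattern and file encoding are assumptions:
```python
import json
from pathlib import Path

# Iterate over annotator files in user_summary_jsons/ (glob pattern is an assumption).
for path in Path("user_summary_jsons").glob("*.json"):
    datapoint = json.loads(path.read_text(encoding="utf-8"))
    intent = datapoint["intent"]                # summarization intent shown to the annotator
    for summary in datapoint["summaries"]:
        state = summary["state_name"]
        sentence_ids = summary["sentence_ids"]  # global ids into processed_state_sentences.csv
        sentences = summary["sentences"]        # summary sentences as text
```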
AnswerSumm | AnswerSumm is a dataset of 4,631 CQA threads for answer summarization, curated by professional linguists. | Provide a detailed description of the following dataset: AnswerSumm |
MultiSV | **MultiSV** is a corpus designed for training and evaluating text-independent multi-channel speaker verification systems. It can be readily used also for experiments with dereverberation, denoising, and speech enhancement. | Provide a detailed description of the following dataset: MultiSV |
ANIM | It comprises synthetic mesh sequences from *Deformation Transfer for Triangle Meshes*. | Provide a detailed description of the following dataset: ANIM |
AMA | **Articulated Mesh Animation** (**AMA**) is a real-world dataset containing 10 mesh sequences depicting 3 different humans performing various actions. | Provide a detailed description of the following dataset: AMA |
CAPE | The CAPE dataset is a 3D dynamic dataset of clothed humans, featuring:
- 3D mesh registrations of accurate scans of clothed people in motion, captured at 60 FPS;
- Consistent SMPL mesh topology, all frames in correspondence;
- Precise, captured minimally clothed body shape under clothing;
- Clothed bodies of large pose variations;
- Both posed and unposed (i.e. in canonical pose) clothed body for each frame;
- SMPL body pose parameters for each frame;
- (New!) High-quality raw scan data of several subjects and sequences along with texture is available. Please first register as a user and send us your request. | Provide a detailed description of the following dataset: CAPE |
TSSB | The time series segmentation benchmark (TSSB) currently contains 75 annotated time series (TS) with 1-9 segments. Each TS is constructed from one of the UEA & UCR time series classification datasets. We group TS by label and concatenate them to create segments with distinctive temporal patterns and statistical properties. We annotate the offsets at which we concatenated the segments as change points (CPs). Additionally, we apply resampling to control the dataset resolution and add approximate, hand-selected window sizes that are able to capture temporal patterns. | Provide a detailed description of the following dataset: TSSB |
Samoa Measles Outbreak 2019 | Dataset contains cumulative reported cases, hospital admission and discharge, and mortality data as parsed from the publicly available press releases by the Ministry of Health and National Emergency Operations Centre (NEOC) of the Government of Samoa. The data spans the initial press release at the end of September 2019 through to the final press release at the end of January 2020. | Provide a detailed description of the following dataset: Samoa Measles Outbreak 2019 |
WPC | The **WPC** (Waterloo Point Cloud) database is a dataset for subjective and objective quality assessment of point clouds. | Provide a detailed description of the following dataset: WPC |
ArgKP-2021 | Data set covering a set of debatable topics, where for each topic and stance, a set of triplets of the form `<argument, KP, label>` is provided. The data set is based on the [ArgKP data set](http://dx.doi.org/10.18653/v1/2020.acl-main.371), which contains arguments contributed by the crowd on 28 debatable topics, split by their stance towards the topic, and KPs written by an expert for those topics. Crowd annotations were collected to determine whether a KP represents an argument, i.e., is a match for an argument. The arguments in ArgKP are a subset of the IBM-ArgQ-Rank-30kArgs data set.
For the test set, we extended ArgKP, adding three new debatable topics that were also not part of IBM-ArgQ-Rank-30kArgs. The test set was collected specifically for KPA-2021, and was carefully designed to be similar in various aspects to the training data. For each topic, crowd-sourced arguments were collected, expert KPs generated, and match/no-match annotations for argument/KP pairs obtained, resulting in a data set compatible with the ArgKP format. Argument collection strictly adhered to the guidelines, quality measures, and post-processing used for the collection of arguments in IBM-ArgQ-Rank-30kArgs, while the generation of expert KPs, collection of match annotations, and final data set creation strictly adhered to the manner in which ArgKP was created. | Provide a detailed description of the following dataset: ArgKP-2021 |
Arendt | # Digital Edition: Essays from Hannah Arendt
We have created a NER dataset from the digital edition "Sechs Essays" by Hannah Arendt. It consists of 23 documents from the period 1932-1976, which are available as TEI files online (see https://hannah-arendt-edition.net/3p.html?lang=de).

This NER dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Germany (CC BY-NC-SA 3.0 DE).](http://creativecommons.org/licenses/by-nc-sa/3.0/de/)
From the original TEI files we built an NER dataset with tags distributed as shown in the following table:
Tag | # All | # Train | # Test | # Devel
----|-------|---------|--------|---------
person | 1,702 | 1,303 | 182 | 217
place | 1,087 | 891 | 111 | 85
ethnicity | 1,093 | 867 | 115 | 111
organisation | 455 | 377 | 39 | 39
event | 57 | 49 | 6 | 2
language | 20 | 14 | 4 | 2
not tagged | 153,223 | 121,154 | 16,101 | 15,968
In the original TEI files the class person is additionally divided into "person", "biblicalFigure", "ficticiousPerson", "deity", and "mythologicalFigure", but some of these "person" subclasses had too few examples, so we combined them into a general class for persons. Furthermore, the class place was divided into "place" and "country"; since in the original TEI files some countries are also tagged as places, we combined both into one class for general places. Finally, there was a class "ship", but the whole edition contains only 4 examples of it, so we excluded this class from our NER dataset.
We provide the dataset in two formats, together with a partition into train, dev, and test sets. The first is a simple format similar to the well-known CoNLL-X format, and the second is a simple JSON format with the following structure:
It consists of a list of samples. Each sample is in turn a list of words or special characters, each represented as a two-element list where the first element is the word itself and the second element is the corresponding target tag. Here is an example:
[[['Peter','B-person'],['Müller','I-person'],['lebt','O'],['in','O'],['Frankfurt','B-place'],['am','I-place'],['Main','I-place'],['.','O']],[['Gebürtig','O'],['stammt','O'],['er','O'],['aus','O'],['Berlin','B-place']]] | Provide a detailed description of the following dataset: Arendt |
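A minimal sketch for loading the Arendt JSON format described above into token/tag sequences; the filename is a hypothetical placeholder:
```python
import json

# Load a json-format split: a list of samples, each a list of [word, tag] pairs.
# "arendt_train.json" is a hypothetical placeholder filename.
with open("arendt_train.json", encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples:
    tokens = [word for word, tag in sample]
    tags = [tag for word, tag in sample]
```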
Sturm | # Digital Edition: Sturm Edition | Provide a detailed description of the following dataset: Sturm |
patients' data | The dataset describes 150 patients with the following demographic characteristics: sex, age, HOMA-IR, systolic and diastolic blood pressure, and LDL-cholesterol. These patients were followed for 28 years, and the characteristics are the means of yearly measurements. Each year a liver biopsy was taken to record the stage of fibrosis, and the counts of transitions among stages are recorded in columns called lambda_ij, where i,j are the stages the patient moves between. In other words, for each patient there is one column for the count of transitions made from stage 0 to stage 1 over the 28-year follow-up, another for transitions from stage 1 to stage 2, and so on: 9 columns in total, one for each of the 9 possible transitions (0 to 1, 1 to 2, 2 to 3, 3 to 4, 1 to 0, 2 to 1, 3 to 2, 2 to 0, and 3 to 1). These transition counts are the dependent (response) variables, while the demographic characteristics are the predictors (independent variables). A Poisson regression model is used to relate these counts to the risk factors for NAFLD. | Provide a detailed description of the following dataset: patients' data |
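The patients' data description above specifies a Poisson regression of the transition counts on the risk factors. A minimal sketch, assuming a CSV export with hypothetical column names (`lambda_01`, `sex`, `age`, `homa_ir`, `sbp`, `dbp`, `ldl`):
```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file and column names for the 150-patient table.
df = pd.read_csv("patients.csv")

# Poisson regression of one transition count (stage 0 -> 1) on the risk factors.
model = smf.glm(
    "lambda_01 ~ sex + age + homa_ir + sbp + dbp + ldl",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```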
DataCLUE | DataCLUE is the first Data-Centric benchmark applied in NLP field. | Provide a detailed description of the following dataset: DataCLUE |
Biased-Cars | We introduce a challenging new dataset for simultaneous object category and viewpoint classification—the Biased-Cars dataset. Our dataset features photo-realistic outdoor scene data with fine control over scene clutter (trees, street furniture, and pedestrians), car colors, object occlusions, diverse backgrounds (building/road textures) and lighting conditions (sky maps). Biased-Cars consists of 15K images of five different car models seen from viewpoints varying between 0-90 degrees of azimuth, and 0-50 degrees of zenith across multiple scales. Our dataset offers complete control over the joint distribution of categories, viewpoints, and other scene parameters, and the use of physically based rendering ensures photo-realism. | Provide a detailed description of the following dataset: Biased-Cars |
VGGFace2 HQ | A high-resolution version of VGGFace2 for academic face editing purposes.
This project uses GFPGAN for image restoration and insightface for data preprocessing (crop and align). | Provide a detailed description of the following dataset: VGGFace2 HQ |
GINC | GINC (Generative In-Context learning Dataset) is a small-scale synthetic dataset for studying in-context learning. The pretraining data is generated by a mixture of HMMs and the in-context learning prompt examples are also generated from HMMs (either from the mixture or not). The prompt examples are out-of-distribution with respect to the pretraining data since every example is independent, concatenated, and separated by delimiters. The GitHub repository provides code to generate GINC-style datasets of varying vocabulary sizes, number of HMMs, and other parameters. | Provide a detailed description of the following dataset: GINC |
HGP | The Hands Guns and Phones (HGP) dataset contains 2,199 images (1,989 for training and 210 for testing) of people using guns or phones in real-world scenarios (people making phone reviews, shooting drills, or making calls). Every image of this dataset is labeled with the bounding boxes of hands, phones, and guns. All the aforementioned images were collected from YouTube videos and have different sizes. | Provide a detailed description of the following dataset: HGP |
THGP | The Temporal Hands Guns and Phones (THGP) dataset is a collection of 5,960 video frames (5,000 for training and 960 for testing). The training part is composed of 50 videos of 100 frames (720 × 720 pixels): 20 videos of shooting drills, 20 videos of armed robberies, and 10 videos of people making calls. The testing part contains 48 videos of 20 frames (720 × 720); these videos include phone calls, gun reviews, shooting drills, people making calls, and armed robberies at convenience stores. The dataset is labeled with the bounding boxes of hands, phones, and guns. | Provide a detailed description of the following dataset: THGP |
ARCT | Freely licensed dataset with warrants for 2k authentic arguments from news comments. On this basis, we present a new challenging task, the argument reasoning comprehension task. Given an argument with a claim and a premise, the goal is to choose the correct implicit warrant from two options. Both warrants are plausible and lexically close, but lead to contradicting claims. | Provide a detailed description of the following dataset: ARCT |
Pan-STARRS | Pan-STARRS is a system for wide-field astronomical imaging developed and operated by the Institute for Astronomy at the University of Hawaii. Pan-STARRS1 (PS1) is the first part of Pan-STARRS to be completed and is the basis for both Data Releases 1 and 2 (DR1 and DR2). The PS1 survey used a 1.8 meter telescope and its 1.4 Gigapixel camera to image the sky in five broadband filters (g, r, i, z, y). | Provide a detailed description of the following dataset: Pan-STARRS |
CAR | CAR contains visual attributes for objects in the Cityscapes dataset.
For each object in an image, we have a list of attributes that depend on the category of the object. For instance, a vehicle category has a visibility attribute while a pedestrian has an activity attribute (walking, standing, etc.).
The objective of this dataset is to ease the development of better algorithms for self-driving vehicles as that requires a complete understanding of the entire scene with all of its details including attributes of all objects.
We chose Cityscapes as it already contains different types of useful annotations, and adding attributes to it removes a huge burden from developing algorithms with self- or semi-supervision. | Provide a detailed description of the following dataset: CAR |
Robotic Interestingness | The Robotic Interestingness dataset was created to promote the development of visual interesting-scene prediction, so that robots can better sense the world. | Provide a detailed description of the following dataset: Robotic Interestingness |
Haze4k | **Haze4k** is a synthesized dataset with 4,000 hazy images, in which each hazy image has the associated ground truths of a latent clean image, a transmission map, and an atmospheric light map. | Provide a detailed description of the following dataset: Haze4k |
ChEBI-20 | The dataset contains 33,010 molecule-description pairs split into 80%/10%/10% train/val/test splits. The goal of the task is to retrieve the relevant molecule for a natural language description. It is defined as follows:
To push the boundaries of multimodal models, we present a new IR task: **Text2Mol**.
Given a text query and a list of molecules without any reference textual information (represented, for example, as SMILES strings, graphs, or other equivalent representations), retrieve the molecule corresponding to the query. From a text description of a molecule, the model must incorporate the information in the description into a semantic representation which can be used to directly retrieve the molecule. This requires the integration of two very different types of information: the structured knowledge represented by text and the chemical properties present in molecular graphs. We assume there is only one correct (relevant) molecule for each description, so we consider two measures for this task: Hits@1 and mean reciprocal rank (MRR).
80% of the data is used for training. Retrieval is done against the entire corpus of molecules (train, val, test). | Provide a detailed description of the following dataset: ChEBI-20 |
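Since the ChEBI-20 task above assumes exactly one relevant molecule per description, Hits@1 and MRR reduce to simple functions of the 1-based rank of the correct molecule. A minimal sketch:
```python
def hits_at_1(ranks):
    # ranks: 1-based rank of the correct molecule for each description
    return sum(r == 1 for r in ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

# Example: correct molecule ranked 1st, 3rd, and 2nd for three queries
print(hits_at_1([1, 3, 2]), mean_reciprocal_rank([1, 3, 2]))
```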
WikiContradiction | **WikiContradiction** is a novel dataset for detecting self-contradictions in Wikipedia articles. | Provide a detailed description of the following dataset: WikiContradiction |
OpenFWI | **OpenFWI** is a collection of large-scale open-source benchmark datasets for seismic full waveform inversion (FWI). OpenFWI is catered for the geoscience and machine learning community to facilitate diversified, rigorous and reproducible research on machine learning-based FWI. | Provide a detailed description of the following dataset: OpenFWI |
B-Pref | **B-Pref** is a benchmark specially designed for preference-based RL. A key challenge with such a benchmark is providing the ability to evaluate candidate algorithms quickly, which makes relying on real human input for evaluation prohibitive. At the same time, simulating human input as giving perfect preferences for the ground truth reward function is unrealistic. B-Pref alleviates this by simulating teachers with a wide array of irrationalities, and proposes metrics not solely for performance but also for robustness to these potential irrationalities. | Provide a detailed description of the following dataset: B-Pref |
Product Page | **Product Page** is a large-scale and realistic dataset of webpages. The dataset contains 51,701 manually labeled product pages from 8,175 real e-commerce websites. The pages can be rendered entirely in a web browser and are suitable for computer vision applications. This makes it substantially richer and more diverse than other datasets proposed for element representation learning, classification and prediction on the web. | Provide a detailed description of the following dataset: Product Page |
IconQA | Current visual question answering (VQA) tasks mainly consider answering human-annotated questions for natural images in the daily-life context. **Icon question answering** (**IconQA**) is a benchmark which aims to highlight the importance of abstract diagram understanding and comprehensive cognitive reasoning in real-world diagram word problems. For this benchmark, a large-scale IconQA dataset is built that consists of three sub-tasks: multi-image-choice, multi-text-choice, and filling-in-the-blank. Compared to existing [VQA benchmarks](https://paperswithcode.com/datasets?task=visual-question-answering), IconQA requires not only perception skills like object recognition and text understanding, but also diverse cognitive reasoning skills, such as geometric reasoning, commonsense reasoning, and arithmetic reasoning.
Description from: [IconQA](https://iconqa.github.io/) | Provide a detailed description of the following dataset: IconQA |
VoiceBank-SLR | Because there is no publicly available free dataset for speech dereverberation, we prepared one based on the clean speech from VoiceBank-DEMAND [26] (discarding the noisy speech), convolved with room impulse responses (RIRs) from OpenSLR. | Provide a detailed description of the following dataset: VoiceBank-SLR |
LIVE-VQC | The great variation in videographic skills, camera designs, compression and processing protocols, communication and bandwidth environments, and displays leads to an enormous variety of video impairments. Current no-reference (NR) video quality models are unable to handle this diversity of distortions. This is true in part because available video quality assessment databases contain very limited content, fixed resolutions, were captured using a small number of camera devices by a few videographers, and have been subjected to a modest number of distortions. As such, these databases fail to adequately represent real-world videos, which contain very different kinds of content obtained under highly diverse imaging conditions and are subject to authentic, complex, and often commingled distortions that are difficult or impossible to simulate. As a result, NR video quality predictors tested on real-world video data often perform poorly. Towards advancing NR video quality prediction, we have constructed a large-scale video quality assessment database containing 585 videos of unique content, captured using 101 different devices (43 device models) by 80 different users, with wide ranges of levels of complex, authentic distortions. We collected a large number of subjective video quality scores via crowdsourcing. A total of 4,776 unique participants took part in the study, yielding more than 205,000 opinion scores, resulting in an average of 240 recorded human opinions per video. This study is the largest video quality assessment study ever conducted along several key dimensions: number of unique contents, capture devices, distortion types and combinations of distortions, study participants, and recorded subjective scores. | Provide a detailed description of the following dataset: LIVE-VQC |
KoNViD-1k | Subjective video quality assessment (VQA) strongly depends on semantics, context, and the types of visual distortions. Many existing VQA databases cover small numbers of video sequences with artificial distortions. When testing newly developed Quality of Experience (QoE) models and metrics, they are commonly evaluated against subjective data from such databases, which are the result of perception experiments. However, since the aim of these QoE models is to accurately predict the quality of natural videos, these artificially distorted video databases are an insufficient basis for learning. Additionally, their small sizes make them only marginally usable for state-of-the-art learning systems, such as deep learning. In order to give a better basis for development and evaluation of objective VQA methods, we have created a larger dataset of natural, real-world video sequences with corresponding subjective mean opinion scores (MOS) gathered through crowdsourcing.
We took YFCC100m as a baseline database, consisting of 793,436 Creative Commons (CC) video sequences, and filtered them through multiple steps to ensure that the video sequences are representative of the whole spectrum of available video content, types of distortions, and subjective quality. The resulting 1,200 videos are available to download, alongside the subjective data and an evaluation of the best-performing techniques available for multiple video attributes; namely, we have evaluated blur, colorfulness, contrast, spatial information, temporal information, and video quality. | Provide a detailed description of the following dataset: KoNViD-1k |
YouTube-UGC | This YouTube dataset is a sample of thousands of User Generated Content (UGC) videos uploaded to YouTube and distributed under the Creative Commons license. It was created to assist in the advancement of video compression and quality assessment research on UGC videos. | Provide a detailed description of the following dataset: YouTube-UGC |
LIVE-FB LSVQ | No-reference (NR) perceptual video quality assessment (VQA) is a complex, unsolved, and important problem for social and streaming media applications. Efficient and accurate video quality predictors are needed to monitor and guide the processing of billions of shared, often imperfect, user-generated content (UGC) videos. Unfortunately, current NR models are limited in their prediction capabilities on real-world, "in-the-wild" UGC video data. To advance progress on this problem, we created the largest (by far) subjective video quality dataset, containing 39,000 real-world distorted videos and 117,000 space-time localized video patches ("v-patches"), and 5.5M human perceptual quality annotations. Using this, we created two unique NR-VQA models: (a) a local-to-global region-based NR VQA architecture (called PVQ) that learns to predict global video quality and achieves state-of-the-art performance on 3 UGC datasets, and (b) a first-of-a-kind space-time video quality mapping engine (called PVQ Mapper) that helps localize and visualize perceptual distortions in space and time. We will make the new database and prediction models available immediately following the review process. | Provide a detailed description of the following dataset: LIVE-FB LSVQ |
LIVE-ETRI | The deployed video parameter space is continuously increasing to provide more realistic and immersive experiences to global streaming and social media viewers. However, increments in video parameters such as spatial resolution or frame rate are inevitably associated with larger data volumes. Transmitting increasingly voluminous videos through limited-bandwidth networks in a perceptually optimal way is a present challenge affecting billions of viewers. One recent practice adopted by video service providers is space-time resolution adaptation in conjunction with video compression. Consequently, it is important to understand how different levels of space-time subsampling and compression affect the perceptual quality of videos.
Towards making progress in this direction, we constructed a large new resource, called the ETRI-LIVE Space-Time Subsampled Video Quality (ETRI-LIVE-STSVQ) database, containing 437 videos generated by applying various levels of combined space-time subsampling and video compression on 15 diverse video contents. We also conducted a large-scale human study on the new dataset, collecting about 15,000 subjective judgments of video quality. The ETRI-LIVE STSVQ database is being made publicly and freely available with the desire to improve future research and development on topics such as video quality modeling and perceptual video coding. | Provide a detailed description of the following dataset: LIVE-ETRI |
P3M-10k | P3M-10k contains 10,421 high-resolution real-world face-blurred portrait images, along with their manually labeled alpha mattes. The dataset is aimed at aiding research efforts in the area of portrait image matting and related topics. | Provide a detailed description of the following dataset: P3M-10k |
SLUE | **Spoken Language Understanding Evaluation** (**SLUE**) is a suite of benchmark tasks for spoken language understanding evaluation. It consists of limited-size labeled training sets and corresponding evaluation sets. This resource would allow the research community to track progress, evaluate pre-trained representations for higher-level tasks, and study open questions such as the utility of pipeline versus end-to-end approaches. The first phase of the SLUE benchmark suite consists of named entity recognition (NER), sentiment analysis (SA), and ASR on the corresponding datasets.
Corpus includes:
- SLUE-VoxPopuli: consists of ASR and NER tasks - [CC0 license](https://creativecommons.org/share-your-work/public-domain/cc0/)
- SLUE-VoxCeleb: consists of ASR and SA tasks - [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/) | Provide a detailed description of the following dataset: SLUE |
SOSD | SOSD is a collection of datasets to benchmark the lookup performance of learned indexes.
SOSD currently includes eight different datasets. Each dataset consists of 200 million 64-bit unsigned integers (keys) with very few duplicates (if at all):
`amzn` represents book sale popularity data.
`face` is an upsampled version of a Facebook user ID dataset.
`logn` and `norm` are lognormal (0, 2) and normal distributions, respectively (see the sampling sketch after this entry).
`osmc` is uniformly sampled OpenStreetMap locations represented as Google S2 CellIds.
`uden` is dense integers.
`uspr` is uniformly distributed sparse integers.
`wiki` is Wikipedia article edit timestamps.
In addition, there are 32-bit versions of all datasets (except `osmc` and `wiki`) with similar CDFs. We use different parameters, (0, 1), for logn in the 32-bit case to reduce the number of duplicates. | Provide a detailed description of the following dataset: SOSD |
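A minimal sketch of how keys following the `logn` distribution above could be generated for experimentation; scaling into the unsigned 64-bit range is an illustrative assumption, not necessarily how the released files were produced:
```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000  # the released datasets use 200 million keys each

# Draw lognormal(0, 2) samples (the "logn" dataset's distribution), scale
# into the 64-bit unsigned range, and deduplicate/sort as a learned-index
# benchmark expects. np.unique returns the keys sorted.
samples = rng.lognormal(mean=0.0, sigma=2.0, size=n)
keys = np.unique((samples / samples.max() * 2**63).astype(np.uint64))
```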
ClevrTex | **ClevrTex** is a new benchmark designed as the next challenge to compare, evaluate and analyze algorithms for unsupervised multi-object segmentation. ClevrTex features synthetic scenes with diverse shapes, textures and photo-mapped materials, created using physically based rendering techniques.
Image source: [Karazija et al.](https://arxiv.org/pdf/2111.10265.pdf) | Provide a detailed description of the following dataset: ClevrTex |
LegalNERo | LegalNERo is a manually annotated corpus for named entity recognition in the Romanian legal domain.
It provides gold annotations for organizations, locations, persons, time and legal resources mentioned in legal documents.
Additionally, it offers GeoNames codes for the named entities annotated as location (where a link could be established).
The LegalNERo corpus is available in different formats: span-based, token-based and RDF.
The Linguistic Linked Open Data (LLOD) version is provided in RDF-Turtle format. | Provide a detailed description of the following dataset: LegalNERo |
Evidence Inference 2.0 | The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple treatments. Each of these articles will have multiple questions, or 'prompts' associated with them. These prompts will ask about the relationship between an intervention and comparator with respect to an outcome, as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared to placebo on the duration of headaches. | Provide a detailed description of the following dataset: Evidence Inference 2.0 |
RTASC | The ROBIN Technical Acquisition Speech Corpus (ROBINTASC) was developed within the ROBIN project. Its main purpose was to improve the behaviour of a conversational agent, allowing human-machine interaction in the context of purchasing technical equipment. It contains over 6 hours of read speech in Romanian. We provide text files, associated speech files (WAV, 44.1 kHz, 16-bit, single channel), and annotated text files in CoNLL-U format. | Provide a detailed description of the following dataset: RTASC |
The ComMA Dataset v0.2 | The ComMA Dataset v0.2 is a multilingual dataset annotated with a hierarchical, fine-grained tagset marking different types of aggression and the "context" in which they occur. The context, here, is defined by the conversational thread in which a specific comment occurs and also the "type" of discursive role that the comment is performing with respect to the previous comment. The initial dataset, being discussed here (and made available as part of the ComMA@ICON shared task), consists of a total of 15,000 annotated comments in four languages - Meitei, Bangla, Hindi, and Indian English - collected from various social media platforms such as YouTube, Facebook, Twitter and Telegram. As is usual on social media websites, a large number of these comments are multilingual, mostly code-mixed with English. | Provide a detailed description of the following dataset: The ComMA Dataset v0.2 |
Medical Bottles | Original dataset for "HIGH PRECISION MEDICINE BOTTLES VISION ONLINE INSPECTION SYSTEM AND CLASSIFICATION BASED ON MULTI-FEATURES AND ENSEMBLE LEARNING VIA INDEPENDENCE TEST" | Provide a detailed description of the following dataset: Medical Bottles |
RedCaps | **RedCaps** is a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. The data is collected from a manually curated set of subreddits (350 total), which give coarse image labels and allow steering of the dataset composition without labeling individual instances.
**Terms of use**: Uses of RedCaps are subject to Reddit API terms. Users must comply with the Reddit User Agreement, Content Policy, and Privacy Policy.
**Usage Restrictions**: RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or making decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial or for-profit uses of RedCaps are restricted: it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
Refer to the [datasheet in the paper](https://paperswithcode.com/paper/redcaps-web-curated-image-text-data-created) for more details.
Image source: [https://redcaps.xyz/download](https://redcaps.xyz/download) | Provide a detailed description of the following dataset: RedCaps |
Translated TACRED | 533 parallel examples sampled from TACRED, translated into Russian and Korean (with 3 additional examples in Russian), accompanied by a translation of a list of trigger words collected for the different relations. | Provide a detailed description of the following dataset: Translated TACRED |
CytoImageNet | CytoImageNet is a large-scale pretraining dataset of microscopy images (890K images, 894 classes). In the paper, CytoImageNet pretraining yielded features competitive with **and different** from ImageNet-pretrained features on downstream microscopy tasks.
* It was constructed from 40 openly available microscopy datasets.
* Weak labels (from experimental metadata) were assigned to each image in the dataset.
* Images are of varying sizes.
The primary purpose of the dataset is to be used for pretraining as a pretext task for learning useful bioimage representations. However, it may be used for validation or exploratory analysis. | Provide a detailed description of the following dataset: CytoImageNet |
MP-3DHP: Multi-Person 3D Human Pose Dataset | The Multi-Person 3D Human Pose Dataset (MP-3DHP) is a depth sensor-based dataset constructed to facilitate the development of multi-person 3D pose estimation methods targeting real-world challenges. The dataset includes 177k training samples and 33k validation samples where both the 3D human poses and body segments are available. It also includes 9k clean background samples and 4k testing samples with multi-person 3D poses. | Provide a detailed description of the following dataset: MP-3DHP: Multi-Person 3D Human Pose Dataset |
3D Lane Synthetic Dataset | This is a synthetic dataset constructed to stimulate the development and evaluation of 3D lane detection methods. | Provide a detailed description of the following dataset: 3D Lane Synthetic Dataset |
Yelp2018 | The Yelp2018 dataset is adopted from the 2018 edition of the Yelp challenge, wherein local businesses such as restaurants and bars are viewed as items. We use the same 10-core setting to ensure data quality. | Provide a detailed description of the following dataset: Yelp2018 |
CEAHB2021-5 | Ancient books script identification of Chinese ethnic minorities with deep convolutional neural networks via multi-branch and spatial pyramid pooling
Automatic classification of ancient books is an important component of a digital platform for ancient books. For the script identification task on ancient books of different ethnic minorities in China, we build a dataset of Chinese ethnic ancient handwritten books (Tai Le, Tibetan, Naxi, Yi, Shui), and crop and standardize the ancient book images during preprocessing. | Provide a detailed description of the following dataset: CEAHB2021-5 |
TLHDIBD2021 | Hybrid-CBF: A hybrid classification and binarization framework for historical Tai Le document image binarization
The binarization of historical documents is very important and more challenging than the binarization of ordinary documents. As a result of the serious noise pollution found on the historical Tai Le documents, a new hybrid classification and binarization framework (Hybrid-CBF) is proposed for the binarization of historical Tai Le document images. The Tai Le historical document image binarization dataset (TLHDIBD2021) containing 2,780 image pairs is constructed. Due to the different degrees of document background pollution, the single method has a poor effect on the binarization of historical Tai Le documents. First, Hybrid-CBF clusters the historical Tai Le document images according to the noise level estimation to obtain document images with different noise levels. Second, the corresponding optimal binarization method is used for historical Tai Le documents with different noise levels. In Hybrid-CBF, two binarization methods of historical Tai Le documents based on a deep neural network are proposed. | Provide a detailed description of the following dataset: TLHDIBD2021 |
4DMatch | A benchmark for matching and registration of partial point clouds with time-varying geometry. It is constructed from 1,761 randomly selected sequences from [DeformingThings4D](/dataset/deformingthings4d). | Provide a detailed description of the following dataset: 4DMatch |
WDC LSPM | Many e-shops have started to mark-up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from different e-shops in the form of binary product pairs (with corresponding label "match" or "no match") for four product categories, computers, cameras, watches and shoes.
In order to support the evaluation of machine learning-based matching methods, the data is split into training, validation, and test sets. For each product category, we provide training sets in four different sizes (2,000-70,000 pairs). Furthermore, there are sets of IDs for each training set for a possible validation split (stratified random draw). The test set for each product category consists of 1,100 product pairs. The labels of the test sets were manually checked, while those of the training sets were derived using shared product identifiers from the Web via weak supervision.
The data stems from the WDC Product Data Corpus for Large-Scale Product Matching - Version 2.0 which consists of 26 million product offers originating from 79 thousand websites. | Provide a detailed description of the following dataset: WDC LSPM |
Evaluating registrations of serial sections with distortions of the ground truths. Supplemental data | This is the supplemental data for our paper on how to benchmark registrations of serial sections with ground truths. There are three main modalities and one additional modality as a reference. | Provide a detailed description of the following dataset: Evaluating registrations of serial sections with distortions of the ground truths. Supplemental data |
UTFPR-SBD3 | The semantic segmentation of clothes is a challenging task due to the wide variety of clothing styles, layers and shapes.
The UTFPR-SBD3 contains 4,500 images manually annotated at pixel level in 18 classes plus background.
To ensure the high quality of the dataset, all images were manually annotated at the pixel level using JS Segment Annotator, a free web-based image annotation tool. The raw images were carefully selected to avoid, as far as possible, classes with a low number of instances. | Provide a detailed description of the following dataset: UTFPR-SBD3 |
FGraDA | Previous research on adapting a general neural machine translation (NMT) model to a specific domain usually neglects the diversity in translation within the same domain, which is a core problem for domain adaptation in real-world scenarios. One representative of such challenging scenarios is deploying a translation system for a conference with a specific topic, e.g., global warming or coronavirus, where there are usually extremely limited resources due to the limited schedule. To motivate wider investigation in such a scenario, we present a real-world fine-grained domain adaptation task in machine translation (FGraDA). The FGraDA dataset consists of Chinese-English translation tasks for four sub-domains of information technology: autonomous vehicles, AI education, real-time networks, and smartphones. Each sub-domain is equipped with a development set and a test set for evaluation purposes. To be closer to reality, FGraDA does not employ any in-domain bilingual training data but provides bilingual dictionaries and a wiki knowledge base, which can be obtained more easily within a short time. We benchmark the fine-grained domain adaptation task and present in-depth analyses showing that there are still challenging problems to further improve the performance with heterogeneous resources. | Provide a detailed description of the following dataset: FGraDA |
IMDB-WIKI-SbS | IMDB-WIKI-SbS is a new large-scale dataset for evaluating pairwise comparisons, building on the success of IMDB-WIKI, a well-known benchmark for computer vision systems. This dataset uses the age information offered by IMDB-WIKI as ground truth while providing a balanced distribution of ages and genders of the people in the photos. | Provide a detailed description of the following dataset: IMDB-WIKI-SbS |
LIRIS human activities dataset | The LIRIS human activities dataset contains (gray/RGB/depth) videos showing people performing various activities taken from daily life (discussing, telephone calls, giving an item, etc.). The dataset is fully annotated, where the annotation not only contains information on the action class but also its spatial and temporal position in the video. It was originally shot for the ICPR-HARL 2012 competition.
The dataset has been shot with two different cameras:
Subset D1 has been shot with an MS Kinect module mounted on a remotely controlled Wany Robotics Pekee II mobile robot, which is part of the LIRIS-VOIR platform.
Subset D2 has been shot with a Sony consumer camcorder. | Provide a detailed description of the following dataset: LIRIS human activities dataset |
CoVaxLies v1 | CoVaxLies v1 includes 17 known Misinformation Targets (MisTs) found on Twitter about the COVID-19 vaccines. Language experts annotated tweets as Relevant or Not Relevant, and then further annotated Relevant tweets with their Stance towards each MisT. This collection is a first step in providing large-scale resources for misinformation detection and misinformation stance identification. | Provide a detailed description of the following dataset: CoVaxLies v1 |
Freibrug Cars | An object-centric dataset consisting of 52 RGB sequences of cars. | Provide a detailed description of the following dataset: Freibrug Cars |
LSUI | We released a large-scale underwater image (LSUI) dataset including 5,004 image pairs, which involves richer underwater scenes (lighting conditions, water types, and target categories) and reference images of better visual quality than the existing ones. | Provide a detailed description of the following dataset: LSUI |
notebookcdg | Inspired by Wang et al. (2021), we decided to utilize top-voted and well-documented Kaggle notebooks to construct the notebookCDG dataset.
We collected the top 10% highly-voted notebooks from the top 20 popular competitions on Kaggle (e.g., Titanic). We checked the data policy of each of the 20 competitions; none of them has copyright issues. We also contacted the Kaggle administrators to make sure our data collection complies with the platform's policy.
In total, we collected 3,944 notebooks as raw data. After data preprocessing, the final dataset contains 2,476 of the 3,944 raw notebooks. It has 28,625 code-documentation pairs. The overall code-to-markdown ratio is 2.2195.
[Download *notebookCDG* dataset](https://www.dropbox.com/s/vpsst1el7f0jqo6/data_notebookcdg.pkl?dl=0) | Provide a detailed description of the following dataset: notebookcdg |
Abt-Buy | The Abt-Buy dataset for entity resolution derives from the online retailers Abt.com and Buy.com. The dataset contains 1081 entities from abt.com and 1092 entities from buy.com as well as a gold standard (perfect mapping) with 1097 matching record pairs between the two data sources. The common attributes between the two data sources are: product name, product description and product price.
The dataset was initially published in the repository of the Database Group of the University of Leipzig:
[https://dbs.uni-leipzig.de/research/projects/object_matching/benchmark_datasets_for_entity_resolution](https://dbs.uni-leipzig.de/research/projects/object_matching/benchmark_datasets_for_entity_resolution)
To enable the reproducibility of the results and the comparability of the performance of different matchers on the Abt-Buy matching task, the dataset was split into fixed train, validation and test sets.
The fixed splits are provided in the CompERBench repository:
[http://data.dws.informatik.uni-mannheim.de/benchmarkmatchingtasks/index.html](http://data.dws.informatik.uni-mannheim.de/benchmarkmatchingtasks/index.html) | Provide a detailed description of the following dataset: Abt-Buy |
Amazon-Google | The Amazon-Google dataset for entity resolution derives from the online retailers Amazon.com and the product search service of Google accessible through the Google Base Data API. The dataset contains 1363 entities from amazon.com and 3226 google products as well as a gold standard (perfect mapping) with 1300 matching record pairs between the two data sources. The common attributes between the two data sources are: product name, product description, manufacturer and price.
The dataset was initially published in the repository of the Database Group of the University of Leipzig: [https://dbs.uni-leipzig.de/research/projects/object_matching/benchmark_datasets_for_entity_resolution](https://dbs.uni-leipzig.de/research/projects/object_matching/benchmark_datasets_for_entity_resolution)
To enable the reproducibility of the results and the comparability of the performance of different matchers on the Amazon-Google matching task, the dataset was split into fixed train, validation and test sets. The fixed splits are provided in the CompERBench repository:
[http://data.dws.informatik.uni-mannheim.de/benchmarkmatchingtasks/index.html](http://data.dws.informatik.uni-mannheim.de/benchmarkmatchingtasks/index.html) | Provide a detailed description of the following dataset: Amazon-Google |
MusicBrainz20K | The MusicBrainz20K dataset for entity resolution and entity clustering is based on real records about songs from the MusicBrainz database. Each record is described with the following attributes: artist, title, album, year and length. The records have been modified with the DAPO [1] data generator. The generated dataset consists of five sources and approximately 20K records describing 10K unique song entities. It contains duplicates for 50% of the original records in two to five sources which are generated with a high degree of corruption to stress-test the entity resolution and clustering approaches.
[1] Hildebrandt, Kai, et al. "Large-scale data pollution with Apache Spark." IEEE Transactions on Big Data 6.2 (2017): 396-411. | Provide a detailed description of the following dataset: MusicBrainz20K |
Vehicle-1M | Vehicle-1M involves vehicle images captured across day and night, from head or rear, by multiple surveillance cameras installed in cities. In total there are 936,051 images from 55,527 vehicles and 400 vehicle models in the dataset. Each image is attached with a vehicle ID label denoting its real-world identity as well as a vehicle model label indicating the make, model, and year of the vehicle (e.g. "Audi-A6-2013"). All publications using the Vehicle-1M dataset should cite the paper below:
Haiyun Guo, Chaoyang Zhao, Zhiwei Liu, Jinqiao Wang, Hanqing Lu: Learning coarse-to-fine structured feature embedding for vehicle re-identification. AAAI 2018. | Provide a detailed description of the following dataset: Vehicle-1M |
WikiNEuRal | WikiNEuRal is a high-quality automatically-generated dataset for Multilingual Named Entity Recognition. | Provide a detailed description of the following dataset: WikiNEuRal |
Corrosion Image Data Set for Automating Scientific Assessment of Materials | The study of material corrosion is an important research area: corrosion degradation of metallic structures causes expenses of up to 4% of global gross domestic product annually, along with major safety risks worldwide. Unfortunately, large-scale and timely scientific discovery for materials has been hindered by the lack of standardized corrosion experimental data in the public domain for developing machine learning models. Obtaining such data is challenging due to the expert knowledge and time required to conduct these scientific experiments and assess corrosion levels. We curate a novel dataset consisting of 600 images annotated with expert corrosion ratings obtained over 10 years of laboratory corrosion testing by material scientists. Based on this dataset, we find that non-experts, even when rigorously trained with domain guidelines for rating corrosion, fail to match expert ratings. Challenges include limited data, image artifacts, and corrosion features at millimeter precision. This motivates us to explore the viability of deep learning approaches to tackle this benchmark classification task. We study (i) convolutional neural networks powered by rich domain-specific image augmentation techniques tuned to our data, and (ii) a recent self-supervised representation learning approach, either pretrained on ImageNet or trained on our data. We demonstrate that pretrained ResNet-18 and HR-Net models with tuned augmentations can reach up to 0.83 accuracy (a minimal sketch of this setup follows this entry). With this corrosion dataset, we open the door to the design of more advanced deep learning models supporting this real-world task, while driving innovative new research to bridge computer vision and material innovation.
[Disclaimer]
By downloading this code and/or using this data, you agree to abide by all of the rules and regulations.
1. Researcher shall use the dataset only for non-commercial research and educational purposes.
2. The authors with Worcester Polytechnic Institute and US Army Research Lab make no representations or warranties regarding the dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the dataset and shall defend and indemnify the authors, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the dataset, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the dataset.
4. Researcher may provide research associates and colleagues with access to the dataset provided that they first agree to be bound by these terms and conditions.
5. The authors reserve the right to terminate Researcher's access to or usage of the dataset at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
7. The law of the State of Massachusetts shall apply to all disputes under this agreement.
[Citation]
Anyone who uses this dataset must cite and acknowledge our BMVC dataset paper:
@InProceedings{yin2021BMVC,
  author    = {Yin, Biao and Josselyn, Nicholas and Considine, Thomas and Kelley, John and Rinderspacher, Berend and Jensen, Robert and Snyder, James and Zhang, Ziming and Rundensteiner, Elke},
  title     = {Corrosion Image Data Set for Automating Scientific Assessment of Materials},
  booktitle = {British Machine Vision Conference (BMVC)},
  year      = {2021}
} | Provide a detailed description of the following dataset: Corrosion Image Data Set for Automating Scientific Assessment of Materials
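As referenced in the corrosion entry above, here is a minimal sketch of the kind of transfer-learning setup described there: an ImageNet-pretrained ResNet-18 fine-tuned with domain-style augmentations, assuming PyTorch/torchvision. The number of rating classes (`NUM_RATINGS`) and the specific augmentations are illustrative assumptions, not the paper's exact configuration:

```python
# Hedged sketch: ImageNet-pretrained ResNet-18 with a new rating head and
# tuned augmentations. NUM_RATINGS and the augmentation choices are
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_RATINGS = 5  # assumption: number of expert corrosion rating classes

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Load ImageNet weights, then replace the classifier with a rating head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_RATINGS)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```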
ClimART | Numerical simulations of Earth's weather and climate require substantial amounts of computation. This has led to a growing interest in replacing subroutines that explicitly compute physical processes with approximate machine learning (ML) methods that are fast at inference time. Within weather and climate models, atmospheric radiative transfer (RT) calculations are especially expensive. This has made them a popular target for neural network-based emulators. However, prior work is hard to compare due to the lack of a comprehensive dataset and standardized best practices for ML benchmarking. To fill this gap, we build a large dataset, ClimART, with more than *10 million samples from present, pre-industrial, and future climate conditions*, based on the Canadian Earth System Model. ClimART poses several methodological challenges for the ML community, such as multiple out-of-distribution test sets, underlying domain physics, and a trade-off between accuracy and inference speed. | Provide a detailed description of the following dataset: ClimART
IATOS Dataset | Audio recordings of people's coughs captured with mobile phones, segmented into COVID-positive and COVID-negative according to RT-PCR test results. | Provide a detailed description of the following dataset: IATOS Dataset
GPR1200 | Most publications that aim to optimize neural networks for CBIR train and test their models on domain-specific datasets. It is therefore unclear whether those networks can be used as general-purpose image feature extractors. After analyzing popular image retrieval test sets, we decided to manually curate GPR1200, an easy-to-use and accessible yet challenging benchmark dataset with 1200 categories and 10 class examples each. Classes and images were manually selected from six publicly available datasets covering different image areas, ensuring high class diversity and clean class boundaries.
This dataset can therefore be used for benchmarking image descriptor systems on their generalizability. | Provide a detailed description of the following dataset: GPR1200 |
Orchard | Orchard is a diagnostic dataset for systematically evaluating hierarchical reasoning in state-of-the-art neural sequence models. | Provide a detailed description of the following dataset: Orchard
GVFC | This is a new dataset of news headlines and their frames related to the issue of gun violence in the United States. This Gun Violence Frame Corpus (GVFC) was curated and annotated by journalism and communication experts. The articles in this dataset are drawn from a sample of news articles from 30 top U.S. news websites, selected in terms of traffic to the websites, and collected across four time periods over the course of 2018 in order to capture a diversity of articles.
This dataset includes the headlines of news articles and their annotations, the accompanying images, and text- and image-derived features. We also include the codebook protocol, which contains all of the annotation variables and their definitions as applied by the annotators. | Provide a detailed description of the following dataset: GVFC
A dataset of neonatal EEG recordings with seizures annotations | Neonatal seizures are a common emergency in the neonatal intensive care unit (NICU). There are many questions yet to be answered regarding the temporal/spatial characteristics of seizures from different pathologies, the response to medication, the effects on neurodevelopment, and optimal detection. This dataset contains EEG recordings from human neonates together with the visual interpretation of the EEG by human experts. Multi-channel EEG was recorded from 79 term neonates admitted to the NICU at the Helsinki University Hospital. The median recording duration was 74 minutes (IQR: 64 to 96 minutes). EEGs were annotated by three experts for the presence of seizures. An average of 460 seizures were annotated per expert; 39 neonates had seizures by consensus and 22 were seizure-free by consensus. The dataset can be used as a reference set of neonatal seizures, for the development of automated methods for seizure detection and other EEG analyses, as well as for the analysis of inter-observer agreement. | Provide a detailed description of the following dataset: A dataset of neonatal EEG recordings with seizures annotations
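Since the entry above explicitly names inter-observer agreement as a use case, here is a hedged sketch of one common agreement measure: pairwise Cohen's kappa between two experts on per-second binary seizure annotations. The annotation vectors below are synthetic stand-ins; the dataset's actual file format is not assumed:

```python
# Hedged sketch: pairwise inter-rater agreement on per-second binary seizure
# annotations for one recording. The vectors are synthetic stand-ins, not the
# dataset's real annotation files.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
expert_a = rng.integers(0, 2, size=3600)       # 1 = seizure, one value/second
expert_b = np.where(rng.random(3600) < 0.9,    # mostly agrees with expert A
                    expert_a, 1 - expert_a)

kappa = cohen_kappa_score(expert_a, expert_b)
print(f"Cohen's kappa between expert A and expert B: {kappa:.3f}")
```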
MMPTRACK | The Multi-camera Multiple People Tracking (MMPTRACK) dataset contains about 9.6 hours of video with over half a million frame-wise annotations. The dataset is densely annotated: per-frame bounding boxes and person identities are available, as well as camera calibration parameters. Our dataset is recorded at 15 frames per second (FPS) in five diverse and challenging environment settings: retail, lobby, industry, cafe, and office. This is by far the largest publicly available multi-camera multiple people tracking dataset.
We expect the availability of such large-scale multi-camera multiple people tracking dataset will encourage more participants in this research topic. This dataset is also valuable for the evaluation of other tasks, such as multi-view people detection and monocular multiple people tracking. | Provide a detailed description of the following dataset: MMPTRACK |
MIS-Check Dam | The Minor Irrigation Structures Check-Dam dataset is a public dataset for instance segmentation and object detection tasks, annotated by domain experts using images from Google Static Maps.
Google drive link for the dataset:
https://drive.google.com/drive/u/2/folders/16-XNaD6Cfbec7cpJB9_raYz8tl0CEQzZ | Provide a detailed description of the following dataset: MIS-Check Dam |
fluocells | By releasing this dataset, we aim to provide a new testbed for computer vision techniques using deep learning. The main peculiarity is the shift from the domain of "natural images", typical of common benchmark datasets, to biological imaging. We anticipate that the advantages of doing so could be two-fold: i) fostering research in biomedical-related fields, for which popular pre-trained models typically perform poorly, and ii) promoting methodological research in deep learning by addressing the peculiar requirements of these images. Possible applications include, but are not limited to, semantic segmentation, object detection, and object counting. The data consist of 283 high-resolution pictures (1600x1200 pixels) of mouse brain slices acquired through a fluorescence microscope. The final goal is to locate and count the neurons highlighted in the pictures by means of a marker, so as to assess the result of a biological experiment. The corresponding ground-truth labels were generated through a hybrid approach involving semi-automatic and manual semantic segmentation. The result consists of black (0) and white (255) images with pixel-level annotations of where the stained neurons are located. For more information, please refer to Morelli, R. et al., 2021. Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet. Scientific Reports. https://doi.org/10.1038/s41598-021-01929-5. The collection of original images was supported by funding from the University of Bologna (RFO 2018) and the European Space Agency (research collaboration agreement 4000123556). | Provide a detailed description of the following dataset: fluocells
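Given that the fluocells ground truth described above is stored as binary 0/255 masks, a minimal sketch of the object-counting use case follows, assuming single-channel PNG masks in which each connected white blob corresponds to one stained neuron; the file path is illustrative:

```python
# Minimal sketch: counting neurons in a 0/255 ground-truth mask via connected
# components. The file path is illustrative; the 0/255 mask convention comes
# from the dataset description above.
import numpy as np
from PIL import Image
from scipy import ndimage

mask = np.array(Image.open("masks/image_001.png").convert("L"))
binary = mask > 127                       # 0/255 ground truth -> boolean

labeled, num_neurons = ndimage.label(binary)  # one label per connected blob
print(f"neurons counted: {num_neurons}")
if num_neurons:
    sizes = ndimage.sum(binary, labeled, index=range(1, num_neurons + 1))
    print(f"median neuron area: {np.median(sizes):.0f} px")
```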
MSU Video Alignment and Retrieval Benchmark Suite | Frame-to-frame video alignment/synchronization | Provide a detailed description of the following dataset: MSU Video Alignment and Retrieval Benchmark Suite |
Manually annotated 3-digit occupation codes from the Norwegian 1950 census | Manually annotated 3-digit occupation codes from the Norwegian full count 1950 population census. | Provide a detailed description of the following dataset: Manually annotated 3-digit occupation codes from the Norwegian 1950 census |
Manually annotated 3-digit occupation code training set from the Norwegian 1950 census | The Norwegian Historical Data Centre, 2021, "Manually annotated 3-digit occupation code training set from the Norwegian 1950 census", https://doi.org/10.18710/7JWAZX, DataverseNO, V1 | Provide a detailed description of the following dataset: Manually annotated 3-digit occupation code training set from the Norwegian 1950 census |
DeepSport Dataset | This basketball dataset was acquired under the Walloon region project DeepSport, using the Keemotion system installed in multiple arenas.
We would like to thanks both Keemotion for letting us use their system for raw image acquisition during live productions, and the LNB for the rights on their images. | Provide a detailed description of the following dataset: DeepSport Dataset |
CNTD | A Chinese and Naxi scene text detection dataset, annotated with LabelMe and exported to JSON. | Provide a detailed description of the following dataset: CNTD
CUTE80 | CUTE80 is a dataset of curved text images, collected to evaluate the capability of text detection methods in handling curved text. | Provide a detailed description of the following dataset: CUTE80