| dataset_name | description | prompt |
|---|---|---|
ActivityNet Captions | The **ActivityNet Captions** dataset is built on ActivityNet v1.3, which includes 20k untrimmed YouTube videos with 100k caption annotations. The videos are 120 seconds long on average. Most of the videos contain over 3 annotated events with corresponding start/end times and human-written sentences, which contain 13.5 words on average. The train/validation/test split contains 10,024/4,926/5,044 videos, respectively. | Provide a detailed description of the following dataset: ActivityNet Captions |
smallNORB | The **smallNORB** dataset is a dataset for 3D object recognition from shape. It contains images of 50 toys belonging to 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. The objects were imaged by two cameras under 6 lighting conditions, 9 elevations (30 to 70 degrees every 5 degrees), and 18 azimuths (0 to 340 degrees every 20 degrees).
The training set is composed of 5 instances of each category (instances 4, 6, 7, 8 and 9), and the test set of the remaining 5 instances (instances 0, 1, 2, 3, and 5). | Provide a detailed description of the following dataset: smallNORB |
DocRED | **DocRED** (Document-Level Relation Extraction Dataset) is a relation extraction dataset constructed from Wikipedia and Wikidata. Each document in the dataset is human-annotated with named entity mentions, coreference information, intra- and inter-sentence relations, and supporting evidence. DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document. Along with the human-annotated data, the dataset provides large-scale distantly supervised data.
DocRED contains 132,375 entities and 56,354 relational facts annotated on 5,053 Wikipedia documents. The distantly supervised portion covers an additional 101,873 documents. | Provide a detailed description of the following dataset: DocRED |
iMaterialist | iMaterialist is constructed from over one million fashion images with a label space that includes 8 groups of 228 fine-grained attributes in total. Each image is annotated by experts with multiple, high-quality fashion attributes. | Provide a detailed description of the following dataset: iMaterialist |
ImageNet-C | **ImageNet-C** is an open-source dataset that consists of algorithmically generated corruptions (e.g., blur, noise) applied to the ImageNet test set. | Provide a detailed description of the following dataset: ImageNet-C |
ImageNet-A | The **ImageNet-A** dataset consists of real-world, unmodified, and naturally occurring examples that are misclassified by ResNet models. | Provide a detailed description of the following dataset: ImageNet-A |
BIOSSES | The BIOSSES data set comprises a total of 100 sentence pairs, all of which were selected from the "[TAC2 Biomedical Summarization Track Training Data Set](https://tac.nist.gov/2014/BiomedSumm/)".
The sentence pairs were evaluated by five different human experts who judged their similarity and assigned scores in the range [0-4]. Our guideline was prepared based on the SemEval 2012 Task 6 Guideline.
Image source: [BIOSSES](https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html) | Provide a detailed description of the following dataset: BIOSSES |
MedNLI | The **MedNLI** dataset consists of sentence pairs developed by physicians from the Past Medical History section of MIMIC-III clinical notes, annotated as Definitely True, Maybe True, or Definitely False. The dataset contains 11,232 training, 1,395 development and 1,422 test instances. This provides a natural language inference (NLI) task grounded in the medical history of patients.
Source: [MT-Clinical BERT: Scaling Clinical Information Extraction with Multitask Learning](https://arxiv.org/abs/2004.10220)
Image Source: [https://arxiv.org/abs/1904.02181](https://arxiv.org/abs/1904.02181) | Provide a detailed description of the following dataset: MedNLI |
UCF-QNRF | The **UCF-QNRF** dataset is a crowd counting dataset with large diversity in scenes as well as in background types. It consists of 1,535 high-resolution images from Flickr, web search, and Hajj footage. The number of people (i.e., the count) varies from 50 to 12,000 across images. | Provide a detailed description of the following dataset: UCF-QNRF |
WiderPerson | WiderPerson contains a total of 13,382 images with 399,786 annotations, i.e., 29.87 annotations per image, which means the dataset contains dense pedestrians with various kinds of occlusion. Pedestrians in the dataset are therefore extremely challenging due to large variations in scenario and occlusion, making it suitable for evaluating pedestrian detectors in the wild. | Provide a detailed description of the following dataset: WiderPerson |
CID | The **CID** (**Campus Image Dataset**) is a dataset captured in low-light environments with the help of Android programming. Its basic unit is the group, which is named by capture time and contains 8 exposure-time-varying raw images shot in a burst.
Source: [https://github.com/505030475/ExtremeLowLight](https://github.com/505030475/ExtremeLowLight) | Provide a detailed description of the following dataset: CID |
LeNER-Br | LeNER-Br is a dataset for named entity recognition (NER) in Brazilian legal texts. | Provide a detailed description of the following dataset: LeNER-Br |
DAVIS | The Densely Annotated Video Segmentation dataset (**DAVIS**) is a high-quality, high-resolution, densely annotated video segmentation dataset available at two resolutions, 480p and 1080p. There are 50 video sequences with 3,455 frames densely annotated at the pixel level: 30 videos with 2,079 frames are for training and 20 videos with 1,376 frames are for validation. | Provide a detailed description of the following dataset: DAVIS |
VIST | The **Visual Storytelling** Dataset (**VIST**) consists of 210,819 unique photos and 50,000 stories. The images were collected from albums on Flickr. The albums included 10 to 50 images, and all the images in an album were taken within a 48-hour span. The stories were created by workers on Amazon Mechanical Turk, where the workers were instructed to choose five images from the album and write a story about them. Every story has five sentences, and every sentence is paired with its appropriate image. The dataset is split into 3 subsets: a training set (80%), a validation set (10%) and a test set (10%). All the words and punctuation marks in the stories are separated by a space character, and all the location names are replaced with the word "location". All the names of people are replaced with the word "male" or "female" depending on the gender of the person. | Provide a detailed description of the following dataset: VIST |
DTD | The **Describable Textures Dataset** (**DTD**) contains 5640 texture images in the wild. They are annotated with human-centric attributes inspired by the perceptual properties of textures. | Provide a detailed description of the following dataset: DTD |
Adience | The **Adience** dataset, published in 2014, contains 26,580 photos across 2,284 subjects with a binary gender label and one label from eight different age groups, partitioned into five splits. The key principle of the data set is to capture the images as close to real world conditions as possible, including all variations in appearance, pose, lighting condition and image quality, to name a few. | Provide a detailed description of the following dataset: Adience |
Matterport3D | The **Matterport3D** dataset is a large RGB-D dataset for scene understanding in indoor environments. It contains 10,800 panoramic views inside 90 real building-scale scenes, constructed from 194,400 RGB-D images. Each scene is a residential building consisting of multiple rooms and floor levels, and is annotated with surface reconstructions, camera poses, and semantic segmentation. | Provide a detailed description of the following dataset: Matterport3D |
OIE2016 | OIE2016 is the first large-scale OpenIE benchmark. It was created by automatic conversion from QA-SRL (He et al., 2015), a semantic role labeling dataset. The sentences come from the news (e.g., WSJ) and encyclopedia (e.g., WIKI) domains. Since there are no restrictions on the elements of OpenIE extractions, partial-matching criteria are typically used instead of exact matching. Hence, the evaluation script can tolerate extractions that differ slightly from the gold annotation. | Provide a detailed description of the following dataset: OIE2016 |
ToTTo | ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
During the dataset creation process, tables from English Wikipedia are matched with (noisy) descriptions. Each table cell mentioned in the description is highlighted and the descriptions are iteratively cleaned and corrected to faithfully reflect the content of the highlighted cells. | Provide a detailed description of the following dataset: ToTTo |
PCam | **PatchCamelyon** is an image classification dataset. It consists of 327,680 color images (96×96 px) extracted from histopathologic scans of lymph node sections. Each image is annotated with a binary label indicating the presence of metastatic tissue. PCam provides a new benchmark for machine learning models: bigger than CIFAR10, smaller than ImageNet, trainable on a single GPU. | Provide a detailed description of the following dataset: PCam |
Kumar | The **Kumar** dataset contains 30 image tiles of 1,000×1,000 pixels from seven organs (6 breast, 6 liver, 6 kidney, 6 prostate, 2 bladder, 2 colon and 2 stomach) of The Cancer Genome Atlas (TCGA) database, acquired at 40× magnification. Within each image, the boundary of each nucleus is fully annotated. | Provide a detailed description of the following dataset: Kumar |
HellaSwag | HellaSwag is a challenge dataset for evaluating commonsense NLI that is especially hard for state-of-the-art models, though its questions are trivial for humans (>95% accuracy). | Provide a detailed description of the following dataset: HellaSwag |
LAMBADA | The **LAMBADA** (LAnguage Modeling Broadened to Account for Discourse Aspects) benchmark is an open-ended cloze task which consists of about 10,000 passages from BooksCorpus where a missing target word is predicted in the last sentence of each passage. The missing word is constrained to always be the last word of the last sentence and there are no candidate words to choose from. Examples were filtered by humans to ensure they were possible to guess given the context, i.e., the sentences in the passage leading up to the last sentence. Examples were further filtered to ensure that missing words could not be guessed without the context, ensuring that models attempting the dataset would need to reason over the entire paragraph to answer questions. | Provide a detailed description of the following dataset: LAMBADA |
PIQA | PIQA is a dataset for commonsense reasoning, and was created to investigate the physical knowledge of existing models in NLP. | Provide a detailed description of the following dataset: PIQA |
OpenBookQA | **OpenBookQA** is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject. It consists of 5,957 multiple-choice elementary-level science questions (4,957 train, 500 dev, 500 test), which probe the understanding of a small “book” of 1,326 core science facts and the application of these facts to novel situations. For training, the dataset includes a mapping from each question to the core science fact it was designed to probe. Answering OpenBookQA questions requires additional broad common knowledge, not contained in the book. The questions, by design, are answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm.
Additionally, the dataset includes a collection of 5,167 crowd-sourced common knowledge facts, and an expanded version of the train/dev/test questions where each question is associated with its originating core fact, a human accuracy score, a clarity score, and an anonymized crowd-worker ID. | Provide a detailed description of the following dataset: OpenBookQA |
WSC | The **Winograd Schema Challenge** was introduced both as an alternative to the Turing Test and as a test of a system’s ability to do commonsense reasoning. A Winograd schema is a pair of sentences differing in one or two words with a highly ambiguous pronoun, resolved differently in the two sentences, that appears to require commonsense knowledge to be resolved correctly. The examples were designed to be easily solvable by humans but difficult for machines, in principle requiring a deep understanding of the content of the text and the situation it describes.
The original Winograd Schema Challenge dataset consisted of 100 Winograd schemas constructed manually by AI experts. As of 2020 there are 285 examples available; however, the last 12 examples were only added recently. To ensure consistency with earlier models, several authors prefer to report performance on the first 273 examples only. These datasets are usually referred to as **WSC285** and **WSC273**, respectively. | Provide a detailed description of the following dataset: WSC |
arXiv | **Arxiv HEP-TH (high energy physics theory) citation graph** is from the e-print **arXiv** and covers all the citations within a dataset of 27,770 papers with 352,807 edges. If a paper i cites paper j, the graph contains a directed edge from i to j. If a paper cites, or is cited by, a paper outside the dataset, the graph does not contain any information about this.
The data covers papers in the period from January 1993 to April 2003 (124 months).
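A minimal loading sketch with `networkx` (assuming the SNAP release file `cit-HepTh.txt.gz`, whose `#`-prefixed header lines are comments):
```python
import gzip
import networkx as nx

# Directed citation graph: an edge i -> j means paper i cites paper j.
with gzip.open("cit-HepTh.txt.gz", "rt") as f:
    G = nx.read_edgelist(f, comments="#", create_using=nx.DiGraph, nodetype=int)

print(G.number_of_nodes(), G.number_of_edges())  # expect 27,770 nodes and 352,807 edges
```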
Source: [https://snap.stanford.edu/data/cit-HepTh.html](https://snap.stanford.edu/data/cit-HepTh.html) | Provide a detailed description of the following dataset: arXiv |
ECE | The ECE dataset (Gui et al., 2016a) is collected from SINA city news and contains 2,105 instances. Each document has exactly one emotion word and one or more emotion causes. | Provide a detailed description of the following dataset: ECE |
PhyAAt | The dataset contains a collection of physiological signals (EEG, GSR, PPG) obtained from an experiment on auditory attention to natural speech. Ethical approval was acquired for the experiment. Details of the experiment can be found here: **[https://phyaat.github.io/experiment](https://phyaat.github.io/experiment)**
### Dataset
The dataset contains three physiological signals recorded at a sampling rate of 128 Hz from 25 healthy subjects during the experiment. The electroencephalogram (EEG) signal was recorded using a 14-channel Emotiv Epoc device. Two signal streams of Galvanic Skin Response (GSR) were recorded: the instantaneous samples and a moving-averaged signal. From the photoplethysmogram (PPG) sensor (a pulse sensor), a raw signal, inter-beat interval (IBI), and pulse rate were recorded. All the signals are properly labeled.
- EEG Channels: 'AF3', 'F7', 'F3', 'FC5', 'T7', 'P7', 'O1', 'O2', 'P8', 'T8', 'FC6', 'F4', 'F8', 'AF4'
- GSR Signal: Instantaneous and moving averaged signal streams
- PPG: PPG (an ECG-like signal), IBI (inter-beat interval) and BPM (beats per minute)
### Download the dataset
#### Using Python
To download the dataset, install the **phyaat** library and download through it:
`pip install phyaat`
```
import phyaat as ph

# download the dataset of subject 1 into the given path 'baseDir'
dirPath = ph.download_data(baseDir='../PhyAAt_Data', subject=1, verbose=0, overwrite=False)

# download the dataset of all the subjects
dirPath = ph.download_data(baseDir='../PhyAAt_Data', subject=-1, verbose=0, overwrite=False)
```
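Once downloaded, a minimal loading sketch is shown below; the helper names (`ph.ReadFilesPath`, `ph.Subject`, `getXy_eeg`) are assumed from the library's getting-started guide, so verify the exact API there:
```python
import phyaat as ph

dirPath = ph.download_data(baseDir='../PhyAAt_Data', subject=1)

# Assumed helpers: ReadFilesPath maps subject ids to their csv files,
# Subject wraps one subject's recordings, getXy_eeg extracts features/labels.
filesDict = ph.ReadFilesPath(dirPath)
Subj = ph.Subject(filesDict[1])

# 0.5 Hz highpass to remove baseline drift from the EEG channels
Subj.filterEEG(band=[0.5], btype='highpass', order=5)

# task=4 is assumed to select the listening-writing-resting (LWR) task
X_train, y_train, X_test, y_test = Subj.getXy_eeg(task=4)
```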
#### Manually
If you are using another programming framework such as MATLAB or R, download the dataset manually from the
**[Github repository](https://github.com/Nikeshbajaj/PhyaatDataset)**
and extract all the CSV files.
For more details on downloading and using dataset, check here: **[Getting Started](https://phyaat.github.io/introduction)**
### Helper Scripts
There are starter scripts and benchmark code to start building models. They are available here - **[https://phyaat.github.io/modeling/](https://phyaat.github.io/modeling/)** | Provide a detailed description of the following dataset: PhyAAt |
LEVIR-CD | LEVIR-CD is a new large-scale remote sensing building Change Detection dataset. It serves as a new benchmark for evaluating change detection (CD) algorithms, especially those based on deep learning.
LEVIR-CD consists of 637 very high-resolution (VHR, 0.5 m/pixel) Google Earth (GE) image patch pairs with a size of 1024 × 1024 pixels. These bitemporal images, with a time span of 5 to 14 years, exhibit significant land-use changes, especially construction growth. LEVIR-CD covers various types of buildings, such as villa residences, tall apartments, small garages and large warehouses. The dataset focuses on building-related changes, including building growth (the change from soil/grass/hardened ground or a building under construction to a new built-up region) and building decline. The bitemporal images are annotated by remote sensing image interpretation experts using binary labels (1 for changed and 0 for unchanged). Each sample is annotated by one annotator and then double-checked by another to produce high-quality annotations. The fully annotated LEVIR-CD contains a total of 31,333 individual change-building instances. | Provide a detailed description of the following dataset: LEVIR-CD |
FEVER | FEVER is a publicly available dataset for fact extraction and verification against textual sources.
It consists of 185,445 claims manually verified against the introductory sections of Wikipedia pages and classified as SUPPORTED, REFUTED or NOTENOUGHINFO. For the first two classes, systems and annotators need to also return the combination of sentences forming the necessary evidence supporting or refuting the claim.
The claims were generated by human annotators extracting claims from Wikipedia and mutating them in a variety of ways, some of which were meaning-altering. The verification of each claim was conducted in a separate annotation process by annotators who were aware of the page but not the sentence from which the original claim was extracted; consequently, in 31.75% of the claims more than one sentence was considered appropriate evidence. Claims require composition of evidence from multiple sentences in 16.82% of cases. Furthermore, in 12.15% of the claims, this evidence was taken from multiple pages. | Provide a detailed description of the following dataset: FEVER |
MELD | **Multimodal EmotionLines Dataset** (**MELD**) has been created by enhancing and extending the EmotionLines dataset. MELD contains the same dialogue instances available in EmotionLines, but it also encompasses the audio and visual modalities along with text. MELD has more than 1,400 dialogues and 13,000 utterances from the Friends TV series. Multiple speakers participated in the dialogues. Each utterance in a dialogue has been labeled with one of seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise and Fear. MELD also has sentiment (positive, negative and neutral) annotations for each utterance. | Provide a detailed description of the following dataset: MELD |
EmoryNLP | EmoryNLP comprises 97 episodes, 897 scenes, and 12,606 utterances, where each utterance is annotated with one of seven emotions: the six primary emotions in Willcox's (1982) feeling wheel (sad, mad, scared, powerful, peaceful, joyful) plus a default emotion of neutral. | Provide a detailed description of the following dataset: EmoryNLP |
4D Light Field Dataset | The 4D Light Field Dataset is a light field benchmark consisting of 24 carefully designed, synthetic, densely sampled 4D light fields with highly accurate disparity ground truth. | Provide a detailed description of the following dataset: 4D Light Field Dataset |
Virtual KITTI 2 | Virtual KITTI 2 is an updated version of the well-known Virtual KITTI dataset, which consists of 5 sequence clones from the KITTI tracking benchmark. In addition, the dataset provides different variants of these sequences, such as modified weather conditions (e.g. fog, rain) or modified camera configurations (e.g. rotated by 15°). For each sequence, multiple sets of images are provided containing RGB, depth, class segmentation, instance segmentation, flow, and scene flow data. Camera parameters and poses as well as vehicle locations are available as well. To showcase some of the dataset's capabilities, the authors ran multiple relevant experiments using state-of-the-art algorithms from the field of autonomous driving. The dataset is available for download at https://europe.naverlabs.com/Research/Computer-Vision/Proxy-Virtual-Worlds. | Provide a detailed description of the following dataset: Virtual KITTI 2 |
WHAMR! | **WHAMR!** is a dataset for noisy and reverberant speech separation. It extends [WHAM!](/dataset/wham) by introducing synthetic reverberation to the speech sources in addition to the existing noise. Room impulse responses were generated and convolved using `pyroomacoustics`. Reverberation times were chosen to approximate domestic and classroom environments (expected to be similar to the restaurants and coffee shops where the WHAM! noise was collected), and further classified as high, medium, and low reverberation based on a qualitative assessment of the mixture's noise recording. | Provide a detailed description of the following dataset: WHAMR! |
VoiceBank + DEMAND | VoiceBank+DEMAND is a noisy speech database for training speech enhancement algorithms and TTS models. The database was designed to train and test speech enhancement methods that operate at 48 kHz. A more detailed description can be found in the paper associated with the database. Some of the noises were obtained from the DEMAND database, available here: http://parole.loria.fr/DEMAND/ . The speech data were obtained from the Voice Banking Corpus, available here: http://homepages.inf.ed.ac.uk/jyamagis/release/VCTK-Corpus.tar.gz . | Provide a detailed description of the following dataset: VoiceBank + DEMAND |
BUFF | **BUFF** consists of 5 subjects, 3 male and 2 female, wearing 2 clothing styles: a) t-shirt and long pants and b) a soccer outfit.
They perform 3 different motions: i) hips, ii) tilt_twist_left, iii) shoulders_mill. | Provide a detailed description of the following dataset: BUFF |
Taskonomy | Taskonomy provides a large and high-quality dataset of varied indoor scenes.
- Complete pixel-level geometric information via aligned meshes.
- Semantic information via knowledge distillation from ImageNet, MS COCO, and MIT Places.
- Globally consistent camera poses. Complete camera intrinsics.
- High-definition images.
- Roughly 3× the size of ImageNet. | Provide a detailed description of the following dataset: Taskonomy |
Abalone | Predicting the age of abalone from physical measurements. The age of abalone is determined by cutting the shell through the cone, staining it, and counting the number of rings through a microscope -- a boring and time-consuming task. Other measurements, which are easier to obtain, are used to predict the age. Further information, such as weather patterns and location (hence food availability), may be required to solve the problem.
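As a quick illustration of the prediction task, a minimal sketch with pandas and scikit-learn; the file URL and column order are assumptions based on the UCI repository's documentation, and age in years is conventionally rings + 1.5:
```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

cols = ["sex", "length", "diameter", "height", "whole_weight",
        "shucked_weight", "viscera_weight", "shell_weight", "rings"]
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data"
df = pd.read_csv(url, names=cols)  # the raw file ships without a header row

X = pd.get_dummies(df.drop(columns="rings"), columns=["sex"])  # sex is the only categorical feature
y = df["rings"]  # age in years is approximately rings + 1.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```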
Source: [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/datasets/Abalone)
Image Source: [http://archive.ics.uci.edu/ml/datasets/Abalone](http://archive.ics.uci.edu/ml/datasets/Abalone) | Provide a detailed description of the following dataset: Abalone |
Letter | The Letter Recognition Data Set is a character image recognition dataset. The task is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters in the English alphabet. The character images were based on 20 different fonts, and each letter within these 20 fonts was randomly distorted to produce a file of 20,000 unique stimuli. Each stimulus was converted into 16 primitive numerical attributes (statistical moments and edge counts), which were then scaled to fit into a range of integer values from 0 through 15. | Provide a detailed description of the following dataset: Letter |
Electricity | **Abstract**: Measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Different electrical quantities and some sub-metering values are available.
| Data Set Characteristics | Number of Instances | Area | Attribute Characteristics | Number of Attributes | Date Donated | Associated Tasks | Missing Values |
| ------------------------- | ------------------- | -------- | ------------------------- | -------------------- | ------------ | ---------------------- | -------------- |
| Multivariate, Time-Series | 2075259 | Physical | Real | 9 | 2012-08-30 | Regression, Clustering | Yes |
### Source:
Georges Hebrail (georges.hebrail '@' edf.fr), Senior Researcher, EDF R&D, Clamart, France
Alice Berard, TELECOM ParisTech Master of Engineering Internship at EDF R&D, Clamart, France
### Data Set Information:
This archive contains 2,075,259 measurements gathered in a house located in Sceaux (7 km from Paris, France) between December 2006 and November 2010 (47 months).
Notes:
1. (global_active_power\*1000/60 - sub_metering_1 - sub_metering_2 - sub_metering_3) represents the active energy consumed every minute (in watt hour) in the household by electrical equipment not measured in sub-meterings 1, 2 and 3 (see the sketch after these notes).
2. The dataset contains some missing values in the measurements (nearly 1.25% of the rows). All calendar timestamps are present in the dataset, but for some timestamps the measurement values are missing: a missing value is represented by the absence of value between two consecutive semi-colon attribute separators. For instance, the dataset shows missing values on April 28, 2007.
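A minimal pandas sketch of note 1 (the file name `household_power_consumption.txt` and the exact column names are assumptions based on the UCI distribution of this data):
```python
import pandas as pd

# Semicolon-separated file; missing measurements appear as empty fields (or '?').
df = pd.read_csv("household_power_consumption.txt", sep=";",
                 na_values=["?", ""], low_memory=False)

# Active energy (in watt-hour per minute) consumed by equipment
# not covered by the three sub-meterings, per note 1 above.
df["other_active_energy"] = (df["Global_active_power"] * 1000 / 60
                             - df["Sub_metering_1"]
                             - df["Sub_metering_2"]
                             - df["Sub_metering_3"])
```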
### Attribute Information:
1. `date`: Date in format `dd/mm/yyyy`
2. `time`: time in format `hh:mm:ss`
3. `global_active_power`: household global minute-averaged active power (in kilowatt)
4. `global_reactive_power`: household global minute-averaged reactive power (in kilowatt)
5. `voltage`: minute-averaged voltage (in volt)
6. `global_intensity`: household global minute-averaged current intensity (in ampere)
7. `sub_metering_1`: energy sub-metering No. 1 (in watt-hour of active energy). It corresponds to the kitchen, containing mainly a dishwasher, an oven and a microwave (hot plates are not electric but gas powered).
8. `sub_metering_2`: energy sub-metering No. 2 (in watt-hour of active energy). It corresponds to the laundry room, containing a washing-machine, a tumble-drier, a refrigerator and a light.
9. `sub_metering_3`: energy sub-metering No. 3 (in watt-hour of active energy). It corresponds to an electric water-heater and an air-conditioner.
### Relevant Papers:
N/A
### Citation Request:
This dataset is made available under the “Creative Commons Attribution 4.0 International (CC BY 4.0)” license | Provide a detailed description of the following dataset: Electricity |
NetHack Learning Environment | The **NetHack Learning Environment** (NLE) is a Reinforcement Learning environment based on NetHack 3.6.6. It is designed to provide a standard reinforcement learning interface to the game, and comes with tasks that function as a first step to evaluate agents on this new environment.
NetHack is one of the oldest and arguably most impactful video games in history, as well as one of the hardest roguelikes currently being played by humans. It is procedurally generated, rich in entities and dynamics, and overall an extremely challenging environment for current state-of-the-art RL agents, while being much cheaper to run compared to other challenging testbeds. Through NLE, the authors wish to establish NetHack as one of the next challenges for research in decision making and machine learning.
Source: [https://github.com/facebookresearch/nle](https://github.com/facebookresearch/nle)
Image Source: [https://github.com/facebookresearch/nle](https://github.com/facebookresearch/nle) | Provide a detailed description of the following dataset: NetHack Learning Environment |
Kvasir-SEG | Kvasir-SEG is an open-access dataset of gastrointestinal polyp images and corresponding segmentation masks, manually annotated by a medical doctor and then verified by an experienced gastroenterologist. | Provide a detailed description of the following dataset: Kvasir-SEG |
2018 Data Science Bowl | This dataset contains a large number of segmented nuclei images. The images were acquired under a variety of conditions and vary in the cell type, magnification, and imaging modality (brightfield vs. fluorescence). The dataset is designed to challenge an algorithm's ability to generalize across these variations.
Each image is represented by an associated ImageId. Files belonging to an image are contained in a folder with this ImageId. Within this folder are two subfolders:
- `images` contains the image file.
- `masks` contains the segmented masks of each nucleus. This folder is only included in the training set. Each mask contains one nucleus. Masks are not allowed to overlap (no pixel belongs to two masks).
The second stage dataset will contain images from unseen experimental conditions. To deter hand labeling, it will also contain images that are ignored in scoring. The metric used to score this competition requires that your submissions are in run-length encoded format. Please see the evaluation page for details.
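As a concrete sketch of that submission format, here is a run-length encoder under the conventions assumed from the competition's evaluation page (pixels flattened in column-major order, 1-indexed `start length` pairs):
```python
import numpy as np

def rle_encode(mask: np.ndarray) -> str:
    # Flatten column-major, the order the scoring metric is assumed to expect.
    pixels = mask.flatten(order="F")
    padded = np.concatenate([[0], pixels, [0]])
    runs = np.where(padded[1:] != padded[:-1])[0] + 1  # 1-indexed run boundaries
    runs[1::2] -= runs[::2]                            # turn end positions into lengths
    return " ".join(map(str, runs))

# A 2x2 mask whose left column is foreground encodes as "1 2".
print(rle_encode(np.array([[1, 0], [1, 0]])))
```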
As with any human-annotated dataset, you may find various forms of errors in the data. You may manually correct errors you find in the training set. The dataset will not be updated/re-released unless it is determined that there are a large number of systematic errors. The masks of the stage 1 test set will be released with the release of the stage 2 test set. | Provide a detailed description of the following dataset: 2018 Data Science Bowl |
CVC-ClinicDB | **CVC-ClinicDB** is an open-access dataset of 612 images with a resolution of 384×288 from 31 colonoscopy sequences. It is used for medical image segmentation, in particular polyp detection in colonoscopy videos.
Source: [ResUNet++: An Advanced Architecture for Medical Image Segmentation](https://arxiv.org/abs/1911.07067)
Image Source: [https://polyp.grand-challenge.org/CVCClinicDB/](https://polyp.grand-challenge.org/CVCClinicDB/) | Provide a detailed description of the following dataset: CVC-ClinicDB |
CAT2000 | CAT2000 includes 4,000 images: 200 from each of 20 categories covering different types of scenes, such as Cartoons, Art, Objects, Low resolution images, Indoor, Outdoor, Jumbled, Random, and Line drawings. | Provide a detailed description of the following dataset: CAT2000 |
FixaTons | FixaTons is a large collection of datasets of human scanpaths (temporally ordered sequences of fixations) and saliency maps. | Provide a detailed description of the following dataset: FixaTons |
ImageNet-R | ImageNet-R(endition) contains art, cartoons, deviantart, graffiti, embroidery, graphics, origami, paintings, patterns, plastic objects, plush objects, sculptures, sketches, tattoos, toys, and video game renditions of ImageNet classes.
ImageNet-R has renditions of 200 ImageNet classes resulting in 30,000 images. | Provide a detailed description of the following dataset: ImageNet-R |
20 Newsgroups | The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. | Provide a detailed description of the following dataset: 20 Newsgroups |
HACS | HACS is a dataset for human action recognition. It uses a taxonomy of 200 action classes, which is identical to that of the ActivityNet-v1.3 dataset. It has 504K videos retrieved from YouTube. Each one is strictly shorter than 4 minutes, and the average length is 2.6 minutes. A total of 1.5M clips of 2-second duration are sparsely sampled by methods based on both uniform randomness and consensus/disagreement of image classifiers. 0.6M and 0.9M clips are annotated as positive and negative samples, respectively.
The authors split the collection into training, validation and testing sets of 1.4M, 50K and 50K clips, sampled from 492K, 6K and 6K videos, respectively. | Provide a detailed description of the following dataset: HACS |
Kinetics-700 | Kinetics-700 is a video dataset of 650,000 clips that covers 700 human action classes. The videos include human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands and hugging. Each action class has at least 700 video clips. Each clip is annotated with an action class and lasts approximately 10 seconds. | Provide a detailed description of the following dataset: Kinetics-700 |
Completion3D | The Completion3D benchmark is a dataset for evaluating state-of-the-art 3D object point cloud completion methods. Given a partial 3D object point cloud, the goal is to infer a complete 3D point cloud for the object. | Provide a detailed description of the following dataset: Completion3D |
QMNIST | The exact pre-processing steps used to construct the MNIST dataset have long been lost. This leaves us with no reliable way to associate its characters with the ID of the writer and little hope to recover the full MNIST testing set that had 60K images but was never released. The official MNIST testing set only contains 10K randomly sampled images and is often considered too small to provide meaningful confidence intervals.
The **QMNIST** dataset was generated from the original data found in the NIST Special Database 19 with the goal to match the MNIST preprocessing as closely as possible.
QMNIST is released under a BSD-style license.
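QMNIST also ships with `torchvision`; a minimal loading sketch, where `what` selects the split (including the extended test splits):
```python
import torchvision

# 'what' is one of 'train', 'test', 'test10k', 'test50k', or 'nist'.
train = torchvision.datasets.QMNIST(root="data", what="train", download=True)
test50k = torchvision.datasets.QMNIST(root="data", what="test50k", download=True)

img, label = train[0]  # a PIL image and its integer class label
```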
Source: [https://github.com/facebookresearch/qmnist](https://github.com/facebookresearch/qmnist)
Image Source: [https://github.com/facebookresearch/qmnist](https://github.com/facebookresearch/qmnist) | Provide a detailed description of the following dataset: QMNIST |
ROCStories | **ROCStories** is a collection of commonsense short stories. The corpus consists of 100,000 five-sentence stories created by Amazon Mechanical Turk workers; each story logically follows everyday topics. These stories contain a variety of commonsense causal and temporal relations between everyday events. Writers also developed an additional 3,742 Story Cloze Test stories, each containing a four-sentence body and two candidate endings. The endings were collected by asking Mechanical Turk workers to write both a right ending and a wrong ending after the original endings of the given short stories were removed. Both endings were required to make logical sense and to include at least one character from the main storyline. The published dataset comprises a training set of 98,162 stories (without candidate wrong endings), plus an evaluation set and a test set of 1,871 stories each, both with the same structure (one body + two candidate endings). | Provide a detailed description of the following dataset: ROCStories |
ePillID | **ePillID** is a benchmark for developing and evaluating computer vision models for pill identification. The ePillID benchmark is designed as a low-shot fine-grained benchmark, reflecting real-world challenges for developing image-based pill identification systems.
The characteristics of the ePillID benchmark include:
* Reference and consumer images: The reference images are taken with controlled lighting and backgrounds, and with professional equipment. The consumer images are taken with real-world settings including different lighting, backgrounds, and equipment. For most of the pills, one image per side (two images per pill type) is available from the NIH Pillbox dataset.
* Low-shot and fine-grained setting: 13k images representing 9804 appearance classes (two sides for 4902 pill types). For most of the appearance classes, there exists only one reference image, making it a challenging low-shot recognition setting.
Source: [https://github.com/usuyama/ePillID-benchmark](https://github.com/usuyama/ePillID-benchmark)
Image Source: [https://github.com/usuyama/ePillID-benchmark](https://github.com/usuyama/ePillID-benchmark) | Provide a detailed description of the following dataset: ePillID |
CodeSearchNet | The **CodeSearchNet** Corpus is a large dataset of functions with associated documentation written in Go, Java, JavaScript, PHP, Python, and Ruby from open source projects on GitHub. The CodeSearchNet Corpus includes:
* Six million methods overall
* Two million of which have associated documentation (docstrings, JavaDoc, and more)
* Metadata that indicates the original location (repository or line number, for example) where the data was found | Provide a detailed description of the following dataset: CodeSearchNet |
WikiTableQuestions | **WikiTableQuestions** is a question answering dataset over semi-structured tables. It is comprised of question-answer pairs on HTML tables, and was constructed by selecting data tables from Wikipedia that contained at least 8 rows and 5 columns. Amazon Mechanical Turk workers were then tasked with writing trivia questions about each table. WikiTableQuestions contains 22,033 questions. The questions were not designed by predefined templates but were hand crafted by users, demonstrating high linguistic variance. Compared to previous datasets on knowledge bases it covers nearly 4,000 unique column headers, containing far more relations than closed domain datasets and datasets for querying knowledge bases. Its questions cover a wide range of domains, requiring operations such as table lookup, aggregation, superlatives (argmax, argmin), arithmetic operations, joins and unions. | Provide a detailed description of the following dataset: WikiTableQuestions |
AViD | AViD is a collection of action videos from many different countries. The motivation is to create a public dataset that benefits training and pretraining of action recognition models for everybody, rather than being useful only for limited countries. | Provide a detailed description of the following dataset: AViD |
MTL-AQA | A new multitask action quality assessment (AQA) dataset, the largest to date, comprising more than 1,600 diving samples; it contains detailed annotations for fine-grained action recognition, commentary generation, and AQA score estimation. Videos from multiple angles are provided wherever available. | Provide a detailed description of the following dataset: MTL-AQA |
AQA-7 | AQA-7 consists of 1,106 action samples from seven actions, with quality scores as measured by expert human judges. | Provide a detailed description of the following dataset: AQA-7 |
AGENDA | Abstract GENeration DAtaset (AGENDA) is a dataset of knowledge graphs paired with scientific abstracts. The dataset consists of 40k paper titles and abstracts from the Semantic Scholar Corpus taken from the proceedings of 12 top AI conferences. | Provide a detailed description of the following dataset: AGENDA |
GoPro | The **GoPro** dataset for deblurring consists of 3,214 blurred images of size 1,280×720, divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground-truth sharp image obtained with a high-speed camera. | Provide a detailed description of the following dataset: GoPro |
AMZ Computers | AMZ Computers is a co-purchase graph extracted from Amazon, where nodes represent products, edges represent the co-purchased relations of products, and features are bag-of-words vectors extracted from product reviews. | Provide a detailed description of the following dataset: AMZ Computers |
SVG-Icons8 | SVG-Icons8 is a large-scale dataset of SVG icons, released along with an open-source library for SVG manipulation. | Provide a detailed description of the following dataset: SVG-Icons8 |
K2HPD | K2HPD includes 100K depth images captured under challenging scenarios. | Provide a detailed description of the following dataset: K2HPD |
Binarized MNIST | Binarized MNIST is a binarized version of the MNIST handwritten digit dataset, in which grayscale pixel intensities are converted to binary values; it is commonly used as a benchmark for generative models. | Provide a detailed description of the following dataset: Binarized MNIST |
CAMO | The Camouflaged Object (CAMO) dataset is specifically designed for the task of camouflaged object segmentation. It focuses on two categories, i.e., naturally camouflaged objects and artificially camouflaged objects, which usually correspond to animals and humans in the real world, respectively. The camouflaged object images consist of 1,250 images (1,000 for the training set and 250 for the testing set). Non-camouflaged object images are collected from the MS-COCO dataset (1,000 for the training set and 250 for the testing set). CAMO provides objectness mask ground truth. | Provide a detailed description of the following dataset: CAMO |
CAS-VSR-W1k (LRW-1000) | **LRW-1000 has been renamed CAS-VSR-W1k.** It is a naturally-distributed large-scale benchmark for word-level lipreading in the wild, including 1,000 classes with about 718,018 video samples from more than 2,000 individual speakers. There are more than 1,000,000 Chinese character instances in total. Each class corresponds to the syllables of a Mandarin word composed of one or several Chinese characters. This dataset aims to cover a natural variability over different speech modes and imaging conditions to incorporate challenges encountered in practical applications. | Provide a detailed description of the following dataset: CAS-VSR-W1k (LRW-1000) |
LRS2 | The Oxford-BBC **Lip Reading Sentences 2** (**LRS2**) dataset is one of the largest publicly available datasets for lip reading sentences in-the-wild. The database consists of mainly news and talk shows from BBC programs. Each sentence is up to 100 characters in length. The training, validation and test sets are divided according to broadcast date. It is a challenging set since it contains thousands of speakers without speaker labels and large variation in head pose. The pre-training set contains 96,318 utterances, the training set contains 45,839 utterances, the validation set contains 1,082 utterances and the test set contains 1,242 utterances. | Provide a detailed description of the following dataset: LRS2 |
PeMS04 | PeMS04 is a traffic forecasting benchmark built from data collected by the Caltrans Performance Measurement System (PeMS). | Provide a detailed description of the following dataset: PeMS04 |
Moving MNIST | The **Moving MNIST** dataset contains 10,000 video sequences, each consisting of 20 frames. In each video sequence, two digits move independently around the frame, which has a spatial resolution of 64×64 pixels. The digits frequently intersect with each other and bounce off the edges of the frame. | Provide a detailed description of the following dataset: Moving MNIST |
Sprites | The **Sprites** dataset contains 60×60 color images of animated characters (sprites). There are 672 sprites: 500 for training, 100 for testing and 72 for validation. Each sprite has 20 animations and 178 images, so the full dataset has roughly 120K images in total. There are many changes in the appearance of the sprites; they differ in body shape, gender, hair, armor, arm type, greaves, and weapon. | Provide a detailed description of the following dataset: Sprites |
Hyperpartisan News Detection | Hyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4. Given a news article text, the task is to decide whether it follows hyperpartisan argumentation, i.e., whether it exhibits blind, prejudiced, or unreasoning allegiance to one party, faction, cause, or person.
There are two parts:
* byarticle: Labeled through crowdsourcing on an article basis. The data contains only articles for which a consensus among the crowdsourcing workers existed.
* bypublisher: Labeled by the overall bias of the publisher as provided by BuzzFeed journalists or MediaBiasFactCheck.com. | Provide a detailed description of the following dataset: Hyperpartisan News Detection |
BigPatent | BigPatent consists of 1.3 million records of U.S. patent documents along with human-written abstractive summaries. | Provide a detailed description of the following dataset: BigPatent |
NoW Benchmark | The goal of this benchmark is to introduce a standard evaluation metric to measure the accuracy and robustness of 3D face reconstruction methods under variations in viewing angle, lighting, and common occlusions.
The dataset contains 2054 2D images of 100 subjects, captured with an iPhone X, and a separate 3D head scan for each subject. This head scan serves as ground truth for the evaluation. The subjects are selected to contain variations in age, BMI, and sex (55 female, 45 male). | Provide a detailed description of the following dataset: NoW Benchmark |
WikiHow | **WikiHow** is a dataset of more than 230,000 article and summary pairs extracted and constructed from an online knowledge base written by different human authors. The articles span a wide range of topics and represent highly diverse styles. | Provide a detailed description of the following dataset: WikiHow |
Tobacco-3482 | The Tobacco-3482 dataset consists of document images belonging to 10 classes such as letter, form, email, resume, memo, etc. The dataset has 3482 images. | Provide a detailed description of the following dataset: Tobacco-3482 |
Horse-10 | **Horse-10** is an animal pose estimation dataset. It comprises 30 diverse Thoroughbred horses, for which 22 body parts were labeled by an expert in 8,114 frames. Horses have various coat colors, and the "in-the-wild" aspect of the data, collected at various Thoroughbred yearling sales and farms, adds additional complexity. The authors introduce Horse-C to contrast the domain shift inherent in the Horse-10 dataset with domain shift induced by common image corruptions. | Provide a detailed description of the following dataset: Horse-10 |
FreiHAND | **FreiHAND** is a 3D hand pose dataset which records different hand actions performed by 32 people. For each hand image, MANO-based 3D hand pose annotations are provided. It currently contains 32,560 unique training samples and 3960 unique samples for evaluation. The training samples are recorded with a green screen background allowing for background removal. In addition, it applies three different post processing strategies to training samples for data augmentation. However, these post processing strategies are not applied to evaluation samples. | Provide a detailed description of the following dataset: FreiHAND |
DomainNet | **DomainNet** is a dataset of common objects in six different domains. All domains include 345 categories (classes) of objects, such as bracelet, plane, bird and cello. The domains are: clipart, a collection of clipart images; real, photos and real-world images; sketch, sketches of specific objects; infograph, infographic images featuring a specific object; painting, artistic depictions of objects in the form of paintings; and quickdraw, drawings from worldwide players of the game "Quick, Draw!". | Provide a detailed description of the following dataset: DomainNet |
Ethics | The Ethics1 (sit ethics) dataset was created to test knowledge of the basic concepts of morality. The task is to predict human ethical judgments about diverse text situations in a multi-label classification setting; namely, models must identify the presence of concepts of normative ethics, such as virtue, law, morality, justice, and utilitarianism.
**Motivation**
There is a multitude of approaches to evaluating ethics in machine learning. The Ethics dataset for Russian is created from scratch for the first time, relying on the design compatible with (Hendrycks et al., 2021).
An example in English for illustration purposes:
```
{
 'source': 'gazeta',
 'text': '100-year-old Greta Ploech gave handmade cookies to a toddler who helped her cross a busy highway at a pedestrian crossing. The video was posted on the Readers Channel.',
 'sit_virtue': 1,
 'sit_moral': 0,
 'sit_law': 0,
 'sit_justice': 1,
 'sit_util': 1,
 'episode': [5],
 'perturbation': 'sit_ethics'
}
```
**Data Fields**
- text: a string containing the body of a news article or a fiction text
- source: a string containing the source of the text
- sit_virtue: an integer, either 0 or 1, indicating whether the concept of virtue is present in the text
- sit_moral: an integer, either 0 or 1, indicating whether the concept of morality is present in the text
- sit_law: an integer, either 0 or 1, indicating whether the concept of law is present in the text
- sit_justice: an integer, either 0 or 1, indicating whether the concept of justice is present in the text
- sit_util: an integer, either 0 or 1, indicating whether the concept of utilitarianism is present in the text
- perturbation: a string containing the name of the perturbation applied to the text. If no perturbation was applied, the dataset name is used
- episode: a list of episodes in which the instance is used. Only used for the train set
**Data Splits**
The dataset consists of a training set with labeled examples and a test set in two configurations:
- raw data: includes the original data with no additional sampling
- episodes: the data is split into evaluation episodes and includes several perturbations of the test set for robustness evaluation
**Test Perturbations**
Each training episode in the dataset corresponds to seven test variations, including the original test data and six adversarial test sets, acquired through the modification of the original test through the following text perturbations:
- ButterFingers: randomly adds noise to data by mimicking spelling mistakes made by humans through character swaps based on their keyboard distance
- Emojify: replaces the input words with the corresponding emojis, preserving their original meaning
- EDAdelete: randomly deletes tokens in the text
- EDAswap: randomly swaps tokens in the text
- BackTranslation: generates variations of the context through back-translation (ru -> en -> ru)
- AddSent: generates an extra sentence at the end of the text | Provide a detailed description of the following dataset: Ethics |
Skeleton-Mimetics | Skeleton-Mimetics is a dataset derived from the recently introduced Mimetics dataset. | Provide a detailed description of the following dataset: Skeleton-Mimetics |
Universal Dependencies | The **Universal Dependencies** (UD) project seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for multiple languages. The first version of the dataset was released in 2015 and consisted of 10 treebanks over 10 languages. Version 2.7 released in 2020 consists of 183 treebanks over 104 languages. The annotation consists of UPOS (universal part-of-speech tags), XPOS (language-specific part-of-speech tags), Feats (universal morphological features), Lemmas, dependency heads and universal dependency labels. | Provide a detailed description of the following dataset: Universal Dependencies |
TallyQA | TallyQA is a large-scale dataset for open-ended counting. | Provide a detailed description of the following dataset: TallyQA |
CrisisMMD | CrisisMMD is a large multi-modal dataset collected from Twitter during different natural disasters. It consists of several thousands of manually annotated tweets and images collected during seven major natural disasters, including earthquakes, hurricanes, wildfires, and floods, that happened in the year 2017 across different parts of the world. The provided datasets include three types of annotations. | Provide a detailed description of the following dataset: CrisisMMD |
UAVA | The **UAVA** (*UAV-Assistant*) dataset is specifically designed for fostering applications which consider UAVs and humans as cooperative agents.
It employs a real-world 3D scanned dataset ([Matterport3D](https://niessner.github.io/Matterport/)), physically-based rendering, and a gamified simulator for realistic drone navigation trajectory collection to generate realistic multimodal data, both from the user's exocentric view of the drone and from the drone's egocentric view. | Provide a detailed description of the following dataset: UAVA |
Panoptic | **CMU Panoptic** is a large-scale dataset providing 3D pose annotations (1.5 million) for multiple people engaged in social activities. It contains 65 videos (5.5 hours) with multi-view annotations, but only 17 of them are multi-person scenarios with available camera parameters.
**Massively Multiview System**
* 480 VGA camera views
* 30+ HD views
* 10 RGB-D sensors
* Hardware-based sync
* Calibration
* Interesting Scenes with Labels
**Multiple people**
* Socially interacting groups
* 3D body pose
* 3D facial landmarks
* Transcripts + speaker ID
**Hardware setup**
* 480 VGA cameras, 640 x 480 resolution, 25 fps, synchronized among themselves using a hardware clock
* 31 HD cameras, 1920 x 1080 resolution, 30 fps, synchronized among themselves using a hardware clock, timing aligned with VGA cameras
* 10 Kinect II sensors, 1920 x 1080 (RGB), 512 x 424 (depth), 30 fps, timing aligned among themselves and with other sensors
* 5 DLP projectors, synchronized with HD cameras | Provide a detailed description of the following dataset: Panoptic |
Set5 | The **Set5** dataset is a dataset consisting of 5 images (“baby”, “bird”, “butterfly”, “head”, “woman”) commonly used for testing performance of Image Super-Resolution models.
Image Source: [http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html](http://people.rennes.inria.fr/Aline.Roumy/results/SR_BMVC12.html) | Provide a detailed description of the following dataset: Set5 |
ContactPose | ContactPose is a dataset of hand-object contact paired with hand pose, object pose, and RGB-D images. ContactPose has 2306 unique grasps of 25 household objects grasped with 2 functional intents by 50 participants, and more than 2.9 M RGB-D grasp images. | Provide a detailed description of the following dataset: ContactPose |
DHF1K | **DHF1K** is a video saliency dataset which contains a ground-truth map of binary pixel-wise gaze fixation points and a continuous map of the fixation points after being blurred by a Gaussian filter. DHF1K contains 1,000 videos in total. 700 of the videos are annotated, 600 of which are used for training and 100 for validation. The remaining 300 form the testing set, which is evaluated on a public server. | Provide a detailed description of the following dataset: DHF1K |
How2 | The **How2** dataset contains 13,500 videos, or 300 hours of speech, and is split into 185,187 training, 2022 development (dev), and 2361 test utterances. It has subtitles in English and crowdsourced Portuguese translations. | Provide a detailed description of the following dataset: How2 |
ASSET | ASSET is a new dataset for assessing sentence simplification in English. ASSET is a crowdsourced multi-reference corpus where each simplification was produced by executing several rewriting transformations. | Provide a detailed description of the following dataset: ASSET |
TurkCorpus | TurkCorpus is a dataset with 2,359 original sentences from English Wikipedia, each with 8 manual reference simplifications.
The dataset is divided into two subsets: 2,000 sentences for validation and 359 for testing of sentence simplification models. | Provide a detailed description of the following dataset: TurkCorpus |
IRMA | This collection compiles anonymous radiographs, which have been arbitrarily selected from routine at the Department of Diagnostic Radiology, Aachen University of Technology (RWTH), Aachen, Germany. The imagery represents different ages, genders, view positions and pathologies. Therefore, image quality varies significantly. All images were downscaled to fit into a 512 x 512 bounding box maintaining the original aspect ratio. All images were classified according to the IRMA code. Based on this code, 193 categories were defined. For 12,677 images, these categories are provided. The remaining 1,733 images without code are used as test data for the ImageCLEFmed 2009 competition.
- Training data: 12,677 radiographs with known categories
- Class distribution: Distribution of classes in the training data.
- Test data: 1,733 radiographs without classification. | Provide a detailed description of the following dataset: IRMA |
MLFP | The **MLFP** dataset consists of face presentation attacks captured with seven 3D latex masks and three 2D print attacks. The dataset contains videos captured from color, thermal and infrared channels. | Provide a detailed description of the following dataset: MLFP |
CoNLL++ | CoNLL++ is a corrected version of the CoNLL03 NER dataset where 5.38% of the test sentences have been fixed. | Provide a detailed description of the following dataset: CoNLL++ |
ViSal | **ViSal** (DataViSal.rar, including the ground-truth data) is a video saliency dataset collected for the following paper:
W. Wang, J. Shen, and L. Shao, "Consistent video saliency using local gradient flow optimization and global refinement," IEEE Transactions on Image Processing, 24(11):4185-4196, 2015.
The related source code can be downloaded from https://github.com/shenjianbing/videosal.
The data and code files are free to use for research purposes; if you use them, you should cite the above paper in any resulting publication. The code also uses some publicly available functions.
Contact: wenguanwang@bit.edu.cn, shenjianbing@bit.edu.cn, shenjianbingcg@gmail.com | Provide a detailed description of the following dataset: ViSal |
SOC | SOC (Salient Objects in Clutter) is a dataset for Salient Object Detection (SOD). It includes images with salient and non-salient objects from daily object categories. Beyond object category annotations, each salient image is accompanied by attributes that reflect common challenges in real-world scenes. | Provide a detailed description of the following dataset: SOC |
CoSal2015 | CoSal2015 is a large-scale dataset for co-saliency detection which consists of 2,015 images in 50 categories; each group suffers from various challenging factors such as complex environments, occlusion, target appearance variations and background clutter. All of these increase the difficulty of accurate co-saliency detection.
Source: [Adaptive Graph Convolutional Network with Attention Graph Clustering for Co-saliency Detection](https://arxiv.org/abs/2003.06167)
Image Source: [https://arxiv.org/pdf/1604.07090.pdf](https://arxiv.org/pdf/1604.07090.pdf) | Provide a detailed description of the following dataset: CoSal2015 |