dataset_name | description | prompt |
|---|---|---|
CovidQA | The beginnings of a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle's COVID-19 Open Research Dataset Challenge. | Provide a detailed description of the following dataset: CovidQA |
COVIDx | An open access benchmark dataset comprising 13,975 CXR images across 13,870 patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge. | Provide a detailed description of the following dataset: COVIDx |
CoVoST | CoVoST is a large-scale multilingual speech-to-text translation corpus. Its latest 2nd version covers translations from 21 languages into English and from English into 15 languages. It has a total of 2,880 hours of speech and is diversified with 78K speakers and 66 accents. | Provide a detailed description of the following dataset: CoVoST |
COWC | The Cars Overhead With Context (COWC) data set is a large set of annotated cars from overhead. It is useful for training a device such as a deep neural network to learn to detect and/or count cars. | Provide a detailed description of the following dataset: COWC |
CPCXR | The **COVID-19 Posteroanterior Chest X-Ray fused** (**CPCXR**) dataset is generated by the fusion of three publicly available datasets: the COVID-19 CXR image dataset, the Radiological Society of North America (RSNA) dataset, and the U.S. National Library of Medicine (USNLM) collected Montgomery County dataset - NLM(MC). The dataset consists of samples of diseases labeled as COVID-19, Tuberculosis, Other pneumonia (SARS, MERS, etc.), and Normal. The dataset can be used to train and evaluate deep learning and machine learning models on binary and multi-class classification problems.
Source: [https://github.com/nspunn1993/COVID-19-PA-CXR-fused-dataset](https://github.com/nspunn1993/COVID-19-PA-CXR-fused-dataset) | Provide a detailed description of the following dataset: CPCXR |
CPH | A large-scale database including substantial CU partition data for HEVC intra- and inter-modes. This enables deep learning on the CU partition. | Provide a detailed description of the following dataset: CPH |
CPLFW | A renovation of Labeled Faces in the Wild (LFW), the de facto standard testbed for unconstrained face verification.
There are three motivations behind the construction of the CPLFW benchmark:
1. Establishing a relatively more difficult database to evaluate the performance of real-world face verification, so that the effectiveness of several face verification methods can be fully justified.
2. Continuing the intensive research on LFW with more realistic consideration of pose intra-class variation, and fostering research on cross-pose face verification in unconstrained situations. The challenge of CPLFW emphasizes pose difference to further enlarge intra-class variance. Also, negative pairs are deliberately selected to avoid differences in gender or race. CPLFW considers both the large intra-class variance and the tiny inter-class variance simultaneously.
3. Maintaining the data size, the face verification protocol that provides a 'same/different' benchmark, and the same identities as LFW, so one can easily apply CPLFW to evaluate the performance of face verification. | Provide a detailed description of the following dataset: CPLFW |
CPP | A benchmark dataset that consists of 99,000+ sentences for Chinese polyphone disambiguation. | Provide a detailed description of the following dataset: CPP |
CQR | CQR is an extension to the Stanford Dialogue Corpus. It contains crowd-sourced rewrites to facilitate research in dialogue state tracking using natural language as the interface. | Provide a detailed description of the following dataset: CQR |
CraigslistBargains | A richer dataset based on real items on Craigslist. | Provide a detailed description of the following dataset: CraigslistBargains |
Common Crawl Domain Names | Corpus of domain names scraped from Common Crawl and manually annotated to add word boundaries (e.g. "commoncrawl" to "common crawl"). | Provide a detailed description of the following dataset: Common Crawl Domain Names |
CRD3 | The dataset is collected from 159 Critical Role episodes transcribed to text dialogues, consisting of 398,682 turns. It also includes corresponding abstractive summaries collected from the Fandom wiki. The dataset is linguistically unique in that the narratives are generated entirely through player collaboration and spoken interaction. | Provide a detailed description of the following dataset: CRD3 |
Creative Flow+ Dataset | Includes 3000 animated sequences rendered using styles randomly selected from 40 textured line styles and 38 shading styles, spanning the range between flat cartoon fill and wildly sketchy shading. The dataset includes 124K+ train set frames and 10K test set frames rendered at 1500x1500 resolution, far surpassing the largest available optical flow datasets in size. | Provide a detailed description of the following dataset: Creative Flow+ Dataset |
CRL-Person | Provides two large-scale multi-step benchmarks for biometric identification, where the visual appearance of different classes is highly relevant. | Provide a detailed description of the following dataset: CRL-Person |
Crowd Dataset | A dense crowd dataset with manually annotated ground truth, collected from different public datasets. This dataset comprises 20 videos that exhibit a multitude of motion behaviors covering both obvious and subtle instabilities. | Provide a detailed description of the following dataset: Crowd Dataset |
CrowdFix | Contributes a dataset by: (1) reviewing the dynamics behind saliency and crowds, and (2) using eye tracking to create a dynamic human eye fixation dataset over a new set of crowd videos gathered from the Internet. The videos are annotated into three distinct density levels. | Provide a detailed description of the following dataset: CrowdFix |
CrowdFlow | The **TUB CrowdFlow** is a synthetic dataset that contains 10 sequences showing 5 scenes. Each scene is rendered twice: with a static point of view and with a dynamic camera to simulate drone/UAV-based surveillance. The scenes are rendered using Unreal Engine at HD resolution (1280x720) at 25 fps, which is typical for current commercial CCTV surveillance systems. The total number of frames is 3200.
Each sequence has the following ground-truth data:
* Optical flow fields
* Person trajectories (up to 1451)
* Dense pixel trajectories | Provide a detailed description of the following dataset: CrowdFlow |
CrowS-Pairs | CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. | Provide a detailed description of the following dataset: CrowS-Pairs |
CRVD | The CRVD dataset consists of 55 groups of noisy-clean videos with ISO values ranging from 1600 to 25600. | Provide a detailed description of the following dataset: CRVD |
CS | This dataset is constructed based on free-access online fiction tagged with sci-fi, urban novel, love story, youth, etc. It is used for Writing Polishment with Simile (WPS), a task that aims to polish plain text with similes.
All similes are extracted with rich regular expressions, and the extraction precision is estimated at 92% by labelling 500 randomly extracted samples.
It contains 5M samples for training and 2.5k each for validation and test.
Source: [https://github.com/mrzjy/writing-polishment-with-simile](https://github.com/mrzjy/writing-polishment-with-simile) | Provide a detailed description of the following dataset: CS |
CSD | Comprises 4 different subsets - Flat, House, Priory and Lab - each containing a number of different sequences that can be successfully relocalised against each other. | Provide a detailed description of the following dataset: CSD |
CSPubSum | CSPubSum is a dataset for summarisation of computer science publications, created by exploiting a large resource of author-provided summaries, with straightforward ways of extending it further. | Provide a detailed description of the following dataset: CSPubSum |
CSQA | Contains around 200K dialogs with a total of 1.6M turns. Further, unlike existing large scale QA datasets which contain simple questions that can be answered from a single tuple, the questions in the dialogs require a larger subgraph of the KG. | Provide a detailed description of the following dataset: CSQA |
CSS10 | A collection of single speaker speech datasets for ten languages. It is composed of short audio clips from LibriVox audiobooks and their aligned texts. | Provide a detailed description of the following dataset: CSS10 |
CTC | A dataset that allows exploration of cross-modal retrieval where images contain scene-text instances. | Provide a detailed description of the following dataset: CTC |
CTPelvic1K | Curates a large pelvic CT dataset pooled from multiple sources and different manufacturers, including 1,184 CT volumes and over 320,000 slices with different resolutions and a variety of appearance variations. | Provide a detailed description of the following dataset: CTPelvic1K |
Cube++ | **Cube++** is a novel dataset for the color constancy problem that extends the Cube+ dataset. It includes 4890 images of different scenes under various conditions. For calculating the ground truth illumination, a calibration object with known surface colors was placed in every scene.
Source: [https://github.com/Visillect/CubePlusPlus](https://github.com/Visillect/CubePlusPlus)
Image Source: [https://github.com/Visillect/CubePlusPlus](https://github.com/Visillect/CubePlusPlus) | Provide a detailed description of the following dataset: Cube++ |
CubiCasa5K | **CubiCasa5K** is a large-scale floorplan image dataset containing 5000 samples annotated into over 80 floorplan object categories. The dataset annotations are performed in a dense and versatile manner by using polygons for separating the different objects.
Source: [https://github.com/CubiCasa/CubiCasa5k](https://github.com/CubiCasa/CubiCasa5k) | Provide a detailed description of the following dataset: CubiCasa5K |
CUHK-QA | CUHK-QA is a dataset for natural language-based person search using iterative questioning.
The dataset consists of 400 images of 360 people, and 20 participants answered 5 specific questions about the appearance of each person, so each participant labelled 20 images. The average length of the combined description for each image is 39.15. All the images have been taken from the test set of the CUHK-PEDES dataset. | Provide a detailed description of the following dataset: CUHK-QA |
CUHK-Shadow | Collects shadow images for multiple scenarios, compiled into a new dataset of 10,500 shadow images, each with a labeled ground-truth mask, to support shadow detection in the complex real world. The dataset covers a rich variety of scene categories, with diverse shadow sizes, locations, contrasts, and types. | Provide a detailed description of the following dataset: CUHK-Shadow |
Cumulo | A benchmark dataset for training and evaluating global cloud classification models. It consists of one year of 1km resolution MODIS hyperspectral imagery merged with pixel-width 'tracks' of CloudSat cloud labels. | Provide a detailed description of the following dataset: Cumulo |
Curated AFD | The **Curated AFD** dataset is a curated version of the Asian Face Dataset (AFD) for face recognition research. The original AFD dataset has been curated to remove wrong identity labels, duplicate images and duplicate subjects.
Source: [https://arxiv.org/abs/2004.03074](https://arxiv.org/abs/2004.03074) | Provide a detailed description of the following dataset: Curated AFD |
Curation Corpus | The Curation Corpus is a collection of 40,000 professionally-written summaries of news articles, with links to the articles themselves. | Provide a detailed description of the following dataset: Curation Corpus |
CURE-TSD | Based on simulated challenging conditions that correspond to adversaries that can occur in real-world environments and systems. | Provide a detailed description of the following dataset: CURE-TSD |
CURE-TSR | Includes more than two million traffic sign images that are based on real-world and simulator data. | Provide a detailed description of the following dataset: CURE-TSR |
Curiosity | The **Curiosity** dataset consists of 14K dialogs (with 181K utterances) with fine-grained knowledge groundings, dialog act annotations, and other auxiliary annotations. In this dataset, users and virtual assistants converse about geographic topics like geopolitical entities and locations. This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.
Source: [https://github.com/facebookresearch/curiosity](https://github.com/facebookresearch/curiosity)
Image Source: [https://www.pedro.ai/curiosity](https://www.pedro.ai/curiosity) | Provide a detailed description of the following dataset: Curiosity |
Czech restaurant information | Czech restaurant information is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015). | Provide a detailed description of the following dataset: Czech restaurant information |
CzEng 2.0 Parallel Corpus | Czech-English parallel corpus CzEng 2.0 consisting of over 2 billion words (2 "gigawords") in each language. The corpus contains document-level information and is filtered with several techniques to lower the amount of noise. | Provide a detailed description of the following dataset: CzEng 2.0 Parallel Corpus |
D2City | A large-scale comprehensive collection of dashcam videos collected by vehicles on DiDi's platform. D2-City contains more than 10000 video clips which deeply reflect the diversity and complexity of real-world traffic scenarios in China. | Provide a detailed description of the following dataset: D2City |
MVTec D2S | **MVTec D2S** is a benchmark for instance-aware semantic segmentation in an industrial domain. It contains 21,000 high-resolution images with pixel-wise labels of all object instances. The objects comprise groceries and everyday products from 60 categories. The benchmark is designed such that it resembles the real-world setting of an automatic checkout, inventory, or warehouse system. The training images only contain objects of a single class on a homogeneous background, while the validation and test sets are much more complex and diverse. | Provide a detailed description of the following dataset: MVTec D2S |
DAD | Contains normal driving videos together with a set of anomalous actions in its training set. In the test set of the DAD dataset, there are unseen anomalous actions that still need to be winnowed out from normal driving. | Provide a detailed description of the following dataset: DAD |
DAIS | A large benchmark dataset containing 50K human judgments for 5K distinct sentence pairs in the English dative alternation. This dataset includes 200 unique verbs and systematically varies the definiteness and length of arguments. | Provide a detailed description of the following dataset: DAIS |
DAiSEE | DAiSEE is a multi-label video classification dataset comprising 9,068 video snippets captured from 112 users for recognizing the user affective states of boredom, confusion, engagement, and frustration "in the wild". The dataset has four levels of labels, namely very low, low, high, and very high, for each of the affective states; these are crowd annotated and correlated with a gold standard annotation created using a team of expert psychologists. | Provide a detailed description of the following dataset: DAiSEE |
Danbooru2020 | A large-scale anime image database with 4.2m+ images annotated with 130m+ text tags describing image contents in detail; it can be useful for machine learning purposes such as image recognition and generation. It has been applied to a [wide variety of applications](https://www.gwern.net/Danbooru2020#applications), particularly generative modeling.
Danbooru20xx is updated annually with the previous years' images & metadata improvements. Previous iterations: Danbooru2017, Danbooru2018, Danbooru2019. | Provide a detailed description of the following dataset: Danbooru2020 |
DaNE | Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme. | Provide a detailed description of the following dataset: DaNE |
DART | DART is a large dataset for open-domain structured data record to text generation. DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. | Provide a detailed description of the following dataset: DART |
DAVANet | A large-scale multi-scene dataset for stereo deblurring, containing 20,637 blurry-sharp stereo image pairs from 135 diverse sequences and their corresponding bidirectional disparities. | Provide a detailed description of the following dataset: DAVANet |
Da Vinci Dataset | A line drawing restoration dataset which consists of 71 line drawing sketches by Leonardo Da Vinci. | Provide a detailed description of the following dataset: Da Vinci Dataset |
DAWN | DAWN emphasizes a diverse traffic environment (urban, highway and freeway) as well as a rich variety of traffic flow. The DAWN dataset comprises a collection of 1000 images from real-traffic environments, which are divided into four sets of weather conditions: fog, snow, rain and sandstorms. The dataset is annotated with object bounding boxes for autonomous driving and video surveillance scenarios. This data helps in interpreting the effects caused by adverse weather conditions on the performance of vehicle detection systems. | Provide a detailed description of the following dataset: DAWN |
DAWT | The DAWT dataset consists of Densely Annotated Wikipedia Texts across multiple languages. The annotations include labeled text mentions mapping to entities (represented by their Freebase machine ids) as well as the type of the entity. The data set contains a total of 13.6M articles, 5.0B tokens, and 13.8M mention-entity co-occurrences. DAWT contains 4.8 times more anchor text to entity links than originally present in the Wikipedia markup. Moreover, it spans several languages including English, Spanish, Italian, German, French and Arabic. | Provide a detailed description of the following dataset: DAWT |
DBpedia NIF | The dataset provides the content of all articles for 128 Wikipedia languages. The dataset has been further enriched with about 25% more links and selected partitions published as Linked Data. | Provide a detailed description of the following dataset: DBpedia NIF |
DDAD | **DDAD** is a new autonomous driving benchmark from TRI (Toyota Research Institute) for long range (up to 250m) and dense depth estimation in challenging and diverse urban conditions. It contains monocular videos and accurate ground-truth depth (across a full 360 degree field of view) generated from high-density LiDARs mounted on a fleet of self-driving cars operating in a cross-continental setting. DDAD contains scenes from urban settings in the United States (San Francisco, Bay Area, Cambridge, Detroit, Ann Arbor) and Japan (Tokyo, Odaiba).
Source: [https://github.com/TRI-ML/DDAD](https://github.com/TRI-ML/DDAD)
Image Source: [https://github.com/TRI-ML/DDAD](https://github.com/TRI-ML/DDAD) | Provide a detailed description of the following dataset: DDAD |
DDD17 | DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car's on-board diagnostics interface. | Provide a detailed description of the following dataset: DDD17 |
DDD20 | The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions. | Provide a detailed description of the following dataset: DDD20 |
DDI-100 | The DDI-100 dataset is a synthetic dataset for text detection and recognition based on 7000 real unique document pages and consists of more than 100000 augmented images. The ground truth comprises text and stamp masks, and text and character bounding boxes with relevant annotations. | Provide a detailed description of the following dataset: DDI-100 |
DECADE | DECADE is a large-scale dataset of ego-centric videos from a dog's perspective as well as her corresponding movements. | Provide a detailed description of the following dataset: DECADE |
DeepFashion2 | DeepFashion2 is a versatile benchmark of four tasks including clothes detection, pose estimation, segmentation, and retrieval. It has 801K clothing items where each item has rich annotations such as style, scale, viewpoint, occlusion, bounding box, dense landmarks and masks. There are also 873K Commercial-Consumer clothes pairs. | Provide a detailed description of the following dataset: DeepFashion2 |
Deep Fashion3D | A novel benchmark and dataset for the evaluation of image-based garment reconstruction systems. Deep Fashion3D contains 2078 models reconstructed from real garments, which covers 10 different categories and 563 garment instances. It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images. In addition, each garment is randomly posed to enhance the variety of real clothing deformations. | Provide a detailed description of the following dataset: Deep Fashion3D |
DeepFish | **DeepFish** is a benchmark suite with a large-scale dataset to train and test methods for several computer vision tasks. The dataset consists of approximately 40 thousand images collected underwater from 20 habitats in the marine environments of tropical Australia. It contains classification labels as well as point-level and segmentation labels to enable a more comprehensive fish analysis benchmark. These labels enable models to learn to automatically monitor fish count, identify their locations, and estimate their sizes. | Provide a detailed description of the following dataset: DeepFish |
OST | One of the largest egocentric datasets for the object search task, with eye-tracking information available. | Provide a detailed description of the following dataset: OST |
DeepScores | DeepScores contains high quality images of musical scores, partitioned into 300,000 sheets of written music that contain symbols of different shapes and sizes. It aims to advance the state of the art in small object recognition by placing the question of object recognition in the context of scene understanding. | Provide a detailed description of the following dataset: DeepScores |
DeepWeeds | The DeepWeeds dataset consists of 17,509 images capturing eight different weed species native to Australia in situ with neighbouring flora. | Provide a detailed description of the following dataset: DeepWeeds |
DeepWriting | A new dataset of handwritten text with fine-grained annotations at the character level, accompanied by results from an initial user evaluation. | Provide a detailed description of the following dataset: DeepWriting |
Definite Pronoun Resolution Dataset | Comprises sentence pairs (i.e., twin sentences). | Provide a detailed description of the following dataset: Definite Pronoun Resolution Dataset |
DEFT Corpus | A term-definition pair corpus that reflects the complex reality of definitions in natural language, used in a SemEval shared task in which participants must extract definitions from free text. | Provide a detailed description of the following dataset: DEFT Corpus |
DemCare | Dem@Care provides the following datasets, which were collected during lab and home experiments. The data collection took place in the Greek Alzheimer’s Association for Dementia and Related Disorders in Thessaloniki, Greece, and in participants’ homes. The datasets include video and audio recordings as well as data from physiological sensors. Moreover, they include data from sleep, motion and plug sensors. | Provide a detailed description of the following dataset: DemCare |
Dengue | Benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. Collected as tweets and originally used in Livelo & Cheng (2018). | Provide a detailed description of the following dataset: Dengue |
DensePose | DensePose-COCO is a large-scale ground-truth dataset with image-to-surface correspondences manually annotated on 50K COCO images. It is used to train DensePose-RCNN to densely regress part-specific UV coordinates within every human region at multiple frames per second. | Provide a detailed description of the following dataset: DensePose |
DesireDB | Includes gold-standard labels for identifying statements of desire, textual evidence for desire fulfillment, and annotations for whether the stated desire is fulfilled given the evidence in the narrative context. | Provide a detailed description of the following dataset: DesireDB |
DeSMOG | A dataset of stance-labeled global warming (GW) sentences. | Provide a detailed description of the following dataset: DeSMOG |
DET | DET is a lane detection dataset that consists of the raw event data, accumulated images over 30ms and corresponding lane labels. Contains 17,103 lane instances, each of which is labeled pixel by pixel manually. | Provide a detailed description of the following dataset: DET |
UA-DETRAC | Consists of 100 challenging video sequences captured from real-world traffic scenes (over 140,000 frames with rich annotations, including occlusion, weather, vehicle category, truncation, and vehicle bounding boxes) for object detection, object tracking and MOT system. | Provide a detailed description of the following dataset: UA-DETRAC |
DFDC | The DFDC (Deepfake Detection Challenge) is a dataset for deepfake detection consisting of more than 100,000 videos.
The DFDC dataset consists of two versions:
- Preview dataset, with 5k videos, featuring two facial modification algorithms.
- Full dataset, with 124k videos, featuring eight facial modification algorithms. | Provide a detailed description of the following dataset: DFDC |
DFW | Contains over 11000 images of 1000 identities with different types of disguise accessories. The dataset is collected from the Internet, resulting in unconstrained face images similar to real world settings. | Provide a detailed description of the following dataset: DFW |
DHP19 | DHP19 is the first human pose dataset with data collected from DVS event cameras.
It has recordings from 4 synchronized 346x260 pixel DVS cameras and marker positions in 3D space from Vicon motion capture system. The files have event streams and 3D positions recorded from 17 subjects each performing 33 movements. | Provide a detailed description of the following dataset: DHP19 |
Diabetes60 | RGB-D images of 60 home-made Western dishes. Data was recorded using a Microsoft Kinect V2. | Provide a detailed description of the following dataset: Diabetes60 |
Diabetic Foot Ulcers Classification Datasets | Contains Diabetic Foot Ulcers (DFU) from different patients. | Provide a detailed description of the following dataset: Diabetic Foot Ulcers Classification Datasets |
Diabetic Retinopathy Detection Dataset | A large-scale retina image dataset. | Provide a detailed description of the following dataset: Diabetic Retinopathy Detection Dataset |
DiaBLa | A new English-French test set for the evaluation of Machine Translation (MT) for informal, written bilingual dialogue. The test set contains 144 spontaneous dialogues (5,700+ sentences) between native English and French speakers, mediated by one of two neural MT systems in a range of role-play settings. The dialogues are accompanied by fine-grained sentence-level judgments of MT quality, produced by the dialogue participants themselves, as well as by manually normalised versions and reference translations produced a posteriori. | Provide a detailed description of the following dataset: DiaBLa |
DialoGLUE | DialoGLUE is a natural language understanding benchmark for task-oriented dialogue designed to encourage dialogue research in representation-based transfer, domain adaptation, and sample-efficient task learning. It consists of 7 task-oriented dialogue datasets covering 4 distinct natural language understanding tasks. | Provide a detailed description of the following dataset: DialoGLUE |
DialogueFairness | The Dialogue Fairness dataset is used to evaluate and understand fairness in dialogue models, focusing on gender and racial biases.
Source: [https://github.com/zgahhblhc/DialogueFairness](https://github.com/zgahhblhc/DialogueFairness) | Provide a detailed description of the following dataset: DialogueFairness |
DIB-10K | A challenging image dataset with more than 10 thousand different types of birds. It was created to enable machine learning studies as well as ornithology research. | Provide a detailed description of the following dataset: DIB-10K |
DIPS | Contains biases but is two orders of magnitude larger than those used previously. | Provide a detailed description of the following dataset: DIPS |
Diseases in Neurology Case Reports Dataset | Extracts diseases and syndromes (DsSs) from more than 65,000 neurology case reports from 66 journals in PubMed over the last six decades from 1955 to 2017. | Provide a detailed description of the following dataset: Diseases in Neurology Case Reports Dataset |
DiveFace | A new face annotation dataset with balanced distribution between genders and ethnic origins. | Provide a detailed description of the following dataset: DiveFace |
DLBCL-Morph | **DLBCL-Morph** is a dataset containing 42 digitally scanned high-resolution tissue microarray (TMA) slides accompanied by clinical, cytogenetic, and geometric features from 209 DLBCL cases.
Source: [https://github.com/stanfordmlgroup/DLBCL-Morph](https://github.com/stanfordmlgroup/DLBCL-Morph) | Provide a detailed description of the following dataset: DLBCL-Morph |
dMelodies | **dMelodies** is a dataset of simple 2-bar melodies generated using 9 independent latent factors of variation, where each data point represents a unique melody based on the following constraints:
- Each melody will correspond to a unique scale (major, minor, blues, etc.).
- Each melody plays the arpeggios using the standard I-IV-V-I cadence chord pattern.
- Bar 1 plays the first 2 chords (6 notes), Bar 2 plays the second 2 chords (6 notes).
- Each played note is an 8th note.
Source: [https://github.com/ashispati/dmelodies_dataset](https://github.com/ashispati/dmelodies_dataset) | Provide a detailed description of the following dataset: dMelodies |
DMQA | The DeepMind Q&A Dataset consists of two datasets for Question Answering, CNN and DailyMail. Each dataset contains many documents (90k and 197k respectively), and each document is accompanied by approximately 4 questions on average. Each question is a sentence with one missing word/phrase which can be found in the accompanying document/context. | Provide a detailed description of the following dataset: DMQA |
doc2dial | A new dataset of goal-oriented dialogues that are grounded in the associated documents. | Provide a detailed description of the following dataset: doc2dial |
DocBank | A benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis. DocBank is constructed using a simple yet effective approach with weak supervision from LaTeX documents available on arXiv.com. | Provide a detailed description of the following dataset: DocBank |
DocVQA | DocVQA consists of 50,000 questions defined on 12,000+ document images. | Provide a detailed description of the following dataset: DocVQA |
DOGC | Intended to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. | Provide a detailed description of the following dataset: DOGC |
DoMSEV | The Dataset of Multimodal Semantic Egocentric Video (DoMSEV) contains 80-hours of multimodal (RGB-D, IMU, and GPS) data related to First-Person Videos with annotations for recorder profile, frame scene, activities, interaction, and attention. | Provide a detailed description of the following dataset: DoMSEV |
DoQA | A dataset with 2,437 dialogues and 10,917 QA pairs. The dialogues are collected from three Stack Exchange sites using the Wizard of Oz method with crowdsourcing. | Provide a detailed description of the following dataset: DoQA |
DOTmark | DOTmark is a benchmark for discrete optimal transport, which is designed to serve as a neutral collection of problems, where discrete optimal transport methods can be tested, compared to one another, and brought to their limits on large-scale instances. It consists of a variety of grayscale images, in various resolutions and classes, such as several types of randomly generated images, classical test images and real data from microscopy. | Provide a detailed description of the following dataset: DOTmark |
DPC-Captions | This is an open-source image captions dataset for the aesthetic evaluation of images.
The dataset is called **DPC-Captions**; it contains comments on up to five aesthetic attributes per image, obtained through knowledge transfer from a fully annotated small-scale dataset.
Source: [https://github.com/BestiVictory/DPC-Captions](https://github.com/BestiVictory/DPC-Captions) | Provide a detailed description of the following dataset: DPC-Captions |
DPED | A large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. | Provide a detailed description of the following dataset: DPED |
DpgMedia2019 | **DpgMedia2019** is a Dutch news dataset for partisanship detection. It contains more than 100K articles that are labelled on the publisher level and 776 articles that were crowdsourced using an internal survey platform and labelled on the article level.
Source: [https://github.com/dpgmedia/partisan-news2019](https://github.com/dpgmedia/partisan-news2019) | Provide a detailed description of the following dataset: DpgMedia2019 |
DramaQA | The DramaQA focuses on two perspectives: 1) Hierarchical QAs as an evaluation metric based on the cognitive developmental stages of human intelligence. 2) Character-centered video annotations to model local coherence of the story. The dataset is built upon the TV drama "Another Miss Oh" and it contains 17,983 QA pairs from 23,928 various length video clips, with each QA pair belonging to one of four difficulty levels. | Provide a detailed description of the following dataset: DramaQA |
Dreaddit | Consists of 190K posts from five different categories of Reddit communities. | Provide a detailed description of the following dataset: Dreaddit |
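
The rows above follow a simple three-column schema (dataset_name, description, prompt). As a minimal sketch, assuming the table has been exported to a CSV file (the file name `dataset_descriptions.csv` below is hypothetical, not part of the original release), the records could be loaded and paired as prompt/response examples like this:

```python
import csv

# Minimal sketch (assumption: the table above has been exported to a CSV file
# with columns dataset_name, description, prompt; the file name is hypothetical).
def load_prompt_description_pairs(path="dataset_descriptions.csv"):
    pairs = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        for row in reader:
            pairs.append({
                "name": row["dataset_name"],
                "prompt": row["prompt"],          # e.g. "Provide a detailed description of ..."
                "response": row["description"],   # the reference description
            })
    return pairs

if __name__ == "__main__":
    # Print the first few prompt/name pairs as a quick sanity check.
    for example in load_prompt_description_pairs()[:3]:
        print(example["name"], "->", example["prompt"])
```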