dataset_name | description | prompt |
|---|---|---|
P-DESTRE | Provides consistent ID annotations across multiple days, making it suitable for the extremely challenging problem of person search, i.e., where no clothing information can be reliably used. Apart from this feature, the P-DESTRE annotations enable research on UAV-based pedestrian detection, tracking, re-identification and soft biometric solutions. | Provide a detailed description of the following dataset: P-DESTRE |
PEC | A novel large-scale multi-domain dataset for persona-based empathetic conversations. | Provide a detailed description of the following dataset: PEC |
PedX | PedX is a large-scale multi-modal collection of pedestrians at complex urban intersections. The dataset provides high-resolution stereo images and LiDAR data with manual 2D and automatic 3D annotations. The data was captured using two pairs of stereo cameras and four Velodyne LiDAR sensors. | Provide a detailed description of the following dataset: PedX |
People Snapshot Dataset | Enables detailed human body model reconstruction in clothing from a single monocular RGB video without requiring a pre-scanned template or manually clicked points. | Provide a detailed description of the following dataset: People Snapshot Dataset |
PerKey | A corpus of 553k news articles from six Persian news websites and agencies with relatively high quality author extracted keyphrases, which is then filtered and cleaned to achieve higher quality keyphrases. | Provide a detailed description of the following dataset: PerKey |
Perlex | Persian dataset for relation extraction, which is an expert-translated version of the "Semeval-2010-Task-8" dataset. | Provide a detailed description of the following dataset: Perlex |
Permuted bAbI dialog task | The Permuted bAbI dialog task is an adaptation of the "Dialog bAbI tasks data" dataset released by Facebook. It is used for evaluating end-to-end dialog systems in the restaurant domain. This dataset introduces multiple valid next utterances to the original bAbI dialog tasks, which allows evaluation of end-to-end goal-oriented dialog systems in a more realistic setting.
Source: [https://github.com/IBM/permuted-bAbI-dialog-tasks](https://github.com/IBM/permuted-bAbI-dialog-tasks) | Provide a detailed description of the following dataset: Permuted bAbI dialog task |
PerSenT | PerSenT is a dataset of crowd-sourced annotations of the sentiment expressed by the authors towards the main entities in news articles. The dataset also includes paragraph-level sentiment annotations to provide more fine-grained supervision for the task. | Provide a detailed description of the following dataset: PerSenT |
Perspectrum | Perspectrum is a dataset of claims, perspectives and evidence, making use of online debate websites to create the initial data collection, and augmenting it using search engines in order to expand and diversify the dataset. Crowd-sourcing was used to filter out noise and ensure high-quality data. The dataset contains 1k claims, accompanied with pools of 10k and 8k perspective sentences and evidence paragraphs, respectively. | Provide a detailed description of the following dataset: Perspectrum |
PEYMA | Peyma is a Persian NER dataset to train and test NER systems. It is constructed by collecting documents from ten news websites. | Provide a detailed description of the following dataset: PEYMA |
PFN-PIC | This dataset is a collection of spoken language instructions for a robotic system to pick and place common objects. Text instructions and corresponding object images are provided.
The dataset consists of situations where the robot is instructed by the operator to pick up a specific object and move it to another location: for example, "Move the blue and white tissue box to the top right bin."
This dataset consists of RGBD images, bounding box annotations, destination box annotations, and text instructions.
Source: [https://github.com/pfnet-research/picking-instruction](https://github.com/pfnet-research/picking-instruction)
Image Source: [https://github.com/pfnet-research/picking-instruction](https://github.com/pfnet-research/picking-instruction) | Provide a detailed description of the following dataset: PFN-PIC |
PG-19 | A new open-vocabulary language modelling benchmark derived from books. | Provide a detailed description of the following dataset: PG-19 |
PGR | Phenotype-Gene Relations (PGR) is a corpus that consists of 1,712 abstracts, 5,676 human phenotype annotations, 13,835 gene annotations, and 4,283 relations. | Provide a detailed description of the following dataset: PGR |
PheMT | **PheMT** is a phenomenon-wise dataset designed for evaluating the robustness of Japanese-English machine translation systems. The dataset is based on the MTNT dataset, with additional annotations of four linguistic phenomena common in UGC: Proper Noun, Abbreviated Noun, Colloquial Expression, and Variant. | Provide a detailed description of the following dataset: PheMT |
PHINC | PHINC is a parallel corpus of 13,738 code-mixed English-Hindi sentences and their corresponding translations in English. The sentences were translated manually by the annotators. | Provide a detailed description of the following dataset: PHINC |
Photi-LakeIce | A new benchmark dataset of webcam images, Photi-LakeIce, from multiple cameras and two different winters, along with pixel-wise ground truth annotations. | Provide a detailed description of the following dataset: Photi-LakeIce |
PhotoBook | A large-scale collection of visually-grounded, task-oriented dialogues in English designed to investigate shared dialogue history accumulating during conversation. | Provide a detailed description of the following dataset: PhotoBook |
Photographic Defect Severity | A large-scale dataset of user annotations on seven common photographic defects. | Provide a detailed description of the following dataset: Photographic Defect Severity |
Photoswitch | A benchmark for molecular machine learning where improvements in model performance can be immediately observed in the throughput of promising molecules synthesized in the lab. Photoswitches are a versatile class of molecule for medical and renewable energy applications where a molecule's efficacy is governed by its electronic transition wavelengths. | Provide a detailed description of the following dataset: Photoswitch |
PhotoSynth | The **PhotoSynth** (PS) dataset for patch matching consists of a total of 30 scenes with 25 scenes for training and 5 scenes for validation. The different image pairs are captured in different illumination conditions, at different scales and with different viewpoints.
Source: [https://arxiv.org/abs/1801.01466](https://arxiv.org/abs/1801.01466)
Image Source: [https://github.com/rmitra/PS-Dataset](https://github.com/rmitra/PS-Dataset) | Provide a detailed description of the following dataset: PhotoSynth |
PhraseCut | **PhraseCut** is a dataset consisting of 77,262 images and 345,486 phrase-region pairs. The dataset is collected on top of the Visual Genome dataset and uses the existing annotations to generate a challenging set of referring phrases for which the corresponding regions are manually annotated. | Provide a detailed description of the following dataset: PhraseCut |
PHYRE | Benchmark for physical reasoning that contains a set of simple classical mechanics puzzles in a 2D physical environment. The benchmark is designed to encourage the development of learning algorithms that are sample-efficient and generalize well across puzzles. | Provide a detailed description of the following dataset: PHYRE |
pic2kcal | The pic2kcal benchmark for calorie prediction contains 308,000 images from over 70,000 recipes, including photographs, ingredients and instructions, matched with nutritional information.
Source: [https://arxiv.org/abs/2011.01082](https://arxiv.org/abs/2011.01082)
Image Source: [https://github.com/phiresky/pic2kcal](https://github.com/phiresky/pic2kcal) | Provide a detailed description of the following dataset: pic2kcal |
PicTropes | PicTropes is a dataset of films and the tropes that they use created from the database DBTropes.org. | Provide a detailed description of the following dataset: PicTropes |
Pinterest Complete The Look | The Pinterest Complete the Look dataset consists of over 1 million outfits and 4 million objects. It can be used to predict style compatibility between fashion items in order to recommend complementary items that complete an outfit.
Source: [https://arxiv.org/abs/2006.10792](https://arxiv.org/abs/2006.10792)
Image Source: [https://github.com/eileenforwhat/complete-the-look-dataset](https://github.com/eileenforwhat/complete-the-look-dataset) | Provide a detailed description of the following dataset: Pinterest Complete The Look |
pioNER | The **pioNER** corpus provides gold-standard and automatically generated named-entity datasets for the Armenian language.
The automatically generated corpus is generated from Wikipedia. The gold-standard set is a collection of over 250 news articles from iLur.am with manual named-entity annotation. It includes sentences from political, sports, local and world news, and is comparable in size with the test sets of other languages.
Source: [https://github.com/ispras-texterra/pioner](https://github.com/ispras-texterra/pioner) | Provide a detailed description of the following dataset: pioNER |
PIRM | The PIRM dataset consists of 200 images, which are divided into two equal sets for validation and testing. These images cover diverse contents, including people, objects, environments, flora, natural scenery, etc. Images vary in size, and are typically ~300K pixels in resolution. | Provide a detailed description of the following dataset: PIRM |
PIT | Paraphrase and Semantic Similarity in Twitter (PIT) presents a constructed Twitter Paraphrase Corpus that contains 18,762 sentence pairs. | Provide a detailed description of the following dataset: PIT |
Plaintext Jokes | There are about 208,000 jokes in this database, scraped from three sources. | Provide a detailed description of the following dataset: Plaintext Jokes |
Planar Manipulator Dataset | The dataset consists of 90,000 color videos that show a planar robot manipulator executing articulated manipulation tasks. More precisely, the manipulator grasps a circular object of random color and size and places it on top of a square object/platform, also of random color and size. The initial configurations (location, size and color) of the objects were randomly sampled during generation. Unlike other datasets such as the moving MNIST dataset, the samples comprise a goal-oriented task as described, making it more suitable for testing the prediction capabilities of an ML model. For instance, one can use it as a toy dataset to investigate the capacity and output behavior of a deep neural network before testing it on real-world data.
Source: [https://github.com/ferreirafabio/PlanarManipulatorDataset](https://github.com/ferreirafabio/PlanarManipulatorDataset)
Image Source: [https://github.com/ferreirafabio/PlanarManipulatorDataset](https://github.com/ferreirafabio/PlanarManipulatorDataset) | Provide a detailed description of the following dataset: Planar Manipulator Dataset |
PlantDoc | PlantDoc is a dataset for visual plant disease detection. The dataset contains 2,598 data points in total across 13 plant species and up to 17 classes of diseases, involving approximately 300 human hours of effort in annotating internet scraped images. | Provide a detailed description of the following dataset: PlantDoc |
Plant Seedlings Dataset | A database of images of approximately 960 unique plants belonging to 12 species at several growth stages is made publicly available. It comprises annotated RGB images with a physical resolution of roughly 10 pixels per mm. | Provide a detailed description of the following dataset: Plant Seedlings Dataset |
PMC-SA | **PMC-SA** (**PMC Structured Abstracts**) is a dataset of academic publications, used for the task of structured summarization.
Source: [https://arxiv.org/abs/1905.07695](https://arxiv.org/abs/1905.07695) | Provide a detailed description of the following dataset: PMC-SA |
PMIndia | Consists of parallel sentences which pair 13 major languages of India with English. The corpus includes up to 56,000 sentences for each language pair. | Provide a detailed description of the following dataset: PMIndia |
PMLB | The **Penn Machine Learning Benchmarks** (**PMLB**) is a large, curated set of benchmark datasets used to evaluate and compare supervised machine learning algorithms. These datasets cover a broad range of applications, and include binary/multi-class classification problems and regression problems, as well as combinations of categorical, ordinal, and continuous features. | Provide a detailed description of the following dataset: PMLB |
pn-summary | Pn-summary is a dataset for Persian abstractive text summarization.
Source: [https://arxiv.org/abs/2012.11204](https://arxiv.org/abs/2012.11204)
Image Source: [https://github.com/hooshvare/pn-summary](https://github.com/hooshvare/pn-summary) | Provide a detailed description of the following dataset: pn-summary |
PoC | A dataset containing the documents, source and fusion sentences, and human annotations of points of correspondence between sentences. The dataset bridges the gap between coreference resolution and summarization. | Provide a detailed description of the following dataset: PoC |
PointDenoisingBenchmark | The **PointDenoisingBenchmark** dataset features 28 different shapes, split into 18 training shapes and 10 test shapes.
* PointDenoisingBenchmark for denoising: contains noisy point clouds with different levels of Gaussian noise and the corresponding clean ground truths.
* PointDenoisingBenchmark for outlier removal: contains noisy point clouds with different levels of noise and outlier density and the corresponding clean ground truths. | Provide a detailed description of the following dataset: PointDenoisingBenchmark |
PoKi | **PoKi** is a corpus of 61,330 poems written by children from grades 1 to 12. PoKi is especially useful in studying child language because it comes with information about the age of the child authors (their grade).
Source: [https://github.com/whipson/PoKi-Poems-by-Kids](https://github.com/whipson/PoKi-Poems-by-Kids) | Provide a detailed description of the following dataset: PoKi |
PolEmo 2.0 | PolEmo 2.0: Corpus of Multi-Domain Consumer Reviews, evaluation data for article presented at CoNLL. | Provide a detailed description of the following dataset: PolEmo 2.0 |
PolicyQA | A dataset that contains 25,017 reading comprehension style examples curated from an existing corpus of 115 website privacy policies. PolicyQA provides 714 human-annotated questions written for a wide range of privacy practices. | Provide a detailed description of the following dataset: PolicyQA |
Polish Political Advertising Dataset | A dataset for detecting specific text chunks and categories of political advertising in the Polish language. It contains 1,705 human-annotated tweets tagged with nine categories, which constitute campaigning under Polish electoral law. | Provide a detailed description of the following dataset: Polish Political Advertising Dataset |
PolitiFact | Fact-checking (FC) articles which contains pairs (multimodal tweet and a FC-article) from politifact.com. | Provide a detailed description of the following dataset: PolitiFact |
PolSF | PolSF collects five open polarimetric SAR images of the San Francisco area. These five images were acquired by different satellites at different times, which gives the set great scientific research value. | Provide a detailed description of the following dataset: PolSF |
POLUSA | A dataset that represents the online media landscape as perceived by an average US news consumer. The dataset contains 0.9M articles covering policy topics published between Jan. 2017 and Aug. 2019 by 18 news outlets representing the political spectrum. Each outlet is labeled by its political leaning derived using a systematic aggregation of eight data sources. The news dataset is balanced with respect to publication date and outlet popularity. POLUSA enables studying a variety of subjects, e.g., media effects and political partisanship. | Provide a detailed description of the following dataset: POLUSA |
Polyglot-NER | Polyglot-NER builds massive multilingual annotators with minimal human expertise and intervention. | Provide a detailed description of the following dataset: Polyglot-NER |
PoMo | PoMo consists of more than 231K sentences with post-modifiers and associated facts extracted from Wikidata for around 57K unique entities. | Provide a detailed description of the following dataset: PoMo |
Pow-Wow | A dataset for studying situated goal-directed human communication. | Provide a detailed description of the following dataset: Pow-Wow |
prachathai-67k | The prachathai-67k dataset was scraped from the news site Prachathai excluding articles with less than 500 characters of body text (mostly images and cartoons). It contains 67,889 articles with 51,797 tags from August 24, 2004 to November 15, 2018. | Provide a detailed description of the following dataset: prachathai-67k |
PreCo | A large-scale English dataset for coreference resolution. The dataset is designed to embody the core challenges in coreference, such as entity representation, by alleviating the challenge of low overlap between training and test sets and enabling separated analysis of mention detection and mention clustering. | Provide a detailed description of the following dataset: PreCo |
PRECOG | The **PREdiction of Clinical Outcomes from Genomic profiles** (or PRECOG) encompasses 166 cancer expression data sets, including overall survival data for ~18,000 patients diagnosed with 39 distinct malignancies. | Provide a detailed description of the following dataset: PRECOG |
PRED18 | Twenty DAVIS recordings with a total duration of about 1.25 hours were obtained by driving the two robots in the robot arena of the University of Ulster in Londonderry. | Provide a detailed description of the following dataset: PRED18 |
PreSIL | Consists of over 50,000 frames and includes high-definition images with full resolution depth information, semantic segmentation (images), point-wise segmentation (point clouds), and detailed annotations for all vehicles and people. | Provide a detailed description of the following dataset: PreSIL |
PressurePose | A synthetic dataset with 206K pressure images with 3D human poses and shapes. | Provide a detailed description of the following dataset: PressurePose |
Procedural Human Action Videos | Procedural Human Action Videos contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. | Provide a detailed description of the following dataset: Procedural Human Action Videos |
Procon20 | A novel stance detection dataset covering 419 different controversial issues and their related pros and cons collected by procon.org in nonpartisan format. | Provide a detailed description of the following dataset: Procon20 |
Products-10K | Contains 10,000 fine-grained SKU-level products frequently bought by online customers in JD.com. | Provide a detailed description of the following dataset: Products-10K |
Proposal Flow Datasets | Dataset that can be used to evaluate both general semantic flow techniques and region-based approaches such as proposal flow. | Provide a detailed description of the following dataset: Proposal Flow Datasets |
Prostate MRI Segmentation Dataset | This prostate MRI segmentation dataset is collected from six different data sources.
Source: [https://github.com/liuquande/SAML](https://github.com/liuquande/SAML) | Provide a detailed description of the following dataset: Prostate MRI Segmentation Dataset |
ProtoQA | **ProtoQA** is a question answering dataset for training and evaluating common sense reasoning capabilities of artificial intelligence systems in prototypical situations. The training set is gathered from an existing set of questions played on the long-running international game show Family Feud. The hidden evaluation set is created by gathering answers for each question from 100 crowd-workers.
Source: [https://github.com/iesl/protoqa-data](https://github.com/iesl/protoqa-data) | Provide a detailed description of the following dataset: ProtoQA |
Proto Summ | This is a large-scale court judgment dataset, where each judgment is a summary of the case description with a patternized style. It contains 2,003,390 court judgment documents. The case description is used as the input, and the court judgment as the summary. The average lengths of the input documents and summaries are 595.15 words and 273.57 words respectively.
Source: [https://arxiv.org/pdf/1909.08837.pdf](https://arxiv.org/pdf/1909.08837.pdf) | Provide a detailed description of the following dataset: Proto Summ |
PROX | A dataset composed of 12 different 3D scenes and RGB sequences of 20 subjects moving in and interacting with the scenes. | Provide a detailed description of the following dataset: PROX |
PS-Battles | The PS-Battles dataset is gathered from a large community of image manipulation enthusiasts and provides a basis for media derivation and manipulation detection in the visual domain. The dataset consists of 102,028 images grouped into 11,142 subsets, each containing the original image as well as a varying number of manipulated derivatives. | Provide a detailed description of the following dataset: PS-Battles |
PST900 | **PST900** is a dataset of 894 synchronized and calibrated RGB and Thermal image pairs with per pixel human annotations across four distinct classes from the DARPA Subterranean Challenge.
Source: [https://arxiv.org/abs/1909.10980](https://arxiv.org/abs/1909.10980)
Image Source: [https://github.com/ShreyasSkandanS/pst900_thermal_rgb](https://github.com/ShreyasSkandanS/pst900_thermal_rgb) | Provide a detailed description of the following dataset: PST900 |
PTB-TIR | PTB-TIR is a Thermal InfraRed (TIR) pedestrian tracking benchmark, which provides 60 TIR sequences with manual annotations. The benchmark is used to fairly evaluate TIR trackers. | Provide a detailed description of the following dataset: PTB-TIR |
PTL | A dataset of pedestrian traffic lights containing over 5,000 photos taken at hundreds of intersections in Shanghai. | Provide a detailed description of the following dataset: PTL |
PubFig | The PubFig database is a large, real-world face dataset consisting of 58,797 images of 200 people collected from the internet. Unlike most other existing face datasets, these images are taken in completely uncontrolled situations with non-cooperative subjects. Thus, there is large variation in pose, lighting, expression, scene, camera, imaging conditions and parameters, etc. The PubFig dataset is similar in spirit to the Labeled Faces in the Wild (LFW) dataset. | Provide a detailed description of the following dataset: PubFig |
PUBHEALTH | **PUBHEALTH** is a comprehensive dataset for explainable automated fact-checking of public health claims. Each instance in the PUBHEALTH dataset has an associated veracity label (true, false, unproven, mixture). Furthermore, each instance in the dataset has an explanation text field. The explanation is a justification for why the claim has been assigned a particular veracity label.
Source: [https://github.com/neemakot/Health-Fact-Checking](https://github.com/neemakot/Health-Fact-Checking) | Provide a detailed description of the following dataset: PUBHEALTH |
PubLayNet | PubLayNet is a dataset for document layout analysis by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central. The size of the dataset is comparable to established computer vision datasets, containing over 360 thousand document images, where typical document layout elements are annotated. | Provide a detailed description of the following dataset: PubLayNet |
public_meetings | The **public_meetings** corpus contains 22 aligned meetings in total, each pairing an automatic transcription of the audio recording with a meeting report written by a professional. | Provide a detailed description of the following dataset: public_meetings |
Pump and dump dataset | The **Pump and dump dataset** is an annotated set of messages to detect cryptocurrency market manipulations. It consists of a list of pump and dumps arranged by groups on Telegram. All the pump and dumps in the dataset are on the trading pair SYM/BTC.
Source: [https://github.com/SystemsLab-Sapienza/pump-and-dump-dataset](https://github.com/SystemsLab-Sapienza/pump-and-dump-dataset) | Provide a detailed description of the following dataset: Pump and dump dataset |
QBSUM | A high-quality large-scale dataset consisting of 49,000+ data samples for the task of Chinese query-based document summarization. | Provide a detailed description of the following dataset: QBSUM |
QMUL-SurvFace | **QMUL-SurvFace** is a surveillance face recognition benchmark that contains 463,507 face images of 15,573 distinct identities captured in real-world uncooperative surveillance scenes over wide space and time. | Provide a detailed description of the following dataset: QMUL-SurvFace |
Q-Traffic | **Q-Traffic** is a large-scale traffic prediction dataset, which consists of three sub-datasets: query sub-dataset, traffic speed sub-dataset and road network sub-dataset.
Source: [https://github.com/JingqingZ/BaiduTraffic](https://github.com/JingqingZ/BaiduTraffic)
Image Source: [https://github.com/JingqingZ/BaiduTraffic](https://github.com/JingqingZ/BaiduTraffic) | Provide a detailed description of the following dataset: Q-Traffic |
QuAIL | A new kind of question-answering dataset that combines commonsense, text-based, and unanswerable questions, balanced for different genres and reasoning types. Questions carry reasoning-type annotations for 9 types of reasoning: temporal, causality, factoid, coreference, character properties, their belief states, subsequent entity states, event durations, and unanswerable. Genres include CC-licensed fiction, Voice of America news, blogs, and user stories from Quora. The dataset comprises 800 texts with 18 questions each (~14K questions). | Provide a detailed description of the following dataset: QuAIL |
Quda | Aims to help V-NLIs recognize analytic tasks from free-form natural language by training and evaluating cutting-edge multi-label classification models. The dataset contains diverse user queries, and each is annotated with one or multiple analytic tasks. | Provide a detailed description of the following dataset: Quda |
QuerYD | A large-scale dataset for retrieval and event localisation in video. A unique feature of the dataset is the availability of two audio tracks for each video: the original audio, and a high-quality spoken description of the visual content. | Provide a detailed description of the following dataset: QuerYD |
Query-Focused Video Summarization Dataset | Collects dense per-video-shot concept annotations. | Provide a detailed description of the following dataset: Query-Focused Video Summarization Dataset |
Quick, Draw! Dataset | The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. | Provide a detailed description of the following dataset: Quick, Draw! Dataset |
QuickDraw-Extended | Consists of 330,000 sketches and 204,000 photos spanning across 110 categories. | Provide a detailed description of the following dataset: QuickDraw-Extended |
Quizbowl | Consists of multiple sentences whose clues are arranged by difficulty (from obscure to obvious) and uniquely identify a well-known entity such as those found on Wikipedia. | Provide a detailed description of the following dataset: Quizbowl |
Qulac | A dataset on asking Questions for Lack of Clarity in open-domain information-seeking conversations. **Qulac** presents the first dataset and offline evaluation framework for studying clarifying questions in open-domain information-seeking conversational search systems.
Source: [https://github.com/aliannejadi/qulac](https://github.com/aliannejadi/qulac)
Image Source: [https://github.com/aliannejadi/qulac](https://github.com/aliannejadi/qulac) | Provide a detailed description of the following dataset: Qulac |
RAD | The dataset is useful for query-adaptive video summarization and annotated with diversity and query-specific relevance labels. | Provide a detailed description of the following dataset: RAD |
RADIATE | **RADIATE** (**RAdar Dataset In Adverse weaThEr**) is a new automotive dataset created by Heriot-Watt University which includes Radar, Lidar, Stereo Camera and GPS/IMU.
The data is collected in different weather scenarios (sunny, overcast, night, fog, rain and snow) to help the research community to develop new methods of vehicle perception.
The radar images are annotated in 7 different scenarios: Sunny (Parked), Sunny/Overcast (Urban), Overcast (Motorway), Night (Motorway), Rain (Suburban), Fog (Suburban) and Snow (Suburban). The dataset contains 8 different types of objects (car, van, truck, bus, motorbike, bicycle, pedestrian and group of pedestrians).
Source: [https://github.com/marcelsheeny/radiate_sdk](https://github.com/marcelsheeny/radiate_sdk)
Image Source: [https://github.com/marcelsheeny/radiate_sdk](https://github.com/marcelsheeny/radiate_sdk) | Provide a detailed description of the following dataset: RADIATE |
RadioTalk | **RadioTalk** is a corpus of speech recognition transcripts sampled from talk radio broadcasts in the United States between October of 2018 and March of 2019. The corpus is intended for use by researchers in the fields of natural language processing, conversational analysis, and the social sciences. The corpus encompasses approximately 2.8 billion words of automatically transcribed speech from 284,000 hours of radio, together with metadata about the speech, such as geographical location, speaker turn boundaries, gender, and radio program information.
Source: [https://github.com/social-machines/RadioTalk](https://github.com/social-machines/RadioTalk) | Provide a detailed description of the following dataset: RadioTalk |
RAF-ML | Real-world Affective Faces Multi Label (RAF-ML) is a multi-label facial expression dataset with around 5K highly diverse facial images downloaded from the Internet, featuring blended emotions and variability in subjects' identity, head poses, lighting conditions and occlusions. During annotation, 315 well-trained annotators were employed to ensure each image was annotated a sufficient number of independent times. Images with multi-peak label distributions were then selected to constitute RAF-ML.
RAF-ML provides 4,908 real-world images with blended emotions, a 6-dimensional expression distribution vector for each image, 5 accurate landmark locations and 37 automatic landmark locations, and baseline classifier outputs for multi-label emotion recognition. | Provide a detailed description of the following dataset: RAF-ML |
Raindrop | Raindrop is a set of image pairs, where each pair contains exactly the same background scene, yet one is degraded by raindrops and the other is free from raindrops. To obtain this, the images are captured through two pieces of exactly the same glass: one sprayed with water, and the other left clean. The dataset consists of 1,119 pairs of images, with various background scenes and raindrops. They were captured with a Sony A6000 and a Canon EOS 60. | Provide a detailed description of the following dataset: Raindrop |
RainNet | **RainNet** is a real (non-simulated) large-scale spatial precipitation downscaling dataset that contains 62,424 pairs of low-resolution and high-resolution precipitation maps spanning 17 years. Contrary to simulated data, this real dataset covers various types of real meteorological phenomena (e.g., Hurricane, Squall, etc.), and exhibits the physical characteristics - Temporal Misalignment, Temporal Sparsity and Fluid Properties - that challenge downscaling algorithms.
Source: [https://github.com/neuralchen/RainNet](https://github.com/neuralchen/RainNet)
Image Source: [https://github.com/neuralchen/RainNet](https://github.com/neuralchen/RainNet) | Provide a detailed description of the following dataset: RainNet |
RareAct | **RareAct** is a video dataset of unusual actions, including actions like “blend phone”, “cut keyboard” and “microwave shoes”. It aims at evaluating the zero-shot and few-shot compositionality of action recognition models for unlikely compositions of common action verbs and object nouns. It contains 122 different actions which were obtained by combining verbs and nouns rarely co-occurring together in the large-scale textual corpus from HowTo100M, but that frequently appear separately.
Source: [https://github.com/antoine77340/RareAct](https://github.com/antoine77340/RareAct)
Image Source: [https://github.com/antoine77340/RareAct](https://github.com/antoine77340/RareAct) | Provide a detailed description of the following dataset: RareAct |
RarePlanes Dataset | The dataset specifically focuses on the value of synthetic data in aiding computer vision algorithms to automatically detect aircraft and their attributes in satellite imagery. Although other synthetic/real combination datasets exist, RarePlanes is the largest openly available very-high-resolution dataset built to test the value of synthetic data from an overhead perspective. Previous research has shown that synthetic data can reduce the amount of real training data needed and potentially improve performance for many tasks in the computer vision domain. The real portion of the dataset consists of 253 Maxar WorldView-3 satellite scenes spanning 112 locations and 2,142 km^2, with 14,700 hand-annotated aircraft. | Provide a detailed description of the following dataset: RarePlanes Dataset |
RAVEN | RAVEN consists of 1,120,000 images and 70,000 RPM (Raven's Progressive Matrices) problems, equally distributed across 7 distinct figure configurations. | Provide a detailed description of the following dataset: RAVEN |
RAVEN-FAIR | **RAVEN-FAIR** is a modified version of the RAVEN dataset.
Source: [https://github.com/yanivbenny/RAVEN_FAIR](https://github.com/yanivbenny/RAVEN_FAIR) | Provide a detailed description of the following dataset: RAVEN-FAIR |
RCTW-17 | A large-scale competition dataset with 12,263 annotated images of Chinese text in the wild. Two tasks are set up: text localization and end-to-end recognition. The competition ran from January 20 to May 31, 2017, and received 23 valid submissions from 19 teams. | Provide a detailed description of the following dataset: RCTW-17 |
Real Rain Dataset | A large-scale dataset of ~29.5K rain/rain-free image pairs that covers a wide range of natural rain scenes. | Provide a detailed description of the following dataset: Real Rain Dataset |
ReCO | A human-curated Chinese Reading Comprehension dataset on Opinion. The questions in ReCO are opinion-based queries issued to a commercial search engine. The passages are provided by crowdworkers who extract the supporting snippet from the retrieved documents. | Provide a detailed description of the following dataset: ReCO |
RED | The **Real Embodied Dataset** (**RED**) is a computer vision large-scale dataset for grasping in cluttered scenes. It contains complete segmentation masks for partially occluded objects, with their order of occlusion.
Source: [https://arxiv.org/pdf/2004.13358.pdf](https://arxiv.org/pdf/2004.13358.pdf)
Image Source: [https://arxiv.org/pdf/2004.13358.pdf](https://arxiv.org/pdf/2004.13358.pdf) | Provide a detailed description of the following dataset: RED |
ReDial | ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users recommend movies to each other. The dataset consists of over 10,000 conversations centered around the theme of providing movie recommendations. | Provide a detailed description of the following dataset: ReDial |
ReDWeb-S | **ReDWeb-S** is a large-scale challenging dataset for Salient Object Detection. It contains a total of 3,179 images with various real-world scenes and high-quality depth maps. The dataset is split into a training set with 2,179 RGB-D image pairs and a testing set with the remaining 1,000 image pairs.
Source: [https://github.com/nnizhang/SMAC](https://github.com/nnizhang/SMAC)
Image Source: [https://github.com/nnizhang/SMAC](https://github.com/nnizhang/SMAC) | Provide a detailed description of the following dataset: ReDWeb-S |
redwood-3dscan | A dataset of more than ten thousand 3D scans of real objects. | Provide a detailed description of the following dataset: redwood-3dscan |
RefCOCO | This referring expression generation (REG) dataset was collected using the ReferItGame. In this two-player game, the first player is shown an image with a segmented target object and asked to write a natural language expression referring to the target object. The second player is shown only the image and the referring expression and asked to click on the corresponding object. If the players do their job correctly, they receive points and swap roles. If not, they are presented with a new object and image for description. Images in these collections were selected to contain two or more objects of the same object category. In the RefCOCO dataset, no restrictions are placed on the type of language used in the referring expressions. In a version of this dataset called RefCOCO+, players are disallowed from using location words in their referring expressions by adding "taboo" words to the ReferItGame. This dataset was collected to obtain a referring expression dataset focused on purely appearance-based descriptions, e.g., "the man in the yellow polka-dotted shirt" rather than "the second man from the left", which tend to be more interesting from a computer vision perspective and are independent of viewer perspective. RefCOCO consists of 142,209 referring expressions for 50,000 objects in 19,994 images, and RefCOCO+ has 141,564 expressions for 49,856 objects in 19,992 images. | Provide a detailed description of the following dataset: RefCOCO |