Columns: dataset_name (string, 2–128 chars), description (string, 1–9.7k chars), prompt (string, 59–185 chars)
N-CARS
A large real-world event-based dataset for object classification.
Provide a detailed description of the following dataset: N-CARS
NCBI Disease Corpus
NCBI Disease Corpus is a large-scale disease corpus consisting of 6900 disease mentions in 793 PubMed citations, derived from an earlier corpus. The corpus contains rich annotations, was developed by a team of 12 annotators (two people per annotation) and covers all sentences in a PubMed abstract. Disease mentions are categorized into Specific Disease, Disease Class, Composite Mention and Modifier categories.
Provide a detailed description of the following dataset: NCBI Disease Corpus
NCD
The **Natural-Color Dataset** (**NCD**) is an image colorization dataset where images are true to their colors. For example, a carrot will have an orange color in most images. Bananas will be either greenish or yellowish. It contains 723 images from the internet distributed in 20 categories. Each image has an object and a white background. Source: [https://github.com/saeed-anwar/ColorSurvey#dataset](https://github.com/saeed-anwar/ColorSurvey#dataset) Image Source: [https://github.com/saeed-anwar/ColorSurvey](https://github.com/saeed-anwar/ColorSurvey)
Provide a detailed description of the following dataset: NCD
NCLS
Presents two high-quality, large-scale cross-lingual summarization (CLS) datasets built from existing monolingual summarization datasets.
Provide a detailed description of the following dataset: NCLS
NDD20
Northumberland Dolphin Dataset 2020 (NDD20) is a challenging image dataset annotated for both coarse and fine-grained instance segmentation and categorisation. This dataset, the first release of the NDD, was created in response to the rapid expansion of computer vision into conservation research and the production of field-deployable systems suited to extreme environmental conditions -- an area with few open source datasets. NDD20 contains a large collection of above and below water images of two different dolphin species for traditional coarse and fine-grained segmentation.
Provide a detailed description of the following dataset: NDD20
N-Digit MNIST
N-Digit MNIST is a multi-digit MNIST-like dataset.
Provide a detailed description of the following dataset: N-Digit MNIST
PSU NRTDB
The **PSU Near-Regular Texture Database** is a texture dataset. It covers the spectrum of textures from completely regular to near-regular to irregular. It also includes video of near-regular textures in motion. The database also contains, or will include, test image sets with ground truth for translation, rotation, and reflection/glide-reflection symmetry detection algorithms.
Provide a detailed description of the following dataset: PSU NRTDB
NEEQ Annual Reports
Business taxonomies automatically constructed from the content of corporate annual reports.
Provide a detailed description of the following dataset: NEEQ Annual Reports
Negotiation Dialogues Dataset
This dataset consists of 5808 dialogues, based on 2236 unique scenarios. Each dialogue is converted into two training examples in the dataset, showing the complete conversation from the perspective of each agent. The perspectives differ on their input goals, output choice, and in special tokens marking whether a statement was read or written.
Provide a detailed description of the following dataset: Negotiation Dialogues Dataset
NERGRIT Corpus
NERGRIT comprises machine-learning-based NLP tools and a corpus used for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis.
Provide a detailed description of the following dataset: NERGRIT Corpus
NetiLook
NetiLook is a large-scale clothing dataset built to discover netizen-style comments.
Provide a detailed description of the following dataset: NetiLook
Neural Code Search Evaluation Dataset
The Neural Code Search Evaluation Dataset is an evaluation dataset consisting of natural language query and code snippet pairs, offered in the hope that future work in this area can use it as a common benchmark.
Provide a detailed description of the following dataset: Neural Code Search Evaluation Dataset
Neural Conversational QA
A modified dataset that has fewer spurious patterns than the original dataset, consequently allowing models to learn better.
Provide a detailed description of the following dataset: Neural Conversational QA
PAF Benchmark
Introduces three new neuromorphic vision datasets recorded with a novel neuromorphic vision sensor, the Dynamic Vision Sensor (DVS).
Provide a detailed description of the following dataset: PAF Benchmark
NewB
A text corpus of more than 200,000 sentences from eleven news sources regarding Donald Trump.
Provide a detailed description of the following dataset: NewB
New Brown Corpus
A new dataset for training and evaluating grounded language models.
Provide a detailed description of the following dataset: New Brown Corpus
NewSHead
The **NewSHead** dataset contains 369,940 English stories with 932,571 unique URLs: 359,940 stories for training, 5,000 for validation, and 5,000 for testing. Each news story contains at least three (and up to five) articles. The dataset is collected from news stories published between May 2018 and May 2019, where a proprietary clustering algorithm iteratively loads articles published in a time window and groups them based on content similarity. Up to five representative articles are picked from the cluster for generating the story headline. Curators from a crowd-sourcing platform are requested to provide a headline of up to 35 characters to describe the major information covered by the story.
Provide a detailed description of the following dataset: NewSHead
Newspaper Navigator
The largest dataset of visual content extracted from historic newspapers ever produced. The release comprises the Newspaper Navigator dataset and a fine-tuned visual content recognition model.
Provide a detailed description of the following dataset: Newspaper Navigator
NewsPH-NLI
NewsPH-NLI is a sentence entailment benchmark dataset in the low-resource Filipino language.
Provide a detailed description of the following dataset: NewsPH-NLI
NIND
An open dataset of real photographs with real noise, from identical scenes captured with varying ISO values. Most images are taken with a Fujifilm X-T1 and an XF18-55mm lens; other photographers are encouraged to contribute images for a more diverse crowdsourced effort. Source: [https://commons.wikimedia.org/wiki/Natural_Image_Noise_Dataset](https://commons.wikimedia.org/wiki/Natural_Image_Noise_Dataset) Image Source: [https://commons.wikimedia.org/wiki/Natural_Image_Noise_Dataset](https://commons.wikimedia.org/wiki/Natural_Image_Noise_Dataset)
Provide a detailed description of the following dataset: NIND
NLI-PT
The first Portuguese dataset compiled for Native Language Identification (NLI), the task of identifying an author's first language based on their second language writing. The dataset includes 1,868 student essays written by learners of European Portuguese, native speakers of the following L1s: Chinese, English, Spanish, German, Russian, French, Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish. NLI-PT includes the original student text and four different types of annotation: POS, fine-grained POS, constituency parses, and dependency parses. NLI-PT can be used not only in NLI but also in research on several topics in the field of Second Language Acquisition and educational NLP.
Provide a detailed description of the following dataset: NLI-PT
NLI-TR
Natural Language Inference in Turkish (NLI-TR) provides translations of two large English NLI datasets into Turkish; a team of experts validated their translation quality and fidelity to the original labels.
Provide a detailed description of the following dataset: NLI-TR
nocaps
The nocaps benchmark consists of 166,100 human-generated captions describing 15,100 images from the OpenImages validation and test sets.
Provide a detailed description of the following dataset: nocaps
NoReC
The Norwegian Review Corpus (NoReC) was created for the purpose of training and evaluating models for document-level sentiment analysis. More than 43,000 full-text reviews have been collected from major Norwegian news sources and cover a range of different domains, including literature, movies, video games, restaurants, music and theater, in addition to product reviews across a range of categories. Each review is labeled with a manually assigned score of 1–6, as provided by the rating of the original author.
Provide a detailed description of the following dataset: NoReC
NoReC_fine
NoReC_fine is a dataset for fine-grained sentiment analysis in Norwegian, annotated with respect to polar expressions, targets and holders of opinion.
Provide a detailed description of the following dataset: NoReC_fine
Noun-Noun Compound Dataset
The noun–noun compounds dataset created by Fares (2016) consists of compounds annotated with two different taxonomies of relations; that is, for each noun–noun compound there are two distinct relations, drawing on different linguistic schools. The dataset was derived from existing linguistic resources, such as NomBank (Meyers et al., 2004) and the Prague Czech-English Dependency Treebank 2.0 (PCEDT; Hajič et al., 2012).
Provide a detailed description of the following dataset: Noun-Noun Compound Dataset
NREC Agricultural Person-Detection
A dataset to encourage person-detection research in agricultural environments. It consists of labeled stereo video of people in orange and apple orchards taken from two perception platforms (a tractor and a pickup truck), along with vehicle position data from RTK GPS.
Provide a detailed description of the following dataset: NREC Agricultural Person-Detection
NSMC
This is a movie review dataset in the Korean language. Reviews were scraped from Naver Movies.
Provide a detailed description of the following dataset: NSMC
NTPairs
The **NTPairs** dataset consists of the pairs of news articles and their corresponding tweets that were published by eight media outlets in 2018. The eight outlets were selected to consider diverse outlets, which employ a different editing style for news sharing, in terms of publishing channels and political leaning. Source: [https://github.com/bywords/NTPairs](https://github.com/bywords/NTPairs) Image Source: [https://github.com/bywords/NTPairs](https://github.com/bywords/NTPairs)
Provide a detailed description of the following dataset: NTPairs
Numeric Fused-Head
The Numeric Fused-Head dataset consists of ~10K crowd-sourced, classified examples, labeled into 7 different categories across two types. In the first type, Reference, the missing head is referenced explicitly somewhere else in the discourse, either in the same sentence or in surrounding sentences. In the second type, Implicit, the missing head does not appear in the text and needs to be inferred by the reader or hearer based on the context or world knowledge. This type was labeled into the 6 most common categories of the dataset. Models are evaluated based on accuracy.
Provide a detailed description of the following dataset: Numeric Fused-Head
NumerSense
Contains 13.6k masked-word-prediction probes, 10.5k for fine-tuning and 3.1k for testing.
Provide a detailed description of the following dataset: NumerSense
NWPU-Crowd
NWPU-Crowd consists of 5,109 images with a total of 2,133,375 heads annotated with points and boxes. Compared with other real-world datasets, it contains various illumination scenes and has the largest density range (0~20,033).
Provide a detailed description of the following dataset: NWPU-Crowd
NYC3DCars
A vehicle detection database for vision tasks set in the real world.
Provide a detailed description of the following dataset: NYC3DCars
NYTWIT
A collection of over 2,500 novel English words published in the New York Times between November 2017 and March 2019, manually annotated for their class of novelty (such as lexical derivation, dialectal variation, blending, or compounding).
Provide a detailed description of the following dataset: NYTWIT
NYU Symmetry Database
The **NYU Symmetry** database contains 176 single-symmetry and 63 multiple-symmetry images (.png files) with accompanying ground-truth annotations (.mat files). Also included are a .m file to visualize the annotations on top of the images, and a .txt file with instructions on how to interpret the .mat annotations.
Provide a detailed description of the following dataset: NYU Symmetry Database
O4B
O4B is a dataset of 17,458 open access business articles and their reference summaries. The dataset introduces a new challenge for summarization in the business domain, requiring highly abstractive and more concise summaries as compared to other existing datasets.
Provide a detailed description of the following dataset: O4B
OASIS
A dataset for single-image 3D in the wild consisting of annotations of detailed 3D geometry for 140,000 images.
Provide a detailed description of the following dataset: OASIS
Objectron
The **Objectron** dataset is a collection of short, object-centric video clips, which are accompanied by AR session metadata that includes camera poses, sparse point-clouds and characterization of the planar surfaces in the surrounding environment. In each video, the camera moves around the object, capturing it from different angles. The data also contain manually annotated 3D bounding boxes for each object, which describe the object’s position, orientation, and dimensions. The dataset consists of 15K annotated video clips supplemented with over 4M annotated images in the following categories: bikes, books, bottles, cameras, cereal boxes, chairs, cups, laptops, and shoes. To ensure geo-diversity, the dataset is collected from 10 countries across five continents. Source: [https://github.com/google-research-datasets/Objectron](https://github.com/google-research-datasets/Objectron) Image Source: [https://github.com/google-research-datasets/Objectron](https://github.com/google-research-datasets/Objectron)
Provide a detailed description of the following dataset: Objectron
Objects365
Objects365 is a large-scale object detection dataset with 365 object categories and over 600K training images. More than 10 million high-quality bounding boxes are manually labeled through a three-step, carefully designed annotation pipeline. It is the largest fully annotated object detection dataset so far and establishes a more challenging benchmark for the community.
Provide a detailed description of the following dataset: Objects365
OBP
Open Bandit Dataset is a public, real-world logged bandit feedback dataset. The dataset is provided by ZOZO, Inc., the largest Japanese fashion e-commerce company, with over 5 billion USD market capitalization (as of May 2020). The company uses multi-armed bandit algorithms to recommend fashion items to users on its large-scale fashion e-commerce platform, ZOZOTOWN. A minimal loading sketch follows this entry.
Provide a detailed description of the following dataset: OBP
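As a hedged illustration of how this logged feedback is typically consumed, here is a minimal sketch assuming the companion Open Bandit Pipeline (`obp`) Python package and its `OpenBanditDataset` loader; the argument values shown are assumptions and may differ across package versions.

```python
# Minimal sketch, assuming the Open Bandit Pipeline ("obp") package.
from obp.dataset import OpenBanditDataset

# behavior_policy and campaign values are assumptions based on the
# dataset description ("random" logging policy, "all" campaign).
dataset = OpenBanditDataset(behavior_policy="random", campaign="all")

# Turn the logged data into batch bandit feedback: contexts, actions,
# rewards, and propensity scores for off-policy evaluation.
bandit_feedback = dataset.obtain_batch_bandit_feedback()
print(bandit_feedback["n_rounds"], dataset.n_actions)
```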
Obstacle Tower
Obstacle Tower is a high-fidelity, 3D, third-person, procedurally generated environment for reinforcement learning. An agent playing Obstacle Tower must learn to solve both low-level control and high-level planning problems in tandem while learning from pixels and a sparse reward signal. Unlike other benchmarks such as the Arcade Learning Environment, evaluation of agent performance in Obstacle Tower is based on an agent’s ability to perform well on unseen instances of the environment. A minimal interaction sketch follows this entry.
Provide a detailed description of the following dataset: Obstacle Tower
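A minimal interaction sketch, assuming the `obstacle_tower_env` package's Gym-style `ObstacleTowerEnv` wrapper; the binary path and constructor arguments here are assumptions, not a definitive setup.

```python
# Minimal sketch, assuming the obstacle_tower_env Gym-style wrapper.
from obstacle_tower_env import ObstacleTowerEnv

# The environment binary path below is a placeholder assumption.
env = ObstacleTowerEnv("./ObstacleTower/obstacletower", retro=True)
env.seed(1)  # evaluation emphasizes generalization to unseen instances
obs = env.reset()

done, total_reward = False, 0.0
while not done:
    # Random actions stand in for a learned pixel-based policy.
    obs, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward  # the reward signal is sparse
env.close()
```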
Occ-Traj120
**Occ-Traj120** is a trajectory dataset that contains occupancy representations of different local maps with associated trajectories. This dataset contains 400 locally-structured maps with occupancy representation and roughly 120K trajectories in total. Source: [https://github.com/soraxas/Occ-Traj120](https://github.com/soraxas/Occ-Traj120) Image Source: [https://github.com/soraxas/Occ-Traj120](https://github.com/soraxas/Occ-Traj120)
Provide a detailed description of the following dataset: Occ-Traj120
OCR-VQA
The OCR-VQA dataset contains 207,572 images and associated question-answer pairs.
Provide a detailed description of the following dataset: OCR-VQA
ODMS
**ODMS** is a dataset for learning Object Depth via Motion and Segmentation. ODMS training data are configurable and extensible, with each training example consisting of a series of object segmentation masks, camera movement distances, and ground truth object depth. As a benchmark evaluation, the dataset provides four ODMS validation and test sets with 15,650 examples in multiple domains, including robotics and driving. Source: [https://github.com/griffbr/ODMS](https://github.com/griffbr/ODMS) Image Source: [https://github.com/griffbr/ODMS](https://github.com/griffbr/ODMS)
Provide a detailed description of the following dataset: ODMS
ODSQA
The **ODSQA** dataset is a spoken dataset for question answering in Chinese. It contains more than three thousand questions from 20 different speakers. Source: [https://github.com/chiahsuan156/ODSQA](https://github.com/chiahsuan156/ODSQA)
Provide a detailed description of the following dataset: ODSQA
OffComBR
Offensive comments obtained from a Brazilian website.
Provide a detailed description of the following dataset: OffComBR
Dataset of Structured Queries and Spatial Relations
Provides 450,000 relevance annotations and 53 structured queries.
Provide a detailed description of the following dataset: Dataset of Structured Queries and Spatial Relations
OGB
The **Open Graph Benchmark** (**OGB**) is a collection of realistic, large-scale, and diverse benchmark datasets for machine learning on graphs. OGB datasets are automatically downloaded, processed, and split using the OGB Data Loader, and model performance can be evaluated with the OGB Evaluator in a unified manner. OGB is a community-driven initiative in active development. A minimal loading/evaluation sketch follows this entry.
Provide a detailed description of the following dataset: OGB
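A minimal sketch of the loader/evaluator workflow described above, using the `ogb` Python package; the dataset name `ogbg-molhiv` is only an example, and `y_true`/`y_pred` are placeholders for a model's labels and predictions.

```python
# Minimal sketch of the OGB Data Loader / Evaluator workflow.
from ogb.graphproppred import GraphPropPredDataset, Evaluator

# Instantiating the dataset downloads, processes, and caches it.
dataset = GraphPropPredDataset(name="ogbg-molhiv")
split_idx = dataset.get_idx_split()  # standardized train/valid/test split
train_idx = split_idx["train"]

# The Evaluator scores predictions in a unified, dataset-specific way.
evaluator = Evaluator(name="ogbg-molhiv")
# result = evaluator.eval({"y_true": y_true, "y_pred": y_pred})
```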
OGTD
A manually annotated dataset containing 4,779 posts from Twitter, each annotated as offensive or not offensive.
Provide a detailed description of the following dataset: OGTD
Oktoberfest Food Dataset
A realistic, diverse, and challenging dataset for object detection on images. The data was recorded at a beer tent in Germany and consists of 15 different categories of food and drink items.
Provide a detailed description of the following dataset: Oktoberfest Food Dataset
Okutama-Action
A new video dataset for aerial-view concurrent human action detection. It consists of 43 minute-long, fully annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors.
Provide a detailed description of the following dataset: Okutama-Action
OK-VQA
Outside Knowledge Visual Question Answering (OK-VQA) includes more than 14,000 questions that require external knowledge to answer.
Provide a detailed description of the following dataset: OK-VQA
OmniArt
Presents half a million samples and structured meta-data to encourage further research and societal engagement.
Provide a detailed description of the following dataset: OmniArt
Omni-MOT
Omni-MOT is a realistic, CARLA-based, large-scale dataset for multiple-vehicle tracking. It comprises over 14M frames, 250K tracks, 110 million bounding boxes, three weather conditions, three crowd levels, and three camera views in five simulated towns.
Provide a detailed description of the following dataset: Omni-MOT
One Million Posts Corpus
An annotated dataset consisting of user comments posted to an Austrian newspaper website (in German). DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website, there is a discussion section below each news article where readers engage in online discussions. The dataset contains a selection of user posts from the 12-month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and 1,000,000 unlabeled posts in the dataset. The labeled posts were annotated by professional forum moderators employed by the newspaper.
Provide a detailed description of the following dataset: One Million Posts Corpus
OneStopEnglish
Useful for two applications: automatic readability assessment and automatic text simplification. The corpus consists of 189 texts, each in three versions (567 in total).
Provide a detailed description of the following dataset: OneStopEnglish
OneStopQA
OneStopQA provides an alternative test set for reading comprehension that alleviates shortcomings of existing datasets and has a substantially higher human ceiling performance.
Provide a detailed description of the following dataset: OneStopQA
OOVD
This data set was created to understand the potential for machine learning, computer vision, and HPC to improve the energy efficiency aspects of traffic control by leveraging GRIDSMART traffic cameras as sensors for adaptive traffic control, with a sensitivity to the fuel consumption characteristics of the traffic in the camera’s visual field. GRIDSMART cameras—an existing, fielded commercial product—sense the presence of vehicles at intersections and replace more conventional sensors (such as inductive loops) to issue calls to traffic control. These cameras, which have horizon-to-horizon view, offer the potential for an improved view of the traffic environment which can be used to generate better control algorithms.
Provide a detailed description of the following dataset: OOVD
openDD
Annotated using images taken by a drone in 501 separate flights, totalling over 62 hours of trajectory data. As of today, openDD is by far the largest publicly available trajectory dataset recorded from a drone perspective, while comparable datasets span 17 hours at most.
Provide a detailed description of the following dataset: openDD
OpenDialKG
OpenDialKG contains utterances from 15K human-to-human role-playing dialogs, manually annotated with ground-truth references to corresponding entities and paths from a large-scale KG with 1M+ facts.
Provide a detailed description of the following dataset: OpenDialKG
OpenEDS
OpenEDS (Open Eye Dataset) is a large-scale dataset of eye images captured using a virtual-reality (VR) head-mounted display mounted with two synchronized eye-facing cameras at a frame rate of 200 Hz under controlled illumination. This dataset is compiled from video capture of the eye region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for the key eye regions (iris, pupil and sclera), (ii) 252,690 unlabelled eye images, (iii) 91,200 frames from randomly selected video sequences of 1.5 seconds in duration, and (iv) 143 pairs of left and right point-cloud data compiled from corneal topography of eye regions collected from a subset (143 out of 152) of the participants in the study.
Provide a detailed description of the following dataset: OpenEDS
OpenEDS2020
OpenEDS2020 is a dataset of eye-image sequences captured at a frame rate of 100 Hz under controlled illumination, using a virtual-reality head-mounted display mounted with two synchronized eye-facing cameras. The dataset, which is anonymized to remove any personally identifiable information on participants, consists of 80 participants of varied appearance performing several gaze-elicited tasks, and is divided in two subsets: 1) Gaze Prediction Dataset, with up to 66,560 sequences containing 550,400 eye-images and respective gaze vectors, created to foster research in spatio-temporal gaze estimation and prediction approaches; and 2) Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz, with up to 29,500 images, of which 5% contain a semantic segmentation label, devised to encourage the use of temporal information to propagate labels to contiguous frames.
Provide a detailed description of the following dataset: OpenEDS2020
OpenLORIS-object
The Lifelong Robotic Vision (OpenLORIS) Object Recognition Dataset (OpenLORIS-Object) is designed to accelerate lifelong/continual/incremental learning research and applications, currently focusing on improving the continual learning capability for common objects in home scenarios.
Provide a detailed description of the following dataset: OpenLORIS-object
Open MIC
Open MIC (Open Museum Identification Challenge) contains photos of exhibits captured in 10 distinct exhibition spaces of several museums which showcase paintings, timepieces, sculptures, glassware, relics, science exhibits, natural history pieces, ceramics, pottery, tools and indigenous crafts. The goal of Open MIC is to stimulate research in domain adaptation, egocentric recognition and few-shot learning by providing a testbed complementary to the famous Office 31.
Provide a detailed description of the following dataset: Open MIC
OpenSubtitles
OpenSubtitles is a collection of multilingual parallel corpora. The dataset is compiled from a large database of movie and TV subtitles and includes a total of 1689 bitexts spanning 2.6 billion sentences across 60 languages.
Provide a detailed description of the following dataset: OpenSubtitles
OpenSurfaces
**OpenSurfaces** is a large database of annotated surfaces created from real-world consumer photographs. The framework used for the annotation process draws on crowdsourcing to segment surfaces from photos, and then annotate them with rich surface properties, including material, texture and contextual information.
Provide a detailed description of the following dataset: OpenSurfaces
OpenViDial
**OpenViDial** is a large-scale open-domain dialogue dataset with visual contexts. The dialogue turns and visual contexts are extracted from movies and TV series, where each dialogue turn is paired with the corresponding visual context in which it takes place. OpenViDial contains a total number of 1.1 million dialogue turns, and thus 1.1 million visual contexts stored in images. Source: [https://github.com/ShannonAI/OpenViDial](https://github.com/ShannonAI/OpenViDial) Image Source: [https://github.com/ShannonAI/OpenViDial](https://github.com/ShannonAI/OpenViDial)
Provide a detailed description of the following dataset: OpenViDial
OPIEC
OPIEC is an Open Information Extraction (OIE) corpus constructed from the entire English Wikipedia, containing more than 341M triples. Each triple in the corpus comes with rich metadata: NLP annotations (POS tag, NER tag, ...) for each token of the subject/relation/object, the provenance sentence (along with its dependency parse and its order within the article), the original (golden) links contained in the Wikipedia articles, and space/time annotations.
Provide a detailed description of the following dataset: OPIEC
Opinosis
This dataset contains sentences extracted from user reviews on a given topic. Example topics are “performance of Toyota Camry” and “sound quality of ipod nano”. In total there are 51 such topics, each with approximately 100 sentences on average. The reviews were obtained from various sources: Tripadvisor (hotels), Edmunds.com (cars), and Amazon.com (various electronics). The dataset was used for an automatic text summarization project.
Provide a detailed description of the following dataset: Opinosis
OPUS-100
A novel multilingual dataset with 100 languages.
Provide a detailed description of the following dataset: OPUS-100
OrangeSum
Source: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](/paper/barthez-a-skilled-pretrained-french-sequence) **OrangeSum** is a single-document extreme summarization dataset with two tasks: title and abstract. Ground-truth summaries are 11.42 and 32.12 words long on average for the title and abstract tasks, respectively, while the corresponding documents average 315 and 350 words. The motivation for OrangeSum was to put together a French equivalent of the XSum dataset. Unlike the historical CNN, DailyMail, and NY Times datasets, OrangeSum requires the models to display a high degree of abstractivity to perform well. OrangeSum was created by scraping articles and their titles and abstracts from the Orange Actu website. Scraped pages cover almost a decade from Feb 2011 to Sep 2020, and belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unusual ("insolite" in French), and miscellaneous. The dataset is publicly available at: https://github.com/Tixierae/OrangeSum.
Provide a detailed description of the following dataset: OrangeSum
ORCAS
ORCAS is a click-based dataset. It covers 1.4 million of the TREC DL documents, providing 18 million connections to 10 million distinct queries.
Provide a detailed description of the following dataset: ORCAS
ORConvQA
Enhances QuAC by adapting it to an open-retrieval setting. It is an aggregation of three existing datasets: (1) the QuAC dataset that offers information-seeking conversations, (2) the CANARD dataset that consists of context-independent rewrites of QuAC questions, and (3) the Wikipedia corpus that serves as the knowledge source of answering questions.
Provide a detailed description of the following dataset: ORConvQA
ORGaze
A new video dataset for object referring (OR), with 30,000 objects over 5,000 stereo video sequences annotated with their descriptions and gaze.
Provide a detailed description of the following dataset: ORGaze
ORKG-QA
A preliminary dataset of related tables and a corresponding set of natural language questions.
Provide a detailed description of the following dataset: ORKG-QA
OTT-QA
The Open Table-and-Text Question Answering (**OTT-QA**) dataset contains open questions which require retrieving tables and text from the web to answer. This dataset is re-annotated from the previous HybridQA dataset. The dataset is collected by UCSB NLP group and issued under MIT license. Source: [https://github.com/wenhuchen/OTT-QA](https://github.com/wenhuchen/OTT-QA) Image Source: [https://github.com/wenhuchen/OTT-QA](https://github.com/wenhuchen/OTT-QA)
Provide a detailed description of the following dataset: OTT-QA
Out the Window
The Out the Window (OTW) dataset is a crowdsourced activity dataset containing 5,668 instances of 17 activities from the NIST Activities in Extended Video (ActEV) challenge. These videos are crowdsourced from workers on the Amazon Mechanical Turk using a novel scenario acting strategy, which collects multiple instances of natural activities per scenario.
Provide a detailed description of the following dataset: Out the Window
Oxford Radar RobotCar Dataset
The Oxford Radar RobotCar Dataset is a radar extension to The Oxford RobotCar Dataset. It has been extended with data from a Navtech CTS350-X Millimetre-Wave FMCW radar and Dual Velodyne HDL-32E LIDARs with optimised ground truth radar odometry for 280 km of driving around Oxford, UK (in addition to all sensors in the original Oxford RobotCar Dataset).
Provide a detailed description of the following dataset: Oxford Radar RobotCar Dataset
Oxford RobotCar Dataset
The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset captures many different combinations of weather, traffic and pedestrians, along with longer term changes such as construction and roadworks.
Provide a detailed description of the following dataset: Oxford RobotCar Dataset
OxUva
OxUva is a dataset and benchmark for evaluating single-object tracking algorithms.
Provide a detailed description of the following dataset: OxUva
PadChest
PadChest is a labeled large-scale, high resolution chest x-ray dataset for the automated exploration of medical images along with their associated reports. This dataset includes more than 160,000 images obtained from 67,000 patients that were interpreted and reported by radiologists at San Juan Hospital (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demography. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses and 104 anatomic locations organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of these reports, 27% were manually annotated by trained physicians and the remaining set was labeled using a supervised method based on a recurrent neural network with attention mechanisms. The labels generated were then validated in an independent test set achieving a 0.93 Micro-F1 score.
Provide a detailed description of the following dataset: PadChest
PA-HMDB51
The **Privacy Annotated HMDB51** (**PA-HMDB51**) dataset is a video-based dataset for evaluating privacy protection in visual action recognition algorithms. The dataset contains both target task labels (action) and selected privacy attributes (skin color, face, gender, nudity, and relationship) annotated on a per-frame basis. Source: [https://github.com/VITA-Group/PA-HMDB51](https://github.com/VITA-Group/PA-HMDB51)
Provide a detailed description of the following dataset: PA-HMDB51
PANDA
PANDA is the first gigaPixel-level humAN-centric viDeo dAtaset, for large-scale, long-term, and multi-object visual analysis. The videos in PANDA were captured by a gigapixel camera and cover real-world scenes with both wide field-of-view (~1 square kilometer area) and high-resolution details (~gigapixel-level/frame). The scenes may contain 4k head counts with over 100x scale variation. PANDA provides enriched and hierarchical ground-truth annotations, including 15,974.6k bounding boxes, 111.8k fine-grained attribute labels, 12.7k trajectories, 2.2k groups and 2.9k interactions.
Provide a detailed description of the following dataset: PANDA
PanNuke
PanNuke is a semi-automatically generated nuclei instance segmentation and classification dataset with exhaustive nuclei labels across 19 different tissue types. The dataset consists of 481 visual fields, of which 312 are randomly sampled from more than 20K whole slide images at different magnifications, from multiple data sources. In total the dataset contains 205,343 labeled nuclei, each with an instance segmentation mask.
Provide a detailed description of the following dataset: PanNuke
Spherical-Navi
The Spherical-Navi image dataset is a novel collection of 360° fisheye panoramas, with a unique labeling strategy enabling automatic generation of an arbitrary number of negative samples (wrong heading directions).
Provide a detailed description of the following dataset: Spherical-Navi
ParaBank
A large-scale English paraphrase dataset that surpasses prior work in both quantity and quality.
Provide a detailed description of the following dataset: ParaBank
PARADE
PARADE contains paraphrases that overlap very little at the lexical and syntactic level but are semantically equivalent based on computer science domain knowledge, as well as non-paraphrases that overlap greatly at the lexical and syntactic level but are not semantically equivalent based on this domain knowledge.
Provide a detailed description of the following dataset: PARADE
Parallel Meaning Bank
The **Parallel Meaning Bank** (PMB), developed at the University of Groningen and building upon the Groningen Meaning Bank, comprises sentences and texts in raw and tokenised format, syntactic analysis, word senses, thematic roles, reference resolution, and formal meaning representations. The main objective of the PMB is to provide fine-grained meaning representations for words, sentences and texts. Sentences are, in isolation, often ambiguous. The aim is to provide the most likely interpretation for a sentence, with a minimal use of underspecification. The PMB annotations include gold standard data, which is fully manually corrected, as well as silver (partially manually corrected) and bronze (with no manual corrections) data. The releases so far contain documents for English, German, Italian and Dutch, but for future releases it is planned to include Chinese and Japanese.
Provide a detailed description of the following dataset: Parallel Meaning Bank
Bilingual Corpus of Arabic-English Parallel Tweets
A bilingual corpus of English-Arabic parallel tweets and a list of Twitter accounts who post English-Arabic tweets regularly.
Provide a detailed description of the following dataset: Bilingual Corpus of Arabic-English Parallel Tweets
PARANMT-50M
PARANMT-50M is a dataset for training paraphrastic sentence embeddings. It consists of more than 50 million English-English sentential paraphrase pairs.
Provide a detailed description of the following dataset: PARANMT-50M
ParaPat
A parallel corpus from the open access Google Patents dataset in 74 language pairs, comprising more than 68 million sentences and 800 million tokens. Sentences were automatically aligned using the Hunalign algorithm for the largest 22 language pairs, while the others were abstract (i.e. paragraph) aligned.
Provide a detailed description of the following dataset: ParaPat
ParCorFull
ParCorFull is a parallel corpus annotated with full coreference chains that has been created to address an important problem that machine translation and other multilingual natural language processing (NLP) technologies face -- translation of coreference across languages. This corpus contains parallel texts for the language pair English-German, two major European languages. Despite being typologically very close, these languages still have systemic differences in the realisation of coreference, and thus pose problems for multilingual coreference resolution and machine translation. This parallel corpus covers the genres of planned speech (public lectures) and newswire. It is richly annotated for coreference in both languages, including annotation of both nominal coreference and reference to antecedents expressed as clauses, sentences and verb phrases.
Provide a detailed description of the following dataset: ParCorFull
Paris Art Deco Facades
A new dataset of facade images from Paris following the Art Deco style.
Provide a detailed description of the following dataset: Paris Art Deco Facades
Paris-Lille-3D
The **Paris-Lille-3D** is a benchmark for point cloud classification. The point cloud has been labeled entirely by hand with 50 different classes. The dataset consists of around 2 km of Mobile Laser System point cloud acquired in two cities in France (Paris and Lille).
Provide a detailed description of the following dataset: Paris-Lille-3D
Parkinson's Pose Estimation Dataset
The data includes all movement trajectories extracted from the videos of Parkinson's assessments using Convolutional Pose Machines (CPM) as well as the confidence values from CPM. The dataset also includes ground truth ratings of parkinsonism and dyskinesia severity using the UDysRS, UPDRS, and CAPSIT. Source: [https://github.com/limi44/Parkinson-s-Pose-Estimation-Dataset](https://github.com/limi44/Parkinson-s-Pose-Estimation-Dataset)
Provide a detailed description of the following dataset: Parkinson's Pose Estimation Dataset
Pars-ABSA
Pars-ABSA is a manually annotated Persian dataset, verified by 3 native Persian speakers. It consists of 5,114 positive, 3,061 negative and 1,827 neutral data samples from 5,602 unique reviews.
Provide a detailed description of the following dataset: Pars-ABSA
PartNet
PartNet is a consistent, large-scale dataset of 3D objects annotated with fine-grained, instance-level, and hierarchical 3D part information. The dataset consists of 573,585 part instances over 26,671 3D models covering 24 object categories. This dataset enables and serves as a catalyst for many tasks such as shape analysis, dynamic 3D scene modeling and simulation, affordance analysis, and others.
Provide a detailed description of the following dataset: PartNet
PathTrack
PathTrack is a dataset for person tracking which contains more than 15,000 person trajectories in 720 sequences.
Provide a detailed description of the following dataset: PathTrack
PathVQA
PathVQA consists of 32,799 open-ended questions from 4,998 pathology images where each question is manually checked to ensure correctness.
Provide a detailed description of the following dataset: PathVQA
PCDS
Contains over 4,500 videos recorded at the entrance doors of buses in normal and cluttered conditions. It also proposes an efficient method for counting people in real-world cluttered scenes related to public transportation using depth videos.
Provide a detailed description of the following dataset: PCDS