dataset_name (string, 2–128 chars)
description (string, 1–9.7k chars)
prompt (string, 59–185 chars)
DR(eye)VE
DR(eye)VE is a large dataset of driving scenes for which eye-tracking annotations are available. This dataset features more than 500,000 registered frames, matching ego-centric views (from glasses worn by drivers) and car-centric views (from a roof-mounted camera), further enriched by other sensor measurements.
Provide a detailed description of the following dataset: DR(eye)VE
Drive&Act
The Drive&Act dataset is a state-of-the-art multimodal benchmark for driver behavior recognition. The dataset includes 3D skeletons in addition to frame-wise hierarchical labels of 9.6 million frames captured by 6 different views and 3 modalities (RGB, IR and depth). It offers the following key features: * 12h of video data in 29 long sequences * Calibrated multi-view camera system with 5 views * Multimodal videos: NIR, depth and color data * Markerless motion capture: 3D body pose and head pose * Model of the static interior of the car * 83 manually annotated hierarchical activity labels: * Level 1: Long-running tasks (12) * Level 2: Semantic actions (34) * Level 3: Object interaction triplets [action|object|location] (6|17|14)
Provide a detailed description of the following dataset: Drive&Act
DrivingStereo
DrivingStereo contains over 180k images covering a diverse set of driving scenarios, which is hundreds of times larger than the KITTI Stereo dataset. High-quality labels of disparity are produced by a model-guided filtering strategy from multi-frame LiDAR points.
Provide a detailed description of the following dataset: DrivingStereo
Drone Tracking
This dataset contains videos where a flying drone (hexacopter) is captured with multiple consumer-grade cameras (smartphones, compact cameras, GoPro, ...), with highly accurate 3D drone trajectory ground truth recorded by a precise real-time RTK system from Fixposition. In some videos, ground truth temporal synchronization and ground truth camera locations are also provided. Source: [https://github.com/CenekAlbl/drone-tracking-datasets](https://github.com/CenekAlbl/drone-tracking-datasets) Image Source: [https://github.com/CenekAlbl/drone-tracking-datasets](https://github.com/CenekAlbl/drone-tracking-datasets)
Provide a detailed description of the following dataset: Drone Tracking
DSBI
The **Double-Sided Braille Image** dataset (**DSBI**) is a large-scale dataset for Braille image recognition. It has detailed annotations of Braille recto dots, verso dots and Braille cells. Source: [https://arxiv.org/abs/1811.10893](https://arxiv.org/abs/1811.10893) Image Source: [https://github.com/yeluo1994/DSBI](https://github.com/yeluo1994/DSBI)
Provide a detailed description of the following dataset: DSBI
dSprites
dSprites is a dataset of 2D shapes procedurally generated from 6 ground truth independent latent factors. These factors are color, shape, scale, rotation, x and y positions of a sprite. All possible combinations of these latents are present exactly once, generating N = 737280 total images.
Provide a detailed description of the following dataset: dSprites
DublinCity
A novel benchmark dataset that includes a manually annotated point cloud of over 260 million laser scanning points, grouped into approximately 100,000 assets, from the 2015 Dublin LiDAR point cloud [12]. Objects are labelled into 13 classes using hierarchical levels of detail, from large (i.e., building, vegetation and ground) to refined (i.e., window, door and tree) elements.
Provide a detailed description of the following dataset: DublinCity
Dunhuang Grottoes Painting Dataset
This dataset provides a large number of training and testing examples, sufficient for a deep learning approach to Dunhuang Grotto Painting restoration.
Provide a detailed description of the following dataset: Dunhuang Grottoes Painting Dataset
DuRecDial
A human-to-human Chinese dialog dataset (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for every pair of a recommendation seeker (user) and a recommender (bot).
Provide a detailed description of the following dataset: DuRecDial
DVQA
DVQA is a synthetic question-answering dataset on images of bar-charts.
Provide a detailed description of the following dataset: DVQA
DVS128 Gesture
Comprises 11 hand gesture categories from 29 subjects under 3 illumination conditions.
Provide a detailed description of the following dataset: DVS128 Gesture
DWIE
The '**Deutsche Welle corpus for Information Extraction**' (**DWIE**) is a multi-task dataset that combines four main Information Extraction (IE) annotation sub-tasks: (i) Named Entity Recognition (NER), (ii) Coreference Resolution, (iii) Relation Extraction (RE), and (iv) Entity Linking. DWIE is conceived as an entity-centric dataset that describes interactions and properties of conceptual entities on the level of the complete document.
Provide a detailed description of the following dataset: DWIE
Dynamic FAUST
Dynamic FAUST extends the FAUST dataset to dynamic 4D data. It consists of high-resolution 4D scans of human subjects in motion, captured at 60 fps.
Provide a detailed description of the following dataset: Dynamic FAUST
DynaSent
DynaSent is an English-language benchmark task for ternary (positive/negative/neutral) sentiment analysis. DynaSent combines naturally occurring sentences with sentences created using the open-source Dynabench Platform, which facilitates human-and-model-in-the-loop dataset creation. DynaSent has a total of 121,634 sentences, each validated by five crowdworkers.
Provide a detailed description of the following dataset: DynaSent
E2E
End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena.
Provide a detailed description of the following dataset: E2E
ECUSTFD
The **ECUST Food Dataset** is a food recognition dataset that contains 2978 images. Source: [https://github.com/Liang-yc/ECUSTFD-resized-](https://github.com/Liang-yc/ECUSTFD-resized-) Image Source: [https://github.com/Liang-yc/ECUSTFD-resized-](https://github.com/Liang-yc/ECUSTFD-resized-)
Provide a detailed description of the following dataset: ECUSTFD
Edge-Map-345C
**Edge-Map-345C** is a large-scale edge-map dataset including 290,281 edge-maps corresponding to 345 object categories of the QuickDraw dataset. In particular, these 345 categories correspond to the 345 free-hand sketch categories of the Google QuickDraw dataset. Source: [https://github.com/PengBoXiangShang/EdgeMap345C_Dataset](https://github.com/PengBoXiangShang/EdgeMap345C_Dataset) Image Source: [https://github.com/PengBoXiangShang/EdgeMap345C_Dataset](https://github.com/PengBoXiangShang/EdgeMap345C_Dataset)
Provide a detailed description of the following dataset: Edge-Map-345C
Edina-DR
Edina-DR is a novel corpus of discourse relation pairs; the first of its kind to attempt to identify the discourse relations connecting the dialogic turns in open-domain discourse.
Provide a detailed description of the following dataset: Edina-DR
EdNet
A large-scale hierarchical dataset of diverse student activities collected by Santa, a multi-platform self-study solution equipped with artificial intelligence tutoring system. EdNet contains 131,441,538 interactions from 784,309 students collected over more than 2 years, which is the largest among the ITS datasets released to the public so far.
Provide a detailed description of the following dataset: EdNet
EDUB-Seg
Egocentric Dataset of the University of Barcelona – Segmentation (EDUB-Seg) is a dataset for egocentric event segmentation acquired by the Narrative Clip, which takes a picture every 30 seconds. The dataset contains a total of 18,735 images captured by 7 different users during overall 20 days. To ensure diversity, all users were wearing the camera in different contexts: while attending a conference, on holiday, during the weekend, and during the week.
Provide a detailed description of the following dataset: EDUB-Seg
EgoCap
EgoCap is a dataset of 100,000 egocentric images of eight people in different clothing, with 75,000 images from six people used for training. The images have been captured with two fisheye cameras.
Provide a detailed description of the following dataset: EgoCap
EgoHands
The EgoHands dataset contains 48 Google Glass videos of complex, first-person interactions between two people. The main intention of this dataset is to enable better, data-driven approaches to understanding hands in first-person computer vision. The dataset offers * high quality, pixel-level segmentations of hands * the possibility to semantically distinguish between the observer’s hands and someone else’s hands, as well as left and right hands * virtually unconstrained hand poses as actors freely engage in a set of joint activities * lots of data with 15,053 ground-truth labeled hands Source: [Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions](/paper/lending-a-hand-detecting-hands-and)
Provide a detailed description of the following dataset: EgoHands
EGOK360
Contains annotations of human activity with different sub-actions, e.g., activity Ping-Pong with four sub-actions which are pickup-ball, hit, bounce-ball and serve.
Provide a detailed description of the following dataset: EGOK360
EgoShots
Egoshots is a 2-month ego-vision dataset captured with the Autographer wearable camera and annotated "for free" with transfer learning, using three state-of-the-art pre-trained image captioning models. The dataset represents the life of 2 interns while working at Philips Research (Netherlands) (May–July 2015), who generously donated their data. Source: [https://github.com/NataliaDiaz/Egoshots](https://github.com/NataliaDiaz/Egoshots)
Provide a detailed description of the following dataset: EgoShots
EYTH
Includes egocentric videos containing hands in the wild.
Provide a detailed description of the following dataset: EYTH
Egyptian Arabic Segmentation Dataset
Contains 350 tweets with more than 8,000 words including 3,000 unique words written in Egyptian dialect. The tweets have much dialectal content covering most of dialectal Egyptian phonological, morphological, and syntactic phenomena. It also includes Twitter-specific aspects of the text, such as #hashtags, @mentions, emoticons and URLs.
Provide a detailed description of the following dataset: Egyptian Arabic Segmentation Dataset
EiTB-ParCC
A large comparable corpus for Basque-Spanish was prepared, on the basis of independently-produced news by the Basque public broadcaster EiTB.
Provide a detailed description of the following dataset: EiTB-ParCC
Electro-Magnetic Emanations Interception Dataset
An open data corpus of 123,610 labeled samples.
Provide a detailed description of the following dataset: Electro-Magnetic Emanations Interception Dataset
Elsevier OA CC-BY
An open corpus of Scientific Research papers which has a representative sample from across scientific disciplines. This corpus not only includes the full text of the article, but also the metadata of the documents, along with the bibliographic information for each reference.
Provide a detailed description of the following dataset: Elsevier OA CC-BY
EMBER
A labeled benchmark dataset for training machine learning models to statically detect malicious Windows portable executable files. The dataset includes features extracted from 1.1M binary files: 900K training samples (300K malicious, 300K benign, 300K unlabeled) and 200K test samples (100K malicious, 100K benign).
Provide a detailed description of the following dataset: EMBER
EmoBank
**EmoBank** is a corpus of 10k English sentences balancing multiple genres, annotated with dimensional emotion metadata in the Valence-Arousal-Dominance (VAD) representation format. EmoBank features a bi-perspectival and bi-representational design.
Provide a detailed description of the following dataset: EmoBank
EMOTIC
The EMOTIC dataset, named after EMOTions In Context, is a database of images with people in real environments, annotated with their apparent emotions. The images are annotated with an extended list of 26 emotion categories combined with the three common continuous dimensions Valence, Arousal and Dominance.
Provide a detailed description of the following dataset: EMOTIC
CARER
CARER is an emotion dataset collected through noisy labels, annotated via distant supervision as in (Go et al., 2009). The subset of data provided here corresponds to the six emotions variant described in the paper. The six emotions are anger, fear, joy, love, sadness, and surprise.
Provide a detailed description of the following dataset: CARER
EMU
48k question-answer pairs written in rich natural language.
Provide a detailed description of the following dataset: EMU
EndoSLAM
The endoscopic SLAM dataset (**EndoSLAM**) is a dataset for depth estimation approach for endoscopic videos. It consists of both ex-vivo and synthetically generated data. The ex-vivo part of the dataset includes standard as well as capsule endoscopy recordings. The dataset is divided into 35 sub-datasets. Specifically, 18, 5 and 12 sub-datasets exist for colon, small intestine and stomach respectively. Source: [https://github.com/CapsuleEndoscope/EndoSLAM](https://github.com/CapsuleEndoscope/EndoSLAM) Image Source: [https://github.com/CapsuleEndoscope/EndoSLAM](https://github.com/CapsuleEndoscope/EndoSLAM)
Provide a detailed description of the following dataset: EndoSLAM
ENT-DESC
ENT-DESC is a dataset for entity description generation that involves retrieving abundant knowledge of various types about main entities from a large knowledge graph (KG); this causes current graph-to-sequence models to suffer severely from information loss and parameter explosion while generating the descriptions.
Provide a detailed description of the following dataset: ENT-DESC
EORSSD
The **Extended Optical Remote Sensing Saliency Detection** (**EORSSD**) dataset is an extension of the ORSSD dataset. This new dataset is larger and more varied than the original. It contains 2,000 images and corresponding pixel-wise ground truth, which includes many semantically meaningful but challenging images.
Provide a detailed description of the following dataset: EORSSD
EPIE
Corpus containing 25206 sentences labelled with lexical instances of 717 idiomatic expressions. These spans also cover literal usages for the given set of idiomatic expressions.
Provide a detailed description of the following dataset: EPIE
ERA
Consists of 2,864 videos, each with a label from 25 different classes corresponding to an event unfolding over 5 seconds. The ERA dataset is designed to have significant intra-class variation and inter-class similarity, and captures dynamic events in various circumstances and at dramatically varying scales.
Provide a detailed description of the following dataset: ERA
ESAD
ESAD is a large-scale dataset designed to tackle the problem of surgeon action detection in endoscopic minimally invasive surgery. ESAD aims at contributing to increase the effectiveness and reliability of surgical assistant robots by realistically testing their awareness of the actions performed by a surgeon. The dataset provides bounding box annotation for 21 action classes on real endoscopic video frames captured during prostatectomy, and was used as the basis of a recent MIDL 2020 challenge.
Provide a detailed description of the following dataset: ESAD
eSCAPE
Consists of millions of entries in which the MT element of the training triplets has been obtained by translating the source side of publicly-available parallel corpora, and using the target side as an artificial human post-edit. Translations are obtained both with phrase-based and neural models.
Provide a detailed description of the following dataset: eSCAPE
e-SNLI
e-SNLI extends the Stanford Natural Language Inference (SNLI) dataset with human-annotated natural language explanations of the entailment labels. It is used for various goals, such as obtaining full-sentence justifications of a model's decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.
Provide a detailed description of the following dataset: e-SNLI
eSports Sensors Dataset
The eSports Sensors dataset contains sensor data collected from 10 players in 22 matches in League of Legends. The sensor data collected includes: * Hand/head/chair movements. * Heart rate. * Muscle activity. * Gaze movement on the monitor. * Galvanic skin response (GSR). * Electroencephalography (EEG). * Mouse and keyboard activity. * Facial skin temperature. * Environmental data. The data were collected for one team of 5 people simultaneously. In-game logs and meta information are also provided for each match.
Provide a detailed description of the following dataset: eSports Sensors Dataset
esXNLI
**esXNLI** is a bilingual NLI dataset. It comprises 2,490 examples from 5 different genres that were originally annotated in Spanish, and translated into English by professional translators. It serves as a counterpoint to XNLI, which was originally annotated in English and translated into 14 other languages, including Spanish. The dataset was conceived to be used in conjunction with the XNLI development set to analyse the effect of translation in cross-lingual transfer learning. Source: [https://github.com/artetxem/esxnli](https://github.com/artetxem/esxnli)
Provide a detailed description of the following dataset: esXNLI
ETH3D
ETH3D is a multi-view stereo / 3D reconstruction benchmark that covers a variety of indoor and outdoor scenes. Ground truth geometry has been obtained using a high-precision laser scanner. A DSLR camera as well as a synchronized multi-camera rig with varying field-of-view was used to capture images.
Provide a detailed description of the following dataset: ETH3D
ETHICS
A new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
Provide a detailed description of the following dataset: ETHICS
ETHOS
**ETHOS** is a hate speech detection dataset. It is built from YouTube and Reddit comments validated through a crowdsourcing platform. It has two subsets, one for binary classification and the other for multi-label classification. The former contains 998 comments, while the latter contains fine-grained hate-speech annotations for 433 comments.
Provide a detailed description of the following dataset: ETHOS
ETH Py150 Open
A massive, deduplicated corpus of 7.4M Python files from GitHub.
Provide a detailed description of the following dataset: ETH Py150 Open
ETH-XGaze
Consists of over one million high-resolution images of varying gaze under extreme head poses. The dataset is collected from 110 participants with a custom hardware setup including 18 digital SLR cameras and adjustable illumination conditions, and a calibrated system to record ground truth gaze targets.
Provide a detailed description of the following dataset: ETH-XGaze
eTRIMS Image Database
The database is comprised of two datasets, the 4-Class eTRIMS Dataset with 4 annotated object classes and the 8-Class eTRIMS Dataset with 8 annotated object classes.
Provide a detailed description of the following dataset: eTRIMS Image Database
ETT
The **Electricity Transformer Temperature** (**ETT**) is a crucial indicator in long-term electric power deployment. This dataset consists of 2 years of data from two separate counties in China. To explore granularity in the long sequence time-series forecasting (LSTF) problem, different subsets are created: {ETTh1, ETTh2} at the 1-hour level and ETTm1 at the 15-minute level. Each data point consists of the target value "oil temperature" and 6 power load features. The train/val/test split is 12/4/4 months.
Provide a detailed description of the following dataset: ETT
EuroCity Persons
The EuroCity Persons dataset provides a large number of highly diverse, accurate and detailed annotations of pedestrians, cyclists and other riders in urban traffic scenes. The images for this dataset were collected on-board a moving vehicle in 31 cities of 12 European countries. With over 238,200 person instances manually labeled in over 47,300 images, EuroCity Persons is nearly one order of magnitude larger than person datasets used previously for benchmarking. The dataset furthermore contains a large number of person orientation annotations (over 211,200).
Provide a detailed description of the following dataset: EuroCity Persons
Europarl ConcoDisco Dataset
The ConcoDisco Corpus is an English-French parallel corpus with discourse relations (DRs) and discourse connectives (DCs) annotations. Source: [https://github.com/mjlaali/Europarl-ConcoDisco](https://github.com/mjlaali/Europarl-ConcoDisco) Image Source: [https://github.com/mjlaali/Europarl-ConcoDisco](https://github.com/mjlaali/Europarl-ConcoDisco)
Provide a detailed description of the following dataset: Europarl ConcoDisco Dataset
Europarl-ST
Europarl-ST is a multilingual Spoken Language Translation corpus containing paired audio-text samples for SLT from and into 9 European languages, for a total of 72 different translation directions. This corpus has been compiled using the debates held in the European Parliament in the period between 2008 and 2012.
Provide a detailed description of the following dataset: Europarl-ST
Europeana Newspapers
Europeana Newspapers consists of four datasets with 100 pages each for the languages Dutch, French, and German (including Austrian). Produced as part of the Europeana Newspapers project, it is expected to contribute to the further development and improvement of named entity recognition systems with a focus on historical content.
Provide a detailed description of the following dataset: Europeana Newspapers
European Flood 2013 Dataset
This dataset consists of 3,710 flood images, annotated by domain experts regarding their relevance with respect to three tasks (determining the flooded area, inundation depth, water pollution). Source: [https://github.com/cvjena/eu-flood-dataset](https://github.com/cvjena/eu-flood-dataset) Image Source: [https://github.com/cvjena/eu-flood-dataset](https://github.com/cvjena/eu-flood-dataset)
Provide a detailed description of the following dataset: European Flood 2013 Dataset
Event-Camera Dataset
The **Event-Camera Dataset** is a collection of datasets with an event-based camera for high-speed robotics. The data also include intensity images, inertial measurements, and ground truth from a motion-capture system. An event-based camera is a revolutionary vision sensor with three key advantages: a measurement rate that is almost 1 million times faster than standard cameras, a latency of 1 microsecond, and a high dynamic range of 130 decibels (standard cameras only have 60 dB). These properties enable the design of a new class of algorithms for high-speed robotics, where standard cameras suffer from motion blur and high latency. All the data are released both as text files and binary (i.e., rosbag) files.
Provide a detailed description of the following dataset: Event-Camera Dataset
Event-focused Emotion Corpora for German and English
A corpus designed in analogy to the well-established English ISEAR emotion dataset.
Provide a detailed description of the following dataset: Event-focused Emotion Corpora for German and English
EventKG+Click
Builds upon the event-centric EventKG knowledge graph and language-specific information on user interactions with events, entities, and their relations derived from the Wikipedia clickstream.
Provide a detailed description of the following dataset: EventKG+Click
Event-QA
Contains 1000 semantic queries and the corresponding English, German and Portuguese verbalizations for EventKG - an event-centric knowledge graph with more than 970 thousand events.
Provide a detailed description of the following dataset: Event-QA
Evidence Inference
Evidence Inference is a corpus for the task of inferring reported findings from clinical trial reports, comprising 10,000+ prompts coupled with full-text articles describing RCTs.
Provide a detailed description of the following dataset: Evidence Inference
EV-IMO
Includes accurate pixel-wise motion masks, egomotion and ground truth depth.
Provide a detailed description of the following dataset: EV-IMO
EXAMS
A new benchmark dataset for cross-lingual and multilingual question answering for high school examinations. Collects more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. EXAMS offers a fine-grained evaluation framework across multiple languages and subjects, which allows precise analysis and comparison of various models.
Provide a detailed description of the following dataset: EXAMS
ExDark
The **Exclusively Dark** (**ExDark**) dataset is a collection of 7,363 low-light images, ranging from very low-light environments to twilight (i.e., 10 different conditions), with 12 object classes (similar to PASCAL VOC) annotated at both the image class level and with local object bounding boxes. Source: [https://github.com/cs-chan/Exclusively-Dark-Image-Dataset](https://github.com/cs-chan/Exclusively-Dark-Image-Dataset)
Provide a detailed description of the following dataset: ExDark
EXEQ-300k
The **EXEQ-300k** dataset contains 290,479 detailed questions with corresponding math headlines from Mathematics Stack Exchange. The dataset can be used to generate concise math headlines from detailed math questions. Source: [https://arxiv.org/pdf/1912.00839.pdf](https://arxiv.org/pdf/1912.00839.pdf)
Provide a detailed description of the following dataset: EXEQ-300k
Explainable Abstract Trains
An image dataset containing simplified representations of trains. It aims to provide a platform for the application and research of algorithms for justification and explanation extraction. The dataset is accompanied by an ontology that conceptualizes and classifies the depicted trains based on their visual characteristics, allowing for a precise understanding of how each train was labeled. Each image in the dataset is annotated with multiple attributes describing the trains' features and with bounding boxes for the train elements.
Provide a detailed description of the following dataset: Explainable Abstract Trains
ExPose
Curates a dataset of SMPL-X fits on in-the-wild images.
Provide a detailed description of the following dataset: ExPose
ExpW
The **Expression in-the-Wild (ExpW)** dataset is for facial expression recognition and contains 91,793 faces manually labeled with expressions. Each of the face images is annotated as one of the seven basic expression categories: “angry”, “disgust”, “fear”, “happy”, “sad”, “surprise”, or “neutral”.
Provide a detailed description of the following dataset: ExpW
ExtremeWeather
Encourages machine learning research in extreme weather event detection and helps facilitate further work in understanding and mitigating the effects of climate change.
Provide a detailed description of the following dataset: ExtremeWeather
EyeQ
Dataset with 28,792 retinal images from the EyePACS dataset, based on a three-level quality grading system (i.e., 'Good', 'Usable' and 'Reject') for evaluating RIQA methods.
Provide a detailed description of the following dataset: EyeQ
Facebook Post Reactions
Collects posts (and their reactions) from Facebook pages of large supermarket chains.
Provide a detailed description of the following dataset: Facebook Post Reactions
FaceForensics++
FaceForensics++ is a forensics dataset consisting of 1000 original video sequences that have been manipulated with four automated face manipulation methods: Deepfakes, Face2Face, FaceSwap and NeuralTextures. The data has been sourced from 977 YouTube videos, and all videos contain a trackable, mostly frontal face without occlusions, which enables automated tampering methods to generate realistic forgeries.
Provide a detailed description of the following dataset: FaceForensics++
FairFace
**FairFace** is a face image dataset which is race balanced. It contains 108,501 images from 7 different race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups.
Provide a detailed description of the following dataset: FairFace
FAKBAT
The Freebase Annotations of TREC KBA 2014 Stream Corpus with Timestamps (**FAKBAT**) is an extension of the FAKBA1 dataset that contains entity age and entity timestamp. It comprises roughly 1.2 billion timestamped documents from global public news wires, blogs, forums, and shortened links shared on social media. It spans 572 days (October 7, 2011–May 1, 2013). Source: [https://arxiv.org/pdf/1701.04039.pdf](https://arxiv.org/pdf/1701.04039.pdf)
Provide a detailed description of the following dataset: FAKBAT
Fakeddit
**Fakeddit** is a novel multimodal dataset for fake news detection consisting of over 1 million samples from multiple categories of fake news. After being processed through several stages of review, the samples are labeled according to 2-way, 3-way, and 6-way classification categories through distant supervision. Source: [https://fakeddit.netlify.app/](https://fakeddit.netlify.app/)
Provide a detailed description of the following dataset: Fakeddit
Fake News Filipino Dataset
Expertly-curated benchmark dataset for fake news detection in Filipino.
Provide a detailed description of the following dataset: Fake News Filipino Dataset
FPDS
A benchmark for detecting fallen people lying on the floor. It consists of 6982 images, with a total of 5023 falls and 2275 non-falls corresponding to people in conventional situations (standing up, sitting, lying on the sofa or bed, walking, etc.). Almost all the images have been captured in indoor environments with very different situations: variation of poses and sizes, occlusions, lighting changes, etc.
Provide a detailed description of the following dataset: FPDS
FAS100K
**FAS100K** is a large-scale visual localization dataset. This dataset is comprised of two traverses of 238 and 130 kms respectively where the latter is a partial repeat of the former. The data was collected using stereo cameras in Australia under sunny day conditions. It covers a variety of road and environment types including urban and rural areas. The raw image data from one of the cameras streaming at 5 Hz constitutes 63,650 and 34,497 image frames for the two traverses respectively. Source: [https://arxiv.org/pdf/2001.08434.pdf](https://arxiv.org/pdf/2001.08434.pdf)
Provide a detailed description of the following dataset: FAS100K
Fashion 144K
**Fashion 144K** is a novel heterogeneous dataset with 144,169 user posts containing diverse image, textual and meta information.
Provide a detailed description of the following dataset: Fashion 144K
Fashion-Gen
Fashion-Gen consists of 293,008 high definition (1360 x 1360 pixels) fashion images paired with item descriptions provided by professional stylists. Each item is photographed from a variety of angles.
Provide a detailed description of the following dataset: Fashion-Gen
Fashion IQ
Fashion IQ supports and advances research on interactive fashion image retrieval. Fashion IQ is the first fashion dataset to provide human-generated captions that distinguish similar pairs of garment images, together with side information consisting of real-world product descriptions and derived visual attribute labels for these images.
Provide a detailed description of the following dataset: Fashion IQ
Fashionpedia
Fashionpedia consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.
Provide a detailed description of the following dataset: Fashionpedia
FAT
Falling Things (FAT) is a dataset for advancing the state-of-the-art in object detection and 3D pose estimation in the context of robotics. It consists of generated photorealistic images with accurate 3D pose annotations for all objects in 60k images. The 60k annotated photos of 21 household objects are taken from the YCB objects set. For each image, the dataset contains the 3D poses, per-pixel class segmentation, and 2D/3D bounding box coordinates for all objects.
Provide a detailed description of the following dataset: FAT
FB15k-237-low
The **FB15k-237-low** dataset is a variation of the FB15k-237 dataset where relations with a low number of triplets are kept. Source: [https://arxiv.org/pdf/1911.03091.pdf](https://arxiv.org/pdf/1911.03091.pdf)
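The construction rule (keep only relations with few triples) can be sketched as a simple frequency filter over (head, relation, tail) triples. The function name and threshold below are illustrative assumptions, not the paper's exact procedure or cutoff:

```python
from collections import Counter

def keep_low_frequency_relations(triples, max_count):
    """Keep only triples whose relation occurs at most `max_count` times.

    `triples` is a list of (head, relation, tail) tuples. The threshold
    value is a placeholder; the actual FB15k-237-low cutoff is defined
    in the source paper.
    """
    # Count how many triples each relation participates in.
    freq = Counter(rel for _, rel, _ in triples)
    # Retain only triples of sufficiently rare relations.
    return [t for t in triples if freq[t[1]] <= max_count]

# Toy example: relation "r1" appears three times, "r2" once.
triples = [
    ("a", "r1", "b"), ("c", "r1", "d"), ("e", "r1", "f"),
    ("g", "r2", "h"),
]
low = keep_low_frequency_relations(triples, max_count=1)
# Only the single "r2" triple survives the filter.
```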
Provide a detailed description of the following dataset: FB15k-237-low
FCDB
FCDB consists of 76 million geo-tagged images in 16 cosmopolitan cities.
Provide a detailed description of the following dataset: FCDB
FDDB-360
FDDB-360 is a 360-degree, fisheye-like version of the popular FDDB face detection dataset.
Provide a detailed description of the following dataset: FDDB-360
FDF
FDF is a diverse dataset of human faces, including unconventional poses, occluded faces, and a vast variability in backgrounds.
Provide a detailed description of the following dataset: FDF
FDST
The **Fudan-ShanghaiTech** dataset (**FDST**) is a dataset for video crowd counting. It contains 15K frames with about 394K annotated heads captured from 13 different scenes. Source: [https://arxiv.org/abs/1907.07911](https://arxiv.org/abs/1907.07911)
Provide a detailed description of the following dataset: FDST
FeathersV1
The FeathersV1 dataset is a dataset for fine-grained visual classification. It contains 28,272 images of feathers covering 595 bird species. Source: [https://github.com/feathers-dataset/feathersv1-dataset](https://github.com/feathers-dataset/feathersv1-dataset) Image Source: [https://github.com/feathers-dataset/feathersv1-dataset](https://github.com/feathers-dataset/feathersv1-dataset)
Provide a detailed description of the following dataset: FeathersV1
FewGlue
FewGLUE consists of a random selection of 32 training examples from the SuperGLUE training sets and up to 20,000 unlabeled examples for each SuperGLUE task.
Provide a detailed description of the following dataset: FewGlue
FewRel 2.0
FewRel 2.0 is a more challenging benchmark designed to investigate two aspects of few-shot relation classification models: (1) can they adapt to a new domain with only a handful of instances? (2) can they detect none-of-the-above (NOTA) relations?
Provide a detailed description of the following dataset: FewRel 2.0
FFHQ-Aging
**FFHQ-Aging** is a dataset of human faces designed for benchmarking age transformation algorithms, as well as many other possible vision tasks. It is an extension of the NVIDIA FFHQ dataset: on top of the 70,000 original FFHQ images, it also contains the following information for each image: * Gender information (male/female with confidence score) * Age group information (10 classes with confidence score) * Head pose (pitch, roll & yaw) * Glasses type (none, normal or dark) * Eye occlusion score (0-100, different score for each eye) * Full semantic map (19 classes, based on CelebAMask-HQ labels) Source: [https://github.com/royorel/FFHQ-Aging-Dataset](https://github.com/royorel/FFHQ-Aging-Dataset) Image Source: [https://github.com/royorel/FFHQ-Aging-Dataset](https://github.com/royorel/FFHQ-Aging-Dataset)
Provide a detailed description of the following dataset: FFHQ-Aging
FGADR
FGADR contains 1,842 images with pixel-level annotations of diabetic retinopathy (DR) related lesions, and 1,000 images with image-level labels graded by six board-certified ophthalmologists with intra-rater consistency. The dataset enables extensive studies on DR diagnosis.
Provide a detailed description of the following dataset: FGADR
FIGRIM
FIGRIM is a dataset of 9,428 images, 1,754 of which are target images with memorability scores. The images span 21 scene categories from the SUN database. Each scene category was chosen to contain at least 300 images of size 700x700 pixels or greater, and all images were cropped to 700x700 pixels.
Provide a detailed description of the following dataset: FIGRIM
FigureQA
FigureQA is a visual reasoning corpus of over one million question-answer pairs grounded in over 100,000 images. The images are synthetic, scientific-style figures from five classes: line plots, dot-line plots, vertical and horizontal bar graphs, and pie charts.
Provide a detailed description of the following dataset: FigureQA
FinChat
FinChat is a Finnish chat conversation corpus that includes unscripted conversations on seven topics from people of different ages.
Provide a detailed description of the following dataset: FinChat
Fine-grained 3D Pose
Fine-grained 3D Pose is a large-scale dataset consisting of 409 fine-grained categories and 31,881 images with accurate 3D pose annotations.
Provide a detailed description of the following dataset: Fine-grained 3D Pose
Fine-Grained R2R
This dataset enriches the benchmark Room-to-Room (R2R) dataset by dividing the instructions into sub-instructions and pairing each of those with their corresponding viewpoints in the path. The overall instruction and trajectory of each sample remains the same. Source: [https://github.com/YicongHong/Fine-Grained-R2R](https://github.com/YicongHong/Fine-Grained-R2R)
Provide a detailed description of the following dataset: Fine-Grained R2R
Finer
Finnish News Corpus for Named Entity Recognition (Finer) is a corpus that consists of 953 articles (193,742 word tokens) with six named entity classes (organization, location, person, product, event, and date). The articles are extracted from the archives of Digitoday, a Finnish online technology news source.
Provide a detailed description of the following dataset: Finer
FinnSentiment
FinnSentiment is a 27,000-sentence Finnish dataset annotated independently with sentiment polarity by three native annotators.
Provide a detailed description of the following dataset: FinnSentiment