query (string) | keyphrase_query (string) | year (int64) | negative_cands (list) | positive_cands (list) | abstracts (list) |
|---|---|---|---|---|---|
PS-RCNN (our proposal) can detect human bodies in highly crowded | human detection highly crowded scenes images | 2020 | [
"CityPersons",
"PS-Battles",
"JTA",
"DensePose",
"H3D",
"CUHK-SYSU",
"PhotoSynth"
] | [
"COCO",
"WiderPerson"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
I am training a supervised model for image captioning. | image captioning images | 2016 | [
"ConvAI2",
"CommonsenseQA",
"EPIC-KITCHENS-100",
"COCO Captions",
"TextCaps",
"ActivityNet Entities",
"BanglaLekhaImageCaptions"
] | [
"Flickr30k",
"COCO"
] | [
{
"dkey": "Flickr30k",
"dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators."
},
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmen... |
I want to train a system for user ranking in social networks. | user ranking social networks text | 2019 | [
"SNAP",
"Friendster",
"SNIPS",
"Orkut"
] | [
"Ciao",
"DBLP"
] | [
{
"dkey": "Ciao",
"dval": "The Ciao dataset contains rating information given by users to items, and also contains item category information. The data comes from the Epinions dataset."
},
{
"dkey": "DBLP",
"dval": "The DBLP is a citation network dataset. The citation data is extracted from DBLP, ... |
We propose an efficient online multitask tracking framework which jointly learns a deep visual tracker and a multitask | visual tracking video | 2019 | [
"BDD100K",
"MLM",
"MTL-AQA",
"MPII",
"VOT2018",
"MLMA Hate Speech"
] | [
"MultiNLI",
"SICK",
"SST"
] | [
{
"dkey": "MultiNLI",
"dval": "The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely on SNLI. MultiNLI offers ten distinct genres (Face-to-face, Telephone, 9/11, Travel, Letters, Oxford University Press, Slate, Verbatim, ... |
XLNet-based question answering model. | question answering text | 2019 | [
"TextVQA",
"MovieFIB",
"UIT-ViNewsQA",
"AQUA",
"KnowIT VQA",
"OpenBookQA"
] | [
"RACE",
"SearchQA",
"SQuAD"
] | [
{
"dkey": "RACE",
"dval": "The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 questions from English exams, targeting Chinese students aged 12-18. RACE consists of two subsets, RACE-M and RACE-H, from middle ... |
We propose a novel approach for object detection based on keypoint estimation. Our detector takes as input a | object detection images | 2019 | [
"THEODORE",
"OccludedPASCAL3D+",
"SVIRO",
"Localized Narratives",
"MOT17"
] | [
"COCO",
"KITTI"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
A novel approach to recognize actions from videos. | action recognition videos | 2017 | [
"Hollywood 3D dataset",
"Moments in Time",
"MTL-AQA",
"DAD"
] | [
"UCF101",
"HMDB51"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
An online module with an attention mechanism for offline siamese networks to extract target-specific features under L | visual object tracking images | 2019 | [
"LastFM Asia",
"LIVE1",
"CoSal2015",
"PadChest"
] | [
"TrackingNet",
"VOT2018",
"LaSOT"
] | [
{
"dkey": "TrackingNet",
"dval": "TrackingNet is a large-scale tracking dataset consisting of videos in the wild. It has a total of 30,643 videos split into 30,132 training videos and 511 testing videos, with an average of 470.9 frames."
},
{
"dkey": "VOT2018",
"dval": "VOT2018 is a dataset for ... |
We explore the use of bounding boxes as weak supervision for semantic segmentation and instance | semantic labelling instance segmentation images text | 2017 | [
"VRD",
"TableBank",
"THEODORE",
"A2D2",
"WoodScape",
"EPIC-KITCHENS-100",
"SVIRO"
] | [
"BSDS500",
"COCO"
] | [
{
"dkey": "BSDS500",
"dval": "Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection. This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries. It includes 500 natur... |
We study efficient action recognition in untrimmed videos. We first propose an ImgAud2 | action recognition videos | 2019 | [
"MECCANO",
"PKU-MMD",
"A2D",
"BDD100K",
"Localized Narratives",
"Hollywood 3D dataset",
"EPIC-KITCHENS-100"
] | [
"UCF101",
"ActivityNet"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
We present a data augmentation technique using distant supervision to exploit positive as well as negative examples. We apply | question answering text | 2019 | [
"SlowFlow",
"ReCAM",
"SemEval 2014 Task 4 Sub Task 2",
"SciTail",
"Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison",
"Delicious"
] | [
"SQuAD",
"TriviaQA"
] | [
{
"dkey": "SQuAD",
"dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through c... |
I want to train a fully supervised image segmentation model. | image segmentation | 2018 | [
"SNIPS",
"ConvAI2",
"ACDC",
"NYU-VP",
"BSDS500",
"SBD"
] | [
"ImageNet",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to create a model for scene-aware dialog. | scene-aware dialog audio, video, history | 2019 | [
"ConvAI2",
"AVSD",
"CLEVR-Dialog",
"MMD",
"DailyDialog++",
"Taskmaster-1",
"WildDash"
] | [
"MovieQA",
"Charades"
] | [
{
"dkey": "MovieQA",
"dval": "The MovieQA dataset is a dataset for movie question answering, designed to evaluate automatic story comprehension from both video and text. The dataset consists of almost 15,000 multiple choice question answers obtained from over 400 movies and features high semantic diversity. Each qu... |
I want to learn the structure of deep neural networks. | image classification images | 2018 | [
"COWC",
"GoPro",
"UNSW-NB15",
"Places",
"WikiReading"
] | [
"ImageNet",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to apply a nonlocal block to video action recognition. | action recognition video | 2019 | [
"Kinetics-600",
"Image and Video Advertisements",
"FineGym",
"TinyVIRAT"
] | [
"UCF101",
"CIFAR-10"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
I want to train a sentence similarity and retrieval model with an unsupervised approach. | sentence similarity retrieval text | 2020 | [
"ConvAI2",
"MultiReQA",
"SNIPS",
"COUGH",
"SemEval 2014 Task 4 Sub Task 2",
"DUC 2004"
] | [
"MultiNLI",
"SentEval"
] | [
{
"dkey": "MultiNLI",
"dval": "The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely on SNLI. MultiNLI offers ten distinct genres (Face-to-face, Telephone, 9/11, Travel, Letters, Oxford University Press, Slate, Verbatim, ... |
We propose to improve the performance of a text classification model by introducing an additional module that produces high | sentiment analysis | 2017 | [
"THEODORE",
"MVTecAD",
"SuperGLUE",
"Syn2Real",
"COCO-Text"
] | [
"SNLI",
"SQuAD",
"SST"
] | [
{
"dkey": "SNLI",
"dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and a... |
A person re-identification method based on weighted bilinear coding. | person re-identification image | 2018 | [
"Airport",
"P-DESTRE",
"CUHK02",
"SYSU-MM01",
"Occluded REID"
] | [
"Market-1501",
"CUHK03"
] | [
{
"dkey": "Market-1501",
"dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person... |
A practical system to verify machine learning based object detections. | object detection images | 2019 | [
"TriviaQA",
"MPIIGaze",
"HolStep",
"Sydney Urban Objects"
] | [
"AVD",
"COCO"
] | [
{
"dkey": "AVD",
"dval": "AVD focuses on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes."
},
{
"dkey": "COCO",
"dval": "The MS COCO... |
I am interested in implementing a system that will remove all the dynamic elements from an image ( | image inpainting images | 2019 | [
"CommonsenseQA",
"SEN12MS-CR",
"ConvAI2",
"WebText",
"BanglaLekhaImageCaptions"
] | [
"Places",
"CARLA",
"Cityscapes"
] | [
{
"dkey": "Places",
"dval": "The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category."
},
{
"dkey": "CARLA",
"dval": "CARLA (CAR Learning to Act) is an open simulator for urban... |
A novel network architecture for the retinal vessel segmentation task that has a highly efficient and fast inference speed | retinal vessel segmentation images | 2017 | [
"ORVS",
"ROSE",
"RITE",
"HRF",
"ADAM"
] | [
"STARE",
"DRIVE"
] | [
{
"dkey": "STARE",
"dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided."
},
{
"dkey": "DRIVE",
"dval": "The Digital Retinal I... |
We propose a novel ConvNet model for predicting 2D human body poses in | 2d human pose estimation images | 2017 | [
"3DPW",
"Deep Fashion3D",
"COCO-WholeBody",
"ITOP",
"EgoDexter"
] | [
"MPII",
"COCO"
] | [
{
"dkey": "MPII",
"dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (whose labels are withheld by the authors). The images are taken from YouTube videos covering 410 d... |
Glaucoma is a leading cause of irreversible blindness worldwide. Optic disc analysis can be used | optic disc detection fundus images | 2019 | [
"G1020",
"ADAM",
"FUNSD",
"CrowdFlow",
"LAG"
] | [
"HRF",
"DRIVE"
] | [
{
"dkey": "HRF",
"dval": "The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. Each subset contains one healthy fundus image, one image of patient with diabetic retinopathy and one glaucoma image. The image sizes are 3,304 x 2,336, with a tra... |
I want to train a weakly-supervised model for object detection from unlabeled web images. | object detection images | 2018 | [
"SBD",
"DCASE 2018 Task 4",
"Twitter100k",
"TableBank",
"OpoSum",
"EMBER"
] | [
"ImageNet",
"WebVision"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
We study the problem of image-to-image translation and propose a novel biphasic learning | image-to-image translation images | 2019 | [
"BDD100K",
"FGADR",
"MIMIC-CXR",
"MMID"
] | [
"RaFD",
"CelebA"
] | [
{
"dkey": "RaFD",
"dval": "The Radboud Faces Database (RaFD) is a set of pictures of 67 models (both adult and children, males and females) displaying 8 emotional expressions."
},
{
"dkey": "CelebA",
"dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,... |
I want to train a proposal-based detector for crowded object detection. | object detection images | 2020 | [
"COVERAGE",
"GQA",
"MOT15",
"COCO-Tasks",
"MOT17",
"SNIPS"
] | [
"COCO",
"CrowdHuman",
"CityPersons"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
This paper proposes a novel algorithm (PIFR) to reconstruct a face from | 3d face reconstruction images | 2018 | [
"Deep Fashion3D",
"WHU",
"AnimalWeb",
"MaskedFace-Net",
"BlendedMVS",
"VOT2018",
"MegaFace"
] | [
"AFW",
"FaceWarehouse",
"AFLW"
] | [
{
"dkey": "AFW",
"dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box."
},
{
"dkey": "FaceWarehouse",
"dval": "FaceWarehouse is a 3D fa... |
We study the problem of extracting fine-grained attributes of an instance as a multi-attribute classification problem | multi-attribute classification images birds | 2018 | [
"SemArt",
"CompCars",
"Fashionpedia",
"SUN Attribute",
"iMaterialist",
"HVU"
] | [
"ImageNet",
"COCO"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
An end-to-end neural network approach to image captioning. | image captioning images | 2019 | [
"WikiReading",
"iSUN",
"MLe2e",
"ActivityNet Captions",
"HPatches",
"THEODORE"
] | [
"Flickr30k",
"COCO"
] | [
{
"dkey": "Flickr30k",
"dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators."
},
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmen... |
I want to train a model for person re-identification. | person re-identification images | 2018 | [
"SYSU-MM01",
"Airport",
"CUHK03",
"Partial-iLIDS",
"CUHK02",
"P-DESTRE"
] | [
"Market-1501",
"MARS"
] | [
{
"dkey": "Market-1501",
"dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person... |
A system that performs meta-learning in an unsupervised fashion for few-shot learning of classifiers for | few-shot learning images | 2018 | [
"Meta-Dataset",
"FC100",
"ModaNet",
"PASCAL-5i",
"MetaLWOz",
"SGD",
"FewRel"
] | [
"ImageNet",
"UCF101",
"CelebA"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
A sentence encoder is proposed to learn sentence representations. | sentence representation learning text | 2018 | [
"LDC2020T02",
"QAMR",
"CUB-200-2011",
"EmoBank",
"e-SNLI"
] | [
"SICK",
"SST"
] | [
{
"dkey": "SICK",
"dval": "The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena. Each pair of sentences is annotated in two dimension... |
A model that takes a paragraph and a question as input, and outputs an answer as a sequence of | question answering text paragraph-level | 2018 | [
"QNLI",
"HotpotQA",
"ProPara",
"DROP",
"MultiRC",
"TweetQA",
"Quoref"
] | [
"NewsQA",
"SearchQA"
] | [
{
"dkey": "NewsQA",
"dval": "The NewsQA dataset is a crowd-sourced machine reading comprehension dataset of 120,000 question-answer pairs.\n\n\nDocuments are CNN news articles.\nQuestions are written by human users in natural language.\nAnswers may be multiword passages of the source text.\nQuestions may be... |
I want to train a supervised model for blind image deblurring from image pairs. | blind image deblurring | 2019 | [
"Real Blur Dataset",
"DAVANet",
"VizWiz-Captions",
"ConvAI2",
"HIDE",
"FaceForensics"
] | [
"BSDS500",
"COCO"
] | [
{
"dkey": "BSDS500",
"dval": "Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection. This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries. It includes 500 natur... |
I think that the logic form step can be injected into the deep model. The reason why we think | semantic parsing natural language | 2019 | [
"LogiQA",
"ARC-DA",
"Logic2Text",
"Image and Video Advertisements",
"COG",
"CLEVR-Humans"
] | [
"WikiSQL",
"SQuAD"
] | [
{
"dkey": "WikiSQL",
"dval": "WikiSQL consists of a corpus of 87,726 hand-annotated SQL query and natural language question pairs. These SQL queries are further split into training (61,297 examples), development (9,145 examples) and test sets (17,284 examples). It can be used for natural language inference ... |
In this paper, we discuss the challenges and state-of-the-art methods | crowd counting density estimation images | 2017 | [
"E2E",
"AQUA",
"Completion3D",
"LogiQA",
"HellaSwag",
"THEODORE"
] | [
"Mall",
"ShanghaiTech"
] | [
{
"dkey": "Mall",
"dval": "The Mall is a dataset for crowd counting and profiling research. Its images are collected from a publicly accessible webcam. It mainly includes 2,000 video frames, and the head position of every pedestrian in all frames is annotated. A total of more than 60,000 pedestrians are annot... |
In this exposition, we extensively compare 30+ state-of-the-art super- | super-resolution image | 2019 | [
"THEODORE",
"REDS",
"Dialogue State Tracking Challenge",
"FLIC",
"NetHack Learning Environment",
"Talk2Car"
] | [
"Set5",
"Urban100"
] | [
{
"dkey": "Set5",
"dval": "The Set5 dataset is a dataset consisting of 5 images (“baby”, “bird”, “butterfly”, “head”, “woman”) commonly used for testing performance of Image Super-Resolution models."
},
{
"dkey": "Urban100",
"dval": "The Urban100 dataset contains 100 images of urban scenes. It c... |
I'm trying to train a supervised model for human pose estimation. | human pose estimation video | 2016 | [
"PoseTrack",
"V-COCO",
"MannequinChallenge",
"K2HPD",
"UMDFaces",
"MuPoTS-3D"
] | [
"MPII",
"FLIC"
] | [
{
"dkey": "MPII",
"dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (whose labels are withheld by the authors). The images are taken from YouTube videos covering 410 d... |
We propose a Global Context Network (GCNet), which can effectively capture global context for | image classification images | 2019 | [
"MaskedFace-Net",
"DUT-OMRON",
"Global Voices",
"WiC",
"DICM",
"Microsoft Research Social Media Conversation Corpus",
"ROPES"
] | [
"ImageNet",
"COCO"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I'm working on a set learning problem. | set learning image | 2017 | [
"COG",
"PMLB",
"RL Unplugged",
"MineRL"
] | [
"COCO",
"CelebA"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
Audio-to-video synchronisation with cross-modal embeddings learnt via | audio-to-video synchronisation | 2019 | [
"YouTube-8M",
"MMAct",
"Recipe1M+",
"VoxCeleb2",
"CTC",
"K2HPD"
] | [
"LRS2",
"LRW"
] | [
{
"dkey": "LRS2",
"dval": "The Oxford-BBC Lip Reading Sentences 2 (LRS2) dataset is one of the largest publicly available datasets for lip reading sentences in-the-wild. The database consists of mainly news and talk shows from BBC programs. Each sentence is up to 100 characters in length. The training, vali... |
I want to learn the embeddings of entities and relations in knowledge graphs, then predict missing relations between entities | link prediction text | 2019 | [
"OLPBENCH",
"FrameNet",
"YAGO",
"SherLIiC",
"WikiHop"
] | [
"FB15k",
"WN18"
] | [
{
"dkey": "FB15k",
"dval": "The FB15k dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of 592,213 triplets with 14,951 entities and 1,345 relationships. FB15K-237 is a variant of the original dataset where inverse relations are removed, since it... |
We propose a novel architecture-neutral CNN building block called asymmetric convolution block (ACB), | action recognition video | 2019 | [
"MuST-Cinema",
"THEODORE",
"NAS-Bench-101",
"CamVid"
] | [
"ImageNet",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
"I'm very happy with this project! " | image captioning images text | 2018 | [
"HappyDB",
"AFLW2000-3D",
"EmoContext",
"CAL500",
"ExpW",
"VeRi-776"
] | [
"ImageNet",
"COCO",
"Flickr30k"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to improve fill-in-the-blank multiple | fill-in-the-blank multiple choice question answering images paragraph-level | 2018 | [
"MovieFIB",
"CLOTH",
"ChID",
"CNN/Daily Mail",
"DailyDialog++"
] | [
"COCO",
"HICO"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
I want to train a supervised action recognition model from video data. | action recognition video | 2017 | [
"EPIC-KITCHENS-100",
"AViD",
"Kinetics-600",
"NTU RGB+D",
"Image and Video Advertisements",
"Charades"
] | [
"ImageNet",
"ActivityNet"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to learn to walk over a graph towards a target node for a given query and | graph-walking text | 2018 | [
"SNIPS",
"SemEval 2014 Task 4 Sub Task 2",
"OGB-LSC",
"Decagon",
"WikiHop",
"NAS-Bench-101"
] | [
"FB15k",
"WN18"
] | [
{
"dkey": "FB15k",
"dval": "The FB15k dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of 592,213 triplets with 14,951 entities and 1,345 relationships. FB15K-237 is a variant of the original dataset where inverse relations are removed, since it... |
I want to train a model for facial landmark localization. | facial landmark localization images | 2017 | [
"AFLW2000-3D",
"300-VW",
"UTKFace",
"LS3D-W",
"WFLW"
] | [
"AFW",
"AFLW"
] | [
{
"dkey": "AFW",
"dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box."
},
{
"dkey": "AFLW",
"dval": "The Annotated Facial Landmarks in... |
I want to train an object detection model for images. | object detection images | 2016 | [
"COCO-Tasks",
"COVERAGE",
"SNIPS",
"ConvAI2",
"APRICOT",
"T-LESS"
] | [
"ImageNet",
"COCO"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I am writing an unsupervised sentence classifier that can detect negation in sentences. | sentence classification text | 2018 | [
"ROSTD",
"GYAFC",
"ConvAI2",
"FCE",
"LDC2020T02",
"Chinese Classifier"
] | [
"SST",
"WikiText-103"
] | [
{
"dkey": "SST",
"dval": "The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a\ncomplete analysis of the compositional effects of\nsentiment in language. The corpus is based on\nthe dataset introduced by Pang and Lee (2005) and\nconsists of 11,855 single sentences ext... |
A simple yet effective training schedule for deep learning algorithms with class-imbalance. | class-imbalanced learning images | 2019 | [
"MNIST-1D",
"DocBank",
"IMDB-BINARY",
"REDDIT-BINARY",
"RIT-18",
"BDD100K"
] | [
"iNaturalist",
"CIFAR-10"
] | [
{
"dkey": "iNaturalist",
"dval": "The iNaturalist 2017 dataset (iNat) contains 675,170 training and validation images from 5,089 natural fine-grained categories. Those categories belong to 13 super-categories including Plantae (Plant), Insecta (Insect), Aves (Bird), Mammalia (Mammal), and so on. The iNat da... |
I'm training a lip-reading model for isolated word recognition. | lip-reading images | 2019 | [
"GSL",
"BosphorusSign22k",
"CASIA-HWDB",
"ReCAM"
] | [
"LRS2",
"LRW"
] | [
{
"dkey": "LRS2",
"dval": "The Oxford-BBC Lip Reading Sentences 2 (LRS2) dataset is one of the largest publicly available datasets for lip reading sentences in-the-wild. The database consists of mainly news and talk shows from BBC programs. Each sentence is up to 100 characters in length. The training, vali... |
We present an approach to transfer learning which involves training an auxiliary model to learn the relevant features and a target | image classification images | 2018 | [
"Syn2Real",
"KLEJ",
"BanglaLekhaImageCaptions",
"EMBER"
] | [
"Places",
"ImageNet"
] | [
{
"dkey": "Places",
"dval": "The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category."
},
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated imag... |
I want to train a supervised model for bounding box estimation. | bounding box estimation video | 2019 | [
"TableBank",
"VRD",
"COCO-WholeBody",
"PoseTrack",
"UMDFaces"
] | [
"COCO",
"TrackingNet",
"UTKFace"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
This paper proposes a new class of image models, called autoregressive image models with auxiliary variables. We show that | image generation images | 2017 | [
"THEODORE",
"Localized Narratives",
"BDD100K",
"INRIA-Horse",
"CONCODE"
] | [
"CIFAR-10",
"CelebA"
] | [
{
"dkey": "CIFAR-10",
"dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck),... |
A survey of recent advances in facial expression recognition. | facial expression recognition images | 2019 | [
"FERG",
"ExpW",
"CLOTH",
"4DFAB",
"LAMBADA"
] | [
"SFEW",
"BP4D",
"DISFA",
"MMI",
"JAFFE"
] | [
{
"dkey": "SFEW",
"dval": "The Static Facial Expressions in the Wild (SFEW) dataset is a dataset for facial expression recognition. It was created by selecting static frames from the AFEW database by computing key frames based on facial point clustering. The most commonly used version, SFEW 2.0, was the ben... |
The importance of sampling synthetic data before augmentation, and to our knowledge, our method is the first | face attribute classification image | 2019 | [
"C&Z",
"FreiHAND",
"Shiny dataset",
"THEODORE",
"GSL",
"NSynth"
] | [
"AffectNet",
"CelebA"
] | [
{
"dkey": "AffectNet",
"dval": "AffectNet is a large facial expression dataset with around 0.4 million images manually labeled for the presence of eight (neutral, happy, angry, sad, fear, surprise, disgust, contempt) facial expressions along with the intensity of valence and arousal."
},
{
"dkey": "... |
A system for composing bag-of-words embeddings. | bag-of-words feature embedding | 2019 | [
"TAC 2010",
"ACM",
"SEMCAT",
"WiC",
"MUSE"
] | [
"SICK",
"SST",
"CIFAR-10"
] | [
{
"dkey": "SICK",
"dval": "The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena. Each pair of sentences is annotated in two dimension... |
I want to train a supervised model for commonsense reasoning. | commonsense reasoning text | 2019 | [
"CC-Stories",
"CoS-E",
"CosmosQA",
"ReCoRD",
"PIQA",
"SNIPS",
"ATOMIC"
] | [
"BookCorpus",
"SQuAD",
"CommonsenseQA"
] | [
{
"dkey": "BookCorpus",
"dval": "BookCorpus is a large collection of free novel books written by unpublished authors, which contains 11,038 books (around 74M sentences and 1G words) of 16 different sub-genres (e.g., Romance, Historical, Adventure, etc.)."
},
{
"dkey": "SQuAD",
"dval": "The Stanf... |
A new evaluation protocol for remote sensing representation learning. | remote sensing representation learning | 2,019 | [
"RSICD",
"RIT-18",
"EORSSD",
"WiC-TSV",
"MLRSNet",
"WiC"
] | [
"ImageNet",
"EuroSAT",
"BigEarthNet"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I'm looking for a person re-identification model which can be trained on my | person re-identification still images | 2,017 | [
"Airport",
"SYSU-MM01",
"P-DESTRE",
"Partial-iLIDS",
"CUHK02",
"DukeMTMC-reID"
] | [
"Market-1501",
"CUHK03"
] | [
{
"dkey": "Market-1501",
"dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person... |
We propose a novel S-shaped rectified linear activation unit (SReLU) to learn both convex and | semantic segmentation image | 2,015 | [
"DISFA",
"MNIST-1D",
"BP4D",
"CHiME-Home"
] | [
"ImageNet",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to train a model for machine comprehension. | machine comprehension text paragraph-level | 2,019 | [
"SNIPS",
"VisualMRC",
"ConvAI2",
"MCTest",
"MC-AFP"
] | [
"SNLI",
"SQuAD"
] | [
{
"dkey": "SNLI",
"dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and a... |
In this paper, we present the first and the largest study of all facial behaviour tasks learned jointly | facial behavior analysis images | 2,019 | [
"4DFAB",
"BDD100K",
"SEWA DB",
"ISTD",
"Multi Task Crowd"
] | [
"BP4D",
"AffectNet"
] | [
{
"dkey": "BP4D",
"dval": "The BP4D-Spontaneous dataset is a 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was ... |
I'd like to train a model with ADAM. | image classification images | 2,019 | [
"ADAM",
"MECCANO",
"BDD100K",
"MultiNLI",
"ReCAM"
] | [
"ORL",
"CIFAR-10"
] | [
{
"dkey": "ORL",
"dval": "The ORL Database of Faces contains 400 images from 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were ... |
I want to implement a robust loss function for image registration. | image registration images | 2,019 | [
"LFW",
"ORVS",
"SNIPS",
"fMoW"
] | [
"KITTI",
"CelebA"
] | [
{
"dkey": "KITTI",
"dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RG... |
We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged | machine reading comprehension text | 2,018 | [
"VisualMRC",
"ReCAM",
"RACE",
"BiPaR",
"UIT-ViQuAD"
] | [
"MovieQA",
"SQuAD"
] | [
{
"dkey": "MovieQA",
"dval": "The MovieQA dataset is a dataset for movie question answering. to evaluate automatic story comprehension from both video and text. The data set consists of almost 15,000 multiple choice question answers obtained from over 400 movies and features high semantic diversity. Each qu... |
We show that fine-tuning can improve the ability of a state-of | recognizing textual entailment | 2,017 | [
"THEODORE",
"NumerSense",
"RarePlanes Dataset",
"Alchemy"
] | [
"SNLI",
"SICK"
] | [
{
"dkey": "SNLI",
"dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and a... |
Knowledge graphs can be used to organize and store a wide range of entities and relations. We propose | knowledge graph learning text | 2,019 | [
"COMETA",
"YAGO",
"FrameNet",
"OLPBENCH",
"KdConv"
] | [
"FB15k",
"WN18",
"DDI"
] | [
{
"dkey": "FB15k",
"dval": "The FB15k dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of 592,213 triplets with 14,951 entities and 1,345 relationships. FB15K-237 is a variant of the original dataset where inverse relations are removed, since it... |
I want to augment CNN architecture with location cue to improve performance for salient object segmentation. | salient object segmentation images | 2,018 | [
"Stylized ImageNet",
"THEODORE",
"ELFW",
"MSU-MFSD"
] | [
"Cityscapes",
"SBD",
"ECSSD"
] | [
{
"dkey": "Cityscapes",
"dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sk... |
This is a system for joint keypoint detection and body part association. | keypoint detection images | 2,017 | [
"MSRC-12",
"JTA",
"UI-PRMD",
"Composable activities dataset",
"COCO-WholeBody"
] | [
"MPII",
"COCO"
] | [
{
"dkey": "MPII",
"dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (which labels are withheld by the authors). The images are taken from YouTube videos covering 410 d... |
A lightweight variant of Neural Shuffle-Exchange network for long-range sequence modelling. | language modelling audio | 2,020 | [
"PG-19",
"SHREC",
"Penn Treebank",
"ImageNet-P",
"FGVC-Aircraft"
] | [
"MusicNet",
"LAMBADA"
] | [
{
"dkey": "MusicNet",
"dval": "MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the ... |
I want to train a fully supervised model for interactive navigation in indoor environments. | interactive navigation | 2,019 | [
"Lani",
"StreetLearn",
"IQUAD",
"HoME"
] | [
"Scan2CAD",
"ShapeNet"
] | [
{
"dkey": "Scan2CAD",
"dval": "Scan2CAD is an alignment dataset based on 1506 ScanNet scans with 97607 annotated keypoints pairs between 14225 (3049 unique) CAD models from ShapeNet and their counterpart objects in the scans. The top 3 annotated model classes are chairs, tables and cabinets which arises due... |
We propose a new approach that uses deep learning techniques to solve the inverse problems. The inverse problem | motion deblurring images | 2,017 | [
"WN18",
"BDD100K",
"EyeCar",
"LS3D-W"
] | [
"ImageNet",
"CelebA"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
A simple, low-cost, easy-to-use method for measuring which parts of a static | eye tracking images | 2,017 | [
"word2word",
"LoDoPaB-CT",
"Middlebury 2014",
"Funcom",
"UCLA Aerial Event Dataset",
"ARC-DA"
] | [
"SALICON",
"COCO"
] | [
{
"dkey": "SALICON",
"dval": "The SALIency in CONtext (SALICON) dataset contains 10,000 training images, 5,000 validation images and 5,000 test images for saliency prediction. This dataset has been created by annotating saliency in images from MS COCO.\nThe ground-truth saliency annotations include fixation... |
In this paper, we propose ERNIE 2.0, a pre-training model that | natural language understanding text | 2,019 | [
"CLUECorpus2020",
"ASNQ",
"NumerSense",
"FewRel 2.0",
"SFEW"
] | [
"DuReader",
"QNLI",
"MRPC",
"CoLA",
"GLUE"
] | [
{
"dkey": "DuReader",
"dval": "DuReader is a large-scale open-domain Chinese machine reading comprehension dataset. The dataset consists of 200K questions, 420K answers and 1M documents. The questions and documents are based on Baidu Search and Baidu Zhidao. The answers are manually generated. The dataset a... |
I want to generate an image from the [DATASET] bedroom data set. | image generation | 2,020 | [
"SNIPS",
"ConvAI2",
"UAVA",
"OpenEDS"
] | [
"LSUN",
"FFHQ"
] | [
{
"dkey": "LSUN",
"dval": "The Large-scale Scene Understanding (LSUN) challenge aims to provide a different benchmark for large-scale scene classification and understanding. The LSUN classification dataset contains 10 scene categories, such as dining room, bedroom, chicken, outdoor church, and so on. For tr... |
A new object detection dataset with parking stickers that mimics the type of data available in industry problems more | object detection images | 2,020 | [
"MIMIC-III",
"Industrial Benchmark",
"RADIATE",
"IIIT-AR-13K"
] | [
"ImageNet",
"COCO"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
An extension of the knowledge distillation method, where the student is not only imitated by the teacher, but | image classification images paragraph-level | 2,014 | [
"WebChild",
"ICubWorld",
"QuAC",
"ImageNet-32",
"OMICS",
"Taskonomy"
] | [
"AFLW",
"CIFAR-10"
] | [
{
"dkey": "AFLW",
"dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total ... |
I am interested in studying knowledge graph embedding-based methods for link prediction. | link prediction kg | 2,020 | [
"OGB-LSC",
"OLPBENCH",
"MutualFriends",
"CommonsenseQA",
"COMETA",
"FrameNet"
] | [
"FB15k",
"WN18"
] | [
{
"dkey": "FB15k",
"dval": "The FB15k dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of 592,213 triplets with 14,951 entities and 1,345 relationships. FB15K-237 is a variant of the original dataset where inverse relations are removed, since it... |
I want to use a HRNet for human pose estimation. | human pose estimation images paragraph-level | 2,019 | [
"MPII",
"K2HPD",
"LSP",
"MPII Human Pose",
"MPI-INF-3DHP",
"COCO-WholeBody"
] | [
"LIP",
"COCO",
"Cityscapes"
] | [
{
"dkey": "LIP",
"dval": "The LIP (Look into Person) dataset is a large-scale dataset focusing on semantic understanding of a person. It contains 50,000 images with elaborated pixel-wise annotations of 19 semantic human part labels and 2D human poses with 16 key points. The images are collected from real-wo... |
I want to use the softmax loss to train a network for person re-identification. | person re-identification images | 2,017 | [
"SYSU-MM01",
"Airport",
"P-DESTRE",
"DukeMTMC-reID"
] | [
"Market-1501",
"CUHK03"
] | [
{
"dkey": "Market-1501",
"dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person... |
In this paper, we propose a novel texture recognition method which makes use of Convolutional Neural Network (CNN | texture recognition images | 2,015 | [
"THEODORE",
"Stanford Cars",
"ObjectNet",
"DAGM2007"
] | [
"ImageNet",
"DTD"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to train a supervised model for action recognition from videos. | action recognition videos | 2,017 | [
"EPIC-KITCHENS-100",
"Kinetics",
"AViD",
"Kinetics-600",
"NTU RGB+D",
"Charades"
] | [
"COCO",
"VRD"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
We propose a novel approach for face deblurring by combining two state-of-the | face deblurring images | 2,017 | [
"MaskedFace-Net",
"Hanabi Learning Environment",
"DAVANet",
"FollowUp",
"GoPro",
"TableBank"
] | [
"AFLW",
"300W"
] | [
{
"dkey": "AFLW",
"dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total ... |
We propose a new stacked attention network (SAN) for image question answering. We show that | image question answering images | 2,016 | [
"Localized Narratives",
"TrecQA",
"VisDial",
"UASOL",
"VQA-HAT",
"QUASAR-S"
] | [
"COCO",
"DAQUAR"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
We address the problem of estimating the positions of human joints, i.e., articulated pose | articulated pose estimation images | 2,017 | [
"MSRC-12",
"PoseTrack",
"PASCAL3D+",
"TotalCapture",
"Composable activities dataset"
] | [
"MPII",
"LSP"
] | [
{
"dkey": "MPII",
"dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (which labels are withheld by the authors). The images are taken from YouTube videos covering 410 d... |
We propose a novel method to enhance the feature learning of person re-identification. We utilize | person re-identification images | 2,019 | [
"CUHK02",
"CUHK-PEDES",
"P-DESTRE",
"Airport"
] | [
"Market-1501",
"CUHK03"
] | [
{
"dkey": "Market-1501",
"dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person... |
I am trying to use a machine to answer a question about the world around us. | open-domain question answering text | 2,018 | [
"ProofWriter",
"CommonsenseQA",
"DuoRC",
"DAQUAR",
"QuAC"
] | [
"ARC",
"NewsQA",
"SearchQA",
"SQuAD"
] | [
{
"dkey": "ARC",
"dval": "The AI2’s Reasoning Challenge (ARC) dataset is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to grade 9. The dataset is split in two partitions: Easy and Challenge, where the latter partition contains the more difficult questions... |
We propose a novel unsupervised approach for single image dehazing. Our method first estimates | image dehazing | 2,019 | [
"MVTecAD",
"NH-HAZE",
"RESIDE",
"Localized Narratives",
"GVGAI",
"Make3D",
"I-HAZE"
] | [
"STARE",
"DRIVE"
] | [
{
"dkey": "STARE",
"dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.."
},
{
"dkey": "DRIVE",
"dval": "The Digital Retinal I... |
An accuracy predictor for deep neural network architectures for image classification. | image classification images | 2,018 | [
"UNITOPATHO",
"COWC",
"Birdsnap",
"CODEBRIM",
"30MQA",
"Multi Task Crowd",
"GoPro"
] | [
"ImageNet",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to train a supervised model for optic disc localization. | optic disc localization images | 2,009 | [
"G1020",
"SNIPS",
"MVSEC",
"AVE",
"ConvAI2",
"ADAM"
] | [
"STARE",
"DRIVE"
] | [
{
"dkey": "STARE",
"dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.."
},
{
"dkey": "DRIVE",
"dval": "The Digital Retinal I... |
This is the first survey of handcrafted and learning-based representation for human | har videos paragraph-level | 2,017 | [
"NELL",
"Kinetics",
"MPIIGaze",
"NetHack Learning Environment",
"REDS",
"Icentia11K",
"CONVERSE"
] | [
"UCF101",
"ActivityNet",
"HMDB51"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
A generative adversarial network is designed to extract person features invariant to pose variations. | person re-identification images | 2,018 | [
"FDF",
"EYEDIAP",
"ISTD",
"Raindrop",
"UMDFaces",
"WinoGrande"
] | [
"DukeMTMC-reID",
"Market-1501"
] | [
{
"dkey": "DukeMTMC-reID",
"dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein ... |
I want to detect activities in continuous video streams. | temporal activity detection video | 2,019 | [
"MEVA",
"NAB",
"Stream-51",
"MLB-YouTube Dataset",
"TVQA",
"SYNTHIA-AL",
"UT-Interaction"
] | [
"Charades",
"ActivityNet"
] | [
{
"dkey": "Charades",
"dval": "The Charades dataset is composed of 9,848 videos of daily indoors activities with an average length of 30 seconds, involving interactions with 46 objects classes in 15 types of indoor scenes and containing a vocabulary of 30 verbs leading to 157 action classes. Each video in t... |
I want to detect grasshoppers in images automatically using deep learning. | insect detection images | 2,020 | [
"CHB-MIT",
"FLAME",
"COWC",
"Flightmare Simulator",
"DeepFix"
] | [
"ImageNet",
"COCO"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
Object detection in images. | object detection image | 2,019 | [
"Open Images V4",
"FAT",
"Objects365",
"DUT-OMRON",
"HICO-DET"
] | [
"COCO",
"ECSSD"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
I want to build a multi-body feature tracker using only feature correspondences. | feature tracking video | 2,016 | [
"MLPF",
"MSAW",
"Deep Fashion3D",
"TVQA",
"Oxford5k",
"SCUT-HEAD",
"Flightmare Simulator"
] | [
"Hopkins155",
"KITTI"
] | [
{
"dkey": "Hopkins155",
"dval": "The Hopkins 155 dataset consists of 156 video sequences of two or three motions. Each video sequence motion corresponds to a low-dimensional subspace. There are 39−550 data vectors drawn from two or three motions for each video sequence."
},
{
"dkey": "KITTI",
"d... |
Video classification can be achieved by exploiting static and motion information in video. | video classification | 2,019 | [
"Drive&Act",
"MLB-YouTube Dataset",
"MovieShots",
"Hopkins155",
"JIGSAWS"
] | [
"UCF101",
"HMDB51"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
We investigate the problem of cross-dataset adaptation for visual question answering | cross-dataset adaptation visual question answering images questions paragraph-level | 2,018 | [
"COG",
"LEAF-QA",
"KorQuAD",
"XQA",
"TechQA",
"VizWiz"
] | [
"COCO",
"Visual7W"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |