| query | keyphrase_query | year | negative_cands | positive_cands | abstracts |
|---|---|---|---|---|---|
I want to implement a real-time action detection system. | action detection video | 2,017 | [
"NAB",
"G3D",
"ESAD",
"BAR",
"SoccerDB"
] | [
"UCF101",
"COCO"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
We propose a deep learning based framework for image relighting. It consists of a generator network which | image relighting images | 2,019 | [
"AMASS",
"Places",
"GoPro",
"UNSW-NB15"
] | [
"CARLA",
"KITTI"
] | [
{
"dkey": "CARLA",
"dval": "CARLA (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. Technically, it operates similarly to, as an open source layer over Unreal Engine 4 that provides sensors in the form of RGB cameras (with customizable posi... |
I want to select sentences to support my answers for the multi-hop questions. | multi-hop question answering text | 2,019 | [
"WikiHop",
"CommonsenseQA",
"HybridQA",
"GYAFC",
"BiPaR",
"QNLI",
"QED"
] | [
"ARC",
"MultiRC"
] | [
{
"dkey": "ARC",
"dval": "The AI2’s Reasoning Challenge (ARC) dataset is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to grade 9. The dataset is split in two partitions: Easy and Challenge, where the latter partition contains the more difficult questions... |
This paper proposes a novel self-guiding LSTM (sg-LSTM) image caption | image captioning images text | 2,019 | [
"Bengali Hate Speech",
"Weibo NER",
"nocaps",
"AOLP",
"MSU-MFSD",
"MVSEC"
] | [
"COCO",
"Flickr30k"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
I want to train a model to find the referred object within the image according to the natural | natural language object retrieval images paragraph-level | 2,017 | [
"SNIPS",
"COVERAGE",
"ConvAI2",
"Image and Video Advertisements",
"Market-1501",
"CLEVR-Hans"
] | [
"COCO",
"ReferItGame"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
This is the source code of my paper. | language modeling | 2,019 | [
"CommonsenseQA",
"SNIPS",
"PHINC",
"ConvAI2",
"CONCODE"
] | [
"WebText",
"WikiText-103"
] | [
{
"dkey": "WebText",
"dval": "WebText is an internal OpenAI corpus created by scraping web pages with emphasis on\ndocument quality. The authors scraped all outbound links from\nReddit which received at least 3\nkarma. The authors used the approach as a heuristic indicator for\nwhether other users found the... |
We propose an end-to-end model for cross-lingual transfer learning for question answering. We | question answering text | 2,019 | [
"iVQA",
"ReQA",
"EXAMS",
"XQA",
"XQuAD"
] | [
"DRCD",
"NewsQA",
"SQuAD"
] | [
{
"dkey": "DRCD",
"dval": "Delta Reading Comprehension Dataset (DRCD) is an open domain traditional Chinese machine reading comprehension (MRC) dataset. This dataset aimed to be a standard Chinese machine reading comprehension dataset, which can be a source dataset in transfer learning. The dataset contains... |
The proposed model can learn to disentangle appearance and geometric information from image and video sequences in | image/video editing | 2,018 | [
"Moving MNIST",
"irc-disentanglement",
"REDS",
"3DMatch",
"ABC Dataset",
"MAFL"
] | [
"CIFAR-10",
"CelebA"
] | [
{
"dkey": "CIFAR-10",
"dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck),... |
A novel cascaded CNN scheme for accurate face landmark localization. | face landmark localization images | 2,018 | [
"WFLW",
"AFLW2000-3D",
"UTKFace",
"LS3D-W",
"LaPa"
] | [
"Helen",
"AFW"
] | [
{
"dkey": "Helen",
"dval": "The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline."
},
{
"dkey": "AFW",
"dval": "AFW (Annotated Faces in the Wild) is a face det... |
We propose a unified model that combines the strengths of two well-established deformable model approaches to the face alignment | face alignment images | 2,015 | [
"iFakeFaceDB",
"PANDORA",
"MaskedFace-Net",
"SpeakingFaces",
"EPIC-KITCHENS-100",
"Scan2CAD"
] | [
"AFW",
"LFPW"
] | [
{
"dkey": "AFW",
"dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box."
},
{
"dkey": "LFPW",
"dval": "The Labeled Face Parts in-the-Wil... |
We report the results of our replication study on BERT pretraining. Our best model outperforms every published | language model pretraining text | 2,019 | [
"GSL",
"THEODORE",
"ReCAM",
"BDD100K",
"Horne 2017 Fake News Data"
] | [
"QNLI",
"MRPC",
"RACE",
"GLUE",
"SQuAD"
] | [
{
"dkey": "QNLI",
"dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) co... |
We present a simple and effective model for learning general purpose sentence representations. Our model uses a single | natural language inference text | 2,017 | [
"GLUE",
"SuperGLUE",
"Fluent Speech Commands",
"BDD100K"
] | [
"SNLI",
"MultiNLI"
] | [
{
"dkey": "SNLI",
"dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and a... |
This paper proposes a new multiple-choice reading comprehension (MCRC) model which performs | multiple-choice reading comprehension text paragraph-level | 2,019 | [
"DREAM",
"DROP",
"CosmosQA",
"OneStopQA",
"C3",
"VisualMRC"
] | [
"RACE",
"SQuAD"
] | [
{
"dkey": "RACE",
"dval": "The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 questions from English exams, targeting Chinese students aged 12-18. RACE consists of two subsets, RACE-M and RACE-H, from middle ... |
I have been reading about blood vessel segmentation and tried to reproduce the results. | retinal blood vessel segmentation images | 2,017 | [
"IntrA",
"COCO-Tasks",
"ORVS",
"SUN3D",
"ROSE"
] | [
"STARE",
"DRIVE"
] | [
{
"dkey": "STARE",
"dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.."
},
{
"dkey": "DRIVE",
"dval": "The Digital Retinal I... |
I want to build a model to automatically determine whether an image is acceptable for diagnosis. | fundus image quality classification images | 2,018 | [
"ACDC",
"SemEval 2014 Task 4 Sub Task 2",
"QNLI",
"Image and Video Advertisements",
"IntrA",
"Violin"
] | [
"STARE",
"DRIVE"
] | [
{
"dkey": "STARE",
"dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.."
},
{
"dkey": "DRIVE",
"dval": "The Digital Retinal I... |
We propose an end-to-end framework to reconstruct the 3D scene from | semantic reconstruction indoor scenes images | 2,020 | [
"DIPS",
"MLe2e",
"E2E",
"DeeperForensics-1.0",
"THEODORE",
"DDD20"
] | [
"COCO",
"Pix3D"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
We investigate the effectiveness of different pre-trained language models for Question Answering (QA) on four | question answering text | 2,019 | [
"How2QA",
"TweetQA",
"SQuAD-shifts",
"PAQ",
"TVQA"
] | [
"CoQA",
"SQuAD"
] | [
{
"dkey": "CoQA",
"dval": "CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation.\n\nCoQA contains 1... |
I want to use a supervised model to recognize activities from low-resolution videos. | extreme low-resolution activity recognition images | 2,019 | [
"TinyVIRAT",
"DAiSEE",
"DIV2K",
"UCF-Crime",
"MPII Cooking 2 Dataset",
"Composable activities dataset",
"FaceForensics"
] | [
"UCF101",
"HMDB51"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
I want to learn an action recognition model from trimmed videos. | action recognition videos | 2,019 | [
"Kinetics-600",
"Kinetics",
"AViD",
"DISFA",
"JHMDB",
"MTL-AQA",
"EPIC-KITCHENS-100"
] | [
"UCF101",
"ActivityNet"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
I want to train a supervised model that is robust to adversarial perturbations. | adversarial robustness image classification | 2,018 | [
"ImageNet-P",
"NYU-VP",
"eQASC",
"SNIPS",
"DailyDialog++",
"APRICOT",
"Clothing1M"
] | [
"ImageNet",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
We propose a simple yet effective approach to exploit the available dense depth | 3d semantic labeling images dense depth maps outdoor street scenes | 2,018 | [
"DocBank",
"IMDB-BINARY",
"REDDIT-BINARY",
"Localized Narratives",
"SBU Captions Dataset",
"Shiny dataset"
] | [
"SYNTHIA",
"Cityscapes"
] | [
{
"dkey": "SYNTHIA",
"dval": "The SYNTHIA dataset is a synthetic dataset that consists of 9400 multi-viewpoint photo-realistic frames rendered from a virtual city and comes with pixel-level semantic annotations for 13 classes. Each frame has resolution of 1280 × 960."
},
{
"dkey": "Cityscapes",
... |
A novel hybrid convolutional and transformer model, WaLDORf, that achieves state-of-the- | nlu text | 2,019 | [
"BraTS 2017",
"THEODORE",
"Glint360K",
"GTEA",
"PG-19",
"LibriSpeech",
"Multi-PIE"
] | [
"QNLI",
"GLUE",
"SQuAD"
] | [
{
"dkey": "QNLI",
"dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) co... |
Visual question answering (VQA) is an important task in the field of computer | visual question answering images natural language | 2,016 | [
"VizWiz",
"ST-VQA",
"VQA-E",
"TDIUC"
] | [
"DBpedia",
"COCO",
"DAQUAR"
] | [
{
"dkey": "DBpedia",
"dval": "DBpedia (from \"DB\" for \"database\") is a project aiming to extract structured content from the information created in the Wikipedia project. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datas... |
I want to build an effective tracking model based on a simple tracking framework. | tracking image sequences | 2,019 | [
"SNIPS",
"ProPara",
"Frames Dataset",
"PoseTrack"
] | [
"Penn Treebank",
"OTB"
] | [
{
"dkey": "Penn Treebank",
"dval": "The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall Street Journal (WSJ), is one of the most known and used corpus for the evaluation of models for sequence labelling. The task consists of annotating ea... |
I want to use distant supervision to extract evidence sentences from reference documents for MRC tasks. | machine reading comprehension text paragraph-level | 2,019 | [
"DocRED",
"Delicious",
"ELI5",
"Melinda",
"DWIE",
"FOBIE"
] | [
"RACE",
"SearchQA",
"MultiNLI"
] | [
{
"dkey": "RACE",
"dval": "The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 questions from English exams, targeting Chinese students aged 12-18. RACE consists of two subsets, RACE-M and RACE-H, from middle ... |
I want to train a classifier to classify objects in images. | object classification images | 2,018 | [
"GYAFC",
"UCF101",
"Chinese Classifier",
"Food-101",
"SNIPS",
"StreetStyle"
] | [
"ImageNet",
"CelebA"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to use a video inpainting model to inpaint the frames of | video frame inpainting | 2,020 | [
"FVI",
"DeepFashion",
"DTD",
"OpenEDS",
"SNIPS",
"FaceForensics"
] | [
"UCF101",
"HMDB51"
] | [
{
"dkey": "UCF101",
"dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). T... |
An end-to-end hierarchical action recognition architecture. | action recognition video | 2,017 | [
"EPIC-KITCHENS-55",
"CCPD",
"E2E",
"PixelHelp",
"MLe2e",
"RCTW-17",
"DDD20"
] | [
"ImageNet",
"HMDB51"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I'm training a question answering system on the SQuAD dataset. | long-form question answering text paragraph-level | 2,019 | [
"Spoken-SQuAD",
"SQuAD",
"SQuAD-shifts",
"MultiReQA",
"TweetQA",
"QNLI"
] | [
"ELI5",
"WikiSum"
] | [
{
"dkey": "ELI5",
"dval": "ELI5 is a dataset for long-form question answering. It contains 270K complex, diverse questions that require explanatory multi-sentence answers. Web search results are used as evidence documents to answer each question.\n\nELI5 is also a task in Dodecadialogue."
},
{
"dkey... |
A novel CNN architecture for face detection. The main contribution is a new loss layer for CNNs, which | face detection image | 2,016 | [
"MMED",
"THEODORE",
"AFLW2000-3D",
"MSU-MFSD",
"CNN/Daily Mail",
"ReCoRD",
"MLSUM"
] | [
"COFW",
"AFLW"
] | [
{
"dkey": "COFW",
"dval": "The Caltech Occluded Faces in the Wild (COFW) dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.... |
The proposed attention-based adversarial defense framework consists of a two-stage pipeline. The first stage is designed | adversarial defense images | 2,018 | [
"AnimalWeb",
"Raindrop",
"DramaQA",
"ECSSD",
"Fakeddit",
"WinoGrande",
"Spoken-SQuAD"
] | [
"ImageNet",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to use a CNN-based tracking model. | visual tracking video | 2,019 | [
"SNIPS",
"LAG",
"AFLW2000-3D",
"DiCOVA",
"ConvAI2"
] | [
"OTB",
"VOT2017"
] | [
{
"dkey": "OTB",
"dval": "Object Tracking Benchmark (OTB) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. OTB-201... |
Instance mask projection is an end-to-end trainable operator that projects instance | semantic segmentation images top-view grid map sequences autonomous driving | 2,019 | [
"WikiReading",
"KnowledgeNet",
"THEODORE",
"PKU-MMD",
"LSHTC",
"SOBA",
"ISBDA"
] | [
"COCO",
"Cityscapes"
] | [
{
"dkey": "COCO",
"dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K imag... |
I want to train a model to answer questions from text. | question answering text | 2,018 | [
"TextVQA",
"RecipeQA",
"CommonsenseQA",
"BREAK",
"TrecQA",
"Spoken-SQuAD"
] | [
"WebQuestions",
"SQuAD",
"TriviaQA"
] | [
{
"dkey": "WebQuestions",
"dval": "The WebQuestions dataset is a question answering dataset using Freebase as the knowledge base and contains 6,642 question-answer pairs. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. The origina... |
3D object recognition is an important component of many vision and robotics systems. | 3d object recognition voxels pixels | 2,016 | [
"OCID",
"3DNet",
"Flightmare Simulator",
"HoME"
] | [
"ImageNet",
"ModelNet"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
I want to detect facial landmark and components simultaneously. | landmark-region-based facial detection images | 2,019 | [
"AFLW2000-3D",
"300-VW",
"LS3D-W",
"AFLW",
"AffectNet"
] | [
"Helen",
"AFW"
] | [
{
"dkey": "Helen",
"dval": "The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline."
},
{
"dkey": "AFW",
"dval": "AFW (Annotated Faces in the Wild) is a face det... |
I would like to implement the Neural Architecture Search (NAS) approach and apply it | neural architecture search | 2,019 | [
"NAS-Bench-201",
"NAS-Bench-101",
"NATS-Bench",
"NAS-Bench-1Shot1",
"30MQA"
] | [
"ImageNet",
"Caltech-101",
"CIFAR-10"
] | [
{
"dkey": "ImageNet",
"dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released data... |
Task: MCQ with multiple correct answers.
Dataset: Recommendation of datasets to validate a research question.
This dataset is derived from the DataFinder dataset. We curate the abstracts of each dataset from PapersWithCode.
Each instance provides a short query describing a research question, along with keyphrases relevant to the query.
The original training set of the DataFinder dataset has positive and negative candidates for each query, intended for training a contrastive model.
Our objective is to convert the dataset into an MCQ question-answering task with multiple correct answers. We also add abstracts from the research papers introducing the datasets, so that context can be provided to the models.
To reproduce the construction of this dataset, please visit https://github.com/shruti-singh/scidata_recommendation.
Please note that the query instances in this dataset have no intersection with the dataset_recommendation_mcq_sc dataset. dataset_recommendation_mcq_sc is a variant of this MCQ question-answering task with only a single correct answer.
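The conversion described above — merging positive and negative candidates into a shuffled option list and recording which options are correct — can be sketched in plain Python. This is a minimal illustration, not the official construction script (see the GitHub repository above for that); the field names follow the preview table, and `build_mcq` is a hypothetical helper name.

```python
import random

def build_mcq(row, seed=0):
    """Turn one DataFinder-style row into an MCQ instance with
    multiple correct answers. Field names follow the preview table."""
    # Pool all candidates and shuffle so positives are not always last.
    options = list(row["negative_cands"]) + list(row["positive_cands"])
    rng = random.Random(seed)
    rng.shuffle(options)
    # Indices of the correct options after shuffling.
    answer_indices = sorted(options.index(d) for d in row["positive_cands"])
    # Abstracts keyed by dataset name, used as model context.
    context = {a["dkey"]: a["dval"] for a in row.get("abstracts", [])}
    return {
        "question": row["query"],
        "keyphrases": row["keyphrase_query"],
        "options": options,
        "answer_indices": answer_indices,
        "context": context,
    }

# Example using the first preview row (abstract truncated as in the viewer):
row = {
    "query": "I want to implement a real-time action detection system.",
    "keyphrase_query": "action detection video",
    "negative_cands": ["NAB", "G3D", "ESAD", "BAR", "SoccerDB"],
    "positive_cands": ["UCF101", "COCO"],
    "abstracts": [{"dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50..."}],
}
mcq = build_mcq(row)
```

Here `mcq["options"]` holds all seven candidates in shuffled order, and `mcq["answer_indices"]` points at the two correct ones (UCF101 and COCO).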