Columns: prompts (string, lengths 81–413), metrics_response (string, lengths 0–371)
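A minimal sketch of how the two flattened columns below could be paired back into (prompt, response) rows. The sample strings and the `pair_rows` helper are illustrative assumptions, not part of the dataset; in practice the rows would be loaded from the dataset file itself.

```python
# Illustrative sample rows copied from the listing below.
prompts = [
    "What metrics were used to measure the BLIP-2 ViT-G OPT 2.7B (zero-shot) "
    "model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen "
    "Image Encoders and Large Language Models paper on the GQA test-dev dataset?",
    "What metrics were used to measure the PDVN model in the Retrosynthetic "
    "Planning with Dual Value Networks paper on the USPTO-190 dataset?",
]
metrics_response = [
    "Accuracy",
    "Success Rate (100 model calls), Success Rate (500 model calls)",
]

def pair_rows(prompts, responses):
    """Zip the two parallel columns into (prompt, response) rows."""
    if len(prompts) != len(responses):
        raise ValueError("column length mismatch")
    return list(zip(prompts, responses))

rows = pair_rows(prompts, metrics_response)
for prompt, response in rows:
    print(f"{prompt}\n  -> {response}")
```

Responses may list one metric or several comma-separated metrics, so downstream code may want to split each response on ", " before comparing against gold labels.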
What metrics were used to measure the BLIP-2 ViT-G OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-L OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the FewVLM (zero-shot) model in the A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the GPT4RoI model in the GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the ERNIE-ViL-large(ensemble of 15 models) model in the ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the UNITER-large (ensemble of 10 models) model in the UNITER: UNiversal Image-TExt Representation Learning paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the UNITER (Large) model in the UNITER: UNiversal Image-TExt Representation Learning paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the KVL-BERTLARGE model in the KVL-BERT: Knowledge Enhanced Visual-and-Linguistic BERT for Visual Commonsense Reasoning paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the VL-BERTLARGE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the VL-T5 model in the Unifying Vision-and-Language Tasks via Text Generation paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VCR (QA-R) test dataset?
Accuracy
What metrics were used to measure the CRCT model in the Classification-Regression for Chart Comprehension paper on the PlotQA-D2 dataset?
1:1 Accuracy
What metrics were used to measure the PlotQA model in the PlotQA: Reasoning over Scientific Plots paper on the PlotQA-D2 dataset?
1:1 Accuracy
What metrics were used to measure the PReFIL model in the Answering Questions about Data Visualizations using Efficient Bimodal Fusion paper on the PlotQA-D2 dataset?
1:1 Accuracy
What metrics were used to measure the Graph VQA model in the Graph-Structured Representations for Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract 1.0 multiple choice dataset?
Percentage correct
What metrics were used to measure the Dualnet ensemble model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract 1.0 multiple choice dataset?
Percentage correct
What metrics were used to measure the LSTM + global features model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract 1.0 multiple choice dataset?
Percentage correct
What metrics were used to measure the LSTM blind model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract 1.0 multiple choice dataset?
Percentage correct
What metrics were used to measure the VLAB model in the VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the MaMMUT model in the MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the MuLTI model in the MuLTI: Efficient Video-and-Language Understanding with MultiWay-Sampler and Multiple Choice Modeling paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Flamingo model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the FrozenBiLM+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the VideoCoCa model in the VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the HBI model in the Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the EMCL-Net model in the Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Co-Tokenization model in the Video Question Answering with Iterative Video-Text Co-Tokenization paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the All-in-one-B model in the All in One: Exploring Unified Video-Language Pre-training paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the OmniVL model in the OmniVL: One Foundation Model for Image-Language and Video-Language Tasks paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the ALPRO model in the Align and Prompt: Video-and-Language Pre-training with Entity Prompts paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the LRCE model in the Lightweight Recurrent Cross-modal Encoder for Video Question Answering paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the JustAsk+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Just Ask model in the Just Ask: Learning to Answer Questions from Millions of Narrated Videos paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the All-in-one+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the CLIPBERT model in the Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the HCRN model in the Hierarchical Conditional Relation Networks for Video Question Answering paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the DualVGR model in the DualVGR: A Dual-Visual Graph Reasoning Unit for Video Question Answering paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the SSML model in the Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the HMEMA model in the Heterogeneous Memory Enhanced Multimodal Attention Model for Video Question Answering paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Co-Mem model in the Motion-Appearance Co-Memory Networks for Video Question Answering paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Flamingo (32-shot) model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the ST-VQA model in the TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Flamingo (0-shot) model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the MSRVTT-QA dataset?
Accuracy
What metrics were used to measure the Unified-IOXL model in the Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks paper on the GRIT dataset?
VQA (ablation), VQA (test)
What metrics were used to measure the GPV-2 model in the Webly Supervised Concept Expansion for General Purpose Vision Models paper on the GRIT dataset?
VQA (ablation), VQA (test)
What metrics were used to measure the VL-BERTLARGE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (Q-AR) dev dataset?
Accuracy
What metrics were used to measure the VL-BERTBASE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (Q-AR) dev dataset?
Accuracy
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VCR (Q-AR) dev dataset?
Accuracy
What metrics were used to measure the MAC model in the QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning paper on the QLEVR dataset?
Overall Accuracy
What metrics were used to measure the CNN+LSTM model in the QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning paper on the QLEVR dataset?
Overall Accuracy
What metrics were used to measure the BERT model in the QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning paper on the QLEVR dataset?
Overall Accuracy
What metrics were used to measure the LSTM model in the QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning paper on the QLEVR dataset?
Overall Accuracy
What metrics were used to measure the Q-type model in the QLEVR: A Diagnostic Dataset for Quantificational Language and Elementary Visual Reasoning paper on the QLEVR dataset?
Overall Accuracy
What metrics were used to measure the CLIP-Ensemble model in the Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model paper on the VizWiz 2020 Answerability dataset?
average_precision, f1_score
What metrics were used to measure the CLIP-Single model in the Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model paper on the VizWiz 2020 Answerability dataset?
average_precision, f1_score
What metrics were used to measure the VT-Transformer (MUL) model on the VizWiz 2020 Answerability dataset?
average_precision, f1_score
What metrics were used to measure the VT-Transformer (CAT) model on the VizWiz 2020 Answerability dataset?
average_precision, f1_score
What metrics were used to measure the BERT-RG-Regression model on the VizWiz 2020 Answerability dataset?
average_precision, f1_score
What metrics were used to measure the VWTest1 model on the VizWiz 2020 Answerability dataset?
average_precision, f1_score
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the TGIF-QA dataset?
Accuracy
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the TGIF-QA dataset?
Accuracy
What metrics were used to measure the OFA-X-MT model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VQA-X dataset?
Accuracy
What metrics were used to measure the OFA-X model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VQA-X dataset?
Accuracy
What metrics were used to measure the VL-BERTLARGE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (QA-R) dev dataset?
Accuracy
What metrics were used to measure the VL-BERTBASE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (QA-R) dev dataset?
Accuracy
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VCR (QA-R) dev dataset?
Accuracy
What metrics were used to measure the PDVN model in the Retrosynthetic Planning with Dual Value Networks paper on the USPTO-190 dataset?
Success Rate (100 model calls), Success Rate (500 model calls)
What metrics were used to measure the RetroGraph model in the RetroGraph: Retrosynthetic Planning with Graph Search paper on the USPTO-190 dataset?
Success Rate (100 model calls), Success Rate (500 model calls)
What metrics were used to measure the EG-MCTS model in the Retrosynthetic Planning with Experience-Guided Monte Carlo Tree Search paper on the USPTO-190 dataset?
Success Rate (100 model calls), Success Rate (500 model calls)
What metrics were used to measure the Retro* plus model in the Self-Improved Retrosynthetic Planning paper on the USPTO-190 dataset?
Success Rate (100 model calls), Success Rate (500 model calls)
What metrics were used to measure the Retro* model in the Retro*: Learning Retrosynthetic Planning with Neural Guided A* Search paper on the USPTO-190 dataset?
Success Rate (100 model calls), Success Rate (500 model calls)
What metrics were used to measure the VIOLETv2 model in the An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling paper on the MSRVTT-MC dataset?
Accuracy
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the MSRVTT-MC dataset?
Accuracy
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the MSRVTT-MC dataset?
Accuracy
What metrics were used to measure the Singularity-temporal model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the MSRVTT-MC dataset?
Accuracy
What metrics were used to measure the Singularity model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the MSRVTT-MC dataset?
Accuracy
What metrics were used to measure the TimeSformer model in the Is Space-Time Attention All You Need for Video Understanding? paper on the Howto100M-QA dataset?
Accuracy
What metrics were used to measure the Multi (text + video, IO) model in the WildQA: In-the-Wild Video Question Answering paper on the WildQA dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Multi (text + video, SE) model in the WildQA: In-the-Wild Video Question Answering paper on the WildQA dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the T5 (text) model in the WildQA: In-the-Wild Video Question Answering paper on the WildQA dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the T5 (text + video) model in the WildQA: In-the-Wild Video Question Answering paper on the WildQA dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the T5 (text, zero-shot) model in the WildQA: In-the-Wild Video Question Answering paper on the WildQA dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Tem-adapter model in the Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer paper on the SUTD-TrafficQA dataset?
1/4, 1/2
What metrics were used to measure the Eclipse model in the SUTD-TrafficQA: A Question Answering Benchmark and an Efficient Network for Video Reasoning over Traffic Events paper on the SUTD-TrafficQA dataset?
1/4, 1/2
What metrics were used to measure the HCRN model in the Hierarchical Conditional Relation Networks for Video Question Answering paper on the SUTD-TrafficQA dataset?
1/4, 1/2
What metrics were used to measure the TVQA model in the TVQA: Localized, Compositional Video Question Answering paper on the SUTD-TrafficQA dataset?
1/4, 1/2
What metrics were used to measure the VIS+LST model in the Exploring Models and Data for Image Question Answering paper on the SUTD-TrafficQA dataset?
1/4, 1/2
What metrics were used to measure the Text + Text (no Multimodal Pretext Training) model in the Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval paper on the iVQA dataset?
Accuracy
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the iVQA dataset?
Accuracy
What metrics were used to measure the VideoCoCa model in the VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners paper on the iVQA dataset?
Accuracy
What metrics were used to measure the Co-Tokenization model in the Video Question Answering with Iterative Video-Text Co-Tokenization paper on the iVQA dataset?
Accuracy
What metrics were used to measure the Just Ask model in the Just Ask: Learning to Answer Questions from Millions of Narrated Videos paper on the iVQA dataset?
Accuracy