Dataset schema: two string columns, prompts (81–413 characters) and metrics_response (0–371 characters). Each record pairs a question about which metrics a paper reports for a given model on a given benchmark with the corresponding list of metrics.
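Below is a minimal sketch of loading and querying these records, assuming the dump has been exported as a JSONL file with the two fields named above; the filename metrics_qa.jsonl and the JSONL export format are assumptions, not part of the source.

import json

# Load prompt/response records from a JSONL export with the two
# columns described above ("prompts", "metrics_response").
# The path and export format are hypothetical.
def load_records(path: str) -> list[dict]:
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Example query: collect every metrics_response whose prompt
# mentions a given benchmark name.
def responses_for_dataset(records: list[dict], dataset: str) -> list[str]:
    return [r["metrics_response"] for r in records if dataset in r["prompts"]]

if __name__ == "__main__":
    records = load_records("metrics_qa.jsonl")  # hypothetical filename
    for resp in responses_for_dataset(records, "VQA v2 test-std"):
        print(resp)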
What metrics were used to measure the VLC-BERT model in the VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the ReVeaL model in the REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the HDU-USYD-UNCC model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 2.0 open ended dataset?
Percentage correct
What metrics were used to measure the DLAIT model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 2.0 open ended dataset?
Percentage correct
What metrics were used to measure the MCB model in the Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 2.0 open ended dataset?
Percentage correct
What metrics were used to measure the d-LSTM+nI model in the Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 2.0 open ended dataset?
Percentage correct
What metrics were used to measure the BERT LARGE Baseline model in the DocVQA: A Dataset for VQA on Document Images paper on the DocVQA val dataset?
ANLS, Accuracy
What metrics were used to measure the ensemble_two_best model in the paper on the VizWiz 2018 Answerability dataset?
average_precision, f1_score
What metrics were used to measure the VL-BERTLARGE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (Q-A) dev dataset?
Accuracy
What metrics were used to measure the VL-BERTBASE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (Q-A) dev dataset?
Accuracy
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VCR (Q-A) dev dataset?
Accuracy
What metrics were used to measure the BEiT-3 model in the Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the mPLUG-Huge model in the mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the ONE-PEACE model in the ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the VLMo model in the VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Florence model in the Florence: A New Foundation Model for Computer Vision paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the SimVLM model in the SimVLM: Simple Visual Language Model Pretraining with Weak Supervision paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Prompt Tuning model in the Prompt Tuning for Generative Multimodal Pretrained Models paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Prismer model in the Prismer: A Vision-Language Model with An Ensemble of Experts paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the MSR + MS Cog. Svcs. (X10 models) model in the VinVL: Revisiting Visual Representations in Vision-Language Models paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the MSR + MS Cog. Svcs. model in the VinVL: Revisiting Visual Representations in Vision-Language Models paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the ALBEF (14M) model in the Align before Fuse: Vision and Language Representation Learning with Momentum Distillation paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the BGN, ensemble model in the Bilinear Graph Networks for Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the ERNIE-ViL-single model in the ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Single, w/o VLP model in the In Defense of Grid Features for Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Single, w/o VLP model in the Deep Multimodal Neural Architecture Search paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the UNITER (Large) model in the UNITER: UNiversal Image-TExt Representation Learning paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the X-101 grid features + MCAN model in the In Defense of Grid Features for Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the LXMERT model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the VL-BERTLARGE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the MCAN+VC model in the Visual Commonsense R-CNN paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the MCANed-6 model in the Deep Modular Co-Attention Networks for Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Unified VLP model in the Unified Vision-Language Pre-Training for Image Captioning and VQA paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the BAN+Glove+Counter model in the Bilinear Attention Networks paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Up-Down model in the Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Image features from bottom-up attention (adaptive K, ensemble) model in the Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Caption VQA model in the Generating Question Relevant Captions to Aid Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the MuRel model in the MUREL: Multimodal Relational Reasoning for Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the DMN model in the Learning to Count Objects in Natural Images for Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the BLOCK model in the BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the MUTAN model in the MUTAN: Multimodal Tucker Fusion for Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the 2D continuous softmax model in the Sparse and Continuous Attention Mechanisms paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the MCB model in the Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Language-only model in the Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the Prior model in the Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering paper on the VQA v2 test-std dataset?
overall, yes/no, number, other
What metrics were used to measure the CMN model in the Modeling Relationships in Referential Expressions with Compositional Modular Networks paper on the Visual7W dataset?
Percentage correct
What metrics were used to measure the CTI (with Boxes) model in the Compact Trilinear Interaction for Visual Question Answering paper on the Visual7W dataset?
Percentage correct
What metrics were used to measure the CFR model in the Coarse-to-Fine Reasoning for Visual Question Answering paper on the Visual7W dataset?
Percentage correct
What metrics were used to measure the MCB+Att. model in the Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper on the Visual7W dataset?
Percentage correct
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XXL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the PNP-VQA model in the Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-L FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-G OPT 6.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-G OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-L OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the Few VLM (zero-shot) model in the A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the MetaLM model in the Language Models are General-Purpose Interfaces paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the VLKD(ViT-B/16) model in the Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the Frozen model in the Multimodal Few-Shot Learning with Frozen Language Models paper on the VQA v2 val dataset?
Accuracy
What metrics were used to measure the CSS model in the Counterfactual Samples Synthesizing for Robust Visual Question Answering paper on the VQA-CP dataset?
Score
What metrics were used to measure the GGE-DQ model in the Greedy Gradient Ensemble for Robust Visual Question Answering paper on the VQA-CP dataset?
Score
What metrics were used to measure the LMH+Entropy regularization (Ensemble) model in the Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies paper on the VQA-CP dataset?
Score
What metrics were used to measure the LMH+Entropy regularization model in the Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies paper on the VQA-CP dataset?
Score
What metrics were used to measure the Learned-Mixin +H model in the Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases paper on the VQA-CP dataset?
Score
What metrics were used to measure the UpDn+SCR (VQA-X) model in the Self-Critical Reasoning for Robust Visual Question Answering paper on the VQA-CP dataset?
Score
What metrics were used to measure the RUBi model in the RUBi: Reducing Unimodal Biases in Visual Question Answering paper on the VQA-CP dataset?
Score
What metrics were used to measure the NSM model in the Learning by Abstraction: The Neural State Machine paper on the VQA-CP dataset?
Score
What metrics were used to measure the MuRel model in the MUREL: Multimodal Relational Reasoning for Visual Question Answering paper on the VQA-CP dataset?
Score
What metrics were used to measure the HAN model in the Learning Visual Question Answering by Bootstrapping Hard Attention paper on the VQA-CP dataset?
Score
What metrics were used to measure the BLIP2 FlanT5-XXL (Fine-tuned) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Exact Match, BEM
What metrics were used to measure the BLIP2 FlanT5-XL (Fine-tuned) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Exact Match, BEM
What metrics were used to measure the BLIP2 FlanT5-XXL (Zero-shot) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Exact Match, BEM
What metrics were used to measure the OFA Large model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Exact Match, BEM
What metrics were used to measure the BLIP Large model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Exact Match, BEM
What metrics were used to measure the BLIP2 FlanT5-XXL (Text-only FT) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Exact Match, BEM
What metrics were used to measure the LXR955, No Ensemble model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the fw_vqa_ model in the paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the Pythia v0.3 model in the Towards VQA Models That Can Read paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the B-Ultra model in the Decoupled Box Proposal and Featurization with Ultrafine-Grained Semantic Labels Improve Image Captioning and Visual Question Answering paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the DVW model in the paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the DVizWiz model in the paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the BAN model in the paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the ss model in the paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the hdhs model in the paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the Colin model in the paper on the VizWiz 2018 dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the CFR model in the Coarse-to-Fine Reasoning for Visual Question Answering paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the NSM model in the Learning by Abstraction: The Neural State Machine paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the LXMERT (Pre-train + scratch) model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the single-hop + LCGN (ours) model in the Language-Conditioned Graph Networks for Relational Reasoning paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XXL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-L FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the PNP-VQA model in the Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training paper on the GQA test-dev dataset?
Accuracy
What metrics were used to measure the BLIP-2 ViT-G OPT 6.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the GQA test-dev dataset?
Accuracy