Columns: prompts (string, lengths 81–413) · metrics_response (string, lengths 0–371)
What metrics were used to measure the Modified Attention model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the shaunakh model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the e50 model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the SKP model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the knight777 model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the pk model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the Tartans model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the VWTest1 model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the BERT-RG model in the paper on the VizWiz 2020 VQA dataset?
overall, yes/no, number, other, unanswerable
What metrics were used to measure the CMN model in the Modeling Relationships in Referential Expressions with Compositional Modular Networks paper on the Visual Genome (pairs) dataset?
Percentage correct
What metrics were used to measure the DUBLIN model in the DUBLIN -- Document Understanding By Language-Image Network paper on the AI2D dataset?
EM
What metrics were used to measure the GPT4RoI model in the GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the ERNIE-ViL-large(ensemble of 15 models) model in the ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the UNITER-large (10 ensemble) model in the UNITER: UNiversal Image-TExt Representation Learning paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the MAD (Single Model, Formerly CLIP-TD) model in the Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the UNITER (Large) model in the UNITER: UNiversal Image-TExt Representation Learning paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the KVL-BERTLARGE model in the KVL-BERT: Knowledge Enhanced Visual-and-Linguistic BERT for Visual Commonsense Reasoning paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the VL-BERTLARGE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the VL-T5 model in the Unifying Vision-and-Language Tasks via Text Generation paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the OFA-X model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the OFA-X-MT model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VCR (Q-A) test dataset?
Accuracy
What metrics were used to measure the GPT-4V model in the HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models paper on the HallusionBench dataset?
Question Pair Acc
What metrics were used to measure the LLaVA-1.5 model in the Visual Instruction Tuning paper on the HallusionBench dataset?
Question Pair Acc
What metrics were used to measure the mPLUG-Owl model in the mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality paper on the HallusionBench dataset?
Question Pair Acc
What metrics were used to measure the LRV-Instruct model in the Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning paper on the HallusionBench dataset?
Question Pair Acc
What metrics were used to measure the PaLM-E-562B model in the PaLM-E: An Embodied Multimodal Language Model paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the PaLI-X (Single-task FT) model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the PaLI 17B model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the RA-VQA-v2 (BLIP 2) model in the Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the Prophet model in the Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the PromptCap model in the PromptCap: Prompt-Guided Task-Aware Image Captioning paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the ReVeaL WIT + CC12M + Wikidata + VQA-2 model in the REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the REVIVE (Ensemble) model in the REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the REVIVE (Single) model in the REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the RA-VQA-v2 (T5-large) model in the Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the RA-VQA (T5-large) model in the Retrieval Augmented Visual Question Answering with Outside Knowledge paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the VK-OOD model in the Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the RA-VQA-FrDPR (T5-large) model in the Retrieval Augmented Visual Question Answering with Outside Knowledge paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the Flamingo80B model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the TRiG (T5-Large) model in the Transform-Retrieve-Generate: Natural Language-Centric Outside-Knowledge Visual Question Answering paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the PICa model in the An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the LaKo model in the LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XXL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the Flamingo9B model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the VLC-BERT model in the VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the T5(Tan and Bansal, 2019) + Prefixes model in the LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the Flamingo3B model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the BLIP-2 ViT-L FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the BLIP-2 ViT-G OPT 6.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the PNP-VQA model in the Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the BLIP-2 ViT-G OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the BLIP-2 ViT-L OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the FewVLM model in the A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the MetaLM model in the Language Models are General-Purpose Interfaces paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the VLKD(ViT-B/16) model in the Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the Frozen model in the Multimodal Few-Shot Learning with Frozen Language Models paper on the OK-VQA dataset?
Accuracy, Exact Match (EM), Recall@5
What metrics were used to measure the SAAA (ResNet) model in the Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering paper on the VQA v1 test-std dataset?
Accuracy
What metrics were used to measure the RAU (ResNet) model in the Training Recurrent Answering Units with Joint Loss Minimization for VQA paper on the VQA v1 test-std dataset?
Accuracy
What metrics were used to measure the HieCoAtt (ResNet) model in the Hierarchical Question-Image Co-Attention for Visual Question Answering paper on the VQA v1 test-std dataset?
Accuracy
What metrics were used to measure the DMN+ model in the Dynamic Memory Networks for Visual and Textual Question Answering paper on the VQA v1 test-std dataset?
Accuracy
What metrics were used to measure the SAN (VGG) model in the Stacked Attention Networks for Image Question Answering paper on the VQA v1 test-std dataset?
Accuracy
What metrics were used to measure the NMN+LSTM+FT model in the Neural Module Networks paper on the VQA v1 test-std dataset?
Accuracy
What metrics were used to measure the MedVInT model in the PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering paper on the PMC-VQA dataset?
Accuracy
What metrics were used to measure the Open-Flamingo model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the PMC-VQA dataset?
Accuracy
What metrics were used to measure the PMC-CLIP model in the PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents paper on the PMC-VQA dataset?
Accuracy
What metrics were used to measure the BLIP-2 model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the PMC-VQA dataset?
Accuracy
What metrics were used to measure the MCB 7 att. model in the Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the Dual-MFA model in the Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the QGHC+Att+Concat model in the Question-Guided Hybrid Convolution for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the RelAtt model in the R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the joint-loss model in the Training Recurrent Answering Units with Joint Loss Minimization for VQA paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the HQI+ResNet model in the Hierarchical Question-Image Co-Attention for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the MRN + global features model in the Multimodal Residual Learning for Visual QA paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the DMN+ [xiong2016dynamic] model in the Dynamic Memory Networks for Visual and Textual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the CNN-RNN model in the Image Captioning and Visual Question Answering Based on Attributes and External Knowledge paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the FDA model in the A Focused Dynamic Attention Model for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the SAN model in the Stacked Attention Networks for Image Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the LSTM Q+I model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the SMem-VQA model in the Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the iBOWIMG baseline model in the Simple Baseline for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the human model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the DREAM+Unicoder-VL (MSRA) model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the TRRNet (Ensemble) model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the MIL-nbgao model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the Kakao Brain model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the Coarse-to-Fine Reasoning, Single Model model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the 270 model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the NSM ensemble (updated) model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the VinVL-DPT model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the VinVL+L model in the VinVL+L: Enriching Visual Representation with Location Context in VQA paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the Single Model model in the VinVL: Revisiting Visual Representations in Vision-Language Models paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the Wayne model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the Single model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the NSM single (updated) model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the LXR955, Ensemble model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the MDETR model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the 1-gqa model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution
What metrics were used to measure the UCM model in the paper on the GQA Test2019 dataset?
Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution