Columns:
prompts: string (length 81 to 413)
metrics_response: string (length 0 to 371)
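The rows below alternate between a prompt and its metrics_response. As a minimal sketch of how such alternating prompt/response lines could be paired back into records with the two columns above, the following Python snippet may help; the file name "rows.txt" and the helper load_pairs are hypothetical and only illustrate the assumed two-lines-per-record layout.

```python
# Minimal sketch: pair alternating prompt / metrics_response lines into records.
# Assumes a plain-text dump where each prompt line is immediately followed by
# its metrics_response line; "rows.txt" is a hypothetical input file name.

def load_pairs(path):
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f if line.strip()]
    # Walk the lines two at a time: (prompt, metrics_response).
    return [
        {"prompts": lines[i], "metrics_response": lines[i + 1]}
        for i in range(0, len(lines) - 1, 2)
    ]

if __name__ == "__main__":
    for row in load_pairs("rows.txt")[:3]:
        print(row["prompts"])
        print("  ->", row["metrics_response"])
```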
What metrics were used to measure the PaLI-X (Multi-task FT) model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the PaLI-X (Single-task FT) model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the GPT-3.5 + LATIN-Prompt model in the Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the DocFormerv2-large model in the DocFormerv2: Local Features for Document Understanding paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the UDOP model in the Unifying Vision, Text, and Layout for Universal Document Processing paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the DUBLIN (variable resolution) model in the DUBLIN -- Document Understanding By Language-Image Network paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the Pix2Struct-large model in the Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the Pix2Struct-base model in the Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the MatCha model in the MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the DUBLIN model in the DUBLIN -- Document Understanding By Language-Image Network paper on the InfographicVQA dataset?
ANLS
What metrics were used to measure the Graph VQA model in the Graph-Structured Representations for Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the Dualnet ensemble model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the LSTM + global features model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the LSTM blind model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) abstract images 1.0 open ended dataset?
Percentage correct
What metrics were used to measure the DUBLIN model in the DUBLIN -- Document Understanding By Language-Image Network paper on the DeepForm dataset?
F1
What metrics were used to measure the ProTo model in the ProTo: Program-Guided Transformer for Program-Guided Tasks paper on the GQA test-std dataset?
Accuracy
What metrics were used to measure the NSM model in the Learning by Abstraction: The Neural State Machine paper on the GQA test-std dataset?
Accuracy
What metrics were used to measure the MDETR-ENB5 model in the MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding paper on the GQA test-std dataset?
Accuracy
What metrics were used to measure the LXMERT model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the GQA test-std dataset?
Accuracy
What metrics were used to measure the single-hop + LCGN (ours) model in the Language-Conditioned Graph Networks for Relational Reasoning paper on the GQA test-std dataset?
Accuracy
What metrics were used to measure the MAC model in the GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering paper on the GQA test-std dataset?
Accuracy
What metrics were used to measure the CNN+LSTM model in the GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering paper on the GQA test-std dataset?
Accuracy
What metrics were used to measure the DUBLIN model in the DUBLIN -- Document Understanding By Language-Image Network paper on the WebSRC dataset?
EM
What metrics were used to measure the GPT4RoI model in the GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest paper on the VCR (Q-AR) test dataset?
Accuracy
What metrics were used to measure the ERNIE-ViL-large(ensemble of 15 models) model in the ERNIE-ViL: Knowledge Enhanced Vision-Language Representations Through Scene Graph paper on the VCR (Q-AR) test dataset?
Accuracy
What metrics were used to measure the UNITER (Large) model in the UNITER: UNiversal Image-TExt Representation Learning paper on the VCR (Q-AR) test dataset?
Accuracy
What metrics were used to measure the KVL-BERTLARGE model in the KVL-BERT: Knowledge Enhanced Visual-and-Linguistic BERT for Visual Commonsense Reasoning paper on the VCR (Q-AR) test dataset?
Accuracy
What metrics were used to measure the VL-BERTLARGE model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VCR (Q-AR) test dataset?
Accuracy
What metrics were used to measure the VL-T5 model in the Unifying Vision-and-Language Tasks via Text Generation paper on the VCR (Q-AR) test dataset?
Accuracy
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VCR (Q-AR) test dataset?
Accuracy
What metrics were used to measure the Human model in the DocVQA: A Dataset for VQA on Document Images paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the PaLI-3 (w/ OCR) model in the PaLI-3 Vision Language Models: Smaller, Faster, Stronger paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the ERNIE-Layout large (ensemble) model in the ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the GPT-4 model in the Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the DocFormerv2-large model in the DocFormerv2: Local Features for Document Understanding paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the UDOP (aux) model in the Unifying Vision, Text, and Layout for Universal Document Processing paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the PaLI-3 model in the PaLI-3 Vision Language Models: Smaller, Faster, Stronger paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the TILT-Large model in the Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the PaLI-X (Single-task FT w/ OCR) model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the LayoutLMv2LARGE model in the LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the ERNIE-Layout large model in the ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the UDOP model in the Unifying Vision, Text, and Layout for Universal Document Processing paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the TILT-Base model in the Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the Claude + LATIN-Prompt model in the Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the GPT-3.5 + LATIN-Prompt model in the Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the PaLI-X (Multi-task FT) model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the DUBLIN (variable resolution) model in the DUBLIN -- Document Understanding By Language-Image Network paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the PaLI-X (Single-task FT) model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the DUBLIN model in the DUBLIN -- Document Understanding By Language-Image Network paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the LayoutLMv2BASE model in the LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the Pix2Struct-large model in the Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the MatCha model in the MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the Pix2Struct-base model in the Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the Donut model in the OCR-free Document Understanding Transformer paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the BERT_LARGE_SQUAD_DOCVQA_FINETUNED_Baseline model in the DocVQA: A Dataset for VQA on Document Images paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the Qwen-VL model in the Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the Dessurt model in the End-to-end Document Recognition and Understanding with Dessurt paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the Qwen-VL-Chat model in the Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond paper on the DocVQA test dataset?
ANLS, Accuracy
What metrics were used to measure the VLAB model in the VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the MaMMUT (ours) model in the MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the VideoCoCa model in the VideoCoCa: Video-Text Modeling with Zero-Shot Transfer from Contrastive Captioners paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the GIT model in the GIT: A Generative Image-to-text Transformer for Vision and Language paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the FrozenBiLM+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the FrozenBiLM model in the Zero-Shot Video Question Answering via Frozen Bidirectional Language Models paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the VIOLETv2 model in the An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the MuLTI model in the MuLTI: Efficient Video-and-Language Understanding with MultiWay-Sampler and Multiple Choice Modeling paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the VIOLET + MELTR model in the MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the OmniVL model in the OmniVL:One Foundation Model for Image-Language and Video-Language Tasks paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the VIOLET+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the Co-Tokenization model in the Video Question Answering with Iterative Video-Text Co-Tokenization paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the All-in-one-B model in the All in One: Exploring Unified Video-Language Pre-training paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the LRCE model in the Lightweight Recurrent Cross-modal Encoder for Video Question Answering paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the JustAsk+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the Just Ask model in the Just Ask: Learning to Answer Questions from Millions of Narrated Videos paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the ALPRO model in the Align and Prompt: Video-and-Language Pre-training with Entity Prompts paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the All-in-one+ model in the Open-vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the DualVGR model in the DualVGR: A Dual-Visual Graph Reasoning Unit for Video Question Answering paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the HCRN model in the Hierarchical Conditional Relation Networks for Video Question Answering paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the SSML model in the Noise Estimation Using Density Estimation for Self-Supervised Multimodal Learning paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the HMEMA model in the Heterogeneous Memory Enhanced Multimodal Attention Model for Video Question Answering paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the Co-Mem model in the Motion-Appearance Co-Memory Networks for Video Question Answering paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the ST-VQA model in the TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering paper on the MSVD-QA dataset?
Accuracy
What metrics were used to measure the Prophet model in the Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the PromptCap model in the PromptCap: Prompt-Guided Task-Aware Image Captioning paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the GPV-2 model in the Webly Supervised Concept Expansion for General Purpose Vision Models paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the KRISP model in the KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the ViLBERT - VQA model in the ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the LXMERT model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the ViLBERT model in the ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the Pythia model in the Pythia v0.1: the Winning Entry to the VQA Challenge 2018 paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy
What metrics were used to measure the ViLBERT - OK-VQA model in the ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks paper on the A-OKVQA dataset?
MC Accuracy, DA VQA Score, Accuracy