| prompts | metrics_response |
|---|---|
What metrics were used to measure the MReaL model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the LSTM model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the Academia Sinica model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the Fj model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the Mycsulb model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the LocalPrior model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the GlobalPrior model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the muc_ai model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the CNN model in the paper on the GQA Test2019 dataset? | Accuracy, Binary, Open, Consistency, Plausibility, Validity, Distribution |
What metrics were used to measure the MuRel model in the MUREL: Multimodal Relational Reasoning for Visual Question Answering paper on the TDIUC dataset? | Accuracy |
What metrics were used to measure the BAN2-CTI model in the Compact Trilinear Interaction for Visual Question Answering paper on the TDIUC dataset? | Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BEiT-3 model in the Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the VLMo model in the VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the ONE-PEACE model in the ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the mPLUG (Huge) model in the mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the CoCa model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the MMU model in the Achieving Human Parity on Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the XFM (base) model in the Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Florence model in the Florence: A New Foundation Model for Computer Vision paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the SimVLM model in the SimVLM: Simple Visual Language Model Pretraining with Weak Supervision paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Prismer model in the Prismer: A Vision-Language Model with An Ensemble of Experts paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the X-VLM (base) model in the Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the ALBEF (14M) model in the Align before Fuse: Vision and Language Representation Learning with Momentum Distillation paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Oscar model in the Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the UNITER (Large) model in the UNITER: UNiversal Image-TExt Representation Learning paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the X-101 grid features + MCAN model in the In Defense of Grid Features for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the CFR model in the Coarse-to-Fine Reasoning for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the VL-BERT (Large) model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the ViLT-B/32 model in the ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the MCAN+VC model in the Visual Commonsense R-CNN paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the VL-BERT (Base) model in the VL-BERT: Pre-training of Generic Visual-Linguistic Representations paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the MCANed-6 model in the Deep Modular Co-Attention Networks for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the ViLBERT model in the ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BAN+Glove+Counter model in the Bilinear Attention Networks paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the LXMERT (Pre-train + scratch) model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Image features from bottom-up attention (adaptive K, ensemble) model in the Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Pythia v0.3 + LoRRA model in the Towards VQA Models That Can Read paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the DMN model in the Learning to Count Objects in Natural Images for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the LaKo model in the LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the MuRel model in the MUREL: Multimodal Relational Reasoning for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BLOCK model in the BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the MUTAN model in the MUTAN: Multimodal Tucker Fusion for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BAN2-CTI model in the Compact Trilinear Interaction for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the 2D continuous softmax model in the Sparse and Continuous Attention Mechanisms paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XXL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the N2NMN (ResNet-152, policy search) model in the Learning to Reason: End-to-End Module Networks for Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the PNP-VQA model in the Plug-and-Play VQA: Zero-shot VQA by Conjoining Large Pretrained Models with Zero Training paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the MCB model in the Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the RUBi model in the RUBi: Reducing Unimodal Biases in Visual Question Answering paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BLIP-2 ViT-G FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BLIP-2 ViT-L FlanT5 XL (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Flamingo 80B model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BLIP-2 ViT-G OPT 6.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BLIP-2 ViT-G OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Flamingo 9B model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the KOSMOS-1 1.6B (zero-shot) model in the Language Is Not All You Need: Aligning Perception with Language Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the BLIP-2 ViT-L OPT 2.7B (zero-shot) model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the Flamingo 3B model in the Flamingo: a Visual Language Model for Few-Shot Learning paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the VLKD model in the Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation paper on the VQA v2 test-dev dataset? | Accuracy |
What metrics were used to measure the PReFIL (Oracle OCR) model in the Answering Questions about Data Visualizations using Efficient Bimodal Fusion paper on the DVQA test-familiar dataset? | 1:1 Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the TAP model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the TAG model in the TAG: Boosting Text-VQA via Text-aware Visual Question-answer Generation paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the ssbaseline model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the SMA single model model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the SAM (Single Model) model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the colab_buaa model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the CRN (Single Model) model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the CIG model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the M4C model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the Shuai model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the mmgnn model in the paper on the TextVQA test-standard dataset? | overall |
What metrics were used to measure the SAN † - hard mask model in the Zero-shot Visual Question Answering using Knowledge Graph paper on the ZS-F-VQA dataset? | Top-1 Accuracy |
What metrics were used to measure the Patch-TRM model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the ViLT model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the ViT model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the UNITER model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the DFAF model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the MCAN model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the ViLBERT model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the BAN model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the Top-Down model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the Random model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the Q-Only model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the I-Only model in the IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning paper on the IconQA dataset? | Sub-tasks (Img.), Sub-tasks (Txt.), Sub-tasks (Blank), Reasoning (Geo.), Reasoning (Cou.), Reasoning (Com.), Reasoning (Spa.), Reasoning (Sce.), Reasoning (Pat.), Reasoning (Tim.), Reasoning (Fra.), Reasoning (Est.), Reasoning (Alg.), Reasoning (Mea.), Reasoning (Sen.), Reasoning (Pro.) |
What metrics were used to measure the CMN model in the Modeling Relationships in Referential Expressions with Compositional Modular Networks paper on the Visual Genome (subjects) dataset? | Percentage correct |
What metrics were used to measure the Human model in the InfographicVQA paper on the InfographicVQA dataset? | ANLS |
What metrics were used to measure the UDOP (aux) model in the Unifying Vision, Text, and Layout for Universal Document Processing paper on the InfographicVQA dataset? | ANLS |
What metrics were used to measure the PaLI-3 (w/ OCR) model in the PaLI-3 Vision Language Models: Smaller, Faster, Stronger paper on the InfographicVQA dataset? | ANLS |
What metrics were used to measure the TILT-Large model in the Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer paper on the InfographicVQA dataset? | ANLS |
What metrics were used to measure the PaLI-3 model in the PaLI-3 Vision Language Models: Smaller, Faster, Stronger paper on the InfographicVQA dataset? | ANLS |
What metrics were used to measure the PaLI-X (Single-task FT w/ OCR) model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the InfographicVQA dataset? | ANLS |
What metrics were used to measure the Claude + LATIN-Prompt model in the Layout and Task Aware Instruction Prompt for Zero-shot Document Image Question Answering paper on the InfographicVQA dataset? | ANLS |
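
The "Accuracy" reported for the VQA v2 test-dev rows above is conventionally the consensus-based VQA accuracy, under which an answer earns full credit if at least three of the ten human annotators gave it. Below is a minimal sketch of the commonly used simplified form; the official evaluation additionally averages over all 10-choose-9 subsets of the human answers and applies fuller answer normalization (articles, punctuation, number words), which is omitted here. The function name is illustrative.

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Simplified VQA accuracy: min(#annotators who gave the answer / 3, 1).

    Only basic normalization (strip + lowercase) is applied here; the
    official script normalizes answers more aggressively.
    """
    pred = predicted.strip().lower()
    matches = sum(1 for a in human_answers if a.strip().lower() == pred)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators agree -> partial credit of 2/3.
print(vqa_accuracy(
    "yes",
    ["yes", "yes", "no", "no", "no", "maybe", "no", "no", "no", "no"],
))  # 0.666...
```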
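The InfographicVQA rows report ANLS (Average Normalized Levenshtein Similarity). A minimal sketch follows, assuming the standard definition from the ST-VQA/DocVQA evaluation protocol: each prediction is scored against its best-matching ground-truth answer by 1 minus the normalized edit distance, zeroed out when that distance reaches the threshold tau = 0.5, and the scores are averaged over questions. Helper names are illustrative.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,                 # deletion
                cur[j - 1] + 1,              # insertion
                prev[j - 1] + (ca != cb),    # substitution (0 if equal)
            ))
        prev = cur
    return prev[-1]

def anls(predictions: list[str],
         ground_truths: list[list[str]],
         tau: float = 0.5) -> float:
    """Average, over questions, of the best similarity to any ground truth."""
    total = 0.0
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for gt in answers:
            p, g = pred.strip().lower(), gt.strip().lower()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        total += best
    return total / len(predictions)

# Example: one near-miss answer ("paris" vs "pairs") still scores above 0.
print(anls(["pairs"], [["paris", "paris, france"]]))  # 0.6
```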