| prompts | metrics_response |
|---|---|
What metrics were used to measure the Cardal model in the Moving Beyond the Turing Test with the Allen AI Science Challenge paper on the Aristo Kaggle Allen AI 8th grade questions dataset? | 1:1 Accuracy |
What metrics were used to measure the Alejandro Mosquera model in the Moving Beyond the Turing Test with the Allen AI Science Challenge paper on the Aristo Kaggle Allen AI 8th grade questions dataset? | 1:1 Accuracy |
What metrics were used to measure the GA+MAGE (32) model in the Linguistic Knowledge as Memory for Recurrent Neural Networks paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the GA Reader model in the Gated-Attention Readers for Text Comprehension paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the Attentive + relabling + ensemble model in the A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the BiDAF model in the Bidirectional Attention Flow for Machine Comprehension paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the AIA model in the Iterative Alternating Neural Attention for Machine Reading paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the AS Reader (ensemble model) model in the Text Understanding with the Attention Sum Reader Network paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the ReasoNet model in the ReasoNet: Learning to Stop Reading in Machine Comprehension paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the AoA Reader model in the Attention-over-Attention Neural Networks for Reading Comprehension paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the EpiReader model in the Natural Language Comprehension with the EpiReader paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the Dynamic Entity Repres. + w2v model in the Dynamic Entity Representation with Max-pooling Improves Machine Reading paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the AttentiveReader + bilinear attention model in the A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the AS Reader (single model) model in the Text Understanding with the Attention Sum Reader Network paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the MemNNs (ensemble) model in the Teaching Machines to Read and Comprehend paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the Classifier model in the A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the Impatient Reader model in the Teaching Machines to Read and Comprehend paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the Attentive Reader model in the Teaching Machines to Read and Comprehend paper on the CNN / Daily Mail dataset? | CNN, Daily Mail |
What metrics were used to measure the Cluster-Former (#C=512) model in the Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding paper on the Quasart-T dataset? | EM |
What metrics were used to measure the Locality-Sensitive Hashing model in the Reformer: The Efficient Transformer paper on the Quasart-T dataset? | EM |
What metrics were used to measure the Sparse Attention model in the Generating Long Sequences with Sparse Transformers paper on the Quasart-T dataset? | EM |
What metrics were used to measure the Multi-passage BERT model in the Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering paper on the Quasart-T dataset? | EM |
What metrics were used to measure the Denoising QA model in the Denoising Distantly Supervised Open-Domain Question Answering paper on the Quasart-T dataset? | EM |
What metrics were used to measure the DECAPROP model in the Densely Connected Attention Propagation for Reading Comprehension paper on the Quasart-T dataset? | EM |
What metrics were used to measure the DrQA model in the Reading Wikipedia to Answer Open-Domain Questions paper on the Quasart-T dataset? | EM |
What metrics were used to measure the T5-small+prolog model in the Domain Specific Question Answering Over Knowledge Graphs Using Logical Programming and Large Language Models paper on the MetaQA dataset? | AnswerExactMatch (Question Answering) |
What metrics were used to measure the Fusion-in-Decoder (large) model in the Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering paper on the SQuAD dataset? | EM, Exact Match, F1, Loss, SQuAD EM, SQuAD F1 |
What metrics were used to measure the DPR model in the Dense Passage Retrieval for Open-Domain Question Answering paper on the SQuAD dataset? | EM, Exact Match, F1, Loss, SQuAD EM, SQuAD F1 |
What metrics were used to measure the RAG-end2end model in the Fine-tune the Entire RAG Architecture (including DPR retriever) for Question-Answering paper on the SQuAD dataset? | EM, Exact Match, F1, Loss, SQuAD EM, SQuAD F1 |
What metrics were used to measure the Longformer model in the MuLD: The Multitask Long Document Benchmark paper on the MuLD (VLSP) dataset? | BLEU-1, BLEU-4, METEOR, Rouge-L |
What metrics were used to measure the T5 model in the MuLD: The Multitask Long Document Benchmark paper on the MuLD (VLSP) dataset? | BLEU-1, BLEU-4, METEOR, Rouge-L |
What metrics were used to measure the SciGraphQA-baseline model in the SciGraphQA: A Large-Scale Synthetic Multi-Turn Question-Answering Dataset for Scientific Graphs paper on the SciGraphQA dataset? | CIDEr |
What metrics were used to measure the ZS-F-VQA model in the Zero-shot Visual Question Answering using Knowledge Graph paper on the F-VQA dataset? | Top-1 Accuracy, Top-3 Accuracy, Accuracy, MR, MRR |
What metrics were used to measure the F-VQA (top-3-QQmaping) model in the FVQA: Fact-based Visual Question Answering paper on the F-VQA dataset? | Top-1 Accuracy, Top-3 Accuracy, Accuracy, MR, MRR |
What metrics were used to measure the F-VQA (top-1-QQmaping) model in the FVQA: Fact-based Visual Question Answering paper on the F-VQA dataset? | Top-1 Accuracy, Top-3 Accuracy, Accuracy, MR, MRR |
What metrics were used to measure the CRCT model in the Classification-Regression for Chart Comprehension paper on the PlotQA-D1 dataset? | 1:1 Accuracy |
What metrics were used to measure the PReFIL model in the Answering Questions about Data Visualizations using Efficient Bimodal Fusion paper on the PlotQA-D1 dataset? | 1:1 Accuracy |
What metrics were used to measure the PlotQA model in the PlotQA: Reasoning over Scientific Plots paper on the PlotQA-D1 dataset? | 1:1 Accuracy |
What metrics were used to measure the MDETR model in the MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding paper on the CLEVR-Humans dataset? | Accuracy |
What metrics were used to measure the MAC model in the Compositional Attention Networks for Machine Reasoning paper on the CLEVR-Humans dataset? | Accuracy |
What metrics were used to measure the CNN+GRU+FiLM model in the FiLM: Visual Reasoning with a General Conditioning Layer paper on the CLEVR-Humans dataset? | Accuracy |
What metrics were used to measure the NS-VQA (1K programs) model in the Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding paper on the CLEVR-Humans dataset? | Accuracy |
What metrics were used to measure the IEP-18K model in the Inferring and Executing Programs for Visual Reasoning paper on the CLEVR-Humans dataset? | Accuracy |
What metrics were used to measure the RandImg model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the LMH + CSS model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the LFF model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the LMH model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the UpDown model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the ESR model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the LMH + RMFE model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the BLOCK model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the RUBi model in the Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering paper on the VQA-CE dataset? | Accuracy (Counterexamples) |
What metrics were used to measure the MCB 7 att. model in the Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the Dual-MFA model in the Co-attending Free-form Regions and Detections with Multi-modal Multiplicative Feature Embedding for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the RelAtt model in the R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the 3-Modalities: Unary + Pairwise + Ternary (ResNet) model in the High-Order Attention Models for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the joint-loss model in the Training Recurrent Answering Units with Joint Loss Minimization for VQA paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the MRN model in the Multimodal Residual Learning for Visual QA paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the HQI+ResNet model in the Hierarchical Question-Image Co-Attention for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the FDA model in the A Focused Dynamic Attention Model for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the LSTM Q+I model in the VQA: Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the iBOWIMG baseline model in the Simple Baseline for Visual Question Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 multiple choice dataset? | Percentage correct |
What metrics were used to measure the SAAA (ResNet) model in the Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering paper on the VQA v1 test-dev dataset? | Accuracy |
What metrics were used to measure the DAN (ResNet) model in the Dual Attention Networks for Multimodal Reasoning and Matching paper on the VQA v1 test-dev dataset? | Accuracy |
What metrics were used to measure the MCB (ResNet) model in the Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding paper on the VQA v1 test-dev dataset? | Accuracy |
What metrics were used to measure the RAU (ResNet) model in the Training Recurrent Answering Units with Joint Loss Minimization for VQA paper on the VQA v1 test-dev dataset? | Accuracy |
What metrics were used to measure the HieCoAtt (ResNet) model in the Hierarchical Question-Image Co-Attention for Visual Question Answering paper on the VQA v1 test-dev dataset? | Accuracy |
What metrics were used to measure the DMN+ model in the Dynamic Memory Networks for Visual and Textual Question Answering paper on the VQA v1 test-dev dataset? | Accuracy |
What metrics were used to measure the NMN+LSTM+FT model in the Neural Module Networks paper on the VQA v1 test-dev dataset? | Accuracy |
What metrics were used to measure the NS-VQA (1K programs) model in the Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the MDETR model in the MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the OCCAM (ours) model in the Interpretable Visual Reasoning via Induced Symbolic Space paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the TbD + reg + hres model in the Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the NS-CL model in The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the MAC model in the Compositional Attention Networks for Machine Reasoning paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the CNN + LSTM + RN + HAN model in the Learning Visual Question Answering by Bootstrapping Hard Attention paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the DDRprog* model in the DDRprog: A CLEVR Differentiable Dynamic Reasoning Programmer paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the single-hop + LCGN (ours) model in the Language-Conditioned Graph Networks for Relational Reasoning paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the CNN+GRU+FiLM model in the FiLM: Visual Reasoning with a General Conditioning Layer paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the XNM-Det supervised model in the Explainable and Explicit Visual Reasoning over Scene Graphs paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the IEP-700K model in the Inferring and Executing Programs for Visual Reasoning paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the CNN + LSTM + RN model in the A simple neural network module for relational reasoning paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the QGHC+Att+Concat model in the Question-Guided Hybrid Convolution for Visual Question Answering paper on the CLEVR dataset? | Accuracy |
What metrics were used to measure the PaLI-X model in the PaLI-X: On Scaling up a Multilingual Vision and Language Model paper on the InfoSeek dataset? | Accuracy |
What metrics were used to measure the CLIP + FiD model in the Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? paper on the InfoSeek dataset? | Accuracy |
What metrics were used to measure the CLIP + PaLM (540B) model in the Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? paper on the InfoSeek dataset? | Accuracy |
What metrics were used to measure the PaLI model in the Can Pre-trained Vision and Language Models Answer Visual Information-Seeking Questions? paper on the InfoSeek dataset? | Accuracy |
What metrics were used to measure the BLIP2 model in the BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models paper on the InfoSeek dataset? | Accuracy |
What metrics were used to measure the InstructBLIP model in the InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning paper on the InfoSeek dataset? | Accuracy |
What metrics were used to measure the PReFIL model in the Answering Questions about Data Visualizations using Efficient Bimodal Fusion paper on the FigureQA - test 1 dataset? | 1:1 Accuracy |
What metrics were used to measure the CRCT model in the Classification-Regression for Chart Comprehension paper on the FigureQA - test 1 dataset? | 1:1 Accuracy |
What metrics were used to measure the RN model in the FigureQA: An Annotated Figure Dataset for Visual Reasoning paper on the FigureQA - test 1 dataset? | 1:1 Accuracy |
What metrics were used to measure the PEVL+ model in the PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models paper on the GQA dataset? | Accuracy |
What metrics were used to measure the RelViT model in the RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning paper on the GQA dataset? | Accuracy |
What metrics were used to measure the PaLI model in the PaLI: A Jointly-Scaled Multilingual Language-Image Model paper on the VizWiz 2020 VQA dataset? | overall, yes/no, number, other, unanswerable |
What metrics were used to measure the CLIP-Ensemble model in the Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model paper on the VizWiz 2020 VQA dataset? | overall, yes/no, number, other, unanswerable |
What metrics were used to measure the CLIP-Single model in the Less Is More: Linear Layers on CLIP Features as Powerful VizWiz Model paper on the VizWiz 2020 VQA dataset? | overall, yes/no, number, other, unanswerable |
What metrics were used to measure the HSSLab model in the paper on the VizWiz 2020 VQA dataset? | overall, yes/no, number, other, unanswerable |
What metrics were used to measure the sudoku model in the paper on the VizWiz 2020 VQA dataset? | overall, yes/no, number, other, unanswerable |
What metrics were used to measure the Katya model in the paper on the VizWiz 2020 VQA dataset? | overall, yes/no, number, other, unanswerable |
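Every row above follows one template: the `prompts` column names a model, a paper, and a dataset, and `metrics_response` lists the metric names reported for that combination. The sketch below shows one way to load and filter such rows, assuming the table is published as a Hugging Face dataset with these two columns; the repository id `user/metrics-qa` is a placeholder, not the actual name.

```python
# Minimal sketch: load the prompt/metric pairs and filter by dataset name.
# Assumes a Hugging Face dataset with columns "prompts" and "metrics_response";
# the repository id below is a placeholder, not the real one.
from datasets import load_dataset

ds = load_dataset("user/metrics-qa", split="train")  # hypothetical repo id

# Keep only the rows asking about models evaluated on CLEVR.
clevr = ds.filter(lambda row: "on the CLEVR dataset" in row["prompts"])

# Print a few question/answer pairs.
for row in clevr.select(range(min(3, len(clevr)))):
    print(row["prompts"])
    print("  ->", row["metrics_response"])
```

Because the prompts are fully templated, the same substring-matching approach works for slicing by model name or paper title as well.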