prompts: string, length 81–413
metrics_response: string, length 0–371
What metrics were used to measure the BiAttention + DCU-LSTM model in the Multi-Granular Sequence Encoding via Dilated Compositional Units for Reading Comprehension paper on the NarrativeQA dataset?
Rouge-L, BLEU-1, BLEU-4, METEOR
What metrics were used to measure the BiDAF model in the Bidirectional Attention Flow for Machine Comprehension paper on the NarrativeQA dataset?
Rouge-L, BLEU-1, BLEU-4, METEOR
What metrics were used to measure the FiD+Distil model in the Distilling Knowledge from Reader to Retriever for Question Answering paper on the NarrativeQA dataset?
Rouge-L, BLEU-1, BLEU-4, METEOR
What metrics were used to measure the Oracle IR Models model in The NarrativeQA Reading Comprehension Challenge paper on the NarrativeQA dataset?
Rouge-L, BLEU-1, BLEU-4, METEOR
What metrics were used to measure the UnitedQA model in the UnitedQA: A Hybrid Approach for Open Domain Question Answering paper on the EfficientQA test dataset?
Accuracy
What metrics were used to measure the Ma et al. - ELECTRA model in the Enhanced Speaker-aware Multi-party Multi-turn Dialogue Comprehension paper on the FriendsQA dataset?
EM, F1
What metrics were used to measure the Li and Zhao - ELECTRA model in the Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension paper on the FriendsQA dataset?
EM, F1
What metrics were used to measure the Li and Choi - RoBERTa model in the Transformers to Learn Hierarchical Contexts in Multiparty Dialogue for Span-based Question Answering paper on the FriendsQA dataset?
EM, F1
What metrics were used to measure the Li and Zhao - BERT model in the Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension paper on the FriendsQA dataset?
EM, F1
What metrics were used to measure the Li and Choi - BERT model in the Transformers to Learn Hierarchical Contexts in Multiparty Dialogue for Span-based Question Answering paper on the FriendsQA dataset?
EM, F1
What metrics were used to measure the Liu et al. - BERT model in the Graph-Based Knowledge Integration for Question Answering over Dialogue paper on the FriendsQA dataset?
EM, F1
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the LLaMA 2 70B (one-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the PaLM-540B (Few-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the PaLM-540B (One-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the Codex + REPLUG LSR (Few-Shot) model in the REPLUG: Retrieval-Augmented Black-Box Language Models paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the PaLM-540B (Zero-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the Codex + REPLUG (Few-Shot) model in the REPLUG: Retrieval-Augmented Black-Box Language Models paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the GLaM 62B/64E (One-shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the GLaM 62B/64E (Few-shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the LLaMA 65B (few-shot, k=64) model in the LLaMA: Open and Efficient Foundation Language Models paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the FiE+PAQ model in the FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the LLaMA 65B (few-shot, k=5) model in the LLaMA: Open and Efficient Foundation Language Models paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the FiD+Distil model in the Distilling Knowledge from Reader to Retriever for Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the LLaMA 65B (one-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the EMDR2 model in the End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the GLaM 62B/64E (Zero-shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the Fusion-in-Decoder (large) model in the Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the MemoReader model in the MemoReader: Large-Scale Reading Comprehension through Neural Memory Controller paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the S-Norm model in the Simple and Effective Multi-Paragraph Reading Comprehension paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the TOME-2 model in the Mention Memory: incorporating textual knowledge into Transformers through entity mention attention paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the DPR model in the Dense Passage Retrieval for Open-Domain Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the RAG model in the Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the Reading Twice for NLU model in the Dynamic Integration of Background Knowledge in Neural NLU Systems paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the Mnemonic Reader model in the Reinforced Mnemonic Reader for Machine Reading Comprehension paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the ORQA model in the Latent Retrieval for Weakly Supervised Open Domain Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the MEMEN model in the MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the BigBird-etc model in the Big Bird: Transformers for Longer Sequences paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the LinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the DyREX model in the DyREx: Dynamic Query Representation for Extractive Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the UnitedQA (Hybrid reader) model in the UnitedQA: A Hybrid Approach for Open Domain Question Answering paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the ReasonBERT-R model in the ReasonBERT: Pre-trained to Reason with Distant Supervision paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the ReasonBERT-B model in the ReasonBERT: Pre-trained to Reason with Distant Supervision paper on the TriviaQA dataset?
EM, F1
What metrics were used to measure the RGX model in the Cooperative Self-training of Machine Reading Comprehension paper on the MRQA out-of-domain dataset?
Average F1
What metrics were used to measure the ANNA (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the LUKE (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the LUKE (single model) model in the LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the LUKE model in the LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the XLNet (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the XLNET-123++ (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the XLNET-123+ (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the XLNET-123 (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Unnamed submission by NMC model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERTSP (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the SpanBERT (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the SpanBERT (single model) model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT+WWM+MT (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Tuned BERT-1seq Large Cased (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the LinkBERT (large) model in the LinkBERT: Pretraining Language Models with Document Links paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT (ensemble) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT-LARGE (Ensemble+TriviaQA) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the ATB (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Tuned BERT Large Cased (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT+MT (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Knowledge-enhanced BERT (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the KT-NET (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the ST_bl model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the nlnet (ensemble) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the EL-BERT (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BISAN (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT+Sparse-Transformer model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT (single model) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the DPN (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT-uncased (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the WD (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Original BERT Large Cased (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the MARS (ensemble) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Common-sense Governed BERT-123 (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the WD1 (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the nlnet (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Pytalk + Stanza + BERT (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Reinforced Mnemonic Reader + A2D (ensemble model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT-Base mod (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the r-net+ (ensemble) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Hybrid AoA Reader (ensemble) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the QANet (single) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the SLQA+ (ensemble) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the Reinforced Mnemonic Reader (ensemble model) model in the Reinforced Mnemonic Reader for Machine Reading Comprehension paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the r-net (ensemble) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT (single model) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the AttentionReader+ (ensemble) model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the MMIPN model in the paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass
What metrics were used to measure the BERT - 6 Layers model in the Information Theoretic Representation Distillation paper on the SQuAD1.1 dataset?
EM, F1, Hardware Burden, Exact Match, Operations per network pass