prompts (string, lengths 81–413)
metrics_response (string, lengths 0–371)
What metrics were used to measure the MLM+ subs+ del-span model in the CLEAR: Contrastive Learning for Sentence Representation paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the ELECTRA model in the ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the RE2 model in the Simple and Effective Text Matching with Richer Alignment Features paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the SWEM-concat model in the Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the SqueezeBERT model in the SqueezeBERT: What can computer vision teach NLP about efficient neural networks? paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the 24hBERT model in the How to Train BERT with an Academic Budget paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the RoBERTa-large Tagger + LIQUID (Ensemble) model in the LIQUID: A Framework for List Question Answering Dataset Generation paper on the MultiSpanQA dataset?
Exact F1
What metrics were used to measure the ST-MoE-32B model in the ST-MoE: Designing Stable and Transferable Sparse Expert Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the PaLM 540B (finetuned) model in the PaLM: Scaling Language Modeling with Pathways paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the DeBERTa-1.5B model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the MUPPET RoBERTa Large model in the Muppet: Massive Multi-task Representations with Pre-Finetuning paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 2 70B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the MUPPET RoBERTa Base model in the Muppet: Massive Multi-task Representations with Pre-Finetuning paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the Chinchilla (zero-shot) model in the Training Compute-Optimal Large Language Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 2 34B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the FLAN 137B (zero-shot) model in the Finetuned Language Models Are Zero-Shot Learners paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 2 13B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the Gopher (zero-shot) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 2 7B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the GPT-3 175B (few-shot) model in the Language Models are Few-Shot Learners paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the BloombergGPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the OPT-IML 175B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the AlexaTM 20B model in the AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the Neo-6B (QA + WS) model in the Ask Me Anything: A simple strategy for prompting language models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the OPT-IML 30B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the Neo-6B (few-shot) model in the Ask Me Anything: A simple strategy for prompting language models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the Neo-6B (QA) model in the Ask Me Anything: A simple strategy for prompting language models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the OPT 30B (zero-shot) model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the OPT-IML 1.3B (zero-shot) model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the GPT-3 (zero-shot) model in the Language Models are Few-Shot Learners paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the OPT 1.3B (zero-shot) model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the OPT 175B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the Hyena model in the Hyena Hierarchy: Towards Larger Convolutional Language Models paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the BoolQ dataset?
Accuracy
What metrics were used to measure the MAFiD model in the MAFiD: Moving Average Equipped Fusion-in-Decoder for Question Answering over Tabular and Textual Data paper on the HybridQA dataset?
ANS-EM
What metrics were used to measure the MATE Pointer model in the MATE: Multi-view Attention for Table Transformer Efficiency paper on the HybridQA dataset?
ANS-EM
What metrics were used to measure the DocHopper model in the Iterative Hierarchical Attention for Answering Complex Questions over Long Documents paper on the HybridQA dataset?
ANS-EM
What metrics were used to measure the HYBRIDER model in the HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data paper on the HybridQA dataset?
ANS-EM
What metrics were used to measure the phi-1.5-web 1.3B (zero-shot) model in the Textbooks Are All You Need II: phi-1.5 technical report paper on the SIQA dataset?
Accuracy
What metrics were used to measure the phi-1.5 1.3B (zero-shot) model in the Textbooks Are All You Need II: phi-1.5 technical report paper on the SIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the SIQA dataset?
Accuracy
What metrics were used to measure the Chinchilla (zero-shot) model in the Training Compute-Optimal Large Language Models paper on the SIQA dataset?
Accuracy
What metrics were used to measure the Gopher (zero-shot) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the SIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the SIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the SIQA dataset?
Accuracy
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the SIQA dataset?
Accuracy
What metrics were used to measure the FlowQA (single model) model in the FlowQA: Grasping Flow in History for Conversational Machine Comprehension paper on the QuAC dataset?
F1, HEQD, HEQQ
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the QuAC dataset?
F1, HEQD, HEQQ
What metrics were used to measure the Custom Legal-BERT model in the When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset paper on the CaseHOLD dataset?
Macro F1 (10-fold)
What metrics were used to measure the Legal-BERT model in the When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset paper on the CaseHOLD dataset?
Macro F1 (10-fold)
What metrics were used to measure the BERT model in the When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset paper on the CaseHOLD dataset?
Macro F1 (10-fold)
What metrics were used to measure the monoT5-3B model in the No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval paper on the HotpotQA (BEIR) dataset?
nDCG@10
What metrics were used to measure the BM25+CE model in the BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models paper on the HotpotQA (BEIR) dataset?
nDCG@10
What metrics were used to measure the SGPT-CE-6.1B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the HotpotQA (BEIR) dataset?
nDCG@10
What metrics were used to measure the SGPT-BE-5.8B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the HotpotQA (BEIR) dataset?
nDCG@10
What metrics were used to measure the TP-MANN model in the StepGame: A New Benchmark for Robust Multi-Hop Spatial Reasoning in Texts paper on the StepGame dataset?
1-of-100 Accuracy
What metrics were used to measure the FiD model in the Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering paper on the ConditionalQA dataset?
Conditional (answers), Conditional (w/ conditions), Overall (answers), Overall (w/ conditions)
What metrics were used to measure the DocHopper model in the Iterative Hierarchical Attention for Answering Complex Questions over Long Documents paper on the ConditionalQA dataset?
Conditional (answers), Conditional (w/ conditions), Overall (answers), Overall (w/ conditions)
What metrics were used to measure the ETC-Pipeline model in the ETC: Encoding Long and Structured Inputs in Transformers paper on the ConditionalQA dataset?
Conditional (answers), Conditional (w/ conditions), Overall (answers), Overall (w/ conditions)
What metrics were used to measure the Med-PaLM 2 (ER) model in the Towards Expert-Level Medical Question Answering with Large Language Models paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the Med-PaLM 2 (CoT + SC) model in the Towards Expert-Level Medical Question Answering with Large Language Models paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the Med-PaLM 2 (5-shot) model in the Towards Expert-Level Medical Question Answering with Large Language Models paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the Flan-PaLM (540 B) model in the Large Language Models Encode Clinical Knowledge paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the Codex 5-shot CoT model in the Can large language models reason about medical questions? paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the VOD (BioLinkBERT) model in the Variational Open-Domain Question Answering paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the BioMedGPT-10B model in the BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the PubMedGPT (2.7 B) model in the Large Language Models Encode Clinical Knowledge paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the DRAGON + BioLinkBERT model in the Deep Bidirectional Language-Knowledge Graph Pretraining paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the BioLinkBERT (340 M) model in the Large Language Models Encode Clinical Knowledge paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the GAL 120B (zero-shot) model in the Galactica: A Large Language Model for Science paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the BioLinkBERT (base) model in the LinkBERT: Pretraining Language Models with Document Links paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the GrapeQA: PEGA model in the GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the BioBERT (large) model in the BioBERT: a pre-trained biomedical language representation model for biomedical text mining paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the BioBERT (base) model in the BioBERT: a pre-trained biomedical language representation model for biomedical text mining paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the GPT-Neo (2.7 B) model in the Large Language Models Encode Clinical Knowledge paper on the MedQA-USMLE dataset?
Accuracy
What metrics were used to measure the BLOOM (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the MedQA-USMLE dataset?
Accuracy