prompts: string, lengths 81–413
metrics_response: string, lengths 0–371
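The rows below alternate between a prompt and its metrics response. A minimal sketch of how such a flat dump could be paired back into records and checked against the declared length bounds is shown here; the function and variable names are illustrative, not part of the dataset itself.

```python
# Minimal sketch: pair alternating prompt/response lines into records
# and sanity-check them against the declared column bounds
# (prompts: 81-413 chars, metrics_response: 0-371 chars).

def pair_records(lines):
    """Group a flat list of alternating prompt/response lines into dicts."""
    records = []
    for i in range(0, len(lines) - 1, 2):
        records.append({"prompts": lines[i], "metrics_response": lines[i + 1]})
    return records

sample = [
    "What metrics were used to measure the BigBird model in the "
    "Big Bird: Transformers for Longer Sequences paper on the RTE dataset?",
    "Accuracy, Dev Accuracy",
]
records = pair_records(sample)
assert 81 <= len(records[0]["prompts"]) <= 413
assert 0 <= len(records[0]["metrics_response"]) <= 371
```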
What metrics were used to measure the Vector-wise model in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the OPT-IML 175B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the OPT-IML 30B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the ELECTRA model in the ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the MLM+ del-span model in the CLEAR: Contrastive Learning for Sentence Representation paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the Neo-6B (QA + WS) model in the Ask Me Anything: A simple strategy for prompting language models paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the RealFormer model in the RealFormer: Transformer Likes Residual Attention paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the SqueezeBERT model in the SqueezeBERT: What can computer vision teach NLP about efficient neural networks? paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the SMART model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the data2vec model in the data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the FNet-Large model in the FNet: Mixing Tokens with Fourier Transforms paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the AlexaTM 20B model in the AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the OPT-IML 1.3B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the TinyBERT model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the Neo-6B (QA) model in the Ask Me Anything: A simple strategy for prompting language models paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the OPT 175B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the Neo-6B (few-shot) model in the Ask Me Anything: A simple strategy for prompting language models paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the OPT 30B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the 24hBERT model in the How to Train BERT with an Academic Budget paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the OPT 1.3B model in the OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the SMARTRoBERTa model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the SMART-BERT model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the RTE dataset?
Accuracy, Dev Accuracy
What metrics were used to measure the aESIM model in the Attention Boosted Sequential Inference Model paper on the Quora Question Pairs dataset?
Accuracy
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the T5 model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the DeBERTa (large) model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Adv-RoBERTa ensemble model in the StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Vector-wise model in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the BERT-Large model in the FNet: Mixing Tokens with Fourier Transforms paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the ASA + RoBERTa model in the Adversarial Self-Attention for Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the MT-DNN-ensemble model in the Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Snorkel MeTaL (ensemble) model in the Training Complex Models with Multi-Task Weak Supervision paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the MT-DNN model in the Multi-Task Deep Neural Networks for Natural Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the RealFormer model in the RealFormer: Transformer Likes Residual Attention paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the gMLP-large model in the Pay Attention to MLPs paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the ASA + BERT-base model in the Adversarial Self-Attention for Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the 24hBERT model in the How to Train BERT with an Academic Budget paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Charformer-Tall model in the Charformer: Fast Character Transformers via Gradient-based Subword Tokenization paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the TinyBERT model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the MFAE model in the What Do Questions Exactly Ask? MFAE: Duplicate Question Identification with Multi-Fusion Asking Emphasis paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Finetuned Transformer LM model in the Improving Language Understanding by Generative Pre-Training paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the SqueezeBERT model in the SqueezeBERT: What can computer vision teach NLP about efficient neural networks? paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the FNet-Large model in the FNet: Mixing Tokens with Fourier Transforms paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the aESIM model in the Attention Boosted Sequential Inference Model paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Multi-task BiLSTM + Attn model in the GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Stacked Bi-LSTMs (shortcut connections, max-pooling) model in the Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the GenSen model in the Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Bi-LSTM sentence encoder (max-pooling) model in the Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the Stacked Bi-LSTMs (shortcut connections, max-pooling, attention) model in the Combining Similarity Features and Deep Representation Learning for Stance Detection in the Context of Checking Fake News paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the SWEM-max model in the Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the MT-DNN-SMARTv0 model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the MT-DNN-SMART model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the SMART+BERT-BASE model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the LM-CPPF RoBERTa-base model in the LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the SMARTRoBERTa model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the SMART-BERT model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MultiNLI dataset?
Matched, Mismatched, Accuracy, Dev Matched, Dev Mismatched
What metrics were used to measure the roberta-base-mnli model in the Probing neural language models for understanding of words of estimative probability paper on the Probability words NLI dataset?
1:1 Accuracy
What metrics were used to measure the MacBERT-large model in the CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark paper on the KUAKE-QTR dataset?
Accuracy
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the EFL (Entailment as Few-shot Learner) + RoBERTa-large model in the Entailment as Few-Shot Learner paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the RoBERTa-large+Self-Explaining model in the Self-Explaining Structures Improve NLP Models paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the RoBERTa-large + self-explaining layer model in the Self-Explaining Structures Improve NLP Models paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the CA-MTL model in the Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the SemBERT model in the Semantics-aware BERT for Language Understanding paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the MT-DNN-SMARTLARGEv0 model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the MT-DNN model in the Multi-Task Deep Neural Networks for Natural Language Understanding paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
What metrics were used to measure the SJRC (BERT-Large +SRL) model in the Explicit Contextual Semantics for Text Comprehension paper on the SNLI dataset?
% Test Accuracy, % Train Accuracy, Parameters, Dev Accuracy, % Dev Accuracy, Accuracy
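Every prompt above follows one fixed template, so the (model, paper, dataset) triple can be recovered mechanically. The sketch below assumes that template holds for all rows and that no paper title itself contains the delimiter phrase " paper on the "; the regex and function names are illustrative.

```python
import re

# The prompts all instantiate one template:
#   "What metrics were used to measure the {model} model in the {paper}
#    paper on the {dataset} dataset?"
# A non-greedy model group stops at the first " model in the ", while the
# greedy paper group absorbs titles that contain colons or question marks.
TEMPLATE = re.compile(
    r"What metrics were used to measure the (?P<model>.+?) model in the "
    r"(?P<paper>.+) paper on the (?P<dataset>.+) dataset\?$"
)

def parse_prompt(prompt):
    """Extract model, paper, and dataset fields; None if the template fails."""
    m = TEMPLATE.match(prompt)
    return m.groupdict() if m else None

example = (
    "What metrics were used to measure the T5-Base model in the "
    "Exploring the Limits of Transfer Learning with a Unified "
    "Text-to-Text Transformer paper on the RTE dataset?"
)
fields = parse_prompt(example)
assert fields["model"] == "T5-Base"
assert fields["dataset"] == "RTE"
```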