prompts | metrics_response |
|---|---|
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the SBERT_Large model in the paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the Multilingual Bert model in the paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the RuGPT3Medium model in the paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the RuGPT3Small model in the paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the TERRa dataset? | Accuracy |
What metrics were used to measure the ExplainThenPredictAttention (e-InferSent Bi-LSTM + Attention) model in the e-SNLI: Natural Language Inference with Natural Language Explanations paper on the e-SNLI dataset? | BLEU, Accuracy |
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the StructBERTRoBERTa ensemble model in the StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ALICE model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the MT-DNN-SMART model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the DeBERTaV3large model in the DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ELECTRA model in the paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the DeBERTa (large) model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the Vector-wise model in the LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the SpanBERT model in the SpanBERT: Improving Pre-training by Representing and Predicting Spans paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the TRANS-BLSTM model in the TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ASA + RoBERTa model in the Adversarial Self-Attention for Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the MLM+ subs+ del-span model in the CLEAR: Contrastive Learning for Sentence Representation paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the BERT-LARGE model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the RealFormer model in the RealFormer: Transformer Likes Residual Attention paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ASA + BERT-base model in the Adversarial Self-Attention for Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Language Representation with Informative Entities paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the data2vec model in the data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the Charformer-Tall model in the Charformer: Fast Character Transformers via Gradient-based Subword Tokenization paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the 24hBERT model in the How to Train BERT with an Academic Budget paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the SqueezeBERT model in the SqueezeBERT: What can computer vision teach NLP about efficient neural networks? paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the Nyströmformer model in the Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the TinyBERT model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the FNet-Large model in the FNet: Mixing Tokens with Fourier Transforms paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the LM-CPPF RoBERTa-base model in the LM-CPPF: Paraphrasing-Guided Data Augmentation for Contrastive Prompt-Based Few-Shot Fine-Tuning paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the SMARTRoBERTa model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the SMART-BERT model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the QNLI dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the SMARTRoBERTa-LARGE model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the MNLI + SNLI + ANLI + FEVER dataset? | % Dev Accuracy, % Test Accuracy |
What metrics were used to measure the T5-3B (explanation prompting) model in the Prompting for explanations improves Adversarial NLI. Is this true? {Yes} it is {true} because {it weakens superficial cues} paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the T0-11B (explanation prompting) model in the Prompting for explanations improves Adversarial NLI. Is this true? {Yes} it is {true} because {it weakens superficial cues} paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the InfoBERT (RoBERTa) model in the InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the RoBERTa (Large) model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the ALUM (RoBERTa-LARGE) model in the Adversarial Training for Large Neural Language Models paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the XLNet (Large) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the ChatGPT model in the A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the GPT-3 model in the Language Models are Few-Shot Learners paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 540B (Self Improvement, Self Consistency) model in the Large Language Models Can Self-Improve paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 540B (Self Improvement, CoT Prompting) model in the Large Language Models Can Self-Improve paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 540B (Self Improvement, Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 540B (Self Consistency) model in the Large Language Models Can Self-Improve paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 540B (CoT Prompting) model in the Large Language Models Can Self-Improve paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the PaLM 540B (Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the ANLI test dataset? | A1, A2, A3 |
What metrics were used to measure the DeBERTa model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the T5 model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the XLNet model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the StructBERTRoBERTa ensemble model in the StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the T5-Base model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the T5-Small model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the SqueezeBERT model in the SqueezeBERT: What can computer vision teach NLP about efficient neural networks? paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the DistilBERT model in the DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter paper on the WNLI dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (finetuned) model in the PaLM: Scaling Language Modeling with Pathways paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the DeBERTa-1.5B model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the MUPPET Roberta Large model in the Muppet: Massive Multi-task Representations with Pre-Finetuning paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the DeBERTaV3large model in the DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5 model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-3B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the EFL model in the Entailment as Few-Shot Learner paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the ALBERT model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the Adv-RoBERTa ensemble model in the StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the RoBERTa model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the T5-Large model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the RTE dataset? | Accuracy, Dev Accuracy |
What metrics were used to measure the XLNet (single model) model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the RTE dataset? | Accuracy, Dev Accuracy |