Columns (both string-valued):
prompts — string, length 81–413
metrics_response — string, length 0–371
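A minimal sketch of how the flat dump below might be paired back into records, assuming the rows simply alternate between a `prompts` entry and its `metrics_response` entry (the `pair_rows` helper and the sample strings are illustrative, not part of the dataset tooling):

```python
def pair_rows(lines):
    """Group a flat list of alternating prompt/response strings into records.

    Assumes even-indexed lines are prompts and odd-indexed lines are the
    corresponding metrics_response values, as in the dump below.
    """
    records = []
    for i in range(0, len(lines) - 1, 2):
        records.append({"prompts": lines[i], "metrics_response": lines[i + 1]})
    return records


# Illustrative usage with a truncated sample row:
rows = [
    "What metrics were used to measure the BioBERT-MIMIC model ... on the MedNLI dataset?",
    "Accuracy, F1",
]
records = pair_rows(rows)
```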
What metrics were used to measure the BioBERT-MIMIC model in the Saama Research at MEDIQA 2019: Pre-trained BioBERT with Attention Visualisation for Medical Natural Language Inference paper on the MedNLI dataset?
Accuracy, F1
What metrics were used to measure the NCBI_BERT(base) (P+M) model in the Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets paper on the MedNLI dataset?
Accuracy, F1
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the RuBERT conversational model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the RuGPT3Large model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the Golden Transformer model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the RuGPT3Medium model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the SBERT_Large model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the RuBERT plain model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the Multilingual BERT model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the ruRoberta-large finetune model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the ruBert-large finetune model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the RuGPT3Small model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the ruBert-base finetune model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the ruT5-base-finetune model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the ruT5-large-finetune model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the RCB dataset?
Average F1, Accuracy
What metrics were used to measure the MMBT model in the Supervised Multimodal Bitransformers for Classifying Images and Text paper on the V-SNLI dataset?
Accuracy
What metrics were used to measure the V-BiMPM model in the Grounded Textual Entailment paper on the V-SNLI dataset?
Accuracy
What metrics were used to measure the BiMPM model in the Grounded Textual Entailment paper on the V-SNLI dataset?
Accuracy
What metrics were used to measure the mGPT model in the mGPT: Few-Shot Learners Go Multilingual paper on the XWINO dataset?
Accuracy
What metrics were used to measure the NeuralLog model in the NeuralLog: Natural Language Inference with Joint Neural and Logical Reasoning paper on the MED dataset?
1:1 Accuracy
What metrics were used to measure the BERT-base model in the CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark paper on the KUAKE-QQR dataset?
Accuracy
What metrics were used to measure the FlauBERT (large) model in the FlauBERT: Unsupervised Language Model Pre-training for French paper on the XNLI French dataset?
Accuracy
What metrics were used to measure the CamemBERT model in the CamemBERT: a Tasty French Language Model paper on the XNLI French dataset?
Accuracy
What metrics were used to measure the FlauBERT (base) model in the FlauBERT: Unsupervised Language Model Pre-training for French paper on the XNLI French dataset?
Accuracy
What metrics were used to measure the XLM (MLM+TLM) model in the Cross-lingual Language Model Pretraining paper on the XNLI French dataset?
Accuracy
What metrics were used to measure the BiLSTM-max model in the XNLI: Evaluating Cross-lingual Sentence Representations paper on the XNLI French dataset?
Accuracy
What metrics were used to measure the Roberta-large model in the Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets paper on the HANS dataset?
1:1 Accuracy
What metrics were used to measure the DeBERTaV3large model in the DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing paper on the MRPC dataset?
Accuracy
What metrics were used to measure the ERNIE 2.0 Large model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the XNLI Chinese dataset?
Accuracy
What metrics were used to measure the ERNIE 2.0 Base model in the ERNIE 2.0: A Continual Pre-training Framework for Language Understanding paper on the XNLI Chinese dataset?
Accuracy
What metrics were used to measure the ERNIE model in the ERNIE: Enhanced Representation through Knowledge Integration paper on the XNLI Chinese dataset?
Accuracy
What metrics were used to measure the NeuralLog model in the NeuralLog: Natural Language Inference with Joint Neural and Logical Reasoning paper on the SICK dataset?
1:1 Accuracy
What metrics were used to measure the TinyBERT (M=6;d'=768;d'i=3072) model in the TinyBERT: Distilling BERT for Natural Language Understanding paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the BERT-Large-uncased-PruneOFA (90% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the BERT-Base-uncased-PruneOFA (85% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the BERT-Base-uncased-PruneOFA (90% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the DistilBERT-uncased-PruneOFA (85% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the DistilBERT-uncased-PruneOFA (90% unstruct sparse) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8) model in the Prune Once for All: Sparse Pre-Trained Language Models paper on the MultiNLI Dev dataset?
Matched, Mismatched
What metrics were used to measure the T5 model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the AX dataset?
Accuracy
What metrics were used to measure the CA-MTL model in the Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the MT-DNN model in the Multi-Task Deep Neural Networks for Natural Language Understanding paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the Finetuned Transformer LM model in the paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the Finetuned Transformer LM model in the Improving Language Understanding by Generative Pre-Training paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the Hierarchical BiLSTM Max Pooling model in the Sentence Embeddings in NLI with Iterative Refinement Encoders paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the RE2 model in the Simple and Effective Text Matching with Richer Alignment Features paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the CAFE model in the Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the SplitEE-S model in the SplitEE: Early Exit in Deep Neural Networks with Split Computing paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the MT-DNN-SMART_100%ofTrainingData model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the MT-DNN-SMART_10%ofTrainingData model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the MT-DNN-SMART_1%ofTrainingData model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the MT-DNN-SMART_0.1%ofTrainingData model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the MT-DNN-SMARTLARGEv0 model in the SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization paper on the SciTail dataset?
Accuracy, Dev Accuracy, % Dev Accuracy, % Test Accuracy
What metrics were used to measure the mBERT model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the ParsBERT model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the Translate-Source + fastText model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the LSTM + BERT (concat) model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the ESIM + BERT (FarsTail, MultiNLI) model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the ULMFiT model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the ESIM + fastText model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the Translate-Target + fastText model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the Decomposable Attention Model + word2vec model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the HBMP + word2vec model in the FarsTail: A Persian Natural Language Inference Dataset paper on the FarsTail dataset?
% Test Accuracy
What metrics were used to measure the BioLinkBert model in the BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic Constraints for Adversarial Examples paper on the BioNLI dataset?
Macro F1
What metrics were used to measure the PaLM 540B (finetuned) model in the PaLM: Scaling Language Modeling with Pathways paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the DeBERTa-1.5B model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the AlexaTM 20B model in the AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommitmentBank dataset?
Accuracy, F1
What metrics were used to measure the DeBERTa (large) model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the MRPC Dev dataset?
Accuracy
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the TERRa dataset?
Accuracy
What metrics were used to measure the Golden Transformer model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the ruRoberta-large finetune model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the ruT5-large-finetune model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the ruBert-large finetune model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the ruBert-base finetune model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the ruT5-base-finetune model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the RuGPT3Large model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the RuBERT plain model in the paper on the TERRa dataset?
Accuracy
What metrics were used to measure the RuBERT conversational model in the paper on the TERRa dataset?
Accuracy