prompts (string, lengths 81-413)
metrics_response (string, lengths 0-371)
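The schema above can be sketched as plain Python records; a minimal sketch, assuming the dump is a two-column prompts/metrics_response table (column names and length bounds come from the header, and the sample rows are copied from this file):

```python
# Two sample rows from the dump, using the column names from the header.
rows = [
    {
        "prompts": (
            "What metrics were used to measure the GraphStar model in the "
            "Graph Star Net for Generalized Multi-Task Learning paper on the "
            "R8 dataset?"
        ),
        "metrics_response": "Accuracy, F-measure",
    },
    {
        "prompts": (
            "What metrics were used to measure the Pegasus model in the "
            "Calibrating Sequence likelihood Improves Conditional Language "
            "Generation paper on the CNN / Daily Mail dataset?"
        ),
        "metrics_response": "ROUGE-1, ROUGE-2, ROUGE-L",
    },
]

for row in rows:
    # Each value respects the string-length bounds stated in the header:
    # prompts are 81-413 characters, responses 0-371 characters.
    assert 81 <= len(row["prompts"]) <= 413
    assert 0 <= len(row["metrics_response"]) <= 371
```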
What metrics were used to measure the GraphStar model in the Graph Star Net for Generalized Multi-Task Learning paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the SSGC model in the Simple Spectral Graph Convolution paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the SGC model in the Simplifying Graph Convolutional Networks paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the SGCN model in the Simplifying Graph Convolutional Networks paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the NABoE-full model in the Neural Attentive Bag-of-Entities Model for Text Classification paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the Text GCN model in the Graph Convolutional Networks for Text Classification paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the WideMLP model in the Transformers are Short Text Classifiers: A Study of Inductive Short Text Classifiers on Benchmarks and Real-world Datasets paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the TextEnt-full model in the Representation Learning of Entities and Documents from Knowledge Base Descriptions paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the ConvTextTM model in the ConvTextTM: An Explainable Convolutional Tsetlin Machine Framework for Text Classification paper on the R8 dataset?
Accuracy, F-measure
What metrics were used to measure the fastText model in the Transformers are Short Text Classifiers: A Study of Inductive Short Text Classifiers on Benchmarks and Real-world Datasets paper on the R8 dataset?
Accuracy, F-measure
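The R8 entries above all report Accuracy and F-measure; a minimal sketch of both from predicted/true label lists (macro-averaged F1 is used here as one common reading of "F-measure", and the example labels are invented for illustration):

```python
def accuracy(y_true, y_pred):
    """Fraction of positions where prediction matches the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Macro-averaged F-measure: per-class F1, averaged over all classes."""
    scores = []
    for cls in sorted(set(y_true) | set(y_pred)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        scores.append(0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn))
    return sum(scores) / len(scores)

# Invented toy labels (R8-style Reuters topic names).
y_true = ["earn", "acq", "earn", "crude"]
y_pred = ["earn", "earn", "earn", "crude"]
print(accuracy(y_true, y_pred))  # 0.75
print(macro_f1(y_true, y_pred))  # ≈ 0.6
```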
What metrics were used to measure the Custom Legal-BERT model in the When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset paper on the Terms of Service dataset?
F1(10-fold)
What metrics were used to measure the Legal-BERT model in the When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset paper on the Terms of Service dataset?
F1(10-fold)
What metrics were used to measure the BERT model in the When Does Pretraining Help? Assessing Self-Supervised Learning for Law and the CaseHOLD Dataset paper on the Terms of Service dataset?
F1(10-fold)
What metrics were used to measure the Vicuna13B v1.1 model in the This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models paper on the This is not a Dataset dataset?
Accuracy, Coherence
What metrics were used to measure the Flan-T5-xxl model in the This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models paper on the This is not a Dataset dataset?
Accuracy, Coherence
What metrics were used to measure the Spark NLP model in the Mining Adverse Drug Reactions from Unstructured Mediums at Scale paper on the Adverse Drug Events (ADE) Corpus dataset?
F1 - macro
What metrics were used to measure the LSVC + linguistic features + publishing attributes model in the A Comparative Study of Feature Types for Age-Based Text Classification paper on the RusAge: Corpus for Age-Based Text Classification dataset?
F1
What metrics were used to measure the ERNIE-Doc-Large model in the ERNIE-Doc: A Retrospective Long-Document Modeling Transformer paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the XLNet model in the XLNet: Generalized Autoregressive Pretraining for Language Understanding paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the In-domain Pretraining+Semi-supervised model in the Neural Semi-supervised Learning for Text Classification Under Large-Scale Pretraining paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the Heinsen Routing + RoBERTa-large model in the An Algorithm for Routing Vectors in Sequences paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the ERNIE-Doc model in the ERNIE-Doc: A Retrospective Long-Document Modeling Transformer paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the BERT Finetune + UDA model in the Unsupervised Data Augmentation for Consistency Training paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the BERT-ITPT-FiT model in the How to Fine-Tune BERT for Text Classification? paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the BigBird model in the Big Bird: Transformers for Longer Sequences paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the HAHNN (CNN) model in the Hierarchical Attentional Hybrid Neural Networks for Document Classification paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the Paragraph Vectors Le & Mikolov (2014) model in the Distributed Representations of Sentences and Documents paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the byte mLSTM7 model in the A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the MPAD-path model in the Message Passing Attention Networks for Document Understanding paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the Transductive SVM Johnson & Zhang (2015b) model in the Semi-supervised Convolutional Neural Networks for Text Categorization via Region Embedding paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the Context-Aware Pipeline model in the Context-Aware Compilation of DNN Training Pipelines across Edge and Cloud paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the Document Classification Using Importance of Sentences model in the Improving Document-Level Sentiment Classification Using Importance of Sentences paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the KD-LSTMreg model in the DocBERT: BERT for Document Classification paper on the IMDb dataset?
Accuracy (2 classes), Accuracy (10 classes), Accuracy, F1, Precision, Recall, eval_accuracy
What metrics were used to measure the TRANS-BLSTM model in the TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding paper on the GLUE RTE dataset?
Accuracy
What metrics were used to measure the BERT-ITPT-FiT model in the How to Fine-Tune BERT for Text Classification? paper on the Sogou News dataset?
Accuracy
What metrics were used to measure the CCCapsNet model in the Compositional Coding Capsule Network with K-Means Routing for Text Classification paper on the Sogou News dataset?
Accuracy
What metrics were used to measure the ULMFiT (Small data) model in the Sampling Bias in Deep Active Classification: An Empirical Study paper on the Sogou News dataset?
Accuracy
What metrics were used to measure the TRANS-BLSTM model in the TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding paper on the GLUE COLA dataset?
Accuracy, Matthews Correlation
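Matthews Correlation, listed above for GLUE COLA, is computed from the binary confusion matrix; a minimal pure-Python sketch (the example labels are invented for illustration):

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Common convention: MCC is defined as 0 when any marginal count is empty.
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 0, 0]))  # ≈ 0.577
```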
What metrics were used to measure the TRANS-BLSTM model in the TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding paper on the GLUE MRPC dataset?
Accuracy, F1
What metrics were used to measure the RoBERTa-Large + ICDA model in the Selective In-Context Data Augmentation for Intent Detection using Pointwise V-Information paper on the BANKING77 dataset?
Accuracy, F1, F1 Macro, F1 Micro, F1 Weighted, Macro F1, Precision Macro, Precision Micro, Precision Weighted, Recall Macro, Recall Micro, Recall Weighted, Weighted F1, loss
What metrics were used to measure the mBART model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the mBART model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the BART-IT model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the mT5 model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the IT5 model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the IT5-base model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the Pegasus-CNN/DM (eng-it translation) model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the Pegasus-XSum (eng-it translation) model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the Abstractive Text Summarization from Il Post dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BERTScore
What metrics were used to measure the T2SAM model in the An abstractive text summarization technique using transformer model with self-attention mechanism paper on the Inshorts News dataset?
ROUGE
What metrics were used to measure the Pegasus model in the Calibrating Sequence likelihood Improves Conditional Language Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BRIO model in the BRIO: Bringing Order to Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the PEGASUS + SummaReranker model in the SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BART + SimCLS model in the SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the SEASON model in the Salience Allocation as Guidance for Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Fourier Transformer model in the Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the GLM-XXLarge model in the GLM: General Language Model Pretraining with Autoregressive Blank Infilling paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BART + R-Drop model in the R-Drop: Regularized Dropout for Neural Networks paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the CoCoNet + CoCoPretrain model in the Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the MUPPET BART Large model in the Muppet: Massive Multi-task Representations with Pre-Finetuning paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the CoCoNet model in the Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BART+R3F model in the Better Fine-Tuning by Reducing Representational Collapse paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the ERNIE-GENLARGE (large-scale text corpora) model in the ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the PALM model in the PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the ProphetNet model in the ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the PEGASUS model in the PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BART model in the BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the ERNIE-GENLARGE model in the ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the LongT5 model in the LongT5: Efficient Text-To-Text Transformer for Long Sequences paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the T5 model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the SRformer-BART model in the Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the UniLMv2 model in the UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the UniLM model in the Unified Language Model Pre-training for Natural Language Understanding and Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the ERNIE-GENBASE model in the ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BertSumExtAbs model in the Text Summarization with Pretrained Encoders paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BERT-ext + abs + RL + rerank model in the Summary Level Training of Sentence Rewriting for Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Selector & Pointer-Generator model in the Mixture Content Selection for Diverse Sequence Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Two-Stage + RL model in the Pretraining-Based Natural Language Generation for Text Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the DCA model in the Deep Communicating Agents for Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Li et al. model in the Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the rnn-ext + RL model in the Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the EditNet model in the An Editorial Network for Enhanced Document Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Bottom-Up Summarization model in the Bottom-Up Abstractive Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Mask Attention Network model in the Mask Attention Networks: Rethinking and Strengthen Transformer paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Subformer-base model in the Subformer: A Parameter Reduced Transformer paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the rnn-ext + abs + RL + rerank model in the Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the end2end w/ inconsistency loss model in the A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the RL + pg + cbdec model in the Closed-Book Training to Improve Summarization Encoder Memory paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the ROUGESal+Ent RL model in the Multi-Reward Reinforced Summarization with Saliency and Entailment paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the LEAD-3 model in the Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Li et al. model in the Improving Neural Abstractive Document Summarization with Structural Regularization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the ML+RL ROUGE+Novel, with LM model in the Improving Abstraction in Text Summarization paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Dynamic Conv model in the Pay Less Attention with Lightweight and Dynamic Convolutions paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Pointer + Coverage + EntailmentGen + QuestionGen model in the Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the PTGEN + Coverage model in the Get To The Point: Summarization with Pointer-Generator Networks paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Pointer-Generator + Coverage model in the Get To The Point: Summarization with Pointer-Generator Networks paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Transformer model in the Attention Is All You Need paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Summary Loop Unsup model in the The Summary Loop: Learning to Write Abstractive Summaries Without Examples paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the DELTA (BLSTM) model in the DELTA: A DEep learning based Language Technology plAtform paper on the CNN / Daily Mail dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BertSum model in the Abstractive Summarization of Spoken and Written Instructions with BERT paper on the WikiHow dataset?
Content F1, ROUGE-1, ROUGE-2, ROUGE-L
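The ROUGE scores that dominate the summarization entries above are n-gram overlap measures; a minimal sketch of ROUGE-1 as unigram-overlap F1 (official evaluations use the ROUGE toolkit with stemming and bootstrap resampling, which this toy version omits):

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1: F1 over clipped unigram counts, no stemming."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f1("the cat sat on the mat", "the cat lay on the mat"))  # ≈ 0.833
```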