| prompts | metrics_response |
|---|---|
What metrics were used to measure the ML+RL ROUGE+Novel, with LM model in the Improving Abstraction in Text Summarization paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the GAN model in the Generative Adversarial Network for Abstractive Text Summarization paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the ML+RL, with intra-attention model in the A Deep Reinforced Model for Abstractive Summarization paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the rnn-ext + abs + RL + rerank model in the Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the SummaRuNNer model in the SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Lead-3 baseline model in the SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the KIGN+Prediction-guide model in the Guiding Generation for Abstractive Text Summarization Based on Key Information Guide Network paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Fastformer model in the Fastformer: Additive Attention Can Be All You Need paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Tan et al. model in the Abstractive Document Summarization with a Graph-Based Attentional Neural Model paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the words-lvt2k-temp-att model in the Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond paper on the CNN / Daily Mail (Anonymized) dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the GenCompareSum model in the GenCompareSum: a hybrid unsupervised summarization method using salience paper on the CORD-19 dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Top Down Transformer (AdaPool) (464M) model in the Long Document Summarization with Top-down and Bottom-up Inference paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the BART-LS model in the Adapting Pretrained Text-to-Text Models for Long Text Sequences paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the LongT5 model in the LongT5: Efficient Text-To-Text Transformer for Long Sequences paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the GoSum (extractive) model in the GoSum: Extractive Summarization of Long Documents by Reinforcement Learning and Graph Organized discourse state paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Lodoss-full-large (extractive) model in the Toward Unifying Text Segmentation and Long Document Summarization paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the MemSum (extractive) model in the MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Lodoss-full-base (extractive) model in the Toward Unifying Text Segmentation and Long Document Summarization paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the HAT-BART model in the Hierarchical Learning for Generation with Long Source Sequences paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the GRETEL model in the GRETEL: Graph Contrastive Topic Enhanced Language Model for Long Document Extractive Summarization paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the DeepPyramidion model in the Sparsifying Transformer Models with Trainable Representation Pooling paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the FactorSum model in the Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the HiStruct+ model in the HiStruct+: Improving Extractive Text Summarization with Hierarchical Structure Information paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the DANCER PEGASUS model in the A Divide-and-Conquer Approach to the Summarization of Long Documents paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the BigBird-Pegasus model in the Big Bird: Transformers for Longer Sequences paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the ExtSum-LG+MMR-Select+ model in the Systematically Exploring Redundancy Reduction in Summarizing Long Documents paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the ExtSum-LG+RdLoss model in the Systematically Exploring Redundancy Reduction in Summarizing Long Documents paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the PEGASUS model in the PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Sent-CLF model in the On Extractive and Abstractive Neural Document Summarization with Transformer Language Models paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the ExtSum-LG model in the Extractive Summarization of Long Documents by Combining Global and Local Context paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the DANCER LSTM model in the A Divide-and-Conquer Approach to the Summarization of Long Documents paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the DANCER RUM model in the A Divide-and-Conquer Approach to the Summarization of Long Documents paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Sent-PTR model in the On Extractive and Abstractive Neural Document Summarization with Transformer Language Models paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the GenCompareSum model in the GenCompareSum: a hybrid unsupervised summarization method using salience paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the TLM-I+E model in the On Extractive and Abstractive Neural Document Summarization with Transformer Language Models paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the MatchSum (BERT-base) model in the Extractive Summarization as Text Matching paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Discourse model in the A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Fastformer model in the Fastformer: Additive Attention Can Be All You Need paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Pntr-Gen-Seq2Seq model in the Get To The Point: Summarization with Pointer-Generator Networks paper on the Pubmed dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the SRformer-BART model in the Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model paper on the XSum dataset? | ROUGE-1 |
What metrics were used to measure the MPNet-multilingual model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the ST5-Base model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-BERT-unsup model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the MiniLM-L6 model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the MiniLM-L12-multilingual model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the GTR-XXL model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the Komninos model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the Contriever model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the SGPT-125M-nli model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the GTR-XL model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the ST5-XXL model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the ST5-XL model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the BERT model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the GTR-Base model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the ST5-Large model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the coCondenser-msmarco model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the Glove model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the MiniLM-L12 model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the SPECTER model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the MPNet model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the Ada Similarity model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the LASER2 model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the SGPT-1.3B-msmarco model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the SGPT-BLOOM-7.1B-msmarco model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the SGPT-5.8B-msmarco model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the SimCSE-BERT-sup model in the MTEB: Massive Text Embedding Benchmark paper on the MTEB dataset? | Spearman Correlation |
What metrics were used to measure the BART-LS model in the Adapting Pretrained Text-to-Text Models for Long Text Sequences paper on the QMSum dataset? | ROUGE-1 |
What metrics were used to measure the InstructDS model in the Instructive Dialogue Summarization with Query Aggregations paper on the DialogSum dataset? | Rouge1, Rouge2, RougeL, BertScore |
What metrics were used to measure the SICK model in the Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization paper on the DialogSum dataset? | Rouge1, Rouge2, RougeL, BertScore |
What metrics were used to measure the BertSum model in the Abstractive Summarization of Spoken and Written Instructions with BERT paper on the WikiHow dataset? | ROUGE-1, ROUGE-2, ROUGE-L, Content F1 |
What metrics were used to measure the MatchSum (BERT-base) model in the Extractive Summarization as Text Matching paper on the WikiHow dataset? | ROUGE-1, ROUGE-2, ROUGE-L, Content F1 |
What metrics were used to measure the Pointer-generator + coverage model in the WikiHow: A Large Scale Text Summarization Dataset paper on the WikiHow dataset? | ROUGE-1, ROUGE-2, ROUGE-L, Content F1 |
What metrics were used to measure the Pegasus 2B + SLiC model in the Calibrating Sequence likelihood Improves Conditional Language Generation paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the BRIO model in the BRIO: Bringing Order to Abstractive Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PEGASUS + SummaReranker model in the SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PEGASUS + SimCLS model in the SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PEGASUSLARGE model in the PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the HAT-BART model in the Hierarchical Learning for Generation with Long Source Sequences paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the BART model in the BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the BertSumExtAbs model in the Text Summarization with Pretrained Encoders paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the T-ConvS2S model in the Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the Baseline : Extractive Oracle model in the Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PtGen model in the Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the Seq2Seq model in the Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PtGen-Covg model in the Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the Baseline : Lead-3 model in the Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the Baseline : Random model in the Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the X-Sum dataset? | ROUGE-1, ROUGE-2, ROUGE-3, ROUGE-L |
What metrics were used to measure the FactorSum model in the Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents paper on the GovReport dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the BART-LS model in the Adapting Pretrained Text-to-Text Models for Long Text Sequences paper on the GovReport dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the LongT5 model in the LongT5: Efficient Text-To-Text Transformer for Long Sequences paper on the BigPatent dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the BigBird-Pegasus model in the Big Bird: Transformers for Longer Sequences paper on the BigPatent dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Longformer Encoder Decoder model in the BillSum: A Corpus for Automatic Summarization of US Legislation paper on the BillSum dataset? | rouge1 |
What metrics were used to measure the BART-LS model in the Adapting Pretrained Text-to-Text Models for Long Text Sequences paper on the BookSum dataset? | ROUGE |
What metrics were used to measure the Top Down Transformer (AdaPool) (464M) model in the Long Document Summarization with Top-down and Bottom-up Inference paper on the BookSum dataset? | ROUGE |
What metrics were used to measure the Anchor-context + Query biased model in the Abstractive Snippet Generation paper on the Webis-Snippet-20 Corpus dataset? | Rouge-1, Rouge-2, Rouge-L |
What metrics were used to measure the Ground-truth transcript + Action with Hierarchical Attn model in the Multimodal Abstractive Summarization for How2 Videos paper on the How2 dataset? | Content F1, ROUGE-L, ROUGE-1 |
What metrics were used to measure the BertSum model in the Abstractive Summarization of Spoken and Written Instructions with BERT paper on the How2 dataset? | Content F1, ROUGE-L, ROUGE-1 |
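Nearly every response in the table names ROUGE-N variants. As a minimal, self-contained sketch (not part of the dataset), the n-gram-overlap idea behind ROUGE-1 and ROUGE-2 can be computed like this; published evaluations typically use the `rouge-score` package, which adds stemming and bootstrap aggregation:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1):
    """Return (precision, recall, F1) for ROUGE-N via n-gram overlap.

    A simplified illustration: whitespace tokenization, no stemming,
    no aggregation over multiple references.
    """
    def ngrams(tokens, n):
        # Multiset of n-grams, so repeated n-grams are clipped correctly.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not cand or not ref:
        return 0.0, 0.0, 0.0
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return precision, recall, f1

p, r, f = rouge_n("the cat sat on the mat", "the cat lay on the mat", n=1)
```

ROUGE-L, the other metric cited throughout, replaces the n-gram overlap with the longest common subsequence between candidate and reference, but the precision/recall/F1 framing is the same.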