| prompts (string, lengths 81–413) | metrics_response (string, lengths 0–371) |
|---|---|
What metrics were used to measure the Collaborative Experts model in the Use What You Have: Video Retrieval Using Representations From Collaborative Experts paper on the ActivityNet dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Mean Rank, text-to-video Median Rank, video-to-text R@1, video-to-text R@5, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@10, video-to-text R@50 |
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the UMT-L (ViT-L/16) model in the Unmasked Teacher: Towards Training-Efficient Video Foundation Models paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VALOR model in the VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the TESTA (ViT-B/16) model in the TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the InternVideo model in the InternVideo: General Video Foundation Models via Generative and Discriminative Learning paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VLAB model in the VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HiTeA model in the HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the MuLTI model in the MuLTI: Efficient Video-and-Language Understanding with MultiWay-Sampler and Multiple Choice Modeling paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP-ViP model in the CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the STAN model in the Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Singularity model in the Revealing Single Frame Bias for Video-and-Language Learning paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DMAE (ViT-B/32) model in the Dual-Modal Attention-Enhanced Text-Video Retrieval with Triplet Partial Margin Contrastive Learning paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HunYuan_tvr (huge) model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the OmniVL model in the OmniVL:One Foundation Model for Image-Language and Video-Language Tasks paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HunYuan_tvr model in the Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Cap4Video model in the Cap4Video: What Can Auxiliary Captions Do for Text-Video Retrieval? paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Clover model in the Clover: Towards A Unified Video-Language Alignment and Fusion Model paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DRL model in the Disentangled Representation Learning for Text-Video Retrieval paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DiffusionRet+QB-Norm model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the VIOLETv2 model in the An Empirical Study of End-to-End Video-Language Transformers with Masked Visual Modeling paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the X-CLIP model in the X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HBI model in the Video-Text as Game Players: Hierarchical Banzhaf Interaction for Cross-Modal Representation Learning paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the DiffusionRet model in the DiffusionRet: Generative Text-Video Retrieval with Diffusion Model paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CAMoE model in the Improving Video-Text Retrieval by Multi-Stream Corpus Alignment and Dual Softmax Loss paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the QB-Norm+CLIP4Clip model in the Cross Modal Retrieval with Querybank Normalisation paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the CLIP4Clip model in the CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the ALPRO model in the Align and Prompt: Video-and-Language Pre-training with Entity Prompts paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the FROZEN model in the Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the HD-VILA model in the Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the PO Loss model in the Rudder: A Cross Lingual Video and Text Retrieval Dataset paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Collaborative Experts model in the Use What You Have: Video Retrieval Using Representations From Collaborative Experts paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the Aurora (ours, r=64) model in the Parameter-efficient Tuning of Large-scale Multimodal Foundation Model paper on the DiDeMo dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10, text-to-video R@50, text-to-video Median Rank, text-to-video Mean Rank, video-to-text R@1, video-to-text R@5, video-to-text R@10, video-to-text Median Rank, video-to-text Mean Rank |
What metrics were used to measure the PO Loss model in the Rudder: A Cross Lingual Video and Text Retrieval Dataset paper on the Charades-STA dataset? | text-to-video Mean Rank, text-to-video Median Rank, text-to-video R@1, text-to-video R@10, video-to-text Mean Rank, video-to-text Median Rank, video-to-text R@1, video-to-text R@10 |
What metrics were used to measure the TESTA (ViT-B/16) model in the TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding paper on the QuerYD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10 |
What metrics were used to measure the LF-VILA model in the Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning paper on the QuerYD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10 |
What metrics were used to measure the VINDLU model in the VindLU: A Recipe for Effective Video-and-Language Pretraining paper on the QuerYD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10 |
What metrics were used to measure the Frozen model in the Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval paper on the QuerYD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10 |
What metrics were used to measure the QB-Norm+TT-CE+ model in the Cross Modal Retrieval with Querybank Normalisation paper on the QuerYD dataset? | text-to-video R@1, text-to-video R@5, text-to-video R@10 |
What metrics were used to measure the DeepEventMine model in the DeepEventMine: end-to-end neural nested event extraction from biomedical texts paper on the Pathway Curation 2013 (PC) dataset? | F1 |
What metrics were used to measure the DeepEventMine model in the DeepEventMine: end-to-end neural nested event extraction from biomedical texts paper on the Infectious Diseases 2011 (ID) dataset? | F1 |
What metrics were used to measure the DeepEventMine model in the DeepEventMine: end-to-end neural nested event extraction from biomedical texts paper on the Cancer Genetics 2013 (CG) dataset? | F1 |
What metrics were used to measure the DeepEventMine model in the DeepEventMine: end-to-end neural nested event extraction from biomedical texts paper on the GENIA 2013 dataset? | F1 |
What metrics were used to measure the DeepEventMine model in the DeepEventMine: end-to-end neural nested event extraction from biomedical texts paper on the Multi-Level Event Extraction (MLEE) dataset? | F1 |
What metrics were used to measure the DeepEventMine model in the DeepEventMine: end-to-end neural nested event extraction from biomedical texts paper on the GENIA dataset? | F1 |
What metrics were used to measure the GEANet-SciBERT model in the Biomedical Event Extraction with Hierarchical Knowledge Graphs paper on the GENIA dataset? | F1 |
What metrics were used to measure the DeepEventMine model in the DeepEventMine: end-to-end neural nested event extraction from biomedical texts paper on the Epigenetics and Post-translational Modifications 2011 (EPI) dataset? | F1 |
What metrics were used to measure the DeepStruct multi-task model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the ACE2005 dataset? | Argument Cl, Argument Id, Trigger Cl, Trigger Id |
What metrics were used to measure the DeepStruct multi-task w/ finetune model in the DeepStruct: Pretraining of Language Models for Structure Prediction paper on the ACE2005 dataset? | Argument Cl, Argument Id, Trigger Cl, Trigger Id |
What metrics were used to measure the Text2Event - T5-large model in the Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction paper on the ACE2005 dataset? | Argument Cl, Argument Id, Trigger Cl, Trigger Id |
What metrics were used to measure the Text2Event - T5-base model in the Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction paper on the ACE2005 dataset? | Argument Cl, Argument Id, Trigger Cl, Trigger Id |
What metrics were used to measure the AutoLink model in the AutoLink: Self-supervised Learning of Human Skeletons and Object Outlines by Linking Keypoints paper on the MAFL Unaligned dataset? | Mean NME |
What metrics were used to measure the UniPoll model in the UniPoll: A Unified Social Media Poll Generation Framework via Multi-Objective Optimization paper on the WeiboPolls dataset? | ROUGE-1, ROUGE-L, BLEU-1, BLEU-3 |
What metrics were used to measure the T5 model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the WeiboPolls dataset? | ROUGE-1, ROUGE-L, BLEU-1, BLEU-3 |
What metrics were used to measure the Dual Dec model in the Engage the Public: Poll Question Generation for Social Media Posts paper on the WeiboPolls dataset? | ROUGE-1, ROUGE-L, BLEU-1, BLEU-3 |
What metrics were used to measure the MDN model in the Multimodal Differential Network for Visual Question Generation paper on the Visual Question Generation dataset? | BLEU-1 |
What metrics were used to measure the MDN model in the Multimodal Differential Network for Visual Question Generation paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset? | BLEU-1 |
What metrics were used to measure the coco-Caption [Karpathy and Li 2014] model in the Deep Visual-Semantic Alignments for Generating Image Descriptions paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset? | BLEU-1 |
What metrics were used to measure the Max(Yang,2015) model in the Neural Self Talk: Image Understanding via Continuous Questioning and Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset? | BLEU-1 |
What metrics were used to measure the Sample(Yang,2015) model in the Neural Self Talk: Image Understanding via Continuous Questioning and Answering paper on the COCO Visual Question Answering (VQA) real images 1.0 open ended dataset? | BLEU-1 |
What metrics were used to measure the Info-HCVAE model in the Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs paper on the TriviaQA dataset? | QAE, R-QAE |
What metrics were used to measure the HCVAE model in the Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs paper on the TriviaQA dataset? | QAE, R-QAE |
What metrics were used to measure the BART fine-tuned on FairytaleQA model in the Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension paper on the FairytaleQA dataset? | ROUGE-L |
What metrics were used to measure the BART fine-tuned on NarrativeQA and FairytaleQA model in the Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension paper on the FairytaleQA dataset? | ROUGE-L |
What metrics were used to measure the BART fine-tuned on NarrativeQA model in the Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension paper on the FairytaleQA dataset? | ROUGE-L |
What metrics were used to measure the Info-HCVAE model in the Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs paper on the Natural Questions dataset? | QAE, R-QAE |
What metrics were used to measure the HCVAE model in the Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs paper on the Natural Questions dataset? | QAE, R-QAE |
What metrics were used to measure the Info-HCVAE model in the Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs paper on the SQuAD dataset? | QAE, R-QAE |
What metrics were used to measure the HCVAE model in the Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs paper on the SQuAD dataset? | QAE, R-QAE |
What metrics were used to measure the ERNIE-GENLARGE (beam size=5) model in the ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the BART (TextBox 2.0) model in the TextBox 2.0: A Text Generation Library with Pre-trained Language Models paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the ProphetNet + ASGen model in the Learning to Generate Questions by Recovering Answer-containing Sentences paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the UniLMv2 model in the UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the ProphetNet + syn. mask + localness model in the Enhancing Pre-trained Models with Text Structure Knowledge for Question Generation paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the ProphetNet model in the ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the UniLM + ASGen model in the Learning to Generate Questions by Recovering Answer-containing Sentences paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the UniLM model in the Unified Language Model Pre-training for Natural Language Understanding and Generation paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the BERTSQG model in the A Recurrent BERT-based Model for Question Generation paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the Selector & NQG++ model in the Mixture Content Selection for Diverse Sequence Generation paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the MPQG model in the Leveraging Context Information for Natural Question Generation paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the RNN +attn +copy model in the Evaluating Rewards for Question Generation Models paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the NQG++ model in the Neural Question Generation from Text: A Preliminary Study paper on the SQuAD1.1 dataset? | BLEU-4, METEOR, ROUGE-L |
What metrics were used to measure the DP model in the Multi-tasking Dialogue Comprehension with Discourse Parsing paper on the Molweni dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Structured model in the Structured Dialogue Discourse Parsing paper on the Molweni dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the SSP-BERT + SCIJE model in the Speaker-Aware Discourse Parsing on Multi-Party Dialogues paper on the Molweni dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the HG-MDP model in the paper on the Molweni dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Struct-Aware model in the A Structure Self-Aware Model for Discourse Parsing on Multi-Party Dialogues paper on the Molweni dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Hierarchical model in the Improving Multi-Party Dialogue Discourse Parsing via Domain Integration paper on the Molweni dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Deep Sequential model in the Molweni: A Challenge Multiparty Dialogues-based Machine Reading Comprehension Dataset with Discourse Structure paper on the Molweni dataset? | Link & Rel F1, Link F1 |
What metrics were used to measure the Bottom-up (DeBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (DeBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (XLNet) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (RoBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (RoBERTa) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (XLNet) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Top-down (SpanBERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (SpanBERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Guz et al. (2020) (pretrained) model in the Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
What metrics were used to measure the Bottom-up (BERT) model in the A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing paper on the Instructional-DT (Instr-DT) dataset? | Standard Parseval (Nuclearity), Standard Parseval (Span), Standard Parseval (Full), Standard Parseval (Relation) |
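
Every row above shares the same two-column structure: a natural-language prompt naming a model, paper, and benchmark dataset, and a comma-separated list of metric names as the response. As a minimal sketch of how one might work with this table programmatically, the Python below loads a local CSV export, filters prompts by benchmark, and splits a response into individual metrics. The column names `prompts` and `metrics_response` come from the header above; the filename `metrics_prompts.csv` is an illustrative assumption, not part of the dataset card.

```python
import csv

# Minimal sketch: parse the two-column table (prompts, metrics_response).
# Assumes the table was exported to "metrics_prompts.csv"; only the column
# names come from the dataset header, the filename is hypothetical.
with open("metrics_prompts.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Filter prompts by the benchmark dataset they mention, e.g. DiDeMo.
didemo_rows = [r for r in rows if "on the DiDeMo dataset" in r["prompts"]]
print(f"{len(didemo_rows)} prompts target DiDeMo")

# Split one metrics_response string into individual metric names.
if didemo_rows:
    metrics = [m.strip() for m in didemo_rows[0]["metrics_response"].split(",")]
    print(metrics)  # e.g. ['text-to-video R@1', 'text-to-video R@5', ...]
```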