prompts (string, lengths 81–413)
metrics_response (string, lengths 0–371)
What metrics were used to measure the Control Prefixes (A1, T5-large) model in the Control Prefixes for Parameter-Efficient Text Generation paper on the WebNLG Full dataset?
BLEU
What metrics were used to measure the T5-large + Wiki + Position model in the Stage-wise Fine-tuning for Graph-to-Text Generation paper on the WebNLG Full dataset?
BLEU
What metrics were used to measure the T5-large model in the Investigating Pretrained Language Models for Graph-to-Text Generation paper on the WebNLG Full dataset?
BLEU
What metrics were used to measure the T5-Large model in the Text-to-Text Pre-Training for Data-to-Text Tasks paper on the WebNLG Full dataset?
BLEU
What metrics were used to measure the HTLM (prefix 0.1%) model in the HTLM: Hyper-Text Pre-Training and Prompting of Language Models paper on the WebNLG Full dataset?
BLEU
What metrics were used to measure the DATATUNER_NO_FC model in the Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity paper on the WebNLG Full dataset?
BLEU
What metrics were used to measure the Transformer (Pipeline) model in the Neural data-to-text generation: A comparison between pipeline and end-to-end architectures paper on the WebNLG Full dataset?
BLEU
What metrics were used to measure the Control Prefixes (A1, T5-large) model in the Control Prefixes for Parameter-Efficient Text Generation paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the Control Prefixes (A1, A2, T5-large) model in the Control Prefixes for Parameter-Efficient Text Generation paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the T5-large + Wiki + Position model in the Stage-wise Fine-tuning for Graph-to-Text Generation paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the HTLM (fine-tuning) model in the HTLM: Hyper-Text Pre-Training and Prompting of Language Models paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the T5-small model in the Investigating Pretrained Language Models for Graph-to-Text Generation paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the T5-Base model in the Text-to-Text Pre-Training for Data-to-Text Tasks paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the CGE-LW (Levi Graph) model in the Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the Multiview-G2S model in the Structural Information Preserving for Graph-to-Text Generation paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the Graformer model in the Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the GTR-LSTM (entity masking) model in the GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the E2E GRU model in the Neural data-to-text generation: A comparison between pipeline and end-to-end architectures paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the GCN EC model in the Deep Graph Convolutional Encoders for Structured Data to Text Generation paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the BestPlan model in the Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the BART (TextBox 2.0) model in the TextBox 2.0: A Text Generation Library with Pre-trained Language Models paper on the WebNLG dataset?
BLEU, BLEU-4, METEOR, ROUGE-L
What metrics were used to measure the T5-3B model in the Text-to-Text Pre-Training for Data-to-Text Tasks paper on the ToTTo dataset?
BLEU, PARENT, METEOR
What metrics were used to measure the LATTICE (T5-base) model in the Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning paper on the ToTTo dataset?
BLEU, PARENT, METEOR
What metrics were used to measure the BERT-to-BERT model in the ToTTo: A Controlled Table-To-Text Generation Dataset paper on the ToTTo dataset?
BLEU, PARENT, METEOR
What metrics were used to measure the Pointer Generator model in the ToTTo: A Controlled Table-To-Text Generation Dataset paper on the ToTTo dataset?
BLEU, PARENT, METEOR
What metrics were used to measure the NCP+CC (Puduppully et al., 2019) model in the ToTTo: A Controlled Table-To-Text Generation Dataset paper on the ToTTo dataset?
BLEU, PARENT, METEOR
What metrics were used to measure the T5 model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the ToTTo dataset?
BLEU, PARENT, METEOR
What metrics were used to measure the T5-Base model in the Text-to-Text Pre-Training for Data-to-Text Tasks paper on the MULTIWOZ 2.1 dataset?
BLEU
What metrics were used to measure the T5-small model in the Template Guided Text Generation for Task-Oriented Dialogue paper on the MULTIWOZ 2.1 dataset?
BLEU
What metrics were used to measure the T2G2 model in the Template Guided Text Generation for Task-Oriented Dialogue paper on the MULTIWOZ 2.1 dataset?
BLEU
What metrics were used to measure the SC-GPT2 model in the Few-shot Natural Language Generation for Task-Oriented Dialog paper on the MULTIWOZ 2.1 dataset?
BLEU
What metrics were used to measure the HDSA model in the Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention paper on the MULTIWOZ 2.1 dataset?
BLEU
What metrics were used to measure the SeqPlan model in the Data-to-text Generation with Variational Sequential Planning paper on the MLB Dataset (Content Ordering) dataset?
DLD
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the MLB Dataset (Content Ordering) dataset?
DLD
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the MLB Dataset (Content Ordering) dataset?
DLD
What metrics were used to measure the ENT model in the Data-to-text Generation with Macro Planning paper on the MLB Dataset (Content Ordering) dataset?
DLD
What metrics were used to measure the StructAdapt model in the Structural Adapters in Pretrained Language Models for AMR-to-text Generation paper on the AMR3.0 dataset?
BLEU
What metrics were used to measure the HierarchicalEncoder + NR + IR model in the Improving Encoder by Auxiliary Supervision Tasks for Table-to-Text Generation paper on the RotoWire dataset?
BLEU
What metrics were used to measure the Hierarchical transformer encoder + conditional copy model in the A Hierarchical Model for Data-to-Text Generation paper on the RotoWire dataset?
BLEU
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the RotoWire dataset?
BLEU
What metrics were used to measure the Neural Content Planning + conditional copy model in the Data-to-Text Generation with Content Selection and Planning paper on the RotoWire dataset?
BLEU
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the RotoWire dataset?
BLEU
What metrics were used to measure the Encoder-decoder + conditional copy model in the Challenges in Data-to-Document Generation paper on the RotoWire dataset?
BLEU
What metrics were used to measure the mBART model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the WebNLG ru dataset?
METEOR
What metrics were used to measure the mT5 model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the WebNLG ru dataset?
METEOR
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the MLB Dataset (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the SeqPlan model in the Data-to-text Generation with Variational Sequential Planning paper on the MLB Dataset (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the MLB Dataset (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the DataTuner_FC model in the Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity paper on the ViGGO dataset?
BLEU
What metrics were used to measure the Bo3 model in the ViGGO: A Video Game Corpus for Data-To-Text Generation in Open-Domain Conversation paper on the ViGGO dataset?
BLEU
What metrics were used to measure the SeqPlan model in the Data-to-text Generation with Variational Sequential Planning paper on the RotoWire (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the RotoWire (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the RotoWire (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Hierarchical Transformer Encoder + conditional copy model in the A Hierarchical Model for Data-to-Text Generation paper on the RotoWire (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Neural Content Planning + conditional copy model in the Data-to-Text Generation with Content Selection and Planning paper on the RotoWire (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Encoder-decoder + conditional copy model in the Challenges in Data-to-Document Generation paper on the RotoWire (Relation Generation) dataset?
Precision, count
What metrics were used to measure the SeqPlan model in the Data-to-text Generation with Variational Sequential Planning paper on the MLB dataset?
BLEU
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the MLB dataset?
BLEU
What metrics were used to measure the ENT model in the Data-to-text Generation with Entity Modeling paper on the MLB dataset?
BLEU
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the MLB dataset?
BLEU
What metrics were used to measure the Fact-aware embedding with mT5 model in the XF2T: Cross-lingual Fact-to-Text Generation for Low-Resource Languages paper on the XAlign dataset?
BLEU-4, METEOR
What metrics were used to measure the Bi-lingual mT5 model in the XF2T: Cross-lingual Fact-to-Text Generation for Low-Resource Languages paper on the XAlign dataset?
BLEU-4, METEOR
What metrics were used to measure the mT5 model in the XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages paper on the XAlign dataset?
BLEU-4, METEOR
What metrics were used to measure the Vanilla Transformer model in the XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages paper on the XAlign dataset?
BLEU-4, METEOR
What metrics were used to measure the Translate-Output mT5 model in the XF2T: Cross-lingual Fact-to-Text Generation for Low-Resource Languages paper on the XAlign dataset?
BLEU-4, METEOR
What metrics were used to measure the Graph Attention Network Encoder + Transformer Decoder model in the XAlign: Cross-lingual Fact-to-Text Alignment and Generation for Low-Resource Languages paper on the XAlign dataset?
BLEU-4, METEOR
What metrics were used to measure the SeqPlan model in the Data-to-text Generation with Variational Sequential Planning paper on the MLB Dataset (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the MLB Dataset (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the MLB Dataset (Relation Generation) dataset?
Precision, count
What metrics were used to measure the ENT model in the Data-to-text Generation with Macro Planning paper on the MLB Dataset (Relation Generation) dataset?
Precision, count
What metrics were used to measure the Transition based Deep Input Linearization model in the Transition-Based Deep Input Linearization paper on the SR11Deep dataset?
BLEU
What metrics were used to measure the GCN + feat model in the Deep Graph Convolutional Encoders for Structured Data to Text Generation paper on the SR11Deep dataset?
BLEU
What metrics were used to measure the mBART model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the WebNLG en dataset?
METEOR
What metrics were used to measure the mT5 model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the WebNLG en dataset?
METEOR
What metrics were used to measure the Ours model in the Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints paper on the Wikipedia Person and Animal dataset?
BLEU
What metrics were used to measure the Hierarchical Transformer Encoder + conditional copy model in the A Hierarchical Model for Data-to-Text Generation paper on the RotoWire (Content Ordering) dataset?
DLD, BLEU
What metrics were used to measure the Neural Content Planning + conditional copy model in the Data-to-Text Generation with Content Selection and Planning paper on the RotoWire (Content Ordering) dataset?
DLD, BLEU
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the RotoWire (Content Ordering) dataset?
DLD, BLEU
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the RotoWire (Content Ordering) dataset?
DLD, BLEU
What metrics were used to measure the Encoder-decoder + conditional copy model in the Challenges in Data-to-Document Generation paper on the RotoWire (Content Ordering) dataset?
DLD, BLEU
What metrics were used to measure the Hierarchical Transformer Encoder + conditional copy model in the A Hierarchical Model for Data-to-Text Generation paper on the RotoWire (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the Force-Copy model in the May the Force Be with Your Copy Mechanism: Enhanced Supervised-Copy Method for Natural Language Generation paper on the RotoWire (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the Neural Content Planning + conditional copy model in the Data-to-Text Generation with Content Selection and Planning paper on the RotoWire (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the Macro model in the Data-to-text Generation with Macro Planning paper on the RotoWire (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the Encoder-decoder + conditional copy model in the Challenges in Data-to-Document Generation paper on the RotoWire (Content Selection) dataset?
Precision, Recall
What metrics were used to measure the binmt model in the Machine Translation Pre-training for Data-to-Text Generation -- A Case Study in Czech paper on the Czech Restaurant NLG dataset?
BLEU, METEOR, CIDEr, NIST
What metrics were used to measure the tgen model in the Neural Generation for Czech: Data and Baselines paper on the Czech Restaurant NLG dataset?
BLEU, METEOR, CIDEr, NIST
What metrics were used to measure the mass model in the Machine Translation Pre-training for Data-to-Text Generation -- A Case Study in Czech paper on the Czech Restaurant NLG dataset?
BLEU, METEOR, CIDEr, NIST
What metrics were used to measure the TypiClust model in the Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets paper on the CIFAR10 (10,000) dataset?
Accuracy
What metrics were used to measure the PT4AL model in the PT4AL: Using Self-Supervised Pretext Tasks for Active Learning paper on the CIFAR10 (10,000) dataset?
Accuracy
What metrics were used to measure the Learning loss model in the Learning Loss for Active Learning paper on the CIFAR10 (10,000) dataset?
Accuracy
What metrics were used to measure the CoreGCN model in the Sequential Graph Convolutional Network for Active Learning paper on the CIFAR10 (10,000) dataset?
Accuracy
What metrics were used to measure the Core-set model in the Active Learning for Convolutional Neural Networks: A Core-Set Approach paper on the CIFAR10 (10,000) dataset?
Accuracy
What metrics were used to measure the Random Baseline (Resnet18) model in the Towards Robust and Reproducible Active Learning Using Neural Networks paper on the CIFAR10 (10,000) dataset?
Accuracy
What metrics were used to measure the Random Baseline (VGG16) model in the Towards Robust and Reproducible Active Learning Using Neural Networks paper on the CIFAR10 (10,000) dataset?
Accuracy
What metrics were used to measure the SLDD-Model model in the Take 5: Interpretable Image Classification with a Handful of Features paper on the CUB-200-2011 dataset?
Top 1 Accuracy
What metrics were used to measure the Harris Corner model in the HarrisZ$^+$: Harris Corner Selection for Next-Gen Image Matching Pipelines paper on the IMC PhotoTourism dataset?
mean average accuracy @ 10
What metrics were used to measure the DISK model in the DISK: Learning local features with policy gradient paper on the IMC PhotoTourism dataset?
mean average accuracy @ 10
What metrics were used to measure the SuperGlue model in the SuperGlue: Learning Feature Matching with Graph Neural Networks paper on the IMC PhotoTourism dataset?
mean average accuracy @ 10
What metrics were used to measure the DoG-AffNet-HardNet8 model in the Repeatability Is Not Enough: Learning Affine Regions via Discriminability paper on the IMC PhotoTourism dataset?
mean average accuracy @ 10