Columns:
prompts: string, lengths 81–413
metrics_response: string, lengths 0–371
What metrics were used to measure the JointGT (BART) model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebNLG 2.0 (Constrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the BART model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebNLG 2.0 (Constrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the SOTA-NPT model in the Handling Rare Items in Data-to-Text Generation paper on the WebNLG 2.0 (Constrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the JointGT (BART) model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebQuestions dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the BART model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebQuestions dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the SOTA-NPT model in the Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks paper on the WebQuestions dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the JointGT (T5) model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebQuestions dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the T5 model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebQuestions dataset?
BLEU, METEOR, ROUGE
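The BLEU / METEOR / ROUGE triple above is the standard surface-overlap suite for KG-to-text generation. As an illustration of the ROUGE part, here is a minimal pure-Python sketch of ROUGE-L F1 (longest-common-subsequence precision/recall), assuming simple whitespace tokenization; published numbers come from standard ROUGE toolkits, not from a sketch like this.

```python
def rouge_l_f(reference, hypothesis):
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    ref, hyp = reference.split(), hypothesis.split()
    # longest common subsequence via dynamic programming
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref, 1):
        for j, h in enumerate(hyp, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if r == h else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hyp), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

An identical reference/hypothesis pair scores 1.0; disjoint token sets score 0.0.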
What metrics were used to measure the BART-large+ STA model in the Investigating Pretrained Language Models for Graph-to-Text Generation paper on the AGENDA dataset?
BLEU
What metrics were used to measure the BART-large model in the Investigating Pretrained Language Models for Graph-to-Text Generation paper on the AGENDA dataset?
BLEU
What metrics were used to measure the Writer-Reviewer model in the How to Train Your Agent to Read and Write paper on the AGENDA dataset?
BLEU
What metrics were used to measure the CGE-LW model in the Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs paper on the AGENDA dataset?
BLEU
What metrics were used to measure the Graformer model in the Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs paper on the AGENDA dataset?
BLEU
What metrics were used to measure the GraphWriter model in the Text Generation from Knowledge Graphs with Graph Transformers paper on the AGENDA dataset?
BLEU
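The AGENDA rows above report BLEU only. A minimal sketch of sentence-level BLEU-4 (modified n-gram precision with a brevity penalty), assuming whitespace tokenization and a crude smoothing constant for zero counts; real evaluations use standardized implementations such as sacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    ref, hyp = reference.split(), hypothesis.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        # clipped (modified) n-gram precision
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        # smooth zero counts to avoid log(0) on short or disjoint sentences
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    brevity = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity * math.exp(sum(log_precisions) / max_n)
```

An exact match scores 1.0; scores always fall in [0, 1].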
What metrics were used to measure the T5_large model in the Investigating Pretrained Language Models for Graph-to-Text Generation paper on the WebNLG (All) dataset?
BLEU, METEOR, chrF++
What metrics were used to measure the BART_large model in the Investigating Pretrained Language Models for Graph-to-Text Generation paper on the WebNLG (All) dataset?
BLEU, METEOR, chrF++
What metrics were used to measure the MGCN+sum model in the ENT-DESC: Entity Description Generation by Exploring Knowledge Graph paper on the ENT-DESC dataset?
BLEU
What metrics were used to measure the JointGT (BART) model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the PathQuestion dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the BART model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the PathQuestion dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the SOTA-NPT model in the Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks paper on the PathQuestion dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the JointGT (T5) model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the PathQuestion dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the T5 model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the PathQuestion dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the GAP - Me,r+γ model in the GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation paper on the EventNarrative dataset?
BLEU, METEOR, ROUGE, BERTScore, CIDEr, chrF++
What metrics were used to measure the GAP - Me,re model in the GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation paper on the EventNarrative dataset?
BLEU, METEOR, ROUGE, BERTScore, CIDEr, chrF++
What metrics were used to measure the BART model in the EventNarrative: A large-scale Event-centric Dataset for Knowledge Graph-to-Text Generation paper on the EventNarrative dataset?
BLEU, METEOR, ROUGE, BERTScore, CIDEr, chrF++
What metrics were used to measure the JointGT model in the GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation paper on the EventNarrative dataset?
BLEU, METEOR, ROUGE, BERTScore, CIDEr, chrF++
What metrics were used to measure the GraphWriter model in the EventNarrative: A large-scale Event-centric Dataset for Knowledge Graph-to-Text Generation paper on the EventNarrative dataset?
BLEU, METEOR, ROUGE, BERTScore, CIDEr, chrF++
What metrics were used to measure the T5 model in the EventNarrative: A large-scale Event-centric Dataset for Knowledge Graph-to-Text Generation paper on the EventNarrative dataset?
BLEU, METEOR, ROUGE, BERTScore, CIDEr, chrF++
What metrics were used to measure the GAP - Me,r+γ model in the GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the JointGT (T5) model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the JointGT (BART) model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the JointGT (BART) - w/ JointGTPretrain model in the GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the GAP - Me,re model in the GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the JointGT (BART) - w/ BARTPretrain model in the GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the BART model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the T5 model in the JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the KGPT model in the KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the KGPT w/o pretrain model in the KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the Handling Rare Items in Data-to-Text Generation model in the Handling Rare Items in Data-to-Text Generation paper on the WebNLG 2.0 (Unconstrained) dataset?
BLEU, METEOR, ROUGE
What metrics were used to measure the Unconditional model in the WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset paper on the WikiGraphs dataset?
Test perplexity, rBLEU (Test), rBLEU (Valid), rBLEU (w/ title) (Test), rBLEU (w/ title) (Valid)
What metrics were used to measure the BoW model in the WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset paper on the WikiGraphs dataset?
Test perplexity, rBLEU (Test), rBLEU (Valid), rBLEU (w/ title) (Test), rBLEU (w/ title) (Valid)
What metrics were used to measure the GNN model in the WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset paper on the WikiGraphs dataset?
Test perplexity, rBLEU (Test), rBLEU (Valid), rBLEU (w/ title) (Test), rBLEU (w/ title) (Valid)
What metrics were used to measure the Nodes model in the WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset paper on the WikiGraphs dataset?
Test perplexity, rBLEU (Test), rBLEU (Valid), rBLEU (w/ title) (Test), rBLEU (w/ title) (Valid)
What metrics were used to measure the 9-gram LM with back-off model in the MultiSubs: A Large-scale Multimodal and Multilingual Dataset paper on the MultiSubs dataset?
Accuracy, Word similarity
What metrics were used to measure the Temporal Label Smoothing model in the Temporal Label Smoothing for Early Event Prediction paper on the HiRID dataset?
AUPRC, Recall@50
What metrics were used to measure the LGBM model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
AUPRC, Recall@50
What metrics were used to measure the LGBM (+ hand-crafted features) model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
AUPRC, Recall@50
What metrics were used to measure the GRU model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
AUPRC, Recall@50
What metrics were used to measure the Transformer model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
AUPRC, Recall@50
What metrics were used to measure the LR model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
AUPRC, Recall@50
What metrics were used to measure the LSTM model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
AUPRC, Recall@50
What metrics were used to measure the TCN model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
AUPRC, Recall@50
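AUPRC in the HiRID rows above is commonly estimated as average precision: the mean of precision@k taken at the ranks where positives occur. A minimal sketch with illustrative labels and scores, not data from the benchmark:

```python
def average_precision(labels, scores):
    """Average precision: mean of precision@k over the ranks of the positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])  # rank by score, descending
    hits, ap = 0, 0.0
    positives = sum(labels)
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            ap += hits / rank
    return ap / positives
```

Ranking the single positive first yields 1.0; ranking it second out of two yields 0.5.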
What metrics were used to measure the LGBM (+ hand-crafted features) model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
MAE
What metrics were used to measure the LGBM model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
MAE
What metrics were used to measure the Transformer model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
MAE
What metrics were used to measure the GRU model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
MAE
What metrics were used to measure the LSTM model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
MAE
What metrics were used to measure the TCN model in the HiRID-ICU-Benchmark -- A Comprehensive Machine Learning Benchmark on High-resolution ICU Data paper on the HiRID dataset?
MAE
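The MAE rows above use the standard mean absolute error for regression-style HiRID tasks. A minimal sketch with illustrative values, not numbers from the benchmark:

```python
def mean_absolute_error(y_true, y_pred):
    """MAE: mean of |y_true - y_pred| over all samples."""
    if len(y_true) != len(y_pred):
        raise ValueError("length mismatch")
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For targets [1, 2, 3] and predictions [2, 2, 5], the absolute errors are 1, 0, 2, so the MAE is 1.0.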
What metrics were used to measure the Stacked DCNN model in the A stacked deep convolutional neural network to predict the remaining useful life of a turbofan engine paper on the NASA C-MAPSS-2 dataset?
Score
What metrics were used to measure the RVE model in the Variational encoding approach for interpretable assessment of remaining useful life estimation paper on the NASA C-MAPSS dataset?
RMSE
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the RefCOCO+ val dataset?
Accuracy (%)
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the RefCOCO+ val dataset?
Accuracy (%)
What metrics were used to measure the XFM (base) model in the Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks paper on the RefCOCO+ val dataset?
Accuracy (%)
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the RefCOCO+ val dataset?
Accuracy (%)
What metrics were used to measure the X-VLM (base) model in the Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts paper on the RefCOCO+ val dataset?
Accuracy (%)
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the RefCOCO+ testA dataset?
Accuracy (%)
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the RefCOCO+ testA dataset?
Accuracy (%)
What metrics were used to measure the XFM (base) model in the Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks paper on the RefCOCO+ testA dataset?
Accuracy (%)
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the RefCOCO+ testA dataset?
Accuracy (%)
What metrics were used to measure the X-VLM (base) model in the Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts paper on the RefCOCO+ testA dataset?
Accuracy (%)
What metrics were used to measure the mPLUG-2 model in the mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video paper on the RefCOCO+ testB dataset?
Accuracy (%)
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the RefCOCO+ testB dataset?
Accuracy (%)
What metrics were used to measure the XFM (base) model in the Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks paper on the RefCOCO+ testB dataset?
Accuracy (%)
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the RefCOCO+ testB dataset?
Accuracy (%)
What metrics were used to measure the X-VLM (base) model in the Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts paper on the RefCOCO+ testB dataset?
Accuracy (%)
What metrics were used to measure the GDANet model in the Learning Geometry-Disentangled Representation for Complementary Understanding of 3D Object Point Cloud paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PointMAE model in the Masked Autoencoders for Point Cloud Self-supervised Learning paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PAConv model in the PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PointMLP model in the Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the OcCo-DGCNN model in the Unsupervised Point Cloud Pre-Training via Occlusion Completion paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the DGCNN model in the Dynamic Graph CNN for Learning on Point Clouds paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PointTransformers model in the Point Transformer paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PointNet++ model in the PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the OcCo-PointNet model in the Unsupervised Point Cloud Pre-Training via Occlusion Completion paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the OcCo-PCN model in the Unsupervised Point Cloud Pre-Training via Occlusion Completion paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
What metrics were used to measure the PointNet model in the PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation paper on the PointCloud-C dataset?
mean Corruption Error (mCE)
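Mean Corruption Error on PointCloud-C follows the ImageNet-C convention: each model's per-corruption error rate is normalized by a reference model's error rate on the same corruption, then the ratios are averaged (lower is better). A sketch with illustrative error rates, not numbers from any of the papers above:

```python
def mean_corruption_error(model_errors, baseline_errors):
    """mCE: average of per-corruption error rates normalized by a baseline model."""
    if len(model_errors) != len(baseline_errors):
        raise ValueError("length mismatch")
    ratios = [m / b for m, b in zip(model_errors, baseline_errors)]
    return sum(ratios) / len(ratios)
```

Halving the baseline's error on one corruption and matching it on another gives (0.5 + 1.0) / 2 = 0.75.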
What metrics were used to measure the SetFit + OCD model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the Amazon Counterfeit dataset?
Accuracy
What metrics were used to measure the SetFit + OCD(5) model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the Average on NLP datasets dataset?
Accuracy
What metrics were used to measure the SetFit + OCD model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the Average on NLP datasets dataset?
Accuracy
What metrics were used to measure the T-few 3B model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the Average on NLP datasets dataset?
Accuracy
What metrics were used to measure the SetFit model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the Average on NLP datasets dataset?
Accuracy
What metrics were used to measure the Induction Networks model in the Induction Networks for Few-Shot Text Classification paper on the ODIC 5-way (10-shot) dataset?
Accuracy
What metrics were used to measure the SetFit + OCD model in the OCD: Learning to Overfit with Conditional Diffusion Models paper on the SST-5 dataset?
Accuracy
What metrics were used to measure the Induction Networks model in the Induction Networks for Few-Shot Text Classification paper on the ODIC 10-way (5-shot) dataset?
Accuracy
What metrics were used to measure the T-Few model in the Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning paper on the RAFT dataset?
Avg, ADE, B77, NIS, OSE, Over, SOT, SRI, TAI, ToS, TEH, TC
What metrics were used to measure the Human (crowdsourced) model in the RAFT: A Real-World Few-Shot Text Classification Benchmark paper on the RAFT dataset?
Avg, ADE, B77, NIS, OSE, Over, SOT, SRI, TAI, ToS, TEH, TC
What metrics were used to measure the GPT-3 model in the RAFT: A Real-World Few-Shot Text Classification Benchmark paper on the RAFT dataset?
Avg, ADE, B77, NIS, OSE, Over, SOT, SRI, TAI, ToS, TEH, TC
What metrics were used to measure the AdaBoost model in the RAFT: A Real-World Few-Shot Text Classification Benchmark paper on the RAFT dataset?
Avg, ADE, B77, NIS, OSE, Over, SOT, SRI, TAI, ToS, TEH, TC
What metrics were used to measure the GPT-Neo model in the RAFT: A Real-World Few-Shot Text Classification Benchmark paper on the RAFT dataset?
Avg, ADE, B77, NIS, OSE, Over, SOT, SRI, TAI, ToS, TEH, TC
What metrics were used to measure the GPT-2 model in the RAFT: A Real-World Few-Shot Text Classification Benchmark paper on the RAFT dataset?
Avg, ADE, B77, NIS, OSE, Over, SOT, SRI, TAI, ToS, TEH, TC