prompts (string, lengths 81 to 413)
metrics_response (string, lengths 0 to 371)
What metrics were used to measure the Polaratio Consensus Clustering model in the Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering paper on the Goolam et al dataset?
Adjusted Rand Index
What metrics were used to measure the Polaratio Consensus Clustering model in the Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering paper on the Treutlein et al dataset?
Adjusted Rand Index
What metrics were used to measure the Polaratio Consensus Clustering model in the Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering paper on the Biase et al dataset?
Adjusted Rand Index
What metrics were used to measure the R-GMM-VGAE model in the Rethinking Graph Auto-Encoder Models for Attributed Graph Clustering paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the R-DGAE model in the Rethinking Graph Auto-Encoder Models for Attributed Graph Clustering paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the AGC model in the Attributed Graph Clustering via Adaptive Graph Convolution paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the RWR-VGAE model in the RWR-GAE: Random Walk Regularization for Graph Auto Encoders paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the RWR-GAE model in the RWR-GAE: Random Walk Regularization for Graph Auto Encoders paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the ARGE model in the Adversarially Regularized Graph Autoencoder for Graph Embedding paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the ARVGE model in the Adversarially Regularized Graph Autoencoder for Graph Embedding paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the GAE model in the Variational Graph Auto-Encoders paper on the Cora dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the Polaratio Consensus Clustering model in the Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering paper on the Yan et al dataset?
Adjusted Rand Index
What metrics were used to measure the R-GMM-VGAE model in the Rethinking Graph Auto-Encoder Models for Attributed Graph Clustering paper on the Pubmed dataset?
ACC, NMI, ARI
What metrics were used to measure the RWR-VGAE model in the RWR-GAE: Random Walk Regularization for Graph Auto Encoders paper on the Pubmed dataset?
ACC, NMI, ARI
What metrics were used to measure the RWR-GAE model in the RWR-GAE: Random Walk Regularization for Graph Auto Encoders paper on the Pubmed dataset?
ACC, NMI, ARI
What metrics were used to measure the R-DGAE model in the Rethinking Graph Auto-Encoder Models for Attributed Graph Clustering paper on the Pubmed dataset?
ACC, NMI, ARI
What metrics were used to measure the AGC model in the Attributed Graph Clustering via Adaptive Graph Convolution paper on the Pubmed dataset?
ACC, NMI, ARI
What metrics were used to measure the VGAE model in the Variational Graph Auto-Encoders paper on the Pubmed dataset?
ACC, NMI, ARI
What metrics were used to measure the R-DGAE model in the Rethinking Graph Auto-Encoder Models for Attributed Graph Clustering paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the R-GMM-VGAE model in the Rethinking Graph Auto-Encoder Models for Attributed Graph Clustering paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the AGC model in the Attributed Graph Clustering via Adaptive Graph Convolution paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the RWR-GAE model in the RWR-GAE: Random Walk Regularization for Graph Auto Encoders paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the RWR-VGAE model in the RWR-GAE: Random Walk Regularization for Graph Auto Encoders paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the ARGE model in the Adversarially Regularized Graph Autoencoder for Graph Embedding paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the ARVGE model in the Adversarially Regularized Graph Autoencoder for Graph Embedding paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the GAE model in the Variational Graph Auto-Encoders paper on the Citeseer dataset?
ACC, NMI, ARI, F1, Precision
What metrics were used to measure the Polaratio Consensus Clustering model in the Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering paper on the Deng et al dataset?
Adjusted Rand Index
What metrics were used to measure the Polaratio Consensus Clustering model in the Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering paper on the Bozec et al dataset?
Adjusted Rand Index
What metrics were used to measure the Polaratio Consensus Clustering model in the Polaratio: A magnitude-contingent monotonic correlation metric and its improvements to scRNA-seq clustering paper on the Pollen et al dataset?
Adjusted Rand Index
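The Adjusted Rand Index reported in the entries above compares a predicted clustering against ground-truth labels, correcting the raw Rand Index for chance agreement. A minimal pure-Python sketch of the standard formula (function name and inputs are illustrative, not taken from the cited paper):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between two flat cluster assignments."""
    n = len(labels_true)
    pairs = Counter(zip(labels_true, labels_pred))  # contingency table cells
    a = Counter(labels_true)                        # row marginals
    b = Counter(labels_pred)                        # column marginals
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)           # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

An ARI of 1 means the two partitions are identical up to label permutation; values near 0 indicate chance-level agreement.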
What metrics were used to measure the TranSalNet model in the TranSalNet: Towards perceptually relevant visual saliency prediction paper on the MIT300 dataset?
AUC-Judd, CC, KLD, NSS, SIM, sAUC
What metrics were used to measure the TranSalNet model in the TranSalNet: Towards perceptually relevant visual saliency prediction paper on the SALICON dataset?
AUC, CC, KLD, NSS, SIM, sAUC
What metrics were used to measure the Ensemble Calibration model in the One Eye is All You Need: Lightweight Ensembles for Gaze Estimation with Single Encoders paper on the GazeCapture dataset?
Euclidean Mean Error (EME), FPS
What metrics were used to measure the TinyTracker model in the TinyTracker: Ultra-Fast and Ultra-Low-Power Edge Vision In-Sensor for Gaze Estimation paper on the GazeCapture dataset?
Euclidean Mean Error (EME), FPS
What metrics were used to measure the FAZE model in the Few-Shot Adaptive Gaze Estimation paper on the MPII Gaze dataset?
Angular Error
What metrics were used to measure the L2CS model in the L2CS-Net: Fine-Grained Gaze Estimation in Unconstrained Environments paper on the MPII Gaze dataset?
Angular Error
What metrics were used to measure the RT-GENE 4 model ensemble model in the RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments paper on the MPII Gaze dataset?
Angular Error
What metrics were used to measure the RT-GENE 2 model ensemble model in the RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments paper on the MPII Gaze dataset?
Angular Error
What metrics were used to measure the RT-GENE single model model in the RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments paper on the MPII Gaze dataset?
Angular Error
What metrics were used to measure the RT-GENE 4 model ensemble model in the RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments paper on the RT-GENE dataset?
Angular Error
What metrics were used to measure the RecurrentGaze (Temporal) model in the Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues paper on the EYEDIAP (floating target) dataset?
Angular Error
What metrics were used to measure the RecurrentGaze (Static) model in the Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues paper on the EYEDIAP (floating target) dataset?
Angular Error
What metrics were used to measure the RecurrentGaze (Static) model in the Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues paper on the EYEDIAP (screen target) dataset?
Angular Error
What metrics were used to measure the RecurrentGaze (Temporal) model in the Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues paper on the EYEDIAP (screen target) dataset?
Angular Error
What metrics were used to measure the MCGaze model in the End-to-end Video Gaze Estimation via Capturing Head-face-eye Spatial-temporal Interaction Context paper on the Gaze360 dataset?
Angular Error
What metrics were used to measure the L2CS model in the L2CS-Net: Fine-Grained Gaze Estimation in Unconstrained Environments paper on the Gaze360 dataset?
Angular Error
What metrics were used to measure the ResNet-18 model in the Weakly-Supervised Physically Unconstrained Gaze Estimation paper on the Gaze360 dataset?
Angular Error
What metrics were used to measure the ResNet-18 model in the Gaze360: Physically Unconstrained Gaze Estimation in the Wild paper on the Gaze360 dataset?
Angular Error
What metrics were used to measure the RT-GENE 4 model ensemble model in the RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments paper on the UT Multi-view dataset?
Angular Error
What metrics were used to measure the ETHXGaze model in the ETH-XGaze: A Large Scale Dataset for Gaze Estimation under Extreme Head Pose and Gaze Variation paper on the ETH-XGaze dataset?
Angular Error
What metrics were used to measure the ResNet18 model in the ETH-XGaze: A Large Scale Dataset for Gaze Estimation under Extreme Head Pose and Gaze Variation paper on the MPSGaze dataset?
Angular Error
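The Angular Error used throughout the gaze-estimation entries above is the angle, typically reported in degrees, between the predicted and ground-truth 3D gaze direction vectors. A minimal sketch (names are illustrative):

```python
import math

def angular_error_deg(pred, true):
    """Angle in degrees between two 3D gaze direction vectors."""
    dot = sum(p * t for p, t in zip(pred, true))
    norm_p = math.sqrt(sum(p * p for p in pred))
    norm_t = math.sqrt(sum(t * t for t in true))
    # clamp the cosine for numerical safety before acos
    cos = max(-1.0, min(1.0, dot / (norm_p * norm_t)))
    return math.degrees(math.acos(cos))
```

Papers usually report the mean of this error over all test samples.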
What metrics were used to measure the Two-stream CNN+CLSTM model in the Depth estimation from 4D light field videos paper on the Sintel 4D LFV - thebigfight2 dataset?
BadPix(0.01), BadPix(0.03), BadPix(0.05), MSE*100
What metrics were used to measure the Two-stream CNN+CLSTM model in the Depth estimation from 4D light field videos paper on the Sintel 4D LFV - shaman2 dataset?
BadPix(0.01), BadPix(0.03), BadPix(0.07), MSE*100
What metrics were used to measure the Two-stream CNN+CLSTM model in the Depth estimation from 4D light field videos paper on the Sintel 4D LFV - ambushfight5 dataset?
BadPix(0.01), BadPix(0.03), BadPix(0.07), MSE*100
What metrics were used to measure the Two-stream CNN+CLSTM model in the Depth estimation from 4D light field videos paper on the Sintel 4D LFV - bamboo3 dataset?
BadPix(0.01), BadPix(0.03), BadPix(0.07), MSE*100
What metrics were used to measure the Transformer model in the CoDesc: A Large Code-Description Parallel Dataset paper on the CoDesc dataset?
BLEU-4
What metrics were used to measure the AdaMo-noise model in the Assemble Foundation Models for Automatic Code Summarization paper on the DeepCom-Java dataset?
BLEU-4, METEOR
What metrics were used to measure the AdaMo-basic model in the Assemble Foundation Models for Automatic Code Summarization paper on the DeepCom-Java dataset?
BLEU-4, METEOR
What metrics were used to measure the CodeTrans-MT-TF-Large model in the CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing paper on the Summarizing Source Code using a Neural Attention Model - SQL dataset?
Smoothed BLEU-4
What metrics were used to measure the ContraCode model in the Contrastive Code Representation Learning paper on the CodeSearchNet dataset?
F1
What metrics were used to measure the CodeTrans-MT-Large model in the CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing paper on the Summarizing Source Code using a Neural Attention Model - C# dataset?
Smoothed BLEU-4
What metrics were used to measure the AdaMo-basic model in the Assemble Foundation Models for Automatic Code Summarization paper on the CodeSearchNet - Python dataset?
BLEU-4, METEOR
What metrics were used to measure the CodeTrans-MT-Base model in the CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing paper on the Summarizing Source Code using a Neural Attention Model - Python dataset?
Smoothed BLEU-4
What metrics were used to measure the AdaMo-basic model in the Assemble Foundation Models for Automatic Code Summarization paper on the Java scripts dataset?
BLEU-4, METEOR
What metrics were used to measure the AdaMo-noise model in the Assemble Foundation Models for Automatic Code Summarization paper on the ParallelCorpus-Python dataset?
BLEU-4, METEOR
What metrics were used to measure the AdaMo-basic model in the Assemble Foundation Models for Automatic Code Summarization paper on the ParallelCorpus-Python dataset?
BLEU-4, METEOR
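The BLEU-4 and Smoothed BLEU-4 scores in the code-summarization entries above combine modified n-gram precisions for n = 1..4 with a brevity penalty; "smoothed" variants keep the score defined when some n-gram order has no match. A sketch using simple add-one smoothing (one of several smoothing schemes; the cited papers may use a different one):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu4(candidate, reference):
    """Sentence-level BLEU-4 with add-one smoothing on n-gram counts."""
    cand, ref = candidate.split(), reference.split()
    log_p = 0.0
    for n in range(1, 5):
        c, r = ngrams(cand, n), ngrams(ref, n)
        match = sum((c & r).values())     # clipped n-gram matches
        total = sum(c.values())
        # add-one smoothing avoids log(0) when an order has no match
        log_p += math.log((match + 1) / (total + 1))
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_p / 4)
```

Scores lie in (0, 1] and are often reported scaled by 100.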
What metrics were used to measure the RR-STG model in the Relational Reasoning Over Spatial-Temporal Graphs for Video Summarization paper on the TvSum dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the DSNet model in the DSNet: A Flexible Detect-to-Summarize Network for Video Summarization paper on the TvSum dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the VASNet model in the Summarizing Videos with Attention paper on the TvSum dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the M-AVS model in the Video Summarization with Attention-Based Encoder-Decoder Networks paper on the TvSum dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the PGL-SUM model in the Combining Global and Local Attention with Positional Encoding for Video Summarization paper on the TvSum dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the PGL-SUM model in the Combining Global and Local Attention with Positional Encoding for Video Summarization paper on the SumMe dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the RR-STG model in the Relational Reasoning Over Spatial-Temporal Graphs for Video Summarization paper on the SumMe dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the DSNet model in the DSNet: A Flexible Detect-to-Summarize Network for Video Summarization paper on the SumMe dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the VASNet model in the Summarizing Videos with Attention paper on the SumMe dataset?
F1-score (Canonical), F1-score (Augmented)
What metrics were used to measure the M-AVS model in the Video Summarization with Attention-Based Encoder-Decoder Networks paper on the SumMe dataset?
F1-score (Canonical), F1-score (Augmented)
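The canonical and augmented F1 scores in the video-summarization entries above measure overlap between predicted and user-annotated keyshots; at frame level this reduces to precision and recall over binary selection masks. A simplified sketch (the actual evaluation protocols additionally average over multiple annotators and data splits):

```python
def keyshot_f1(pred_mask, gt_mask):
    """F1 between two binary per-frame keyshot selection masks."""
    overlap = sum(1 for p, g in zip(pred_mask, gt_mask) if p and g)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_mask)
    recall = overlap / sum(gt_mask)
    return 2 * precision * recall / (precision + recall)
```

"Canonical" and "augmented" refer to how the training set is assembled (a single dataset versus one augmented with other datasets), not to a different formula.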
What metrics were used to measure the PRIMER model in the PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization paper on the arXiv Summarization Dataset dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the DeepPyramidion model in the Sparsifying Transformer Models with Trainable Representation Pooling paper on the arXiv Summarization Dataset dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Blockwise (baseline) model in the Sparsifying Transformer Models with Trainable Representation Pooling paper on the arXiv Summarization Dataset dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the PEGASUS 2B + SLiC model in the Calibrating Sequence likelihood Improves Conditional Language Generation paper on the Reddit TIFU dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BART+R3F model in the Better Fine-Tuning by Reducing Representational Collapse paper on the Reddit TIFU dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the MUPPET BART Large model in the Muppet: Massive Multi-task Representations with Pre-Finetuning paper on the Reddit TIFU dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the PEGASUS + SummaReranker model in the SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization paper on the Reddit TIFU dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the MatchSum model in the Extractive Summarization as Text Matching paper on the Reddit TIFU dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the BiomedGPT model in the BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks paper on the MeQSum dataset?
ROUGE-L
What metrics were used to measure the GenCompareSum model in the GenCompareSum: a hybrid unsupervised summarization method using salience paper on the S2ORC dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the HAT-CNNDM model in the Hierarchical Learning for Generation with Long Source Sequences paper on the AMI dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Luhn's algorithm (25 sentences) model in the Klexikon: A German Dataset for Joint Summarization and Simplification paper on the Klexikon dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Lead-k model in the Klexikon: A German Dataset for Joint Summarization and Simplification paper on the Klexikon dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Lead-3 model in the Klexikon: A German Dataset for Joint Summarization and Simplification paper on the Klexikon dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Full article model in the Klexikon: A German Dataset for Joint Summarization and Simplification paper on the Klexikon dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the Finetuned mBART model in the Dataset for Automatic Summarization of Russian News paper on the Gazeta dataset?
ROUGE-1, ROUGE-2, ROUGE-L, BLEU, METEOR
What metrics were used to measure the GCN Hybrid model in the ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks paper on the CL-SciSumm dataset?
ROUGE-2
What metrics were used to measure the ERNIE-GENLARGE (large-scale text corpora) model in the ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation paper on the GigaWord-10k dataset?
ROUGE-L, ROUGE-1, ROUGE-2
What metrics were used to measure the ERNIE-GENLARGE model in the ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation paper on the GigaWord-10k dataset?
ROUGE-L, ROUGE-1, ROUGE-2
What metrics were used to measure the ERNIE-GENBASE model in the ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation paper on the GigaWord-10k dataset?
ROUGE-L, ROUGE-1, ROUGE-2
What metrics were used to measure the MatchSum model in the Extractive Summarization as Text Matching paper on the BBC XSum dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the SRformer-BART model in the Segmented Recurrent Transformer: An Efficient Sequence-to-Sequence Model paper on the MediaSum dataset?
ROUGE-1
What metrics were used to measure the HSSAS model in the A Hierarchical Structured Self-Attentive Model for Extractive Document Summarization (HSSAS) paper on the CNN / Daily Mail (Anonymized) dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the SWAP-NET model in the Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks paper on the CNN / Daily Mail (Anonymized) dataset?
ROUGE-1, ROUGE-2, ROUGE-L
What metrics were used to measure the RNES w/o coherence model in the Learning to Extract Coherent Summary via Deep Reinforcement Learning paper on the CNN / Daily Mail (Anonymized) dataset?
ROUGE-1, ROUGE-2, ROUGE-L
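The ROUGE-1, ROUGE-2, and ROUGE-L scores used throughout the text-summarization entries measure n-gram and longest-common-subsequence overlap between a generated summary and a reference. A minimal ROUGE-1 F1 sketch (published results use the official ROUGE toolkit, which adds stemming, stopword options, and bootstrap resampling):

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-2 replaces unigrams with bigrams, and ROUGE-L scores the longest common subsequence instead of fixed-length n-grams.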