| prompts | metrics_response |
|---|---|
What metrics were used to measure the BART (TextBox 2.0) model in the TextBox 2.0: A Text Generation Library with Pre-trained Language Models paper on the CNN/Daily Mail dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the SEGMENT model in the IMPROVING ABSTRACTIVE SUMMARIZATION WITH SEGMENT-AUGMENTED AND POSITION-AWARENESS (ACLing2021) paper on the CNN/Daily Mail dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the PEGASUS model in the PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization paper on the AESLC dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the Multi-Stage Extractor/Abstractor model in the This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation paper on the AESLC dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the mBART model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the MLSUM es dataset? | METEOR |
What metrics were used to measure the ViT5 large model in the ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation paper on the vietnews dataset? | Rouge-1, Rouge-2, Rouge-L |
What metrics were used to measure the ViT5 base model in the ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation paper on the vietnews dataset? | Rouge-1, Rouge-2, Rouge-L |
What metrics were used to measure the BARTpho model in the BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese paper on the vietnews dataset? | Rouge-1, Rouge-2, Rouge-L |
What metrics were used to measure the mBART model in the paper on the vietnews dataset? | Rouge-1, Rouge-2, Rouge-L |
What metrics were used to measure the mT5 model in the paper on the vietnews dataset? | Rouge-1, Rouge-2, Rouge-L |
What metrics were used to measure the Transformer model in the paper on the vietnews dataset? | Rouge-1, Rouge-2, Rouge-L |
What metrics were used to measure the BART-IT model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the WITS dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore |
What metrics were used to measure the mT5 model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the WITS dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore |
What metrics were used to measure the mBART model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the WITS dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore |
What metrics were used to measure the IT5-base model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the WITS dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore |
What metrics were used to measure the mBART model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the MLSum-it dataset? | rouge1 |
What metrics were used to measure the IT5 model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the MLSum-it dataset? | rouge1 |
What metrics were used to measure the Pegasus-CNN/DM (eng-it translation) model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the MLSum-it dataset? | rouge1 |
What metrics were used to measure the Pegasus-XSum (eng-it translation) model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the MLSum-it dataset? | rouge1 |
What metrics were used to measure the Seq2seq model in the A Step-by-Step Gradient Penalty with Similarity Calculation for Text Summary Generation paper on the EDUsum dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the BERT model in the A Step-by-Step Gradient Penalty with Similarity Calculation for Text Summary Generation paper on the EDUsum dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the RoBERTa model in the A Step-by-Step Gradient Penalty with Similarity Calculation for Text Summary Generation paper on the EDUsum dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the NEZHA model in the A Step-by-Step Gradient Penalty with Similarity Calculation for Text Summary Generation paper on the EDUsum dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the GP_Step_Sim model in the A Step-by-Step Gradient Penalty with Similarity Calculation for Text Summary Generation paper on the EDUsum dataset? | ROUGE-1, ROUGE-2, ROUGE-L |
What metrics were used to measure the mBART model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the MLSUM de dataset? | METEOR |
What metrics were used to measure the mBART model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Fanpage dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, # Parameters |
What metrics were used to measure the mBART model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the Abstractive Text Summarization from Fanpage dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, # Parameters |
What metrics were used to measure the BART-IT model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Fanpage dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, # Parameters |
What metrics were used to measure the mT5 model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Fanpage dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, # Parameters |
What metrics were used to measure the IT5-base model in the BART-IT: An Efficient Sequence-to-Sequence Model for Italian Text Summarization paper on the Abstractive Text Summarization from Fanpage dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, # Parameters |
What metrics were used to measure the IT5 model in the Two New Datasets for Italian-Language Abstractive Text Summarization paper on the Abstractive Text Summarization from Fanpage dataset? | ROUGE-1, ROUGE-2, ROUGE-L, BERTScore, # Parameters |
What metrics were used to measure the HPRNet (Hourglass-104) model in the HPRNet: Hierarchical Point Regression for Whole-Body Human Pose Estimation paper on the COCO-WholeBody dataset? | keypoint AP |
What metrics were used to measure the HPRNet (DLA) model in the HPRNet: Hierarchical Point Regression for Whole-Body Human Pose Estimation paper on the COCO-WholeBody dataset? | keypoint AP |
What metrics were used to measure the JVCR model in the Joint Voxel and Coordinate Regression for Accurate 3D Facial Landmark Localization paper on the AFLW2000-3D dataset? | GTE |
What metrics were used to measure the TS3 model in the Teacher Supervises Students How to Learn From Partially Labeled Images for Facial Landmark Detection paper on the 300W (Full) dataset? | Mean NME |
What metrics were used to measure the AnchorFace model in the AnchorFace: An Anchor-based Facial Landmark Detector Across Large Poses paper on the 300W (Full) dataset? | Mean NME |
What metrics were used to measure the AnchorFace model in the AnchorFace: An Anchor-based Facial Landmark Detector Across Large Poses paper on the AFLW-Front dataset? | Mean NME, Mean NME |
What metrics were used to measure the SAN model in the Style Aggregated Network for Facial Landmark Detection paper on the AFLW-Front dataset? | Mean NME, Mean NME |
What metrics were used to measure the AnchorFace model in the AnchorFace: An Anchor-based Facial Landmark Detector Across Large Poses paper on the AFLW-Full dataset? | Mean NME, Mean NME |
What metrics were used to measure the SAN model in the Style Aggregated Network for Facial Landmark Detection paper on the AFLW-Full dataset? | Mean NME, Mean NME |
What metrics were used to measure the DCFE (Box height Norm, 19 landmarks - no earlobes) model in the A Deeply-initialized Coarse-to-fine Ensemble of Regression Trees for Face Alignment paper on the AFLW-Full dataset? | Mean NME, Mean NME |
What metrics were used to measure the 3DDE (Box height Norm, 19 landmarks - no earlobes) model in the Face Alignment using a 3D Deeply-initialized Ensemble of Regression Trees paper on the AFLW-Full dataset? | Mean NME, Mean NME |
What metrics were used to measure the SPIGA (Inter-ocular Norm) model in the Shape Preserving Facial Landmarks with Graph Attention Networks paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the AnchorFace model in the AnchorFace: An Anchor-based Facial Landmark Detector Across Large Poses paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the 3DDE (Inter-ocular Norm) model in the Face Alignment using a 3D Deeply-initialized Ensemble of Regression Trees paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the DCFE (Inter-ocular Norm) model in the A Deeply-initialized Coarse-to-fine Ensemble of Regression Trees for Face Alignment paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the CHR2C (Inter-ocular Norm) model in the Cascade of Encoder-Decoder CNNs with Learned Coordinates Regressor for Robust Facial Landmarks Detection paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the CNN-CRF (Inter-ocular Norm) model in the Deep Structured Prediction for Facial Landmark Detection paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the Adaloss model in the Adaloss: Adaptive Loss Function for Landmark Localization paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the TS3 model in the Teacher Supervises Students How to Learn From Partially Labeled Images for Facial Landmark Detection paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the SAN GT model in the Style Aggregated Network for Facial Landmark Detection paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the CFSS model in the Face Alignment Across Large Poses: A 3D Solution paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the Pose-Invariant model in the Pose-Invariant Face Alignment with a Single CNN paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the 3DDFA model in the Face Alignment Across Large Poses: A 3D Solution paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the FPN model in the FacePoseNet: Making a Case for Landmark-Free Face Alignment paper on the 300W dataset? | NME, Mean Error Rate |
What metrics were used to measure the CPM+SBR+PAM model in the Supervision-by-Registration: An Unsupervised Approach to Improve the Precision of Facial Landmark Detectors paper on the 300-VW (C) dataset? | AUC0.08 private |
What metrics were used to measure the CPM+SBR model in the Supervision-by-Registration: An Unsupervised Approach to Improve the Precision of Facial Landmark Detectors paper on the 300-VW (C) dataset? | AUC0.08 private |
What metrics were used to measure the Re-AudioLDM-L model in the Retrieval-Augmented Text-to-Audio Generation paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the AudioLDM 2-AC-Large model in the AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the TANGO model in the Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the Make-An-Audio 2 model in the Make-An-Audio 2: Temporal-Enhanced Text-to-Audio Generation paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the CoDi model in the Any-to-Any Generation via Composable Diffusion paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the AudioLDM-L-Full model in the AudioLDM: Text-to-Audio Generation with Latent Diffusion Models paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the Consistency TTA (Single-step generation) model in the Accelerating Diffusion-Based Text-to-Audio Generation with Consistency Distillation paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the Make-An-Audio model in the Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion Models paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the AudioGen model in the AudioGen: Textually Guided Audio Generation paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the Diffsound model in the Diffsound: Discrete Diffusion Model for Text-to-sound Generation paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the AUDIT model in the AUDIT: Audio Editing by Following Instructions with Latent Diffusion Models paper on the AudioCaps dataset? | FAD, FD |
What metrics were used to measure the SymphonyNet model in the Symphony Generation with Permutation Invariant Language Model paper on the Symphony music dataset? | Human listening average results |
What metrics were used to measure the Sparse Transformer 152M (strided) model in the Generating Long Sequences with Sparse Transformers paper on the Classical music, 5 seconds at 12 kHz dataset? | Bits per byte |
What metrics were used to measure the TWIST (ResNet-50) model in the Self-Supervised Learning by Estimating Twin Class Distributions paper on the Oxford-IIIT Pet Dataset dataset? | Accuracy, PARAMS, FLOPS |
What metrics were used to measure the ResNet-152x4-AGC (ImageNet-21K) model in the Effect of Pre-Training Scale on Intra- and Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images paper on the Oxford-IIIT Pet Dataset dataset? | Accuracy, PARAMS, FLOPS |
What metrics were used to measure the NNCLR model in the With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations paper on the Oxford-IIIT Pet Dataset dataset? | Accuracy, PARAMS, FLOPS |
What metrics were used to measure the kMobileNet V3 Large 16ch model in the Grouped Pointwise Convolutions Reduce Parameters in Convolutional Neural Networks paper on the Oxford-IIIT Pet Dataset dataset? | Accuracy, PARAMS, FLOPS |
What metrics were used to measure the kEffNet-B0 model in the Grouped Pointwise Convolutions Significantly Reduces Parameters in EfficientNet paper on the Oxford-IIIT Pet Dataset dataset? | Accuracy, PARAMS, FLOPS |
What metrics were used to measure the ThanosNet model in the ThanosNet: A Novel Trash Classification Method Using Metadata paper on the ISBNet dataset? | Macro F1 |
What metrics were used to measure the HSANR model in the Hard Sample Aware Noise Robust Learning for Histopathology Image Classification paper on the Chaoyang dataset? | Accuracy |
What metrics were used to measure the mMND (STDP) model in the Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods paper on the N-Caltech 101 dataset? | Accuracy |
What metrics were used to measure the mMND (BPTT) model in the Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods paper on the N-Caltech 101 dataset? | Accuracy |
What metrics were used to measure the WaveMixLite-112/16 model in the WaveMix: A Resource-efficient Neural Network for Image Analysis paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the VGG-5(Spinal FC) model in the SpinalNet: Deep Neural Network with Gradual Input paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the VGG-5 model in the SpinalNet: Deep Neural Network with Gradual Input paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the TextCaps model in the TextCaps : Handwritten Character Recognition with Very Small Datasets paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the OptConv+Log+Perc model in the Efficient Neural Vision Systems Based on Convolutional Image Acquisition paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the DWT-DCT + SVM model in the Handwritten digit and letter recognition using hybrid dwt-dct with knn and svm classifier paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the OPIUM Classifier model in the EMNIST: an extension of MNIST to handwritten letters paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the Linear Classifier model in the EMNIST: an extension of MNIST to handwritten letters paper on the EMNIST-Letters dataset? | Accuracy |
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the CLEVR/Count dataset? | Top 1 Accuracy |
What metrics were used to measure the SEER (RegNetY-128GF) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the CLEVR/Count dataset? | Top 1 Accuracy |
What metrics were used to measure the SqueezeNet + Simple Bypass model in the SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size paper on the ImageNet-9 dataset? | Top 1 Accuracy |
What metrics were used to measure the VIT-L/16 (Background, Spinal FC) model in the Reduction of Class Activation Uncertainty with Background Information paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the VIT-L/16 (Background) model in the Reduction of Class Activation Uncertainty with Background Information paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the efficient adaptive ensembling model in the Efficient Adaptive Ensembling for Image Classification paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NAT-M3 model in the Neural Architecture Transfer paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NAT-M2 model in the Neural Architecture Transfer paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the NAT-M1 model in the Neural Architecture Transfer paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ResNeXt29_2x64d model in the CINIC-10 is not ImageNet or CIFAR-10 paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the DenseNet-121 model in the CINIC-10 is not ImageNet or CIFAR-10 paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
What metrics were used to measure the ResNet-18 model in the CINIC-10 is not ImageNet or CIFAR-10 paper on the CINIC-10 dataset? | Accuracy, FLOPS, PARAMS |
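Every prompt in the table follows the same template, so the model, paper, and dataset fields can be recovered programmatically. A minimal sketch (the regex and the `dataset_of` helper are illustrative, not part of the dataset itself), using two rows copied from the table above:

```python
import re

# A few (prompt, metrics_response) pairs copied from the table above.
rows = [
    ("What metrics were used to measure the PEGASUS model in the PEGASUS: "
     "Pre-training with Extracted Gap-sentences for Abstractive Summarization "
     "paper on the AESLC dataset?",
     "ROUGE-1, ROUGE-2, ROUGE-L"),
    ("What metrics were used to measure the mBART model in the The GEM "
     "Benchmark: Natural Language Generation, its Evaluation and Metrics "
     "paper on the MLSUM es dataset?",
     "METEOR"),
]

# Each prompt ends with "on the <name> dataset?", so the dataset name can be
# pulled out with a regex anchored on that suffix.
DATASET_RE = re.compile(r"on the (.+?) dataset\?$")

def dataset_of(prompt: str) -> str:
    """Return the dataset name embedded in a templated prompt, or ''."""
    m = DATASET_RE.search(prompt)
    return m.group(1) if m else ""

for prompt, metrics in rows:
    print(dataset_of(prompt), "->", metrics)
    # AESLC -> ROUGE-1, ROUGE-2, ROUGE-L
    # MLSUM es -> METEOR
```

The same pattern extends to the model and paper fields by anchoring on "measure the … model" and "in the … paper".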