Columns: prompts (string, length 81–413), metrics_response (string, length 0–371)
What metrics were used to measure the NLDF model in the Non-Local Deep Features for Salient Object Detection paper on the SBU dataset?
Balanced Error Rate
What metrics were used to measure the JDR model in the Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal paper on the SBU dataset?
Balanced Error Rate
What metrics were used to measure the scGAN model in the Shadow Detection With Conditional Generative Adversarial Networks paper on the SBU dataset?
Balanced Error Rate
What metrics were used to measure the InSPyReNet model in the Revisiting Image Pyramid Structure for High Resolution Salient Object Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the M3Net-S model in the M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the TRACER-TE7 model in the TRACER: Extreme Attention Guided Salient Object Tracing Network paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the M3Net-R model in the M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the UCNet-ABP model in the Uncertainty Inspired RGB-D Saliency Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the UCNet-CVAE model in the Uncertainty Inspired RGB-D Saliency Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the PoolNet (VGG-16) model in the A Simple Pooling-Based Design for Real-Time Salient Object Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the DSS (Res2Net-50) model in the Res2Net: A New Multi-scale Backbone Architecture paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the CPD-R (ResNet50) model in the Cascaded Partial Decoder for Fast and Accurate Salient Object Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the C4Net model in the C$^{4}$Net: Contextual Compression and Complementary Combination Network for Salient Object Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the TRACER (ResNet50) model in the TRACER: Extreme Attention Guided Salient Object Tracing Network paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the BASNet model in the BASNet: Boundary-Aware Salient Object Detection paper on the DUT-OMRON dataset?
S-Measure, F-measure, MAE, mean F-Measure, mean E-Measure, Weighted F-Measure
What metrics were used to measure the Next model in the Peeking into the Future: Predicting Future Person Activities and Locations in Videos paper on the ActEV dataset?
mAP
What metrics were used to measure the ERNIE-UniX2 model in the ERNIE-UniX2: A Unified Cross-lingual Cross-modal Framework for Understanding and Generation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the IKD-MMT model in the Distill the Image to Nowhere: Inversion Knowledge Distillation for Multimodal Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the DCCN model in the Dynamic Context-guided Capsule Network for Multimodal Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the Caglayan model in the Multimodal Machine Translation through Visuals and Speech paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the Gumbel-Attention MMT model in the Gumbel-Attention for Multi-modal Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the Multimodal Transformer model in the Multimodal Transformer for Multimodal Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the ImagiT model in the Generative Imagination Elevates Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the del+obj model in the Distilling Translations with Visual Awareness paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the VMMTF model in the Latent Variable Model for Multi-modal Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the IMGD model in the Incorporating Global Visual Features into Attention-Based Neural Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the NMTSRC+IMG model in the Doubly-Attentive Decoder for Multi-modal Neural Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the VAG-NMT model in the A Visual Attention Grounding Neural Model for Multimodal Machine Translation paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the PS-KD model in the Self-Knowledge Distillation with Progressive Refinement of Targets paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the Transformer model in the Attention Is All You Need paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the del model in the Distilling Translations with Visual Awareness paper on the Multi30K dataset?
BLEU (EN-DE), BLEU (DE-EN), Meteor (EN-DE), Meteor (EN-FR)
What metrics were used to measure the ViTA model in the ViTA: Visual-Linguistic Translation by Aligning Object Tags paper on the Hindi Visual Genome (Test Set) dataset?
BLEU (EN-HI)
What metrics were used to measure the ViTA model in the ViTA: Visual-Linguistic Translation by Aligning Object Tags paper on the Hindi Visual Genome (Challenge Set) dataset?
BLEU (EN-HI)
What metrics were used to measure the ILL model in the Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations paper on the CIFAR-100 (partial ratio 0.1) dataset?
Accuracy
What metrics were used to measure the CASS model in the CASS: Cross Architectural Self-Supervision for Medical Image Analysis paper on the Autoimmune Dataset dataset?
F1 score
What metrics were used to measure the DINO model in the CASS: Cross Architectural Self-Supervision for Medical Image Analysis paper on the Autoimmune Dataset dataset?
F1 score
What metrics were used to measure the ILL model in the Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations paper on the CIFAR-10 (partial ratio 0.5) dataset?
Accuracy
What metrics were used to measure the DB-GAE model in the General Partial Label Learning via Dual Bipartite Graph Autoencoder paper on the M-VAD Names dataset?
Accuracy
What metrics were used to measure the DB-GAE model in the General Partial Label Learning via Dual Bipartite Graph Autoencoder paper on the MPII Movie Description dataset?
Accuracy, F1-Score
What metrics were used to measure the ILL model in the Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations paper on the CIFAR-10 (partial ratio 0.1) dataset?
Accuracy
What metrics were used to measure the CASS model in the CASS: Cross Architectural Self-Supervision for Medical Image Analysis paper on the ISIC 2019 dataset?
Balanced Multi-Class Accuracy
What metrics were used to measure the ILL model in the Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations paper on the CIFAR-10 (partial ratio 0.3) dataset?
Accuracy
What metrics were used to measure the ILL model in the Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations paper on the CIFAR-100 (partial ratio 0.01) dataset?
Accuracy
What metrics were used to measure the ILL model in the Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations paper on the Caltech-UCSD Birds 200 (partial ratio 0.05) dataset?
Accuracy
What metrics were used to measure the ILL model in the Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations paper on the CIFAR-100 (partial ratio 0.05) dataset?
Accuracy
What metrics were used to measure the CPG model in the Compacting, Picking and Growing for Unforgetting Continual Learning paper on the Stanford Cars (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CondConvContinual model in the Extending Conditional Convolution Structures for Enhancing Multitasking Continual Learning paper on the Stanford Cars (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the H$^{2}$ model in the Helpful or Harmful: Inter-Task Association in Continual Learning paper on the Stanford Cars (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the Piggyback model in the Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights paper on the Stanford Cars (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the ProgressiveNet model in the Progressive Neural Networks paper on the Stanford Cars (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the PackNet model in the PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning paper on the Stanford Cars (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the TAG-RMSProp model in the TAG: Task-based Accumulated Gradients for Lifelong Learning paper on the mini-Imagenet (20 tasks) - 1 epoch dataset?
Accuracy
What metrics were used to measure the RMN (Resnet) model in the Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping paper on the Cifar100 (10 tasks) dataset?
Average Accuracy
What metrics were used to measure the CondConvContinual model in the Extending Conditional Convolution Structures for Enhancing Multitasking Continual Learning paper on the Wikiart (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CPG model in the Compacting, Picking and Growing for Unforgetting Continual Learning paper on the Wikiart (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the H$^{2}$ model in the Helpful or Harmful: Inter-Task Association in Continual Learning paper on the Wikiart (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the ProgressiveNet model in the Progressive Neural Networks paper on the Wikiart (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the Piggyback model in the Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights paper on the Wikiart (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the PackNet model in the PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning paper on the Wikiart (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the Model Zoo-Continual model in the Model Zoo: A Growing "Brain" That Learns Continually paper on the Rotated MNIST dataset?
Average Accuracy
What metrics were used to measure the H$^{2}$ model in the Helpful or Harmful: Inter-Task Association in Continual Learning paper on the Split MNIST (5 tasks) dataset?
Top 1 Accuracy %
What metrics were used to measure the CondConvContinual model in the Extending Conditional Convolution Structures for Enhancing Multitasking Continual Learning paper on the Flowers (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CPG model in the Compacting, Picking and Growing for Unforgetting Continual Learning paper on the Flowers (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the H$^{2}$ model in the Helpful or Harmful: Inter-Task Association in Continual Learning paper on the Flowers (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the Piggyback model in the Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights paper on the Flowers (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the ProgressiveNet model in the Progressive Neural Networks paper on the Flowers (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the PackNet model in the PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning paper on the Flowers (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CondConvContinual model in the Extending Conditional Convolution Structures for Enhancing Multitasking Continual Learning paper on the CUBS (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the H$^{2}$ model in the Helpful or Harmful: Inter-Task Association in Continual Learning paper on the CUBS (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CPG model in the Compacting, Picking and Growing for Unforgetting Continual Learning paper on the CUBS (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the Piggyback model in the Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights paper on the CUBS (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the PackNet model in the PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning paper on the CUBS (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the ProgressiveNet model in the Progressive Neural Networks paper on the CUBS (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the RMN model in the Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping paper on the Permuted MNIST dataset?
Average Accuracy, MLP Hidden Layers-width, Pretrained/Transfer Learning
What metrics were used to measure the Model Zoo-Continual model in the Model Zoo: A Growing "Brain" That Learns Continually paper on the Permuted MNIST dataset?
Average Accuracy, MLP Hidden Layers-width, Pretrained/Transfer Learning
What metrics were used to measure the ProgressiveNet model in the Progressive Neural Networks paper on the ImageNet (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the Piggyback model in the Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights paper on the ImageNet (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CondConvContinual model in the Extending Conditional Convolution Structures for Enhancing Multitasking Continual Learning paper on the ImageNet (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CPG model in the Compacting, Picking and Growing for Unforgetting Continual Learning paper on the ImageNet (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the PackNet model in the PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning paper on the ImageNet (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the H$^{2}$ model in the Helpful or Harmful: Inter-Task Association in Continual Learning paper on the ImageNet (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the TAG-RMSProp model in the TAG: Task-based Accumulated Gradients for Lifelong Learning paper on the Cifar100 (20 tasks) - 1 epoch dataset?
Average Accuracy
What metrics were used to measure the Multi-task Learning (MTL; Upper Bound) model in the Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the CTR model in the Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the B-CL model in the Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the LAMOL model in the LAMOL: LAnguage MOdeling for Lifelong Language Learning paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the OWM model in the Continual Learning of Context-dependent Processing in Neural Networks paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the A-GEM model in the Efficient Lifelong Learning with A-GEM paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the HAT model in the Overcoming catastrophic forgetting with hard attention to the task paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the Independent Learning (ONE) model in the Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the KAN model in the Continual Learning with Knowledge Transfer for Sentiment Classification paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the Naive Continual Learning (NCL) model in the Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the UCL model in the Uncertainty-based Continual Learning with Adaptive Regularization paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the DER++ model in the Dark Experience for General Continual Learning: a Strong, Simple Baseline paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the EWC model in the Overcoming catastrophic forgetting in neural networks paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the CAT model in the Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the L2 model in the Overcoming catastrophic forgetting in neural networks paper on the ASC (19 tasks) dataset?
F1 - macro
What metrics were used to measure the CondConvContinual model in the Extending Conditional Convolution Structures for Enhancing Multitasking Continual Learning paper on the Sketch (Fine-grained 6 Tasks) dataset?
Accuracy
What metrics were used to measure the CPG model in the Compacting, Picking and Growing for Unforgetting Continual Learning paper on the Sketch (Fine-grained 6 Tasks) dataset?
Accuracy
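The rows above are alternating prompt / metrics_response pairs. As a minimal sketch, assuming the dump is stored as a flat list of alternating lines (the inline sample below stands in for the real file, and the parsing layout is an assumption, not a documented format), the pairs can be recovered and the metric names tallied like this:

```python
from collections import Counter

# Tiny inline sample mimicking the alternating prompt/response layout above.
raw = """What metrics were used to measure the Next model on the ActEV dataset?
mAP
What metrics were used to measure the ILL model on the CIFAR-10 (partial ratio 0.1) dataset?
Accuracy
What metrics were used to measure the DB-GAE model on the MPII Movie Description dataset?
Accuracy, F1-Score"""

lines = [line.strip() for line in raw.splitlines() if line.strip()]
pairs = list(zip(lines[0::2], lines[1::2]))  # (prompt, metrics_response)

# Split each comma-separated response and count individual metric names.
metric_counts = Counter(
    metric.strip()
    for _, response in pairs
    for metric in response.split(",")
)
print(metric_counts.most_common())
# → [('Accuracy', 2), ('mAP', 1), ('F1-Score', 1)]
```

Splitting on commas is a simplification: it works for the flat metric lists in this dump, but would mis-split any metric name that itself contained a comma.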