prompts: string (length 81–413)
metrics_response: string (length 0–371)
What metrics were used to measure the ResMLP-S24/16 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the ResNet-152x2-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the LeViT-192 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the ResNet50 (A1) model in the ResNet strikes back: An improved training procedure in timm paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the LeViT-128 model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the ViT-B/16-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the ResMLP-S12/16 model in the ResMLP: Feedforward networks for image classification with data-efficient training paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the Mixer-B/8-SAM model in the When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the LeViT-128S model in the LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference paper on the ImageNet V2 dataset?
Top 1 Accuracy
What metrics were used to measure the EfficientNet-L2-Ns model in the ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification paper on the ImageNet-Hard dataset?
Accuracy (%)
What metrics were used to measure the CLIP-ViT-L/14@336px model in the ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification paper on the ImageNet-Hard dataset?
Accuracy (%)
What metrics were used to measure the LRA-diffusion (CC) model in the Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the SANM (DivideMix) model in the Learning with Noisy labels via Self-supervised Adversarial Noisy Masking paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the CC model in the Centrality and Consistency: Two-Stage Clean Samples Identification for Learning with Instance-Dependent Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the CPC model in the Class Prototype-based Cleaner for Label Noise Learning paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the Jigsaw-ViT+NCT model in the Jigsaw-ViT: Learning Jigsaw Puzzles in Vision Transformer paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the MFRW model in the Learning advisor networks for noisy image classification paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the PGDF model in the Sample Prior Guided Robust Model Learning to Suppress Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the AugDesc model in the Augmentation Strategies for Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the Nested+Co-teaching (ResNet-50) model in the Compressing Features for Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the SSR model in the SSR: An Efficient and Robust Framework for Learning with Unknown Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the NestedCoTeaching model in the Boosting Co-teaching with Compression Regularization for Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the ELR+ model in the Early-Learning Regularization Prevents Memorization of Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the DivideMix model in the DivideMix: Learning with Noisy Labels as Semi-supervised Learning paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the ELR+ with C2D (ResNet-50) model in the Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the InstanceGM model in the Instance-Dependent Noisy Label Learning via Graphical Modelling paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the LongReMix model in the LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the FINE + DivideMix model in the FINE Samples for Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the Negative Label Smoothing (NLS) model in the To Smooth or Not? When Label Smoothing Meets Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the CAL model in the A Second-Order Approach to Learning with Instance-Dependent Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the NoiseRank model in the NoiseRank: Unsupervised Label Noise Reduction with Dependence Models paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the FOCI model in the Which Strategies Matter for Noisy Label Classification? Insight into Loss and Uncertainty paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the MW-Net model in the Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the PENCIL model in the Probabilistic End-to-end Noise Correction for Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the MLNT model in the Learning to Learn from Noisy Labeled Data paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the HOC model in the Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the MAE (SimCLR) model in the Contrastive Learning Improves Model Robustness Under Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the Generalized CE (SimCLR) model in the Contrastive Learning Improves Model Robustness Under Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the DM model in the Derivative Manipulation for General Example Weighting paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the CCE (SimCLR) model in the Contrastive Learning Improves Model Robustness Under Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the CORES2 model in the Learning with Instance-Dependent Label Noise: A Sample Sieve Approach paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the IMAE model in the IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the Robust f-divergence model in the When Optimizing $f$-divergence is Robust with Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the LCCN model in the Safeguarded Dynamic Label Regression for Generalized Noisy Supervision paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the DMI model in the L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the DMI model in the L_DMI: An Information-theoretic Noise-robust Loss Function paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the BARE model in the Adaptive Sample Selection for Robust Learning under Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the Joint Opt. model in the Joint Optimization Framework for Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the LRT model in the Error-Bounded Correction of Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the MASKING model in the Masking: A New Perspective of Noisy Supervision paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the SCE model in the Symmetric Cross Entropy for Robust Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the DY model in the Unsupervised Label Noise Modeling and Loss Correction paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the SEAL model in the Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the JoCoR model in the Combating noisy labels by agreement: A joint training method with co-regularization paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the CoT model in the Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the GCE model in the Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the D2L model in the Dimensionality-Driven Learning with Noisy Labels paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the CCE model in the Adaptive Sample Selection for Robust Learning under Label Noise paper on the Clothing1M dataset?
Accuracy
What metrics were used to measure the SparseSwin with L2 model in the SparseSwin: Swin Transformer with Sparse Transformer Block paper on the ImageNet-100 dataset?
Percentage correct, Params
What metrics were used to measure the DLME (ResNet-50, linear) model in the DLME: Deep Local-flatness Manifold Embedding paper on the ImageNet-100 dataset?
Percentage correct, Params
What metrics were used to measure the Entropy-based Logic Explained Network model in the Entropy-based Logic Explanations of Neural Networks paper on the CUB dataset?
Classification Accuracy, Explanation Accuracy, Explanation complexity, Explanation extraction time
What metrics were used to measure the $\psi$ network model in the Entropy-based Logic Explanations of Neural Networks paper on the CUB dataset?
Classification Accuracy, Explanation Accuracy, Explanation complexity, Explanation extraction time
What metrics were used to measure the Bayesian Rule List model in the Entropy-based Logic Explanations of Neural Networks paper on the CUB dataset?
Classification Accuracy, Explanation Accuracy, Explanation complexity, Explanation extraction time
What metrics were used to measure the Decision Tree model in the Entropy-based Logic Explanations of Neural Networks paper on the CUB dataset?
Classification Accuracy, Explanation Accuracy, Explanation complexity, Explanation extraction time
What metrics were used to measure the NCR (ResNet-18) model in the Learning with Neighbor Consistency for Noisy Labels paper on the Red MiniImageNet 40% label noise dataset?
Accuracy
What metrics were used to measure the InstanceGM-SS model in the Instance-Dependent Noisy Label Learning via Graphical Modelling paper on the Red MiniImageNet 40% label noise dataset?
Accuracy
What metrics were used to measure the PropMix model in the PropMix: Hard Sample Filtering and Proportional MixUp for Learning with Noisy Labels paper on the Red MiniImageNet 40% label noise dataset?
Accuracy
What metrics were used to measure the InstanceGM model in the Instance-Dependent Noisy Label Learning via Graphical Modelling paper on the Red MiniImageNet 40% label noise dataset?
Accuracy
What metrics were used to measure the FaMUS model in the Faster Meta Update Strategy for Noise-Robust Deep Learning paper on the Red MiniImageNet 40% label noise dataset?
Accuracy
What metrics were used to measure the adaptive minimal ensembling model in the Improving plant disease classification by adaptive minimal ensembling paper on the PlantVillage dataset?
Accuracy, PARAMS, F1
What metrics were used to measure the µ2Net+ (ViT-L/16) model in the A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems paper on the PlantVillage dataset?
Accuracy, PARAMS, F1
What metrics were used to measure the Light-Chroma Inception V3 model in the Reliable Deep Learning Plant Leaf Disease Classification Based on Light-Chroma Separated Branches paper on the PlantVillage dataset?
Accuracy, PARAMS, F1
What metrics were used to measure the Inception V3 20%L + 80%AB model in the Color-aware two-branch DCNN for efficient plant disease classification paper on the PlantVillage dataset?
Accuracy, PARAMS, F1
What metrics were used to measure the ViT-Large/16 (384) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the ViT-Base/16 (384) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the SE-ResNeXt-101-32x4d model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the EfficientNet-B5 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the EfficientNet-B3 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the ViT-Large/16 (224) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the SE-ResNeXt-101-32x4d (224) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the EfficientNet-B1 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the Inception-ResNet-V2 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the EfficientNet-B0 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the ResNet-50 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the Inception-V4 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the EfficientNet-B3 (224) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the Inception-V3 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the EfficientNet-B0 (224) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the MobileNet-V2 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the ResNet-18 model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the SE-ResNeXt-101-32x4d (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the ResNet-34 (299) model in the Danish Fungi 2020 -- Not Just Another Image Recognition Dataset paper on the DF20 dataset?
Top-1, Top-3, F1 - macro
What metrics were used to measure the ResNet50 model in the In-domain representation learning for remote sensing paper on the RESISC45 dataset?
Top 1 Accuracy
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the RESISC45 dataset?
Top 1 Accuracy
What metrics were used to measure the SwAV (ResNet50-w5) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the RESISC45 dataset?
Top 1 Accuracy
What metrics were used to measure the DINO (DeiT-B/16) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the RESISC45 dataset?
Top 1 Accuracy
What metrics were used to measure the MoCo-v3 (ViT-B/16) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the RESISC45 dataset?
Top 1 Accuracy
What metrics were used to measure the CLIP (ViT-B/16) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the RESISC45 dataset?
Top 1 Accuracy
What metrics were used to measure the BYOL (ResNet200-w2) model in the paper on the RESISC45 dataset?
Top 1 Accuracy
What metrics were used to measure the DeiT-B/16 model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the RESISC45 dataset?
Top 1 Accuracy