prompts: string (length 81-413)
metrics_response: string (length 0-371)
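
A minimal Python sketch for working with these rows, assuming they are exported as JSON Lines with one object per row holding the two string fields above (the filename stl10_metrics.jsonl is a hypothetical placeholder, not from the source):

import json

# Hypothetical export of the rows below: one JSON object per line,
# each with "prompts" and "metrics_response" string fields.
PATH = "stl10_metrics.jsonl"  # assumed filename

with open(PATH, encoding="utf-8") as f:
    rows = [json.loads(line) for line in f if line.strip()]

# Sanity-check against the schema stated above: prompt lengths fall in
# 81-413 characters, response lengths in 0-371 (0 permits empty responses).
for row in rows:
    assert 81 <= len(row["prompts"]) <= 413, row
    assert len(row["metrics_response"]) <= 371, row
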
What metrics were used to measure the VGG8B + LocalLearning + CO model in the Training Neural Networks with Local Error Signals paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the ResNet18(GN, 4) model in the Extended Batch Normalization paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the PSLR-Linear model in the Probabilistic Structural Latent Representation for Unsupervised Embedding paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the ResNet18(BN, 128) model in the Extended Batch Normalization paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Mean Teacher model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the CPC† model in the A Framework For Contrastive Self-Supervised Learning And Designing A New Approach paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Hamiltonian model in the Deep Neural Networks Motivated by Partial Differential Equations paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the CC-GAN² model in the Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the CC-GAN model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Parabolic model in the Deep Neural Networks Motivated by Partial Differential Equations paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Scat + WRN 20-8 model in the Scaling the Scattering Transform: Deep Hybrid Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the ResNet18(EBN, 4) model in the Extended Batch Normalization paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Exemplar CNN model in the Scaling the Scattering Transform: Deep Hybrid Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the ResNet18(EBN, 128) model in the Extended Batch Normalization paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Stacked what-where AE model in the Scaling the Scattering Transform: Deep Hybrid Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the SWWAE model in the HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the SWWAE model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the SWWAE model in the Stacked What-Where Auto-encoders paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Second-order model in the Deep Neural Networks Motivated by Partial Differential Equations paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Convolutional Clustering model in the Convolutional Clustering for Unsupervised Learning paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Π-Model model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Discriminative Unsupervised Feature Learning with Convolutional Neural Networks model in the Discriminative Unsupervised Feature Learning with Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the ResNet18(GN, 128) model in the Extended Batch Normalization paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Pseudo-Labeling model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Entropy model in the Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the BDW model in the Don’t Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the MP model in the Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the WaveMixLite-256/7 model in the WaveMix: A Resource-efficient Neural Network for Image Analysis paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the CNN model in the Scaling the Scattering Transform: Deep Hybrid Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the An Analysis of Unsupervised Pre-training in Light of Recent Advances model in the An Analysis of Unsupervised Pre-training in Light of Recent Advances paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Multi-Task Bayesian Optimization model in the Multi-Task Bayesian Optimization paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the NN-Weighter model in the Don’t Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Accuracy Monitoring model in the Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the C-SVDDNet model in the Unsupervised Feature Learning with C-SVDDNet paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the RotNet model in the Don’t Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the DFF Committees model in the Committees of deep feedforward networks trained with few data paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Hierarchical Matching Pursuit (HMP) model in the Scaling the Scattering Transform: Deep Hybrid Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the L2RW model in the Don’t Wait, Just Weight: Improving Unsupervised Representations by Learning Goal-Driven Instance Weights paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Discriminative Learning of Sum-Product Networks model in the Discriminative Learning of Sum-Product Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the CKN model in the Convolutional Kernel Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the S-CNN model in the Selective Unsupervised Feature Learning with Convolutional Neural Network (S-CNN) paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Simulated Fixations model in the A Framework For Contrastive Self-Supervised Learning And Designing A New Approach paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the No more meta-parameter tuning in unsupervised sparse feature learning model in the No more meta-parameter tuning in unsupervised sparse feature learning paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Convolutional K-means Network model in the Scaling the Scattering Transform: Deep Hybrid Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Receptive Fields model in the Receptive Fields without Spike-Triggering paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the PWD model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the GVD model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the VR model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Core SET model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the GE model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the DFAL model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Random model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the BALD-MCD model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the Sign-symmetry model in the How Important is Weight Symmetry in Backpropagation? paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the M2-PWD model in the Effective Version Space Reduction for Convolutional Neural Networks paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the soft ica model in the ICA with Reconstruction Cost for Efficient Overcomplete Feature Learning paper on the STL-10 dataset?
Percentage correct, FLOPS, PARAMS
What metrics were used to measure the WaveMix-256/16 (level 2) model in the WaveMix: A Resource-efficient Neural Network for Image Analysis paper on the iNat2021-mini dataset?
Top-1 Accuracy
What metrics were used to measure the LRA-diffusion (CLIP ViT) model in the Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the Robust LR model in the Two Wrongs Don't Make a Right: Combating Confirmation Bias in Learning with Label Noise paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the PGDF (Inception-ResNet-v2) model in the Sample Prior Guided Robust Model Learning to Suppress Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the SSR model in the SSR: An Efficient and Robust Framework for Learning with Unknown Label Noise paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the BtR model in the Bootstrapping the Relationship Between Images and Their Clean and Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CoDiM-Sup (Inception-ResNet-v2) model in the CoDiM: Learning with Noisy Labels via Contrastive Semi-Supervised Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the NCR+Mixup+DA (ResNet-50) model in the Learning with Neighbor Consistency for Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CMW-Net-SL+C2D model in the CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the Dynamic Loss (Inception-ResNet-v2) model in the Dynamic Loss For Robust Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CoDiM-Self (Inception-ResNet-v2) model in the CoDiM: Learning with Noisy Labels via Contrastive Semi-Supervised Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the Sel-CL+ (ResNet-18) model in the Selective-Supervised Contrastive Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CPC model in the Class Prototype-based Cleaner for Label Noise Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the DivideMix with C2D (ResNet-50) model in the Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the FaMUS model in the Faster Meta Update Strategy for Noise-Robust Deep Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the NCR+Mixup (ResNet-50) model in the Learning with Neighbor Consistency for Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CC model in the Centrality and Consistency: Two-Stage Clean Samples Identification for Learning with Instance-Dependent Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the GJS (ResNet-50) model in the Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the NGC (Inception-ResNet-v2) model in the NGC: A Unified Framework for Learning with Open-World Noisy Data paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the TCL model in the Twin Contrastive Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the LongReMix (Inception-ResNet-v2) model in the LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the MOIT+ (ResNet-18) model in the Multi-Objective Interpolation Training for Robustness to Label Noise paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CMW-Net-SL model in the CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the ELR+ (Inception-ResNet-v2) model in the Early-Learning Regularization Prevents Memorization of Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the ScanMix (Inception-ResNet-v2) model in the ScanMix: Learning from Severe Label Noise via Semantic Clustering and Semi-Supervised Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the ROLT+ (Inception-ResNet-v2) model in the Robust Long-Tailed Learning under Label Noise paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CNLCU-S + DivideMix (Inception-ResNet-v2) model in the Sample Selection with Uncertainty of Losses for Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the HSA-NRL (Inception-ResNet-v2) model in the Hard Sample Aware Noise Robust Learning for Histopathology Image Classification paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the CAR model in the Confidence Adaptive Regularization for Deep Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the DivideMix (Inception-ResNet-v2) model in the DivideMix: Learning with Noisy Labels as Semi-supervised Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the NCR (ResNet-50) model in the Learning with Neighbor Consistency for Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the DivideMix (ResNet-50) model in the DivideMix: Learning with Noisy Labels as Semi-supervised Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the DivideMix (ResNet-18) model in the DivideMix: Learning with Noisy Labels as Semi-supervised Learning paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the MentorMix (Inception-ResNet-v2) model in the Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the NCT (Inception-ResNet-v2) model in the Noisy Concurrent Training for Efficient Learning under Label Noise paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the ODD (Inception-ResNet-v2) model in the Robust and On-the-fly Dataset Denoising for Image Classification paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the Crust (Inception-ResNet-v2) model in the Coresets for Robust Training of Neural Networks against Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the Iterative-CV (Inception-ResNet-v2) model in the Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the Co-teaching (Inception-ResNet-v2) model in the Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the D2L (Inception-ResNet-v2) model in the Dimensionality-Driven Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the F-Correction (Inception-ResNet-v2) model in the Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the RTE (Inception-ResNet-v2) model in the Robust Temporal Ensembling for Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the MentorNet (Inception-ResNet-v2) model in the MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy
What metrics were used to measure the NCE+RCE (ResNet-50) model in the Normalized Loss Functions for Deep Learning with Noisy Labels paper on the mini WebVision 1.0 dataset?
Top-1 Accuracy, Top-5 Accuracy, ImageNet Top-1 Accuracy, ImageNet Top-5 Accuracy